\section{Introduction}
A massive star dies and then forms a supernova remnant (SNR).
This process produces heavy elements, dust and cosmic rays, which have an important impact on the Galactic interstellar
medium (ISM).
To understand this process, we need to study the evolution of SNRs.
\citet{Truelove1999} and \citet{Cioffi1988a} performed extensive analytical and numerical calculations of this evolution
and, by comparing the results with observations, developed a practical model.
However, the surrounding environments of SNRs are usually diverse and influence their evolution.
As a result, the radio morphologies of SNRs vary widely.
The practical model can explain some regular morphologies, such as bilateral symmetric and circular SNRs, but it cannot
explain more complex morphologies.
Since these morphologies can help us infer important properties of SNRs, it is worthwhile to study them in detail.
Numerical simulation is an effective method for describing the surrounding environment and obtaining images of
a SNR at different evolutionary phases.
With growing computational power, two-dimensional (2D) hydrodynamic (HD) simulations have proved powerful for studying
magnetic field amplification, diffusive shock acceleration and instabilities in SNRs \citep{Jun1996,Kang2006,Fang2012}.
More recently, it has become possible to perform three-dimensional (3D) simulations and to convert the results to radio,
optical or X-ray images for comparison with observations \citep{Orlando2007,Meyer2015,Zhang2017}.
\citet{Orlando2007} tried to explain the asymmetric morphologies of some bilateral supernova remnants by assuming an inhomogeneous
density and magnetic field.
They reproduced some asymmetric structures in SNRs, but did not describe how the assumed surrounding environment
forms around the SNRs.
\citet{West2016} argued that the surrounding environments are mainly shaped by the global Galactic ISM distribution and
applied magnetohydrodynamic (MHD) simulations to a model of the Galactic magnetic field.
This partly explains the surrounding environments assumed by \citet{Orlando2007}, but cannot reproduce many asymmetric
structures.
Thus, another factor probably influences the surrounding environments, and this factor is possibly the stellar wind of the progenitor.
The progenitor moves through the ISM and blows a stellar wind bubble, which leads to an inhomogeneous density distribution and magnetic
field structure.
When a supernova explodes in such a bubble, this certainly influences the subsequent evolution of the remnant and its radio morphology.
This scenario is self-consistent and supported by theoretical calculations and observations \citep{Chen1995,Zhang1996,Foster2004,Lee2010}.
\citet{Meyer2015} simulated the stellar wind and then took the result as the initial condition of an SNR simulation.
They concluded that the stellar wind strongly shapes the density distribution of the SNR.
However, they only performed 2D HD simulations and did not obtain radio images.
The crucial parameters of a 3D MHD simulation include the density and magnetic field of the ISM, the spatial velocity
and stellar wind of the progenitor, and the explosion energy and ejecta mass of the supernova.
It is impossible to test all combinations of these parameters at present.
In particular, two of the parameters, the magnetic field of the ISM and the velocity of the progenitor, are vectors.
Each vector has three components, which greatly enlarges the parameter space that a 3D simulation has to cover.
In this paper we present 3D MHD simulations in which all of these parameters are fixed except the relative directions of the magnetic
field and the velocity of the progenitor.
We perform two simulations, one with the magnetic field perpendicular to the velocity and one with the magnetic field parallel
to it.
In the following text, we call the former the perpendicular simulation and the latter the parallel simulation.
Using canonical values for a massive star, we obtain many radio morphologies of SNRs even with this simplification.
We also compile statistics of different types of SNRs, so that we can better interpret our simulation results.
In Sect.~2, we describe the simulation model and list the parameters we use.
In Sect.~3, we present and discuss the results.
Sect.~4 is a summary.
\section{Simulation model}
The simulation is performed in a 3D MHD framework on a grid of 128 $\times$ 128 $\times$ 128 cells.
The spatial scale is set to 60 pc $\times$ 60 pc $\times$ 60 pc, i.e. the resolution is 0.47 pc pixel$^{-1}$.
Viscosity and gravity have little influence on the simulation, so we ignore them.
Cooling and heating mainly affect the luminosity of the optical and X-ray radiation,
whereas we focus on the radio radiation, so they are not included in the simulation either.
In the stellar wind simulation, thermal conduction is an important process \citep{Meyer2014}, which can
govern the shape, the size and the structure of the stellar wind bubble.
However, it is not the dominant factor in the SNR simulation, so we only discuss its influence in the perpendicular
simulation.
The simulation is based on the ideal conservation equation set:
\begin{equation}
\begin{cases}
\dfrac{\partial \rho}{\partial t} + \nabla \cdot (\rho \bm{v}) = 0 , \\
\dfrac{\partial \rho \bm{v}}{\partial t} + \nabla \cdot (\rho \bm{vv} - \bm{BB}) + \nabla P^* = 0 , \\
\dfrac{\partial E}{\partial t} + \nabla \cdot [(E+P^*)\bm{v} - \bm{B}(\bm{v} \cdot \bm{B})] = 0 , \\
\dfrac{\partial \bm{B}}{\partial t} + \nabla \times (\bm{v} \times \bm{B}) = 0,
\end{cases}
\end{equation}
where $\rho$ is the mass density, $\bm{v}$ the velocity, $\bm{B}$ the magnetic field, $P^*$ the total
pressure, and $E$ the total energy density.
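Here, assuming the standard ideal-MHD closure (with a factor of $1/\sqrt{4\pi}$ absorbed into $\bm{B}$), the total pressure and total energy density read
\begin{equation*}
P^* = p + \frac{\bm{B} \cdot \bm{B}}{2} , \qquad
E = \frac{p}{\Gamma - 1} + \frac{\rho\, \bm{v} \cdot \bm{v}}{2} + \frac{\bm{B} \cdot \bm{B}}{2} ,
\end{equation*}
where $p$ is the thermal pressure and $\Gamma$ the adiabatic coefficient.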
The simulation contains two models, the stellar wind model and the supernova remnant model.
First, we simulate the evolution of the stellar wind and take the results as the initial conditions of the
SNR simulation.
Then we perform the SNR simulation and convert the results to relative radio flux density images.
Finally, we compare the simulated radio images with observed ones.
We perform the simulations with the code PLUTO \footnote{http://plutocode.ph.unito.it/}
\citep{Mignone2007, Mignone2012}
and summarize the parameters in Table~\ref{table:parameters}.
Parameters for which no reference is given are canonical values that we estimate.
\begin{table}
\caption{Summary of Simulation Parameters}
\label{table:parameters}
\centering
\begin{tabular}{l l l}
\hline\hline
Parameters & Value & References \\
\hline
Stellar Wind Parameters\\
\hline
Progenitor Velocity & 40 \kms & 1 \\
Mass-Loss Rate & 3 $\times$ 10$^{-6}$ M$_{\odot}$ yr$^{-1}$ & 2 \\
Stellar Wind Velocity & 800 \kms & 2 \\
Stellar Wind Density & 0.05\ cm$^{-3}$ & 2 \\
Inner Radius & 0.5 pc \\
Evolution Time & 1 million years & 1 \\
\hline
SNR Parameters\\
\hline
Ejecta Mass & 15.3 M$_{\odot}$ & 3\\
Initial Explosion Energy & 1.3$\times$ 10$^{51}$ ergs & 4, 5\\
Initial Radius & 4 pc &\\
Initial Time & 650 years & 6\\
\hline
Other Parameters\\
\hline
Mean Density & 0.5\ cm$^{-3}$ & 7, 8\\
Magnetic Field Intensity & 9 $\mu$G & 9\\
Mean Atomic Weight & 1.3 &\\
Adiabatic Coefficient & 1.7 &\\
Synchrotron Index ($\beta$) & 0.5 &\\
\hline
\end{tabular}\\
\tablerefs{(1)\citealt{Meyer2014}; (2)\citealt{Meyer2015}; (3)\citealt{Sukhbold2016};
(4)\citealt{Poznanski2013}; (5)\citealt{Mueller2016a}; (6)\citealt{Leahy2017a}; (7)\citealt{Nakanishi2006};
(8)\citealt{Nakanishi2016}; (9)\citealt{Haverkorn2015}}
\end{table}
\subsection{The stellar wind model}
How the stellar winds of runaway massive stars evolve is still an unsolved problem, so we use a reasonably simplified model.
If stellar winds are to influence SNRs appreciably, their spatial scales should be similar to those of SNRs.
The typical diameters of SNRs are several parsecs (pc).
\citet{Meyer2014} showed that the mass of the star should be at least 40 M$_{\odot}$ to reach such a scale, if the speed of
the star is 40 \kms.
A lower mass implies a lower speed \citep{Mackey2015}, but a lower speed produces less asymmetry, which is inconsistent with the
aim of this paper.
We therefore choose a mass of 40 M$_{\odot}$ and a speed of 40 \kms as the initial parameters of our simulation.
The life of such a star consists of the main sequence (MS) and the red supergiant (RSG) phase.
However, our tests show that the stellar wind of the MS phase has little impact on the evolution of a SNR, so we only simulate
the wind during the last one million years.
The mass-loss rate of a 40 M$_{\odot}$ star usually varies from 1 $\times$ 10$^{-6}$ to 1 $\times$ 10$^{-5}$ M$_{\odot}$ yr$^{-1}$
during the last one million years of the star's life \citep{Meyer2014, vanMarle2012, vanMarle2015}, so we use a mass-loss rate of
3 $\times$ 10$^{-6}$ M$_{\odot}$ yr$^{-1}$ for simplicity.
We caution that it is not yet realistic to estimate the mass-loss rate of a massive star accurately
\citep{Meyer2014a, Gvaramadze2014}.
We also set the inner radius to 0.5 pc, i.e. the stellar wind is injected from a region of this size in the simulation.
This radius is large enough to guarantee that the wind blows spherically on the Cartesian grid and small enough to be
consistent with the simplified stellar wind model.
The mass-loss rate \textit{$\dot{M}$}, the inner radius \textit{r}, the velocity \textit{v} and the mass density \textit{$\rho$} of
the stellar wind are linked by
\begin{equation}
\begin{aligned}
\dot{M}=4\pi r^2\rho v.
\end{aligned}
\end{equation}
If we assume the stellar wind propagates freely within such a short radius, the initial velocity it acquires from the progenitor
will not change inside 0.5 pc.
The velocity is then about 800 \kms and the density about 0.05 cm$^{-3}$ \citep{Meyer2014}.
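As a rough consistency check on these numbers, the following minimal Python sketch of the relation $\dot{M}=4\pi r^2\rho v$ (standard cgs constants; the conversion to number density assumes the mean atomic weight of 1.3 from Table~\ref{table:parameters}) yields a few $\times\,10^{-2}$ cm$^{-3}$, the same order as the quoted value:
\begin{verbatim}
import numpy as np

MSUN = 1.989e33   # solar mass [g]
YR   = 3.156e7    # year [s]
PC   = 3.086e18   # parsec [cm]
M_H  = 1.673e-24  # hydrogen mass [g]

mdot = 3e-6 * MSUN / YR   # mass-loss rate [g/s]
r    = 0.5 * PC           # inner radius [cm]
v    = 800e5              # wind velocity [cm/s]
mu   = 1.3                # mean atomic weight (assumed, Table 1)

rho = mdot / (4 * np.pi * r**2 * v)  # wind mass density [g/cm^3]
n   = rho / (mu * M_H)               # wind number density [cm^-3]
print(f"n = {n:.3f} cm^-3")          # ~0.04 cm^-3
\end{verbatim}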
In addition, we set the initial surrounding environment before the stellar wind evolution.
We assume the ISM is an ideal gas with a mean atomic weight of 1.3 and an adiabatic coefficient of 1.7.
We set a uniform magnetic field of 9 $\mu$G \citep{Haverkorn2015} and a uniform ISM number density of 0.5 cm$^{-3}$
\citep{Nakanishi2006,Nakanishi2016}, the typical values of the Galactic ISM.
A real environment is usually inhomogeneous, which would produce a more complex radio morphology in the simulation.
However, since we only want to test how SNRs are influenced by stellar winds, we use a homogeneous ISM in this work.
\subsection{The supernova remnant model}
The evolution of a SNR is divided into three phases, the ejecta-dominated (ED) phase, the Sedov-Taylor (ST) phase and
the pressure-driven snowplow (PDS) phase \citep{Truelove1999}.
The first two phases are classified as ``nonradiative'', whereas radiative losses become important in the PDS phase.
Our simulations only cover the first two phases, so we do not need to model radiative losses.
For a 40 M$_{\odot}$ star, the ejecta mass is about 15.3 M$_{\odot}$ \citep{Sukhbold2016} and the explosion energy is
about 3.6 $\times$ 10$^{51}$ erg according to the relation \citep{Poznanski2013,Mueller2016a},
\begin{equation}
\begin{aligned}
\log (E/10^{50}\,\mathrm{erg}) = 2.09 \log (M_{ej}/M_{\odot}) - 1.78 .
\end{aligned}
\end{equation}
To simulate a spherically symmetric explosion, we set the initial radius to 4 pc.
The shock wave of the supernova explosion takes 650 years to reach 4 pc.
Because the ST phase starts at 1365 years for such a star \citep{Leahy2017a}, the remnant is still in the ED phase at that time.
Therefore, we can obtain the first 650 years of evolution directly from existing theory \citep{Truelove1999},
which gives the density, pressure and velocity profiles.
The magnetic field is not important at this stage, so we ignore it here.
In short, the initial conditions of the SNR simulation are the theoretical evolution results at 650 years.
Next we simulate the evolution of the SNR in the surrounding environment blown by the stellar wind.
The simulation yields the density, magnetic field, velocity and pressure in the whole simulation volume.
We further convert these results into radio images in order to compare with real observations.
Assuming the radio emission is entirely due to the synchrotron mechanism, we obtain the radio volume emissivity from
$i(\nu)=C\rho B_{\perp}^{\beta + 1}\nu^{-\beta}$ \citep{Orlando2007}, in which $\nu$ is the radiation frequency,
C a constant, $\rho$ the density, $B_{\perp}$ the magnetic field component perpendicular to the line of sight (LoS) and
$\beta$ the synchrotron spectral index.
The absolute radio flux density depends on the constant C, but C contains the electron acceleration efficiency, which is
difficult to determine.
Moreover, the factor $\nu^{-\beta}$ is also dropped, because it is irrelevant if we do not
calculate the absolute radio flux density.
As a result, the final expression used in this work is $i(\nu)=\rho B_{\perp}^{\beta + 1}$.
We then integrate $i(\nu)$ along the LoS to obtain the relative radio flux density.
The resolution of the simulation is usually higher than that of the observations, so we smooth the simulated radio images with
a 2D Gaussian function with $\sigma = 1$.
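To illustrate this conversion concretely, the following minimal Python sketch (our own illustrative code, not part of the PLUTO pipeline; the array names and the choice of a LoS along the z-axis are assumptions) produces a relative radio image from the simulated density and magnetic field cubes:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_radio_image(rho, bx, by, beta=0.5, sigma=1.0):
    """Relative radio flux density map for a LoS along the z-axis.

    rho, bx, by : 3D arrays; bx and by are the magnetic field
                  components perpendicular to the LoS.
    beta        : synchrotron spectral index (0.5 in this work).
    sigma       : width of the smoothing Gaussian, in pixels.
    """
    b_perp = np.sqrt(bx**2 + by**2)           # B perpendicular to the LoS
    emissivity = rho * b_perp**(beta + 1.0)   # i = rho * B_perp^(beta+1)
    image = emissivity.sum(axis=2)            # integrate along the LoS
    return gaussian_filter(image, sigma=sigma)
\end{verbatim}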
\begin{figure*}
\centering
\includegraphics[width=0.325\textwidth]{G21-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{G120-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{G43-eps-converted-to.pdf}
\caption{The typical multi-layers, circular and irregular SNRs: G21.6-0.8, G120.1+1.4 and G43.3-0.2, respectively. }
\label{fig:stat}
\end{figure*}
\begin{table*}
\caption{Statistics of different SNRs}
\label{table:stat}
\centering
\begin{tabular}{l l l}
\hline\hline
Types & Numbers & Samples \\
\hline
unilateral small-radian & 35 &G4.2-3.5, G5.9+3.1, G6.1+0.5, G6.4+4.0, G7.0-0.1, G7.2+0.2,
G11.1+0.1, G11.1-0.7, \\& & G12.2+0.3, G14.3+0.1, G17.4-0.1, G24.7-0.6, G49.2-0.7, G57.2+0.8, G59.8+1.2, \\& & G65.1+0.6,
G310.8-0.4, G327.4+1.0, G338.1+0.4, G348.5-0.0, \\& & G348.7+0.3, G350.0-2.0, G351.7+0.8, G351.9-0.9, G354.1+0.1,
G359.0-0.9\\
\hline
unilateral large-radian & 15 &G0.0+0.0, G1.9+0.3, G3.8+0.3, G8.3-0.0, G9.8+0.6, G18.6-0.2,
G18.8+0.3, G33.2-0.6, \\& & G55.7+3.4, G66.0-0.0, G116.9+0.2, G119.5+10.2, G298.6-0.0, G321.9-1.1, G342.1+0.9\\
\hline
bilateral symmetric & 17 &G0.9+0.1, G1.0-0.1, G3.7-0.2, G8.7-5.0, G16.2-2.7, G21.0-0.4,
G23.3-0.3, G36.6+2.6, \\& & G59.5+0.1, G65.3+5.7, G296.5+10.0, G321.9-0.3, G327.6+14.6, G332.0+0.2, G349.2-0.1,
\\& & G353.9-2.0, G356.3-1.5\\
\hline
bilateral asymmetric & 11 &G11.0-0.0, G21.8-0.6, G29.7-0.3, G42.8+0.6, G53.6-2.2, G54.4-0.3,
G64.5+0.9, \\& & G304.6+0.1, G348.5+0.1, G350.1-0.3, G352.7-0.1\\
\hline
multi-layers & 13 &G21.6-0.8, G24.7+0.6, G46.8-0.3, G85.4+0.7, G93.3+6.9, G109.1-1.0,
G284.3-1.8, \\& & G286.5-1.2, G318.9+0.4, G320.6-1.6, G327.4+0.4, G358.1+1.0, G358.5-0.9\\
\hline
circular & 42 &G4.5+6.8, G5.2-2.6, G6.5-0.4, G11.2-0.3, G11.4-0.1, G15.9+0.2,
G16.7+0.1, G18.1-0.1, \\& & G21.5-0.9, G27.4+0.0, G69.7+1.0, G82.2+5.3, G83.0-0.3, G84.2-0.8, G111.7-2.1,\\& & G120.1+1.4,
G132.7+1.3, G179.0+2.6, G180.0-1.7, G184.6-5.8, G261.9+5.5, G290.1-0.8, \\& & G299.2-2.9, G301.4-1.0, G302.3+0.7,
G308.1-0.7, G310.6-0.3, G311.5-0.3, G315.4-2.3, \\& & G322.5-0.1, G326.3-1.8, G327.1-1.1, G327.2-0.1, G332.4-0.4,
G337.3+1.0, G346.6-0.2, \\& & G354.8-0.8, G355.6-0.0, G355.9-2.5, G356.2+4.5, G358.0+3.8, G359.1-0.5\\
\hline
irregular & 155 &\\
\hline
\end{tabular}\\
\end{table*}
\section{Results and Discussion}
We show the results and compare them with the observations in this section.
Based on \citet{West2016}'s collection of all radio SNRs' images, we classify the SNRs to seven types: unilateral small-radian,
unilateral large-radian, bilateral symmetric, bilateral asymmetric, multi-layers, circular and irregular.
A multi-layers SNR means there are two or more layers on one or two sides.
The typical multi-layers, circular and irregular SNRs are shown in Figure~\ref{fig:stat}.
The statistics of the seven types is listed in Table.~\ref{table:stat}.
We only select 288 SNRs in this statistics, because other images are obscure.
However, we list all samples except for the irregular type for the convenience of readers.
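As a quick check, the type counts in Table~\ref{table:stat} indeed sum to the 288 selected SNRs:
\begin{verbatim}
counts = {"unilateral small-radian": 35, "unilateral large-radian": 15,
          "bilateral symmetric": 17, "bilateral asymmetric": 11,
          "multi-layers": 13, "circular": 42, "irregular": 155}
assert sum(counts.values()) == 288
\end{verbatim}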
\subsection{Perpendicular Simulation}
\begin{figure*}
\centering
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_xyp-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_xzp-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_yzp-eps-converted-to.pdf}\newline
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_xyp-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_xzp-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_yzp-eps-converted-to.pdf}\newline
\includegraphics[width=0.325\textwidth]{t6_xyp-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{t6_xzp-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{t6_yzp-eps-converted-to.pdf}\newline
\includegraphics[width=0.325\textwidth]{G332-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{G116-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{G12-eps-converted-to.pdf}
\caption{Simulation images assuming the velocity is perpendicular to the magnetic field. The top three images show the
stellar wind simulation results from different viewing directions. The second row shows the SNR simulation results, which use the top
three images as the initial conditions. The third row shows the relative radio flux density converted from the second row.
The last row shows observed radio images of the SNRs G332.0+0.2, G116.9+0.2 and G12.2+0.3 \citep{West2016}.
The three SNRs are bilateral symmetric, unilateral large-radian and unilateral small-radian, respectively. In the top two rows,
the color maps show the density distribution in units of log(cm$^{-3}$). The length and direction of the
white arrows indicate the intensity and direction of the magnetic field, respectively.}
\label{fig:per}
\end{figure*}
The perpendicular simulation is shown in Figure~\ref{fig:per}.
The top panels show the initial conditions from three viewing directions.
Each consists of two parts, the surrounding environment and the inner supernova explosion region.
The surrounding environment results from the stellar wind evolution, and the physical state of the inner region is calculated based on the
work of \citet{Leahy2017a}.
The initial magnetic field and the progenitor velocity are set along the y-axis and the z-axis, respectively.
This leads to an obvious bow structure in the y-z plane and a very chaotic magnetic field in the x-z plane.
To make the patterns clearer, the white arrows and the color maps use different scales in different images.
The values labeled on the color bars are absolute, so they can be used to compare the densities in different images.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t6_xzp_45deg-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{G116-eps-converted-to.pdf}
\caption{The simulated radio image after rotating $45\degr$ around the z-axis, and the observed radio image of SNR G116.9+0.2
\citep{West2016,Tian2006}.}
\label{fig:45deg}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{3D-eps-converted-to.pdf}
\caption{The 3D simulation image after rotating $50\degr$ around the z-axis from the x-z plane. If we rotated it by
$45\degr$, the two middle vertical outlines would overlap, so we rotate a little further to keep them distinct.
The color map indicates the relative radio flux density, with yellow showing high flux density. The arrows show the
magnetic field; the more yellow an arrow, the larger the magnetic field intensity. (This figure is available online as an animation.)}
\label{fig:3D}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{T-eps-converted-to.pdf}
\includegraphics[width=0.45\textwidth]{xx-eps-converted-to.pdf}
\caption{\textit{Left:} the relative temperature distribution in x-z plane. \textit{Right:} ASCA (Advanced Satellite for
Cosmology and Astrophysics) X-ray image of G116.9+0.2 with CGPS (Canadian Galactic Plane Survey) radio contours overlaid (from
\citet{Pannuti2010}).}
\label{fig:X}
\end{figure*}
The second row of Figure~\ref{fig:per} shows the SNR simulation results after 1200 years.
Adding the initial 650 years, the age of this artificial SNR is 1850 years.
The radio morphologies, shown in the third row, are somewhat surprising, especially in the x-z plane.
Our simulations can simultaneously produce bilateral symmetric, unilateral large-radian and unilateral small-radian SNRs.
For comparison, three real SNRs \citep{West2016} are shown in the bottom panels of Figure~\ref{fig:per}.
This demonstrates that the three kinds of SNRs may originate from the same progenitor, with their morphologies depending on the
angle at which we view them.
Bilateral symmetric SNRs have been well studied by simulations and observations \citep{Gaensler1999,Petruk2009a},
but many ambiguities remain for unilateral SNRs.
Here we show the images along three directions, but in fact the SNR morphology varies continuously with viewing angle.
We take SNR G116.9+0.2 as an example.
If we rotate the simulation by $45\degr$ around the z-axis, we obtain a unilateral larger-radian morphology in the new projection plane (see
Figure~\ref{fig:45deg}), which is more similar to SNR G116.9+0.2.
Moreover, polarization observations show that the magnetic field of G116.9+0.2 is parallel to the shell \citep{Sun2011},
which is totally different from the result in the x-z plane.
However, after the $45\degr$ rotation around the z-axis, the magnetic field becomes similar to the observation (see Figure~\ref{fig:3D}).
The X-ray emission region of G116.9+0.2 extends away from the radio shell \citep{Pannuti2010}, which is also reproduced
by our simulation (see Figure~\ref{fig:X}).
In the left panel of Figure~\ref{fig:X}, the high-temperature region at the bottom has a low density compared with the middle panel of
the second row of Figure~\ref{fig:per}, which suggests it is a high-temperature, low-density region full of ionized gas.
This is an appropriate environment for generating X-ray emission via the bremsstrahlung mechanism.
It is therefore possible that the high speed of the progenitor leads to the extended X-ray emission.
\citet{Craig1997,Yar-Uyaniker2004,West2016} have tried to explain the X-ray morphology, but have not reached a firm conclusion.
A more specific simulation of SNR G116.9+0.2 would help us understand it further.
It is worth mentioning that we do not impose a magnetic field gradient or a density gradient at the beginning.
Even though the initial ISM is uniform, we still obtain various morphologies.
In other words, the radio morphology does not depend only on the initial ISM distribution.
Therefore, it is unreasonable to estimate the initial magnetic field or density distribution before the formation of the progenitor
from the radio morphology of a SNR.
Likewise, the radio morphology should not be used to infer the large-scale magnetic field or density distribution of the Milky Way,
since the stellar wind has changed the local environment, making it differ from the large-scale
environment.
In fact, \citet{Orlando2007} obtained similar radio morphologies based on inhomogeneous initial ISM settings,
but they did not explain the origin of such initial conditions.
\citet{vanMarle2010} took the stellar wind into consideration and explained its influence based on HD simulations,
but did not produce radio images.
Moreover, neither work considered the motion of the progenitor.
It is well known that most stars move with respect to their surrounding environment, so our work is a meaningful supplement to the
previous research.
In fact, aiming at particular SNRs, \citet{Vigh2011} studied the asymmetries of the Tycho SNR,
while \citet{Schneiter2006} reproduced the morphology of SNR 3C 400.2 and discussed the effect of thermal conduction.
\citet{Toledo-Roy2014} took the motion of the progenitor into consideration and explained the
morphology of the Kepler SNR well by including the stellar wind.
Further, \citet{Toledo-Roy2014a} combined the X-ray and radio emission and studied SNR G352.7$-$0.1 with an MHD simulation, but they
considered neither the stellar wind nor the motion of the progenitor.
Figure~\ref{fig:per} shows that the relative flux densities in different planes are different:
the flux density is lowest in the x-z plane, higher in the x-y plane, and highest in the y-z plane.
It is therefore reasonable that unilateral small-radian SNRs appear more frequently than
unilateral large-radian SNRs, because bright SNRs are easier to detect.
This inference is supported by the statistics of the SNR morphologies (see Table~\ref{table:stat}).
There should therefore be more undiscovered unilateral large-radian SNRs in our Galaxy.
The third row of Figure~\ref{fig:per} shows that the peak flux density in the y-z plane is about 20 times that in the x-z plane,
so it may become possible to detect more unilateral large-radian SNRs once the sensitivity improves by a factor of 20.
The fact that the number of observed SNRs (about 300, see \citealt{2014BASI...42...47G}) is much smaller than the theoretical
prediction of more than 1000 \citep{Frail1994a,Tammann1994} can be partly explained by our simulation results.
\begin{figure*}
\centering
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_xypc-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_xzpc-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_yzpc-eps-converted-to.pdf}\newline
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_xypc-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_xzpc-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_yzpc-eps-converted-to.pdf}\newline
\caption{Simulation images including thermal conduction. They are analogous to the top two rows of Figure~\ref{fig:per};
the only difference is that thermal conduction is included in the simulation.}
\label{fig:conduction}
\end{figure*}
We also check the influence of thermal conduction in the simulation,
because thermal conduction plays an important role in the evolution of the stellar wind \citep{Meyer2014}.
We apply the explicit scheme and the standard thermal conduction coefficients of the code PLUTO.
Figure~\ref{fig:conduction} shows our simulation results.
With thermal conduction, the bow shell has two layers and the magnetic field also differs from
that without thermal conduction (see Figure~\ref{fig:per}).
\citet{Meyer2015} showed effects on the mixing of material, which are not obvious in our work
because we use different parameters.
The simulation including thermal conduction does not show obvious changes in the density and magnetic field evolution
around the SNR.
The radio morphologies are similar to those in Figure~\ref{fig:per}, so we do not show them.
In conclusion, thermal conduction plays only a small role in the radio evolution of a SNR.
\subsection{Parallel Simulation}
\begin{figure*}
\centering
\includegraphics[width=0.325\textwidth]{rho_t0_density1_E1_yz-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{rho_t6_density1_E1_yz-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{t6_yz-eps-converted-to.pdf}
\caption{Simulation images assuming the velocity is parallel to the magnetic field. The left panel shows the
stellar wind simulation result in the y-z plane. The middle panel shows the SNR simulation result in the y-z plane.
The right panel shows the relative radio flux density converted from the middle panel. }
\label{fig:par}
\end{figure*}
The parallel simulation is shown in Figure~\ref{fig:par}.
All initial parameters are the same as in the perpendicular simulation, and the age is again 1850 years.
We caution that the stellar wind region shows obvious radio emission, which is spurious, because there are no
relativistic electrons in the stellar wind region and the synchrotron mechanism is not important there.
However, we cannot exclude this region from the radio images, because we do not know the boundary of the
region containing relativistic electrons.
This flaw also affects the other simulated radio images.
We only show the y-z plane in Figure~\ref{fig:par}, because the x-z plane is identical to it.
Moreover, although we should see a circular SNR in the x-y plane, our simulation in fact shows a square one,
because the resolution is low and every pixel is square.
The stellar wind simulation is time-consuming, so we deliberately chose a moderate resolution.
\begin{figure*}
\centering
\includegraphics[width=0.325\textwidth]{t4_yz-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{t6_yz-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{t12_yz-eps-converted-to.pdf}\newline
\includegraphics[width=0.325\textwidth]{G53-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{G29-eps-converted-to.pdf}
\includegraphics[width=0.325\textwidth]{G28-eps-converted-to.pdf}
\caption{The upper three images show the simulated relative radio flux density at different ages.
The lower images show observed radio images of the SNRs G53.6-2.2, G29.7-0.3 and G28.6-0.1 \citep{West2016},
all of which are bilateral asymmetric.}
\label{fig:parc}
\end{figure*}
Figure~\ref{fig:par} shows that the radio morphology is that of a bilateral asymmetric SNR.
\citet{vanMarle2014} showed that the magnetic field shapes the stellar wind nebulae of asymptotic giant branch (AGB) stars
into bilateral symmetric morphologies.
Including the motion of AGB stars, \citet{vanMarle2014a} studied the instabilities in such a system.
\citet{Meyer2017} also simulated the bow shock nebulae of hot massive stars in a magnetized medium, with results
similar to our parallel stellar wind simulation.
However, they did not add the supernova explosion or convert the results to radio images.
Taking the circular SNR into account, we are able to simulate five of the seven types of SNRs in our classification.
Only the multi-layers and the irregular SNRs are difficult to simulate.
Their formation is likely influenced by an inhomogeneous initial surrounding environment or an unusual progenitor
\citep{Orlando2007,Orlando2017}.
The upper images of Figure~\ref{fig:parc} show the simulated morphologies at 1450, 1850 and 3050 years, respectively.
For comparison, three real SNRs, G53.6-2.2, G29.7-0.3 and G28.6-0.1, are shown in the lower panels of Figure~\ref{fig:parc}.
Given the similar morphologies of the simulated and observed images, the three SNRs are likely all
a few thousand years old.
In fact, G29.7-0.3 is about one thousand years old \citep{Leahy2008} and G28.6-0.1 is no more than
2700 years old \citep{Bamba2001}.
G53.6-2.2 seems older (about 15,000 years, see \citealt{Long1991}), which is worth further checking.
In addition, the X-ray emission of all three SNRs is more or less separated from the radio shell \citep{Broersen2015,Su2009,Bamba2001},
similar to SNR G116.9+0.2.
The simulation results also match these observations, just as the perpendicular simulation does for G116.9+0.2,
so we do not show them here.
Since the parameters are the same in the two simulations, we can compare the relative flux density of the parallel simulation with
that of the perpendicular simulation at the same age.
Figure~\ref{fig:par} shows that the relative flux density in the y-z plane of the parallel simulation is much lower than that
of the perpendicular simulation.
In other words, bilateral asymmetric SNRs should be less numerous than unilateral small-radian SNRs.
This is supported by the statistics in Table~\ref{table:stat}.
If we only take the x-z plane of the simulation results into consideration, unilateral large-radian SNRs should be less numerous
than bilateral asymmetric SNRs.
However, the direction of the LoS can affect this estimate.
For example, Figure~\ref{fig:45deg} shows a unilateral large-radian SNR that is brighter than the bilateral asymmetric SNR.
Indeed, Table~\ref{table:stat} shows that unilateral large-radian SNRs outnumber bilateral asymmetric SNRs.
\section{Summary}
Taking the evolution result of the stellar wind as the initial conditions, we simulate the SNR evolution of a runaway 40 M$_{\odot}$ progenitor star.
The stellar wind simulations includes two models, the perpendicular simulation and the parallel simulation.
Based on real radio morphologies, we classify the SNRs into seven types.
Our conclusions are summarized as follows:
\begin{enumerate}
\item The stellar wind of the massive progenitor plays a key role in shaping the radio morphologies of SNRs, and is possibly important more than the
initial surrounding environment.
\item Considering the stellar wind, we can explain many radio morphologies of SNRs, except for the multi-layers and irregular SNRs.
\item It is not suggested to infer the large-scale magnetic field or density distribution in Milky Way based on the radio morphologies of SNRs.
\item The thermal conduction might slightly influence the SNR radio morphologies, but is not very important.
\item The separation between X-ray and radio emission of some SNRs is possibly related with the motion of the progenitor.
\end{enumerate}
We note that there are many simplifications in our current work.
It will be interesting to study the formation of multi-layers and irregular SNRs by more detailed simulation in the near future, e.g. including
an inhomogeneous initial surrounding environment or a special progenitor, etc.
\acknowledgements
We thank Dr.~Meyer for explaining the thermal conduction of the stellar wind.
We acknowledge support from the NSFC (11473038).
\software{PLUTO \citep{Mignone2007,Mignone2012}}
\bibliographystyle{aasjournal}
\section{Introduction\label{s-intro}}
Morphogenesis---that is, the development and formation of tissues
and organs---is one of the main mysteries in living organisms.
How does structure emerge from a structureless state
without the apparent action of an external organizing force?
A major factor seems to be the competition between
chemical reactions and spatial diffusion of substances
called morphogens, which are present in the cells.
The idea goes back to the pioneering work of Turing
in 1952~\cite{tu}, who noted that diffusion in a
mixture of chemically reacting morphogens can
cause instability of a spatially uniform steady state
and lead to the formation of spatial patterns;
see the article by Cross and Hohenberg~\cite{cross-h}
and the recent text by Hoyle~\cite{hoyle}
for a comprehensive overview of the theory
of pattern formation and Turing analysis.
Turing's analysis, which is essentially a linear
eigenvalue analysis, has been a basic tool
in the study of nonlinear reaction--diffusion systems;
in fact, it has provided insight into the behavior of
nonlinear systems as well, since the latter can
often be approximated, at least for brief lengths
of time, by linearized systems.
But as time evolves, the nonlinear structure takes
over, and other tools are needed to study the
long-time behavior.
In certain cases, where the existence of
invariant regions for reaction--diffusion
systems can be established,
the solution of a nonlinear system
remains bounded~\cite{smoller};
however, even in these cases it is not known
rigorously whether the patterns persist
in the long run, even though the idea
is supported by many numerical simulations.
The mathematical literature contains
many instances of weakly nonlinear stability
analyses for reaction--diffusion systems.
An early reference is~\cite{h-o},
where a center-manifold approach is used;
other, more recent references are~\cite{w, z-m}.
Sometimes, special techniques have been applied
to the study of Turing patterns in different regimes.
For example, Ref.~\cite{iww} deals with the stability
of symmetric $N$-peaked steady states
for systems where the inhibitor diffuses
much more rapidly than the activator.
We also mention Refs.~\cite{bms, pmm-2005},
which deal with the Schnakenberg model
on a two-dimensional square domain,
where spatially varying diffusion coefficients
cause the removal of the degeneracy
of the Turing bifurcation.
Weakly nonlinear stability analyses can be
justified rigorously on the basis of
modulation theory and a Ginzburg--Landau approximation;
see, for example, Refs.~\cite{bvhs, eck, schn-1, schn-2, vh}.
Murray's monograph~\cite{murray} gives
applications to biological systems
such as animal coat patterns.
The purpose of the present work is
to investigate the nonlinear stability
and persistence of spatiotemporal patterns
on bounded domains.
The investigation is based on recent results
of Ma and Wang~\cite{mw} on attractor
bifurcation for nonlinear equations.
The attractor bifurcation theorem
(see the Appendix, Section~\ref{ss-abt})
sums up the basic features
of a stability-breaking bifurcation.
Starting from the original partial
differential equation, it identifies and
characterizes the local basins of attraction
based on the multiplicity of the eigenvalues
near a bifurcation point.
Thus, the attractor bifurcation theorem
gives the complete picture,
rather than the caricature given
by the amplitude equations.
A second essential feature
of the present investigation
is a center-manifold reduction
to reduce the partial differential equation
to a finite-dimensional dynamical system.
The reduction requires the computation
of the center-manifold function
and the interaction of the higher-order
eigenfunctions with the eigenspace
belonging to the leading eigenvalues.
Such a reduction is inherently difficult,
and for this reason one usually resorts
to a generic form of the reduced equation
which is somewhat detached from the original.
On the other hand, a center-manifold reduction
offers a practical way to find the structure
of the local attractors of the original
partial differential equation.
These attractors completely describe
the local transitions, and their
basins of attraction define the long-time
dynamics associated with the transitions.
Since these are exactly the features of
interest, we have taken this approach
and focused much of our efforts on
the center-manifold reduction.
As a result, we are able to characterize
the types of transitions in terms of
explicitly computable parameters
which depend only on the domain and
the values of the physical parameters
of the system under consideration.
We prove that spatiotemporal patterns
in reaction--diffusion systems
of the activator--inhibitor type
can arise as the result of a
supercritical (pitchfork) or
subcritical bifurcation.
The former results in a continuous transition,
the latter in a discontinuous transition.
In the case of diffusion on a (bounded) interval
or on a rectangular (non-square) domain,
we prove that the attractor consists of
two points, each with its basin of attraction
(Theorem~\ref{th-1d}, Fig.~\ref{fig3}).
In the case of diffusion on a (bounded) square,
the phase diagram after bifurcation consists of
eight steady-state solutions and their connecting
heteroclinic orbits
(Theorem~\ref{th-2d}, Fig.~\ref{fig.hetorbit}).
The conditions for the stability of these
bifurcated steady states and the heteroclinic
orbits are explicit;
they can be verified in terms of eigenvalues
and eigenvectors.
In the framework of classical bifurcation theory,
such an explicit characterization is very difficult,
if not impossible, and the existence of heteroclinic
orbits for a partial differential equation is often
hard to prove.
Although the focus in this article is on
activator--inhibitor systems, the analysis
is quite general and applies, for example,
to systems consisting of a self-amplifying
activator and a depleted substrate.
Following is an outline of the paper.
In Section~\ref{s-problem},
we formulate the reaction--diffusion problem
for an activator--inhibitor mixture and
rewrite it as an evolution equation
in a function space.
In Section~\ref{s-exchange},
we study the exchange of stability,
which is crucial for the stability
and bifurcation analysis.
The results of the bifurcation analysis
are summarized for the one-dimensional
case in Section~\ref{s-1d} and the
two-dimensional case in Section~\ref{s-2d}.
In Section~\ref{s-examples},
we illustrate the theoretical results
on two examples, namely the Schnakenberg
equation and the Gierer--Meinhardt equations.
Section~\ref{s-conclusions} summarizes our
conclusions.
Appendix~\ref{s-appendix} contains a brief summary
of the attractor bifurcation theory from Ref.~\cite{mw}
and the reduction method introduced in Ref.~\cite{mw-b}.
\section{Statement of the Problem\label{s-problem}}
Consider a mixture of two chemical species
which simultaneously react and diffuse;
one of the species is an activator,
the other an inhibitor
of the chemical reaction.
Their respective concentrations $U$ and $V$
satisfy a system of coupled nonlinear
reaction--diffusion equations,
\begin{equation}
\begin{split}
U_t &= f(U,V) + d_1 \Delta U , \\
V_t &= g(U,V) + d_2 \Delta V ,
\end{split}
\Label{eq1-UV}
\end{equation}
subject to no-flux boundary conditions
and given initial conditions.
The functions $f$ and $g$,
which describe the kinetics
of the chemical reaction,
are generally nonlinear
functions of the arguments.
The diffusion coefficients $d_1$ and $d_2$
are constant and positive.
We assume that the system of Eqs.~(\ref{eq1-UV})
admits a uniform steady-state solution
which is positive throughout the domain.
That is, there exist constants
$\bar{u} > 0$ and $\bar{v} > 0$
such that
\begin{equation}
f(\bar{u}, \bar{v}) = 0 , \quad g(\bar{u}, \bar{v}) = 0 .
\Label{equil}
\end{equation}
We are interested in
solutions that bifurcate
from this equilibrium solution
and, in particular, in their
long-term dynamics,
under the assumption that
the equilibrium solution~(\ref{equil})
is stable in the absence of diffusion.
For the bifurcation analysis,
it is convenient to rescale time and space
and rewrite the system~(\ref{eq1-UV})
in the form
\begin{equation}
\begin{split}
U_t &= \gamma f(U,V) + \Delta U , \\
V_t &= \gamma g(U,V) + d \Delta V ,
\Label{eq2-UV}
\end{split}
\end{equation}
where $\gamma= 1/d_1$ and $d = d_2/d_1$.
Thus,
$\gamma$ is a measure of the ratio
of the characteristic times for diffusion
and chemical reaction,
and $d$ is the ratio of
the diffusion coefficients
of the two species.
The above equations are satisfied
on an open bounded domain,
say $\Omega \subset \mathbf R^n$ ($n=1,2$),
while $U$ and $V$ satisfy Neumann (no-flux)
boundary conditions on the boundary $\partial\Omega$
of~$\Omega$.
\subsection{Bifurcation Problem\label{ss-bifurcation}}
Let
\begin{equation}
U = \bar u + u , \quad V = \bar v + v .
\Label{def-uv}
\end{equation}
Since $\bar{u}$ and $\bar{v}$ satisfy
the identities~(\ref{equil}),
we have
\begin{equation}
\begin{split}
f(U,V) &= f_u (\bar{u}, \bar{v}) u + f_v (\bar{u}, \bar{v}) v + f_1 (u,v) , \\
g(U,V) &= g_u (\bar{u}, \bar{v}) u + g_v (\bar{u}, \bar{v}) v + g_1 (u,v) ,
\end{split}
\end{equation}
where $f_1$ and $g_1$ incorporate the
higher-order terms in the Taylor expansions.
The functions $u$ and $v$ satisfy the equations
\begin{equation}
\begin{split}
u_t
&=
\Delta u + \gamma (f_u (\bar{u}, \bar{v}) u + f_v (\bar{u}, \bar{v}) v)
+ \gamma f_1 (u,v) , \\
v_t
&=
d \Delta v + \gamma (g_u (\bar{u}, \bar{v}) u + g_v (\bar{u}, \bar{v}) v)
+ \gamma g_1 (u,v) .
\Label{eq-uv}
\end{split}
\end{equation}
Henceforth we omit the arguments $(\bar{u}, \bar{v})$
and use the abbreviations $f_u$ for $f_u(\bar{u}, \bar{v})$,
et cetera.
Since the variables $U$ and $V$ are associated with
the activator and the inhibitor, respectively,
of the chemical reaction, we have the inequalities
\begin{equation}
f_u > 0 , \quad g_v < 0 .
\Label{ineq1-fg}
\end{equation}
The equilibrium solution~(\ref{equil})
is stable in the absence of diffusion,
so we also have the inequalities
\begin{equation}
f_u g_v - f_v g_u > 0 , \quad
f_u + g_v < 0 .
\Label{ineq2-fg}
\end{equation}
The first inequality in~(\ref{ineq2-fg}),
together with the inequalities~(\ref{ineq1-fg}),
implies that $f_v g_u < 0$.
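Indeed, the inequalities~(\ref{ineq2-fg}) are precisely the conditions under which the eigenvalues of $B$,
\begin{equation*}
\mu_{\pm}
=
\textstyle{\frac12}
\left( \mathrm{tr}(B) \pm \left( (\mathrm{tr}(B))^2 - 4\,\det(B) \right)^{1/2} \right) ,
\end{equation*}
both have negative real part, so the equilibrium is linearly stable under the kinetics alone.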
The problem as stated has two parameters,
$\gamma$ and $d$.
We represent the ordered pair
by a single symbol,
$\lambda = (\gamma, d)$,
and consider $\lambda$ as
the bifurcation parameter.
The bifurcation is from the trivial solution,
$(u,v) = (0,0)$.
\subsection{Abstract Evolution Equation\label{ss-abstract}}
The system of Eqs.~(\ref{eq-uv})
defines an abstract evolution equation
for a vector-valued function
$w: [0, \infty) \to H = (L^2(\Omega))^2$,
\begin{equation}
\frac{dw}{dt}
=
L_\lambda w + G_\lambda (w) , \, t > 0 ; \quad
w(t)
=
\left( \begin{array}{c} u(\cdot\,, t) \\ v(\cdot\,, t) \end{array} \right) .
\Label{eq-w}
\end{equation}
Here, $L_\lambda$ is a linear operator in $H$
of the form
\begin{equation}
L_\lambda = -AD + \gamma B ,
\Label{def-L}
\end{equation}
where
$A: \mathrm{dom}(A) \to H$ is given by the expression
\begin{equation}
A
=
- \Delta I
=
\left( \begin{array}{cc} -\Delta & 0 \\ 0 & -\Delta \end{array} \right) ,
\Label{def-A}
\end{equation}
on $\mathrm{dom}(A) = H_1 = \{ w \in (H^2(\Omega))^2 :
n\cdot \nabla w = 0 \mathrm{~on~} \partial\Omega \}$,
and
$B: H \to H$ and $D: H \to H$ are
represented by the constant matrices
\begin{equation}
B
=
\left( \begin{array}{cc} f_u & f_v \\ g_u & g_v \end{array} \right) , \quad
D
=
\left( \begin{array}{cc} 1 & 0 \\ 0 & d \end{array} \right) .
\Label{def-BD}
\end{equation}
In Eq.~(\ref{def-A}),
$\Delta$ denotes the Laplacian,
$H^2 (\Omega)$ is the usual Sobolev space,
and the gradient on the boundary $\partial\Omega$
of $\Omega$ is taken component-wise.
The nonlinear operator
$G_\lambda: H \to H$
is given by
\begin{equation}
G_\lambda :
w
= \left( \begin{array}{c} u \\ v \end{array} \right)
\mapsto
\gamma
\left( \begin{array}{c} f_1 (u,v) \\ g_1 (u,v) \end{array} \right) .
\Label{def-G}
\end{equation}
Without loss of generality,
we assume that $G_\lambda$
can be written as the sum
of symmetric multilinear forms,
\begin{equation}
G_{\lambda}(w) = \sum _{k=2}^\infty G_{\lambda,k} (w, \ldots\,,w) ,
\Label{def-G2G3}
\end{equation}
where $G_{\lambda,k}$ is a symmetric $k$-linear form ($k=2,3,\ldots$); when the dependence on $\lambda$ is not at issue, we write simply $G_k$.
When the $k$ arguments of $G_k$ coincide,
we write $G_k$ with a single argument,
$G_k (w) = G_k (w, \ldots\,, w)$.
The abstract evolution equation~(\ref{eq-w})
belongs to a class of equations
analyzed in detail in Ref.~\cite{mw};
the relevant results are summarized
for reference purposes in the Appendix,
Section~\ref{ss-abt}.
\section{Exchange of Stability\label{s-exchange}}
The inequalities~(\ref{ineq2-fg}) imply that
\begin{equation}
\det (B) > 0 , \quad \mathrm{tr} (B) < 0 .
\end{equation}
Under these conditions,
diffusion has a destabilizing effect:
At some critical value $\lambda_0$ of $\lambda$,
an exchange of stability occurs and
the solution of Eq.~(\ref{eq-w})
bifurcates from the trivial solution.
\subsection{Eigenvalues and Eigenvectors of
$L_{\lambda}$ and $L^*_{\lambda}$\label{ss-eigen}}
The negative Laplacian $-\Delta$
on a bounded domain $\Omega \subset \mathbf R^n$
with Neumann boundary conditions
is selfadjoint and positive in $L^2(\Omega)$.
Its spectrum is discrete,
consisting of eigenvalues $\rho_k$
with corresponding eigenvectors~$\varphi_k$,
\begin{equation}
- \Delta \varphi_k = \rho_k \varphi_k , \quad k = 1,2,\ldots \,,
\Label{ev-delta}
\end{equation}
We assume that the eigenvalues are ordered,
$0 < \rho_1 \le \rho_2 \le \cdots\,$,
and that the eigenvectors~$\{\varphi_k\}_k$
form a basis in $L^2(\Omega)$.
It follows from the definition~(\ref{def-A})
that $A$ is selfadjoint and positive in $H$;
its spectrum is also discrete, consisting of
the same eigenvalues~$\rho_k$, each now of multiplicity two,
with eigenvectors $(\varphi_k, 0)'$ and $(0, \varphi_k)'$.
The operator $L_\lambda$ reduces via projection
to its components on the linear span of each
eigenvector of $A$.
Let $E_k$ be the component of $L_{\lambda}$
in the eigenspace associated with
the eigenvalue $\rho_k$,
\begin{equation}
E_k (\lambda) = - \rho_k D + \gamma B , \quad k = 1,2,\ldots\,.
\Label{def-Ek}
\end{equation}
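Explicitly, with $B$ and $D$ as in Eq.~(\ref{def-BD}),
\begin{equation*}
E_k (\lambda)
=
\left( \begin{array}{cc}
\gamma f_u - \rho_k & \gamma f_v \\
\gamma g_u & \gamma g_v - \rho_k d
\end{array} \right) ,
\end{equation*}
from which the determinant and trace below follow directly.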
The determinant and trace of $E_k (\lambda)$ are
\begin{equation}
\begin{split}
\det(E_k (\lambda))
&= \gamma^2 \det(B)
+ \gamma \rho_k |g_v|
- \rho_k d (\gamma f_u - \rho_k) , \\
\mathrm{tr}(E_k (\lambda))
&= \gamma \, \mathrm{tr}(B) - \rho_k (1 + d) .
\end{split}
\end{equation}
Note that $\mathrm{tr}(E_k(\lambda))$ is negative
everywhere in the first quadrant and
becomes more negative as $k$ increases.
The eigenvalues of $E_k (\lambda)$
come in pairs,
\begin{equation}
\beta_{ki} (\lambda)
= \textstyle{\frac12}
\left(
\mathrm{tr}(E_k(\lambda))
\pm
\left( (\mathrm{tr}(E_k(\lambda)))^2
- 4\,\det(E_k(\lambda)) \right)^{1/2}
\right) , \quad i=1,2 .
\Label{L-eigenvalues}
\end{equation}
They are either complex conjugate,
with $\Re \beta_{k1} = \Re \beta_{k2} < 0$,
or they are both real,
with $\beta_{k1} + \beta_{k2} < 0$.
We identify $\beta_{k1}$ with the upper ($+$) sign
and $\beta_{k2}$ with the lower ($-$) sign,
so $\Re \beta_{k2} \le \Re \beta_{k1}$.
The eigenvector corresponding to
the eigenvalue $\beta_{ki}$
of $L_\lambda$ is
\begin{equation}
w_{ki}
= \left( \begin{array}{c}
- \gamma f_v \varphi_k \\
(\gamma f_u - \rho_k - \beta_{ki}) \varphi_k \end{array} \right) .
\Label{L-eigenvectors}
\end{equation}
The eigenvectors $w_{k1}$ and $w_{k2}$
are linearly independent
as long as $\beta_{k1} \ne \beta_{k2}$.
The set of eigenvectors $\{w_{ki}\}_{k,i}$
forms a basis for $H$.
Note that $B$ is not symmetric;
its adjoint $B^*$ is the transpose
$B'$ of $B$.
Hence, the adjoint of $L_{\lambda}$ is
$L^*_{\lambda} = - AD + \gamma B'$,
and the adjoint of $E_k(\lambda)$ is
$E_k^*(\lambda) = - \rho_k D + \gamma B'$.
The eigenvalues of $E_k^*(\lambda)$ are
$\bar{\beta}_{ki}$, $i=1,2$,
the complex conjugates of
the eigenvalues $\beta_{ki}$
of $E_k(\lambda)$ given in
Eq.~(\ref{L-eigenvalues}).
Since the latter are either
complex conjugate or real,
the eigenvalues of $L_\lambda$
and $L_\lambda^*$ coincide.
The eigenvector corresponding
to the eigenvalue $\bar{\beta}_{ki}$
of $L_\lambda^*$ is
\begin{equation}
w^*_{ki}
= \left(\begin{array}{c}
-\gamma g_u \varphi_k \\
(\gamma f_u - \rho_k - \bar{\beta}_{ki}) \varphi_k\end{array} \right) .
\Label{L*-eigenvectors}
\end{equation}
\subsection{Exchange of Stability\label{ss-exchange}}
The equation $\det(E_k(\lambda)) = 0$ defines
a curve $\Lambda_k$ in the $(\gamma, d)$-plane,
\begin{equation}
\Lambda_k
= \{ (\gamma, d) : \det(E_k(\lambda)) = 0 \}
= \{ (\gamma, d) : d = d_k (\gamma) \} , \quad k = 1,2, \ldots ,
\Label{Lambda-k}
\end{equation}
where
\begin{equation}
d_k (\gamma)
=
\frac{\gamma^2 \det(B) + \gamma \rho_k |g_v|}
{\rho_k (\gamma f_u - \rho_k)} .
\Label{d-k}
\end{equation}
The expression for $d_k$ can be recast in the form
\begin{equation}
d_k (\gamma) - d^{(s)}
=
\frac{\rho_k |f_v g_u|}{f_u^3}
(\gamma - \gamma_k^{(s)})^{-1}
+ \frac{\det(B)}{\rho_k f_u}
(\gamma - \gamma_k^{(s)}) ,
\end{equation}
\end{equation}
where
$\gamma_k^{(s)} = \rho_k/f_u$
and
$d^{(s)} = (\det(B) + |f_v g_u|)/f_u^2$.
This expression shows that
(i)~~$\Lambda_k$ is symmetric
with respect to the point $(\gamma_k^{(s)}, d^{(s)})$;
(ii)~~$\Lambda_k$ has a vertical asymptote
at $\gamma = \rho_k/f_u$; and
(iii)~~$\Lambda_k$ has an oblique asymptote
with slope $\det(B)/(\rho_k f_u)$.
The symmetry point $(\gamma_k^{(s)}, d^{(s)})$
is located in the first quadrant
of the $(\gamma, d)$-plane;
$\gamma_k^{(s)}$ increases as $k$ increases, while
$d^{(s)}$ is independent of $k$.
The vertical asymptote is in the right-half
of the $(\gamma, d)$-plane,
shifting to the right as $k$ increases.
The slope of the oblique asymptote
is positive, decreasing to zero as $k$ increases.
Therefore, each curve~$\Lambda_k$ has a branch
in the positive quadrant of the $(\gamma, d)$-plane.
The positive branches of the curves
$\Lambda_1$ and $\Lambda_2$
are sketched in Fig.~\ref{fig1}.
\begin{figure}[htb]
\begin{small}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(75,85)
\thicklines
\put(-10,0){\vector(0,1){75}}
\put(-15, 5){\vector(1,0){95}}
\put(1, 0){\line(0,1){75}}
\put(38,0){$\gamma_1$}
\put(40, 5){\line(0,1){2}}
\put(78,0){$\gamma$}
\put(-14,68){$d$}
\put(2,0){$\rho_1/f_u$}
\qbezier(2.000,75.000)(2.000,60.000)(3.000,55.000)
\qbezier(3.000,55.000)(10.000,15.000)(20.000,20.000)
\qbezier(20.000,20.000)(25.000,20.000)(75.000,50.000)
\qbezier(32.000,75.000)(32.000,60.000)(33.000,55.000)
\qbezier(33.000,55.000)(40.000,15.000)(50.000,20.000)
\qbezier(50.000,20.000)(55.000,20.000)(90.000,43.000)
\put(60,48){$\Lambda_1$}
\put(80,42){$\Lambda_2$}
\put(8,13){$R_1$}
\put(15,40){$R_2$}
\end{picture}
\end{center}
\end{small}
\caption{Positive branches of $\Lambda_1$ and $\Lambda_2$.}
\label{fig1}
\end{figure}
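To make Fig.~\ref{fig1} concrete, the following minimal Python sketch (the kinetic derivatives are placeholder values chosen only to satisfy the inequalities~(\ref{ineq1-fg}) and~(\ref{ineq2-fg}), not taken from any specific model) evaluates $d_k(\gamma)$ from Eq.~(\ref{d-k}) and verifies that the leading eigenvalue of $E_1(\lambda)$ vanishes on $\Lambda_1$:
\begin{verbatim}
import numpy as np

# Placeholder kinetics: f_u > 0, g_v < 0, det(B) > 0, tr(B) < 0
fu, fv, gu, gv = 1.0, -2.0, 3.0, -4.0
B = np.array([[fu, fv], [gu, gv]])
detB = fu * gv - fv * gu          # = 2 > 0
rho1 = np.pi**2                   # first Neumann eigenvalue on (0, 1)

def d_k(gamma, rho_k):
    """Critical curve d = d_k(gamma) from det(E_k) = 0."""
    return ((gamma**2 * detB + gamma * rho_k * abs(gv))
            / (rho_k * (gamma * fu - rho_k)))

gamma = 2 * rho1 / fu             # right of the asymptote rho_1/f_u
d = d_k(gamma, rho1)
E1 = -rho1 * np.diag([1.0, d]) + gamma * B
print(np.linalg.eigvals(E1).real.max())   # ~0: lambda lies on Lambda_1
\end{verbatim}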
The curve $\Lambda_k$ separates the region where
$\det(E_k (\lambda)) > 0$ (below the curve)
from the region where $\det(E_k (\lambda)) < 0$
(above the curve).
We focus on the region
below the curve $\Lambda_2$,
bounded on the left by
the first vertical asymptote
at $\gamma = \rho_1/f_u$
and on the right by
$\gamma = \gamma_1$,
where the curves $\Lambda_1$
and $\Lambda_2$ cross
(see Fig.~\ref{fig1}).
The curve $\Lambda_1$ separates
this region into two subregions,
\begin{equation}
\begin{split}
R_1
&= \{ \lambda = (\gamma, d) :
\rho_1/f_u < \gamma < \gamma_1 , \,
0 < d < d_1(\gamma) \} , \\
R_2
&= \{ \lambda = (\gamma, d) :
\rho_1/f_u < \gamma < \gamma_1 , \,
d_1(\gamma) < d < d_2(\gamma) \} .
\Label{def-R1R2}
\end{split}
\end{equation}
These regions are indicated in Fig.~\ref{fig1}.
\begin{lemma} \Label{l-eigenvalues}
The eigenvalues $\beta_{1i}$ ($i=1,2$)
of $L_\lambda$ satisfy the inequalities
\begin{equation}
\begin{array}{ll}
\Re\beta_{11}(\lambda) < 0 , \, \Re\beta_{12}(\lambda) < 0 &
\text{~if~} \lambda \in R_1 , \\
\beta_{11}(\lambda) = 0, \, \beta_{12}(\lambda) < 0 &
\text{~if~} \lambda \in \Lambda_1 , \\
\beta_{11}(\lambda) > 0, \, \beta_{12}(\lambda) < 0 &
\text{~if~} \lambda \in R_2 . \\
\end{array}
\Label{ineq-pes1}
\end{equation}
Furthermore, for $k=2,3,\ldots\,$,
\begin{equation}
\Re\beta_{k1}(\lambda) < 0, \, \Re\beta_{k2}(\lambda) < 0 \,
\text{~if~} \lambda \in \Lambda_1 .
\Label{ineq-pes2}
\end{equation}
\end{lemma}
\begin{proof}
In $R_1$, we have
$\mathrm{tr}(E_1(\lambda)) < 0$ and
$\det(E_1(\lambda)) > 0$,
so $\beta_{11} (\lambda)$ and
$\beta_{12} (\lambda)$ are
either complex conjugate
with a negative real part,
or they are both real and negative.
On $\Lambda_1$,
the leading eigenvalue $\beta_{11}(\lambda)$
is zero.
Since $\mathrm{tr}(E_1(\lambda)) < 0$,
it must be the case that $\beta_{12} (\lambda)$
is real and negative.
In $R_2$, we have
$\mathrm{tr}(E_1(\lambda)) < 0$ and
$\det(E_1(\lambda)) < 0$,
so $\beta_{11} (\lambda)$ and
$\beta_{12} (\lambda)$ are both real,
and they have opposite signs.
On $\Lambda_1$,
$\det (E_2(\lambda)) > 0$.
Since $\det (E_k(\lambda))$ increases with $k$,
it follows that
$\det (E_k(\lambda)) > 0$ for $k=2,3,\ldots$.
Also, $\mathrm{tr} (E_k(\lambda)) < 0$.
Hence, either $\beta_{k1} (\lambda)$ and
$\beta_{k2} (\lambda)$ are complex conjugate
with a negative real part,
or they are both real
and negative.
\end{proof}
The lemma implies that all eigenmodes
are stable as long as $\lambda$ is below
the curve $\Lambda_1$.
However, as soon as $\lambda$ crosses
the ``critical curve'' $\Lambda_1$,
the first unstable eigenmode appears
and an exchange of stability occurs.
\section{Bifurcation Analysis -- One-dimensional Domain \label{s-1d}}
We first consider the bifurcation problem~(\ref{eq-w})
on a one-dimensional domain $\Omega = (0, \ell)$.
We reduce Eq.~(\ref{eq-w})
to its center-manifold representation
near a point $\lambda_0$ on
the critical curve $\Lambda_1$,
as proposed in Ref.~\cite{mw-b}
and sketched in the Appendix,
Section~\ref{ss-red}.
The eigenvalues and eigenvectors
of the negative Laplacian subject to
Neumann boundary conditions
(see Eq.~(\ref{ev-delta}))
are
\begin{equation*}
\rho_k= k^2 (\pi /\ell)^2 , \quad
\varphi_k (x) = \cos(k(\pi/\ell) x) , \, x \in \Omega ;
\qquad k = 1, 2, \ldots.
\end{equation*}
The linear operator $L_\lambda$
decomposes into its components
\begin{equation}
E_k (\lambda)
=
- k^2 (\pi/\ell)^2 D + \gamma B , \quad k=1,2,\ldots\,,
\end{equation}
with
\begin{equation}
\begin{split}
\det(E_k(\lambda))
&=
\gamma^2 \det(B)
+ \gamma |g_v| k^2 (\pi/\ell)^2
- d k^2 (\pi/\ell)^2
(\gamma f_u - k^2 (\pi/\ell)^2) , \\
\mathrm{tr}(E_k(\lambda))
&=
\gamma \mathrm{tr}(B) - (1+d) k^2 (\pi/\ell)^2 .
\end{split}
\end{equation}
Each $E_k$ contributes two eigenvalues,
$\beta_{k1}$ and $\beta_{k2}$,
to the spectrum of $L_\lambda$;
the expressions for $\beta_{ki}$ ($i=1,2$)
in terms of $\det(E_k(\lambda))$
and $\mathrm{tr}(E_k(\lambda))$
are given in Eq.~(\ref{L-eigenvalues}).
The eigenvalues of the adjoint $L_\lambda^*$
are the complex conjugates,
$\bar{\beta}_{k1}$ and $\bar{\beta}_{k2}$.
The eigenvectors of $L_\lambda$ and $L_\lambda^*$
corresponding to the eigenvalues
$\beta_{ki}$ and $\bar{\beta}_{ki}$ are
\begin{equation}
w_{ki}
=
\left( \begin{array}{c}
-\gamma f_v \cos(k(\pi/\ell) x) \\
(\gamma f_u - k^2(\pi/\ell)^2 - \beta_{ki}) \cos (k(\pi/\ell) x)
\end{array} \right)
\Label{wki}
\end{equation}
and
\begin{equation}
w_{ki}^*
=
\left( \begin{array}{c}
-\gamma g_u \cos(k(\pi/\ell) x) \\
(\gamma f_u - k^2(\pi/\ell)^2 - \bar{\beta}_{ki}) \cos (k(\pi/\ell) x)
\end{array} \right) ,
\Label{wki*}
\end{equation}
respectively.
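As a quick numerical cross-check of Lemma~\ref{l-eigenvalues}
(a sketch on our part, not part of the analysis),
the eigenvalues of $E_k(\lambda) = -k^2(\pi/\ell)^2 D + \gamma B$
can be evaluated directly.
Here $D = \mathrm{diag}(1,d)$,
and $B$ is borrowed from the Schnakenberg example
of Section~\ref{s-examples}
with $a=\textstyle{\frac13}$, $b=\textstyle{\frac23}$;
the value of $\gamma$ and the scanned values of $d$
are merely illustrative.
\begin{verbatim}
# Numerical check of the eigenvalue signs of Lemma (l-eigenvalues):
# E_k = -(k*pi)^2 D + gamma*B on Omega = (0,1), with D = diag(1, d)
# and B the Schnakenberg linearization for a = 1/3, b = 2/3.
import numpy as np

a, b = 1.0/3.0, 2.0/3.0
B = np.array([[(b - a)/(a + b),  (a + b)**2],
              [-2.0*b/(a + b),  -(a + b)**2]])
gamma = 100.0                       # illustrative point on the gamma axis

def leading_re(d, kmax=20):
    """Largest real part among the eigenvalues of E_k, k = 1..kmax."""
    D = np.diag([1.0, d])
    return max(np.linalg.eigvals(-(k*np.pi)**2 * D + gamma*B).real.max()
               for k in range(1, kmax + 1))

for d in (10.0, 30.0, 50.0, 70.0):
    print(d, leading_re(d))
\end{verbatim}
The printed leading real part changes sign
between two of the scanned values of $d$,
which marks the crossing of the critical curve $\Lambda_1$
at this value of $\gamma$.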
\subsection{Center-manifold Reduction\label{ss-cmr-1d}}
We are interested in solutions of Eq.~(\ref{eq-w})
near a point $\lambda_0$ on the critical curve $\Lambda_1$.
In the region $R_1$, just below $\Lambda_1$,
both eigenvalues $\beta_{11}$ and $\beta_{12}$
are real, with $\beta_{12} < \beta_{11} < 0$.
As $\lambda$ approaches $\lambda_0$,
the leading eigenvalue $\beta_{11}$
increases and, as $\lambda$ transits
into $R_2$, $\beta_{11}$ passes
through 0 and becomes positive.
Thus, the first exchange of stability occurs.
\begin{lemma} \Label{l-reduction-1d}
Near the critical curve $\Lambda_1$,
the solution of Eq.~(\ref{eq-w})
can be expressed in the form
\begin{equation}
w = y_{11} w_{11} + z , \quad
z = y_{12} w_{12} + \sum_{k=2}^{\infty} \sum_{i=1,2} y_{ki} w_{ki} ,
\Label{w-expand-1d}
\end{equation}
where the coefficient $y_{11}$ of the leading term
satisfies the reduced bifurcation equation,
\begin{equation}
\frac{dy_{11}}{dt}
= \beta_{11} y_{11} + \alpha y_{11}^3 + o(|y_{11}|^3) .
\Label{eq4.6}
\end{equation}
The coefficient $\alpha \equiv \alpha (\lambda)$
is given explicitly in terms of the eigenfunctions
of $L_\lambda$ and $L_\lambda^*$,
\begin{equation}
\alpha (\lambda)
=
\alpha_2 (\lambda) + \alpha_3 (\lambda) ,
\Label{alpha-1d}
\end{equation}
where
\begin{align*}
\alpha_2 (\lambda)
&=
\frac{2}{<w_{11}, w_{11}^*>}
\sum_{i=1,2}
\frac{<G_2(w_{11}, w_{2i}), w_{11}^*> <G_2(w_{11}), w_{2i}^*>}
{(2\beta_{11}-\beta_{2i}) <w_{2i},w_{2i}^*>} , \\
\alpha_3 (\lambda)
&=
\frac{1} {<w_{11}, w_{11}^*>}
<G_3(w_{11}), w_{11}^*> .
\end{align*}
Here, $< \cdot\,, \cdot >$ denotes the inner product in $H$.
(The subscript $\lambda$ on the $k$-linear forms has
been omitted.)
\end{lemma}
\begin{proof}
We look for a solution $w$ of Eq.~(\ref{eq-w})
of the form~(\ref{w-expand-1d}).
In the space spanned by the eigenvector $w_{11}$,
Eq.~(\ref{eq-w}) reduces to
\begin{equation}
\begin{split}
<w_{11}, w_{11}^*> \frac{dy_{11}}{dt}
&= \ <L_\lambda w, w_{11}^*> \ +\ <G_\lambda(w), w_{11}^*> \\
&= \ \beta_{11} <w_{11}, w_{11}^*> y_{11}
\ +\ \sum_{k=2}^\infty <G_{k} (w), w_{11}^*> .
\Label{proj.eq-1d}
\end{split}
\end{equation}
To evaluate the contributions
from the various terms in the sum,
we use the asymptotic expression for
the center-manifold function
near $\lambda_0$
given in the Appendix
(Section~\ref{ss-red}),
Theorem~\ref{th-cm-asympt},
\begin{equation}
y_{ki}
=
\Phi^{\lambda}_{ki} (y_{11})
=
\frac{<G_2(w_{11}), w_{ki}^*> y_{11}^2}
{(2\beta_{11}-\beta_{ki}) <w_{ki}, w_{ki}^*>}
+
o(|y_{11}|^2) , \quad k = 2, 3, \ldots\,.
\Label{cm-1d}
\end{equation}
The contribution from the bilinear form ($k=2$) is
\begin{align*}
<G_2 (w), w_{11}^*>
&=\
<G_2 (y_{11} w_{11} + z), w_{11}^*> \\
&=\
<G_2 (w_{11}), w_{11}^*> y_{11}^2 \\
&\hspace{2em}+\ 2<G_2 (w_{11}, z), w_{11}^*> y_{11} + <G_2 (z), w_{11}^*> .
\end{align*}
The first term in the right member vanishes,
because
\[
<G_2 (w_{11}), w_{11}^*> \ = 0 .
\]
The second and third term can be evaluated
by means of the asymptotic expression~(\ref{cm-1d})
for the center manifold,
\begin{align*}
<G_2 (w_{11}, z), w_{11}^*>
&=
\sum_{i=1,2} <G_2 (w_{11}, w_{2i}), w_{11}^*> y_{2i}
+\ o(|y_{11}|^2) \\
&=\
\textstyle{\frac12} \alpha_2 <w_{11}, w_{11}^*> y_{11}^2 + o(|y_{11}|^2) , \\
<G_2 (z), w_{11}^*>
&= o(|y_{11}|^3) ,
\end{align*}
where $\alpha_2$ is defined in Eq.~(\ref{alpha-1d}).
Putting it all together, we obtain the asymptotic result
\begin{equation}
<G_2(w), w_{11}^*>
\ =\
\alpha_2 <w_{11}, w_{11}^*> y_{11}^3 + o(|y_{11}|^3) .
\end{equation}
The contribution from the trilinear form ($k=3$) is
\begin{equation}
\begin{split}
<G_3 (w), w_{11}^*>
&=\ <G_3 (w_{11}), w_{11}^*> y_{11}^3 + o(|y_{11}|^3) \\
&=\ \alpha_3 <w_{11}, w_{11}^*> y_{11}^3 + o(|y_{11}|^3) ,
\end{split}
\end{equation}
where $\alpha_3$ is defined in Eq.~(\ref{alpha-1d}).
The higher-order forms contribute only terms of $o(|y_{11}|^3)$.
\end{proof}
\subsection{Structure of the Bifurcated Attractor\label{ss-attractor-1d}}
The results of the bifurcation analysis
for one-dimensional spatial domains
are summarized in the following theorem.
\begin{theorem}
\Label{th-1d}
$\Omega = (0,\ell)$.
\begin{itemize}
\item If $\alpha(\lambda_0) < 0$, then
the following statements are true:
\begin{enumerate}
\item
$w = 0$ is a locally asymptotically stable
equilibrium point of Eq.~(\ref{eq-w}) for
$\lambda \in R_1$ or $\lambda \in \Lambda_1$.
\item
The solution of Eq.~(\ref{eq-w})
bifurcates supercritically
from $(\lambda_0, 0)$ to
an attractor $\mathcal A_\lambda$
as $\lambda$ crosses $\Lambda_1$
from $R_1$ into~$R_2$.
\item
There exists an open set $U_\lambda \subset H$
with $0 \in U_\lambda$ such that
the bifurcated attractor $\mathcal A_\lambda$
attracts $U_\lambda \setminus \Gamma$ in $H$,
where $\Gamma$ is the stable manifold
of~$0$ with codimension~$1$.
\item
The attractor $\mathcal A_\lambda$ consists
of two steady-state points,
$w_\lambda^+$ and $w_\lambda^-$,
\begin{equation}
w_\lambda^{\pm}
= {\pm} (\beta_{11}/|\alpha|)^{1/2} w_{11}
+ \omega_\lambda , \quad \lambda \in R_2 ,
\Label{attractor-1d}
\end{equation}
where
$\|\omega_\lambda\|_H = o (\beta_{11}^{1/2})$.
\item
There exists an $\varepsilon > 0$
and two disjoint open sets
$U_\lambda^+$ and $U_\lambda^-$ in~$H$,
with
$0 \in \partial U_\lambda^+ \cap \partial U_\lambda^-$,
such that $w_\lambda^\pm \in U_\lambda^\pm$
and
$\lim_{t \to \infty} ||w(t; w_0) - w_\lambda^\pm||_H = 0$
for any solution $w(t; w_0)$ of Eq.~(\ref{eq-w})
satisfying the initial condition
$w(0; w_0) = w_0 \in U_\lambda^\pm$
and any $\lambda$ satisfying the condition
$\mathrm{dist} (\lambda_0, \lambda) < \varepsilon$.
\end{enumerate}
\item
If $\alpha(\lambda_0) > 0$,
then the solution of Eq.~(\ref{eq-w})
bifurcates subcritically from $(\lambda_0, 0)$
to exactly two repeller points
as $\lambda$ crosses $\Lambda_1$
from $R_2$ into $R_1$.
\end{itemize}
\end{theorem}
\begin{proof}
Equation~(\ref{eq4.6}) shows that,
if $\alpha(\lambda_0) < 0$,
then $w=0$ is a locally
asymptotically stable
equilibrium point.
According to the attractor bifurcation theorem
(Section~\ref{ss-abt}, Theorem~\ref{t-abt}),
the system bifurcates at $(\lambda_0,0)$
to an attractor $\mathcal A_{\lambda}$
as $\lambda$ transits from $R_1$ into $R_2$.
The structure of the attractor follows
from the stationary form of Eq.~(\ref{eq4.6}),
\[
\beta_{11} y_{11} + \alpha y_{11}^3 + o(|y_{11}|^3) = 0 .
\]
The number and nature of the solutions of this equation
does not change if the terms of $o(|y_{11}|^3)$
are ignored, provided all solutions are regular
at the origin.
Thus, if $\alpha < 0$, we find
two solutions near $y=0$,
\begin{equation}
y_{11} = \pm (\beta_{11} / |\alpha|)^{1/2} + o(\beta_{11}^{1/2}) .
\end{equation}
The last assertion of the theorem follows by time reversal.
\end{proof}
Theorem~\ref{th-1d} shows that,
if $\alpha(\lambda_0) < 0$,
the attractor consists of two steady-state points,
each with its own basin of attraction.
The attractor bifurcation is shown
schematically in Fig.~\ref{fig3}.
From the perspective of pattern formation,
the theorem predicts the persistence
of two types of patterns
that differ only in phase;
which of the two patterns is actually
realized depends on the initial data.
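The statement can be visualized with
a minimal numerical sketch
(ours; the coefficient values are illustrative,
not derived from a specific model)
that integrates the truncated reduced equation~(\ref{eq4.6}).
\begin{verbatim}
# Truncated reduced equation (eq4.6): dy/dt = beta*y + alpha*y^3 with
# alpha < 0.  Every nonzero initial state is driven to one of the two
# bifurcated states +-sqrt(beta/|alpha|); the sign of y(0) picks the
# branch, i.e. the phase of the pattern.
beta, alpha, dt = 0.5, -1.0, 1.0e-3   # illustrative, lambda in R_2

for y0 in (0.3, -0.02, 1.7):
    y = y0
    for _ in range(40000):            # explicit Euler up to t = 40
        y += dt*(beta*y + alpha*y**3)
    print(f"y(0) = {y0:5.2f}  ->  y = {y:+.4f}")
print("branches: +-", (beta/abs(alpha))**0.5)
\end{verbatim}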
\begin{figure}[htb]
\begin{center}
{\setlength{\unitlength}{1mm}
\begin{picture}(60,70)
\thicklines
\put(10,0){\line(0,1){63}}
\put(-15, 30){\line(1,0){25}}
\put(10,30){\circle*{1}}
\put(13, 30){\line(1,0){3}}
\put(19, 30){\line(1,0){3}}
\put(25, 30){\line(1,0){3}}
\put(31, 30){\line(1,0){3}}
\put(37, 30){\line(1,0){3}}
\put(44,30){\vector(1,0){7}}
\put(1, 20){\line(0,1){20}}
\put(1,25){\vector(0,1){.5}}
\put(1,35){\vector(0,-1){.5}}
\put(1,30){\circle*{1}}
\qbezier(35.000,50.000)(-15.000,30.000)
(35.000,10.000)
\put(25, 0){\line(0,1){60}}
\put(25,20){\vector(0,-1){.5}}
\put(25,40){\vector(0,1){.5}}
\put(25,50){\vector(0,-1){.5}}
\put(25,10){\vector(0,1){.5}}
\put(25, 30){\circle{1}}
\put(25, 14.45){\circle*{1}}
\put(25, 45.3){\circle*{1}}
\put(26, 15){$w^{-}$}
\put(26, 43.5){$w^{+}$}
\put(9,65){$\Lambda_1$}
\put(0,58){$R_1$}
\put(16,58){$R_2$}
\put(5,25){$\lambda_0$}
\put(48,25){$\lambda$}
\end{picture}}
\end{center}
\caption{One-dimensional domain:
Supercritical bifurcation to an attractor
$\mathcal A = \{w^{+}, w^{-} \}$.}
\label{fig3}
\end{figure}
\section{Bifurcation Analysis -- Two-dimensional Domains\label{s-2d}}
Next, we consider the bifurcation problem~(\ref{eq-w})
on a two-dimensional domain
$\Omega = (0, \ell_1) \times (0, \ell_2)$.
As in the one-dimensional case,
we reduce Eq.~(\ref{eq-w})
to its center-manifold representation
near a point $\lambda_0 \in \Lambda_1$.
The eigenvalues and eigenvectors
of the negative Laplacian subject to
Neumann boundary conditions
(see Eqs.~(\ref{wki}) and (\ref{wki*}))
are
\begin{equation*}
\begin{split}
\rho_{k_1k_2} &= k_1^2 (\pi/\ell_1)^2 + k_2^2 (\pi/\ell_2)^2 , \\
\varphi_{k_1k_2} (x)
&= \cos(k_1 (\pi/\ell_1) x_1) \cos(k_2 (\pi/\ell_2) x_2) , \,
x = (x_1, x_2) \in \Omega .
\end{split}
\end{equation*}
Here, $k_1$ and $k_2$ range over all nonnegative integers
such that $|k| = k_1 + k_2 = 1, 2, \ldots$.
The eigenvalues $\beta_{k_1k_2i}$ ($i=1,2$)
and the corresponding eigenvectors of $L_\lambda$
are given in Eqs.~(\ref{L-eigenvalues})
and~(\ref{L-eigenvectors}), respectively,
where $k$ now stands for the ordered pair $(k_1,k_2)$.
The dynamics depend on the relative size of
$\ell_1$ and $\ell_2$.
On a rectangular (non-square) domain,
they are essentially the same as
on a one-dimensional domain.
For example, if $\ell_2 < \ell_1$,
then $\rho_{10} = (\pi/\ell_1)^2$
is the smallest eigenvalue of the
negative Laplacian, with corresponding
eigenvector $\varphi_{10} = \cos((\pi/\ell_1)x_1)$,
and the leading eigenvalue of $L_\lambda$
is $\beta_{101}$.
This eigenvalue is simple, and
the corresponding eigenvector is
\begin{equation}
w_{101}
=
\left( \begin{array}{c}
-\gamma f_v \cos((\pi/\ell_1) x_1) \\
(\gamma f_u - (\pi/\ell_1)^2 - \beta_{101}) \cos ((\pi/\ell_1) x_1)
\end{array} \right) .
\end{equation}
The center-manifold reduction leads to
a one-dimensional dynamical system
similar to Eq.~(\ref{eq4.6}).
Lemma~\ref{l-reduction-1d} and Theorem~\ref{th-1d}
apply verbatim if $\beta_{11}$ is replaced by
$\beta_{101}$ and $w_{11}$ by $w_{101}$ everywhere.
On the other hand, the dynamics become
qualitatively different if the domain
is square---that is, if
$\ell_1 = \ell_2 = \ell$
and $\Omega = (0,\ell)^2$.
The eigenvalues and eigenvectors
of $-\Delta$ on the square are
\begin{align*}
\rho_{k_1k_2} &= (k_1^2 + k_2^2) (\pi/\ell)^2 , \\
\varphi_{k_1k_2} (x) &= \cos(k_1(\pi/\ell)x_1) \cos(k_2(\pi/\ell)x_2) , \quad
x = (x_1, x_2) .
\end{align*}
Note that $\rho_{k_1k_2} = \rho_{k_2k_1}$
for any pair $(k_1,k_2)$,
so the eigenvalues $\beta_{k_1k_2i}$ ($i=1,2$)
of $L_\lambda$ satisfy the same symmetry condition,
$\beta_{k_1k_2i} = \beta_{k_2k_1i}$.
To avoid notational complications,
we consider two eigenvalues, even if
they coincide because of symmetry,
as distinct and associate with each
its own eigenvector.
Thus, we associate the eigenvector
\begin{equation}
w_{k_1k_2i}
=
\left( \begin{array}{c}
-\gamma f_v \varphi_{k_1k_2} \\
(\gamma f_u - \rho_{k_1k_2} - \beta_{k_1k_2i}) \varphi_{k_1k_2}
\end{array} \right)
\Label{wk1k2i}
\end{equation}
with the eigenvalue $\beta_{k_1k_2i}$,
and the eigenvector
\begin{equation}
w_{k_1k_2i}^*
=
\left( \begin{array}{c}
-\gamma g_u \varphi_{k_1k_2} \\
(\gamma f_u - \rho_{k_1k_2} - \bar{\beta}_{k_1k_2i}) \varphi_{k_1k_2}
\end{array} \right)
\Label{wk1k2i*}
\end{equation}
with the eigenvalue $\bar{\beta}_{k_1k_2i}$,
whether $k_1$ and $k_2$ are equal or not.
\subsection{Center-manifold Reduction\label{ss-cmr-2d}}
We are again interested in values of $\lambda$
near the critical curve $\Lambda_1$,
where the first exchange of stability occurs.
The leading eigenvalues are
$\beta_{101}$ and $\beta_{011}$.
These eigenvalues coincide,
but we consider them separately,
each with its own eigenvector.
The two eigenvalues pass (together)
through 0 as $\lambda$ crosses
$\Lambda_1$ into $R_2$ from $R_1$,
at the value $\lambda = \lambda_0$.
\begin{lemma} \Label{l-reduction-2d}
Near the critical curve $\Lambda_1$,
the solution of Eq.~(\ref{eq-w})
can be expressed in the form
\begin{equation}
w = y_1 w_1 + y_2 w_2 + z , \quad
z = \sum_{(k_1,k_2): |k|=2,3,\ldots} \sum_{i=1,2} y_{k_1k_2i} w_{k_1k_2i} ,
\Label{w-expand-2d}
\end{equation}
where $w_1 = w_{101}$ and $w_2 = w_{011}$.
The coefficients $y_1$ and $y_2$
of the leading terms satisfy a system
of equations of the form
\begin{equation}
\begin{split}
\frac{dy_1}{dt}
&=
\beta_{101} y_1 + (\alpha y_1^2 + \sigma y_2^2) y_1 + o(|y|^3) , \\
\frac{dy_2}{dt}
&=
\beta_{011} y_2 + (\alpha y_2^2 + \sigma y_1^2) y_2 + o(|y|^3) ,
\Label{eq.cmr2D}
\end{split}
\end{equation}
where $\beta_{101} = \beta_{011}$.
The coefficients $\alpha \equiv \alpha(\lambda)$
and $\sigma \equiv \sigma(\lambda)$
are given explicitly in terms of the eigenfunctions
of $L_\lambda$ and $L_\lambda^*$,
\begin{equation}
\alpha (\lambda)
=
\alpha_2(\lambda) + \alpha_3(\lambda) ,
\Label{alpha-2d}
\end{equation}
where
\begin{align*}
\alpha_2 (\lambda)
&=
\frac{2}{<w_1, w_1^*>}
\sum_{i=1,2}
\frac
{<G_2(w_1, w_{20i}), w^*_1> <G_2(w_1), w_{20i}^*>}
{(2\beta_{101}-\beta_{20i}) <w_{20i}, w_{20i}^*>} , \\
\alpha_3 (\lambda)
&=
\frac{1}{<w_1, w_1^*>}
<G_3(w_1), w_1^*> ,
\end{align*}
and
\begin{equation}
\sigma(\lambda)
=
\sigma_2(\lambda) + \sigma_3(\lambda) ,
\Label{sigma-2d}
\end{equation}
where
\begin{align*}
\sigma_2 (\lambda)
&=
\frac{4}{<w_1, w_1^*>}
\sum_{i=1,2}
\frac{<G_2(w_2, w_{11i}), w^*_1> <G_2(w_1, w_2), w_{11i}^*>}
{(2\beta_{101}-\beta_{11i}) <w_{11i}, w_{11i}^*>} , \\
\sigma_3 (\lambda)
&=
\frac {3}{<w_1, w_1^*>}
<G_3(w_1, w_2, w_2), w_1^*> .
\end{align*}
The asymptotic estimates in Eq.~(\ref{eq.cmr2D})
are valid as $|y| = \|y_1\|+\|y_2\| \to 0$.
\end{lemma}
\begin{proof}
We look for a solution $w$ of Eq.~(\ref{eq-w})
of the form (\ref{w-expand-2d}).
In the space spanned by the eigenvectors
$w_1 = w_{101}$ and $w_2 = w_{011}$,
Eq.~(\ref{eq-w}) reduces to
\begin{equation}
\begin{split}
<w_1, w_1^*> \frac{dy_1}{dt}
&=
\beta_{101} <w_1, w_1^*> y_1 + \sum_{k=2}^\infty <G_k(w), w_1^*> , \\
<w_2, w_2^*> \frac{dy_2}{dt}
&=
\beta_{011} <w_2, w_2^*> y_2 + \sum_{k=2}^\infty <G_k(w), w_2^*> .
\Label{proj.eq-2d}
\end{split}
\end{equation}
To evaluate the contributions from the various terms in the sums,
we again use the asymptotic expression for the center-manifold
function near $\lambda_0$ given in the Appendix
(Section~\ref{ss-red}), Theorem~\ref{th-cm-asympt},
\begin{equation}
\begin{split}
y_{k_1k_2i}
&= \Phi^{\lambda}_{k_1k_2i}(y_1, y_2) \\
&=
\frac{\sum_{j=1,2} < G_2 (w_j), w_{k_1k_2i}^*> y_j^2}
{(2\beta_{101}-\beta_{k_1k_2i}) <w_{k_1k_2i}, w_{k_1k_2i}^*>}
+ o(|y|^2) , \quad k_1 \not= k_2 , \\
y_{kki}
&= \Phi^{\lambda}_{kki}(y_1, y_2)
=
\frac{2 <G_2 (w_1, w_2), w_{kki}^*> y_1 y_2}
{(2\beta_{101}-\beta_{kki}) <w_{kki}, w_{kki}^*>}
+ o(|y|^2) ,
\Label{cm-2d}
\end{split}
\end{equation}
where $|y|^2 = \|y_1\|^2 + \|y_2\|^2$.
Consider the first of Eqs.~(\ref{proj.eq-2d}).
The contribution from the bilinear form is
\begin{align*}
<G_2 (w), w_1^*>
&=\
\sum_{i=1,2} <G_2 (w_i), w_1^*> y_i^2 \\
&\hspace{2em}+ 2 \sum_{i=1,2} <G_2( w_i, z), w_1^*> y_i
+ <G_2 (z), w_1^*> .
\end{align*}
The first term in the right member vanishes,
because
\[
<G_2 (w_i), w_1^*> \ = 0 , \quad i=1,2 .
\]
The last term is asymptotically small,
\[
<G_2 (z), w_1^*> \ = o(|y|^3) .
\]
The second term involves an infinite sum
over $(k_1,k_2)$ with $|k|=2,3,\ldots$.
Many of the coefficients are zero,
because of the specific form of
$w_1$, $w_2$, and $w_{k_1k_2j}$.
The non-zero terms can be evaluated
asymptotically by means of
the expression~(\ref{cm-2d}).
In fact, the only terms that are
non-zero and contribute to the leading-order
(cubic) terms in $y$ are those
with $i=1$ and either $(k_1, k_2) = (2, 0)$
or $(k_1, k_2) = (1, 1)$.
Asymptotic expressions for
$y_{20i}$ and $y_{11i}$ ($i=1,2$)
are given in Eq.~(\ref{cm-2d}),
where we note that only the term
with $j=1$ contributes to $y_{20i}$.
Taken together, these observations show
that the contribution from the bilinear form
is
\begin{equation}
<G_2 (w), w_1^*>
\ =\
<w_1, w_1^*> (\alpha_2 y_1^2 + \sigma_2 y_2^2) y_1 + o(|y|^3) ,
\Label{proj.g2-2d}
\end{equation}
where $\alpha_2$ and $\sigma_2$ are defined in
Eqs.~(\ref{alpha-2d}) and~(\ref{sigma-2d}), respectively.
The contribution from the trilinear form is
\begin{equation}
\begin{split}
<G_3 (w), w_1^*>
&=
<G_3 (w_1), w_1^*> y_1^3 + 3 <G_3 (w_1, w_2, w_2), w_1^*> y_1 y_2^2 + o(|y|^3) \\
&=\
<w_1, w_1^*> (\alpha_3 y_1^2 + \sigma_3 y_2^2) y_1 + o(|y|^3) ,
\end{split}
\Label{proj.g3-2d}
\end{equation}
where $\alpha_3$ and $\sigma_3$ are defined in
Eq.~(\ref{alpha-2d}) and~(\ref{sigma-2d}), respectively.
The computations for the second of Eqs.~(\ref{proj.eq-2d})
are entirely similar.
One finds the differential equation for $y_2$
given in the statement of the lemma
with the same expressions for the coefficients
$\alpha$ and $\sigma$.
We omit the details.
\end{proof}
\subsection{Structure of the Bifurcated Attractor\label{ss-attractor-2d}}
Before proceeding to the analysis of
the structure of the bifurcated attractor,
we recall the following result,
the proof of which can be found in Ref.~\cite{mw-b}.
\begin{lemma} \Label{l-structure}
Let $y_\lambda \in \mathbf R^2$ be a solution of the evolution equation
\[
\frac{dy}{dt} = \lambda y - G_{\lambda, k} (y) + o(|y|^k) ,
\]
where $G_{\lambda,k}$ is a symmetric $k$-linear field,
$k$ odd and $k \ge 3$, satisfying the inequalities
\[
C_1 |y|^{k+1} \le \ < G_{\lambda,k} (y), y> \ \le C_2 |y|^{k+1}
\]
for some constants $C_2>C_1>0$, uniformly in $\lambda$.
Then $y_\lambda$ bifurcates from $(y,\lambda)=(0,0)$
to an attractor $\mathcal A_\lambda$ which is homeomorphic to $S^1$.
Moreover, one and only one of the following statements is true:
\begin{enumerate}
\item $\mathcal A_\lambda$ is a periodic orbit;
\item $\mathcal A_\lambda$ consists of an infinite number of singular points;
\item $\mathcal A_\lambda$ contains at most $2(k+1)$ singular points,
which are either saddle points
or (possibly degenerate) stable nodes
or singular points with index zero.
The number of saddle points is
equal to the number of stable nodes,
and both are even ($2N$, say).
If the number of singular points is
more than $4N$
($4N+n$ say, where $4N+n \le 2(k+1)$),
then the number of singular points
with index zero is $n$ and $N+n \ge 1$.
\end{enumerate}
\end{lemma}
The results of the bifurcation analysis
for two-dimensional spatial domains
are summarized in the following theorem.
\begin{theorem}
\Label{th-2d}
$\Omega = (0,\ell)^2$.
\begin{itemize}
\item
If $\alpha(\lambda_0) < 0$ and
$\alpha(\lambda_0) + \sigma(\lambda_0) < 0$,
the following statements are true:
\begin{enumerate}
\item $w = 0$ is a locally asymptotically stable
equilibrium point of Eq.~(\ref{eq-w})
for $\lambda \in R_1$ or $\lambda \in \Lambda_1$.
\item The solution of Eq.~(\ref{eq-w})
bifurcates from $(\lambda_0, 0)$
to an attractor $\mathcal A(\lambda)$
as $\lambda$ crosses $\Lambda_1$
from $R_1$ into $R_2$.
\item The attractor $\mathcal A(\lambda)$ is
homeomorphic to $S^1$.
\end{enumerate}
\item If $\alpha(\lambda_0) < 0$ and
$\sigma(\lambda_0) < 0$,
the attractor $\mathcal A(\lambda)$
consists of an infinite number
of steady-state points.
\item If $\alpha(\lambda_0) < 0$ and
$\sigma(\lambda_0) \ge 0$,
the attractor $\mathcal A(\lambda)$
consists of exactly eight steady-state points,
which can be expressed as
\begin{equation}
w_\lambda = W_\lambda + \omega_\lambda , \quad \lambda \in R_2 ,
\Label{attractor-2d}
\end{equation}
where $W_\lambda$ belongs to the eigenspace
corresponding to $\beta_{101}$ and
$\|\omega_\lambda\|_H = o(\|W_\lambda\|_H)$.
\item If $\alpha(\lambda_0) > 0$ and
$\alpha(\lambda_0) + \sigma(\lambda_0) > 0$,
the solutions of Eq.~(\ref{eq-w})
bifurcate from $(\lambda_0,0)$
to a repeller $\mathcal A(\lambda)$
as $\lambda$ transits into $R_2$.
Also, $\mathcal A(\lambda)$ is homeomorphic to $S^1$.
\end{itemize}
\end{theorem}
\begin{proof}
Equation~(\ref{proj.eq-2d}) shows that,
if $\alpha(\lambda_0) < 0$ and
$\alpha (\lambda_0) + \sigma(\lambda_0) < 0$,
then $w=0$ is a locally asymptotically stable equilibrium point.
It follows from
Lemma~\ref{l-structure} and
the attractor bifurcation theorem~\ref{t-abt},
that the system bifurcates at $(\lambda_0,0)$
to an attractor~$\mathcal A_\lambda$
as $\lambda$ transits from $R_1$ into $R_2$,
and that $\mathcal A_\lambda$ is homeomorphic to $S^1$.
The structure of the bifurcated attractor is found
from the stationary form of Eq.~(\ref{eq.cmr2D}).
Ignoring the terms of $o(|y|^3)$,
we have the system of equations
\begin{equation}
\begin{split}
(\beta_{101} + \alpha y_1^2 + \sigma y_2^2) y_1 = 0 , \\
(\beta_{101} + \alpha y_2^2 + \sigma y_1^2) y_2 = 0 .
\Label{eq4.13}
\end{split}
\end{equation}
If $\alpha <0$,
$\alpha + \sigma <0$, and
$\sigma \ge 0$,
the system~(\ref{eq4.13}) admits
eight nonzero solutions near $y=0$,
\begin{equation}
\begin{split}
y_1 &= 0 , \quad y_2^2 = \beta_{101} / |\alpha| ; \\
y_2 &= 0 , \quad y_1^2 = \beta_{101} / |\alpha| ; \\
y_1^2&= y_2^2 = \beta_{101} / |\alpha + \sigma| .
\end{split}
\end{equation}
These solutions are regular, so Eq.~(\ref{eq.cmr2D})
also has eight steady-state solutions;
they differ from the solutions of Eq.~(\ref{eq4.13})
by terms that are $o(|y|)$.
The last part of the theorem follows
by reversing time.
\end{proof}
Theorem~\ref{th-2d} shows that,
if $\alpha(\lambda_0) < 0$ and
$\alpha(\lambda_0) + \sigma(\lambda_0) < 0$,
the bifurcation is an $S^1$-attractor bifurcation.
If both $\alpha(\lambda_0)$ and $\sigma(\lambda_0)$
are negative, the attractor consists of
an infinite number of steady-state points;
on the other hand, if $\alpha(\lambda_0) < 0$
and $\sigma(\lambda_0) \ge 0$,
the attractor consists of precisely eight
steady-state points.
Figure~\ref{fig.hetorbit} shows the
phase diagram on the center manifold
after bifurcation, when $\lambda$
has crossed the critical curve~$\Lambda_1$
into the region $R_2$.
The phase diagram consists of eight
steady-state points and the
heteroclinic orbits connecting them.
The odd-indexed points ($P_1$, $P_3$, $P_5$, and $P_7$)
are minimal attractors;
they correspond to striped patterns.
The even-indexed points ($P_2$, $P_4$, $P_6$ and $P_8$)
are saddle points.
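For completeness, the truncated flow~(\ref{eq.cmr2D})
can be integrated numerically;
in the sketch below (ours)
the coefficients are illustrative values
in the eight-point regime
$\alpha < 0$, $\sigma \ge 0$, $\alpha + \sigma < 0$,
and generic initial data settle on
one of the eight steady states of Eq.~(\ref{eq4.13}).
\begin{verbatim}
# Truncated flow (eq.cmr2D) with illustrative coefficients in the
# regime alpha < 0 <= sigma, alpha + sigma < 0.  Generic initial data
# converge to one of the eight nonzero steady states of Eq. (eq4.13):
# axis states with y^2 = beta/|alpha| and diagonal states with
# y^2 = beta/|alpha + sigma|.
import numpy as np

beta, alpha, sigma, dt = 0.5, -2.0, 1.0, 1.0e-3
rng = np.random.default_rng(1)

for _ in range(5):
    y1, y2 = rng.normal(scale=0.1, size=2)
    for _ in range(60000):                 # explicit Euler up to t = 60
        f1 = (beta + alpha*y1**2 + sigma*y2**2)*y1
        f2 = (beta + alpha*y2**2 + sigma*y1**2)*y2
        y1, y2 = y1 + dt*f1, y2 + dt*f2
    print(f"({y1:+.3f}, {y2:+.3f})")
\end{verbatim}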
\begin{figure}[htb]
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(58,65)
\thicklines
\qbezier(25.000,10.000)(33.284,10.000)
(39.142,15.858)
\qbezier(39.142,15.858)(45.000,21.716)
(45.000,30.000)
\qbezier(45.000,30.000)(45.000,38.284)
(39.142,44.142)
\qbezier(39.142,44.142)(33.284,50.000)
(25.000,50.000)
\qbezier(25.000,50.000)(16.716,50.000)
(10.858,44.142)
\qbezier(10.858,44.142)( 5.000,38.284)
( 5.000,30.000)
\qbezier( 5.000,30.000)( 5.000,21.716)
(10.858,15.858)
\qbezier(10.858,15.858)(16.716,10.000)
(25.000,10.000)
\put(44,22){\vector(1,1){0.5}}
\put(43.50,38){\vector(1,-1){0.5}}
\put(33.284,11){\vector(-1,-1){0.5}}
\put(33.284,48.5){\vector(-1,1){0.5}}
\put(17,11){\vector(1,-1){0.5}}
\put(17,48.65){\vector(1,1){0.5}}
\put(6.5,38){\vector(-1,-1){0.5}}
\put(6.25,22){\vector(-1,1){0.5}}
\put(25, 30){\vector(1,1){13}}
\put(25, 30){\vector(1,-1){13}}
\put(25, 30){\vector(-1,1){13}}
\put(25, 30){\vector(-1,-1){13}}
\put(25, 30){\vector(0,1){18}}
\put(25, 30){\vector(0,-1){18}}
\put(25, 30){\vector(1,0){18}}
\put(25, 30){\vector(-1,0){18}}
\put(25, 30){\line(1, 1){19}}
\put(25, 30){\line(-1, 1){19}}
\put(25, 30){\line(1, -1){19}}
\put(25, 30){\line(-1, -1){19}}
\put(25, 30){\line(0, 1){25}}
\put(25, 30){\line(1, 0){25}}
\put(25, 30){\line(0, -1){25}}
\put(25, 30){\line(-1, 0){25}}
\put(25.000,8.000){\vector(0,1){0.5}}
\put(41.142,13.858){\vector(-1,1){0.5}}
\put(47.000,30.000){\vector(-1,0){0.5}}
\put(41.142,46.142){\vector(-1,-1){0.5}}
\put(25.000,52.000){\vector(0,-1){0.5}}
\put(8.858,46.142){\vector(1,-1){0.5}}
\put( 3.000,30.000){\vector(1,0){0.5}}
\put(8.858,13.858){\vector(1,1){0.5}}
\put(45.000,30.000){\circle*{1}}
\put(47.000,32.000){$P_1$}
\put(25.000,50.000){\circle*{1}}
\put(20.000,52.000){$P_3$}
\put( 5.000,30.000){\circle*{1}}
\put( 0.000,26.000){$P_5$}
\put(25.000,10.000){\circle*{1}}
\put(28.000,6.000){$P_7$}
\put(39.142,44.142){\circle*{1}}
\put (38.142,48.142){$P_2$}
\put(10.858,44.142){\circle*{1}}
\put (4.858,43.142){$P_4$}
\put(10.858,15.858){\circle*{1}}
\put (10.858,9.858){$P_6$}
\put(39.142,15.858){\circle*{1}}
\put (42.142,15.858){$P_8$}
\end{picture}
\end{center}
\caption{Two-dimensional domain:
$S^1$-bifurcation with eight regular steady states.}
\label{fig.hetorbit}
\end{figure}
\section{Examples\label{s-examples}}
We illustrate the preceding results
with two examples from the theory
of pattern formation in complex
biological structures,
namely
the Schnakenberg equation~\cite{sch}
and
the Gierer--Meinhardt equation~\cite{g-m};
see also Ref.~\cite{h-o, k-m, mpf, ni-1, ni-2, pmm, t}.
\subsection{Schnakenberg Equation}
A classic model in biological pattern formation is due to
Schnakenberg~\cite{sch},
\begin{equation}
\begin{split}
U_t &= \gamma (a - U + U^2V) + \Delta U , \\
V_t &= \gamma (b - U^2 V) + d \Delta V ,
\Label{eq-UV-Schnak}
\end{split}
\end{equation}
on an open bounded set $\Omega \subset \mathbf R^n$
($n=1,2)$,
with Neumann boundary conditions
and given initial conditions.
The constants $a$ and $b$ are positive;
$\gamma$ and $d$ are positive parameters.
The system admits a uniform steady state,
\begin{equation}
\left( \begin{array}{c}
\bar{u} \\ \bar{v}
\end{array} \right)
=
\left( \begin{array}{c}
a+b \\ \frac{b}{(a+b)^2}
\end{array} \right) .
\Label{equil-Schnak}
\end{equation}
The Schnakenberg equation is
of the type~(\ref{eq-w}),
with
\[
B
=
\left( \begin{array}{cc}
\frac{b-a}{a+b} & (a+b)^2 \\
- \frac{2b}{a+b} & -(a+b)^2
\end{array} \right) ,
\]
and a nonlinear term $G_\lambda$
of the form~(\ref{def-G}), with
\begin{equation*}
\begin{split}
f_1 (u,v)
&=
\frac{b}{(a+b)^2} u^2 + 2(a+b)uv + u^2v , \\
g_1 (u,v)
&=
- f_1 (u,v) .
\end{split}
\end{equation*}
The conditions~(\ref{ineq1-fg}) and~(\ref{ineq2-fg})
are satisfied if
\[
a<b, \quad b-a<(a+b)^3 .
\]
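The algebra behind these conditions can be confirmed
with a short symbolic sketch
(ours, using \texttt{sympy} rather than \texttt{MAPLE}).
\begin{verbatim}
# Symbolic check of the Schnakenberg algebra: the uniform state
# (equil-Schnak) zeroes the kinetics, det(B) = (a+b)^2 > 0, and
# tr(B) < 0 reduces to b - a < (a+b)^3.
import sympy as sp

a, b = sp.symbols('a b', positive=True)
u, v = a + b, b/(a + b)**2                 # Eq. (equil-Schnak)
print(sp.simplify(a - u + u**2*v))         # -> 0
print(sp.simplify(b - u**2*v))             # -> 0

B = sp.Matrix([[(b - a)/(a + b),  (a + b)**2],
               [-2*b/(a + b),    -(a + b)**2]])
print(sp.factor(B.det()))                  # -> (a + b)**2
# tr(B) = (b-a)/(a+b) - (a+b)^2 < 0  <=>  b - a < (a+b)^3
print(sp.together(B.trace()))
\end{verbatim}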
\subsubsection{One-dimensional Domain}
$\Omega = (0,1)$.
An evaluation of the inner products
in Eq.~(\ref{alpha-1d})
with the \texttt{MAPLE}
software package yields
the expression
\[
\alpha(\lambda)
=
\frac{s_1 + s_2 + s_3}
{-2\gamma^2 b(a+b) + \left( \gamma \frac{b-a}{a+b} - \rho_1 \right)^2} , \quad
\lambda \in \Lambda_1 ,
\]
where
\begin{equation*}
\begin{split}
s_i
=\,
& \textstyle{\frac12} \gamma^4 (\gamma + \rho_1) (a+b)^4
((5\rho_1 + \beta_{2i})(a+b) + \gamma (2a-b)) \\
& \times
\frac {(\gamma (2a-b) + 2\rho_1 (a+b))(4\rho_1 + \gamma + \beta_{2i})}
{\beta_{2i} (2\gamma^2 b(a+b) - (\gamma \frac{b-a}{a+b} - \rho_2 - \beta_{2i})^2)} ,
\quad i=1,2 , \\
s_3
=\,
& \textstyle{\frac{3}{4}}
\gamma^3 (\gamma + \rho_1) (a+b)^3
\left(\gamma (b-a) - \rho_1 (a+b) \right) .
\end{split}
\end{equation*}
Note that
$\alpha(\lambda)$ depends only on $\gamma$
if $\lambda \in \Lambda_1$;
we use the short-hand notation
$\alpha(\gamma) \equiv \alpha(\gamma, d_1(\gamma))$.
\subsubsection{Two-dimensional Domain}
$\Omega = (0,1)^2$.
An evaluation of the inner products
in Eqs.~(\ref{alpha-2d}) and~(\ref{sigma-2d})
with the \texttt{MAPLE} software package
yields the expressions
\[
\alpha(\lambda)
=
\frac{s_1 + s_2 + s_3}
{-2\gamma^2 b(a+b) + \left( \gamma \frac{b-a}{a+b} - \rho_{01} \right)^2} , \quad
\lambda \in \Lambda_1 ,
\]
and
\[
\sigma(\lambda)
=
\frac{s^1 + s^2 + s^3}
{-2\gamma^2 b(a+b) + (\gamma \frac{b-a}{a+b} - \rho_{01})^2} , \quad
\lambda \in \Lambda_1 ,
\]
where
\begin{equation*}
\begin{split}
s_i
=\,
& \textstyle{\frac12} \gamma^4 (\gamma + \rho_1) (a+b)^4
((5 \rho_1 + \beta_{20i}) (a+b) + \gamma (2a-b)) \\
& \times
\frac{(\gamma (2a-b) + 2\rho_1(a+b)) (4\rho_1 + \gamma +\beta_{20i})}
{\beta_{20i} (2 \gamma^2 b (a+b) - (\gamma \frac{b-a}{b+a} - 4 \rho_{10}
- \beta_{20i})^2)} , \quad i=1,2 , \\
s_3
=\,
& \textstyle\frac34 \gamma^3 (\gamma+\rho_1) (a+b)^3
(\gamma (b-a) - \rho_1 (a+b) ) ; \\
s^i
=\,
& \textstyle{\frac12} \gamma^4 (a+b)^4 (\gamma + \rho_1)
((2a-b) \gamma + (\beta_{11i}+3\rho_1) (a+b)) \\
& \times
\frac{((2a-b) \gamma + 2(a+b) \rho_1) (\gamma +2\rho_1+\beta_{11i})}
{\beta_{11i}(2 \gamma^2 b (a+b)
- (\gamma \frac{b-a}{a+b} - 2 \rho_{01} - \beta_{11i})^2)} , \quad i=1,2 , \\
s^3
=\,
& \textstyle\frac32 \gamma^3 (\gamma + \rho_{01}) (a+b)^3
(\gamma (b-a) - \rho_{01} (a+b)) .
\end{split}
\end{equation*}
\subsubsection{Numerical Results}
Numerical results are given for
$a=\textstyle{\frac13}$, $b =\textstyle{\frac23}$
in Fig.~\ref{fig4},
and for
$a=2$ and $b=100$
in Fig.~\ref{fig5}.
In the former case, there is no bifurcation;
in the latter, there is a pitchfork bifurcation
at $\lambda_0=(\gamma_0,d(\gamma_0))$.
\begin{center}
\begin{figure}[htb]
\includegraphics[height=2.5in]{./sch_a_onethird.eps}
\caption{Schnakenberg equation with
$a=\textstyle{\frac13}$, $b =\textstyle{\frac23}$.
(i)~Positive branches of $\Lambda_1$
and $\Lambda_2$ in one and two dimensions;
(ii)~Graph of $\alpha$ in the one-dimensional case;
(iii)~Graph of $\alpha$ in the two-dimensional case;
(iv)~Graph of $\sigma$ in the two-dimensional case.
\label{fig4}}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[htb]
\includegraphics[height=2.5in]{./sch_a_2.eps}
\caption{Schnakenberg equation with
$a=2$, $b =100$.
(i)~Positive branches of $\Lambda_1$
and $\Lambda_2$ in one and two dimensions;
(ii)~Graph of $\alpha$ in the one-dimensional case;
(iii)~Graph of $\alpha$ in the two-dimensional case;
(iv)~Graph of $\sigma$ in the two-dimensional case.
\label{fig5}}
\end{figure}
\end{center}
\subsection{Gierer--Meinhardt Equation}
Another model for the formation of Turing patterns
was proposed by Gierer and Meinhardt~\cite{g-m},
\begin{equation}
\begin{split}
U_t &= \gamma (a - bU + U^2/V) + \Delta U , \\
V_t &= \gamma (U^2 - V) + d \Delta V,
\Label{eq-UV-GM}
\end{split}
\end{equation}
on an open bounded set $\Omega \subset \mathbf R^n$ ($n=1,2$),
with Neumann boundary conditions
and given initial conditions.
The constants $a$ and $b$ are positive.
The system admits a steady state,
\begin{equation}
\left( \begin{array}{c}
\bar{u} \\ \bar{v}
\end{array} \right)
=
\left( \begin{array}{c}
\frac{a+1}{b} \\ \frac{(a+1)^2}{b^2}
\end{array} \right) .
\Label{equil-GM}
\end{equation}
The Gierer--Meinhardt system is an equation
of the type~(\ref{eq-w}) with
\[
B
=
\left( \begin{array}{cc}
\frac{(1-a)b}{1+a} & - \frac{b^2}{(1+a)^2} \\
\frac{2(1+a)}{b} & -1
\end{array} \right) .
\]
The nonlinear term $G_\lambda$ is obtained
by expanding around the steady-state solution,
\begin{equation*}
\begin{split}
f_1 (u,v)
&=
\frac{b^2}{(1+a)^2}
\left(
u^2 - \frac{2b}{1+a} uv \right. \\
&\left.\hspace{2em}+ \frac{b^2}{(1+a)^2}
\left(
v^2 - u^2v + \frac{2b}{1+a} uv^2 + \cdots
\right)
\right) , \\
g_1 (u,v)
&= u^2 .
\end{split}
\end{equation*}
The conditions~(\ref{ineq1-fg}) and~(\ref{ineq2-fg})
are satisfied if
\[
a<1, \quad b<\frac{1+a}{1-a} .
\]
\subsubsection{One-dimensional Domain}
$\Omega=(0,1)$.
An evaluation of the inner products
in Eq.~(\ref{alpha-1d})
with the \texttt{MAPLE}
software package yields
the expression
\[
\alpha(\lambda)
=
\frac{s_1+s_2+s_3}{-\gamma^2 \frac{2b}{1+a}
+ \left( \gamma \frac{(1-a)b}{1+a} - \rho_1 \right)^2} ,
\]
where
\begin{equation*}
\begin{split}
s_i
=\,
& - \textstyle{\frac12} \gamma^4 b^6 \\
&\times
\frac {\gamma^2 (2a-1)b^2 + \gamma b (\rho_1 +2 a ( \beta_{2i} + 5 \rho_1))
+2 \rho_1 (1+a) ( \beta_{2i} + 4 \rho_1)}
{(1+a)^8} \\
&\times
\frac{\gamma^2 (2a-1)b^2 + \gamma b ( 4 \rho_1(1+a)+ \beta_{2i})
+2 \rho_1^2 (1+a)}
{\beta_{2i} \left( - \gamma^2 \frac{2b}{1+a}
+ \left( \gamma \frac{(1-a)b}{1+a} - \rho_1 - \beta_{2i} \right)^2
\right)} , \quad i=1,2, \\
s_3
=\,
& {\textstyle{\frac32}}
\frac{\gamma^2 b^5 (\gamma (1-a)b - \rho_1(1+a)) (\gamma ab +\rho_1 (1+a))^2}
{(1+a)^8} .
\end{split}
\end{equation*}
\subsubsection{Two-dimensional Domain}
$\Omega = (0,1)^2$.
An evaluation of the inner products
in Eqs.~(\ref{alpha-2d}) and~(\ref{sigma-2d})
with the \texttt{MAPLE} software package
yields the expressions
\[
\alpha(\lambda)
=
\frac{s_1+s_2+s_3}{-\gamma^2 \frac{2b}{1+a}
+ \left( \gamma \frac{(1-a)b}{1+a} - \rho_{10} \right)^2} ,
\]
and
\[
\sigma(\lambda)
=
\frac{s^1+s^2+s^3}{-\gamma^2 \frac{2b}{1+a}
+ \left( \gamma \frac{(1-a)b}{1+a} - \rho_{10} \right)^2} ,
\]
where
\begin{equation*}
\begin{split}
s_i
=\,
& - \textstyle{\frac12} \gamma^4 b^6 \\
&\times
\frac {\gamma^2 (2a-1) b^2
+ \gamma b (\rho_{10} + 2a (\beta_{20i}+5\rho_{10}))
+2\rho_{10} (1+a) (\beta_{20i}+4\rho_{10})} {(1+a)^8} \\
&\times
\frac {\gamma^2 (2a-1) b^2
+ \gamma b (4\rho_{10} (1+a) + \beta_{20i}) +2\rho_{10}^2 (1+a)}
{\beta_{20i}
\left( - \gamma^2 \frac{2b}{1+a}
+ \left( \gamma \frac{(1-a)b}{1+a} - \rho_{10} - \beta_{20i} \right)^2
\right)} , \quad i=1,2, \\
s_3
=\,
& {\textstyle\frac32}
\frac {\gamma^2 b^5 ( \gamma (1-a)b - \rho_{10} (1+a))
(\gamma ab +\rho_{10} (1+a))^2}{ (1+a)^8} ; \\
s^i
=\,
& -\textstyle{\frac12} \gamma^4 b^6 \\
&\times
\frac{\gamma^2 (2a-1)b^2 + \gamma b (\rho_{10}+2a (\beta_{11i}+3\rho_{10}))
+ 2\rho_{10} (1+a) (\beta_{11i}+2\rho_{10})}
{(1+a)^8} \\
&\times
\frac{\gamma^2 (2a-1)b^2 + \gamma b (\beta_{11i} + 2\rho_{10} (2a+1) )
+2\rho_{10}^2 (1+a)}
{\beta_{11i} \left( - \gamma^2 \frac{2b}{1+a}
+ \left( \gamma \frac{(1-a)b}{1+a} - \rho_{10} - \beta_{11i} \right)^2
\right)} , \quad i=1,2 , \\
s^3
=\,
& 3 \frac {\gamma^2 b^5 (\gamma (1-a)b - \rho_{10} (1+a))
(\gamma ab +\rho_{10} (1+a))^2}{(1+a)^8} .
\end{split}
\end{equation*}
\subsubsection{Numerical Results}
Numerical results are given
for
$a=\textstyle{\frac12}$, $b =1$
(Fig.~\ref{fig6})
and
$a=\textstyle{\frac13}$ and $b=\textstyle{\frac23}$
(Fig.~\ref{fig7}).
In the former case, there is no bifurcation;
in the latter, there is a bifurcation
at $\lambda_0=(\gamma_0,d(\gamma_0))$.
\begin{center}
\begin{figure}[htb]
\includegraphics[height=2.5in]{./gm_a_onehalf.eps}
\caption{Gierer--Meinhardt equation with $a=\frac12$, $b=1$.
(i)~Positive branches of $\Lambda_1$ and $\Lambda_2$ in one and two dimensions;
(ii)~Graph of $\alpha$ in the one-dimensional case;
(iii)~Graph of $\alpha$ in the two-dimensional case;
(iv)~Graph of $\sigma$ in the two-dimensional case.}
\label{fig6}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[htb]
\includegraphics[height=2.5in]{./gm_a_onethird.eps}
\caption{Gierer--Meinhardt equation with $a=\textstyle{\frac13}$, $b=\textstyle{\frac23}$.
(i)~Positive branches of $\Lambda_1$ and $\Lambda_2$ in one and two dimensions;
(ii)~Graph of $\alpha$ in the one-dimensional case;
(iii)~Graph of $\alpha$ in the two-dimensional case;
(iv)~Graph of $\sigma$ in the two-dimensional case.}
\label{fig7}
\end{figure}
\end{center}
\section{Conclusions\label{s-conclusions}}
In this paper we considered the evolution
of an activator--inhibitor system
consisting of two morphogens
on a bounded domain subject to
no-flux boundary conditions.
Assuming that the system admits
a uniform steady-state solution,
which is stable in the absence of diffusion,
we focused on solutions that bifurcate
from this uniform steady-state solution.
The bifurcation parameter $\lambda$
represented both $\gamma$,
the ratio of the characteristic times
for chemical reaction and diffusion,
and $d$, the ratio of the diffusion coefficients
of the two competing species (activator
and inhibitor).
We showed that, for such a system,
there exists a critical curve $\Lambda_1$
in parameter space such that,
as $\lambda$ crosses $\Lambda_1$,
a bifurcation occurs (Lemma~\ref{l-eigenvalues}).
While a linear analysis around the uniform
steady state suffices to obtain information
about the \emph{formation} of patterns,
a nonlinear analysis is needed to gain insight
into the long-time \emph{asymptotic behavior}
of the solutions after bifurcation.
This issue is intimately connected
with the long-term persistence of patterns.
In this paper we used the theory
of attractor bifurcation,
in combination with a
center-manifold reduction,
to analyze the long-time dynamics
of bifurcated solutions.
We considered two cases:
diffusion on a (bounded) interval
or a (non-square) rectangle,
and diffusion on a square domain.
In the former case, we showed that
a bifurcation occurs as $\lambda$
crosses a critical curve $\Lambda_1$,
and the bifurcation is
a pitchfork bifurcation.
Theorem~\ref{th-1d} gives an explicit condition
for the existence of an attractor.
The attractor consists of exactly two steady-state points,
each with its own basin of attraction.
The two steady states correspond to patterns
that differ only in phase;
which of them is eventually realized
depends on the initial conditions.
Essentially the same conclusion holds
in the case of diffusion on a rectangular
(that is, non-square) domain;
in particular, roll patterns emerge
as a result of the bifurcation.
In the case of diffusion on a square domain,
the dynamics are qualitatively different.
Theorem~\ref{th-2d} gives explicit conditions
for the existence of an $S^1$-bifurcation.
The bifurcated object consists of either
an infinite number of steady states,
or exactly eight regular steady-state points
with heteroclinic orbits connecting them.
Thus, two types of patterns may arise;
for example, in the formation of animal coat patterns,
we might expect stripe patterns or spot patterns.
Thus, in both the one- and two-dimensional case
we have given a complete characterization
of the bifurcated attractor and, therefore,
of the long-time asymptotic dynamics
of the bifurcated objects.
\section{Introduction}
\label{sec:intro}
An effective numerical representation of the textual content is crucial for natural language processing models, in order to understand the underlying relational patterns among words and discover patterns in natural languages. For resource-rich languages like English, numerous pre-trained models as well as the required materials to develop an embedding system are readily available. On the contrary, for resource-poor languages such as Sinhala, neither of those options could be easily found \cite{de2019survey}. Even the data sets that are available for training often fail to meet adequate standards \cite{caswell2021quality}. Thus, discovering a convenient methodology to develop embeddings for text would be a great step forward in the NLP domain for the Sinhala language.
Sinhala, also known as Sinhalese, is an Indo-Aryan language used within Sri Lanka \cite{kanduboda2011role}. The primary user base of this language is the Sinhalese ethnic group of the country. In total, 17 million people use Sinhala as their first language while 2 million people use it as a second language \cite{de2019survey}. Furthermore, Sinhala is structurally different from English: English uses a subject-verb-object structure whereas Sinhala uses a subject-object-verb structure, as shown in figure~\ref{fig:grammar}. Thus, most pre-trained embedding models for English may not be effective with Sinhala.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.35\textwidth]{Images/grammar_structure.png}
\caption{SVO grammar structure of English and SOV grammar structure of Sinhala}
\label{fig:grammar}
\end{figure}
This study therefore is focused on discovering an effective embedding system for Sinhala text that provides reasonable results when used in training deep learning models. Sentiment analysis with Facebook data is utilized as the use case for the study.
Upon considering common forms of vector representations of textual content, bag of words, word embeddings, and sentence embeddings are three of the leading methodologies at present. Word embeddings have been observed to surpass the performance of bag of words for large enough data sets \cite{rudkowsky2018more}, since the bag-of-words approach often suffers from problems such as disregarding the grammatical structure of the text, a large vocabulary dimension, and sparse representation \cite{le2014distributed,el2016enhancement}. In order to tackle the above challenges, word embeddings can be used. Since word embeddings capture the similarities among ingrained sentiments in words and represent them in the vector space, they tend to increase the accuracy of classification models \cite{goldberg2016primer}.
However, one of the major weaknesses of word embedding models is that they fail to capture syntax and polysemy; i.e. the presence of multiple possible meanings for a certain word or a phrase \cite{mu2016geometry}. In order to overcome these obstacles and also to achieve finer granularity in the embedding, sentence embeddings are used. The idea is to test common Euclidean space word embedding techniques such as fastText \cite{bojanowski2017enriching,joulin2016bag}, Word2vec \cite{mikolov2013efficient}, and GloVe \cite{pennington2014glove} with sentence embedding techniques. The pooling methods (i.e. max pooling, min pooling and avg pooling) will be considered as the baseline methods for the test. More advanced models such as the sequence to sequence model (i.e. seq2seq model) \cite{sutskever2014sequence} and the modified version of the sequence to sequence model introduced by the work of~\newcite{cho2014learning} with GRU \cite{chung2014empirical} and LSTM \cite{hochreiter1997long} recurrent neural network units will be tested against the pooling methods. Furthermore, the addition of the attention mechanism \cite{vaswani2017attention} into the sequence to sequence model will also be tested.
Most models created using word and sentence embeddings are based on the Euclidean space. Though this vector space is commonly used, it poses significant limitations when representing complex structures \cite{nickel2017poincare}. Using the hyperbolic space provides a plausible solution for such instances. The hyperbolic space is a negatively-curved, non-Euclidean space. It is advantageous for embedding trees as the circumference of a circle grows exponentially with the radius. The usage of hyperbolic embedding is still a novel research area as it was only introduced recently, through the work of~\newcite{nickel2017poincare,chamberlain2017neural,sala2018representation}. The work of~\newcite{lu2019learning,lu2020exploiting} highlight the importance of using the hyperbolic space to improve the quality of embeddings in a practical context within the medical domain. However, research done on the applicability of hyperbolic embeddings in different arenas is highly limited. Thus, the full potential of the hyperbolic space is yet to be fully uncovered.
Through this paper, we test the effectiveness of a set of two-tiered text representation models that combine various word embeddings as the lower tier with sentence embeddings as the upper tier.
\section{Related Work}
\label{sec:related}
The sequence to sequence model introduced by the work of~\newcite{sutskever2014sequence} is vital in this research as it is one of the core models in developing sentence embeddings. Though originally developed for translation purposes, the model has undergone multiple modifications depending on the context, such as description generation for images \cite{karpathy2015deep}, phrase representation \cite{cho2014learning}, attention models \cite{vaswani2017attention} and BERT models \cite{devlin2018bert}, thus proving the potential it holds in the machine learning area.
The work of~\newcite{nickel2017poincare} introduces and explores the potential of hyperbolic embedding by using an n-dimensional Poincar\'e ball. The research work compares hyperbolic and Euclidean embeddings for a complex latent data structure and comes to the conclusion that hyperbolic embedding surpasses Euclidean embedding in effectiveness. Inspired by the above results, both~\newcite{leimeister2018skip} and~\newcite{dhingra2018embedding} have extended the methodology introduced by~\newcite{nickel2017poincare}. \newcite{leimeister2018skip} have developed a hyperbolic word embedding using the skip-gram negative sampling architecture taken from Word2vec. In lower embedding dimensions, the developed model performs better in comparison to its Euclidean counterpart. The work of~\newcite{dhingra2018embedding} uses re-parameterization to extend the Poincar\'e embedding, in order to learn the embedding of arbitrarily parameterized objects. The framework thus created is used to develop word and sentence embeddings. In our research, we will be following in the footsteps of the above papers.
When considering the usage of hyperbolic embeddings in a practical context, the work of~\newcite{lu2019learning,lu2020exploiting} can be examined. The research by~\newcite{lu2019learning} improves the state-of-the-art model used to predict ICU (intensive care unit) re-admissions and surpasses the accepted benchmark used to predict in-hospital mortality using hyperbolic embedding of Electronic Health Records, while the work of~\newcite{lu2020exploiting} introduces a novel network embedding method which is capable of maintaining the consistency of the node representation across two views of networks, thus emphasizing the capabilities of hyperbolic embeddings. To the best of our knowledge, hyperbolic embeddings have not been previously applied to Sinhala content. Therefore, this research may reveal novel insight regarding hyperbolic embedding and its effectivity in sentiment analysis.
In the research work of~\newcite{senevirathne2020sentiment}, the capsule-B model \cite{zhao2018investigating} is crowned as the state-of-the-art model for Sinhala sentiment analysis. In that work, a set of deep learning models is tested for the ability to predict the sentiment of Sinhala news comments. The GRU \cite{chung2014empirical} model with a CNN \cite{wang2016combination} layer, which is used for testing each embedding in this work, is taken from the aforementioned research. Furthermore, the work of~\newcite{weeraprameshwara2022sentiment} has extended the idea and tested the same set of deep learning models, with the addition of the sentiment analysis models introduced in the work of~\newcite{jayawickrama2021seeking}, using the Facebook data set which is used in this research work. According to their results, the 3 layer stacked BiLSTM model \cite{zhou2019sentiment} stands out as the state-of-the-art model.
\section{Methodology}
\label{sec:meth}
In order to test the feasibility of two-tiered word representation as a means of representing Sinhala text in the sentiment analysis domain, a series of experiments were conducted as described in the following subsections.
\subsection{Data Set}
\label{sec:data}
The data set used for the project is extracted from the work of~\newcite{wijeratne2020sinhala}, which contains 1,820,930 Facebook posts from 533 Facebook pages popular in Sri Lanka over the time window of 2010 to 2020. The research work has produced two cleaned corpora and a set of stop words for the given context. The larger corpus among them consists of a total of 28 to 29 million words. The data set covers a wide range of subjects such as politics, media, and celebrities. Table~\ref{Table:Fields} illustrates the fields taken from the data set for the embedding development, model training and testing phases.
\begin{table}[!htb]
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Field Name} & \textbf{Total Count} & \textbf{Percentage(\%)} \\
\hline
Likes & 312,282,979 & 93.58\\
Loves & 10,637,722 & 3.19\\
Wow & 1,633,255 & 0.49 \\
Haha & 5,377,815 & 1.61\\
Sad & 2,611,908 & 0.78\\
Angry & 1,158,182 & 0.35\\
Thankful & 12,933 & 0.00\\
\hline
\end{tabular}
\caption{The counts and percentages of the reactions in the Facebook data set}
\label{Table:Fields}
\end{table}
\subsection{Preprocessing}
Even though there are two preprocessed corpora introduced through the work of~\newcite{wijeratne2020sinhala}, the raw data set was used for this research with the objective of preprocessing it to suit our requirements. As such, numerical content, URLs, email addresses, hashtags, words in languages other than Sinhala and English, and excessive spaces were removed from the text; a sketch of this cleaning pipeline is given below. While the focus of this study is colloquial Sinhala, English is included in the data set as the two languages are often codemixed in colloquial use; codemixing of Sinhala with other languages is much rarer. Furthermore, stop words were removed from the text, as recommended by~\newcite{wijeratne2020sinhala}. Posts left with no textual content after this preprocessing, as well as posts with no reaction annotations, were also removed as they yield no value in the annotation stage. The final preprocessed data set consists of Sinhala, English, and Sinhala-English code mixed content, adding up to a total of 542,871 Facebook posts consisting of 8,605,849 words.
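For concreteness, the cleaning steps can be sketched as follows. The exact regular expressions are assumptions on our part, as the text above fixes only the categories of removed content; Sinhala occupies the Unicode block U+0D80--U+0DFF.
\begin{verbatim}
# Illustrative preprocessing pipeline; the patterns are assumptions,
# only the categories of removed content are fixed by the paper.
import re

STOP_WORDS = set()   # the stop-word list of Wijeratne & de Silva goes here

def preprocess(text):
    text = re.sub(r'https?://\S+|www\.\S+', ' ', text)  # URLs
    text = re.sub(r'\S+@\S+', ' ', text)                # email addresses
    text = re.sub(r'#\S+', ' ', text)                   # hashtags
    text = re.sub(r'\d+', ' ', text)                    # numerical content
    # keep Sinhala (U+0D80-U+0DFF), basic Latin letters and whitespace
    text = re.sub(r'[^\u0D80-\u0DFFA-Za-z\s]', ' ', text)
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return ' '.join(tokens)                 # excess spaces collapsed
\end{verbatim}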
\subsection{Annotation}
\label{sec:anno}
Since the procedure followed in the model development is supervised learning, the data set needed to be annotated \cite{schapire2012foundations}. It is quite a considerable challenge to obtain sufficiently large annotated data sets for resource-poor languages like Sinhala thus Facebook data set is ideal for the given scenario as the Facebook posts are pre-annotated by Facebook users using Facebook reactions. Though this is not an expert annotation, it can be considered as an effective means of community annotation as the collective opinion of a large number of Facebook users is represented by the reaction annotation \cite{pool2016distant,freeman2020measuring,graziani2019jointly,jayawickrama2021seeking}.
A binary classification method, which was introduced through the work of~\newcite{senevirathne2020sentiment} and further improved for Facebook data by~\newcite{weeraprameshwara2022sentiment}, is used in this research as the annotation schema, as illustrated in figure~\ref{fig:reactions}. Here, the Facebook reactions are divided into two classes: positive reactions and negative reactions. The reactions \textit{love} and \textit{wow} are considered positive reactions while \textit{sad} and \textit{angry} are classified as negative reactions; a sketch of the resulting labelling rule is given after the figure. The reactions \textit{like} and \textit{thankful} have been excluded as they are outliers in the data set with respect to the other reactions. The \textit{like} reaction is the de facto reaction given by users and does not yield a valid sentiment. The \textit{thankful} reaction appeared only during a short time period, making its presence insignificant compared to the other reactions (roughly 0.004\% of the total reaction count). The \textit{haha} reaction is also excluded due to the contradicting nature of its use cases \cite{jayawickrama2021seeking}. The \textit{care} reaction is not included in this data set as it was first introduced to the platform in 2020 \cite{lyles2022}, after the creation of the data set.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.35\textwidth]{Images/reactions.png}
\caption{Reaction categorization for the annotation}
\label{fig:reactions}
\end{figure}
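Concretely, a post-level label can be derived from the reaction counts by a majority vote between the two groups; the field names and the tie-breaking rule in the sketch below are assumptions on our part.
\begin{verbatim}
# Hypothetical labelling rule for the binary schema: the larger of the
# two reaction groups decides the label; ties are dropped (assumption).
def annotate(post):
    positive = post['loves'] + post['wow']       # positive group
    negative = post['sad'] + post['angry']       # negative group
    if positive == negative:                     # like/haha/thankful ignored
        return None
    return 'positive' if positive > negative else 'negative'

print(annotate({'loves': 12, 'wow': 3, 'sad': 1, 'angry': 0}))  # positive
\end{verbatim}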
\subsection{Word Embeddings}
\label{sec:wordemb}
The final vector representation of Facebook posts consists of two major elements: word embeddings and sentence embeddings.
Word embeddings are used both as the first tier of the two-tiered embedding systems and as basic one-tiered embedding systems that serve as a benchmark against which the performance of the two-tiered systems is compared. The performance of both Euclidean and hyperbolic word embeddings has thus been evaluated in this research.
\subsubsection{Euclidean Word Embeddings}
\label{sec:euemb}
For the purpose of representing words in the Euclidean space; fastText, Word2vec, and GloVe word embedding techniques were utilized. Word vectors consisting of 200 dimensions were created using each of the aforementioned models and a window size of 40 was picked based on the work of~\newcite{senevirathne2020sentiment,weeraprameshwara2022sentiment} which precedes this research.
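With \texttt{gensim} (version 4 or later) this setup can be sketched as below; the toy corpus and the \texttt{min\_count} value are stand-ins, since only the dimension and the window size are fixed above. GloVe has no \texttt{gensim} trainer and is typically trained with its reference implementation instead.
\begin{verbatim}
# 200-dimensional word vectors with window size 40 (gensim >= 4).
# `corpus` is a toy stand-in for the tokenised Facebook posts;
# min_count=1 is set only so this miniature example runs.
from gensim.models import Word2Vec, FastText

corpus = [["good", "morning", "sri", "lanka"], ["good", "night"]]
w2v = Word2Vec(sentences=corpus, vector_size=200, window=40, min_count=1)
ft  = FastText(sentences=corpus, vector_size=200, window=40, min_count=1)
print(w2v.wv["good"].shape, ft.wv["good"].shape)   # (200,) (200,)
\end{verbatim}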
\subsubsection{Hyperbolic Embeddings}
\label{sec:hypemb}
The hyperbolic space exhibits different mathematical properties in comparison to the Euclidean space. Due to its inherent properties, the Euclidean space struggles to model a latent hierarchy. This issue could be addressed by mapping the embedding into a higher dimension \cite{nickel2017poincare}. However, this may lead to sparse data mapping, causing the curse of dimensionality to affect the performance. This may induce adverse effects such as causing the machine learning model to overfit the data and increasing the memory capacity required for computations and storage.
The hyperbolic space has caught the attention of researchers as a plausible solution to such issues encountered in using the Euclidean space for modeling complex structures. The unique feature of this mathematical model is that the space covered by an n-ball in an n-dimensional hyperbolic space increases exponentially with the radius. In contrast to the Euclidean space where the space covered by an n-ball remains restricted by the $n^{th}$ power of the radius, the hyperbolic space could easily handle complex models such as tree-like structures within a limited dimensionality.
The distance ($D$) between two vectors ($i$ and $j$) in the hyperbolic space can be calculated as shown in equation~\ref{Eq:hdis}.
\begin{equation}
D_{(i,j)} = \arccosh{(1+\frac{2\lvert\lvert i-j\rvert\rvert^2}{(1-\lvert\lvert i\rvert\rvert^2)(1-\lvert\lvert j\rvert\rvert^2)})}
\label{Eq:hdis}
\end{equation}
Since both the circumference and the area of a hyperbolic circle grow exponentially with the radius, the hyperbolic space has the capability to effectively store a complex latent hierarchy of data using a much lower number of dimensions than the Euclidean space would require to store the exact same structure.
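The distance function of equation~\ref{Eq:hdis} translates directly into code; note that all vectors must lie strictly inside the unit ball.
\begin{verbatim}
# Poincare-ball distance of Eq. (hdis); i and j must have norm < 1.
import numpy as np

def poincare_distance(i, j):
    i, j = np.asarray(i, dtype=float), np.asarray(j, dtype=float)
    num = 2.0 * np.sum((i - j)**2)
    den = (1.0 - np.sum(i**2)) * (1.0 - np.sum(j**2))
    return float(np.arccosh(1.0 + num/den))

print(poincare_distance([0.1, 0.0], [0.2, 0.0]))   # small, near the origin
print(poincare_distance([0.9, 0.0], [0.0, 0.9]))   # large, near the boundary
\end{verbatim}
Distances blow up towards the boundary, which is precisely what gives the ball room for exponentially growing, tree-like structures.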
In order to create hyperbolic word embeddings, the data set should be reformed in such a manner that the syntactic structure of the data is highlighted. However, an adequate language parser for Sinhala does not currently exist \cite{de2019survey}. Using parsers dedicated to the English language is also unfitting since the underlying grammatical structure of Sinhala is significantly different from that of English. Furthermore, for the codemixed colloquial data present in this data set, the grammatical structures of both Sinhala and English would have to be taken into consideration. Therefore, the parsing mechanism shown in figure~\ref{fig:sent} is used to generate word tokens; a sketch of the resulting relation-building step is given after the figure. A total of 8,605,849 tokens have been thus generated.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.35\textwidth]{Images/sentence_cropped.png}
\caption{Examples of parsing mechanism used for the hyperbolic embeddings where each word is matched to the sentence}
\label{fig:sent}
\end{figure}
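With \texttt{gensim}'s \texttt{PoincareModel}, this step amounts to feeding (word, sentence) relation pairs built from the parsing scheme above; the toy tokens and the \texttt{negative} setting below are illustrative.
\begin{verbatim}
# Hyperbolic training sketch: every (word, sentence-id) pair from the
# parsing scheme of fig:sent becomes a relation.  Toy data; negative=2
# is kept small only so this miniature example can sample.
from gensim.models.poincare import PoincareModel

sentences = [["api", "gedara", "yamu"], ["api", "bath", "kamu"]]
relations = [(w, f"sent_{k}")
             for k, toks in enumerate(sentences) for w in toks]
model = PoincareModel(relations, size=200, negative=2)
model.train(epochs=50)
print(model.kv["api"].shape)   # (200,) hyperbolic word vector
\end{verbatim}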
The two-dimensional illustration of the Poincar\'e ball after training with the Facebook data set is shown in figure~\ref{fig:poincare}. Each node in the figure represents a word and each edge represents a connection between words. Here, for illustration purposes, only a thousand nodes are shown and the embedding is projected from 200 dimensions down to 2.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.32\textwidth]{Images/poincare.png}
\caption{Poincar\'e word embedding done on Facebook data set}
\label{fig:poincare}
\end{figure}
The clustering of semantically related words in the Poincar\'e embedding is shown in figure~\ref{fig:poinwords}. A set of words related to the sport of cricket is clustered in the top left corner while a set of Sinhala words related to Christianity is clustered in the bottom left. A cluster representing news-related terms is formed in the bottom right corner. Given this evidence, we can safely assume that the hyperbolic space has the capability to store a complex latent hierarchy such as the semantic relations of words.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.495\textwidth]{Images/poincare_words.png}
\caption{Word clustering in the Poincar\'e embedding. The meaning of the Sinhala words are given in the brackets}
\label{fig:poinwords}
\end{figure}
\subsection{Sentence Embeddings}
\label{sec:sentemb}
Sentence embeddings are used as the second tier of the two-tiered embedding models. Basic pooling methods as well as the sequence to sequence model are used to generate sentence embeddings by using the word embedding of each word in a sentence. For both Euclidean space and hyperbolic space embeddings, the sentence embeddings are generated in a similar fashion as described below.
\subsubsection{Pooling}
\label{sec:pool}
Sentence embeddings have been created with three different pooling mechanisms for each of the fastText, Word2vec, GloVe, and hyperbolic word embeddings; namely, max pooling, min pooling, and average (avg) pooling. The pooling embeddings are considered baseline sentence embeddings against which the performance of the sequence to sequence model is compared.
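The following NumPy sketch illustrates the three operations on a sentence represented as a matrix of word vectors (the function name and shapes are our own illustrative choices):
\begin{verbatim}
import numpy as np

def pool_sentence(word_vectors: np.ndarray, mode: str = "avg") -> np.ndarray:
    # word_vectors: (num_words, dim) matrix; returns one (dim,) sentence vector.
    if mode == "max":
        return word_vectors.max(axis=0)
    if mode == "min":
        return word_vectors.min(axis=0)
    return word_vectors.mean(axis=0)  # avg pooling
\end{verbatim}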
\subsubsection{Sequence to Sequence Model}
\label{sec:seq2seq}
This sentence embedding mechanism follows the sequence to sequence model introduced through the work of~\newcite{sutskever2014sequence}, referred to as the seq2seq model from here onwards.
The data set is randomly shuffled and a subset consisting of 400,000 data rows is used for training the encoder and decoder units.
In the original model, the encoder accepts a set of vectors consisting of the word embedding of each word in a sentence, followed by the $<EOS>$ token, as input and returns a context vector as the output. In order to train the model, the decoder is fed with the context vector from the encoder, with the objective of producing the $<SOS>$ token followed by the translated sentence as the final output. For our research, the output expected from the decoder is the same sentence that was input into the encoder. For a given sentence, the word embedding of each word in the sentence is input into the Recurrent Neural Network (RNN) encoder, whose hidden layer has the same dimensionality as the word embedding. Since the expected output from the RNN decoder is the same sentence, the context vector (the output of the encoder) can be considered the sentence embedding that we are seeking.
Different sentence embeddings are thus generated using both Euclidean and hyperbolic word embeddings as inputs to the seq2seq model. For each type of word embeddings, the RNNs inside the encoder and decoder are also modified to generate different sentence embeddings. Here, GRU \cite{chung2014empirical}, LSTM \cite{hochreiter1997long}, and simple RNN models are used. The architecture of the GRU seq2seq model has been inspired by the model introduced through the work of~\newcite{cho2014learning}.
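As a minimal sketch of how the context vector is obtained, the following PyTorch fragment shows a GRU encoder whose final hidden state serves as the sentence embedding; it is an illustration under our assumptions (single layer, default initialization), not the exact training setup:
\begin{verbatim}
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, emb_dim: int = 200):
        super().__init__()
        # Hidden size equals the word embedding dimension, as described above.
        self.rnn = nn.GRU(input_size=emb_dim, hidden_size=emb_dim,
                          batch_first=True)

    def forward(self, word_embeddings: torch.Tensor) -> torch.Tensor:
        # word_embeddings: (batch, seq_len, emb_dim)
        _, hidden = self.rnn(word_embeddings)
        return hidden.squeeze(0)  # context vector = sentence embedding
\end{verbatim}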
Furthermore, two different decoder structures have been used to train the seq2seq model: a simple decoder which functions as explained above, and a decoder with the attention mechanism introduced in the work of~\newcite{vaswani2017attention}. Both models use a teacher forcing ratio of 0.5 with the objective of performing better at the prediction task \cite{lamb2016professor}.
The squared L2 norm between the predicted and the actual word embedding sequences is used as the loss function. Equation~\ref{Eq:predict} shows the value ($PV_i$) assigned to the $i$th predicted sentence, obtained by summing over all the word embeddings in the predicted sequence. The symbol $L$ denotes the length of the longest sentence, which may vary for the selected data set, and $PWE_{j,k}$ is the value of the $k$th dimension of the 200-dimensional embedding of the $j$th predicted word. Equation~\ref{Eq:actual} calculates the corresponding value ($TV_i$) of the $i$th true word embedding sequence, where $TWE_{j,k}$ denotes the true word embeddings. Then in equation~\ref{Eq:eloss}, the squared L2 norm ($Err$) is calculated, where $n$ denotes the number of data items used. The same procedure is followed for both Euclidean and hyperbolic space embeddings.
\begin{equation}
PV_i={\sum_{j=1}^{L}\sum_{k=1}^{200}PWE_{j,k}}
\label{Eq:predict}
\end{equation}
\begin{equation}
TV_i={\sum_{j=1}^{L}\sum_{k=1}^{200}TWE_{j,k}}
\label{Eq:actual}
\end{equation}
\begin{equation}
Err=\frac{1}{n}{\sum_{i=1}^{n}(PV_i - TV_i)^2}
\label{Eq:eloss}
\end{equation}
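A direct NumPy transcription of this loss, assuming each predicted and true sequence is given as an $L\times 200$ matrix padded to the longest sentence, could read as follows:
\begin{verbatim}
import numpy as np

def sequence_value(word_embs: np.ndarray) -> float:
    # Sum over all words and all 200 dimensions, as in the equations above.
    return float(word_embs.sum())

def squared_l2_loss(predicted, actual) -> float:
    # Mean squared difference of per-sentence values over n data items.
    pv = np.array([sequence_value(p) for p in predicted])
    tv = np.array([sequence_value(t) for t in actual])
    return float(np.mean((pv - tv) ** 2))
\end{verbatim}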
\subsection{Testing}
\label{sec:test}
To the extent of our knowledge, there does not exist a well-known or effective benchmark to test the performance of Sinhala sentence embeddings. Therefore, the GRU RNN model with a CNN layer introduced by the work of~\newcite{chung2014empirical,senevirathne2020sentiment} is used to test each embedding. The function of this model is to predict the sentimental reaction of Facebook users to Facebook posts and thus classify each post as either positive or negative. The classification of the Facebook posts was done as explained in section~\ref{sec:anno}.
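The exact classifier architecture follows the cited works; purely to illustrate its general shape, a sketch of a GRU model with a convolutional layer and a binary sentiment head is given below (all layer sizes here are assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

class GRUCNNClassifier(nn.Module):
    def __init__(self, emb_dim: int = 200, channels: int = 64,
                 hidden: int = 128):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.gru = nn.GRU(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, emb_dim); convolve along the time axis.
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        _, hn = self.gru(h)
        return torch.sigmoid(self.head(hn.squeeze(0)))  # P(positive)
\end{verbatim}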
As mentioned above, owing to the scarcity of a sufficiently large data set for the Sinhala language to train deep learning models, the same Facebook data set is used for model training. However, a different set of Facebook posts is used in order to avoid repetition of the data, and a total of 200,000 posts were used for training. The holdout method was used, with the data set split in an 8:1:1 ratio into train, validation, and test sets. Tests were run multiple times and the average performance measures were recorded.
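For reference, such an 8:1:1 holdout split can be realized by first carving out 20\% of the shuffled data and then halving that portion (a scikit-learn sketch; the seed and variable names are illustrative):
\begin{verbatim}
from sklearn.model_selection import train_test_split

train, rest = train_test_split(posts, test_size=0.2, shuffle=True,
                               random_state=42)
val, test = train_test_split(rest, test_size=0.5, random_state=42)
\end{verbatim}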
\section{Results}
\label{sec:res}
The results obtained by training the models using only word embeddings are displayed in table~\ref{Table:Words}. Here, the row fastText (Sinhala News Comments), taken from the work of~\newcite{weeraprameshwara2022sentiment}, is used as a benchmark against which the performance measures of the other word embeddings are compared. There, the Facebook data set was embedded using fastText word embeddings trained on the Sinhala News Comments data set introduced through the work of~\newcite{senevirathne2020sentiment}, while the latter rows display the results of embedding the Facebook data set with word embeddings trained on the Facebook data set itself.
As the table portrays, using the Facebook data set of 542,871 preprocessed Facebook posts, which is much larger than the Sinhala News Comments data set with its 15,000 Sinhala news comments, to develop the word embeddings has resulted in a comparatively higher F1 score.
\begin{table*}[!htb]
\centering
\begin{tabular}{|p{7.5cm}|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Word Embedding}} & \multicolumn{4}{c|}{\textbf{Performance Measures}}\\
\hhline{~----}
& \textbf{Accuracy} & \textbf{Precision} &\textbf{Recall} & \textbf{F1} \\
\hline
fastText (Sinhala News comments) \cite{weeraprameshwara2022sentiment} & 81.17 & 81.17 & 81.57 & 81.37 \\
\hline
Word2vec & 83.47 & 83.65 & 83.47 & 83.56 \\
GloVe & 82.09 & 81.91 & 82.65 & 82.28\\
fastText & \textbf{83.76} & \textbf{83.76} & \textbf{83.76} & \textbf{83.76} \\
Hyperbolic & 82.78 & 82.11 & 83.58 & 82.84\\
\hline
\end{tabular}
\caption{Word Embedding results}
\label{Table:Words}
\end{table*}
The results of each embedding in the two-tiered structure are shown in table~\ref{Table:results}. The first column presents the word embedding method used while the second column depicts the sentence embedding method utilized and the rest of the columns are used to present the performance measures. The best performance measures from each word embedding category are highlighted.
\FPeval{\MAXW}{86.72}
\FPeval{\MAXG}{86.22}
\FPeval{\MAXF}{87.49}
\FPeval{\MAXH}{85.77}
\FPeval{\MINW}{86.64}
\FPeval{\MING}{85.99}
\FPeval{\MINF}{87.52}
\FPeval{\MINH}{85.11}
\FPeval{\AVGW}{87.01}
\FPeval{\AVGG}{85.93}
\FPeval{\AVGF}{87.93}
\FPeval{\AVGH}{85.47}
\FPeval{\GSW}{85.75}
\FPeval{\GSG}{85.16}
\FPeval{\GSF}{86.23}
\FPeval{\GSH}{86.13}
\FPeval{\GAW}{87.29}
\FPeval{\GAG}{85.12}
\FPeval{\GAF}{88.04}
\FPeval{\GAH}{86.54}
\FPeval{\LSW}{86.01}
\FPeval{\LSG}{85.16}
\FPeval{\LSF}{86.60}
\FPeval{\LSH}{85.81}
\FPeval{\LAW}{86.53}
\FPeval{\LAG}{85.12}
\FPeval{\LAF}{87.72}
\FPeval{\LAH}{86.30}
\begin{table*}[!htb]
\centering
\begin{tabularx}{\textwidth}{|l|l||*{4}{Y|}}
\hline
\multicolumn{2}{|c||}{\textbf{Embedding level}} &
\multicolumn{4}{c|}{\textbf{Performance Measures}} \\
\hline
\textbf{Word} & \textbf{Sentence} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score}\\
\hline
\multirow{7}{*}{Word2vec}
& Max Pooling & 77.23 & 80.06 & 94.59 & \MAXW \\
& Min Pooling & 77.29 & \textbf{81.55} & 92.41 & \MINW\\
& Avg Pooling & 77.44 & 81.43 & 93.41 & \AVGW\\
& Seq2seq GRU & 75.86 & 76.74 & 97.14 & \GSW\\
& Seq2seq GRU with attention & \textbf{79.12} & 79.72 & 96.45 & \textbf{\GAW}\\
& Seq2seq LSTM & 75.97 & 76.17 & \textbf{98.76} & \LSW\\
& Seq2seq LSTM with attention & 77.42 & 77.86 & 97.36 & \LAW\\
\hline
\multirow{7}{*}{GloVe}
& Max Pooling & 75.63 & 77.74 & 96.79 & \textbf{\MAXG}\\
& Min Pooling & 75.34 & \textbf{78.68} & 94.81 & \MING\\
& Avg Pooling & \textbf{76.11} & 76.90 & 97.38 & \AVGG\\
& Seq2seq GRU & 74.23 & 74.15 & \textbf{100.00} & \GSG\\
& Seq2seq GRU with attention & 74.23 & 74.09 & \textbf{100.00} & \GAG\\
& Seq2seq LSTM & 74.23 & 74.15 & \textbf{100.00} & \LSG\\
& Seq2seq LSTM with attention & 74.23 & 74.09 & \textbf{100.00} & \LAG\\
\hline
\multirow{7}{*}{fastText}
& Max Pooling & 79.93 & 81.23 & 94.78 & \MAXF\\
& Min Pooling & 79.80 & 82.49 & 93.22 & \MINF\\
& Avg Pooling & \textbf{80.86} & \textbf{82.55} & 94.07 & \AVGF\\
& Seq2seq GRU & 78.12 & 80.90 & 92.33 & \GSF\\
& Seq2seq GRU with attention & 80.61 & 81.31 & 96.00 & \textbf{\GAF}\\
& Seq2seq LSTM & 79.00 & 82.12 & 91.59 & \LSF\\
& Seq2seq LSTM with attention & 80.31 & 80.06 & \textbf{96.98} & \LAF\\
\hline
\multirow{7}{*}{Hyperbolic}
& Max Pooling & 76.71 & 77.54 & 95.95 & \MAXH\\
& Min Pooling & 76.11 & 77.68 & 94.11 & \MINH\\
& Avg Pooling & 77.00 & 77.31 & 95.56 & \AVGH\\
& Seq2seq GRU & 76.38 & 77.09 & \textbf{97.57} & \GSH\\
& Seq2seq GRU with attention & \textbf{77.31} & \textbf{78.31} & 96.70 & \textbf{\GAH}\\
& Seq2seq LSTM & 76.48 & 77.91 & 95.49 & \LSH\\
& Seq2seq LSTM with attention & 77.19 & 78.22 & 96.24 & \LAH\\
\hline
\end{tabularx}
\caption{Performance measures of each embedding}
\label{Table:results}
\end{table*}
The best F1 score is produced by the two-tiered embedding which uses fastText as the word embedding and the seq2seq model with GRU RNNs and an attention layer as the sentence embedding, while the second-best F1 score is achieved by the fastText embedding with average pooling. For each of the sentence embedding methods, the highest F1 score is produced by pairing with fastText word embeddings. fastText embeddings have resulted in a better F1 score in the one-tiered embeddings as well. Thus, we can conclude that fastText is the word embedding scheme which provides the best performance in this context.
Upon taking the word embedding categories into consideration, Word2vec embeddings provide the second-best results, with performance scores slightly lower than those of fastText. The ranking of the F1 scores achieved by hyperbolic and GloVe embeddings seems to be highly dependent on the type of sentence embedding used. However, the best F1 score obtained by hyperbolic embeddings, achieved by pairing with the seq2seq model with GRU encoder and decoder units and an attention layer, is higher than the best F1 score GloVe embeddings achieve upon pairing with max pooling sentence embeddings. It should be noted that the structure of the data utilized here may not be optimal for hyperbolic embeddings.
In sentence embeddings, the performance of the seq2seq model with GRU encoder and decoder units and an attention layer tends to surpass the other sentence embedding models, except when the word embedding utilized is GloVe. Nonetheless, stripping off the attention layer brings the performance of the seq2seq model with LSTM encoders and decoders to a level higher than that obtained with GRU encoder and decoder units, with the exception of the case where hyperbolic word embeddings are utilized.
Furthermore, there is a clear improvement in the performance scores of the seq2seq model when the attention layer is applied to the decoder. However, when the attention layer is not applied, pooling embeddings manage to perform better than seq2seq models except when hyperbolic word embeddings are utilized. The reason for this exception could be that the Euclidean pooling mechanisms used may not be the best fit for hyperbolic embeddings.
\section{Conclusion}
\label{sec:conc}
Comparing tables~\ref{Table:Words} and~\ref{Table:results} makes it evident that there is a clear improvement in performance when two-tiered embedding systems are used, in contrast to simply using a single tier of word embeddings. A possible reason for this improvement is that the sentence embeddings in the two-tiered systems enable the models to consider the syntax of sentences. When word embeddings of Sinhala Facebook posts are directly fed to a sentiment analysis model, the model is likely to see the Facebook posts as merely an unorganized set of words instead of an organized set of sentences.
In addition, the results displayed in table~\ref{Table:results} show that the two-tiered embedding system combining fastText word embeddings and seq2seq sentence embeddings with GRU encoder and decoder units as well as an attention layer has given rise to the best performance measures. Although Word2vec embeddings follow closely behind in performance, they have failed to surpass fastText, possibly due to their inability to consider the internal structure of words, which the fastText embedding system by nature is capable of \cite{bojanowski2017enriching,joulin2016bag}.
Though the hyperbolic space has an advantage over the Euclidean space due to its ability to effectively represent complex hierarchical data structures \cite{nickel2017poincare}, fastText and Word2vec embeddings have outperformed hyperbolic embeddings in this research. The reason for this could be the lack of potent parsing tools for the Sinhala language \cite{de2019survey}. To obtain the optimum performance from hyperbolic embeddings, an effective hierarchical structure such as sentence structures identified via parsing is required. The simple $[word, sentence]$ relation structure used in this research may not be sufficient for this. Furthermore, the pooling techniques also fail to be on par with the seq2seq model, possibly due to the fact that the vectors generated by applying Euclidean pooling mechanisms on hyperbolic embeddings do not always fall within the space of the Poincar\'e ball.
Another noteworthy fact is that the GloVe embeddings tend to underperform in comparison to the other word embeddings models used in this research. Unlike resource-rich languages such as English, no pre-trained GloVe models exist for the Sinhala language. This could hinder the ability of GloVe embeddings to achieve their full potential.
Thus, it can be concluded that though a robust embedding model for Sinhala that is applicable across all domains may not currently be available, it is possible to develop an effective embedding system that is at least potent within the domain of the training data set. This can be done by applying a two-tiered embedding model, such as seq2seq sentence embeddings with GRU encoders and decoders stacked on top of fastText word embeddings, to a sufficiently large data set.
\section{Future Work}
\label{sec:future}
This research is related to the work of~\newcite{jayawickrama2021seeking}; as the final goal, a Facebook reaction prediction tool for colloquial Sinhala text will be developed, and the word representations developed in this project will be used in that tool.
The data set contains both Sinhala and English text, since our aim is to develop a word representation for colloquial Sinhala text, which consists of Sinhala and English code-mixed content. However, a pure Sinhala embedding could be generated in the future.
Furthermore, Poincar\'e embeddings could be developed for Sinhala text with the use of a proper parser to identify sentence structures, though developing a reasonable parser for colloquial text will be a challenge.
Though this research only considers sentiment analysis for the Sinhala language, the applicability of the two-tiered embedding systems discussed here to other areas of natural language processing, as well as to other resource-poor languages, could be tested as well.
\section*{Funding Information}
The project is funded by two Sapere Aude grants (ID: DFF – 4005-00370 and DFF: - 1323-00752) awarded by the Danish Research Council for Technology and Production as well as by Villum Fonden via the NATEC Center of Excellence.
\section{Introduction}
In \cite{Tz}, we constructed and proved the invariance of a Gibbs measure associated to the sub-cubic,
focusing or defocusing Nonlinear Schr\"odinger equation (NLS) on the disc of the plane $\mathbb R^2$.
For focusing non-linear interactions the cubic threshold is critical for the argument in \cite{Tz}
because of measure existence obstructions.
The main goal of this paper is to show that, in the case of defocusing nonlinearities, one can
extend the result of \cite{Tz} to the case of sub-quintic nonlinearities.
Thus we will be able to treat the physically relevant case of the cubic defocusing NLS. The argument
presented here requires some significant elaborations with respect
to \cite{Tz} both in the measure existence analysis and the Cauchy problem issues.
The main facts, proved in \cite{Tz} which will be used here without proof are
some properties of the Bessel functions and their zeros and the bilinear
Strichartz estimates of Proposition~\ref{str1} and Proposition~\ref{str2} below.
\subsection{Presentation of the equation}
Let $V:\mathbb C\longrightarrow\mathbb R$ be a $C^\infty(\mathbb C)$ function.
We suppose that $V$ is gauge invariant
which means that there exists a smooth function $G:\mathbb R\longrightarrow\mathbb R$ such that $V(z)=G(|z|^2)$.
Set $F=\bar{\partial}V$, i.e. $F(z)=G'(|z|^2)z$. Consider the NLS
\begin{equation}\label{1}
(i\partial_t+\Delta)u-F(u)=0,
\end{equation}
where $u:\mathbb R\times\Theta\rightarrow \mathbb C$ is a complex valued function defined on the product
of the real line (corresponding to the time variable) and
$\Theta$, the unit disc of $\mathbb R^2$ corresponding to the spatial variable. More precisely
$$
\Theta=\big\{(x_1,x_2)\in\mathbb R^2\, :\, x_1^2+x_2^2<1\big\}.
$$
In this paper, we consider (\ref{1}) subject to Dirichlet boundary conditions
$u|_{\mathbb R\times \partial\Theta}=0$. It is likely that Neumann boundary
conditions are in the scope of applicability of our methods too.
We suppose that
\begin{equation}\label{rast}
\exists\, \alpha\in ]0,4[\,:\,\forall\, (k_1,k_2),\,
\exists\, C>0\, :\, \forall\, z\in \mathbb C,\,
\big|\partial^{k_1}\bar{\partial}^{k_2}V(z)\big|\leq
C(1+|z|)^{2+\alpha-k_1-k_2}\, .
\end{equation}
The number $\alpha$ involved in (\ref{rast}) measures the ``degree'' of the
nonlinearity. In this paper we will also suppose the defocusing assumption
\begin{equation}\label{defocus}
\exists\,\, C>0,\,\,\exists\,\, \beta\in [2,4[\,\, :\,\,
\forall\, z\in\mathbb C,\,V(z)\geq -C(1+|z|)^{\beta}.
\end{equation}
A typical example for $V$ is
$$
V(z)=\frac{2}{\alpha+2}\,\,(1+|z|^2)^{\frac{\alpha+2}{2}}
$$
with corresponding
$$
F(z)=(1+|z|^2)^{\frac{\alpha}{2}}z\,.
$$
In the case $\alpha =2$ one can take $V(z)=\frac{1}{2}|z|^{4}$ which leads to a cubic defocusing
nonlinearity $F(u)=|u|^{2}u$. Observe that $V(z)=-\frac{1}{2}|z|^{4}$, which
is the potential of the cubic
focusing nonlinearity $F(u)=-|u|^{2}u$, does not satisfy assumption (\ref{defocus}).
We restrict our consideration only to radial solutions, i.e. we shall suppose that $u=u(t,r)$, where
$$
x_1=r\cos\phi,\quad x_2=r\sin\phi,\quad 0\leq r<1,\quad \phi\in[0,2\pi].
$$
Our goal here is to construct a Gibbs type measure, on a suitable phase space,
associated to the radial solutions
of (\ref{1}) which is invariant under the (well-defined) global flow of (\ref{1}).
\subsection{Bessel expansion and formal Hamiltonian form}
Since we deal with radial solutions of (\ref{1}), it is natural to use Bessel function expansions.
Denote by $J_{0}(x)$ the zero order Bessel function. We have that (see \cite{Tz} and the
references therein) $J_{0}(0)=1$ and $J_{0}(x)$ decays as
$x^{-1/2}$ when $x\rightarrow\infty$. More precisely
$$
J_{0}(x)= \sqrt{\frac{2}{\pi}}\,\,\frac{\cos(x-\pi/4)}{\sqrt{x}}+{\mathcal O}(x^{-3/2}).
$$
Let $0<z_1<z_2<\cdots$ be the (simple) zeroes of $J_0(x)$. Then (see e.g. \cite{Tz}) $z_n\sim n$ as
$n\rightarrow\infty$.
Each $L^2$ radial function may be expanded with respect to the Dirichlet bases formed by
$J_0(z_n r)$, $n=1,2,3,\cdots$. The functions $J_0(z_n r)$ are eigenfunctions of $-\Delta$
with eigenvalues $z_n^2$.
Define $e_n:\Theta\rightarrow\mathbb R$ by
$$
e_n\equiv e_n(r)=\|J_0(z_n\cdot)\|_{L^2(\Theta)}^{-1}J_0(z_nr)\,.
$$
We have (see \cite{Tz}) that $\|J_0(z_n\cdot)\|_{L^2(\Theta)}\sim n^{-1/2}$ as $n\rightarrow\infty$.
Therefore $\|e_n\|_{L^2(\Theta)}=1$ but $\|e_n\|_{L^{\infty}(\Theta)}\sim n^{1/2}$ as $n\rightarrow\infty$.
Hence we observe a significant difference between the disc and the flat torus $\mathbb T^2$, where the sup norm
of the eigenfunctions can not grow so fast.
Let us fix from now on a real number $s$ such that
\begin{equation}\label{s}
\max\big(\frac{1}{3},1-\frac{2}{\alpha},1-\frac{2}{\beta}\big)<s<\frac{1}{2}
\end{equation}
(recall that $\alpha,\beta<4$ and thus a proper choice of the index $s$ is indeed possible).
Set $e_{n,s}=z_{n}^{-s}e_{n}$ ($H^s$ normalization) and if
$$
u(t)=\sum_{n=1}^{\infty}c_{n}(t)e_{n,s}
$$
then we need to analyze the equation
\begin{equation}\label{2}
iz_{n}^{-s}\dot{c_n}(t)-z_n^2\,z_{n}^{-s} c_n(t)-
\Pi_{n}\Big(
F\Big(
\sum_{m=1}^{\infty}c_m(t)\, e_{m,s}\Big)
\Big)=0,\quad n=1,2,\cdots
\end{equation}
where $\Pi_n$ is the projection on the mode $e_n$, i.e.
$\Pi_n(f)=\langle f, e_n\rangle.$
Equation (\ref{2}) is a Hamiltonian PDE for $c\equiv (c_n)_{n\geq 1}$ with Hamiltonian
$$
H(c,\overline{c})=\sum_{n=1}^{\infty} z_n^{2-2s}\, |c_n|^2 +2\pi\int_{0}^1V\Big(\sum_{n=1}^{\infty}c_n\,
e_{n,s}(r)\Big)rdr\, ,
$$
and a formal Hamiltonian form
$$
ic_t=J\frac{\delta H}{\delta \overline{c}},\quad i\overline{c}_{t}=-J\frac{\delta H}{\delta c}\, ,
$$
where $J={\rm diag}(z_n^{2s})_{n\geq 1}$ is the
map inducing the symplectic form in the coordinates $(c,\overline{c})$.
Thus the quantity $H(c,\overline{c})$ is, at least formally, conserved by the flow.
In fact we will need to use the energy conservation only for finite dimensional (Hamiltonian)
approximations of (\ref{2}).
Let us also observe that the $L^2$ norm of $u(t)$ expressed in terms of $c$ as
$$
\|c\|^2\equiv\sum_{n=1}^{\infty}z_{n}^{-2s}|c_n|^2
$$
is also conserved by the flow. Following Lebowitz-Rose-Speer \cite{LRS}, we will construct a
{\bf renormalization} of the formal measure $\chi(\|c\|)\exp(-H(c,\overline{c}))dc\,d\bar{c}$
($\chi$ being a cut-off) which is invariant under the (well-defined) flow, living on a low regularity phase space
(for a finite dimensional Hamiltonian model the invariance would follow from the Liouville theorem
for volume preservation by flows induced by divergence free vector fields).
\subsection{The free measure}
Define the Sobolev spaces $H^\sigma_{rad}(\Theta)$, $\sigma\geq 0$ equipped with the norm
$$
\Big\|\sum_{n=1}^{\infty}c_n\, e_{n,s}\Big\|^{2}_{H^{\sigma}_{rad}(\Theta)}
\equiv\sum_{n=1}^{\infty}z_n^{2(\sigma-s)}|c_n|^{2}\,.
$$
The Sobolev spaces $H^\sigma_{rad}(\Theta)$ are related to the domains of $\sigma/2$ powers of the Dirichlet
Laplacian. In several places in the sequel, we shall denote $\|\cdot\|_{H^{\sigma}_{rad}(\Theta)}$
simply by $\|\cdot\|_{H^{\sigma}(\Theta)}$.
We can identify $l^2(\mathbb N;\mathbb C)$ with $H^s_{rad}(\Theta)$ via the map
$$
c\equiv (c_{n})_{n\geq 1}\longmapsto \sum_{n=1}^{\infty}c_n\, e_{n,s}\,.
$$
Consider the free Hamiltonian
$$
H_0(c,\overline{c})=\sum_{n=1}^{\infty} z_n^{2-2s}\, |c_n|^2
$$
and the measure
$$
\frac{``\exp(-H_0(c,\overline{c}))dcd\bar{c}\,''}{\int\exp(-H_0(c,\overline{c}))dcd\bar{c}}=
\prod_{n=1}^{\infty}
\frac{e^{-z_n^{2-2s}|c_n|^{2}}dc_nd\bar{c}_{n}}
{\int_{\mathbb C}e^{-z_n^{2-2s}|c_n|^{2}}dc_nd\bar{c}_{n}}\equiv d\mu(c)\,.
$$
Denote by ${\mathcal B}$ the Borel sigma algebra of $H^s_{rad}(\Theta)$.
The measure $d\mu$ is first defined on cylindrical sets (see \cite{Tz}) in the natural way and
since for $s<1/2$,
$$
\sum_{n=1}^{\infty}z_{n}^{2s-2}<\infty
$$
we obtain that $d\mu$ is countably additive on the cylindrical sets and thus
may be defined as a probability measure on $(H^s_{rad}(\Theta),{\mathcal B})$ via the map considered above.
Let us recall that $A\subset H^s_{rad}(\Theta)$ is called cylindrical if there exist an integer $N$ and
a Borel set $V$ of $\mathbb C^N$ so that
$$
A=\Big\{u\in H^s_{rad}(\Theta)\, :\, \big( (u,e_{1,s}),\dots,(u,e_{N,s})\big)\in V\Big\}.
$$
In addition, the minimal sigma algebra on $H^s_{rad}(\Theta)$ containing the cylindrical sets is
${\mathcal B}$.
\\
The measure $d\mu$ may also equivalently be defined as the distribution of the $H^s_{rad}(\Theta)$
valued random variable
\begin{equation*}
\varphi(\omega,r)=
\sum_{n= 1}^{\infty}\frac{g_n(\omega)}{z_n^{1-s}}e_{n,s}(r)=
\sum_{n= 1}^{\infty}\frac{g_n(\omega)}{z_n}e_{n}(r)\, ,
\end{equation*}
where $g_n(\omega)$ is a sequence of centered, normalised, independent identically distributed (i.i.d.)
complex Gaussian random variables, defined in a probability space
$(\Omega,{\mathcal F},p)$.
By normalised, we mean that
$$
g_{n}(\omega)=\frac{1}{\sqrt{2}}\big(h_{n}(\omega)+i\, l_{n}(\omega)\big),
$$
where $h_n,l_n\in{\mathcal N}(0,1)$ are standard independent real gaussian variables.
Indeed, if we consider the sequence $(\varphi_N(\omega,r))_{N\in\mathbb N}$ defined by
\begin{equation}\label{3}
\varphi_N(\omega,r)=\sum_{n= 1}^{N}\frac{g_n(\omega)}{z_n}e_{n}(r)
\end{equation}
then using that $s<1/2$ we obtain that $(\varphi_N(\omega,r))_{N\in\mathbb N}$ is a Cauchy sequence in
$L^2(\Omega;H^s_{rad}(\Theta))$ and $\varphi(\omega,r)$ is, by definition, the limit of this sequence.
Thus the map which to $\omega\in\Omega$ associates $\varphi(\omega,r)$ is measurable from
$(\Omega,{\mathcal F})$ to $(H^s_{rad}(\Theta),{\mathcal B})$.
Therefore $\varphi(\omega,r)$
may be seen as a $H^s_{rad}(\Theta)$ valued random variable and for every Borel set $A\in {\mathcal B}$,
$$
\mu(A)=p(\omega\,:\, \varphi(\omega,r)\in A).
$$
Moreover, if $f:H^s_{rad}(\Theta)\rightarrow \mathbb R$ is a measurable function then $f$ is integrable if and
only if the real random variable $f\circ \varphi\,:\, \Omega\rightarrow \mathbb R$ is integrable and
$$
\int_{H^s_{rad}(\Theta)}f(u)d\mu(u)=\int_{\Omega}f(\varphi(\omega,\cdot))dp(\omega)\,.
$$
\subsection{Measure existence}
Following the basic idea one may expect that the measure (Gibbs measure)
\begin{equation}\label{mesdef}
d\rho(u)\equiv\chi\big(\|u\|_{L^2(\Theta)}\big)\exp\Big(-\int_{\Theta}V(u)\Big)d\mu(u)
\end{equation}
is invariant under the flow of (\ref{1}).
In (\ref{mesdef}),
$$
\chi:\mathbb R\longrightarrow [0,\infty[
$$
is a non-negative continuous function with compact support.
In (\ref{mesdef}), $\exp\big(-\int_{\Theta}V(u)\big)$ is the contribution of
the nonlinearity of (\ref{1}) to the Hamiltonian, while the free Hamiltonian
(coming from the linear part of (\ref{1})) is incorporated in $d\mu(u)$.
One may wish to see $d\rho(u)$ as the image measure on $H^s_{rad}(\Theta)$ under the map
$$
\omega\longmapsto \sum_{n= 1}^{\infty}\frac{g_n(\omega)}{z_n}e_{n}(r)
$$
of the measure
$$
\chi\big(\|\varphi(\omega,\cdot)\|_{L^2(\Theta)}\big)
\exp\Big(-2\pi\int_{0}^{1}V\big(\varphi(\omega,r)\big)rdr \Big)dp(\omega)\,.
$$
A first problem (in order to ensure that $\rho$ is not trivial) is whether
$\int_{\Theta}V(u)$ is finite $\mu$ almost surely (a.s.). Let us notice that an appeal to
(\ref{rast}) and the
Sobolev inequality gives
\begin{equation}\label{sobolev}
\Big|
\int_{\Theta}V(u)
\Big|
\leq
C\big(1+\|u\|_{L^{\alpha+2}(\Theta)}^{\alpha+2}\big)
\leq
C\big(1+\|u\|^{\alpha+2}_{H^{\sigma}_{rad}(\Theta)}\big),
\end{equation}
provided $\sigma\geq 2(\frac{1}{2}-\frac{1}{2+\alpha})=\frac{\alpha}{2+\alpha}$.
For $\alpha\geq 2$ (a case excluded in \cite{Tz}), inequality (\ref{sobolev}) does
not suffice to conclude that $\int_{\Theta}V(u)$ is finite $\mu$ a.s.
Indeed, for $\alpha\geq 2$ one has $\sigma\geq \frac{1}{2}$ and,
using for instance the Fernique integrability theorem, one may show that
$\|u\|_{H^{\sigma}_{rad}(\Theta)}=\infty$, $\mu$ a.s.
We can however resolve this problem by using a probabilistic argument
(which ``improves'' on the Sobolev inequality).
Let us also mention the recent work \cite{AT}, where one studies $L^p$
properties of Gaussian random series with a particular attention to radial functions.
Here is a precise statement.
\begin{theoreme}\label{thm1}
We have that $\int_{\Theta}V(u)\in L^{1}(d\mu(u))$
(in particular $\int_{\Theta}V(u)$ is $\mu$ a.s. finite).
\end{theoreme}
Essentially, the assertion of Theorem~\ref{thm1} follows from the considerations in \cite{AT}.
We will however give below a proof of Theorem~\ref{thm1} using an argument slightly different from \cite{AT}.
\subsection{Finite dimensional approximations}
Let $E_N$ be the finite dimensional complex vector space spanned by $(e_{n})_{n=1}^{N}$.
We consider $E_N$ as a measure space with the measure induced from $\mathbb C^N$ under
the map from $\mathbb C^N$ to $E_N$ defined by
$$
(c_1,\cdots,c_{N})\longmapsto \sum_{n=1}^{N}c_{n}e_{n,s}\,.
$$
Following Zhidkov (cf. \cite{Zh} and the references therein), we consider the finite dimensional
projection (an ODE) of (\ref{1})
\begin{equation}\label{N}
(i\partial_t+\Delta)u-S_{N}(F(u))=0,\quad u|_{t=0}\in E_N,
\end{equation}
where $S_N$ is the projection on $E_N$.
Notice that $S_{N}(F(u))$ is well-defined for $u\in E_N$ since $E_N\subset
C^{\infty}(\overline{\Theta})$.
The equation (\ref{N}) is a Hamiltonian ODE for $u\in E_N$ with Hamiltonian
$$
H_{N}(u,\bar{u})=\int_{\Theta}|\nabla u|^{2}+\int_{\Theta}V(u),\quad u\in E_N\,.
$$
Thus $H_{N}(u,\bar{u})$ is conserved by the flow of (\ref{N}). One may directly check this by
multiplying (\ref{N}) with $\bar{u}_{t}\in E_N$ and integrating over $\Theta$
(observe that the boundary terms in the integration by parts disappear).
Multiplying (\ref{N}) by $\bar{u}$ and integrating over $\Theta$, we see that the $L^2(\Theta)$ norm
is also preserved by the flow of (\ref{N}) and thus (\ref{N}) has a well-defined global dynamics.
Denote by $\Phi_{N}(t):E_N\rightarrow E_N$, $t\in\mathbb R$ the flow of (\ref{N}).
Let $\mu_N$ be the distribution of the $E_N$ valued random variable
$\varphi_{N}(\omega,r)$ defined by (\ref{3}). Set
$$
d\rho_N(u)\equiv
\chi\big(\|u\|_{L^2(\Theta)}\big)
\exp\big(-\int_{\Theta}V(u)\big)d\mu_N(u).
$$
One may see $\rho_{N}$ as the image measure on $E_N$ under the map
$\omega\mapsto \varphi_{N}(\omega,r)$ of the measure
$$
\chi\big(\|\varphi_{N}(\omega,\cdot)\|_{L^2(\Theta)}\big)
\exp\Big(-2\pi\int_{0}^{1}V\big(\varphi_{N}(\omega,r)\big)rdr\Big)dp(\omega)\,.
$$
From the Liouville theorem for divergence free vector fields, the measure $\rho_{N}$ is
invariant under $\Phi_{N}(t)$. Indeed, if we write the solution of (\ref{N}) as
$$
u(t)=\sum_{n=1}^{N}c_{n}(t)e_{n,s}\,,\quad c_{n}(t)\in \mathbb C
$$
then in the coordinates $c_{n}$, the equation (\ref{N}) can be written as
\begin{equation}\label{4bis}
iz_{n}^{-s}\dot{c_n}(t)-z_n^2\,z_{n}^{-s} c_n(t)-
\int_{\Theta}S_{N}(F(u(t)))\overline{e_{n}}=0,\quad 1\leq n\leq N.
\end{equation}
Equation (\ref{4bis}) in turn can be written in a Hamiltonian format as follows
$$
\partial_{t}c_n=-iz_{n}^{2s}\frac{\partial H}{\partial \overline{c_n}},
\quad
\partial_{t}\overline{c_n}=iz_{n}^{2s}\frac{\partial H}{\partial c_n},\quad 1\leq n\leq N,
$$
with
$$
H(c,\overline{c})=\sum_{n= 1}^{N} z_n^{2-2s}\, |c_n|^2 + 2\pi\int_{0}^1
V\Big(\sum_{n=1}^{N}c_n\, e_{n,s}(r)\Big)rdr\, ,\quad c=(c_1,\cdots,c_N).
$$
Since
$$
\sum_{n=1}^{N}
\Big(
\frac{\partial}{\partial c_n}\big(-iz_{n}^{2s}\frac{\partial H}{\partial \overline{c_n}}\big)
+
\frac{\partial}{\partial\overline{c_n}}
\big(iz_{n}^{2s}\frac{\partial H}{\partial c_n}\big)
\Big)=0,
$$
we can apply the Liouville theorem for divergence free vector fields to
conclude that the measure $dcd\overline{c}$ is invariant under the flow of
(\ref{4bis}).
On the other hand the quantities $H(c,\overline{c})$ and
$$
\|c\|^{2}\equiv\sum_{n=1}^{N}z_{n}^{-2s}|c_n|^{2}
$$
are conserved under the flow of (\ref{4bis}).
Moreover, by definition if $A$ is a Borel set of $E_{N}$ defined by
$$
A=\Big\{
u\in E_{N}\,:\, u=\sum_{n=1}^{N}c_{n}e_{n,s},\quad (c_1,\cdots,c_N)\in A_1
\Big\},
$$
where $A_1$ is a Borel set of $\mathbb C^N$, then
$$
\rho_{N}(A)=\kappa_{N}\int_{A_1}e^{-H(c,\overline{c})}\chi(\|c\|)dcd\overline{c},
$$
with
$$
\kappa_{N}=\pi^{-N}\Big(\prod_{1\leq n\leq N}z_{n}^{2-2s}\Big).
$$
Therefore the measure $\rho_{N}$ is invariant under $\Phi_{N}(t)$,
thanks to the invariance of $dcd\overline{c}$ and the $\Phi_{N}(t)$
conservation of $H(c,\overline{c})$ and $\chi(\|c\|)$.
Let us also observe that if we write (\ref{4bis}) in terms of $(\Re(c_n),\textrm{Im}(c_n))$ then
we still obtain a Hamiltonian ODE and one may show the invariance of $\rho_N$
under (\ref{N}) by analyzing that ODE.
\\
One may extend $\rho_N$ to a measure $\tilde{\rho}_{N}$ on $H^s_{rad}(\Theta)$.
If $U$ is a $\rho$ measurable set then $\tilde{\rho}_{N}(U)\equiv \rho_{N}(U\cap E_{N})$.
A similar definition may be given for $\mu_N$.
The measure $\tilde{\rho}_{N}$ is well-defined since for $U\in {\mathcal B}$ one has that
$U\cap E_{N}$ is a Borel set of $E_{N}$. Indeed, this property is clear for $U$ a cylindrical set and
then we extend it to ${\mathcal B}$ by the key property of the cylindrical sets.
Observe that for $U$, a $\rho$ measurable set, one has
$$
\tilde{\rho}_{N}(U)=\int_{U_{N}}\chi\big(\|S_N(u)\|_{L^2(\Theta)}\big)
\exp\big(-\int_{\Theta}V(S_{N}(u))\big)d\mu(u),
$$
where
$$
U_{N}=\big\{u\in H^s_{rad}(\Theta)\,:\, S_{N}(u)\in U\big\}.
$$
The following properties relating $\rho$ and $\rho_N$ will be useful in our analysis concerning the long
time dynamics of (\ref{1}).
\begin{theoreme}\label{thm2}
One has that for every $p\in [1,\infty[$,
$$
\chi\big(\|u\|_{L^2(\Theta)}\big)
\exp\big(-\int_{\Theta}V(u)\big)\in L^p(d\mu(u)).
$$
In addition, if we fix $\sigma\in [s,1/2[$ then for every $U$ an open set of $H^{\sigma}_{rad}(\Theta)$
one has
\begin{equation}\label{parvo}
\rho(U)\leq\liminf_{N\rightarrow\infty}\tilde{\rho}_{N}(U)\,\,
(= \liminf_{N\rightarrow\infty}\rho_{N}(U\cap E_{N})) \,.
\end{equation}
Moreover if $F$ is a closed set of $H^{\sigma}_{rad}(\Theta)$ then
\begin{equation}\label{vtoro}
\rho(F)\geq\limsup_{N\rightarrow\infty}\tilde{\rho}_{N}(F)\,\,
(=\limsup_{N\rightarrow\infty}\rho_{N}(F\cap E_{N})).
\end{equation}
\end{theoreme}
The proof of Theorem~\ref{thm2} is slightly more delicate than an analogous result used in \cite{Tz}.
In contrast with \cite{Tz}, we cannot exploit that $\int_{\Theta}V(S_{N}u)$ converges $\mu$
a.s. to $\int_{\Theta}V(u)$.
In \cite{Tz} we deal with sub-quartic growth of $V$ and by the Sobolev embedding
we can get directly the needed $\mu$ a.s. convergence. Here we will need to use a different argument.
\subsection{Statement of the main result}
With Theorem~\ref{thm1} and Theorem~\ref{thm2} in hand we can prove our main result.
\begin{theoreme}\label{thm3}
The measure $\rho$ is invariant under the well-defined $\rho$ a.s. global in time flow of
the NLS (\ref{1}), posed on the disc. More precisely :
\begin{itemize}
\item
There exists a $\rho$ measurable set $\Sigma$ of full $\rho$ measure such that for every $u_0\in\Sigma$
the NLS (\ref{1}), posed on the disc,
with initial data $u|_{t=0}=u_0$ has a unique (in a suitable sense)
global in time solution $u\in C(\mathbb R;H^s_{rad}(\Theta))$.
In addition, for every $t\in\mathbb R$, $u(t)\in\Sigma$ and the map $u_0\mapsto u(t)$
is $\rho$ measurable.
\item
For every $A\subset\Sigma$, a $\rho$ measurable set, for every $t\in\mathbb R$,
$
\rho(A)=\rho(\Phi(t)(A)),
$
where $\Phi(t)$ denotes the flow defined in the previous point.
\end{itemize}
\end{theoreme}
The uniqueness statement of Theorem~\ref{thm3} is in the sense of a uniqueness for the integral equation
(\ref{venda}) in a suitable space continuously embedded in the space of continuous $H^s_{rad}(\Theta)$
valued functions. Another possibility is to impose zero boundary conditions on $\mathbb R\times \partial\Theta$
and then relate the solutions of (\ref{1}) to the solutions of (\ref{venda})
(see also Remark~\ref{zabelejka} below).
As a consequence of Theorem~\ref{thm3} one may apply the Poincar\'e recurrence theorem to the flow $\Phi$.
For previous works proving the invariance of Gibbs measures under the flow of NLS we refer to
\cite{Bo1,Bo2,Zh}.
In all these works one considers periodic boundary conditions, i.e. the spatial domain is the flat torus.
We also refer to \cite{KS}, for a construction of invariant measures, supported by $H^2$,
for the defocusing NLS.
Let us also remark that the result of Theorem~\ref{thm3} implies that the
sub-quintic defocusing NLS is almost surely globally well-posed for data
$\varphi(\omega,r)$ defined by
$$
\varphi(\omega,r)= \sum_{n= 1}^{\infty}\frac{g_n(\omega)}{z_n}e_{n}(r)\,.
$$
Because of the low regularity of $\varphi$ for typical $\omega$'s such a
result seems to be difficult to achieve by the present deterministic methods
for global well-posedness of NLS.
\subsection{Structure of the paper and notation}
Let us briefly describe the organization of the rest of the paper.
In the next section, we prove Theorem~\ref{thm1}. Section~3, is devoted to the proof of Theorem~\ref{thm2}.
In Section~4, we recall the definition of the Bourgain spaces and we state two bilinear
Strichartz estimates which are the main tool in the study of the local Cauchy problem. In Section~5,
we prove nonlinear estimates in Bourgain spaces. Section~6 is devoted to the local well-posedness analysis.
In Section~7, we establish the crucial control on the dynamics of (\ref{N}). In section~8, we construct
the set $\Sigma$ involved in the statement of Theorem~\ref{thm3}.
In Section~9, we prove the invariance of the measure. In the last section, we prove several bounds for the
3d NLS with random data.
\\
In this paper, we assume that the set of the natural numbers $\mathbb N$ is $\{1,2,3,\cdots\}$.
We call dyadic integers the non-negative powers of $2$, i.e. $1,2,4,8$ etc.
\subsection{Acknowledgements.}
I am very grateful to Nicolas Burq for several useful discussions on the problem and for
pointing out an error in a previous version of this text.
It is a pleasure to thank A.~Ayache and H.~Queff\'elec for useful
discussions on random series.
I am also indebted to N.~Burq and P.~G\'erard since this work (as well as \cite{Tz}) benefited from
our collaborations on NLS on compact manifolds.
I thank the referee for pointing out several imprecisions in a previous version
of the paper.
\section{Proof of Theorem~\ref{thm1} (measure existence)}
\subsection{Large deviation estimates}
\begin{lemme}\label{lem1}
Let $(g_n(\omega))_{n\in\mathbb N}$ be a sequence of normalized i.i.d. complex Gaussian random
variables defined in a probability space $(\Omega,{\mathcal F},p)$.
There exists $\beta>0$ such that for every $\lambda >0$, every sequence $(c_n)\in l^2(\mathbb N;\mathbb C)$
of complex numbers,
\begin{equation*}
p\Big(\omega\,:\,\big|\sum_{n=1}^{\infty} c_n g_n(\omega) \big|>\lambda\Big)\leq
4\,e^{-\frac{\beta\lambda^2}{\sum_{n}|c_n|^2}}
\end{equation*}
(the right hand-side being defined as zero if $(c_n)_{n\in\mathbb N}$ is identically zero).
\end{lemme}
\begin{proof}
By separating the real and the imaginary parts, we can assume that $g_n$ are real valued independent
standard gaussians and $c_n$ are real constants. The bound we need to prove
is thus
\begin{equation}\label{real}
\exists\,\beta>0\,:\,
\forall\, (c_n)\in l^2(\mathbb N;\mathbb R),\, \forall\, \lambda>0,\quad
p\Big(\omega\,:\,\big|\sum_{n=1}^{\infty} c_n g_n(\omega) \big|>\lambda\Big)\leq
2\,e^{-\frac{\beta\lambda^2}{\sum_{n}c_n^2}}\, .
\end{equation}
We may of course assume that the sequence $(c_n)_{n\in\mathbb N}$ is not identically zero.
For $t>0$ to be determined later, using the independence, we obtain that
\begin{equation*}
\int_{\Omega}\, \exp\Big(t\sum_{n= 1}^{\infty}c_n g_n(\omega)\Big)dp(\omega)=
\exp\Big((t^2/2)\sum_{n=1}^{\infty}c_n^2\Big)\, .
\end{equation*}
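Indeed, for a single standard real gaussian one computes, by completing the square,
$$
\int_{-\infty}^{\infty}e^{tc_{n}g}\,e^{-g^{2}/2}\,\frac{dg}{\sqrt{2\pi}}
=e^{t^{2}c_{n}^{2}/2}\int_{-\infty}^{\infty}e^{-(g-tc_{n})^{2}/2}\,\frac{dg}{\sqrt{2\pi}}
=e^{t^{2}c_{n}^{2}/2}\,,
$$
and taking the product over $n$ gives the claimed identity.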
Therefore
$$
\exp\Big((t^2/2)\sum_{n=1}^{\infty}c_n^2\Big)\geq \exp(t\lambda)\,\,\,
p\,\Big(\omega\,:\,\sum_{n=1}^{\infty} c_n g_n(\omega)>\lambda\Big)
$$
and thus
$$
p\,(\omega\,:\,\sum_{n=1}^{\infty} c_n g_n(\omega)>\lambda)\leq \exp\Big((t^2/2)
\sum_{n=1}^{\infty}c_n^2\Big)\,\,\,
e^{-t\lambda}\, .
$$
For $a>0$, $b>0$ the minimum of $f(t)=at^2-bt$ is $-b^2/4a$ and this minimum is
attained in the positive number $t=b/(2a)$. It is thus natural to choose the positive number $t$ as
$$
t\equiv \lambda/\big(\sum_{n=1}^{\infty}c_n^2\big)
$$
which leads to
$$
p\,(\omega\,:\,\sum_{n=1}^{\infty} c_n g_n(\omega)>\lambda)\leq
\exp\Big(-\frac{\lambda^2}{2\sum_{n}c_n^2}\Big)\, .
$$
In the same way (replacing $c_n$ by $-c_n$), we can show that
$$
p\,(\omega\,:\,\sum_{n=1}^{\infty} c_n g_n(\omega)<-\lambda)\leq
\exp\Big(-\frac{\lambda^2}{2\sum_{n}c_n^2}\Big)
$$
which shows that (\ref{real}) holds with $\beta=1/2$.
This completes the proof of Lemma~\ref{lem1}.
\end{proof}
We next state the following consequence of Lemma~\ref{lem1}.
\begin{lemme}\label{armen}
Let $(g_n(\omega))_{n\in\mathbb N}$ be a sequence of normalized i.i.d. complex Gaussian random
variables defined in a probability space $(\Omega,{\mathcal F},p)$.
Then there exist positive numbers $c_1,c_2$ such that for every non empty
finite set of indexes $\Lambda \subset \mathbb N $, every $\lambda >0$,
$$
p\Big(\omega\in \Omega\, : \, \sum_{n\in \Lambda} |g_n(\omega)|^2 >\lambda\Big)\leq
e^{c_1|\Lambda|-c_2\lambda}
\, ,
$$
where $|\Lambda|$ denotes the cardinality of $\Lambda$.
\end{lemme}
\begin{proof}
A proof of this lemma is given in \cite[Lemma~3.4]{Tz}.
Here we propose a different proof based on Lemma~\ref{lem1}.
The interest of this proof is that the argument might be useful in more
general situations.
Again, we can suppose that $g_n$ are real valued standard gaussians.
A simple geometric observation shows that there exists $c_1>0$ (independent of $|\Lambda|$)
and a set ${\mathcal A}$ of the unit ball of $\mathbb R^{|\Lambda|}$ of cardinality bounded
by $e^{c_1|\Lambda|}$ such that almost surely in $\omega$,
$$
\frac{1}{2}\, \Big(\sum_{n\in \Lambda}\, |g_n(\omega)|^2\Big)^{1/2}
\leq
\sup_{c\in {\mathcal A}}\,
\big|\sum_{n\in\Lambda} c_n g_{n}(\omega) \big|\,
$$
($c=(c_n)_{n\in\Lambda}$ with $\sum_{n}|c_n|^2=1$).
Therefore
$$
\{\omega\,: \,\sum_{n\in \Lambda} |g_n(\omega)|^2 >\lambda\}\subset\,\bigcup_{c\in {\mathcal A}}
\,\{\omega\,: \,|\sum_{n\in\Lambda} c_n g_{n}(\omega)|\geq \frac{\sqrt{\lambda}}{2}\}\, .
$$
Consequently, using Lemma~\ref{lem1}, we obtain that there exists $c_2>0$,
independent of $\Lambda$, such that for every $\lambda>0$,
$$
p\,(\omega\,: \,\sum_{n\in \Lambda} |g_n(\omega)|^2 >\lambda)
\leq
|{\mathcal A}|\, 4e^{-c_2\lambda}\leq 4\, e^{c_1|\Lambda|-c_2\lambda}
< e^{(c_1+2)|\Lambda|-c_2\lambda}\, .
$$
This completes the proof of Lemma~\ref{armen}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm1}}
Theorem~\ref{thm1} follows from the following statement.
\begin{lemme}\label{rudin}
The sequence $\int_{\Theta}V(S_{N}(u))$ converges to $\int_{\Theta}V(u)$ in $L^1(d\mu)$.
\end{lemme}
\begin{proof}
Let us first show that $(\int_{\Theta}V(S_{N}(u)))_{N\in\mathbb N}$ is a Cauchy sequence in $L^1(d\mu)$.
From the Sobolev embedding, we have that for a fixed $N$ the map from $H^s_{rad}(\Theta)$ to $\mathbb R$ defined by
$u\mapsto \int_{\Theta}V(S_{N}(u))$ is continuous and thus measurable.
Write, for $N<M$, using (\ref{rast})
\begin{multline*}
\Big\|\int_{\Theta}V(S_{N}(u))-\int_{\Theta}V(S_{M}(u))\Big\|_{L^1(H^s_{rad};{\mathcal B},d\mu(u))}
\\
\leq C\Big\|\int_{\Theta}|S_{N}(u)-S_{M}(u)|(1+|S_{N}(u)|^{\alpha+1}+|S_{M}(u)|^{\alpha+1})
\Big\|_{L^1(H^s_{rad};{\mathcal B},d\mu(u))}\,.
\end{multline*}
Using the H\"older inequality, we get
\begin{multline*}
\Big|\int_{\Theta}|S_{N}(u)-S_{M}(u)|(1+|S_{N}(u)|^{\alpha+1}+|S_{M}(u)|^{\alpha+1})\Big|
\\
\leq \|S_{N}(u)-S_{M}(u)\|_{L^{\alpha+2}(\Theta)}\big(C+\|S_{N}(u)\|_{L^{\alpha+2}(\Theta)}^{\alpha+1}+
\|S_{M}(u)\|_{L^{\alpha+2}(\Theta)}^{\alpha+1}\big).
\end{multline*}
Another use of the H\"older inequality, this time with respect to $d\mu$ gives
\begin{multline*}
\Big\|\int_{\Theta}V(S_{N}(u))-\int_{\Theta}V(S_{M}(u))\Big\|_{L^1(d\mu(u))}\leq
C\big\|\|S_{N}(u)-S_{M}(u)\|_{L^{\alpha+2}(\Theta)}\big\|_{L^{\alpha+2}(d\mu(u))}
\\
\times\Big(1+\big\|\|S_{N}(u)\|_{L^{\alpha+2}(\Theta)}\big\|_{L^{\alpha+2}(d\mu(u))}^{\alpha+1}+
\big\|\|S_{M}(u)\|_{L^{\alpha+2}(\Theta)}\big\|_{L^{\alpha+2}(d\mu(u))}^{\alpha+1}\Big).
\end{multline*}
Thus
\begin{multline}\label{sarbia}
\Big\|\int_{\Theta}V(S_{N}(u))-\int_{\Theta}V(S_{M}(u))\Big\|_{L^1(d\mu(u))}\leq
C\|\varphi_{N}-\varphi_{M}\|_{L^{\alpha+2}(\Theta\times\Omega)}
\\
\times\Big(1+\big\|\varphi_{N}\|_{L^{\alpha+2}(\Theta\times\Omega)}^{\alpha+1}
+\big\|\varphi_{M}\|_{L^{\alpha+2}(\Theta\times\Omega)}^{\alpha+1}\Big),
\end{multline}
where $\varphi_{N}$ is defined by (\ref{3}).
Let us now prove that there exists $C>0$ such that for every $N$,
\begin{equation}\label{ravn}
\|\varphi_{N}\|_{L^{\alpha+2}(\Omega\times\Theta)}\leq C.
\end{equation}
Using Lemma~\ref{lem1} with $c_n=z_{n}^{-1}e_{n}(r)$, $1\leq n \leq N$ and the definition of the
$L^{\alpha+2}$ norms by the aide of the distributional function, we obtain that for a fixed $r$
\begin{eqnarray*}
\|\varphi_{N}(\omega,r)\|_{L^{\alpha+2}(\Omega)}^{\alpha+2} & = &(\alpha+2)\int_{0}^{\infty}\lambda^{\alpha+1}
p\Big(\omega\,:\,\big|\varphi_{N}(\omega,r)\big|>\lambda\Big)d\lambda
\\
& \leq &
C\int_{0}^{\infty}\lambda^{\alpha+1}
\exp\Big(-(\beta\lambda^2)/\big(\sum_{n=1}^{N}z_{n}^{-2}|e_{n}(r)|^{2}\big)\Big)d\lambda
\\
&= &
C\big(\int_{0}^{\infty}\lambda^{\alpha+1}e^{-\beta\lambda^2}d\lambda\big)
\big(\sum_{n= 1}^{N}z_{n}^{-2}|e_{n}(r)|^{2}\big)^{\frac{\alpha+2}{2}}\,.
\end{eqnarray*}
Therefore
$$
\|\varphi_{N}(\omega,r)\|_{L^{\alpha+2}(\Omega)}\leq
C\big(\sum_{n=1}^{N}z_{n}^{-2}|e_{n}(r)|^{2}\big)^{\frac{1}{2}}\,.
$$
Squaring, taking the $L^{\frac{\alpha+2}{2}}(\Theta)$ norm and using the triangle inequality, we get
$$
\|\varphi_{N}\|_{L^{\alpha+2}(\Omega\times\Theta)}^{2}\leq
C\sum_{n=1}^{N}z_n^{-2}\|e_{n}\|_{L^{\alpha+2}(\Theta)}^{2}\,.
$$
On the other hand, it is shown in \cite{Tz} that for $\alpha<2$ one has that
$\|e_{n}\|_{L^{\alpha+2}(\Theta)}$ is uniformly bounded (with respect to $n$),
for $\alpha=2$, $\|e_{n}\|_{L^{\alpha+2}(\Theta)}\leq C\log(1+z_n)^{1/4}$
and for $\alpha>2$, $\|e_{n}\|_{L^{\alpha+2}(\Theta)}\leq Cz_n^{1/2-2/(\alpha+2)}$.
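For instance, in the worst case $\alpha>2$, these bounds combined with $z_n\sim n$ give
$$
z_n^{-2}\|e_{n}\|_{L^{\alpha+2}(\Theta)}^{2}\leq C\, n^{-2}\, n^{1-\frac{4}{\alpha+2}}=C\, n^{-1-\frac{4}{\alpha+2}},
$$
which is summable in $n$.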
We therefore obtain that there exists $C$ such that for every $N\in \mathbb N$,
$\|\varphi_{N}\|_{L^{\alpha+2}(\Omega\times\Theta)}\leq C,$
that is, (\ref{ravn}) holds.
\\
Similarly, we may obtain that
\begin{equation}\label{ravn2}
\|\varphi_{N}-\varphi_{M}\|_{L^{\alpha+2}(\Omega\times\Theta)}^{2}\leq
C\sum_{n=N+1}^{M}z_n^{-2}\|e_{n}\|_{L^{\alpha+2}(\Theta)}^{2}
\end{equation}
which tends to zero as $N\rightarrow\infty$ thanks to the bounds on the growth of
$\|e_{n}\|_{L^{\alpha+2}(\Theta)}$.
Moreover, we have that
\begin{equation}\label{noel-pak}
\lim_{N\rightarrow\infty}\varphi_{N}=\varphi\quad{\rm in}\quad L^{\alpha+2}(\Theta\times\Omega)
\end{equation}
(we can identify the limit thanks to the $L^{2}(\Theta\times\Omega)$
convergence of $\varphi_{N}$ to $\varphi$ and the fact that
$L^{\alpha+2}(\Theta\times\Omega)$ convergence implies
$L^{2}(\Theta\times\Omega)$ convergence).
On the other hand thanks to (\ref{defocus}), we can write
$
V(u)=V_1(u)+V_2(u),
$
where $V_1\geq 0$ and
$
|V_2(u)|\leq C\big(1+|u|^{\beta}\big).
$
Thanks to the Sobolev embedding and (\ref{s}), we obtain that $\int_{\Theta}V_{2}(u)$
is continuous on $H^s_{rad}(\Theta)$.
Therefore the map $u\mapsto \int_{\Theta}V_{2}(u)$ is a $\mu$ measurable real valued function.
Let us next show that the map $u\mapsto \int_{\Theta}V_{1}(u)$ is $\mu$ measurable.
For that purpose, it is sufficient to show that the map
\begin{equation}\label{ccc}
c\equiv (c_n)_{n\in\mathbb N}\longmapsto \int_{\Theta}V_{1}\Big(\sum_{n\in\mathbb N}c_{n}e_{n,s}\Big)
\end{equation}
is measurable from $l^2(\mathbb N)$ to $\mathbb R$. Indeed, we have that the map
$$
(c,r)\longmapsto \sum_{n\in\mathbb N}c_{n}e_{n,s}(r)
$$
is measurable from $l^2(\mathbb N)\times\Theta$ to $\mathbb R$ since we can see
$\sum_{n\in\mathbb N}c_{n}e_{n,s}(r)$ as the limit of $\sum_{n=1}^{N}c_{n}e_{n,s}(r)$
as $N\rightarrow\infty$ in $L^2( l^2(\mathbb N)\times \Theta)$ where $l^2(\mathbb N)$ is
equipped with the measure $d\mu(c)$ introduced in the
introduction. Therefore $V_1\Big( \sum_{n\in\mathbb N}c_{n}e_{n,s} \Big)$ is a
measurable map from $l^2(\mathbb N)\times\Theta$ to $\mathbb R$.
Since $V_1\geq 0$, using for instance the Fubini theorem,
we obtain that the map (\ref{ccc}) is indeed measurable.
This in turn implies the measurability of the map $u\mapsto \int_{\Theta}V(u)$.
Next, similarly to the proof of (\ref{sarbia}), we get
\begin{multline*}
\Big\|\int_{\Theta}V(S_{N}(u))-\int_{\Theta}V(u)\Big\|_{L^1(d\mu(u))}\leq
C\|\varphi-\varphi_{N}\|_{L^{\alpha+2}(\Theta\times\Omega)}
\\
\times\Big(1+\big\|\varphi_{N}\|_{L^{\alpha+2}(\Theta\times\Omega)}^{\alpha+1}
+\big\|\varphi\|_{L^{\alpha+2}(\Theta\times\Omega)}^{\alpha+1}\Big)\,.
\end{multline*}
Therefore
$$
\lim_{N\rightarrow\infty}\Big\|\int_{\Theta}V(S_{N}(u))-
\int_{\Theta}V(u)\Big\|_{L^1(H^s_{rad};{\mathcal B},d\mu(u))}=0\,.
$$
This completes the proof of Lemma~\ref{rudin}.
\end{proof}
Using Lemma~\ref{rudin}, we have that $\int_{\Theta}V(u)\in L^1(d\mu(u))$ and thus
$\int_{\Theta}V(u)$ is finite $\mu$ a.s.
This proves that $d\rho$ is indeed a nontrivial measure.
This completes the proof of Theorem~\ref{thm1}.
\qed
\subsection{The necessity of the probabilistic argument}
In this section we make a slight digression by showing that for $\alpha\geq 2$ an argument based only on
the Sobolev embedding cannot establish that $\int_{\Theta}V(u)$ is finite
$\mu$ a.s. More precisely we know that for every $\sigma<1/2$, $\|u\|_{H^{\sigma}(\Theta)}$
is finite $\mu$ a.s. Therefore the deterministic inequality
\begin{equation}\label{wrong}
\exists\,\, \sigma<1/2,\,\,\exists\,\, C>0,\,\, \forall\,\, u\in H^{\sigma}_{rad}(\Theta),\quad
\|u\|_{L^{\alpha+2}(\Theta)}\leq C\|u\|_{H^{\sigma}_{rad}(\Theta)}
\end{equation}
would suffice to conclude that $\int_{\Theta}V(u)$ is finite
$\mu$ a.s. We have however the following statement.
\begin{lemme}\label{scaling}
For $\alpha\geq 2$, estimate (\ref{wrong}) fails.
\end{lemme}
\begin{proof}
We shall give the proof for $\alpha=2$. The construction for $\alpha>2$ is similar.
Suppose that (\ref{wrong}) holds for some $\sigma<1/2$. Using the
Cauchy-Schwarz inequality, we obtain that there exists $\theta\in ]0,1/2]$
such that
$$
\exists\, C>0\,:\, \forall\, u\in H^1_{rad}(\Theta),\quad
\|u\|_{H^{\sigma}(\Theta)}\leq C
\|u\|_{L^{2}(\Theta)}^{\frac{1}{2}+\theta}\|u\|_{H^{1}_{rad}(\Theta)}^{\frac{1}{2}-\theta}
$$
(observe that $H^1_{rad}(\Theta)$ may be seen as the completion of $C_{0}^{\infty}(\Theta)$
radial functions with respect to the $H^1(\Theta)$ norm).
Thus by applying (\ref{wrong}) to $H^1_{rad}(\Theta)$ functions, we obtain that
\begin{equation}\label{wrong-bis}
\exists\, C>0\,:\,\forall\,\, u\in H^{1}_{rad}(\Theta),\quad \|u\|_{L^{4}(\Theta)}\leq C
\|u\|_{L^{2}(\Theta)}^{\frac{1}{2}+\theta}\|u\|_{H^{1}(\Theta)}^{\frac{1}{2}-\theta}\,.
\end{equation}
We now show that (\ref{wrong-bis}) fails. Let $v\in C_{0}^{\infty}(\Theta)$ be a radial bump function,
not identically zero.
We can naturally see $v$ as a $C_{0}^{\infty}(\mathbb R^2)$ function. For $\lambda\geq 1$, we set
$$
v_{\lambda}(x_1,x_2)\equiv v(\lambda x_1,\lambda x_2)\,.
$$
Thus $v_{\lambda}\in C_{0}^{\infty}(\Theta)$ and $v_{\lambda}$ is still radial.
We can therefore substitute $v_{\lambda}$ in (\ref{wrong-bis}) and obtain a contradiction in the limit
$\lambda\rightarrow \infty$. More precisely, one may directly check that for $\lambda\gg 1$,
$$
\|v_{\lambda}\|_{L^{4}(\Theta)}\sim \lambda^{-\frac{1}{2}},\quad
\|v_{\lambda}\|_{L^{2}(\Theta)}\sim \lambda^{-1},\quad\|v_{\lambda}\|_{H^{1}(\Theta)}\sim 1.
$$
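Indeed, substituting these asymptotics into (\ref{wrong-bis}) yields
$\lambda^{-\frac{1}{2}}\leq C\lambda^{-\frac{1}{2}-\theta}$ for $\lambda\gg 1$,
which is impossible since $\theta>0$.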
This completes the proof of Lemma~\ref{scaling}.
\end{proof}
\section{Proof of Theorem~\ref{thm2} (integrability and convergence properties)}
\subsection{Convergence in measure}
Let us define the $\mu$ measurable functions $f$ and $f_{N}$ by
$$
f(u)\equiv\chi\big(\|u\|_{L^2(\Theta)}\big) \exp\Big(-\int_{\Theta}V(u)\Big)
$$
and
$$
f_{N}(u)\equiv\chi\big(\|S_{N}(u)\|_{L^2(\Theta)}\big) \exp\Big(-\int_{\Theta}V(S_{N}(u))\Big)\,.
$$
We start by the following convergence property.
\begin{lemme}\label{lem2}
The sequence $(f_{N}(u))_{N\in\mathbb N}$ converges in measure as $N$ tends to infinity,
with respect to the measure $\mu$, to
$f(u)$.
\end{lemme}
\begin{proof}
Since $\chi$ and the exponential are continuous functions, it suffices to show that
the sequence $\|S_{N}u\|_{L^2(\Theta)}$ converges in measure as $N$ tends to infinity,
with respect to the measure $\mu$, to
$\|u\|_{L^2(\Theta)}$ and that
the sequence $\int_{\Theta}V(S_{N}(u))$ converges in measure as $N$ tends to infinity,
with respect to the measure $\mu$, to $\int_{\Theta}V(u)$.
Thanks to the Chebyshev inequality, it therefore suffices to prove that
$\|S_{N}u\|_{L^2(\Theta)}$ converges in $L^2(d\mu(u))$ to $\|u\|_{L^2(\Theta)}$ and that
$\int_{\Theta}V(S_{N}(u))$
converges in $L^1(d\mu(u))$ to $\int_{\Theta}V(u)$.
The first assertion is trivial and the second one follows from Lemma~\ref{rudin}.
This completes the proof of Lemma~\ref{lem2}.
\end{proof}
\subsection{A gaussian estimate}
We now state a property of the measure $\mu$ resulting from its
gaussian nature.
\begin{lemme}\label{gauss}
Let $\sigma\in [s,1/2[$. There exist $C>0$ and $c>0$ such that for every integers $M\geq N\geq 0$
(with the convention that $S_{0}\equiv 0$), every real number $\lambda\geq 1$,
$$
\mu
\Big(
u\in H^s_{rad}(\Theta)\, :\,
\big\|S_{M}(u)-S_{N}(u)\big\|_{H^{\sigma}(\Theta)}>\lambda\Big)
\leq Ce^{-c\lambda^{2}(1+N)^{2(1-\sigma)}}\,.
$$
\end{lemme}
\begin{proof}
We follow the argument given in \cite[Proposition~3.3]{Tz}.
It suffices to prove that $p(A_{N,M})\leq C\exp\big(-c\lambda^{2}(1+N)^{2(1-\sigma)}\big)$, where
$$
A_{N,M}\equiv \Big(\omega\in\Omega\, :\,
\big\|\varphi_{M}(\omega,\cdot)-\varphi_{N}(\omega,\cdot)\big\|_{H^{\sigma}(\Theta)}>\lambda\Big)\,.
$$
Let $\theta>0$ be such that $2\theta<1-2\sigma$. Notice that a proper choice of $\theta$ is possible
thanks to the assumption $\sigma<1/2$.
For $0\leq N_1\leq N_2$ two integers and $\kappa>0$,
we consider the set $A_{N_1,N_2,\kappa}$, defined by
$$
A_{N_1,N_2,\kappa}\equiv \Big(\omega\in\Omega\, :\,
\big\|\varphi_{N_2}(\omega,\cdot)-\varphi_{N_1}(\omega,\cdot)\big\|_{H^{\sigma}(\Theta)}>\kappa\lambda
\big((1+N_2)^{-\theta}+
\Big(\frac{1+N}{1+N_2}\Big)^{1-\sigma}\big)\Big)\,.
$$
Let $L_1$, $L_2$ be two dyadic integers such that
$$L_1/2<1+N\leq L_1,\quad L_2\leq M< 2L_2.$$
We will only analyse the case $L_1\leq L_{2}/2$. If $L_{1}> L_{2}/2$ then the
analysis is simpler. Indeed, if $L_{1}> L_{2}/2$ then
$L_1\geq L_2$ which implies
$$
L_1/2<1+N\leq 1+M<1+2L_2<4L_1
$$
and the analysis of the case $L_1\leq L_{2}/2$ below (see (\ref{lidle2}),
(\ref{lidle3})) can be performed to this case by writing
$$
\varphi_{M}-\varphi_{N}=(\varphi_{L_1}-\varphi_{N})+(\varphi_{M}-\varphi_{L_1})
$$
(without the summation issue).
We thus assume that $L_1\leq L_{2}/2$.
Write
$$\varphi_{M}-\varphi_{N}=(\varphi_{L_1}-\varphi_{N})+\Big(\sum_{\stackrel{L_1\leq L\leq L_2/2}
{ L-{\rm dyadic }}}(\varphi_{2L}-\varphi_{L})\Big)+(\varphi_{M}-\varphi_{L_2}).$$
Using the triangle inequality and summing-up geometric series, we obtain that
there exists a sufficiently small $\kappa>0$
depending on $\sigma$ but independent of $\lambda$, $N$ and $M$ such that
\begin{equation}\label{union}
A_{N,M}\subset\,A_{N,L_1,\kappa}\bigcup\,\Big(\bigcup_{\stackrel{L_1\leq L\leq L_2/2}{ L-{\rm dyadic }}} \,
A_{L,2L,\kappa}\Big)\,\bigcup A_{L_2,M,\kappa}\,.
\end{equation}
Since $z_n \sim n$, for $\omega \in A_{L,2L,\kappa}$,
$$\sum_{n=L+1}^{2L}|g_{n}(\omega)|^{2}\geq c\lambda^{2}L^{2-2\sigma}
\big(L^{-2\theta}+(L^{-1}(1+N))^{2-2\sigma}\big).
$$
Therefore using Lemma~\ref{armen} and that $2-2\sigma-2\theta>1$, we obtain that for $\lambda\geq 1$,
\begin{equation}\label{lidle1}
p(A_{L,2L,\kappa})\leq e^{c_1L-c_{2}\lambda^{2}(L^{2-2\sigma-2\theta}+(1+N)^{2-2\sigma})}
\leq
e^{-c\lambda^2 (1+N)^{2-2\sigma}}e^{-c\lambda^{2}L^{2-2\sigma-2\theta}},
\end{equation}
where the constant $c>0$ is independent of $L,N,M$ and $\lambda$.
Similarly
\begin{equation}\label{lidle2}
p(A_{N,L_1,\kappa})\leq e^{-c\lambda^2 (1+N)^{2-2\sigma}}e^{-c\lambda^{2}L_1^{2-2\sigma-2\theta}}
\leq e^{-c\lambda^2 (1+N)^{2-2\sigma}}
\end{equation}
and
\begin{equation}\label{lidle3}
p(A_{L_2,M,\kappa})\leq e^{-c\lambda^2 (1+N)^{2-2\sigma}}e^{-c\lambda^{2}L_2^{2-2\sigma-2\theta}}
\leq e^{-c\lambda^2 (1+N)^{2-2\sigma}}\,.
\end{equation}
Collecting estimates (\ref{lidle1}), (\ref{lidle2}), (\ref{lidle3}), coming back to (\ref{union})
and summing an obviously convergent series in $L$ completes the proof of Lemma~\ref{gauss}.
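For instance, the convergence of the series in $L$ follows from $2-2\sigma-2\theta>1$: for $\lambda\geq 1$,
$$
\sum_{\stackrel{L\geq 1}{ L-{\rm dyadic }}}e^{-c\lambda^{2}L^{2-2\sigma-2\theta}}
\leq\sum_{k=0}^{\infty}e^{-c\lambda^{2}2^{k(2-2\sigma-2\theta)}}\leq C,
$$
uniformly in $\lambda\geq 1$, $N$ and $M$, which, combined with the common factor $e^{-c\lambda^{2}(1+N)^{2-2\sigma}}$ in (\ref{lidle1})-(\ref{lidle3}), yields the bound of Lemma~\ref{gauss}.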
\end{proof}
\subsection{Uniform integrability}
We next prove the crucial uniform integrability property of $f_{N}$.
\begin{lemme}\label{lem5/2}
Let us fix $p\in[1,\infty[$. Then there exists $C>0$ such that for every $M\in\mathbb N$,
$$
\int_{H^s_{rad}(\Theta)}|f_{M}(u)|^{p}d\mu(u)\leq C\,.
$$
\end{lemme}
\begin{proof}
Using (\ref{defocus}), we observe that it suffices to prove that
$$
\exists\,\, C>0,\,\,\forall M\in\mathbb N,\,\,\,
\int_{\Omega}\chi^{p}\big(\|\varphi_{M}(\omega,\cdot)\|_{L^2(\Theta)}\big)
\exp\big(Cp\|\varphi_{M}(\omega,\cdot)\|^{\beta}_{L^{\beta}(\Theta)}\big)dp(\omega)\leq C.
$$
Using the Sobolev inequality, we infer that
$$
\|\varphi_{M}(\omega,\cdot)\|_{L^{\beta}(\Theta)}\leq C\|\varphi_{M}(\omega,\cdot)\|_{H^{\sigma}(\Theta)},
$$
provided
\begin{equation}\label{sob}
\sigma\geq 2\big(\frac{1}{2}-\frac{1}{\beta}\big)\,.
\end{equation}
Observe that since $\beta<4$ there exists $\sigma\in [s,1/2[$ satisfying (\ref{sob}).
Let us fix such a value of $\sigma$ for the sequel of the proof.
Since $\chi$ has compact support, we need to study the convergence of the integral
$$
\int_{\lambda_0}^{\infty}h_{M}(\lambda)d\lambda,
$$
with
$$
h_{M}(\lambda)\equiv p\Big(\omega\in\Omega\,:\, \|\varphi_{M}(\omega,\cdot)\|_{H^{\sigma}(\Theta)}\geq
c(\log(\lambda))^{\frac{1}{\beta}},\quad
\|\varphi_{M}(\omega,\cdot)\|_{L^{2}(\Theta)}\leq C
\Big),
$$
where $c$ and $C$ are independent of $\lambda$ and $M$ ($C$ is depending on
the support of $\chi$) and $\lambda_0$ is a large constant, independent of $M$, to be fixed later.
Since for $N\leq M$,
\begin{equation}\label{empty}
\|\varphi_{N}(\omega,\cdot)\|_{H^{\sigma}(\Theta)}\leq
C N^{\sigma}\|\varphi_{N}(\omega,\cdot)\|_{L^{2}(\Theta)}
\leq
C N^{\sigma}\|\varphi_{M}(\omega,\cdot)\|_{L^{2}(\Theta)}
\end{equation}
we obtain that there exists $\alpha>0$, independent of $M$ and $\lambda$ such that if $M$ satisfies
$
M \leq
\alpha (\log(\lambda))^{\frac{1}{\sigma\beta}}
$
then $h_{M}(\lambda)=0$ (use (\ref{empty}) with $M=N$).
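To make this step explicit: on the event defining $h_{M}(\lambda)$, the bound (\ref{empty}) with $N=M$ gives
$$
c(\log(\lambda))^{\frac{1}{\beta}}\leq\|\varphi_{M}(\omega,\cdot)\|_{H^{\sigma}(\Theta)}
\leq CM^{\sigma}\|\varphi_{M}(\omega,\cdot)\|_{L^{2}(\Theta)}\leq C_{1}M^{\sigma},
$$
which is impossible for $M^{\sigma}<cC_{1}^{-1}(\log(\lambda))^{\frac{1}{\beta}}$, i.e. for $M\leq\alpha(\log(\lambda))^{\frac{1}{\sigma\beta}}$ with a suitable $\alpha>0$.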
We can therefore assume that $M>\alpha
(\log(\lambda))^{\frac{1}{\sigma\beta}}$.
Let us fix $\lambda\geq \lambda_0$. Define $N$ as the integer part of
$
\alpha(\log(\lambda))^{\frac{1}{\sigma\beta}-\delta},
$
where $\delta$ is such that
\begin{equation}\label{viena}
0<\delta<\frac{2-\sigma \beta}{2\sigma\beta(1-\sigma)}\,.
\end{equation}
Let us notice that a proper choice of $\delta$ is possible since $\beta<4$ and $\sigma<1/2$.
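Indeed, multiplying (\ref{viena}) by $2(1-\sigma)$ shows that it is equivalent to
$$
2\delta(1-\sigma)<\frac{2-\sigma\beta}{\sigma\beta}=\frac{2}{\sigma\beta}-1,
\quad\mbox{that is}\quad
\frac{2}{\sigma\beta}-2\delta(1-\sigma)>1,
$$
and the right hand-side of (\ref{viena}) is positive precisely because $\sigma\beta<2$.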
Observe also that for $\lambda_0\gg 1$, depending only on $\alpha$, we have
$N\geq 1$ and $N\leq M$.
Using (\ref{empty}), we obtain that the event
$$
\Big(\omega\in\Omega\,:\, \|\varphi_{N}(\omega,\cdot)\|_{H^{\sigma}(\Theta)}\geq
\frac{c}{2}(\log(\lambda))^{\frac{1}{\beta}},\quad
\|\varphi_{M}(\omega,\cdot)\|_{L^{2}(\Theta)}\leq C
\Big)
$$
is of probability zero for $\lambda\geq\lambda_0$, where $\lambda_0$ is a large constant independent of $M$.
At this place we fix the value of $\lambda_0$.
Using the triangle inequality, we obtain that for $\lambda\geq\lambda_0$,
$$
h_{M}(\lambda)\leq
p\Big(\omega\in\Omega\,:\, \|\varphi_{M}(\omega,\cdot)-\varphi_{N}(\omega,\cdot)\|_{H^{\sigma}(\Theta)}\geq
\frac{c}{2}(\log(\lambda))^{\frac{1}{\beta}}
\Big).
$$
Using Lemma~\ref{gauss}, we arrive at
\begin{eqnarray*}
h_{M}(\lambda) & \leq &
Ce^{-c(\log(\lambda))^{\frac{2}{\beta}}(1+N)^{2(1-\sigma)}}
\\
& \leq &
Ce^{-c(\log(\lambda))^{\frac{2}{\beta}}
(\log(\lambda))^{\frac{2(1-\sigma)}{\sigma\beta}-2\delta(1-\sigma)}}
\\
& = &
Ce^{-c(\log(\lambda))^{\frac{2}{\sigma\beta}-2\delta(1-\sigma)}}
\,.
\end{eqnarray*}
Thanks to (\ref{viena}), we have that $\frac{2}{\sigma\beta}-2\delta(1-\sigma)>1$
and therefore $h_{M}(\lambda)$ is
integrable on the interval $[\lambda_0,\infty[$.
The integrability on $[0,\lambda_0]$ is direct since $0\leq h_{M}(\lambda)\leq
1$. This completes the proof of Lemma~\ref{lem5/2}.
\end{proof}
\begin{remarque}
The exponent $\beta=4$ appears as critical in the above argument, a fact which reflects
the critical nature of the cubic non-linearity for the $2d$ NLS. This fact may be related to
a blow-up for the cubic focusing NLS for data of positive $\mu$ measure. This is however an open problem
(see the final section of \cite{Tz}).
\end{remarque}
Using Lemma~\ref{lem5/2}, we readily arrive at the following statement.
\begin{lemme}\label{gauss-bis}
Let $\sigma\in [s,1/2[$. There exist $C>0$ and $c>0$ such that for every integer $M\geq 1$,
every real number $\lambda\geq 1$,
$$
\tilde{\rho}_{M}
\Big(
u\in H^s_{rad}(\Theta)\, :\, \|S_{M}(u)\|_{H^{\sigma}(\Theta)}>\lambda\Big)
\leq Ce^{-c\lambda^{2}}\,.
$$
\end{lemme}
\begin{proof}
It suffices to use the Cauchy-Schwarz inequality, Lemma~\ref{gauss} and Lemma~\ref{lem5/2}.
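To spell out the argument: for the event $A$ in question one has $\tilde{\rho}_{M}(A)=\int_{A}f_{M}(u)\,d\mu(u)$ (note that $u\in A$ if and only if $S_{M}(u)\in A$), so that, by the Cauchy-Schwarz inequality,
$$
\tilde{\rho}_{M}(A)\leq
\Big(\int_{H^s_{rad}(\Theta)}|f_{M}(u)|^{2}d\mu(u)\Big)^{\frac{1}{2}}\,
\big[\mu(A)\big]^{\frac{1}{2}}\leq Ce^{-\frac{c}{2}\lambda^{2}},
$$
where we applied Lemma~\ref{lem5/2} with $p=2$ and Lemma~\ref{gauss} with $N=0$ (recall the convention $S_{0}\equiv 0$).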
\end{proof}
Another consequence of Lemma~\ref{lem5/2} is the integrability of $f(u)$.
\begin{lemme}\label{integrability}
For every $p\in[1,\infty[$, $f(u)\in L^p(H^s_{rad}(\Theta),{\mathcal B},d\mu(u))$.
\end{lemme}
\begin{proof}
Using Lemma~\ref{lem2}, we obtain that there is a sub-sequence $N_k$ such that the sequence
$(f_{N_k}(u))_{k\in\mathbb N}$ converges to
$f(u)$, $\mu$ almost surely. Thanks to Lemma~\ref{lem5/2} $(f_{N_k}(u))_{k\in\mathbb N}$ is uniformly bounded
in $L^p(H^s_{rad}(\Theta),{\mathcal B}, d\mu)$. Using Fatou's lemma we deduce that $f(u)$ belongs to
$L^p(H^s_{rad}(\Theta),{\mathcal B}, d\mu)$ with a norm bounded by the liminf of the norms of $f_{N_k}(u)$'s.
This completes the proof of Lemma~\ref{integrability}.
\end{proof}
\subsection{End of the proof of Theorem~\ref{thm2}}
We have the following convergence property which yields the assertion of Theorem~\ref{thm2}
in the particular case $U=F=H^s_{rad}(\Theta)$.
\begin{lemme}\label{lem3}
Let us fix $p\in[1,\infty[$.
The following holds true:
$$
\lim_{N\rightarrow\infty}\int_{H^s_{rad}(\Theta)}|f_{N}(u)-f(u)|^{p}d\mu(u)=0\,.
$$
\end{lemme}
\begin{proof}
Let us fix $\varepsilon >0$. Consider the set
$$
A_{N,\varepsilon}\equiv \big(
u\in H^{s}_{rad}(\Theta)\,:\, |f_{N}(u)-f(u)|\leq \varepsilon
\big).
$$
Denote by $A_{N,\varepsilon}^{c}$ the complementary set in
$H^{s}_{rad}(\Theta)$ of $A_{N,\varepsilon}$.
Observe that $f$ and $f_{N}$ belong to $L^{2p}(d\mu)$ with norms bounded uniformly in $N$.
Then, using the H\"older inequality, we get
$$
\Big|
\int_{A_{N,\varepsilon}^{c}}|f_{N}(u)-f(u)|^{p}d\mu(u)
\Big|^{\frac{1}{p}}
\leq
\|f_N-f\|_{L^{2p}(d\mu)}[\mu(A_{N,\varepsilon}^{c})]^{\frac{1}{2p}}\leq
C[\mu(A_{N,\varepsilon}^{c})]^{\frac{1}{2p}}\,.
$$
On the other hand
$$
\int_{A_{N,\varepsilon}}|f_{N}(u)-f(u)|^{p}d\mu(u)\leq\varepsilon^{p}
$$
and thus we have the needed assertion since the convergence in measure of $f_N$ to $f$
implies that for a fixed $\varepsilon$,
$
\lim_{N\rightarrow \infty}\mu(A_{N,\varepsilon}^{c})=0.
$
This completes the proof of Lemma~\ref{lem3}.
\end{proof}
We can now turn to the proof of Theorem~\ref{thm2}. We follow the arguments of \cite[Lemma~3.8]{Tz}.
If we set
$$
U_{N}\equiv\big\{u\in H^s_{rad}(\Theta)\,:\, S_{N}(u)\in U\big\}
$$
then
$$
U\subset \liminf_{N}(U_{N}),
$$
where
$$
\liminf_{N}(U_{N})\equiv \bigcup_{N=1}^{\infty}\bigcap_{N_1= N}^{\infty}U_{N_1}\,.
$$
Indeed, we have that for every $u\in H^{\sigma}_{rad}(\Theta)$,
$S_{N}(u)$ converges to $u$ in $H^{\sigma}_{rad}(\Theta)$, as $N$ tends to $\infty$.
Therefore, using that $U$ is an open set, we conclude that for every $u\in U$
there exists $N_{0}\geq 1$ such that for $N\geq N_0$ one has $u\in
U_{N}$. Hence we have
$
U\subset \liminf_{N}(U_{N}).
$
If $A$ is a $\rho$-measurable set, we denote by ${\rm 1~\hspace{-1.4ex}l} _{A}$ the characteristic
function of $A$.
Notice that thanks to the property $U\subset \liminf_{N}(U_{N})$,
$$
\liminf_{N\rightarrow\infty}{\rm 1~\hspace{-1.4ex}l} _{U_{N}}\geq{\rm 1~\hspace{-1.4ex}l} _{U}.
$$
Recall that
$$
\tilde{\rho}_{N}(U)=\rho_{N}(U\cap E_{N})=\int_{H^s_{rad}(\Theta)}{\rm 1~\hspace{-1.4ex}l} _{U_N}(u)f_{N}(u)d\mu(u)\,.
$$
Using Lemma~\ref{lem3}, we observe that
$$
\lim_{N\rightarrow\infty}\Big(\int_{H^s_{rad}(\Theta)}{\rm 1~\hspace{-1.4ex}l} _{U_N}(u)f_{N}(u)d\mu(u)-
\int_{H^s_{rad}(\Theta)}{\rm 1~\hspace{-1.4ex}l} _{U_N}(u)f(u)d\mu(u)\Big)=0\,.
$$
Next, using the Fatou lemma, we get
\begin{eqnarray*}
\liminf_{N\rightarrow\infty}\rho_{N}(U\cap E_{N}) & = &
\liminf_{N\rightarrow\infty}
\int_{H^s_{rad}(\Theta)}{\rm 1~\hspace{-1.4ex}l} _{U_N}(u)f(u)d\mu(u)
\\
& \geq &
\int_{H^s_{rad}(\Theta)}{\rm 1~\hspace{-1.4ex}l} _{U}(u)f(u)d\mu(u)
\\
& = &\int_{U}f(u)d\mu(u)=\rho(U)\,.
\end{eqnarray*}
This proves (\ref{parvo}).
Observe that Lemma~\ref{lem3} implies that
$$
\lim_{N\rightarrow\infty}\rho_{N}(E_{N})=\rho(H^s_{rad}(\Theta))\,.
$$
Therefore to prove (\ref{vtoro}), it suffices to use (\ref{parvo}) by passing to complementary sets
(as in \cite{Tz}, we could give a direct proof of (\ref{vtoro})).
This completes the proof of Theorem~\ref{thm2}.
\qed
\begin{remarque}
Let us observe that the reasoning in the proof of Theorem~\ref{thm2} is of quite general nature.
It suffices to know that:
\begin{itemize}
\item
$(f_{N})$ is bounded uniformly with respect to $N$ in $L^{p}(d\mu)$ for some $p>1$.
\item
$(f_{N})$ converges to $f$ in measure.
\end{itemize}
\end{remarque}
\subsection{A corollary of Theorem~\ref{thm2}}
Combining Lemma~\ref{gauss-bis} and Theorem~\ref{thm2}, we arrive at the following statement.
\begin{lemme}\label{gauss-tris}
Let $\sigma\in [s,1/2[$. There exist $C>0$ and $c>0$ such that for every real number $\lambda\geq 1$,
$$
\rho\Big(u\in H^s_{rad}(\Theta)\, :\, \|u\|_{H^{\sigma}(\Theta)}\in ]\lambda,\infty[\Big)\leq Ce^{-c\lambda^{2}}\,.
$$
\end{lemme}
\begin{proof}
It suffices to apply Theorem~\ref{thm2} to the open set of $H^{\sigma}_{rad}(\Theta)$,
$$
U=\Big(u\in H^s_{rad}(\Theta)\, :\, \|u\|_{H^{\sigma}(\Theta)}\in]\lambda,\infty[\Big)
$$
and to observe that $\tilde{\rho}_{N}(U)=\tilde{\rho}_{N}(U_{N})$, where
\begin{equation*}
U_{N}=\Big(u\in H^s_{rad}(\Theta)\, :\, \|S_{N}(u)\|_{H^{\sigma}(\Theta)}\in]\lambda,\infty[\Big).
\end{equation*}
Thus by Lemma~\ref{gauss-bis}, $\tilde{\rho}_{N}(U)\leq C\exp(-c\lambda^2)$ which,
combined with Theorem~\ref{thm2}, completes the proof of
Lemma~\ref{gauss-tris}.
\end{proof}
\begin{remarque}\label{rem}
As a consequence of Lemma~\ref{gauss-tris} one obtains that for $\sigma\in [s,1/2[$ one has
$\rho(H^{\sigma}_{rad}(\Theta))=\rho(H^{s}_{rad}(\Theta))$.
Moreover, for every $\rho$-measurable set $A$,
$$
\rho\Big(u\in A\, :\, \|u\|_{H^{\sigma}(\Theta)}\in]\lambda,\infty[\Big)\leq Ce^{-c\lambda^{2}}\,,
$$
and thus $A$ may be approximated by bounded sets of $H^{\sigma}_{rad}(\Theta)$
(the intersections of $A$ and the balls of radius $\lambda\gg 1$ centered at the origin of
$H^{\sigma}_{rad}(\Theta)$).
\end{remarque}
\section{Bourgain spaces and bilinear estimates}
The following two statements play a crucial role in the analysis of the local Cauchy problem for (\ref{1}).
\begin{proposition}\label{str1}
For every $\varepsilon>0$, there exists $\beta<1/2$, there exists $C>0$ such that for
every $N_1,N_2\geq 1$, every $L_1,L_2\geq 1$, every $u_1$, $u_2$ two functions on
$\mathbb R\times\Theta$ of the form
$$
u_{j}(t,r)=\sum_{N_j\leq \langle z_n\rangle < 2N_j}\,c_j(n,t)\, e_{n}(r),\quad j=1,2
$$
where the Fourier transform of $c_j(n,t)$ with respect to $t$ satisfies
$$
{\rm supp}\, \widehat{c_j}(n,\tau)\subset \{
\tau\in\mathbb R\,:\, L_{j}\leq \langle\tau+z_n^2\rangle\leq 2L_j\},\quad j=1,2
$$
one has the bound
$$
\|u_1 u_2\|_{L^2(\mathbb R\times \Theta)}\leq C(\min(N_1, N_2))^{\varepsilon}(L_1L_2)^{\beta}
\|u_1\|_{L^2(\mathbb R\times \Theta)}\|u_2\|_{L^2(\mathbb R\times \Theta)}\,.
$$
\end{proposition}
\begin{proposition}\label{str2}
For every $\varepsilon>0$, there exists $\beta<1/2$, there exists $C>0$ such that for
every $N_1,N_2\geq 1$, every $L_1,L_2\geq 1$, every $u_1$, $u_2$ two functions on
$\mathbb R\times\Theta$ of the form
$$
u_{1}(t,r)=\sum_{N_1\leq \langle z_n\rangle < 2N_1}\,c_1(n,t)\, e_{n}(r)
$$
and
$$
u_{2}(t,r)=\sum_{N_2\leq \langle z_n\rangle < 2N_2}\,c_2(n,t)\, e'_{n}(r)
$$
where the Fourier transform of $c_j(n,t)$ with respect to $t$ satisfies
$$
{\rm supp}\, \widehat{c_j}(n,\tau)\subset \{\tau\in\mathbb R\,:\,
L_{j}\leq \langle\tau+z_n^2\rangle\leq 2L_j\},\quad j=1,2
$$
one has the bound
$$
\|u_1 u_2\|_{L^2(\mathbb R\times \Theta)}\leq C(\min(N_1, N_2))^{\varepsilon}(L_1L_2)^{\beta}
\|u_1\|_{L^2(\mathbb R\times \Theta)}\|u_2\|_{L^2(\mathbb R\times \Theta)}\,.
$$
\end{proposition}
For the proof of Propositions~\ref{str1} and \ref{str2} we refer to \cite[Proposition~4.1]{Tz}
and \cite[Proposition~4.3]{Tz} respectively.
The results of Propositions~\ref{str1} and \ref{str2} can be injected in the framework of the Bourgain spaces
associated to the Schr\"odinger equation on the disc, in order to get local existence results for (\ref{1}).
Following \cite{Tz}, we define the Bourgain spaces $X^{\sigma,b}_{rad}(\mathbb R\times\Theta)$ of
functions on $\mathbb R\times\Theta$ which are radial with respect to the second
argument, equipped with the norm
$$
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}^{2}=\sum_{n= 1}^{\infty} z_{n}^{2\sigma}\|
\langle\tau+z_n^2\rangle^{b}\widehat{\langle u(t),e_{n}\rangle}(\tau)\|_{L^{2}(\mathbb R_{\tau})}^{2}\,,
$$
where $\langle\cdot,\cdot\rangle$ stands for the $L^2(\Theta)$ pairing and $\widehat{\cdot}$ denotes
the Fourier transform on $\mathbb R$.
We can express the norm in
$X^{\sigma,b}_{rad}(\mathbb R\times\Theta)$ in terms of the localisation
operators $\Delta_{N,L}$. More precisely, if for
$N,L$ positive integers, we define $\Delta_{N,L}$ by
\begin{equation*}
\Delta_{N,L}(u)=\frac{1}{2\pi}\sum_{n\,:\, N\leq\langle z_n\rangle< 2N}
\Big(\int_{L\leq \langle\tau+z_n^2\rangle\leq 2L}
\widehat{\langle u(t),e_{n}\rangle}(\tau)e^{it\tau}d\tau\Big)e_{n},
\end{equation*}
then we can write
\begin{equation*}
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}^{2} \approx_{\sigma,b}
\sum_{L,N-{\rm dyadic }}L^{2b}N^{2\sigma} \|\Delta_{N,L}(u)\|_{L^2(\mathbb R\times\Theta)}^{2}\,,
\end{equation*}
where $\approx_{\sigma,b}$ means that the implicit constant may depend on $\sigma$ and $b$.
Using that (see \cite{Tz}),
$$
\exists\, C>0\,\, : \,\,\forall n\in \mathbb N,\,\, \|e_{n}\|_{L^{\infty}(\Theta)}\leq Cn^{\frac{1}{2}}
$$
and the Cauchy-Schwarz inequality in the $\tau$ integration and in the $n$ summation, we arrive at the bound
\begin{equation}\label{infty}
\|\Delta_{N,L}(u)\|_{L^{\infty}(\mathbb R\times \Theta)}\leq
CL^{\frac{1}{2}}N\|\Delta_{N,L}(u)\|_{L^{2}(\mathbb R\times \Theta)}\,.
\end{equation}
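To indicate the computation: writing $\Delta_{N,L}(u)$ via its definition, bounding $\|e_{n}\|_{L^{\infty}(\Theta)}\leq Cn^{\frac{1}{2}}$ and applying the Cauchy-Schwarz inequality first in the $\tau$ integration (over a set of measure $O(L)$) and then in the $n$ summation (where $\sum_{N\leq\langle z_n\rangle<2N}n\leq CN^{2}$), one obtains
$$
\|\Delta_{N,L}(u)\|_{L^{\infty}(\mathbb R\times \Theta)}\leq
CL^{\frac{1}{2}}\sum_{N\leq\langle z_n\rangle<2N}n^{\frac{1}{2}}
\big\|\widehat{\langle u(t),e_{n}\rangle}\big\|_{L^{2}(L\leq\langle\tau+z_n^{2}\rangle\leq 2L)}
\leq CL^{\frac{1}{2}}N\|\Delta_{N,L}(u)\|_{L^{2}(\mathbb R\times \Theta)},
$$
the last step using the Plancherel theorem and the orthonormality of the $e_{n}$ in $L^{2}(\Theta)$.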
Let us next analyse $\partial_{r}(\Delta_{N,L}(u))$. We can write
\begin{equation*}
\Delta_{N,L}(u)=\sum_{N\leq \langle z_{n}\rangle < 2N}\,c(n,t)\, e_{n}(r),\quad
{\rm supp}\, \widehat{c}(n,\tau)\subset \{\tau\in\mathbb R\,:\, L\leq \langle\tau+z_{n}^2\rangle\leq 2L\}
\end{equation*}
and thus
\begin{equation}\label{partial_r}
\partial_{r}\big(\Delta_{N,L}(u)\big)=\sum_{N\leq \langle z_{n}\rangle < 2N}\,c(n,t)\, e'_{n}(r).
\end{equation}
Recall (see \cite{Tz}) that for $m\neq n$, $e'_{m}$ and $e'_{n}$ are orthogonal in $L^2(\Theta)$ and
$
\|e'_{n}\|_{L^2(\Theta)}\approx n
$.
Therefore
\begin{equation*}
\big\|\partial_{r}\big(\Delta_{N,L}(u)\big)\big\|_{L^2(\mathbb R\times\Theta)}^{2}=
c\sum_{N\leq \langle z_{n}\rangle < 2N}\,\|e'_{n}\|_{L^2(\Theta)}^{2}
\int_{-\infty}^{\infty}|\widehat{c}(n,\tau)|^{2}d\tau
\end{equation*}
and thus
\begin{equation}\label{jm}
\big\|\partial_{r}\big(\Delta_{N,L}(u)\big)\big\|_{L^2(\mathbb R\times\Theta)}\approx
N\big\|\Delta_{N,L}(u)\big\|_{L^2(\mathbb R\times\Theta)}\,.
\end{equation}
By \cite[Lemma~2.1]{Tz}, $\|\partial_{r}e_{n}\|_{L^{\infty}(\Theta)}\leq Cn^{3/2}$, and thus, coming back to
(\ref{partial_r}), after writing $c(n,t)$ in terms of its Fourier transform and
applying the Cauchy-Schwarz inequality in the $\tau$ (the dual of $t$ variable) integration, we obtain that
\begin{equation}\label{infty-bis}
\big\|\partial_{r}\big(\Delta_{N,L}(u)\big)\big\|_{L^{\infty}(\mathbb R\times\Theta)}\leq C L^{\frac{1}{2}}
N^{2}\big\|\Delta_{N,L}(u)\big\|_{L^2(\mathbb R\times\Theta)}\,.
\end{equation}
Let us next define two other projectors involved in the well-posedness analysis of (\ref{1}).
The projector $\Delta_{N}$ is defined by
$$
\Delta_{N}(u)=\sum_{n\,:\, N\leq\langle z_n\rangle< 2N}\, \langle u, e_n\rangle e_n\,.
$$
For $N\geq 2$ a dyadic integer, we define the projector $\tilde{S}_N$ by
$$
\tilde{S}_{N}=
\sum_{\stackrel{N_1\leq N/2}{ N_1-{\rm dyadic }}}\Delta_{N_1}\, .
$$
For notational convenience, we assume that $\tilde{S}_{1}$ is zero.
\section{Nonlinear estimates in Bourgain spaces}
In this section, we shall derive nonlinear estimates related to the problems (\ref{1}) and (\ref{N}).
We start with a lemma which improves on the Sobolev embedding.
\begin{lemme}\label{Lp}
Let us fix $p\geq 4$. Then for every $b>\frac{1}{2}$, $\sigma>1-\frac{4}{p}$ there exists $C>0$ such that
for all $u\in X^{\sigma,b}_{rad}(\mathbb R\times\Theta)$ one has
\begin{equation}\label{che}
\|u\|_{L^p(\mathbb R\times\Theta)}\leq C\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}\,.
\end{equation}
\end{lemme}
\begin{proof}
It suffices to prove the assertion for $p=4$ and $p=\infty$.
Let us first consider the case $p=4$.
Observe that $\Delta_{N,L}(u)$ fits in the scope of applicability of Proposition~\ref{str1}.
Using Proposition~\ref{str1} with $\varepsilon=\sigma/2>0$, we obtain that
$$
\|\Delta_{N,L}(u)\|_{L^4(\mathbb R\times\Theta)}\leq
C\|\Delta_{N,L}(u)\|_{X^{\sigma/2,\beta}_{rad}(\mathbb R\times\Theta)}\,.
$$
Therefore, by writing $u=\sum_{L,N}\Delta_{N,L}(u)$, where the summation runs over the dyadic values of $L$,
$N$, and by summing geometric series in $N$ and $L$,
we obtain that (\ref{che}) holds true for $p=4$ (observe that we use
Proposition~\ref{str1} with $\varepsilon=\sigma/2$
instead of $\sigma$ in order to get small negative powers of $N$ and $L$ after
applying the triangle inequality to $\sum_{L,N}\Delta_{N,L}(u)$).
Let us next consider the case $p=\infty$.
In this case, the assertion holds true thanks to (\ref{infty}) and another summation of geometric series.
This completes the proof of Lemma~\ref{Lp}.
\end{proof}
The next lemma gives a meaning to $F(u)$, in the scale of Bourgain spaces, for $u$ of low regularity.
\begin{lemme}\label{F(u)}
Let $(b,\sigma)$ be such that $\max(1/3,1-2/\alpha)<\sigma<1/2$, $b>1/2$ and let
$u\in X^{\sigma,b}_{rad}(\mathbb R\times\Theta)$.
Then $F(u)\in X^{-\sigma,-b}_{rad}(\mathbb R\times\Theta)$. Moreover
$$
\lim_{N\rightarrow \infty}\|F(u)-F(\tilde{S}_{N}(u))\|_{X^{-\sigma,-b}_{rad}(\mathbb R\times\Theta)}=0\,.
$$
\end{lemme}
\begin{proof}
For $v\in X^{\sigma,b}_{rad}(\mathbb R\times\Theta)$, we write
\begin{eqnarray*}
\int_{\mathbb R\times\Theta}|F(u)v|
& \leq & C\big(\int_{\mathbb R\times \Theta}|uv|+\int_{\mathbb R\times \Theta}|u|^{\alpha+1}|v|\big)
\\
& \leq & C\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}\|v\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}
+C\|u\|^{\alpha+1}_{L^{\alpha+2}(\mathbb R\times\Theta)}\|v\|_{L^{\alpha+2}(\mathbb R\times\Theta)}\,.
\end{eqnarray*}
Now, using Lemma~\ref{Lp}, we get
$$
\|u\|_{L^{\alpha+2}(\mathbb R\times\Theta)}\leq C\|u\|_{X^{\sigma_1,b}_{rad}(\mathbb R\times\Theta)}\,,
$$
where $\sigma_1>0$ when $\alpha\leq 2$, and $\sigma_1>1-4/(\alpha+2)$ when $\alpha\in]2,4[$.
Observing that for $\alpha\geq 2$, $\max(1/3,1-2/\alpha)\geq 1-4/(\alpha+2)$ shows that
$$
\int_{\mathbb R\times\Theta}|F(u)v|\leq C\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}\|v\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}
\big(1+\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}^{\alpha}\big)
$$
and thus $F(u)\in X^{-\sigma,-b}_{rad}(\mathbb R\times\Theta)$.
Similarly one shows that
$$
\int_{\mathbb R\times\Theta}|(F(u)-F(\tilde{S}_{N}(u)))v|\leq C\|u-\tilde{S}_{N}(u)\|_{X^{\sigma,b}_{rad}}
\|v\|_{X^{\sigma,b}_{rad}}\big(1+\|u\|_{X^{\sigma,b}_{rad}}^{\alpha}\big)
$$
which yields the needed convergence. This completes the proof of Lemma~\ref{F(u)}.
\end{proof}
One may prove a statement similar to Lemma~\ref{F(u)} with $\tilde{S}_{N}$ replaced by $\tilde{S}_{N,L}$
where $\tilde{S}_{N,L}$ is defined similarly to $\tilde{S}_{N}$ with $\Delta_{N_1}$ replaced by
$\Delta_{N_1,L_1}$, $L_1\leq L$. This observation allows us to consider only finite sums over dyadic integers
in the proof of the next proposition (one can also apply a similar approximation argument to $v$ involved in
(\ref{vanbis})).
In fact a much stronger statement than Lemma~\ref{F(u)} holds true.
It turns out that under the assumptions of Lemma~\ref{F(u)} one has
$F(u)\in X^{\sigma,-b}_{rad}(\mathbb R\times\Theta)$.
\begin{proposition}\label{main}
Let
$\max(1/3,1-2/\alpha)<\sigma_1\leq\sigma<1/2$.
Then there exist two positive numbers $b,b'$ such that $b+b'<1$, $b'<1/2<b$, there
exists $C>0$ such that for every $u,v\in X^{\sigma,b}_{rad}(\mathbb R\times\Theta)$,
\begin{equation}\label{main1}
\|F(u)\|_{X^{\sigma,-b'}_{rad}(\mathbb R\times\Theta)}\leq
C\Big(1+\|u\|^{\max(\alpha,2)}_{X^{\sigma_{1},b}_{rad}(\mathbb R\times\Theta)}\Big)
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}
\end{equation}
and
\begin{equation}\label{main2}
\|F(u)-F(v)\|_{X^{\sigma,-b'}_{rad}}\leq C\Big(1+\|u\|^{\max(\alpha,2)}_{X^{\sigma,b}_{rad}}+
\|v\|^{\max(\alpha,2)}_{X^{\sigma,b}_{rad}}\Big)\|u-v\|_{X^{\sigma,b}_{rad}}\,.
\end{equation}
\end{proposition}
\begin{proof}
The proof of this proposition for $\alpha<2$ is given in \cite{Tz}.
We therefore may assume that $\alpha\geq 2$ in the sequel of the proof.
The proof will follow closely \cite{Tz}, incorporating an argument which already appeared in \cite{BGT}.
Let us observe that in order to prove (\ref{main1}), it suffices to prove that
\begin{equation}\label{main1-bis}
\|F(\tilde{S}_{M}(u))\|_{X^{\sigma,-b'}_{rad}(\mathbb R\times\Theta)}\leq
C\Big(1+\|u\|^{\alpha}_{X^{\sigma_{1},b}_{rad}(\mathbb R\times\Theta)}\Big)
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)},
\end{equation}
uniformly in $M\in\mathbb N$. Indeed, if we can prove (\ref{main1-bis}) then $(F(\tilde{S}_{M}(u)))_{M\in\mathbb N}$
is a bounded sequence in $X^{\sigma,-b'}_{rad}(\mathbb R\times\Theta)$
(and thus also in $X^{-\sigma,-b}_{rad}(\mathbb R\times\Theta)$)
and therefore it converges (up to a sub-sequence)
weakly to some limit which satisfies the needed bound.
In order to identify this limit with $F(u)$ it suffices to appeal to Lemma~\ref{F(u)}.
Thanks to the gauge invariance of the nonlinearity $F(u)$, we observe that
$F(u)-(\partial F)(0)u$ is vanishing at order $2$ at $u=0$ and thus in the proof of (\ref{main1-bis}),
we may assume that
\begin{equation}\label{vanishing}
\partial^{k_1}\bar{\partial}^{k_2}(F)(0)=0,\quad \forall\,\, k_1+k_2\leq 2.
\end{equation}
Observe that (\ref{main1-bis}) follows from the estimate
\begin{equation}\label{vanbis}
\Big|\int_{\mathbb R\times\Theta}F(\tilde{S}_{M}(u))\bar{v}\Big|
\leq C\|v\|_{X^{-\sigma,b'}_{rad}(\mathbb R\times\Theta)}
\Big(1+\|u\|^{\alpha}_{X^{\sigma_{1},b}_{rad}(\mathbb R\times\Theta)}\Big)
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}
\end{equation}
(let us remark that if $v$ contains only very high frequencies with respect to
the $\Delta_{N,L}$ decomposition then the right hand-side of (\ref{vanbis}) is small).
Using that $\Delta_{N}=\tilde{S}_{2N}-\tilde{S}_{N}$ and (\ref{vanishing}), we may write
\begin{eqnarray*}
F(\tilde{S}_{M}(u)) & = &\sum_{\stackrel{2\leq N_1\leq M}{N_1-{\rm dyadic }}}
\big(F(\tilde{S}_{N_1}(u))-F(\tilde{S}_{N_1/2}(u))\big)
\\
& = &
\sum_{\stackrel{N_1\leq M/2}{N_1-{\rm dyadic }}}
\big(\Delta_{N_1}(u)G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))+
\overline{\Delta_{N_1}(u)}G_{2}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))\big)
\\
& \equiv & F_{1}(u)+F_{2}(u),
\end{eqnarray*}
where $G_{1}(z_1,z_2)$ and $G_{2}(z_1,z_2)$ are smooth functions with a
control on their growth at infinity coming from (\ref{rast})
(similar bounds to $F$ with $\alpha$ replaced by $\alpha-1$).
Moreover, thanks to (\ref{vanishing}), $G_{1}(0,0)=\partial(F)(0)=0$ and $G_{2}(0,0)=\bar{\partial}(F)(0)=0$.
We will only estimate the contribution of $F_{1}(u)$ to the right hand-side of (\ref{vanbis}),
the argument for the contribution of $F_{2}(u)$ being completely analogous.
Next, we set
$$
I=\Big|\int_{\mathbb R\times\Theta}F_{1}(u)\bar{v}\Big|,\quad
I(N_0,N_1)=\Big|\int_{\mathbb R\times\Theta}\Delta_{N_1}(u)\overline{\Delta_{N_0}(v)}
G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))\Big|.
$$
Then $I\leq I_1+I_2$, where
$$
I_{1}=
\sum_{\stackrel{N_0\leq N_1\leq M/2}{N_0,N_1-{\rm dyadic }}}I(N_0,N_1),\quad
I_{2}=
\sum_{\stackrel{N_1\leq \min(N_0, M/2)}{N_0,N_1-{\rm dyadic }}}I(N_0,N_1).
$$
We first analyse $I_1$. Using (\ref{vanishing}) with $(k_1,k_2)=(1,0)$,
we decompose $G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))$ as
$$
\sum_{\stackrel{N_2\leq N_1}{N_2-{\rm dyadic }}}
\Big(G_{1}(\tilde{S}_{2N_2}\Delta_{N_1}(u),\tilde{S}_{2N_2}\tilde{S}_{N_1}(u))-
G_{1}(\tilde{S}_{N_2}\Delta_{N_1}(u),\tilde{S}_{N_2}\tilde{S}_{N_1}(u))\Big).
$$
Using that $\Delta_{N_1}\Delta_{N_2}=\Delta_{N_1}$ if $N_1=N_2$ and zero otherwise, we obtain
\begin{multline}\label{G1}
G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))=
\sum_{\stackrel{N_2\leq N_1}{N_2-{\rm dyadic }}}
\Delta_{N_2}(u)G_{11}^{N_2}(\Delta_{N_2}(u),\tilde{S}_{N_2}(u))
+
\\
\sum_{\stackrel{N_2\leq N_1}{N_2-{\rm dyadic }}}
\overline{\Delta_{N_2}(u)}G_{12}^{N_2}(\Delta_{N_2}(u),\tilde{S}_{N_2}(u)),
\end{multline}
where $G_{11}^{N_2}(z_1,z_2)$ and $G_{12}^{N_2}(z_1,z_2)$ are smooth functions with a
control on their growth at infinity coming from (\ref{rast}).
Moreover thanks to (\ref{vanishing}), applied with $(k_1,k_2)=(2,0)$ and $(k_1,k_2)=(1,1)$,
we get $G_{11}^{N_2}(0,0)=0$ and $G_{12}^{N_2}(0,0)=0$. Therefore, we can expand for $j=1,2$,
\begin{multline}\label{G1j}
G_{1j}^{N_2}(\Delta_{N_2}(u),\tilde{S}_{N_2}(u))=
\sum_{\stackrel{N_3\leq N_2}{N_3-{\rm dyadic }}}
\Delta_{N_3}(u)G_{1j1}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u))
+
\\
\sum_{\stackrel{N_3\leq N_2}{N_3-{\rm dyadic }}}
\overline{\Delta_{N_3}(u)}G_{1j2}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u)),
\end{multline}
where, thanks to the growth assumption on the nonlinearity $F(u)$, we have
that the functions $G_{1 j_1 j_2}^{N_3}(z_1,z_2)$, $j_1,j_2\in\{1,2\}$ satisfy
\begin{equation}\label{novo}
|G_{1 j_1 j_2}^{N_3}(z_1,z_2)|\leq C(1+|z_1|+|z_2|)^{\alpha-2}.
\end{equation}
We therefore have the bound
\begin{multline*}
I_1\leq C
\sum_{\stackrel{N_0\leq N_1}{N_0,N_1-{\rm dyadic }}}
\sum_{\stackrel{N_1\geq N_2\geq N_3}{N_2,N_3-{\rm dyadic }}}
\\
\int_{\mathbb R\times\Theta}
|\Delta_{N_0}(v)\Delta_{N_1}(u)\Delta_{N_2}(u)\Delta_{N_3}(u)|
(
1+|\Delta_{N_3}(u)|+|\tilde{S}_{N_3}(u)|)^{\alpha-2}.
\end{multline*}
By splitting
$$
\Delta_{N}=\sum_{L-{\rm dyadic}}\Delta_{N,L},
$$
we may write for $b>1/2$, $0<\sigma_1<1/2$,
by using (\ref{infty}) and the Cauchy-Schwarz inequality in the $L$ summation
\begin{eqnarray*}
\|\Delta_{N_3}(u)\|_{L^{\infty}(\mathbb R\times\Theta)}
& \leq &
\sum_{L-{\rm dyadic}}\|\Delta_{N_3,L}(u)\|_{L^{\infty}(\mathbb R\times\Theta)}
\\
& \leq &
C\sum_{L-{\rm dyadic}}N_3L^{\frac{1}{2}}\|\Delta_{N_3,L}(u)\|_{L^{2}(\mathbb R\times\Theta)}
\\
& \leq &
CN_{3}^{1-\sigma_1}\|u\|_{X^{\sigma_1,b}_{rad}(\mathbb R\times\Theta)}\,,
\end{eqnarray*}
where $C$ depends on $b$ and $\sigma_1$. Similarly
\begin{eqnarray*}
\|\tilde{S}_{N_3}(u)\|_{L^{\infty}(\mathbb R\times\Theta)}
& \leq &
\sum_{\stackrel{N\leq N_3/2}{N-{\rm dyadic }}}
\|\Delta_{N}(u)\|_{L^{\infty}(\mathbb R\times\Theta)}
\\
& \leq &
\sum_{\stackrel{N\leq N_3/2}{L,N-{\rm dyadic }}}\|\Delta_{N,L}(u)\|_{L^{\infty}(\mathbb R\times\Theta)}
\\
& \leq &
\sum_{\stackrel{N\leq N_3/2}{L,N-{\rm dyadic }}}
CNL^{\frac{1}{2}}\|\Delta_{N,L}(u)\|_{L^{2}(\mathbb R\times\Theta)}
\\
& \leq &
C\big(\sum_{\stackrel{N\leq N_3/2}{L,N-{\rm dyadic }}}
L^{1-2b}
N^{2(1-\sigma_1)}
\big)^{\frac{1}{2}}
\big(
\sum_{\stackrel{N\leq N_3/2}{L,N-{\rm dyadic }}}
L^{2b}N^{2\sigma_1}\|\Delta_{N,L}(u)\|_{L^{2}}^{2}
\big)^{\frac{1}{2}}
\\
& \leq &
CN_{3}^{1-\sigma_1}\|u\|_{X^{\sigma_1,b}_{rad}(\mathbb R\times\Theta)}\,.
\end{eqnarray*}
Therefore
\begin{multline*}
I_1\leq C
(1+\|u\|_{X^{\sigma_1,b}(\mathbb R\times\Theta)}^{\alpha-2})
\sum_{\stackrel{N_0\leq N_1}{N_0,N_1-{\rm dyadic }}}
\sum_{\stackrel{N_1\geq N_2\geq N_3}{N_2,N_3-{\rm dyadic }}}
\\
N_{3}^{(1-\sigma_1)(\alpha-2)}
\Big(\int_{\mathbb R\times\Theta}
|\Delta_{N_0}(v)\Delta_{N_1}(u)\Delta_{N_2}(u)\Delta_{N_3}(u)|\Big).
\end{multline*}
Using Proposition~\ref{str1} and the Cauchy-Schwarz inequality, we obtain
that for every $\varepsilon>0$ there exist $\beta<1/2$ and $C_{\varepsilon}$
such that
\begin{multline*}
\int_{\mathbb R\times\Theta}
|\Delta_{N_0,L_0}(v)\Delta_{N_1,L_1}(u)\Delta_{N_2,L_2}(u)\Delta_{N_3,L_3}(u)|\leq
\\
\leq
\|\Delta_{N_0,L_0}(v)\Delta_{N_2,L_2}(u)\|_{L^2(\mathbb R\times\Theta)}
\|\Delta_{N_1,L_1}(u)\Delta_{N_3,L_3}(u)\|_{L^2(\mathbb R\times\Theta)}
\leq
\\
\leq C_{\varepsilon}(N_2 N_3)^{\varepsilon}(L_0L_1L_2L_3)^{\beta}
\|\Delta_{N_0,L_0}(v)\|_{L^2(\mathbb R\times\Theta)}
\prod_{j=1}^{3}\|\Delta_{N_j,L_j}(u)\|_{L^2(\mathbb R\times\Theta)}.
\end{multline*}
Therefore, if we set
\begin{multline}\label{Q}
Q\equiv Q(N_0,N_1,N_2,N_3,L_0,L_1,L_2,L_3)=
CN_{0}^{-\sigma}N_1^{\sigma}(N_2 N_3)^{\sigma_{1}}L_{0}^{b'}(L_1 L_2 L_3)^{b}
\\
\times
(1+\|u\|_{X^{\sigma_1,b}(\mathbb R\times\Theta)}^{\alpha-2})
\|\Delta_{N_0,L_0}(v)\|_{L^2(\mathbb R\times\Theta)}
\prod_{j=1}^{3}\|\Delta_{N_j,L_j}(u)\|_{L^2(\mathbb R\times\Theta)},
\end{multline}
we can write
$$
I_1\leq \sum_{L_0,L_1,L_2,L_3-{\rm dyadic }}\,
\sum_{\stackrel{N_1\geq N_2\geq N_3, N_1\geq N_0}{N_0,N_1,N_2,N_3-{\rm dyadic }}}
L_{0}^{\beta-b'}(L_1L_2L_3)^{\beta-b}\Big(\frac{N_0}{N_1}\Big)^{\sigma}\,\,
\frac{N_{3}^{(1-\sigma_1)(\alpha-2)}}{(N_2N_3)^{\sigma_{1}-\varepsilon}}Q.
$$
Let us take $\varepsilon>0$ such that
$$
\varepsilon<1-\frac{\alpha(1-\sigma_1)}{2}\,.
$$
A proper choice of $\varepsilon$ is possible thanks to the assumption $\sigma_1>1-2/\alpha$.
With this choice of $\varepsilon$ we have that $2(\sigma_1-\varepsilon)>(1-\sigma_1)(\alpha-2)$.
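To check this, observe that
$$
2(\sigma_1-\varepsilon)>(1-\sigma_1)(\alpha-2)
\,\Longleftrightarrow\,
2\varepsilon<2\sigma_1+2(1-\sigma_1)-\alpha(1-\sigma_1)=2-\alpha(1-\sigma_1)
\,\Longleftrightarrow\,
\varepsilon<1-\frac{\alpha(1-\sigma_1)}{2}\,,
$$
the last right hand-side being positive precisely when $\sigma_1>1-2/\alpha$.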
The choice of $\varepsilon$ fixes $\beta$ via the application of Proposition~\ref{str1}.
Then we choose $b'$ such that $\beta<b'<1/2$. We finally choose $b>1/2$ such that
$b+b'<1$. With this choice of the parameters, coming back to the definition of the projectors $\Delta_{N,L}$
and after summing geometric series in $L_0$, $L_1$, $L_2$, $L_3$, $N_2$, $N_3$, we can write that
$$
I_{1}\leq
C
(1+\|u\|_{X^{\sigma_1,b}(\mathbb R\times\Theta)}^{\alpha-2})
\|u\|^{2}_{X^{\sigma_{1},b}_{rad}(\mathbb R\times\Theta)}
\sum_{\stackrel{ N_0\leq N_1}{N_0,N_1-{\rm dyadic }}}
\Big(\frac{N_0}{N_1}\Big)^{\sigma}c(N_0)d(N_1),
$$
where
\begin{equation}\label{cd}
c(N_0)=N_{0}^{-\sigma}\|\Delta_{N_0}(v)\|_{X^{0,b'}_{rad}(\mathbb R\times\Theta)},\quad
d(N_1)=N_{1}^{\sigma}\|\Delta_{N_1}(u)\|_{X^{0,b}_{rad}(\mathbb R\times\Theta)}\,.
\end{equation}
Finally, using \cite[Lemma~6.2]{Tz}, we arrive at the bound
$$
I_1\leq
C\|v\|_{X^{-\sigma,b'}_{rad}(\mathbb R\times\Theta)}
(1+\|u\|_{X^{\sigma_1,b}(\mathbb R\times\Theta)}^{\alpha-2})
\|u\|^{2}_{X^{\sigma_{1},b}_{rad}(\mathbb R\times\Theta)}
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}
$$
which ends the analysis for $I_1$.
\\
Let us now turn to the analysis of $I_2$.
The main observation is that after an integration by parts the roles of $N_0$ and $N_1$ are exchanged.
We have that
$$
I_{2}\leq \sum_{\stackrel{N_1\leq \min(N_0, M/2)}{L_0,N_0,N_1-{\rm dyadic }}}I(L_0,N_0,N_1),
$$
where
$$
I(L_0,N_0,N_1)=\Big|\int_{\mathbb R\times\Theta}\Delta_{N_1}(u)\overline{\Delta_{N_0,L_0}(v)}
G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))\Big|.
$$
Write
\begin{equation}\label{write1}
\Delta_{N_0,L_0}(v)=\sum_{N_0\leq \langle z_{n_0}\rangle < 2N_0}\,c(n_0,t)\, e_{n_0}(r),
\end{equation}
where
$$
{\rm supp}\, \widehat{c}(n_0,\tau)\subset \{\tau\in\mathbb R\,:\, L_{0}\leq \langle\tau+z_{n_0}^2\rangle\leq 2L_0\}.
$$
Define $\widetilde{\Delta}_{N_0,L_0}$ as
\begin{equation*}
\widetilde{\Delta}_{N_0,L_0}(v)\equiv
\sum_{N_0\leq \langle z_{n_0}\rangle < 2N_0}\,
\frac{c(n_0,t)}{z_{n_0}^2}\, e'_{n_0}(r).
\end{equation*}
Since $\| e'_{n_0}\|_{L^2(\Theta)}\approx n_0$ (see \cite{Tz}), we have
\begin{equation}\label{jm2}
\|\widetilde{\Delta}_{N_0,L_0}(v)\|_{L^2(\mathbb R\times\Theta)}
\approx
N_{0}^{-1}
\|\Delta_{N_0,L_0}(v)\|_{L^2(\mathbb R\times\Theta)}.
\end{equation}
Since $e_n$ vanishes on the boundary, using that
$$
e_{n}(r)=-\frac{1}{z_n^2}\frac{1}{r}\partial_{r}(r\partial_{r}e_{n}(r)),
$$
an integration by parts gives
$$
I(L_0,N_0,N_1)=\Big|\int_{\mathbb R\times\Theta}\overline{\widetilde{\Delta}_{N_0,L_0}(v)}
\partial_{r}\Big(\Delta_{N_1}(u)G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))\Big)\Big|.
$$
Recall that equality (\ref{G1}) shows that $G_{1}(\Delta_{N_1}(u),\tilde{S}_{N_1}(u))$ can be expanded as a
sum of two terms and then each term can be expanded according to (\ref{G1j}).
Therefore
$$
I(L_0,N_0,N_1)\leq I_1(L_0,N_0,N_1)+I_2(L_0,N_0,N_1)+I_3(L_0,N_0,N_1)+I_4(L_0,N_0,N_1),
$$
where
\begin{multline*}
I_1(L_0,N_0,N_1)=\sum_{j_1=1}^{2}\sum_{j_2=1}^{2}
\sum_{\stackrel{N_3\leq N_2\leq N_1}{N_2,N_3-{\rm dyadic }}}
\\
\int_{\mathbb R\times\Theta}
|\widetilde{\Delta}_{N_0,L_0}(v)\partial_{r}\big(\Delta_{N_1}(u)\big)\Delta_{N_2}(u)\Delta_{N_3}(u)
G_{1j_1j_2}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u))|,
\end{multline*}
\begin{multline*}
I_2(L_0,N_0,N_1)=\sum_{j_1=1}^{2}\sum_{j_2=1}^{2}
\sum_{\stackrel{N_3\leq N_2\leq N_1}{N_2,N_3-{\rm dyadic }}}
\\
\int_{\mathbb R\times\Theta}
|\widetilde{\Delta}_{N_0,L_0}(v)\Delta_{N_1}(u)\partial_{r}\big(\Delta_{N_2}(u)\big)\Delta_{N_3}(u)
G_{1j_1j_2}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u))|,
\end{multline*}
\begin{multline*}
I_3(L_0,N_0,N_1)=\sum_{j_1=1}^{2}\sum_{j_2=1}^{2}
\sum_{\stackrel{N_3\leq N_2\leq N_1}{N_2,N_3-{\rm dyadic }}}
\\
\int_{\mathbb R\times\Theta}|\widetilde{\Delta}_{N_0,L_0}(v)\Delta_{N_1}(u)\Delta_{N_2}(u)\partial_{r}
\big(\Delta_{N_3}(u)\big)G_{1j_1j_2}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u))|,
\end{multline*}
\begin{multline*}
I_4(L_0,N_0,N_1)=\sum_{j_1=1}^{2}\sum_{j_2=1}^{2}
\sum_{\stackrel{N_3\leq N_2\leq N_1}{N_2,N_3-{\rm dyadic }}}
\\
\int_{\mathbb R\times\Theta}|\widetilde{\Delta}_{N_0,L_0}(v)\Delta_{N_1}(u)\Delta_{N_2}(u)\Delta_{N_3}(u)
\partial_{r}\big(G_{1j_1j_2}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u))\big)|.
\end{multline*}
Recall that $G_{1j_1j_2}^{N_3}(z_1,z_2)$ satisfies the bound (\ref{novo}).
If we define $Q$ by (\ref{Q}), then expanding with respect to the $L$ localizations,
applying Proposition~\ref{str2} twice,
to the products $\widetilde{\Delta}_{N_0,L_0}(v)\Delta_{N_2,L_2}(u)$
and $\partial_{r}\big(\Delta_{N_1}(u)\big)\Delta_{N_3,L_3}(u)$,
and using (\ref{novo}) (together with (\ref{infty}), (\ref{jm}) and (\ref{jm2})) gives
\begin{multline*}
\sum_{\stackrel{N_1\leq \min(N_0, M/2)}{L_0,N_0,N_1-{\rm dyadic }}}
I_1(L_0,N_0,N_1)
\leq
\\
\sum_{L_0,L_1,L_2,L_3-{\rm dyadic }}\,
\sum_{\stackrel{N_1\geq N_2\geq N_3, N_1\leq N_0}{N_0,N_1,N_2,N_3-{\rm dyadic }}}
L_{0}^{\beta-b'}(L_1L_2L_3)^{\beta-b}\Big(\frac{N_0}{N_1}\Big)^{\sigma-1}\,\,
\frac{N_{3}^{(1-\sigma_1)(\alpha-2)}}{(N_2N_3)^{\sigma_{1}-\varepsilon}}Q.
\end{multline*}
The last expression may be estimated exactly as we did for $I_1$, by exchanging the roles of $N_0$ and $N_1$.
Similarly
\begin{multline*}
\sum_{\stackrel{N_1\leq \min(N_0, M/2)}{L_0,N_0,N_1-{\rm dyadic }}}
I_2(L_0,N_0,N_1)
\leq
\\
\sum_{L_0,L_1,L_2,L_3-{\rm dyadic }}\,
\sum_{\stackrel{N_1\geq N_2\geq N_3, N_1\leq N_0}{N_0,N_1,N_2,N_3-{\rm dyadic }}}
L_{0}^{\beta-b'}(L_1L_2L_3)^{\beta-b}\Big(\frac{N_0}{N_1}\Big)^{\sigma}
\Big(\frac{N_2}{N_0}\Big)
\,\,
\frac{N_{3}^{(1-\sigma_1)(\alpha-2)}}{(N_2N_3)^{\sigma_{1}-\varepsilon}}Q.
\end{multline*}
On the other hand on the summation region,
$$
\Big(\frac{N_0}{N_1}\Big)^{\sigma}
\Big(\frac{N_2}{N_0}\Big)\leq
\Big(\frac{N_0}{N_1}\Big)^{\sigma-1}
$$
and thus, again, we may conclude as in the bound for $I_1$.
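Both this inequality and the analogous one appearing in the estimate of $I_4$ below are instances of the elementary equivalence
$$
\Big(\frac{N_0}{N_1}\Big)^{\sigma}\frac{X}{N_0}\leq\Big(\frac{N_0}{N_1}\Big)^{\sigma-1}
\,\Longleftrightarrow\,
X\leq N_1\,,
$$
applied here with $X=N_2\leq N_1$ and later with $X=N_3\leq N_1$.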
The sum
$$
\sum_{\stackrel{N_1\leq \min(N_0, M/2)}{L_0,N_0,N_1-{\rm dyadic }}}
I_3(L_0,N_0,N_1)
$$
can be bounded similarly.
Let us finally estimate the quantity
$$
\sum_{\stackrel{N_1\leq \min(N_0, M/2)}{L_0,N_0,N_1-{\rm dyadic }}}
I_4(L_0,N_0,N_1)\,.
$$
Observe that we can write
\begin{multline*}
\Big|\partial_{r}\big(G_{1j_1j_2}^{N_3}(\Delta_{N_3}(u),\tilde{S}_{N_3}(u))\big)\Big|\leq
\\
C
\Big(
|\partial_{r}\big(\Delta_{N_3}(u)\big)|+|\partial_{r}\big(\tilde{S}_{N_3}(u)\big)|
\Big)
\Big(1+|\Delta_{N_3}(u)|+|\tilde{S}_{N_3}(u)|\Big)^{\max(\alpha-3,0)}\,.
\end{multline*}
Now using (\ref{infty-bis}), we can write
\begin{eqnarray*}
\|\partial_{r}(\Delta_{N_3}(u))\|_{L^{\infty}(\mathbb R\times\Theta)}
+
\|\partial_{r}(\tilde{S}_{N_3}(u))\|_{L^{\infty}(\mathbb R\times\Theta)}
& \leq &
\sum_{\stackrel{N\leq N_3}{N-{\rm dyadic }}}
\|\partial_{r}(\Delta_{N}(u))\|_{L^{\infty}(\mathbb R\times\Theta)}
\\
& \leq &
\sum_{\stackrel{L,N\leq N_3}{L,N-{\rm dyadic }}}\|\partial_{r}(\Delta_{N,L}(u))\|_{L^{\infty}(\mathbb R\times\Theta)}
\\
& \leq &
\sum_{\stackrel{L,N\leq N_3}{L,N-{\rm dyadic }}}
CN^{2}L^{\frac{1}{2}}\|\Delta_{N,L}(u)\|_{L^{2}(\mathbb R\times\Theta)}
\\
& \leq &
CN_{3}^{2-\sigma_1}\|u\|_{X^{\sigma_1,b}_{rad}(\mathbb R\times\Theta)}\,.
\end{eqnarray*}
Similarly
$$
\Big(1+|\Delta_{N_3}(u)|+|\tilde{S}_{N_3}(u)|\Big)^{\max(\alpha-3,0)}
\leq
C(1+N_{3}^{1-\sigma_1}\|u\|_{X^{\sigma_1,b}_{rad}(\mathbb R\times\Theta)})^{\max(\alpha-3,0)}\,\,.
$$
Let us suppose that $\alpha\in [3,4[$, the analysis for $\alpha\in [2,3]$ being simpler
(one needs to modify slightly the next several lines by invoking the
assumption $\sigma_1>1/3$).
If we define $Q$ by (\ref{Q}),
expanding with respect to the $L$ localizations,
using Proposition~\ref{str1} to the product
$\Delta_{N_1,L_1}(u)\Delta_{N_3,L_3}(u)$ and Proposition~\ref{str2}
to the product $\widetilde{\Delta}_{N_0,L_0}(v)\Delta_{N_2,L_2}(u)$, we get
\begin{multline*}
\sum_{\stackrel{N_1\leq \min(N_0, M/2)}{L_0,N_0,N_1-{\rm dyadic }}}
I_4(L_0,N_0,N_1)
\leq
\\
\sum_{L_0,L_1,L_2,L_3-{\rm dyadic }}\,
\sum_{\stackrel{N_1\geq N_2\geq N_3, N_1\leq N_0}{N_0,N_1,N_2,N_3-{\rm dyadic }}}
L_{0}^{\beta-b'}(L_1L_2L_3)^{\beta-b}\Big(\frac{N_0}{N_1}\Big)^{\sigma}\,\,
\frac{1}{N_0}
\frac{N_3N_{3}^{(1-\sigma_1)(\alpha-2)}}{(N_2N_3)^{\sigma_{1}-\varepsilon}}Q.
\end{multline*}
Since on the region of summation
$$
\Big(\frac{N_0}{N_1}\Big)^{\sigma}\frac{1}{N_0}N_3\leq \Big(\frac{N_0}{N_1}\Big)^{\sigma-1}
$$
we may conclude exactly as we did for $I_1$.
This completes the analysis for $I_2$ and thus (\ref{main1}) is established.
Thanks to the multilinear nature of our reasoning (compare with the method of Ginibre-Velo, Kato for treating
the Cauchy problem for NLS which is not multilinear),
the proof of (\ref{main2}) is essentially the same as the proof of
(\ref{main1}).
However one can no longer assume that the frequencies $N_1$ and $N_2$
satisfy $N_1\geq N_2$ but this fact does not affect the analysis since in
contrast with (\ref{main1}) all terms in the right hand-side of (\ref{main2})
have {\it the same} spatial regularity $\sigma$ (this is a
standard feature in the analysis of nonlinear PDE's and not related to the
spaces $X^{\sigma,b}_{rad}$ we work with).
More precisely, we can write
$$
F(u)-F(v)=(u-v)G_{1}(u,v)+(\overline{u}-\overline{v})G_{2}(u,v)
$$
with $G_{j}(z_1,z_2)$, $j=1,2$ satisfying the growth assumption
\begin{equation}\label{rastbis}
\big|
\partial^{k_1}_{z_1}\bar{\partial}^{k_2}_{z_1}
\partial^{l_1}_{z_2}\bar{\partial}^{l_2}_{z_2}
G_j(z_1,z_2)
\big|\leq C_{k_1,k_2,l_1,l_2}(1+|z_1|+|z_2|)^{\alpha-k_1-k_2-l_1-l_2}
\,.
\end{equation}
Since the analysis is very similar to the proof of (\ref{main1}), we shall
only outline the estimate for $(u-v)G_{1}(u,v)$. Again, we can suppose that
$F(u)$ is vanishing at order $3$ at $u=0$ and $\alpha\geq 2$. Let us set
$$
w_1=u-v,\quad w_2=u,\quad w_3=v.
$$
We thus need to prove that
\begin{multline*}
\Big|
\int_{\mathbb R\times\Theta}
w_1G_{1}(w_2,w_3)\overline{w_4}
\Big|
\\
\leq
C(1+\|w_2\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}+\|w_3\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)})^\alpha
\|w_1\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}\|w_4\|_{X^{-\sigma,b'}_{rad}(\mathbb R\times\Theta)}\,.
\end{multline*}
Next, we expand
$$
w_{1}=\sum_{N_1-{\rm dyadic}}\Delta_{N_1}(w_1),\quad w_{4}=\sum_{N_0-{\rm dyadic}}\Delta_{N_0}(w_4)
$$
and
$$
G_{1}(w_2,w_3)=\sum_{N_2-{\rm dyadic}}\Big(G_{1}(\tilde{S}_{2N_2}(w_2),\tilde{S}_{2N_2}(w_3))-
G_{1}(\tilde{S}_{N_2}(w_2),\tilde{S}_{N_2}(w_3))\Big).
$$
Thus, modulo complex conjugations irrelevant in this discussion, one has to
evaluate quantities of the type
\begin{multline}\label{derm}
\sum_{N_0,N_1,N_2-{\rm dyadic}}
\Big|
\int_{\mathbb R\times\Theta}\overline{\Delta_{N_0}(w_4)}\Delta_{N_1}(w_1)\Delta_{N_2}(w_j)
\\
H_{j}^{N_2}(\Delta_{N_2}(w_2),\tilde{S}_{N_2}(w_2),\Delta_{N_2}(w_3),\tilde{S}_{N_2}(w_3))
\Big|,\quad j=2,3,
\end{multline}
where $H_{j}^{N_2}(z_1,z_2,z_3,z_4)$ are smooth functions satisfying growth
restrictions at infinity coming from (\ref{rast}). In the analysis of
(\ref{derm}), we distinguish two cases for $N_0$, $N_1$, $N_2$ in the sum
defining (\ref{derm}). Since $N_1$ and $N_2$ are not ordered, we need to
compare $N_0$ with $\max(N_1,N_2)$ by performing arguments close in spirit
to the proof of (\ref{main1}).
{\bf Case 1.}
The first case is when $N_{0}\leq \max(N_1,N_2)$. In
this case, we expand once more $H_{j}^{N_2}$ which introduces a sum over $N_{3}-{\rm
dyadic}$, $N_{3}\leq N_2$ of terms $\Delta_{N_3}(w_{j})$ (or complex conjugate)
times a function which satisfies a decay property coming from (\ref{rast}).
As in the analysis of $I_1$ above, we obtain the bound
\begin{multline}\label{greve}
|(\ref{derm})|\leq
\sum_{L_0,L_1,L_2,L_3-{\rm dyadic }}\,
\sum_{\stackrel{N_1, N_2\geq N_3, \max(N_1,N_2)\geq N_0}{N_0,N_1,N_2,N_3-{\rm dyadic }}}
\\
L_{0}^{\beta-b'}(L_1L_2L_3)^{\beta-b}\Big(\frac{N_0}{\max(N_1,N_2)}\Big)^{\sigma}\,\,
\frac{N_{3}^{(1-\sigma)(\alpha-2)}}{(\min(N_1,N_2)N_3)^{\sigma-\varepsilon}}Q,
\end{multline}
where $Q$ is defined similarly to (\ref{Q}) with the important difference that
$\sigma_1$ is replaced by $\sigma$ and the harmless difference that $u$ is
replaced by a suitable $w_j$, $j=1,2,3$ and $v$ is replaced by $w_4$.
If $\max(N_1,N_2)=N_1$ or $N_1\geq N_3$ then we conclude exactly as in the proof of
(\ref{main1}).
We can therefore suppose that $\max(N_1,N_2)=N_2$ and $N_1\leq N_3$.
Observe that we can also suppose that $F(u)$ is vanishing at order $5$ at
$u=0$, which allows us to expand the non-linearity once more. Indeed, the cubic
term in the Taylor expansion of the non-linearity can be dealt with as in
(\ref{greve}) since in this term $\alpha=2$. Thus in the case $\max(N_1,N_2)=N_2$ and $N_1\leq N_3$,
we expand once more the non-linearity which introduces a sum over $N_{4}-{\rm
dyadic}$, $N_{4}\leq N_3$ of terms $\Delta_{N_4}(w_{j})$ (or complex conjugate)
times a function which satisfies an appropriate decay property coming from
(\ref{rast}).
We next consider two cases $N_1\geq N_4$ and $N_1\leq N_4$.
Let us suppose first that $N_1\geq N_4$. In this case,
using the bilinear Strichartz estimates as in the analysis of $I_1$ above, we obtain the bound
\begin{multline}\label{greve2}
|(\ref{derm})|\leq
\sum_{L_0,L_1,L_2,L_3,L_4-{\rm dyadic }}\,
\sum_{\stackrel{N_3\geq N_1\geq N_4, N_2\geq N_3\geq N_4, N_2\geq N_0}{N_0,N_1,N_2,N_3,N_4-{\rm dyadic }}}
\\
L_{0}^{\beta-b'}(L_1L_2L_3L_4)^{\beta-b}\Big(\frac{N_0}{N_2}\Big)^{\sigma}\,\,
\frac{N_4^{(1-\sigma)(\alpha-2)}}
{(N_1 N_3)^{\sigma-\varepsilon}}Q,
\end{multline}
where $Q$ is defined similarly to (\ref{Q}) with one additional factor in the
product, i.e. the product runs from $1$ to $4$ instead of $1$ to $3$.
With (\ref{greve2}) at our disposal, we can conclude exactly as in the proof
of (\ref{main1}).
Let us suppose finally that $N_1\leq N_4$. In this case we put the term
involving $\Delta_{N_1}$ in $L^\infty$ and perform the bilinear estimates with
the terms involving $N_0$, $N_2$, $N_3$, $N_4$ to get
\begin{multline*}
|(\ref{derm})|\leq
\sum_{L_0,L_1,L_2,L_3,L_4-{\rm dyadic }}\,
\sum_{\stackrel{N_1\leq N_4, N_2\geq N_3\geq N_4, N_2\geq N_0}{N_0,N_1,N_2,N_3,N_4-{\rm dyadic }}}
\\
L_{0}^{\beta-b'}(L_1L_2L_3L_4)^{\beta-b}\Big(\frac{N_0}{N_2}\Big)^{\sigma}\,\,
\frac{N_1^{1-\sigma}N_4^{\max(0,(1-\sigma)(\alpha-3))}}
{(N_3 N_4)^{\sigma-\varepsilon}}Q,
\end{multline*}
where $Q$ is defined similarly to (\ref{Q}).
Once again we conclude similarly to the proof
of (\ref{main1}).
{\bf Case 2.}
If $N_{0}\geq \max(N_1,N_2)$, then we integrate by parts with the aid of
$\Delta_{N_0}(w_4)$ and the analysis is very similar to the bound for
$I_2$ in the proof of (\ref{main1}).
\\
This completes the proof of Proposition~\ref{main}.
\end{proof}
\begin{remarque}
We refer to \cite{Ramona}, where an analysis similar to the proof of Proposition~\ref{main} is performed.
In \cite{Ramona}, one proves bilinear Strichartz estimates for the free evolution and by the transfer principle
of \cite{BGT} these estimates are transformed to estimates involving the projector $\Delta_{N,L}$.
This approach is slightly different from the approach used in \cite{Tz}, based on direct bilinear estimates for
functions enjoying localization properties similar to $\Delta_{N,L}(u)$.
\end{remarque}
\section{Local analysis for NLS and the approximating ODE}
In this section, we state the standard consequence of Proposition~\ref{main} to the local well-posedness of
(\ref{1}) and (\ref{N}).
For $T>0$, we define the restriction spaces
$X^{\sigma,b}_{rad}([-T,T]\times\Theta)$, equipped with the natural norm
$$
\|u\|_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}=
\inf\{
\|w\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)},\quad w\in
X^{\sigma,b}_{rad}(\mathbb R\times\Theta)\quad {\rm with}\quad w|_{]-T,T[}=u
\}.
$$
Similarly, for $I\subset\mathbb R$ an interval, we can define the restriction spaces
$X^{\sigma,b}_{rad}(I\times\Theta)$, equipped with the natural norm.
A Sobolev inequality with respect to the time variable yields,
\begin{equation*}
\|u\|_{L^{\infty}([-T,T]\,;\,H^{\sigma}_{rad}(\Theta))}\leq C_{b}
\|u\|_{X^{\sigma,b}_{rad}([-T,T]\times \Theta)},\quad b>\frac{1}{2}.
\end{equation*}
Thus for $b>1/2$ the space $X^{\sigma,b}_{rad}([-T,T]\times \Theta)$
is continuously embedded in $C([-T,T]\,;\,H^{\sigma}_{rad}(\Theta))$.
We shall solve (\ref{1}) for short times by applying the Banach contraction mapping principle to the
``Duhamel formulation'' of (\ref{1})
\begin{equation}\label{venda}
u(t)=e^{it\Delta}u_0-i\int_{0}^{t}e^{i(t-\tau)\Delta}F(u(\tau))d\tau\,,
\end{equation}
where $e^{it\Delta}$ denotes the free propagator.
\begin{remarque}\label{zabelejka}
{\rm
In (\ref{venda}), the operator $e^{it\Delta}$ is defined by the Dirichlet self-adjoint realization of the
Laplacian via the functional calculus of self-adjoint operators. As mentioned before the uniqueness
statements in the well-posedness results in this paper are understood as uniqueness results for
(\ref{venda}).
On the other hand, despite the low regularity situation in this paper,
the solutions of (\ref{venda}) we construct here have zero traces on $\mathbb R\times\partial\Theta$
(a general feature of the Dirichlet Laplacian we work with)
and thus the uniqueness issue can be studied in the context of the equation (\ref{1})
subject to zero boundary conditions on $\mathbb R\times\partial\Theta$.
If we set $S(t)=e^{it\Delta}$, then $S(t)e_{n}=e^{-itz_n^2}e_{n}$ and the norms in the Bourgain spaces
may be expressed as
$$
\|u\|_{X^{\sigma,b}_{rad}(\mathbb R\times\Theta)}=\|S(-t)u\|_{H^{\sigma,b}_{rad}(\mathbb R\times\Theta)},
$$
where $H^{\sigma,b}_{rad}(\mathbb R\times\Theta)$ is a classical anisotropic Sobolev space equipped with the norm
$$
\|v\|_{H^{\sigma,b}_{rad}(\mathbb R\times\Theta)}^{2}=\sum_{n\geq 1}z_{n}^{2\sigma}\|
\langle\tau\rangle^{b}\widehat{\langle v(t),e_{n}\rangle}(\tau)\|_{L^{2}(\mathbb R_{\tau})}^{2}\,,
$$
where again $\langle\cdot,\cdot\rangle$ stands for the $L^2(\Theta)$ pairing and $\widehat{\cdot}$ denotes
the Fourier transform on $\mathbb R$.
Therefore in the context of (\ref{venda}) we are in a situation where the Bourgain approach to well-posedness
of dispersive equations may be applied.
Let us also observe that the solutions of (\ref{venda}) we obtain here
solve (\ref{1}) in distributional sense (see e.g. \cite[Section~3.2]{BGT} for details on this point).
Let us finally remark that for $\sigma<1/2$ the spaces $H^{\sigma}_{rad}(\Theta)$ are independent of
the choice of the boundary conditions we work with. In particular, the space $H^s_{rad}(\Theta)$, on
which the invariant measure $d\rho$ is defined, is independent of the boundary conditions.
On the other hand both the dynamics and the Gibbs measure $d\rho$ do depend on
the choice of the boundary conditions.
}
\end{remarque}
Now we state the following standard consequence of Proposition~\ref{main} (see \cite{Gi} or
\cite[Proposition~6.3]{Tz}).
\begin{proposition}\label{duh}
Let $\max(1/3,1-2/\alpha)<\sigma_1\leq \sigma<1/2$. Then there exist two positive numbers $b,b'$ such
that $b+b'<1$, $b'<1/2<b$, there exists $C>0$ such that for every $T\in]0,1]$,
every $u,v\in X^{\sigma,b}_{rad}([-T,T]\times\Theta)$, every $u_0\in
H^{\sigma}_{rad}(\Theta)$,
\begin{equation*}
\big\|e^{it\Delta}u_0 \big\|_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}\leq
C\|u_0\|_{H^{\sigma}_{rad}(\Theta)}\, ,
\end{equation*}
\begin{multline*}
\Big\|\int_{0}^{t}e^{i(t-\tau)\Delta}F(u(\tau))d\tau\Big\|_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}
\leq
\\
\leq CT^{1-b-b'}\Big(1+\|u\|^{\max(2,\alpha)}_{X^{\sigma_{1},b}_{rad}([-T,T]\times\Theta)}\Big)
\|u\|_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}
\end{multline*}
and
\begin{multline*}
\Big\|\int_{0}^{t}e^{i(t-\tau)\Delta}(F(u(\tau))-F(v(\tau)))d\tau
\Big\|_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}
\leq
\\
\leq CT^{1-b-b'}\Big(1+\|u\|^{\max(2,\alpha)}_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}+
\|v\|^{\max(2,\alpha)}_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}\Big)
\|u-v\|_{X^{\sigma,b}_{rad}([-T,T]\times\Theta)}\,.
\end{multline*}
\end{proposition}
One may also formulate statements in the spirit of Proposition~\ref{duh}, where $[-T,T]$ is replaced by
an interval $I\subset\mathbb R$ of size one and $0$ by a point of $I$.
We also remark that the integral terms in Proposition~\ref{duh} are well-defined thanks to a priori
estimates in the Bourgain spaces (see e.g. \cite{Gi}).
Proposition~\ref{duh} implies (see \cite[Proposition~7.1]{Tz})
a local well-posedness result for the Cauchy problem
\begin{equation}\label{1bis}
(i\partial_t+\Delta)u-F(u)=0,\quad u|_{t=0}=u_0.
\end{equation}
\begin{proposition}\label{lwp}
Let us fix $\sigma_1$ and $\sigma$ such that $\max(1/3,1-2/\alpha)<\sigma_{1}\leq\sigma<1/2$.
Then there exist $b>1/2$, $\beta>0$, $C>0$, $\tilde{C}>0$, $c\in]0,1]$ such that for every
$A>0$ if we set $T=c(1+A)^{-\beta}$ then for every $u_0\in H^{\sigma_{1}}_{rad}(\Theta)$
satisfying $\|u_0\|_{H^{\sigma_{1}}}\leq A$ there exists a unique
solution $u$ of (\ref{venda}) in $X^{\sigma_{1},b}_{rad}([-T,T]\times \Theta)$.
Moreover $u$ solves (\ref{1bis}) and
$$
\|u\|_{L^{\infty}([-T,T];H^{\sigma_{1}}(\Theta))}
\leq
C\|u\|_{X^{\sigma_{1},b}_{rad}([-T,T]\times \Theta)}
\leq \tilde{C}\|u_0\|_{H^{\sigma_{1}}(\Theta)}\, .
$$
If in addition $u_0\in H^{\sigma}_{rad}(\Theta)$ then
$$
\|u\|_{L^{\infty}([-T,T];H^{\sigma}(\Theta))}\leq
C\|u\|_{X^{\sigma,b}_{rad}([-T,T]\times \Theta)}
\leq \tilde{C}\|u_0\|_{H^{\sigma}(\Theta)}\, .
$$
Finally if $u$ and $v$ are two solutions with data $u_0$, $v_0$ respectively,
satisfying
$$
\|u_0\|_{H^{\sigma_{1}}}\leq A,\quad \|v_0\|_{H^{\sigma_{1}}}\leq A
$$
then
$$
\|u-v\|_{L^{\infty}([-T,T];H^{\sigma_{1}}(\Theta))}\leq C\|u_0-v_0\|_{H^{\sigma_{1}}(\Theta)}\, .
$$
If in addition $u_0,v_0\in H^{\sigma}_{rad}(\Theta)$ then
$$
\|u-v\|_{L^{\infty}([-T,T];H^{\sigma}(\Theta))}\leq C\|u_0-v_0\|_{H^{\sigma}(\Theta)}\, .
$$
\end{proposition}
Since the projector $S_N$ acts nicely on the Bourgain spaces, Proposition~\ref{duh} also implies a
well-posedness result for the ODE (\ref{N}) (the important point being that the constants appearing
in the statement are independent of $N$).
\begin{proposition}\label{lwpbis}
Let us fix $\sigma_1$ and $\sigma$ such that $\max(1/3,1-2/\alpha)<\sigma_{1}\leq\sigma<1/2$.
Then there exist $b>1/2$, $\beta>0$, $C>0$, $\tilde{C}>0$, $c\in]0,1]$ such that for every $A>0$ if we set
$T= c(1+A)^{-\beta}$ then for every $N\geq 1$, every
$u_0\in E_{N}$ satisfying $\|u_0\|_{H^{\sigma_{1}}}\leq A$ there exists a unique
solution $u=S_{N}(u)$ of
(\ref{N}) in $X^{\sigma_{1},b}_{rad}([-T,T]\times \Theta)$. Moreover
\begin{equation*}
\|u\|_{L^{\infty}([-T,T];H^{\sigma_{1}}(\Theta))}
\leq
C\|u\|_{X^{\sigma_{1},b}_{rad}([-T,T]\times \Theta)}
\leq \tilde{C}\|u_0\|_{H^{\sigma_{1}}(\Theta)}\, .
\end{equation*}
If in addition $u_0\in H^{\sigma}_{rad}(\Theta)$ then
\begin{equation*}
\|u\|_{L^{\infty}([-T,T];H^{\sigma}(\Theta))}
\leq
C\|u\|_{X^{\sigma,b}_{rad}([-T,T]\times \Theta)}
\leq \tilde{C}\|u_0\|_{H^{\sigma}(\Theta)}\, .
\end{equation*}
Finally if $u$ and $v$ are two solutions with data $u_0$, $v_0$ respectively,
satisfying
$$
\|u_0\|_{H^{\sigma_{1}}}\leq A,\quad \|v_0\|_{H^{\sigma_{1}}}\leq A
$$
then
$$
\|u-v\|_{L^{\infty}([-T,T];H^{\sigma_{1}}(\Theta))}\leq C\|u_0-v_0\|_{H^{\sigma_{1}}(\Theta)}\, .
$$
If in addition $u_0,v_0\in H^{\sigma}_{rad}(\Theta)$ then
$$
\|u-v\|_{L^{\infty}([-T,T];H^{\sigma}(\Theta))}\leq
C
\|u_0-v_0\|_{H^{\sigma}(\Theta)}\, .
$$
\end{proposition}
\section{Long time analysis of the approximating ODE}
In this section we study the long time dynamics of
\begin{equation}\label{Nbis}
(i\partial_t+\Delta)u-S_{N}(F(u))=0,\quad u|_{t=0}\in E_N\,.
\end{equation}
Recall from the introduction that the measure $d\rho_N$ is invariant under the well-defined flow
of (\ref{Nbis}). Denote this flow by $\Phi_{N}(t):E_{N}\rightarrow E_{N}$, $t\in\mathbb R$.
We have the following statement.
\begin{proposition}\label{longtime}
There exists $\Lambda>0$ such that for every integer $i\geq 1$, every $\sigma\in[s,1/2[$, every $N\in\mathbb N$,
there exists a $\rho_{N}$ measurable set $\Sigma_{N,\sigma}^{i}\subset E_{N}$ such that :
\begin{itemize}
\item
$
\rho_{N}(E_{N}\backslash \Sigma_{N,\sigma}^{i})\leq 2^{-i}\,.
$
\item
$
u\in \Sigma_{N,\sigma}^{i}\,\Rightarrow\, \|u\|_{L^2(\Theta)}\leq \Lambda.
$
\item
There exists $C_{\sigma}$, depending on $\sigma$, such that for every $i\in\mathbb N$,
every $N\in\mathbb N$, every $u_0\in \Sigma_{N,\sigma}^{i}$, every $t\in\mathbb R$,
$
\|\Phi_{N}(t)(u_0)\|_{H^{\sigma}(\Theta)}\leq C_{\sigma}(i+\log(1+|t|))^{\frac{1}{2}}\,.
$
\item
For every $\sigma\in ]s,1/2[$, every $\sigma_1\in [s,\sigma[$, every $t\in\mathbb R$
there exists $i_{1}$ such that for every integer $i\geq 1$, every $N\geq 1$, if $u_0\in
\Sigma^{i}_{N,\sigma}$ then one has
$
\Phi_{N}(t)(u_0)\in \Sigma^{i+i_1}_{N,\sigma_1}.
$
\end{itemize}
\end{proposition}
\begin{remarque}
One may wish to see the invariance property of the sets $\Sigma^{i}_{N,\sigma}$ displayed
by the last assertion as a ``weak form of a conservation law''.
\end{remarque}
\begin{proof}
Let $\Lambda>0$ be such that $\chi(x)=0$ for $|x|>\Lambda$.
For $\sigma\in [s,1/2[$, $i,j$ integers $\geq 1$, we set
$$
B_{N,\sigma}^{i,j}(D_{\sigma})\equiv
\Big\{u\in E_{N}\,:\,\|u\|_{H^{\sigma}(\Theta)}\leq D_{\sigma}(i+j)^{\frac{1}{2}},\quad
\|u\|_{L^2(\Theta)}\leq \Lambda\Big\},
$$
where the number $D_{\sigma}\gg 1$ (independent of $i,j,N$) will be fixed later.
Thanks to Proposition~\ref{lwpbis}, there exist $c>0$, $C>0$, $\beta>0$ only depending on $\sigma$
such that if we set $\tau \equiv c D_{\sigma}^{-\beta}(i+j)^{-\beta/2}$ then for every $t\in[-\tau,\tau]$,
\begin{equation}\label{preser}
\Phi_{N}(t)\big(B_{N,\sigma}^{i,j}(D_{\sigma})\big)\subset
\Big\{u\in E_{N}\,:\,\|u\|_{H^{\sigma}(\Theta)}\leq C\,D_{\sigma}(i+j)^{\frac{1}{2}},\quad
\|u\|_{L^2(\Theta)}\leq \Lambda\Big\}\, .
\end{equation}
Next, following Bourgain \cite{Bo1}, we set
$$
\Sigma_{N,\sigma}^{i,j}(D_{\sigma})\equiv
\bigcap_{k=-[2^{j}/\tau]}^{[2^{j}/\tau]}\Phi_{N}(-k\tau)(B_{N,\sigma}^{i,j}(D_{\sigma}))\, ,
$$
where $[2^{j}/\tau]$ stands for the integer part of $2^{j}/\tau$.
Using the invariance of the measure $\rho_{N}$ by the flow $\Phi_{N}$, we can write
\begin{eqnarray*}
\rho_{N}(E_{N}\backslash\Sigma_{N,\sigma}^{i,j}(D_{\sigma}))
& = &
\rho_{N}\Big(\bigcup_{k=-[2^{j}/\tau]}^{[2^{j}/\tau]}
\big(E_{N}\backslash\Phi_{N}(-k\tau)\big(B_{N,\sigma}^{i,j}(D_{\sigma})\big)\big)\Big)
\\
& \leq &
(2[2^{j}/\tau]+1)\rho_{N}(E_N\backslash B_{N,\sigma}^{i,j}(D_{\sigma}))
\\
& \leq &
C2^{j}D_{\sigma}^{\beta}(i+j)^{\beta/2}\rho_{N}(E_N\backslash B_{N,\sigma}^{i,j}(D_{\sigma}))\,.
\end{eqnarray*}
Using the support property of $\chi$, we observe that
the set $\{u\in E_{N}\,:\, \|u\|_{L^2(\Theta)}>\Lambda\}$ is of zero $\rho_N$ measure and therefore
\begin{eqnarray}
\rho_{N}(E_N\backslash B_{N,\sigma}^{i,j}(D_{\sigma})) = \tilde{\rho}_{N}
\Big(u\in H^{s}_{rad}(\Theta)\,:\,\|S_{N}(u)\|_{H^{\sigma}(\Theta)}> D_{\sigma}(i+j)^{\frac{1}{2}} \Big).
\end{eqnarray}
Therefore, using Lemma~\ref{gauss-bis}, we can write
\begin{equation}\label{zvez}
\rho_{N}(E_{N}\backslash\Sigma_{N,\sigma}^{i,j}(D_{\sigma}))\leq
C 2^{j}D_{\sigma}^{\beta}(i+j)^{\beta/2}e^{-cD_{\sigma}^2(i+j)}\leq 2^{-(i+j)},
\end{equation}
provided $D_{\sigma}\gg 1$, depending on $\sigma$ but independent of $i,j,N$.
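For the reader's convenience, let us sketch why such a choice of $D_{\sigma}$ (depending only on $\sigma$)
is indeed possible; the constants below are crude and this is only one possible bookkeeping.
Using $2^{j}\leq e^{(i+j)\log 2}$ and $(i+j)^{\beta/2}\leq e^{\beta(i+j)/2}$,
the middle term of (\ref{zvez}) is bounded by
$$
C\,D_{\sigma}^{\beta}\,e^{(i+j)\left(\frac{\beta}{2}+\log 2-cD_{\sigma}^{2}\right)},
$$
and since $i+j\geq 2$, the requested bound $2^{-(i+j)}$ follows as soon as
$$
cD_{\sigma}^{2}\geq \frac{\beta}{2}+2\log 2+\frac{1}{2}\log\left(C D_{\sigma}^{\beta}\right),
$$
which holds for all $D_{\sigma}$ large enough since the quadratic term dominates the logarithm.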
Thanks to (\ref{preser}), we obtain that for
$u_0\in\Sigma_{N,\sigma}^{i,j}(D_\sigma)$, the solution of (\ref{Nbis}) with data $u_0$ satisfies
\begin{equation}\label{jjj1}
\|\Phi_{N}(t)(u_0)\|_{H^{\sigma}(\Theta)}\leq CD_{\sigma}(i+j)^{\frac{1}{2}},\quad |t|\leq 2^{j}\,.
\end{equation}
Indeed, for $|t|\leq 2^{j}$, we may find an integer $k\in [-[2^{j}/\tau],[2^{j}/\tau]]$ and
$\tau_1\in [-\tau,\tau]$ so that $t=k\tau+\tau_1$ and thus
$u(t)=\Phi_{N}(\tau_1)\big(\Phi_{N}(k\tau)(u_0)\big)$.
Since $u_0\in\Sigma_{N,\sigma}^{i,j}(D_\sigma)$ implies that $\Phi_{N}(k\tau)(u_0)\in
B_{N,\sigma}^{i,j}(D_\sigma)$, we may apply (\ref{preser}) and arrive at (\ref{jjj1}).
Next, we set
$$
\Sigma_{N,\sigma}^{i}=\bigcap_{j= 1}^{\infty}\Sigma_{N,\sigma}^{i,j}(D_{\sigma})\,.
$$
Thanks to (\ref{zvez}),
\begin{equation*}
\rho_{N}(E_{N}\backslash \Sigma_{N,\sigma}^{i})\leq 2^{-i}\,.
\end{equation*}
In addition, using (\ref{jjj1}), we get that there exists $C_{\sigma}$ such that for every $i$, every
$N$, every $u_0\in \Sigma_{N,\sigma}^{i}$, every $t\in \mathbb R$,
$$
\|\Phi_{N}(t)(u_0)\|_{H^{\sigma}(\Theta)}\leq C_{\sigma}(i+\log(1+|t|))^{\frac{1}{2}}\,.
$$
Indeed for $t\in \mathbb R$ there exists $j\in\mathbb N$ such that $2^{j-1}\leq 1+|t|\leq 2^j$ and we apply
(\ref{jjj1}) with this $j$.
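Explicitly, with this choice of $j$ one has $j\leq 1+\log_{2}(1+|t|)$, and since $i\geq 1$,
$$
(i+j)^{\frac{1}{2}}\leq\Big(i+1+\frac{\log(1+|t|)}{\log 2}\Big)^{\frac{1}{2}}
\leq C\big(i+\log(1+|t|)\big)^{\frac{1}{2}},
$$
so that (\ref{jjj1}) indeed yields the claimed logarithmic bound, after enlarging $C_{\sigma}$.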
Let us now turn to the proof of the last assertion.
Fix $t\in\mathbb R$ and $u_{0}\in \Sigma_{N,\sigma}^{i}$.
Since $u_{0}\in \Sigma_{N,\sigma}^{i}$, for every integer $j\geq 1$, we have the bound
$$
\|\Phi_{N}(t_1)(u_0)\|_{H^{\sigma}(\Theta)}\leq C_{\sigma}(i+j)^{\frac{1}{2}},\quad |t_1|\leq 2^j.
$$
Let $i_1\in\mathbb N$ (depending on $t$) be such that for every $j\geq 1$, $2^{j}+|t|\leq 2^{j+i_1}$.
Therefore, if we set $u(t)\equiv\Phi_{N}(t)(u_0)$, we have that
$$
\|\Phi_{N}(t_1)(u(t))\|_{H^{\sigma}(\Theta)}
=
\|\Phi_{N}(t+t_1)(u_0)\|_{H^{\sigma}(\Theta)}
\leq
C_{\sigma}(i+j+i_1)^{\frac{1}{2}},\quad |t_1|\leq 2^j.
$$
Thanks to the $L^2$ conservation law, for $u_0\in\Sigma^{i}_{N,\sigma}$ one has
$$
\|\Phi_{N}(t_1)(u(t))\|_{L^{2}(\Theta)}=\|u_0\|_{L^2(\Theta)}\leq \Lambda.
$$
Therefore
\begin{eqnarray*}
\|\Phi_{N}(t_1)(u(t))\|_{H^{\sigma_1}(\Theta)}
& \leq &
\|\Phi_{N}(t_1)(u(t))\|_{H^{\sigma}(\Theta)}^{\frac{\sigma_1}{\sigma}}
\|\Phi_{N}(t_1)(u(t))\|_{L^{2}(\Theta)}^{\frac{\sigma-\sigma_1}{\sigma}}
\\
& \leq &
[\Lambda]^{\frac{\sigma-\sigma_1}{\sigma}}\Big[C_{\sigma}(i+j+i_1)\Big]^{\frac{\sigma_1}{2\sigma}}\,.
\end{eqnarray*}
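Here the first line is the standard interpolation inequality on the Dirichlet scale: writing
$u=\sum_{n}c_{n}e_{n}$ in an $L^{2}$ eigenbasis of $-\Delta$ with eigenvalues $\lambda_{n}>0$
(we use this decomposition only for this elementary estimate), H\"older's inequality with
exponents $\sigma/\sigma_{1}$ and $\sigma/(\sigma-\sigma_{1})$ gives
$$
\|u\|_{H^{\sigma_{1}}(\Theta)}^{2}\approx\sum_{n}\lambda_{n}^{\sigma_{1}}|c_{n}|^{2}
=\sum_{n}\big(\lambda_{n}^{\sigma}|c_{n}|^{2}\big)^{\frac{\sigma_{1}}{\sigma}}\big(|c_{n}|^{2}\big)^{\frac{\sigma-\sigma_{1}}{\sigma}}
\leq\|u\|_{H^{\sigma}(\Theta)}^{\frac{2\sigma_{1}}{\sigma}}\,\|u\|_{L^{2}(\Theta)}^{\frac{2(\sigma-\sigma_{1})}{\sigma}}\,.
$$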
Let us fix $i_1\geq 1$ such that in addition to the property
$$
2^{j}+|t|\leq 2^{j+i_1},\quad \forall\,\, j\geq 1,
$$
we also have that for every $i,j\geq 1$,
$$
[\Lambda]^{\frac{\sigma-\sigma_1}{\sigma}}\Big[C_{\sigma}(i+j+i_1)\Big]^{\frac{\sigma_1}{2\sigma}}
\leq D_{\sigma_1}(i+j+i_1)^{\frac{1}{2}}\,.
$$
Thus
$$
\|\Phi_{N}(t_1)(u(t))\|_{H^{\sigma_1}(\Theta)}
\leq
D_{\sigma_1}(i+j+i_1)^{\frac{1}{2}}, \quad |t_1|\leq 2^j,
$$
i.e. for every $|t_1|\leq 2^j$ one has
$\Phi_{N}(t_1)(u(t))\in B_{N,\sigma_1}^{i+i_1,j}(D_{\sigma_1})$.
We can therefore conclude that $u(t)\in\Sigma_{N,\sigma_1}^{i+i_1,j}(D_{\sigma_1})$ for every $j\geq
1$. Hence
$
u(t)\in \Sigma_{N,\sigma_1}^{i+i_1}
$
and the restriction on $i_1$ depends only on $\sigma$, $\sigma_1$ and $t$.
This completes the proof of Proposition~\ref{longtime}.
\end{proof}
\section{Construction of the statistical ensemble (long time analysis for NLS)}
Let us set for integers $i\geq 1$, $N \geq 1$ and $\sigma\in[s,1/2[$,
$$
\tilde{\Sigma}_{N,\sigma}^{i}\equiv
\big\{
u\in H^{\sigma}_{rad}(\Theta)\,:\, S_{N}(u)\in \Sigma_{N,\sigma}^{i}
\big\}.
$$
Next, for an integer $i\geq 1$ and $\sigma\in[s,1/2[$, we set
$$
\Sigma_{\sigma}^{i}\equiv
\big\{
u\in H^{\sigma}_{rad}(\Theta)\,:\,\exists\, N_k\rightarrow\infty, N_k\in\mathbb N,\,
\exists\, u_{N_k}\in \Sigma_{N_k,\sigma}^{i},\,u_{N_k}\rightarrow u\,\, {\rm in}\, H^{\sigma}_{rad}(\Theta)
\big\}.
$$
We have the following statement.
\begin{lemme}\label{nicolas}
The set $\Sigma_{\sigma}^{i}$ is a closed set in $H^{\sigma}_{rad}(\Theta)$
(in particular $\rho$ measurable).
\end{lemme}
\begin{proof}
Let $(u_{m})_{m\in\mathbb N}$ be a sequence of $\Sigma_{\sigma}^{i}$ which converges to
$u$ in $H^{\sigma}_{rad}(\Theta)$. Our goal is to show that $u\in \Sigma_{\sigma}^{i}$.
Since $u_m\in \Sigma_{\sigma}^{i}$ there exist a sequence of integers
$N_{m,k}\rightarrow\infty$ as $k\rightarrow \infty$
and a sequence $(u_{N_{m,k}})_{k\in\mathbb N}$ of $\Sigma_{N_{m,k},\sigma}^{i}$ such that
\begin{equation}\label{noel}
\lim_{k\rightarrow\infty} u_{N_{m,k}}=u_{m}\quad \, {\rm in}\,\,\,\, H^{\sigma}(\Theta)\,.
\end{equation}
For every $j\in\mathbb N$, we can find $m_j\in \mathbb N$ such that
$$
\|u-u_{m_j}\|_{H^{\sigma}(\Theta)}<\frac{1}{2j}\,.
$$
Then, thanks to (\ref{noel}) (with $m=m_j$), for every $j\in\mathbb N$, we can find $N_{m_j,k_j}\in \mathbb N$
and $u_{N_{m_j,k_j}}\in\Sigma^{i}_{N_{m_j,k_j},\sigma}$ such that
$$
N_{m_j,k_j}>j,\quad \|u_{m_j}-u_{N_{m_j,k_j}}\|_{H^{\sigma}(\Theta)}<\frac{1}{2j}\,.
$$
Therefore, if we set
$v_{j}\equiv u_{N_{m_j,k_j}}$ and $M_{j}\equiv N_{m_j,k_j}$ then
$M_{j}\rightarrow \infty$ as $j\rightarrow\infty$, $v_{j}\in \Sigma^{i}_{M_j,\sigma}$
and $v_j\rightarrow u$ as $j\rightarrow\infty$ in $H^{\sigma}_{rad}(\Theta)$.
Consequently $u\in \Sigma_{\sigma}^{i}$.
This completes the proof of Lemma~\ref{nicolas}.
\end{proof}
We have the inclusion
$$
\limsup_{N\rightarrow\infty}\tilde{\Sigma}_{N,\sigma}^{i}
\equiv\bigcap_{N=
1}^{\infty}\bigcup_{N_1=N}^{\infty}\tilde{\Sigma}_{N_1,\sigma}^{i}
\subset \Sigma_{\sigma}^{i}.
$$
Indeed, if $u\in \limsup_{N\rightarrow\infty}\tilde{\Sigma}_{N,\sigma}^{i}$
then there exists a sequence of integers $(N_k)$ tending to infinity as
$k\rightarrow\infty$ such that $u\in \tilde{\Sigma}_{N_k,\sigma}^{i}$, i.e.
$S_{N_k}(u)\in \Sigma_{N_k,\sigma}^{i}$. Thus $u\in \Sigma_{\sigma}^{i}$
since $S_{N_k}(u)$ tends to $u$ in $H^{\sigma}_{rad}(\Theta)$.
Therefore
\begin{equation}\label{kr1}
\rho(\Sigma_{\sigma}^{i}) \geq \rho(\limsup_{N\rightarrow\infty}\tilde{\Sigma}_{N,\sigma}^{i})\,.
\end{equation}
Let us next show that
\begin{equation}\label{kr2}
\rho(\limsup_{N\rightarrow\infty}\tilde{\Sigma}_{N,\sigma}^{i})
\geq
\limsup_{N\rightarrow \infty}\rho(\tilde{\Sigma}_{N,\sigma}^{i})\,.
\end{equation}
Indeed, if we set $A_{N}\equiv \tilde{\Sigma}_{N,\sigma}^{i}$ and
$B_{N}\equiv H^s_{rad}(\Theta)\backslash A_{N}$ then
\begin{equation}\label{kr3}
\limsup_{N\rightarrow \infty}\rho(A_{N})=\limsup_{N\rightarrow \infty}\Big(\rho(H^s_{rad}(\Theta))-
\rho(B_{N})\Big)
=\rho(H^s_{rad}(\Theta))-\liminf_{N\rightarrow \infty}\rho(B_{N})\,.
\end{equation}
Using Fatou's lemma, we can obtain
\begin{equation*}
-\liminf_{N\rightarrow \infty}\rho(B_{N})\leq -\rho\Big(\liminf_{N\rightarrow \infty}B_{N}\Big),
\end{equation*}
where
$$
\liminf_{N\rightarrow \infty}B_{N}\equiv
\bigcup_{N= 1}^{\infty}\bigcap_{N_1=N}^{\infty}B_{N_1}\,.
$$
Therefore, coming back to (\ref{kr3}), we get
\begin{equation*}
\limsup_{N\rightarrow \infty}\rho(A_{N})\leq \rho\Big(
H^s_{rad}(\Theta)\backslash
\liminf_{N\rightarrow \infty}B_{N}
\Big)
=\rho\Big(\limsup_{N\rightarrow \infty}A_{N}\Big).
\end{equation*}
Therefore (\ref{kr2}) holds. Since
$$
\rho(\tilde{\Sigma}_{N,\sigma}^{i})=\int_{\tilde{\Sigma}_{N,\sigma}^{i}}f(u)d\mu(u)
$$
and
$$
\rho_{N}(\Sigma_{N,\sigma}^{i})=\int_{\Sigma_{N,\sigma}^{i}}f_{N}(u)d\mu_{N}(u)=\int_{\tilde{\Sigma}_{N,\sigma}^{i}}f_{N}(u)d\mu(u)
$$
thanks to Lemma~\ref{lem3}, we get
$$
\lim_{N\rightarrow \infty}\big(\rho(\tilde{\Sigma}_{N,\sigma}^{i})-\rho_{N}(\Sigma_{N,\sigma}^{i})\big)=0\,.
$$
Thus, using Proposition~\ref{longtime} and Theorem~\ref{thm2}, we obtain
\begin{equation}\label{kr4}
\limsup_{N\rightarrow \infty}\rho(\tilde{\Sigma}_{N,\sigma}^{i})
=
\limsup_{N\rightarrow \infty}\rho_{N}(\Sigma_{N,\sigma}^{i})
\geq
\limsup_{N\rightarrow \infty}\big(\rho_{N}(E_{N})-2^{-i}\big)
=
\rho\big(H^s_{rad}(\Theta)\big)-2^{-i}.
\end{equation}
Collecting (\ref{kr1}), (\ref{kr2}) and (\ref{kr4}), we arrive at
$$
\rho(\Sigma_{\sigma}^{i}) \geq \rho\big(H^s_{rad}(\Theta)\big)-2^{-i}.
$$
Now, we set
$$
\Sigma_{\sigma}\equiv\bigcup_{i\geq 1}\Sigma_{\sigma}^{i}\,.
$$
Thus $\Sigma_{\sigma}$ is of full $\rho$ measure.
It turns out that one has global existence for $u_0\in \Sigma_{\sigma}^{i}$.
\begin{proposition}\label{global_existence}
Let us fix $\sigma\in [s,1/2[$, $\sigma_1\in ]\max(1/3,1-2/\alpha),\sigma[$ and $i\in\mathbb N$.
Then for every $u_0\in \Sigma_{\sigma}^{i}$, the local solution $u$
of (\ref{1bis}) given by Proposition~\ref{lwp} is globally defined.
In addition there exists $C>0$ such that for every $u_0\in \Sigma_{\sigma}^{i}$,
\begin{equation}\label{growth}
\|u(t)\|_{H^{\sigma_1}(\Theta)}\leq C(i+\log(1+|t|))^{\frac{1}{2}}\,.
\end{equation}
Moreover, if $(u_{0,k})_{k\in\mathbb N}$, $u_{0,k}\in \Sigma^{i}_{N_k,\sigma}$,
$N_k\rightarrow\infty$ converges to $u_0$ as $k\rightarrow\infty$ in $H^{\sigma}_{rad}(\Theta)$ then
\begin{equation}\label{limit}
\lim_{k\rightarrow\infty}\|u(t)-\Phi_{N_k}(t)(u_{0,k})\|_{H^{\sigma_1}(\Theta)}=0\,.
\end{equation}
\end{proposition}
\begin{proof}
Let $u_0\in \Sigma_{\sigma}^{i}$ and let $(u_{0,k})$, $u_{0,k}\in \Sigma^{i}_{N_k,\sigma}$,
$N_k\rightarrow\infty$,
be a sequence
tending to $u_0$ in $H^{\sigma}_{rad}(\Theta)$. Let us fix $T>0$. Our aim is to extend the solution
of (\ref{1bis}) given by Proposition~\ref{lwp} to the interval $[-T,T]$.
Using Proposition~\ref{longtime}, we have that there exists a constant $C$ such that for every $k\in\mathbb N$,
every $t\in\mathbb R$,
\begin{equation}\label{ant}
\|\Phi_{N_k}(t)(u_{0,k})\|_{H^{\sigma}(\Theta)}\leq C(i+\log(1+|t|))^{\frac{1}{2}}\,.
\end{equation}
Therefore, if we set $u_{N_k}(t)\equiv \Phi_{N_k}(t)(u_{0,k})$ and
$\Lambda\equiv C(i+\log(1+T))^{\frac{1}{2}}$, we have the bound
\begin{equation}\label{david1}
\|u_{N_k}(t)\|_{H^{\sigma}}\leq \Lambda,\quad \forall\,|t|\leq T,\quad \forall\, k\in\mathbb N.
\end{equation}
In particular $\|u_0\|_{H^{\sigma}}\leq\Lambda$ (apply (\ref{david1}) with $t=0$ and let $k\rightarrow\infty$).
Let $\tau>0$ be the local existence time for (\ref{1bis}), provided by
Proposition~\ref{lwp} for $\sigma_1$, $\sigma$ and $A=\Lambda+1$.
Recall that we can assume $\tau=c(2+\Lambda)^{-\beta}$
for some $c>0$, $\beta>0$ depending only on $\sigma$ and $\sigma_1$.
We can of course assume that $T>\tau$.
Denote by $u(t)$ the solution of (\ref{1bis}) with data $u_0$ on the time interval $[-\tau,\tau]$. Then
$v_{N_k}\equiv u-u_{N_k}$ solves the equation
\begin{equation}\label{eqnv}
(i\partial_{t}+\Delta) v_{N_k} =F(u)-S_{N_k}(F(u_{N_k})), \quad v_{N_k}|_{t=0}=u_0-u_{0,k} \, .
\end{equation}
Next, we write
$$
F(u)-S_{N_k}(F(u_{N_k}))=S_{N_k}\big(F(u)-F(u_{N_k})\big)+(1-S_{N_k})F(u).
$$
Therefore
\begin{multline*}
v_{N_k}(t)=e^{it\Delta}(u_0-u_{0,k})
\\
-i\int_{0}^{t}e^{i(t-\tau)\Delta}S_{N_k}\big(F(u(\tau))-F(u_{N_k}(\tau))\big)d\tau
-i\int_{0}^{t}e^{i(t-\tau)\Delta}(1-S_{N_k})F(u(\tau))d\tau\,.
\end{multline*}
Let us observe that for $\sigma_1<\sigma$ the map $1-S_{N}$ sends $H^{\sigma}_{rad}(\Theta)$ to
$H^{\sigma_{1}}_{rad}(\Theta)$ with norm $\leq CN^{\sigma_{1}-\sigma}$.
Similarly, for $I\subset\mathbb R$ an interval, the map
$1-S_{N}$ sends $X^{\sigma,b}_{rad}(I\times\Theta)$ to
$X^{\sigma_{1},b}_{rad}(I\times\Theta)$ with norm $\leq CN^{\sigma_{1}-\sigma}$.
Moreover $S_{N}$ acts as a bounded operator (with norm $\leq 1$) on the Bourgain spaces
$X^{\sigma,b}_{rad}$. Therefore, using Proposition~\ref{duh}, we obtain that there exist $C>0$,
$b>1/2$ and $\theta>0$ (depending only on $\sigma$, $\sigma_1$) such that one has the bound
\begin{eqnarray*}
\|v_{N_k}\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}
& \leq &
C\Big(\|u_0-u_{0,k}\|_{H^{\sigma_1}(\Theta)}
\\
& &
+ \tau^{\theta}\|v_{N_k}\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}
\big(1+\|u\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}^{\alpha_1}+
\|u_{N_k}\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}^{\alpha_1}
\big)
\\
& & + \tau^{\theta}N_k^{\sigma_1-\sigma}\|u\|_{X^{\sigma,b}_{rad}([-\tau,\tau]\times\Theta)}
\big(
1+\|u\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}^{\alpha_1}
\big)\Big),
\end{eqnarray*}
where $\alpha_1\equiv\max(2,\alpha)$.
A use of Proposition~\ref{lwp} and Proposition~\ref{lwpbis} yields
\begin{eqnarray*}
\|v_{N_k}\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}
& \leq &
C\|u_0-u_{0,k}\|_{H^{\sigma_1}(\Theta)}
\\
& &
+
C\tau^{\theta}\|v_{N_k}\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}
(1+C\|u_{0}\|_{H^{\sigma_1}(\Theta)}^{\alpha_1}+C\|u_{0,k}\|_{H^{\sigma_1}(\Theta)}^{\alpha_1})
\\
& & +
C\tau^{\theta}N_k^{\sigma_1-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}
(1+C\|u_{0}\|_{H^{\sigma_1}(\Theta)}^{\alpha_1})
\\
& \leq &
C\|u_0-u_{0,k}\|_{H^{\sigma_1}(\Theta)}
+
C\tau^{\theta}(1+\Lambda)^{\alpha_1}\|v_{N_k}\|_{X^{\sigma_1,b}_{rad}([-\tau,\tau]\times\Theta)}
\\
& &
+
C\tau^{\theta}(1+\Lambda)^{\alpha_1}N_k^{\sigma_1-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}\,.
\end{eqnarray*}
Recall that $\tau=c(2+\Lambda)^{-\beta}$, where $c>0$ and $\beta>0$ depend only on
$\sigma$ and $\sigma_1$. In the last estimate the constants $C$ and $\theta$ also depend only on
$\sigma_1$ and $\sigma$. Therefore, if we assume that $\beta>\alpha_1/\theta$ then the restriction on $\beta$
still depends only on $\sigma_1$ and $\sigma$. Similarly, if we assume that $c$ is so small that
$$
C\tau^{\theta}(1+\Lambda)^{\alpha_1}\leq
Cc^{\theta}(2+\Lambda)^{-\beta\theta}(1+\Lambda)^{\alpha_1}
\leq Cc^{\theta}
<1/2
$$
then the smallness restriction on $c$ still depends only on $\sigma_1$ and $\sigma$.
Therefore, after possibly slightly modifying the values of $c$ and $\beta$
(keeping $c$ and $\beta$ depending only on $\sigma$ and $\sigma_1$ and independent of $N_k$)
in the definition of $\tau$, we have
\begin{eqnarray*}
\|v_{N_k}\|_{X^{\sigma_{1},b}_{rad}([-\tau,\tau]\times\Theta)}& \leq & C\|u_0-u_{0,k}\|_{H^{\sigma_1}(\Theta)}+
\frac{1}{2}N_k^{\sigma_1-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}
\\
& = &
C\|v_{N_k}(0)\|_{H^{\sigma_1}(\Theta)}+
\frac{1}{2}N_k^{\sigma_1-\sigma}\|u(0)\|_{H^{\sigma}(\Theta)}\,.
\end{eqnarray*}
Since $b>1/2$, the last inequality implies
\begin{equation}\label{vvvv}
\|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}\leq C\|v_{N_k}(0)\|_{H^{\sigma_1}(\Theta)}+
CN_k^{\sigma_1-\sigma}\|u(0)\|_{H^{\sigma}(\Theta)},\quad |t|\leq \tau= c(2+\Lambda)^{-\beta}\,,
\end{equation}
where the constants $c$, $C$ and $\beta$ depend only on $\sigma_1$ and $\sigma$.
Therefore, using that
$$
\lim_{k\rightarrow\infty}\|v_{N_k}(0)\|_{H^{\sigma_1}(\Theta)}=0,
$$
we obtain that
\begin{equation}\label{rub1}
\lim_{k\rightarrow\infty}\|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}=0,\quad |t|\leq \tau\,.
\end{equation}
Thus by taking $N_k$ large enough in (\ref{vvvv}) one has via a use of
the triangle inequality,
\begin{equation}\label{david2}
\|u(t)\|_{H^{\sigma_1}(\Theta)}\leq\|u_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}
+\|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}\leq\Lambda+1,\quad |t|\leq\tau.
\end{equation}
Let us define the function $g_k(t)$ by
$$
g_k(t)\equiv \|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}+N_k^{\sigma_1-\sigma}\|u(t)\|_{H^{\sigma}(\Theta)}\,.
$$
The function $g_k(t)$ is a priori defined only on $[-\tau,\tau]$. Our goal is to extend it on $[-T,T]$.
Using (\ref{vvvv}) and the bound
$$
\|u(t)\|_{H^{\sigma}(\Theta)}\leq C\|u(0)\|_{H^{\sigma}(\Theta)}, \quad |t|\leq \tau,
$$
provided by Proposition~\ref{lwp}, we obtain that there exists a constant $C(\sigma,\sigma_1)$ depending
only on $\sigma_1$ and $\sigma$ such that
$$
g_k(t)\leq C(\sigma,\sigma_1)g_k(0),\quad \forall\, t\in [-\tau,\tau]\,.
$$
We now repeat the argument for obtaining (\ref{vvvv}) on
$[\tau,2\tau]$ and thanks to the bounds (\ref{david1}) and (\ref{david2}), we obtain
that $v_{N_k}(t)$ and $u$ exist on $[\tau,2\tau]$ and one has the bound
\begin{equation*}
\|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}\leq C\|v_{N_k}(\tau)\|_{H^{\sigma_1}(\Theta)}+
CN_k^{\sigma_1-\sigma}\|u(\tau)\|_{H^{\sigma}(\Theta)},\quad t\in[\tau,2\tau]\,.
\end{equation*}
Therefore, thanks to (\ref{rub1}) (with $t=\tau$)
$$
\lim_{k\rightarrow\infty}\|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}=0,\quad \tau\leq t\leq 2\tau\,.
$$
By taking $N_k\gg 1$, we get via a use of the triangle inequality
\begin{equation*}
\|u(t)\|_{H^{\sigma_{1}}(\Theta)}\leq
\|u_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}
+
\|v_{N_k}(t)\|_{H^{\sigma_1}(\Theta)}
\leq
\Lambda+1,\quad \tau \leq t\leq 2\tau.
\end{equation*}
Using (\ref{vvvv}) and the bound
$$
\|u(t)\|_{H^{\sigma}(\Theta)}\leq C\|u(\tau)\|_{H^{\sigma}(\Theta)},\quad \tau\leq t\leq 2\tau,
$$
provided by Proposition~\ref{lwp}, we obtain that
$$
g_k(t)\leq C(\sigma,\sigma_1)g_k(\tau),\quad \forall\, t\in [\tau,2\tau].
$$
Then, we can continue by covering the interval $[-T,T]$ with intervals of size $\tau$, which yields
the existence of $u(t)$ on $[-T,T]$
(the point is that at each step the $H^{\sigma}$ norm of $u$ remains bounded by $\Lambda+1$
and the limit as $k\rightarrow\infty$ of the $H^{\sigma}$ norm of $v_{N_k}$ is zero).
Since $T>0$ was chosen arbitrarily, we obtain that for every
$u_0\in\Sigma_{\sigma}^{i}$ the local solution of (\ref{1bis}) is globally defined.
Moreover
$$
\|u(t)\|_{H^{\sigma_{1}}(\Theta)}\leq\Lambda+1,\quad |t|\leq T
$$
which by recalling the definition of $\Lambda$ implies the bound (\ref{growth}).
In addition, by iterating the bounds on $g_k$ obtained at each step,
we get the existence of a constant $C$ depending
only on $\sigma$ and $\sigma_1$ such that
$$
g_k(t)\leq e^{C(1+|t|)}g_k(0)
$$
which implies that there exists a constant $C$ depending only on $\sigma_1$ and $\sigma$ such that
$v_{N_k}$ enjoys the bound
\begin{equation*}
\|v_{N_k}(t)\|_{H^{\sigma_{1}}(\Theta)}\leq
C^{1+T}\Big(N_k^{\sigma_1-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}
+\|u_0-u_{0,k}\|_{H^{\sigma_1}(\Theta)}\Big),\quad |t|\leq T.
\end{equation*}
Therefore for every $\varepsilon>0$ there exists $N^{\star}$ such that for $N_k\geq N^\star$
one has the inequality
\begin{equation*}
\sup_{|t|\leq T}\|u(t)-\Phi_{N_k}(t)(u_{0,k})\|_{H^{\sigma_1}(\Theta)}<\varepsilon\,.
\end{equation*}
Hence we have (\ref{limit}).
This completes the proof of Proposition~\ref{global_existence}.
\end{proof}
By Proposition~\ref{global_existence}, we can define a flow $\Phi$ acting on
$\Sigma_{\sigma}$, $\sigma\in [s,1/2[$ and defining the global dynamics of (\ref{1bis}) for
$u_0\in \Sigma_{\sigma}$.
Let us now turn to the construction of a set invariant under $\Phi$.
Let $ l=(l_{j})_{j\in\mathbb N}$ be an increasing sequence of real numbers
such that $l_0=s$, $l_{j}<1/2$ and
$
\lim_{j\rightarrow\infty}l_{j}=1/2.
$
Then, we set
\begin{equation*}
\Sigma=\bigcap_{\sigma\in l}\Sigma_{\sigma}\,.
\end{equation*}
The set $\Sigma$ is of full $\rho$ measure. It is the one involved in the statement of Theorem~\ref{thm3}.
Using the invariance property of $\Sigma^{i}_{N,\sigma}$, we now obtain that the set $\Sigma$
is invariant under $\Phi$.
\begin{proposition}\label{compare}
For every $t\in\mathbb R$, $\Phi(t)(\Sigma)=\Sigma$.
In addition for every $\sigma\in l$,
$\Phi(t)$ is continuous with respect to the topology induced on $\Sigma$ by $H^\sigma_{rad}(\Theta)$.
In particular, the map
$\Phi(t):\Sigma\rightarrow\Sigma$ is a measurable map with respect to $\rho$.
\end{proposition}
\begin{proof}
Since the flow is time reversible, it suffices to show that
\begin{equation}\label{inclu}
\Phi(t)(\Sigma)\subset\Sigma,\quad \forall t\in\mathbb R\,.
\end{equation}
Indeed, if we suppose that (\ref{inclu}) holds true then for $u\in\Sigma$ and $t\in\mathbb R$,
we have that thanks to (\ref{inclu}) $u_0\equiv\Phi(-t)u\in\Sigma$ (recall that $\Phi$ is well-defined
on $\Sigma$ by Proposition~\ref{global_existence}) and thus
$u=\Phi(t)u_{0}$, i.e. $\Sigma\subset\Phi(t)(\Sigma)$.
Hence $\Phi(t)(\Sigma)=\Sigma$ is a consequence of (\ref{inclu}).
Let us now prove (\ref{inclu}).
Fix $u_0\in\Sigma$ and $t\in\mathbb R$. It suffices to show that for every $\sigma_1\in l$,
we have
$$
\Phi(t)(u_0)\in \Sigma_{\sigma_1}\,.
$$
Let us take $\sigma\in ]\sigma_1,1/2[$, $\sigma\in l$. Since $u_0\in\Sigma$, we have that
$u_0\in \Sigma_{\sigma}$.
Therefore there exists $i$ such that
$
u_0\in \Sigma^{i}_{\sigma}.
$
Let $u_{0,k}\in\Sigma^{i}_{N_k,\sigma}$, $N_k\rightarrow\infty$ be a sequence which
tends to $u_0$ in $H^{\sigma}(\Theta)$.
Thanks to Proposition~\ref{longtime} there exists $i_1$ such that
$$
\Phi_{N_k}(t)(u_{0,k})\in\Sigma^{i+i_1}_{N_k,\sigma_1}\,,\quad \forall\, k\in\mathbb N.
$$
Therefore using (\ref{limit}) of Proposition~\ref{global_existence}, we obtain that
$$
\Phi(t)(u_0)\in \Sigma^{i+i_1}_{\sigma_1}.
$$
Thus $\Phi(t)(u_0)\in\Sigma_{\sigma_1}$ which proves (\ref{inclu}).
Let us finally prove the continuity of $\Phi(t)$ on $\Sigma$ with respect to the $H^\sigma_{rad}(\Theta)$
topology.
Let $u\in\Sigma$ and $u_n\in\Sigma$ be a sequence such that $u_n\rightarrow u$ in $H^\sigma_{rad}(\Theta)$.
We need to prove that for every $t\in\mathbb R$,
$
\Phi(t)(u_n)\rightarrow \Phi(t)(u)
$
in $H^\sigma_{rad}(\Theta)$. Let us fix $t\in\mathbb R$.
Since $u\in\Sigma$ (and thus in all $\Sigma_{\sigma}$, $\sigma\in l$),
using Proposition~\ref{global_existence}, we obtain that there exists $C>0$ such that
\begin{equation}\label{kyoto}
\sup_{|\tau|\leq |t|}\|\Phi(\tau)(u)\|_{H^\sigma(\Theta)}\leq C(\log(2+|t|))^{\frac{1}{2}}\equiv \Lambda.
\end{equation}
Let us denote by $\tau_0$ the local existence time in Proposition~\ref{lwp}, associated to $\sigma$ and
$A=2\Lambda $. Then, by the continuity of the flow given by Proposition~\ref{lwp}, we have
$\Phi(\tau_0)(u_n)\rightarrow \Phi(\tau_0)(u)$ in $H^\sigma_{rad}(\Theta)$. Next, we cover
the interval $[0,t]$ by intervals of size $\tau_0$ and we apply the continuity
of the flow established in Proposition~\ref{lwp} at each step.
The applicability of Proposition~\ref{lwp} is possible thanks to the bound (\ref{kyoto}).
Therefore, we obtain that $\Phi(t)(u_n)\rightarrow \Phi(t)(u)$ in $H^\sigma_{rad}(\Theta)$.
This completes the proof of Proposition~\ref{compare}.
\end{proof}
\section{Proof of the measure invariance}
Fix $\sigma\in]s,1/2[$, $\sigma\in l$.
Thanks to the invariance by $\Phi$ of the set $\Sigma$,
using the regularity of the measure $\mu$ (which is a finite Borel measure) and Remark~\ref{rem},
we deduce that it suffices to prove the measure invariance for subsets $K$ of $\Sigma$
which are compacts of $H^s_{rad}(\Theta)$ and which are bounded in $H^\sigma_{rad}(\Theta)$.
Let us fix $t\in \mathbb R$ and a compact $K$ of $H^s_{rad}(\Theta)$ which is a bounded set in
$H^\sigma_{rad}(\Theta)$. Our aim is to show that
$
\rho(\Phi(t)(K))=\rho(K).
$
By the time reversibility of the flow, we may suppose that $t>0$.
Since $K$ is bounded in $H^\sigma_{rad}(\Theta)$ and a compact in $H^s_{rad}(\Theta)$,
using the continuity property displayed
by Proposition~\ref{compare} and Proposition~\ref{lwp}, we infer that there exists $R>0$ such that
\begin{equation}\label{krum}
\{\Phi(\tau)(K),\,\, 0\leq \tau\leq t\}\subset
\{u\in H^\sigma_{rad}(\Theta)\,:\, \|u\|_{H^\sigma(\Theta)}\leq R\}\equiv B_{R}\,.
\end{equation}
Indeed, the left-hand side of (\ref{krum}) is included in a sufficiently large
$H^s_{rad}(\Theta)$ ball thanks to the continuity property of the flow on
$H^s_{rad}(\Theta)$ shown in Proposition~\ref{compare} and the compactness of
$K$. Then, by iterating the propagation of regularity statement of
Proposition~\ref{lwp},
applied with $A$ such that the $H^s_{rad}(\Theta)$ ball centered at the origin of
radius $A$ contains the left-hand side of (\ref{krum}), we arrive at
(\ref{krum}) (observe that we only have the poor bound $R\sim e^{Ct}$).
\\
Let $c$ and $\beta$ (depending only on $s$ and $\sigma$)
be fixed by an application of Proposition~\ref{lwp} with $\sigma_1=s$ (and the same $\sigma$).
Next, we set
$$
\tau_0\equiv c_0(1+R)^{-\beta_0},
$$
where $0<c_0\leq c$, $\beta_0\geq \beta$, depending only on $s$ and $\sigma$,
are to be fixed in the next lemma which allows to compare $\Phi$ and $\Phi_N$ for data in $B_{R}$.
\begin{lemme}\label{nedelia}
There exist $c_0$ and $\beta_0$ depending only on $s$ and $\sigma$ such that
for every $\varepsilon>0$ there exists $N_0\geq 1$ such that for every $N\geq N_0$, every $u_0\in B_{R}$,
every $\tau\in [0,\tau_0]$,
$$
\|\Phi(\tau)(u_0)-\Phi_{N}(\tau)(S_{N}(u_0))\|_{H^s(\Theta)}<\varepsilon\,.
$$
\end{lemme}
\begin{proof}
For $u_0\in B_{R}$, we denote by $u$ the solution of (\ref{1bis}) with data $u_0$
and by $u_{N}$ the solution of (\ref{Nbis}) with data $S_{N}(u_0)$, defined on $[0,\tau_0]$.
Next, we set $v_{N}\equiv u-u_N$. Then $v_{N}$ solves
\begin{equation}\label{eqnvpak}
(i\partial_t+\Delta)v_N= F(u)-S_{N}(F(u_{N})), \quad v_{N}(0)=(1-S_{N})u_0\,.
\end{equation}
By writing
$$
F(u)-S_{N}(F(u_{N}))=S_{N}\big(F(u)-F(u_{N})\big)+(1-S_{N})F(u)
$$
and using Proposition~\ref{duh}, we obtain that there exists $b>1/2$ and $\theta>0$
depending only on $s$ and $\sigma$ such that one has
\begin{eqnarray*}
\|v_{N}\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}
& \leq &
CN^{s-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}
\\
& & + C\tau_0^{\theta}\|v_N\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}
\big(1+\|u\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}^{\max(2,\alpha)}
+\|u_{N}\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}^{\max(2,\alpha)}\big)
\\
& &
+ C\tau_0^{\theta}N^{s-\sigma}\|u\|_{X^{\sigma,b}_{rad}([0,\tau_0]\times\Theta)}
\big(1+\|u\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}^{\max(2,\alpha)}\big).
\end{eqnarray*}
Using Proposition~\ref{lwp} and Proposition~\ref{lwpbis}, we get
\begin{eqnarray*}
\|v_N\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}
& \leq &
CN^{s-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}
\\
& &
+C\tau_0^{\theta}\|v_N\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}(1+C\|u_{0}\|_{H^{s}(\Theta)}^{\max(2,\alpha)})
\\
& &
+C\tau_0^{\theta}N^{s-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}(1+C\|u_{0}\|_{H^{s}(\Theta)}^{\max(2,\alpha)})\,.
\end{eqnarray*}
Coming back to the definition of $\tau_0$
we can choose $c_0$ small enough and $\beta_0$ large enough,
but keeping their dependence only on $s$ and $\sigma$, to infer that
$$
\|v_N\|_{X^{s,b}_{rad}([0,\tau_0]\times\Theta)}\leq CN^{s-\sigma}\|u_0\|_{H^{\sigma}(\Theta)}.
$$
Since $b>1/2$, by the Sobolev embedding, the space $X^{s,b}_{rad}([0,\tau_0]\times\Theta)$ is continuously
embedded in $L^{\infty}([0,\tau_0];H^s_{rad}(\Theta))$ and thus there exists $C$ depending only on $s$,
$\sigma$ such that
\begin{equation*}
\|v_N(t)\|_{H^{s}(\Theta)}\leq CRN^{s-\sigma},\quad t\in [0,\tau_0].
\end{equation*}
This completes the proof of Lemma~\ref{nedelia}.
\end{proof}
It suffices to prove that
\begin{equation}\label{reduction}
\rho(\Phi(\tau)(K))=\rho(K),\quad \tau\in [0,\tau_0].
\end{equation}
Indeed, it suffices to cover $[0,t]$ by intervals of size $\tau_0$ and apply
(\ref{reduction}) at each step. Such an iteration is possible since
by the continuity property of $\Phi(t)$ at each
step the image remains a compact of $H^s_{rad}(\Theta)$ included in the ball $B_{R}$.
Let us now prove (\ref{reduction}).
Let $B_{\varepsilon}$ be the open ball in $H^{s}_{rad}(\Theta)$ centered at
the origin and of radius $\varepsilon$.
We have that $\Phi(\tau)(K)$ is a closed set of $H^{s}_{rad}(\Theta)$
contained in $\Sigma$. Therefore, by Theorem~\ref{thm2}, we can write
$$
\rho\Big(\Phi(\tau)(K)+\overline{B_{2\varepsilon}}\Big) \geq \limsup_{N\rightarrow \infty}
\rho_{N}\Big(\big(\Phi(\tau)(K)+\overline{B_{2\varepsilon}}\big)\cap E_{N}\Big)\,,
$$
where $\overline{B_{2\varepsilon}}$ is the closed ball in
$H^{s}_{rad}(\Theta)$, centered at the origin and of radius $2\varepsilon$.
Using Lemma~\ref{nedelia}, we obtain that for every $\varepsilon>0$, if we take $N$ large enough, we
have
$$
\big(\Phi_{N}(\tau)(S_{N}(K))+B_{\varepsilon}\big)\cap E_{N}
\subset
\big(\Phi(\tau)(K)+\overline{B_{2\varepsilon}}\big)\cap E_{N}
$$
and therefore
$$
\limsup_{N\rightarrow \infty}
\rho_{N}\Big(\big(\Phi(\tau)(K)+\overline{B_{2\varepsilon}}\big)\cap E_{N}\Big)
\geq
\limsup_{N\rightarrow \infty}
\rho_{N}\Big(
\big(\Phi_{N}(\tau)(S_{N}(K))+B_{\varepsilon}\big)\cap E_{N}
\Big).
$$
Next, using the Lipschitz continuity of the flow $\Phi_N$ (see Proposition~\ref{lwpbis}), we obtain that
there exists $c\in]0,1[$, independent of $\varepsilon$ such that for $N$ large enough, we have
$$
\Phi_{N}(\tau)\big((K+B_{c\varepsilon})\cap E_{N}\big)
\subset
\big(\Phi_{N}(\tau)(S_{N}(K))+B_{\varepsilon}\big)\cap E_{N},
$$
where $B_{c\varepsilon}$ is the open ball in $H^{s}_{rad}(\Theta)$ centered at the origin and
of radius $c\varepsilon$. Therefore
$$
\limsup_{N\rightarrow \infty}
\rho_{N}\Big(\big(\Phi_{N}(\tau)(S_{N}(K))+B_{\varepsilon}\big)\cap E_{N}\Big)
\geq
\limsup_{N\rightarrow \infty}\rho_{N}\Big(\Phi_{N}(\tau)\big(
(K+B_{c\varepsilon})\cap E_{N}\big)\Big)\, .
$$
Further, using the invariance of $\rho_N$ under $\Phi_N$, we obtain that
$$
\rho_{N}\Big(\Phi_{N}(\tau)\big((K+B_{c\varepsilon})\cap E_{N}\big)\Big)
=
\rho_{N}\Big((K+B_{c\varepsilon})\cap E_{N}\Big)
$$
and thus
$$
\limsup_{N\rightarrow \infty}\rho_{N}\Big(\Phi_{N}(\tau)\big((K+B_{c\varepsilon})\cap E_{N}\big)\Big)
\geq
\liminf_{N\rightarrow \infty}\rho_{N}\Big((K+B_{c\varepsilon})\cap E_{N}\Big).
$$
Finally, invoking once again Theorem~\ref{thm2}, we can write
$$
\liminf_{N\rightarrow \infty}\rho_{N}\Big((K+B_{c\varepsilon})\cap E_{N}\Big)
\geq
\rho(K+B_{c\varepsilon})\geq \rho(K).
$$
Therefore, we have the inequality
$$
\rho\Big(\Phi(\tau)(K)+\overline{B_{2\varepsilon}}\Big) \geq \rho(K).
$$
By letting $\varepsilon\rightarrow 0$, thanks to the dominated convergence theorem,
we obtain that
$$
\rho(\Phi(\tau)(K))\geq \rho(K).
$$
By the time reversibility of the flow we get $\rho(\Phi(\tau)(K))= \rho(K)$
and thus the measure invariance.
\\
This completes the proof of Theorem~\ref{thm3}.\qed
\section{Concerning the three dimensional case}
\subsection{General discussion}
The extension of the result to the 3d case is an interesting problem.
In this case one can still prove the measure existence. The Cauchy problem issue
is much more challenging. Despite the fact that the Cauchy problem for $H^{\sigma}$, $\sigma<1/2$ data
is ill-posed, in the sense of failure of continuity of the flow map (see the work of
Christ-Colliander-Tao \cite{CCT}, or the appendix of \cite{BGT}), we may hope that estimates on Wiener chaos
can help us to resolve globally (with uniqueness) the Cauchy problem a.s. on a suitable statistical
ensemble $\Sigma$ (which is included in the intersection of $H^{\sigma}$, $\sigma<1/2$ and misses $H^{1/2}$).
This would be an example showing the possibility to get strong solutions of a dispersive equation,
a.s. with respect to a measure, beyond the Hadamard well-posedness threshold.
In this section, we prove an estimate which shows that one has a control on the second Picard iteration,
in all $H^{\sigma}$, $\sigma<1/2$, a.s. with respect to the measure.
We will consider zonal solutions of the cubic defocusing NLS on the sphere $S^3$. The analysis of
this model has a lot of similarities with the analysis on the ball of $\mathbb R^3$
(which is the three dimensional analogue of (\ref{1})).
There are however some simplifications because of the absence of boundary on $S^3$ and a nice formula
for the products of zonal eigenfunctions.
In this section, we will benefit from some computations of the unpublished manuscript \cite{BGT-zonal}.
\subsection{Zonal functions on $S^3$ }
Let $S^3$ be the unit sphere in $\mathbb R^{4}$. If we consider functions on $S^3$ depending only on the geodesic
distance to the north pole, we obtain the zonal functions on $S^3$.
The zonal functions can be expressed in terms of zonal spherical harmonics which in their turn can be
expressed in terms of the classical Jacobi polynomials.
Let $\theta\in[0,\pi]$ be a local parameter measuring the geodesic distance to the north pole of $S^3$.
We define the space $L^{2}_{rad}(S^3)$ of zonal functions, equipped with the norm
$$
\|f\|_{L^{2}_{rad}(S^3)}=\Big(\int_{0}^{\pi}|f(\theta)|^{2}(\sin\theta)^{2} d\theta\Big)^{\frac{1}{2}},
$$
where $f$ is a zonal function on $S^3$ and $(\sin\theta)^{2} d\theta$ is the surface measure on $S^3$.
One can define similarly other functional spaces of zonal functions,
for example $L^{p}_{rad}(S^3)$, $H^{s}_{rad}(S^3)$ etc. The Laplace-Beltrami
operator on $L^2(S^3)$ can be restricted to
$L^{2}_{rad}(S^3)$ and in the coordinate $\theta$ it reads
$$
\frac{\partial^{2}}{\partial \theta^2}+\frac{2}{{\rm tg }\, \theta}\frac{\partial}{\partial\theta}
$$
since using the parametrization of $S^3$ in terms of $\theta$ and $S^{2}$, one can write,
$$
\Delta_{S^3}=\frac{\partial^{2}}{\partial \theta^2}+\frac{2}{{\rm tg }\, \theta}
\frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\Delta_{S^{2}}\,.
$$
It follows from the Sturm-Liouville theory (see also e.g. \cite{SW}) that an orthonormal basis of
$L^{2}_{rad}(S^3)$ can be built from the functions
$$
P_{n}(\theta)=\sqrt{\frac{2}{\pi}}\,\frac{\sin n\theta}{\sin\theta},\quad\theta\in[0,\pi],\quad n\geq 1,
$$
where $\theta$ denotes the geodesic distance to the north pole of $S^3$.
The functions $P_n$ are eigenfunctions of $-\Delta_{S^3}$
with corresponding eigenvalue $\lambda_{n}=n^2-1$.
We next define the function $\gamma : \mathbb N^{4}\longrightarrow \mathbb R$ by
$$
\gamma(n,n_1,n_2,n_3)\equiv\int_{S^3}
P_{n}P_{n_1}P_{n_2}P_{n_3}.
$$
Then clearly
$$
P_{n_1}P_{n_2}P_{n_3}
=
\sum_{n=1}^{\infty}\gamma(n,n_1,n_2,n_3)P_{n}
$$
and thus the behaviour of $\gamma$ would be of importance when analysing cubic expressions on $S^3$.
In the next lemma we give a bound for $\gamma(n,n_1,n_2,n_3)$.
\begin{lemme}\label{l1}
One has the bound
$
0\leq \gamma(n,n_1,n_2,n_3)\leq (2/\pi)\min(n,n_1,n_2,n_3).
$
\end{lemme}
\begin{proof}
Using the explicit formula for $P_n$ and some trigonometric considerations, we obtain the relation
\begin{equation}\label{basic}
P_{k}P_{l}=
\sqrt{\frac{2}{\pi}}\,
\sum_{j=1}^{\min(k,l)}
P_{|k-l|+2j-1},\quad k\geq 1,\, l\geq 1.
\end{equation}
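For completeness, let us indicate the trigonometric computation behind (\ref{basic}):
since $\sin(a\theta)\sin\theta=\frac{1}{2}\big(\cos((a-1)\theta)-\cos((a+1)\theta)\big)$,
the sum below telescopes,
$$
\sin\theta\sum_{j=1}^{\min(k,l)}\sin\big((|k-l|+2j-1)\theta\big)
=\frac{1}{2}\Big(\cos\big(|k-l|\theta\big)-\cos\big((k+l)\theta\big)\Big)
=\sin(k\theta)\sin(l\theta),
$$
where we used $|k-l|+2\min(k,l)=k+l$; dividing by $\sin^{2}\theta$ and taking into account the
normalisation $\sqrt{2/\pi}$ in the definition of $P_{n}$ gives (\ref{basic}).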
By symmetry we can suppose that $n_{1}\geq n_{2}\geq n_{3}\geq n$. Then due to (\ref{basic}) we obtain
that $P_{n}P_{n_3}$ can be
expressed as a sum of $n$ terms while the sum corresponding to $P_{n_1}P_{n_2}$ contains $n_2$ terms.
Since for $k\neq l$ one has
$\int_{S^3}P_{k}P_{l}=0$, we obtain that the contribution to $\gamma(n,n_1,n_2,n_3)$ of any of the terms
of the sum for $P_{n}P_{n_3}$
is not more than $2/\pi$ and therefore $\gamma(n,n_1,n_2,n_3)\leq (2/\pi)n$. This completes the proof of Lemma~\ref{l1}.
\end{proof}
We shall also make use of the following property of
$\gamma(n,n_1,n_2,n_3)$.
\begin{lemme}\label{l2}
Let $n>n_{1}+n_{2}+n_{3}$. Then $\gamma(n,n_1,n_2,n_3)=0.$
\end{lemme}
\begin{proof}
One needs simply to observe that in the spectral decomposition of $P_{n_1}P_{n_2}P_{n_3}$ there
are only spherical
harmonics of degree $\leq n_{1}+n_2+n_3$ and therefore $P_{n_1}P_{n_2}P_{n_3}$ is orthogonal to $P_n$.
This completes the proof of Lemma~\ref{l2}.
\end{proof}
\begin{remarque}
Let us observe that $(\pi/2)\gamma(n,n_1,n_2,n_3)\in\mathbb Z$. This fact is however not of importance for the sequel.
\end{remarque}
\subsection{The cubic defocusing NLS on $S^3$}
Consider the cubic defocusing nonlinear Schr\"odinger equation, posed on $S^3$,
\begin{equation}\label{5}
(i\partial_{t}+\Delta_{S^3})u-|u|^{2}u=0,
\end{equation}
where $u:\mathbb R\times S^3\longrightarrow \mathbb C$. By the variable change $u\rightarrow e^{it}u$,
we can reduce (\ref{5}) to
\begin{equation}\label{6}
(i\partial_{t}+\Delta_{S^3}-1)u-|u|^{2}u=0.
\end{equation}
We will perform our analysis on the equation (\ref{6}).
The Hamiltonian associated to (\ref{6}) is
$$
H(u,\bar{u})=\int_{S^3}|\nabla u|^{2}+\int_{S^3}|u|^{2}+\frac{1}{2}\int_{S^3}|u|^{4},
$$
where $\nabla$ denotes the riemannian gradient on $S^3$.
We will study zonal solutions of (\ref{6}), i.e. solutions such that $u(t,\cdot)$ is a zonal function on $S^3$.
Let us fix $s<1/2$.
The free measure, denoted by $\mu$, associated to (\ref{6}) is the distribution of the $H^s_{rad}(S^3)$
random variable
\begin{equation*}
\varphi(\omega,\theta)=
\sum_{n= 1}^{\infty}\frac{g_n(\omega)}{n}P_{n}(\theta)\, ,
\end{equation*}
where $g_n(\omega)$ is a sequence of centered, normalised, independent identically distributed (i.i.d.)
complex Gaussian random variables, defined in a probability space $(\Omega,{\mathcal F},p)$.
Using Lemma~\ref{l1}, we obtain that
$$
\|P_{n}\|_{L^4(S^3)}^{4}=\gamma(n,n,n,n)\leq (2/\pi)\,n,
\quad{\rm i.e.}\quad
\|P_{n}\|_{L^4(S^3)}\leq n^{\frac{1}{4}},
$$
and therefore, using Lemma~\ref{lem1}, as in the proof of Theorem~\ref{thm1},
we get
$$
\|\varphi(\omega,\theta)\|^{2}_{L^4(\Omega\times S^3)}\leq\sum_{n= 1}^{\infty}
\frac{C}{n^2}\|P_{n}\|_{L^4(S^3)}^{2}\leq C \sum_{n= 1}^{\infty}\frac{n^{\frac{1}{2}}}{n^2}<\infty\,.
$$
Hence the image measure on $H^s_{rad}(S^3)$ under the map
$$
\omega\longmapsto \sum_{n= 1}^{\infty}\frac{g_n(\omega)}{n}P_{n}(\theta)\, ,
$$
of
$$
\exp\Big(-\frac{1}{2}\|\varphi(\omega,\cdot)\|_{L^4(S^3)}^{4}\Big)dp(\omega)
$$
is a nontrivial measure which could be expected to be invariant under a flow of (\ref{6}).
For that purpose one should define global dynamics of (\ref{6}) on a set of full $\mu$ measure,
i.e. solutions of (\ref{6}) with data $\varphi(\omega,\theta)$ for
typical $\omega$'s. Using for instance the Fernique integrability theorem one has that
$\|\varphi(\omega,\cdot)\|_{H^{1/2}(S^3)}=\infty$ $\mu$ a.s. Thus one needs to establish a
well-defined (and stable in a suitable sense) dynamics for data of Sobolev regularity $<1/2$.
There is a major problem if one tries to solve this problem for individual
$\omega$'s since the result of \cite{CCT} (see also the appendix of \cite{BGT}) shows that (\ref{6}) is
in fact ill-posed for data of Sobolev regularity $<1/2$ and
the data giving the counterexample can be chosen to be a zonal function since the analysis uses only
point concentrations.
Therefore, it is possible that solving (\ref{6}) with data $\varphi(\omega,\theta)$, for typical $\omega$'s,
would require a probabilistic argument in the spirit
of the definition of the stochastic integration.
Below, we present an estimate which gives a control on the second Picard iteration with data
$\varphi(\omega,\theta)$.
\\
Let us consider the integral equation (Duhamel form) corresponding to (\ref{6}) with data
$\varphi(\omega,\theta)$
\begin{equation}\label{Duhamel}
u(t)=S(t)(\varphi(\omega,\cdot))-i\int_{0}^{t}S(t-\tau)(|u(\tau)|^{2}u(\tau))d\tau,
\end{equation}
where $S(t)=\exp(it(\Delta_{S^3}-1))$ is the unitary group generated by the free evolution.
The operator $S(t)$ acts as an isometry on $H^{s}(S^3)$ which can be easily seen by expressing $S(t)$
in terms of the spectral decomposition.
One can show (see \cite{BGT}) that for $s>1/2$, the Picard iteration applied in the context of
(\ref{Duhamel}) converges, if we replace $\varphi(\omega,\cdot)$
in (\ref{Duhamel}) by data in $u_0\in H^{s}(S^3)$, in the Bourgain spaces $X^{s,b}([-T,T]\times S^3)$,
where $b>1/2$ is close to $1/2$, $T\sim (1+\|u_0\|_{H^s(S^3)})^{-\beta}$
(for some $\beta>0$ depending on $b$ and $s$).
For the definition the Bourgain spaces $X^{s,b}([-T,T]\times S^3)$ associated
to $\Delta_{S^3}$, we refer to \cite{BGT} (see also (\ref{7}) below).
The modification for $\Delta_{S^3}-1$ is then direct.
Let us set (the first Picard iteration)
$$
u_{1}(\omega,t,\theta)\equiv S(t)(\varphi(\omega,\cdot))=
\sum_{n=1}^{\infty}\frac{g_{n}(\omega)}{n}P_{n}(\theta)e^{-itn^2} \,.
$$
The random variable $u_{1}$ represents the free evolution. Notice that again
$$
\|u_1(\omega,t,\cdot)\|_{H^{1/2}(S^3)}=\infty,\quad {\rm a.s.}
$$
but for every $\sigma<1/2$,
$$
\|u_1(\omega,t,\cdot)\|_{H^{\sigma}(S^3)}<\infty,\quad {\rm a.s.}
$$
Let us consider the second Picard iteration
$$
u_{2}(\omega,t,\theta)\equiv S(t)(\varphi(\omega,\cdot))-
i\int_{0}^{t}S(t-\tau)(|u_1(\omega,\tau)|^{2}u_1(\omega,\tau))d\tau\,.
$$
Set
$$
v_{2}(\omega,t,\theta)\equiv\int_{0}^{t}S(t-\tau)(|u_1(\omega,\tau)|^{2}u_1(\omega,\tau))d\tau\,.
$$
Thanks to the ``dispersive effect'', $v_{2}$ is again a.s. in all $H^{\sigma}(S^3)$ for $\sigma<1/2$.
\begin{proposition}\label{thm4}
Let us fix $\sigma<1/2$. Then for $b>1/2$ close to $1/2$ and every $T>0$,
$$
\|v_{2}(\omega,t,\theta)\|_{L^2(\Omega\,;\,X^{\sigma,b}([-T,T]\times S^3))}<\infty.
$$
In particular
$$
\|v_{2}(\omega,t,\theta)\|_{L^2(\Omega\,;\,L^{\infty}([-T,T]\,;\,H^{\sigma}(S^3)))}<\infty
$$
and thus $\|v_{2}(\omega,\cdot,\cdot)\|_{L^{\infty}([-T,T]\,;\,H^{\sigma}(S^3))}$ is a.s. finite
which implies that the second Picard iteration for (\ref{Duhamel}) is a.s. in $H^{\sigma}$.
\end{proposition}
\begin{remarque}
Using estimates on the third order Wiener chaos, we might show that higher moments and Orlicz norms
with respect to $\omega$ are finite.
\end{remarque}
\begin{proof}[Proof of Proposition~\ref{thm4}]
Let $\psi\in C_{0}^{\infty}(\mathbb R;\mathbb R)$ be a bump function localizing in $[-T,T]$.
Let $\psi_1\in C_{0}^{\infty}(\mathbb R;\mathbb R)$ be a bump function which equals one on the support of $\psi$.
Set
$$
w_{1}(t)\equiv \psi_1(t)u_{1}(t).
$$
Then using \cite{Gi}, for $b>1/2$ (close to $1/2$),
\begin{eqnarray*}
\|v_{2}(\omega,\cdot)\|_{X^{\sigma,b}([-T,T]\times S^3)}
& \leq &
\|\psi\, v_{2}(\omega,\cdot)\|_{X^{\sigma,b}(\mathbb R\times S^3)}
\\
& \leq &
C\||w_{1}(\omega,\cdot)|^{2}w_{1}(\omega,\cdot)\|_{X^{\sigma,b-1}(\mathbb R\times S^3)}.
\end{eqnarray*}
Set
$$
w(\omega,t,\theta)\equiv |w_{1}(\omega,t,\theta)|^{2}w_{1}(\omega,t,\theta).
$$
We need to show that the $L^2(\Omega)$ norm of $\|w(\omega,\cdot)\|_{X^{\sigma,b-1}(\mathbb R\times S^3)}$ is finite.
If
$$
w(\omega,t,\theta)=\sum_{n=1}^{\infty}c(\omega,n,t)P_{n}(\theta)
$$
then we have
\begin{equation}\label{7}
\|w(\omega,\cdot)\|_{X^{\sigma,b-1}(\mathbb R\times S^3)}^{2}
=
\sum_{n=1}^{\infty}\int_{-\infty}^{\infty}
\langle \tau+n^{2}\rangle^{2(b-1)}n^{2\sigma}|\widehat{c}(\omega,n,\tau)|^{2}d\tau,
\end{equation}
where $\widehat{c}(\omega,n,\tau)$ denotes the Fourier transform with respect to $t$ of $c(\omega,n,t)$.
Let us next compute $c(\omega,n,t)$. This will of course involve the function $\gamma$ introduced
in the previous section.
We have that
$$
w(\omega,t,\theta)=\psi_{1}^{3}(t)
\sum_{(n_1,n_2,n_3)\in\mathbb N^3}
\frac{g_{n_1}(\omega) \overline{g_{n_2}(\omega)} g_{n_3}(\omega) }
{n_1 n_2 n_3}P_{n_1}(\theta)P_{n_2}(\theta)P_{n_3}(\theta)
e^{-it(n_1^2-n_2^2+n_3^2)}
$$
and therefore
$$
c(\omega,n,t)=\psi_{1}^{3}(t)\sum_{(n_1,n_2,n_3)\in\mathbb N^3}\gamma(n,n_1,n_2,n_3)
\frac{g_{n_1}(\omega) \overline{g_{n_2}(\omega)} g_{n_3}(\omega) }
{n_1 n_2 n_3}e^{-it(n_1^2-n_2^2+n_3^2)}\,.
$$
If we denote $\psi_2=\psi_1^3$ then
$$
\widehat{c}(\omega,n,\tau)=\sum_{(n_1,n_2,n_3)\in\mathbb N^3}\gamma(n,n_1,n_2,n_3)
\frac{g_{n_1}(\omega) \overline{g_{n_2}(\omega)} g_{n_3}(\omega) }
{n_1 n_2 n_3}\widehat{\psi_2}\big(\tau+n_1^2-n_2^2+n_3^2\big)\,.
$$
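Here we used that, with the convention $\widehat{f}(\tau)=\int_{\mathbb R}e^{-it\tau}f(t)\,dt$
(any other normalisation only changes harmless constants), a time modulation translates the
Fourier variable:
$$
\widehat{\psi_{2}\,e^{-i\lambda\,\cdot}}(\tau)=\int_{\mathbb R}e^{-it(\tau+\lambda)}\psi_{2}(t)\,dt
=\widehat{\psi_{2}}(\tau+\lambda),
\qquad\lambda=n_{1}^{2}-n_{2}^{2}+n_{3}^{2}\,.
$$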
Let us observe that thanks to the independence of $(g_{n})_{n\in\mathbb N}$ we have that there are essentially
two different situations when the expression
\begin{equation}\label{combin}
\int_{\Omega}
g_{n_1}(\omega) \overline{g_{n_2}(\omega)} g_{n_3}(\omega)
\overline{g_{m_1}(\omega)} g_{m_2}(\omega)\overline{g_{m_3}(\omega)}
dp(\omega)
\end{equation}
is different from zero. Namely
\begin{itemize}
\item
$n_1=m_1$, $n_2=m_2$, $n_3=m_3$,
\item
$n_1=n_2$, $n_3=m_1$, $m_2=m_3$.
\end{itemize}
Indeed, the complex Gaussians $g_n$ satisfy
$$
\int_{\Omega}g_{n}(\omega)dp(\omega)=\int_{\Omega}g^2_{n}(\omega)dp(\omega)
=\int_{\Omega}g^3_{n}(\omega)dp(\omega)=\int_{\Omega}|g_{n}(\omega)|^2g_{n}(\omega)dp(\omega)=0
$$
and thus in order to have a nonzero contribution of (\ref{combin}) each Gaussian without a bar
in the integral (\ref{combin}) should be coupled with another Gaussian
having a bar and the same index.
Therefore coming back to (\ref{7}), we get
$$
\int_{\Omega}\|w(\omega,\cdot)\|_{X^{\sigma,b-1}(\mathbb R\times S^3)}^{2}\,dp(\omega)\leq C(I_1+I_2),
$$
where
\begin{equation}\label{8}
I_1=\sum_{(n,n_1,n_2,n_3)\in\mathbb N^4}\int_{-\infty}^{\infty}\frac{n^{2\sigma}}{\langle \tau+n^{2}\rangle^{\beta}}
\frac{\gamma^{2}(n,n_1,n_2,n_3)}{(n_1 n_2 n_3)^{2}}|\widehat{\psi_2}|^{2}\big(\tau+n_1^2-n_2^2+n_3^2\big)d\tau,
\end{equation}
with $\beta\equiv 2(1-b)$ and
\begin{equation}\label{8bis}
I_2=
\sum_{(n,n_1,n_2,n_3)\in\mathbb N^4}\int_{-\infty}^{\infty}\frac{n^{2\sigma}}{\langle \tau+n^{2}\rangle^{\beta}}
\frac{\gamma(n,n_1,n_1,n_2)\gamma(n,n_2,n_3,n_3)}{(n_1 n_1 n_2)(n_2 n_3 n_3)}
|\widehat{\psi_2}|^{2}\big(\tau+n_2^2\big)d\tau\,.
\end{equation}
Notice that $\beta<1$ is close to $1$ when $b>1/2$ is close to $1/2$.
Thus our goal is to show the convergence of (\ref{8}) and (\ref{8bis}).
For that purpose we appeal to the following lemma.
\begin{lemme}\label{ihp}
For every $\sigma\in]0,1/2[$ there exist $\beta<1$ and $C>0$ such that for every $\alpha\in\mathbb R$,
\begin{equation}\label{vlak}
\sum_{n=1}^{\infty}\frac{n^{2\sigma}}{(1+|n^2-\alpha|)^{\beta}}\leq
C(1+|\alpha|)^{\sigma}\,.
\end{equation}
\end{lemme}
\begin{proof}
Let $\beta<1$ be such that $2\beta-2\sigma>1$, i.e.
$
1/2+\sigma<\beta<1.
$
We prove (\ref{vlak}) for such values of $\beta$. The contribution of the region $\frac{1}{4}n^2\geq|\alpha|$
to the left-hand side of (\ref{vlak}) can be bounded by
$$
\sum_{n=1}^{\infty}\frac{n^{2\sigma}}{(1+\frac{3}{4}n^2)^{\beta}}\leq C_{\sigma}\leq
C_{\sigma}(1+|\alpha|)^{\sigma}
$$
thanks to the assumption $2\beta-2\sigma>1$ and since for $\frac{1}{4}n^2\geq|\alpha|$ one has
$|n^2-\alpha|\geq \frac{3}{4}n^2$. We next estimate the contribution of the region
$\frac{1}{4}n^2\leq|\alpha|$ (if it is not empty) by
$$
(4|\alpha|)^{\sigma}\sum_{n=1}^{\infty}\frac{1}{(1+|n^2-\alpha|)^{\beta}}
\leq C_{\sigma}|\alpha|^{\sigma}\,.
$$
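Let us also justify the uniform in $\alpha$ bound
$\sum_{n\geq 1}(1+|n^{2}-\alpha|)^{-\beta}\leq C_{\beta}$ implicitly used in the last step
(one possible argument; only $\beta>1/2$ matters). For $\alpha\leq 1$, one has
$1+|n^{2}-\alpha|\geq n^{2}$ and the sum is at most $\sum_{n\geq1}n^{-2\beta}<\infty$.
For $\alpha>1$, the terms with $n>2\sqrt{\alpha}$ satisfy $|n^{2}-\alpha|\geq\frac{3}{4}n^{2}$
and again contribute a convergent sum, while for $n\leq 2\sqrt{\alpha}$ we write
$|n^{2}-\alpha|=|n-\sqrt{\alpha}|(n+\sqrt{\alpha})\geq\sqrt{\alpha}\,|n-\sqrt{\alpha}|$;
at most three such $n$ satisfy $|n-\sqrt{\alpha}|\leq 1$, and for each integer $m\geq 1$
at most two satisfy $m\leq|n-\sqrt{\alpha}|<m+1$, so that
$$
\sum_{n\leq 2\sqrt{\alpha}}\big(1+|n^{2}-\alpha|\big)^{-\beta}
\leq 3+2\,\alpha^{-\beta/2}\!\!\sum_{1\leq m\leq 2\sqrt{\alpha}}\!\!m^{-\beta}
\leq 3+C\,\alpha^{-\beta/2}\,\alpha^{\frac{1-\beta}{2}}
=3+C\,\alpha^{\frac{1-2\beta}{2}}\leq C_{\beta},
$$
the last inequality using $\beta\geq 1/2$.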
This completes the proof of Lemma~\ref{ihp}.
\end{proof}
Let us now show the convergence of (\ref{8}).
Using the rapid decay of $|\widehat{\psi_2}|^{2}$, we can eliminate the $\tau$ integration and arrive at
\begin{equation}\label{9}
(\ref{8})\leq
C\sum_{(n,n_1,n_2,n_3)\in\mathbb N^4}
\frac{n^{2\sigma}\gamma^{2}(n,n_1,n_2,n_3)}{(1+|n^2-n_1^2+n_2^2-n_3^2|)^{\beta}(n_1 n_2 n_3)^{2}}\,.
\end{equation}
Using Lemma~\ref{l1} and Lemma~\ref{ihp}, we obtain that with a suitable choice of $\beta<1$ one has
\begin{eqnarray*}
(\ref{8})& \leq &
C\sum_{(n_1,n_2,n_3)\in\mathbb N^3}
\frac{(1+|n_1^2-n_2^2+n_3^2|)^{\sigma}(\min(n_1,n_2,n_3))^{2}}
{(n_1 n_2 n_3)^{2}}
\\
& \leq &
C\sum_{(n_1,n_2,n_3)\in\mathbb N^3}
\frac{(\max(n_1,n_2,n_3))^{2\sigma}(\min(n_1,n_2,n_3))^{2}}
{(n_1 n_2 n_3)^{2}}
\\
& \leq &
C\sum_{n_3\leq n_2\leq n_1}
\frac{n_1^{2\sigma}n_2 n_3}{(n_1 n_2 n_3)^{2}}
\leq
C\sum_{n_1=1}^{\infty}\frac{n_1^{2\sigma}(\log(1+n_1))^{2}}{n_1^2}<\infty.
\end{eqnarray*}
Let us next analyse (\ref{8bis}).
Using the rapid decay of $|\widehat{\psi_2}|^{2}$, we can eliminate the $\tau$ integration and arrive at
\begin{equation*}
(\ref{8bis})\leq
C\sum_{(n,n_1,n_2,n_3)\in\mathbb N^4}\frac{n^{2\sigma}}{(1+|n_2^{2}-n^{2}|)^{\beta}}
\frac{\gamma(n,n_1,n_1,n_2)\gamma(n,n_2,n_3,n_3)}{(n_1 n_1 n_2)(n_2 n_3 n_3)}\,.
\end{equation*}
Using Lemma~\ref{l1} and Lemma~\ref{ihp}, we obtain that with a suitable choice of $\beta<1$ one has
\begin{equation*}
(\ref{8bis})\leq
C\sum_{(n_1,n_2,n_3)\in\mathbb N^3}
\frac{n_2^{2\sigma} \min(n_1,n_2)\min (n_3,n_2) }{(n_1 n_2 n_3)^{2}}\,.
\end{equation*}
Let us fix $\varepsilon>0$ such that $\sigma+\varepsilon<1/2$.
Therefore, using $\min(a,b)\leq a^{1-\varepsilon}b^{\varepsilon}$ for integers $a,b\geq 1$, we can write
$$
(\ref{8bis})\leq
C\sum_{(n_1,n_2,n_3)\in\mathbb N^3}\frac{n_2^{2\sigma+2\varepsilon}(n_1 n_3)^{1-\varepsilon}}{(n_1 n_2 n_3)^{2}}
<\infty\,.
$$
This completes the proof of Proposition~\ref{thm4}.
\end{proof}
\section{Introduction}
The amount of data available and used for training public datasets is vast, yet there is an inherent bias in these datasets towards certain ethnic groups, such as Caucasian faces, as compared to other ethnicities such as Asian, African, and Indian. There is a clear need to mitigate this bias and to improve the fairness of face detection algorithms. Doing so will improve the efficiency and accuracy of Face Verification (FV), recognition, anonymization, and other use cases of face detection.
With the advent of publicly available images on social media and the internet, there is a need to protect personal privacy by performing face anonymization on these images. In this work, we propose an ML pipeline that detects faces using a robust multi-task cascaded CNN architecture, along with other pre-trained models such as VGGFace2 \cite{https://doi.org/10.48550/arxiv.1710.08092} and FaceNet \cite{Schroff_2015}, and anonymizes the detected faces by blurring them with a Gaussian function. We also benchmark the performance of certain custom and pre-trained models on various open-source datasets such as MIAP \cite{Schumann_2021}, FairFace \cite{https://doi.org/10.48550/arxiv.1908.04913}, and RFW \cite{https://doi.org/10.48550/arxiv.1812.00194} (Racial Faces in the Wild) to understand the bias of models trained on these datasets. Along with face anonymization, we also determine the age and gender demographics of the detected faces to find any bias present in open-source models. We further evaluate the performance of these open-source models before and after training them on a diverse, fairness-oriented dataset, by proposing a decentralized system of data evaluation and verification by users of the generated model output (faces detected in the input); see section \ref{dataPortal}.
Lastly, we also discuss ways to de-bias the data during pre-processing and post-processing and how to reduce false positives using clustering and statistical analysis of the generated output. We propose a decentralized platform for data collection and annotation, with user incentives for flagging machine-undetected faces in images, as part of an initiative to increase model fairness and reduce ethnicity, age, and gender bias.
\section{Related Work}
Current computer-vision systems achieve impressive results in several areas, but several societal issues related to demographics, ethnicity, gender, age, etc. have been discussed more recently due to their usage in face recognition, object detection and other applications \cite{terhorst2020face} \cite{terhorst2021comprehensive} \cite{klare2012face}. Most image recognition algorithms show a high disparity in performance on images from different parts of the world, as discussed in \cite{work-for-everyone} \cite{issuesinData} \cite{yang2020towards}, due to bias in the datasets used for training and also differences in the pipelines used. This bias is generally due to dataset disparity, since most of the open-source datasets created and benchmarked are localized to only a few regions, restricting diversity in data quality. Secondly, another set of related papers discusses harmful and mislabelled data associations, which can often lead to wrongful associations across gender and ethnicity groups in general, as discussed by Crawford et al. \cite{crawford2019excavating}. Other indicators that cause disparity in the performance of a face detection algorithm towards certain groups of people include bias in the learned representations or embeddings of users from underrepresented groups and other demographic traits. Raji et al. \cite{raji} discuss the reduction of errors in evaluating commercial face detectors by changing the evaluation metrics used. Ensuring privacy as part of a face recognition campaign is an equally important problem, and limited research has been done on the task of extracting and removing private and sensitive information from public datasets and image databases. A few previous works in the literature \cite{privacy1} \cite{privacy2} \cite{privacy3} blur the background or use Gaussian/pixelation functions to blur faces in an image.
To improve robustness and add fairness to the datasets and models used in the above problem setting, we propose a decentralized tool for collecting, annotating and verifying the face detections made by face recognition algorithms across different parts of the world. This ensures that the data samples collected are rich in diversity, helps identify the bias in current commercial and open-source models, and generates edge cases and training samples that can be used to retrain these detectors to improve the coverage of the data distribution learnt by our models.
\section{Methodology}
\vspace{-3pt}
We aim to build a robust face anonymization pipeline along with functionalities to determine the characteristics of the detected faces, as shown in Fig \ref{fig:faceanon}, on a decentralized platform for verification and annotation. We also try to estimate the bias towards certain ethnicities and characteristic features in some popular pre-trained model architectures such as MTCNN (Multi-task Cascaded CNN) \cite{Zhang_2016}, FaceNet \cite{Schroff_2015}, and RetinaNet \cite{https://doi.org/10.48550/arxiv.1708.02002} against the open-source datasets used for understanding and evaluating bias in face detectors.
\begin{figure}[ht]
\includegraphics[width=13.5cm, height=9cm]{architecture.png}
\caption{End-to-End Architecture of Face Anonymization and attribute extraction}
\label{fig:faceanon}%
\end{figure}
\subsection{Datasets}
In order to understand ethnicity, age, and gender bias, it is important to evaluate detection performance across different ethnicities as a binary task (face detected vs. undetected), and to check whether some ethnicity classes have stronger attribute indicators than the rest. The following datasets are a good benchmark for determining bias, since each of them was labeled and open-sourced with the diversity and inclusion of most ethnicities in mind.
\textbf{MIAP Dataset:} The MIAP (More Inclusive Annotations for People) dataset \cite{Schumann_2021} is a subset of the Open Images Dataset with a new set of annotations for all people found in these images, enabling fairness studies in face detection algorithms. The dataset contains new annotations for 100,000 images \textit{(a training set of 70k and a validation/test set of 30k images)}. Annotations include 454k bounding boxes along with age and gender group representations.
\textbf{FairFace Dataset:} FairFace \cite{https://doi.org/10.48550/arxiv.1908.04913}, a facial image database, contains nearly 100k images drawn from the YFCC-100M Flickr dataset \cite{Thomee_2016} and was created to reduce bias during training by having equal representation of classes. The dataset consists of 7 classes, namely White, Latino, Indian, East Asian, Southeast Asian, Black, and Middle Eastern. Models trained on FairFace have reported higher performance metrics \cite{https://doi.org/10.48550/arxiv.1908.04913} compared to other general datasets, and hence we include this dataset as part of our study.
\textbf{Racial Faces in the Wild (RFW):} The RFW \cite{https://doi.org/10.48550/arxiv.1812.00194} database primarily consists of four test subsets in terms of ethnic background, namely Indian, Asian, African, and Caucasian. Each subset consists of images for face verification, around 10k images of 3k individuals.
\subsection{Architecture}
The end-to-end pipeline uses multiple models to detect faces in the input images. MTCNN \cite{Zhang_2016} and VGGFace \cite{https://doi.org/10.48550/arxiv.1710.08092} are used to generate bounding boxes of the detected faces, after which we refine the output bounding boxes and extract the face image to generate a Gaussian-blurred image as part of our goal to anonymize the faces. These architectures are standard choices for face attribute extraction algorithms. A non-anonymized copy of the detected faces is used as input to the FaceNet \cite{Schroff_2015} model to generate the face embedding vectors. A minimal sketch of the blurring step is given below.
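For concreteness, the blurring step can be sketched as follows. This is a minimal illustration, assuming OpenCV-style image arrays and MTCNN-style $(x, y, w, h)$ bounding boxes; the kernel size and sigma are placeholder values, not our tuned settings.
\begin{verbatim}
import cv2

def anonymize_faces(image, boxes, kernel=(51, 51), sigma=30):
    """Return a copy of `image` with each detected face Gaussian-blurred.

    `boxes` is a list of (x, y, w, h) tuples, e.g. the 'box' field
    of MTCNN detections."""
    out = image.copy()
    h_img, w_img = image.shape[:2]
    for (x, y, w, h) in boxes:
        # Clip the box to the image frame (MTCNN can return negative offsets).
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        # Replace the face region with its Gaussian-blurred version.
        out[y0:y1, x0:x1] = cv2.GaussianBlur(out[y0:y1, x0:x1], kernel, sigma)
    return out
\end{verbatim}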
The \textbf{MTCNN} architecture proposed by Zhang et al. \cite{Zhang_2016} consists of three stages, each built around a neural network: the Proposal Network, the Refine Network, and the Output Network. The first stage uses a shallow CNN to generate candidate proposal windows, which the Refine Network filters and enhances with a deeper CNN. The Output Network refines the result of the previous stages and generates the face landmark positions. Since the architecture uses face landmark locations to estimate a face, we use it in our experiments to evaluate face recognition datasets for inherent bias.
\textbf{FaceNet}, proposed by Schroff et al. \cite{Schroff_2015}, outputs a 128-dimensional vector, also known as a face embedding, optimized to separate similar and dissimilar faces under Euclidean metrics. The architecture uses a triplet-based loss function, in which positive and negative samples are used to shape the distances between embeddings. For each face detected in an inferred image, an embedding is computed. We use FaceNet embeddings to cluster similar faces with DBSCAN \cite{10.5555/3001460.3001507} on the faces extracted by the MTCNN model. DBSCAN takes two parameters: the maximum distance between two instances for them to be grouped together, and the minimum number of points required to form a cluster; if the distance between two faces is large, they tend to fall into different clusters. PCA \cite{MACKIEWICZ1993303}, a popular dimensionality reduction technique, is used to reduce the 128-dimensional vectors to 2 dimensions so that the face clusters can be visualized when estimating bias in the algorithms.
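This clustering stage can be sketched as follows; it is a minimal illustration using scikit-learn, assuming a precomputed array of 128-dimensional FaceNet embeddings (the array name and the parameter values are illustrative, not the exact settings of our pipeline):
\begin{verbatim}
# Minimal sketch: DBSCAN on 128-d FaceNet embeddings, then PCA to
# 2 dimensions for visualization. Parameter values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def cluster_embeddings(embeddings, eps=0.7, min_samples=3):
    """embeddings: (n_faces, 128) array of FaceNet vectors."""
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="euclidean").fit_predict(embeddings)
    coords = PCA(n_components=2).fit_transform(embeddings)
    return labels, coords

# Example with random stand-in embeddings:
emb = np.random.randn(100, 128)
labels, coords = cluster_embeddings(emb)
\end{verbatim}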
Generally, the faces that popular pre-trained architectures fail to detect or misclassify form outliers or fall into the wrong clusters, which makes them easy to identify. We then employ clustering metrics to assess the embedding-based partitioning: the \textit{Mean Silhouette Coefficient}, which measures how similar an element is to its own cluster relative to other clusters, and the \textit{Davies-Bouldin index} \cite{petrovic2006comparison}.
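Both metrics are available in scikit-learn; a minimal sketch of how they could be computed on the clustered embeddings is given below (dropping DBSCAN noise points, labeled -1, is an assumption of this sketch):
\begin{verbatim}
# Sketch: Mean Silhouette Coefficient and Davies-Bouldin index on the
# clustered embeddings; DBSCAN noise points (label -1) are dropped.
# Both metrics require at least two clusters.
import numpy as np
from sklearn.metrics import silhouette_score, davies_bouldin_score

def cluster_quality(embeddings, labels):
    labels = np.asarray(labels)
    mask = labels != -1
    msc = silhouette_score(embeddings[mask], labels[mask])
    dbi = davies_bouldin_score(embeddings[mask], labels[mask])
    return msc, dbi
\end{verbatim}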
\subsection{Decentralized Data Collection Platform}
\label{dataPortal}
We propose a decentralized data platform for crowdsourcing images and image datasets. The purpose of the tool is to run inferences on images that users upload and perform face anonymization using our algorithm. This creates two opportunities for incentivizing users to use our tool:
\textbf{(1)} Users can annotate images directly on the interface in the case of undetected faces (false negatives) or wrong detections (false positives) and, after successful verification by verifiers, are incentivized in the form of bounties and revenue shares,
\textbf{(2)} Users can upload, annotate, and verify annotations of images while keeping ownership of their data: they grant the platform a license to use the data and in return receive a share of the revenue their contributions create, i.e., any revenue generated by models built on the dataset yields a royalty for the contributors. In addition, the missed edge cases (undetected faces and false positives) across images will be collected in the system and used to retrain the face detectors periodically, improving model performance, enriching fairness, and reducing the inherent bias in both the data and the trained model.
Distributing ownership of the dataset across creators, annotators, and verifiers democratizes ownership, so that no single central party controls it, and the revenue and value generated by the resulting algorithms and datasets can flow back to the community directly. The trained model and inference algorithm will be published on decentralized algorithm marketplaces, so that inference can run in decentralized compute environments, making it impossible to download or copy the model. For the end-to-end workflow, see Fig.~\ref{fig:dataPortalPipeline} in the Appendix.
\section{Results}
As seen in Table \ref{table:1}, we present statistical metrics to assess fairness across ethnicities in the RFW dataset using the MTCNN \cite{Zhang_2016} model and FaceNet embeddings. No single group consistently obtains better results on every metric, but a clear pattern of bias towards certain ethnicities emerges on deeper study. The prediction accuracy for the Asian (A) and Black (B) groups is lower than for the Indian (I) and White (W) groups; however, these differences alone are not large enough to establish bias. The Positive Predictive Value (PPV) and False Positive Rate (FPR) tell a clearer story: the FPR is significantly lower for White faces, and the PPV reaches 0.98 for White faces compared with only about 0.78 for the Asian group, indicating higher confidence and precision in detecting White faces than faces from other groups.
\begin{table}[h!]
\centering
\caption{Statistical metrics for RFW Dataset using pre-trained MTCNN + FaceNet embeddings}
\begin{tabular}{||c c c c c||}
\hline
\textbf{Metrics (M)} &\textbf{ Asian(A)} & \textbf{Indian(I)} & \textbf{Black(B)} & \textbf{White(W)} \\ [0.5ex]
\hline\hline
Prediction Accuracy & 0.91 & 0.95 & 0.92 & \textbf{0.97} \\
False Positive Rate & 0.07 & 0.04 & 0.08 & \textbf{0.005} \\
False Negative Rate & 0.05 & 0.08 & 0.04 & 0.14 \\
Positive Predictive Value & 0.78 & 0.93 & 0.82 & \textbf{0.98}\\ [1ex]
\hline
\end{tabular}
\vspace{4pt}
\label{table:1}
\end{table}
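The metrics in Table \ref{table:1} can be derived from the per-group confusion counts; a minimal sketch follows (the counts in the example are illustrative placeholders, not our actual data):
\begin{verbatim}
# Sketch: per-group detection metrics (accuracy, FPR, FNR, PPV) from
# raw confusion counts. The example counts are illustrative only.
def group_metrics(tp, fp, tn, fn):
    acc = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return {"accuracy": acc, "FPR": fpr, "FNR": fnr, "PPV": ppv}

counts = {"Asian": (780, 220, 2930, 70)}  # (tp, fp, tn, fn), made up
for group, c in counts.items():
    print(group, group_metrics(*c))
\end{verbatim}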
We also quantify the similarity between subjects within a given cluster, extracted and processed from FaceNet embeddings followed by dimensionality reduction, for both the MIAP and RFW datasets in Table \ref{table:2}. The trend in the mean silhouette coefficient (MSC) is that an attribute with a larger number of distinct clusters obtains a higher score, indicating higher similarity of elements to their own cluster. For MIAP \cite{Schumann_2021}, the MSC computed on the combined race-and-gender clusters is higher than when either attribute is clustered individually, indicating that the gender clusters within any race are closer and more correlated than clusters of two different ethnicities (racial groups). The Davies-Bouldin index shows a very similar pattern for the MIAP dataset: clusters are best separated when both attributes are combined, followed by racial groups alone, and finally gender alone.
\begin{table}[h!]
\centering
\caption{Clustering metrics for RFW and MIAP Dataset using MTCNN + FaceNet embeddings}
\begin{tabular}{||c c c c c||}
\hline
Metrics & RFW-Race & MIAP-Race & MIAP-Gender & MIAP-Both \\ [0.5ex]
\hline\hline
MSC & 0.12 & 0.16 & 0.09 & 0.19 \\
DBI & 4.21 & 3.89 & 6.47 & 3.64 \\ [1ex]
\hline
\end{tabular}
\vspace{4pt}
\label{table:2}
\end{table}
These results clearly demonstrate the need for a trained model that is unbiased across all ethnicities and gender groups. To enrich fairness in training, the MTCNN + FaceNet model was retrained on FairFace \cite{https://doi.org/10.48550/arxiv.1908.04913}, a balanced dataset with an equal distribution of all ethnicities and gender groups, with race labels adjusted to match the RFW and MIAP datasets. The increase in per-class prediction accuracy ranged from 1\% to 5.5\%, and the PPV improved by up to 19\% after retraining. As shown in Table \ref{table:3}, this is a clear improvement in model performance, indicating that an unbiased training dataset, combined with a few data augmentation techniques, can improve the model such that the results are not biased towards any single gender or racial group.
\begin{table}[h!]
\centering
\caption{Statistical metrics for RFW Dataset using FairFace trained MTCNN + FaceNet embeddings}
\begin{tabular}{||c c c c c||}
\hline
Metrics (M') & Asian(A) & Indian(I) & Black(B) & White(W) \\ [0.5ex]
\hline\hline
Prediction Accuracy & 0.96 & 0.95 & 0.97 & 0.98 \\
False Positive Rate & 0.01 & 0.01 & 0.008 & 0.005 \\
False Negative Rate & 0.05 & 0.03 & 0.04 & 0.04 \\
Positive Predictive Value & 0.93 & 0.92 & 0.94 & 0.98\\ [1ex]
\hline
\end{tabular}
\vspace{4pt}
\label{table:3}
\end{table}
Hence, as proposed in Section \ref{dataPortal}, the Data Portal will be used for the curation and publishing of various datasets with the support of annotators and verifiers. The incentive of ownership in dataset usage, together with bounties for labeling incorrectly detected faces and missed detections, also increases user engagement on the portal and encourages users to challenge the model. This allows us to periodically retrain the face anonymization models on various edge cases and improve the fairness of these models in a decentralized manner.
\section{Conclusion}
\vspace{-4pt}
In conclusion, we believe that measuring fairness in face anonymization algorithms is necessary to deploy technology that is unbiased and more inclusive of all ethnicities, genders, and age groups. We proposed a decentralized tool to improve the quality of the training datasets used in modeling face recognition algorithms, shifting the focus to identifying and quantifying bias in the core algorithm towards different groups and de-biasing it. The debiasing steps include both creating a diverse dataset with better representation of most demographics and retraining all layers of the core algorithm, so that the same model can be fine-tuned (dense layers only) periodically on the missed detections identified by the annotators and verifiers in the tool. The bias measurement framework was outlined in this paper.
In our analysis, we found that most face detection algorithms are predominantly biased towards White faces across both the MIAP and RFW datasets, irrespective of gender group: the clustered FaceNet embeddings showed that the clustering metrics were much higher when male and female faces were clustered together across all ethnicities than when clustered separately. This indicates a need for diversity in the dataset across all ethnicities; a dataset is more likely to be fair when its creation happens in a decentralized manner, with users across the world contributing images, identifying missed detections for a certain demographic group, or validating the corrected output of fellow users.
In future work, we will focus on answering the questions raised in the discussion above by breaking down the clusters in more detail, to help interpret the correlations that led the model to place certain points close together. We also plan to make the Data Portal public, with access for all users, so that users can upload their own data into the pipeline and be incentivized based on the usage of their data by any algorithm built on top of it. We further plan to improve the anonymization algorithm using a GAN-based approach to ensure that the data distribution of an anonymized face does not change completely. In addition, we plan to integrate Spotify's Annoy \cite{https://doi.org/10.48550/arxiv.1806.09823} to index similar faces across the Data Portal and find near-duplicate uploads, denoising the collected data.
\section{Appendix}
\subsection{Clustering similar faces using FaceNet Embeddings}
Figure \ref{fig:tsne} gives a visual representation of the faces in the RFW (Racial Faces in the Wild) dataset: the 128-dimensional vectors generated by FaceNet (face embeddings) are reduced to a 2-dimensional space with t-SNE and then clustered with the DBSCAN algorithm.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{facenet.png}
\caption{Face Embeddings visualized using t-SNE and DBSCAN}
\label{fig:tsne}
\end{figure}
As seen visually, similar demographic groups are clustered close to each other in the 2-D space. With different cluster-size settings, the density of the clusters changed accordingly; the number of clusters that gave the best clustering metrics was chosen for benchmarking the dataset's clustering metrics.
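A minimal sketch of this visualization is given below, assuming \texttt{emb} holds the (n\_faces, 128) FaceNet embedding array; the t-SNE and DBSCAN parameters are illustrative rather than the exact values used:
\begin{verbatim}
# Sketch of Fig. 2: t-SNE to 2-D, then DBSCAN on the projected points.
# `emb` is assumed to be an (n_faces, 128) embedding array.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

xy = TSNE(n_components=2, perplexity=30).fit_transform(emb)
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(xy)
plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=8, cmap="tab20")
plt.title("FaceNet embeddings: t-SNE + DBSCAN")
plt.show()
\end{verbatim}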
\subsection{Data Portal pipeline}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{dataPortal.png}
\caption{Proposed end-to-end working of the decentralized data portal}
\label{fig:dataPortalPipeline}
\end{figure}
\newpage
\section{Introduction}
Feature selection in high-dimensional multiresponse models with complex group structures is important in many scientific fields, especially in genetic and medical studies. Many studies on detecting associations between multiple traits and predictors have found that both responses and predictors are often embedded in biological functional groups, such as gene-gene associations \citep{park2008penalized,zhang2010new}, protein-DNA associations \citep{zamdborg2009discovery}, and brain fMRI-DNA association studies \citep{stein2010voxelwise,FAN2020106341}. These intrinsic group structures carry crucial information when modeling high-dimensional multiresponse models. However, these inherent and vital group structures are hard to distinguish.
Without considering group structures, regular linear models can be classified into univariate response models and multiresponse models. Researchers have proposed many methods for univariate response models. To select the true features in univariate response models, one way is to use penalized regularized regression approaches \citep{tibshirani1996regression, zou2006adaptive, zou2005regularization, fan2001variable, zhang2010nearly}; another efficient technique is to choose features step by step according to a specific criterion, the so-called sequential procedures \citep{wang2009forward, ing2011stepwise, cai2011orthogonal, luo2014sequential}. Without considering group structures, these approaches can be extended to multiresponse models by imposing specific constraints on the coefficient matrix $B$. Various forms of constraints have been proposed from different perspectives, for example on the singular values of $B$ \citep{yuan2007dimension}, on $\text{rank}(B)$ \citep{chen2012sparse}, or on norms or linear combinations of norms of $B$ \citep{turlach2005simultaneous, obozinski2011support, peng2010regularized}.
\red{In recent years, many innovative methods have been proposed for linear models in specific situations, such as regression with hidden variables \citep{bing2022adaptive}, low-rank regression \citep{cho2022multivariate}, and estimation of the rank of the coefficient matrix \citep{bing2019adaptive}.}
These methods are efficient for uncorrelated data, but they are not suitable when both the responses and the predictors share group structures \citep{biswas2012logistic,zhou2010association}.
With the increase of data dimensions, complex group structures are common in responses and predictors, and they seriously affect statistical modeling. Yet, due to this complexity, such data have seldom been studied. As far as we know, only a few works focus on high-dimensional multiresponse models with group structures: 1) multivariate sparse group lasso (MSGL) \citep{li2015multivariate}, 2) sequential canonical correlation search (SCCS) \citep{luo2020feature}, 3) Bayesian linear regression \citep{ning2020bayesian}. MSGL is inspired by the group lasso \citep{yuan2006model} and the sparse group lasso \citep{simon2013sparse}; \citet{li2015multivariate} extended the sparse group lasso to high-dimensional multivariate response models with group structures not only in the predictors but also in the responses. SCCS is a sequential method for the high-dimensional multivariate regression model that allows complex group structures in both responses and predictors. It first selects a coefficient block according to the canonical correlation coefficients and then selects nonzero coefficients within blocks by the EBIC \citep{chen2008extended}, and it often surpasses MSGL in both accuracy and computation. Further, \citet{XIA2022108745} studied a stepwise algorithm with a permutation importance measure, and \citet{luo2020feature} discussed the principle of correlations. Sequential procedures have also been widely studied in other fields, such as decision making \citep{LIU2020105642}, fMRI data analysis \citep{FAN2020106341}, multidimensional time series \citep{9064715}, structural changes in time series \citep{kejriwal2020robust}, and parameter estimation for radio equipment \citep{zaliskyi2021sequential}.
Inspired by the related research and driven by empirical requirements, we focus on using a sequential procedure to detect high-dimensional multiresponse models with group structures in both predictors and responses. In this paper, we propose a sequential method called sequential stepwise screening (SeSS) to detect the associations between predictor groups and response groups. Specifically, at each step, SeSS first selects a predictor group and a response group by computing the correlation between the predictor groups and the current residuals of the response groups, and then selects a predictor within the chosen predictor group based on the correlation between each predictor and the current residual of the response group. This strategy has two advantages: 1) it identifies the group structures and serves the purpose of dimension reduction; 2) it is the key to controlling the computational time of the method, rendering the computational complexity under complex group structures roughly comparable with regularized regression modeling. We show that the proposed method enjoys the required theoretical properties, and we demonstrate its effectiveness in simulations and applications. In the empirical analysis, we apply the method to a GeneChip (Affymetrix) microarray dataset and compare the results with SCCS.
The rest of the paper is arranged as follows. In section 2, we introduce the models and describe SeSS in detail. In section 3, the main theoretical properties are provided. We present the simulation studies for the comparison of SeSS with the SCCS in section 4. In section 5, the real data is analyzed.
\section{Model and Methods}
In this section, we consider the sparse linear model for high-dimensional data with multiple responses and complex group structures, i.e., responses and predictors grouped according to prior information.
We allow complex group structures, e.g., partial overlaps between groups, and each response group may be related to multiple predictor groups. To handle these structures and obtain accurate estimates, we introduce the following framework and notation, which mainly follow \citet{luo2020feature}:
Denote the $q$ responses by $\mathcal{Y} = \left \{ y_1,\dots,y_q \right \}$ and the $j$th response group by $\mathcal{Y}_j$ $(j = 1,\dots, J)$. Similarly, denote the $p$ predictors by $\mathcal{X}=\left \{ x_1, \dots, x_p \right \}$ and the $k$th predictor group by $\mathcal{X}_k$ $(k = 1,\dots,K)$. Groups can overlap; for example, if $\mathcal{Y}_1 = \{ y_1, y_2 \}$ and $\mathcal{Y}_2 = \{ y_2, y_3, y_4 \}$, then $\mathcal{Y}_1 \cap \mathcal{Y}_2 = \{ y_2 \}$. We have $ \mathcal{Y} =\cup^{J}_{j=1}\mathcal{Y}_j $ and $ \mathcal{X} =\cup^{K}_{k=1}\mathcal{X}_k $. We use $ | \mathcal{X}_k |$ to denote the number of variables in $\mathcal{X}_k$, and similarly for $\mathcal{Y}_j$. Both $p$ and $q$ are much larger than $n$, but the sizes of $\mathcal{Y}_j$ and $\mathcal{X}_k$ are smaller than $n$.
Consider a sample of size $n$, let $Y$ be the $n \times q$ response matrix, and $X$ be the $n \times p$ predictor matrix. We suppose that each column of $X$ and $Y$ is standardized, i.e., the mean is zero and the variance is $n$. We denote $Y_j$ the matrix which consists the columns in $Y$ corresponding to the responses in group $\mathcal{Y}_j$, and $X_k$ the matrix which consists the columns in $X$ corresponding to the predictors in group $\mathcal{X}_k$. Note that both $\mathcal{Y}_j$, $\mathcal{X}_k$ can be overlapped, thus $Y$ and $X$ are generally not equal to $(Y_1,Y_2,\dots,Y_J)$ and $(X_1,X_2,\dots,X_K)$. For the notation simplicity, we redeclare $X$ and $Y$ as $Y = (Y_1,Y_2,\dots,Y_J)$ and $X = (X_1,\dots,X_K)$. We can write the model as follows:
\begin{equation*}
\big( Y_1, \dots, Y_J \big)
 = \big( X_1, \dots, X_K \big)
\left( \begin{array}{ccc}
B_{11} & \cdots & B_{1J}\\
\vdots & \ddots & \vdots\\
B_{K1} & \cdots & B_{KJ}
\end{array}\right)
 + \big(\mathcal{E}_1, \dots, \mathcal{E}_J \big),
\end{equation*}
where $\mathcal{E}_j$'s are random error matrices and $B_{kj}$ are the coefficient blocks.
\red{Each coefficient block \(B_{kj}\) describes the correlation between the predictor group \(\mathcal{X}_k\) and the response group \(\mathcal{Y}_j\).}
Denote by $B$ the matrix consisting of all the $B_{kj}$ blocks. We aim to select the relevant predictors for each group, that is, to select the nonzero elements of each $B_{kj}$. To find the nonzero locations and estimate the coefficient matrix accurately, we propose a sequential procedure called SeSS. The procedure has two parts. The first part focuses on finding the nonzero locations: we select predictors for each response by first selecting nonzero blocks in $B$, then selecting nonzero rows in the selected block $B_{kj}$, and finally selecting nonzero elements in the selected rows. In the second part, based on the obtained result, we conduct least-squares regression of each response on the selected predictors and screen the predictors for each response by imposing a threshold on the least-squares estimates.
Two measures are used in the proposed method. One is the canonical correlation coefficient (CC), which is used in the first part to select nonzero blocks in $B$ and then nonzero rows in $B_{kj}$. Let $\tilde Y_j$ be the residual matrix of $Y_j$ after least-squares regression on the currently selected predictors. We write the canonical correlation matrix between \(X_k\) and \(\tilde Y_j\) as follows:
\[C_{kj} = \Sigma_k^{-1}\Xi_{kj}\Omega_j^{-1}\Xi_{kj}^{\top},\]
where $\Sigma_k$ and $\Omega_j$ denote the variance matrices of $\mathcal{X}_k$ and $\widetilde{\mathcal{Y}}_j$ respectively, and $\Xi_{kj}$ the covariance matrix between $\mathcal{X}_k$ and $\widetilde{\mathcal{Y}}_j$. It can be estimated by
\[\hat{C}_{k j}=\left(X_{k}^{\top} X_{k}\right)^{-1} X_{k}^{\top} \tilde{Y}_{j}\left(\tilde{Y}_{j}^{\top} \tilde{Y}_{j}\right)^{-1} \tilde{Y}_{j}^{\top} X_{k}.\]
Then we set the correlation measure as follows:
\[r(k, j)\triangleq r(X_k,\tilde Y_j) = tr(\hat{C}_{k j}).\]
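A minimal numerical sketch of this measure, written directly from the displayed estimator, is given below (the use of pseudo-inverses for numerical stability is our own choice rather than part of the original definition):
\begin{verbatim}
# Sketch: r(k, j) = tr(C_hat_{kj}) for a predictor block Xk (n x pk)
# and the current residual block Ytil_j (n x qj).
import numpy as np

def cc_trace(Xk, Ytil_j):
    Px = np.linalg.pinv(Xk.T @ Xk)
    Py = np.linalg.pinv(Ytil_j.T @ Ytil_j)
    C = Px @ Xk.T @ Ytil_j @ Py @ Ytil_j.T @ Xk
    return np.trace(C)
\end{verbatim}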
The other measure is the extended Bayesian information criterion (EBIC) \citep{chen2008extended}, which is used as the stopping rule when selecting nonzero elements in the selected rows. Denote by $r_{kj}$ the number of nonzero elements in $B_{kj}$. Let $\zeta$ be a selected model with $m$ nonzero coefficient blocks, let $\hat{B}^{(j)}(\zeta)$ be the least-squares estimate of the $j$th column block, or equivalently $(\hat{B}_{1j}, \dots, \hat{B}_{Kj})^T$, and let $\vert B_{k_{l}j_{l}} \vert$ denote the total number of elements of $B_{k_{l}j_{l}}$. Then we have
\begin{align*}
\text{EBIC}(\zeta) = & n\sum_{j=1}^{J}\ln\frac{1}{n} \big\| Y_j - X\hat{B}^{(j)}(\zeta)\big\|^2_F + \lambda_1\sum_{l=1}^{m}r_{k_lj_l}\ln n\\
 & + 2\lambda_2 \gamma \left( \ln \binom{KJ}{m} + \sum_{l=1}^{m}\ln \binom{\left | B_{k_lj_l} \right |}{r_{k_lj_l}} \right),
\end{align*}
where $\lambda_1$ and $\lambda_2$ are tuning parameters; $\| \cdot \|_F$ denotes the Frobenius norm; and $\gamma = 1 - \ln n/(2\ln p)$, following the setting of \citet{luo2020feature}. Based on the EBIC, we obtain $\tilde Y_j = Y_j - X\hat{B}^{(j)}(\zeta)$. A minimal sketch of this criterion is given below, after which we present the selection part of the proposed method.
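The following sketch assumes that the per-group residual sums of squares and the list of selected blocks have already been computed; the variable names and default tuning parameters are illustrative:
\begin{verbatim}
# Sketch of EBIC(zeta). rss_list[j] = ||Y_j - X Bhat^{(j)}||_F^2 and
# blocks is a list of (block_size, r_kj) pairs for the m selected
# blocks. lam1, lam2 are the tuning parameters lambda_1, lambda_2.
import numpy as np
from scipy.special import gammaln

def log_binom(a, b):
    return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

def ebic(rss_list, blocks, n, K, J, p, lam1=1.0, lam2=1.0):
    gamma = 1.0 - np.log(n) / (2.0 * np.log(p))
    m = len(blocks)
    fit = n * sum(np.log(rss / n) for rss in rss_list)
    pen1 = lam1 * sum(r for _, r in blocks) * np.log(n)
    pen2 = 2.0 * lam2 * gamma * (log_binom(K * J, m)
           + sum(log_binom(size, r) for size, r in blocks))
    return fit + pen1 + pen2
\end{verbatim}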
\begin{center}
Selection of SeSS
\end{center}
\vspace{-.5 cm}
Set $\zeta^* = \varnothing$.
\begin{itemize}
\item \textit{Step 1.} Selection of coefficient blocks: Compute $r(k, j)$ for all $k$ and $j$. If $r(k^*,{j^*})$ is the maximum, then $B_{k^*j^*}$, a $\left | \mathcal{X}_{k^*} \right | \times \left | \mathcal{Y}_{j^*} \right |$ matrix, is selected as the nonzero block in this step.
\end{itemize}
\begin{itemize}
\item \textit{Step 2.} Selection of coefficient rows: Let $B_{k^*j^*} = (b_{k^*j^*}^1, \dots, b_{k^*j^*}^{|\mathcal{X}_{k^*}|})^T$.
Compute $r(X_{k^*}^{a},\tilde{Y}_{j^*})$ for all $a = 1,\dots,\left|\mathcal{X}_{k^*}\right| $
and choose the maximum, denoted as $r(X_{k^*}^{(a^*)},\tilde Y_{j^*})$ where $X_{k^*}^{a}$ is the $a$th column of $X_{k^*}$. In this step, a nonzero row will be selected,
denoted as $(b_{k^*j^*}^{a^*})^T$.
\end{itemize}
\begin{itemize}
\item \textit{Step 3.} Selection of nonzero elements:
Let $\mathcal{B}=\left \{ \beta_{a^*m} \right \}$ where $m = 1,\dots,|\mathcal{Y}_{j^*}|$ be the set of
row $(b_{k^*j^*}^{a^*})^T$. Let $\zeta^*$ be the current model. Compute $ \text{EBIC}(\zeta^* \cup \beta_{a^*m})$ for all $\beta_{a^*m}$ in $\mathcal{B}$.
Let $\beta_{a^*\tilde{m}}$ achieve the minimum of $\{ \text{EBIC}(\zeta^*), \text{EBIC}(\zeta^* \cup \beta_{a^*m}), m = 1,\dots, |\mathcal{Y}_{j^*}|\}$. Update $\zeta^*$ to $\zeta^* \cup \beta_{a^*\tilde{m}}$ and simultaneously update $\mathcal{B}$ to $\mathcal{B} \setminus \beta_{a^*\tilde{m}}$. Repeat \textit{Step 3} until $\text{EBIC}(\zeta^* \cup \beta_{a^*\tilde{m}}) \geqslant \text{EBIC}(\zeta^*)$ or $\mathcal{B} \setminus \beta_{a^*\tilde{m}}$ is empty; then return to \textit{Step 2} and \textit{Step 1}.
\end{itemize}
When returning to \textit{Step 2}, check whether at least one element in $(b_{k^*j^*}^{a^*})^T$ has been selected. If yes, in \textit{Step 2} we update $B_{k^*j^*}$ as itself but without considering the row $(b_{k^*j^*}^{a^*})^T$. Otherwise, which rarely happens, no element in row $(b_{k^*j^*}^{a^*})^T$ has been selected; in this situation, we return directly to \textit{Step 1}. When returning to \textit{Step 1}, similarly check whether at least one element in $B_{k^*j^*}$ has been selected. If yes, the pair $(k^*, j^*)$ is no longer considered when computing $r(k, j)$ over all $(k, j)$ pairs. Otherwise, the selection stage of the SeSS algorithm is complete.
We then apply ordinary least squares combined with a threshold $\rho$ to obtain the estimates of the remaining entries of the coefficient matrix.
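A minimal sketch of this estimation step is given below; the support set is assumed to come from the selection stage, and the function name is illustrative:
\begin{verbatim}
# Sketch: OLS on the selected support of one response, followed by a
# hard threshold rho on the fitted coefficients.
import numpy as np

def ols_threshold(X, y, support, rho):
    """support: indices of the predictors selected for this response."""
    Xs = X[:, support]
    beta_s, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    beta = np.zeros(X.shape[1])
    beta[support] = np.where(np.abs(beta_s) > rho, beta_s, 0.0)
    return beta
\end{verbatim}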
Figure~\ref{fig 1} exhibits the selection details of the proposed method.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.13]{fig/process.png}
\caption{The selection details of the SeSS algorithm.\label{fig 1}}
\end{figure}
In the algorithm, we obtain the model within the reduced feature space by the EBIC and use the canonical correlation coefficient (CC) as the correlation measure. The former has been shown to incur a small loss in the positive selection rate while tightly controlling the false discovery rate in many applications \citep{chen2008extended}. The CC is suitable for the linear model, compared with other measures such as distance correlation and Pearson's correlation. Distance correlation, for instance, measures correlation including nonlinear relationships among variables, which can lead to low power when detecting a linear model \citep{li2012feature,kong2017interaction}.
\red{Although Pearson's correlation can measure the linear correlation between two variables, it cannot describe the group correlation between two groups, while the CC is often used and recommended for data with group structures \citep{luo2020feature}.}
Compared with SCCS, SeSS handles sparse high-dimensional complex data more efficiently. For each selected group block, we propose adding a new step that selects the most relevant predictor within the predictor group based on the correlation between each predictor and the current residual of the response group. This strategy has the following advantages for sparse high-dimensional models and computation: 1) it avoids the exhaustive search of SCCS within each block and thus reduces the computational cost; 2) it achieves high accuracy in detecting extremely sparse models; 3) it enjoys theoretical properties without extra requirements; 4) the computational merits of SeSS allow it to deal with data of higher dimensions and more numerous group structures.
\section{Theoretical Properties}
In this section, we focus on the theoretical guarantees of the proposed method. We show that the method recovers the true underlying sparse model with high probability. We first define some notation. Consider the dimensional setting in which $2\ln(pq) < n^{1/3 - \delta}$ for some $0 < \delta < 1/3$, and $p_0 = O(n^{1/6})$, where each column of $B$ has at most $p_0$ nonzero elements. For each block $B_{kj}$, $k = 1,\dots, K$ and $j = 1,\dots, J$, we assume that the numbers of its columns and rows are bounded.
To study the asymptotic property of the proposed method, we first introduce a property of the canonical correlation measure established by \citet{luo2020feature}.
Recall the notation for the canonical correlation matrix and its estimate under model $\zeta$, i.e., $C_{kj}(\zeta)$ and $\hat C_{kj}(\zeta)$. Set $\text{tr}(C_{k^*j^*}(\zeta)) = \max_{k, j} \{ \text{tr}(C_{kj}(\zeta)) \}$. Then, under the following conditions, we have, as $n \rightarrow \infty$, uniformly in $\zeta$,
\begin{equation}\label{eq cannoical}
P\big( \text{tr}(\hat C_{k^*j^*}(\zeta)) = \max_{k, j} \{ \text{tr}( \hat C_{kj}(\zeta)) \} \big) \rightarrow 1.
\end{equation}
The proof of the above result can be found in the proof of Lemma~1 of \citet{luo2020feature}, so we omit it. The conditions required for \eqref{eq cannoical}, A1 and A2, are stated below.
Let $\mathcal{C}_0$ be the index set of nonzero columns of $B$ and $\Psi_0$ the index set of nonzero blocks of $B$. For the $l$th column of $B$ with $l \in \mathcal{C}_0$, we denote $s_{0l} = \{ i: \beta_{il} \neq 0 \}$, and let $s^*_l$ be the set of indices selected for the response $y_l$ by the proposed method. Recall that $\Sigma_k$ and $\Omega_j$ denote the variance matrices of $\mathcal{X}_k$ and $\widetilde{\mathcal{Y}}_j$. We use $X(s)$ to denote the subvector of $X$ with indices in $s$ and $\Sigma_{ss}$ the corresponding variance matrix. To state our theoretical results, we need the following conditions.
\begin{itemize}
\item[A1] The eigenvalues of $\Sigma_{k}$, $\Sigma_{s_{0l}s_{0l}}$, $\Omega_j$ are bounded from below and above for all $k,l, j$.
\item[A2] Let $\sigma(\cdot)$ be the standard deviation and let $t$ lie in a neighborhood of $0$. There exists a generic constant $C$ such that $\max_{i, j}\{ \sigma(X_iX_j),\sigma(Y_iX_j),\sigma(X_iY_j) \} \leqslant C$ and $\max_{i, j}\big\{E\exp\{ t(X_iX_j - EX_iX_j)\}, E\exp\{ t(Y_iX_j - EY_iX_j)\}, E\exp\{ t(Y_iY_j - EY_iY_j)\}\big\} \leqslant C$.
\item[A3] Under model $\zeta$,
\[ \max_{(k, j) \notin \Psi_0} |\text{tr} C_{kj}(\zeta)| < \max_{(k, j) \in \Psi_0} |\text{tr} C_{kj}(\zeta)|, \]
and for $(k, j) \in \Psi_0$,
\[ \max_{l \notin \mathcal{C}_0} |\text{tr} C^l_{kj}(\zeta)| < \max_{l \in \mathcal{C}_0} |\text{tr} C^l_{kj}(\zeta)|. \]
\item[A4] Under model $\zeta$, for $l \in \mathcal{C}_0$ and $|\zeta| < |\Psi_0|$,
\[ \max_{i \in s^c_{0l}} \ln \dfrac{ \big\|(I - H(\zeta)) XB^{(j)}(\zeta)\big\|^2 }{\big\|(I - H(\zeta)) XB^{(j)}(\zeta \cup i)\big\|^2} \leqslant \ln n\lambda_1/n,\]
where $H(\zeta) = X(\zeta)[X^\mathrm{\scriptscriptstyle T} (\zeta)X(\zeta)]^{-1} X^\mathrm{\scriptscriptstyle T} (\zeta)$.
\item[A5] As $n \rightarrow \infty$,
\[\sqrt{n} \min_{(k, j) \in \Psi_0} \min_{l \in \mathcal{Y}_j, i \in s_{0l} \cap \mathcal{X}_k} |\beta_{il}|/\sqrt{p_0 \ln p} \rightarrow \infty. \]
\end{itemize}
A1 - A2 are used for proving \eqref{eq cannoical}. A1 is a regular condition assuming that the eigenvalues of the true covariance matrices are bounded. A2 is adapted from \citet{fill1983convergence}. These two conditions provide the error bound between the associated sample covariances and the true covariances. A3 - A4 are used to prove that the proposed method is selection consistent. A5 states that the relative portion of variation of an irrelevant predictor is bounded through the tuning parameter. These conditions all follow \citet{luo2020feature}.
Then we have the following result of the selection consistency of SeSS. The proof of Theorem~\ref{thm} is given in Appendix.
\begin{thm}\label{thm}
Suppose A1 - A5 hold. The SeSS is selection consistent, i.e., with probability tends to $1$ as $n \rightarrow \infty$, we have $s^*_l = s_{0l}$ for $l \in \mathcal{C}_0$ and $s^*_l = \emptyset$ for $l \in \mathcal{C}^c_0$.
\end{thm}
\section{Simulations}
\red{In this section, we demonstrate the performance of SeSS and compare it with SCCS and two other methods: the group lasso \citep{yang2015fast} and the elastic net \citep{friedman2010regularization}. We also tested the backward selection \citep{tsagris2018feature} and the forward-backward selection \citep{borboudakis2019forward}; both suffer from the complexity of the group structures and do not perform well, so their results are not presented.
We consider two cases of group structure, an equal-size case and an unequal-size case. We also consider four settings of sparsity, i.e., $90\%$, $95\%$, $70\%$, or $50\%$} of the elements of the coefficient matrix equal to zero. In the simulations, we consider two dimension settings, $n=150, q=200, p=200$ and $n=150, q=200, p=400$. In both settings, we consider group structures in both predictors and responses. The simulation settings are presented in detail below.
We generate each row of $X$ independently from a multivariate normal distribution $N(0,\Sigma)$, where $\Sigma_{ij} = 0.5^{|i-j|}$. The elements of the error matrix $\mathcal{E}$ are generated independently from $N(0,\sigma^2)$, where $\sigma^2$ depends on the variance of $XB$: denoting by $z_l$ the $l$th column of $XB$ and setting $V_1 = \sum^q_{l=1}var(z_l)$, we take $\sigma^2 = \frac{V_1}{5q}$. We consider a diagonal block setting, that is, for the coefficient blocks $B_{kj}$ in the $200 \times 200$ or $400 \times 200$ coefficient matrix, we have $B_{kj} = 0$ for $k \neq j$. For the diagonal blocks, we first generate the entries of the coefficient matrix independently from a uniform distribution on $[-5,-1] \cup [1,5]$, then randomly select $90\%$, $95\%$, $70\%$, or $50\%$ of them and set them to zero. For the group structures, we consider two cases:
\begin{itemize}
\item[] \text{Equal-size case:} The group size of responses and predictors are both equal $20$.
\item[] \text{Unequal-size case:} The group size of responses and predictors are randomly set as $20$ or $30$.
\end{itemize}
Each group consists of consecutive variables, for example, $\mathcal{Y}_j = \{ y_{ 20(j-1)+1}, \dots, y_{20j} \}$ and $\mathcal{X}_k = \{x_{ 20(k-1)+1}, \dots,x_{20k} \} $, where $j,k = 1,\dots,20$. \red{For the proposed method, the threshold $\rho$ is set following \citet{guo2016spline}; that is, we set $\rho = \hat \sigma \sqrt{2\log p}$, where $\hat \sigma$ is the standard error of the estimated coefficients, adjusted by a small amount. We also use this threshold in the real example.}
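A minimal sketch of this simulation design, for the equal-size case with $n=150$, $p=q=200$, and $90\%$ sparsity, is shown below (the random seed and the way sparsity is imposed within the diagonal blocks are our own illustrative choices):
\begin{verbatim}
# Sketch of the simulation design: X ~ N(0, Sigma), Sigma_ij =
# 0.5^|i-j|; diagonal blocks of B drawn from Unif([-5,-1] U [1,5]);
# 90% of the nonzero entries then set to zero; sigma^2 = V1 / (5q).
import numpy as np

rng = np.random.default_rng(0)
n, p, q, gsize, sparsity = 150, 200, 200, 20, 0.9

idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

B = np.zeros((p, q))
for g in range(p // gsize):                      # diagonal blocks only
    blk = rng.uniform(1, 5, (gsize, gsize)) \
          * rng.choice([-1.0, 1.0], (gsize, gsize))
    B[g*gsize:(g+1)*gsize, g*gsize:(g+1)*gsize] = blk
nz = np.flatnonzero(B)
kill = rng.choice(nz, size=int(sparsity * nz.size), replace=False)
B.flat[kill] = 0.0

Z = X @ B
sigma2 = Z.var(axis=0).sum() / (5 * q)           # V1 / (5q)
Y = Z + rng.normal(0.0, np.sqrt(sigma2), size=(n, q))
\end{verbatim}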
We use \red{seven} measures to show the performance of the \red{four} methods: the $l_1$-norm error, the $l_2$-norm error, the positive discovery rate (PDR), the false discovery rate (FDR), the discovery rate (DR), the \red{block discovery rate (BDR)}, and the computational cost. The $l_1$-norm error denotes $\|\hat B - B\|_1$ and the $l_2$-norm error denotes the Frobenius norm error $\|\hat B - B\|_F$. For the PDR and FDR, we have, with $\theta_{il} = I\{ \beta_{il} \neq 0\}$:
\[ \text{PDR} = \frac{ \# \{ \hat{\theta}_{il} = \theta_{il} = 1 \} }{ \# \{ \theta_{il} = 1 \} },
\text{FDR} = \frac{ \# \{ \hat{\theta}_{il} = 1 ,\theta_{il} = 0 \} }{ \# \{ \hat{\theta}_{il} = 1\} }.\]
Both PDR and FDR are one-sided, since one can either select most elements to get a high PDR or simply choose a few elements of $B$ for a low FDR. So we also introduce $\text{DR} = \text{PDR}+1-\text{FDR}$ as a joint measure. \red{Similar to DR, BDR is a measure describing the accuracy of the selected blocks, while DR describes the accuracy of the selected entries.} The computations are performed on a standard laptop computer with a 2.50 GHz Intel Core i5-7200U processor.
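These selection metrics can be computed directly from the support patterns of the true and estimated coefficient matrices; a minimal sketch follows:
\begin{verbatim}
# Sketch: PDR, FDR and DR = PDR + 1 - FDR from the supports of the
# true matrix B and the estimate B_hat.
import numpy as np

def selection_metrics(B_hat, B):
    est, true = (B_hat != 0), (B != 0)
    pdr = (est & true).sum() / max(true.sum(), 1)
    fdr = (est & ~true).sum() / max(est.sum(), 1)
    return pdr, fdr, pdr + 1 - fdr
\end{verbatim}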
\begin{table}[!htbp]
\centering
\caption{\red{The average PDR, FDR, DR, BDR, $l_1$-norm error, $l_2$-norm error, and computation time over 100 simulations in the cases of 0.9 and 0.95 sparsity (numbers in parentheses are standard deviations).}}
\label{tab:table1}
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccccccc}
\hline
Method & Sparsity & PDR & FDR & DR & BDR & L1 & L2 & time \\ \hline
& Equal-size & & n=150 & q=200 & p=200 & & & \\ \hline
SeSS & 0.9 & 0.913(0.014) & 0.009(0.005) & 1.903(0.017) & 1.866(0.141) & 134.049(11.213) & 131.426(24.903) & 44.414(6.399) \\
SCCS & 0.9 & 0.939(0.008) & 0.059(0.008) & 1.880(0.014) & 1.848(0.107) & 122.563(9.765) & 93.965(18.853) & 217.514(5.803) \\
group Lasso & 0.9 & 1(0) & 0.989(0.001) & 1.011(0.001) & 1.097(0.166) & 1599.585(32.888) & 3088.827(148.376) & 49.384(0.735) \\
elastic net & 0.9 & 1(0) & 0.978(0.001) & 1.022(0.001) & 1.100(0) & 1521.571(33.471) & 2950.001(146.11) & 34.743(1.135) \\ \hline
SeSS & 0.95 & 0.967(0.012) & 0.002(0.002) & 1.966(0.013) & 1.845(0.163) & 35.636(4.872) & 21.333(8.876) & 28.816(4.097) \\
SCCS & 0.95 & 0.934(0.018) & 0.073(0.032) & 1.862(0.030) & 1.097(0.125) & 52.871(6.083) & 42.555(13.802) & 116.530(8.343) \\
group Lasso & 0.95 & 1(0) & 0.995(0.001) & 1.005(0.001) & 1.134(0.149) & 1230.227(22.781) & 1440.917(43.205) & 50.635(0.886) \\
elastic net & 0.95 & 1(0) & 0.989(0.002) & 1.011(0.002) & 1.100(0) & 1155.623(25.233) & 1373.527(40.095) & 39.995(0.945) \\ \hline
& Unequal-size & & n=150 & q=200 & p=200 & & & \\ \hline
SeSS & 0.9 & 0.872(0.012) & 0.011(0.006) & 1.861 (0.015) & 1.882(0.138) & 262.216(15.239) & 498.435(63.668) & 42.532(6.224) \\
SCCS & 0.9 & 0.925(0.018) & 0.083(0.047) & 1.842 (0.051) & 1.842(0.11) & 191.298(26.039) & 150.094(36.369) & 507.775(8.625) \\
group Lasso & 0.9 & 1(0) & 0.987(0.001) & 1.013(0.001) & 1.115(0.185) & 1936.406(27.026) & 4095.843(126.883) & 56.394(1.315) \\
elastic net & 0.9 & 1(0) & 0.971(0.001) & 1.029(0.001) & 1.100(0) & 1816.066(24.892) & 3903.617(125.112) & 33.143(0.463) \\ \hline
SeSS & 0.95 & 0.934(0.013) & 0.002(0.003) & 1.931(0.012) & 1.848(0.117) & 66.246(7.532) & 59.924(17.554) & 34.478(5.196) \\
SCCS & 0.95 & 0.954(0.015) & 0.068(0.018) & 1.886(0.029) & 1.868(0.124) & 63.128(7.084) & 40.550(11.482) & 270.149(6.861) \\
group Lasso & 0.95 & 1(0) & 0.993(0.001) & 1.007(0.001) & 1.124(0.112) & 1365.569(17.51) & 1962.492(62.723) & 56.3(1.463) \\
elastic net & 0.95 & 1(0) & 0.986(0.001) & 1.014(0.001) & 1.100(0) & 1245.935(22.097) & 1850.772(63.71) & 37.72(0.738) \\ \hline
& Equal-size & & n=150 & q=200 & p=400 & & & \\ \hline
SeSS & 0.9 & 0.908(0.014) & 0.012(0.005) & 1.896(0.018) & 1.905(0.08) & 138.489(12.357) & 140.876(29.172) & 56.018(9.465) \\
SCCS & 0.9 & 0.959(0.009) & 0.080(0.015) & 1.879(0.015) & 1.927(0.209) & 111.935(7.506) & 63.230(10.356) & 233.501(4.446) \\
group Lasso & 0.9 & 1(0) & 0.994(0.002) & 1.006(0.002) & 1.157(0.100) & 1586.44(31.077) & 3122.184(150.279) & 73.859(1.252) \\
elastic net & 0.9 & 1(0) & 0.979(0.001) & 1.021(0.001) & 1.100(0) & 1427.115(30.759) & 2983.029(149.058) & 54.341(0.966) \\ \hline
SeSS & 0.95 & 0.958(0.008) & 0.003(0.004) & 1.956(0.011) & 1.929(0.178) & 41.069(4.699) & 34.854(10.621) & 36.246(4.763) \\
SCCS & 0.95 & 0.959(0.009) & 0.080(0.015) & 1.879(0.015) & 1.938(0.147) & 111.935(7.506) & 63.230(10.356) & 127.970(4.080) \\
group Lasso & 0.95 & 1(0) & 0.997(0.001) & 1.003(0.001) & 1.129(0.1) & 1191.112(15.741) & 1445.955(45.749) & 73.641(1.152) \\
elastic net & 0.95 & 1(0) & 0.989(0.001) & 1.011(0.001) & 1.100(0) & 994.38(20.28) & 1369.649(43.083) & 62.657(1.25) \\ \hline
& Unequal-size & & n=150 & q=200 & p=400 & & & \\ \hline
SeSS & 0.9 & 0.894(0.010) & 0.021(0.006) & 1.873(0.014) & 1.821(0.189) & 232.398(22.567) & 361.499(79.828) & 59.067(5.033) \\
SCCS & 0.9 & 0.919(0.012) & 0.038(0.010) & 1.881(0.014) & 1.898(0.163) & 180.875(11.228) & 150.691(29.439) & 500.800(7.355) \\
group Lasso & 0.9 & 1(0) & 0.992(0) & 1.008(0) & 1.125(0.157) & 1932.355(25.273) & 4155.343(132.346) & 84.788(1.843) \\
elastic net & 0.9 & 1(0.001) & 0.973(0.001) & 1.027(0.001) & 1.100(0) & 1732.345(28.41) & 3958.264(130.069) & 54.365(0.656) \\ \hline
SeSS & 0.95 & 0.941(0.012) & 0.006(0.006) & 1.935(0.017) & 1.886(0.123) & 63.996(8.525) & 57.772(19.750) & 37.254(2.306) \\
SCCS & 0.95 & 0.910(0.015) & 0.025(0.008) & 1.885(0.016) & 1.825(0.237) & 79.403(4.414) & 78.503(10.107) & 276.498(8.820) \\
group Lasso & 0.95 & 1(0) & 0.996(0) & 1.004(0) & 1.111(0.14) & 1341.372(17.375) & 1984.445(67.73) & 82.678(1.844) \\
elastic net & 0.95 & 1(0) & 0.986(0.001) & 1.014(0.001) & 1.100(0) & 1112.189(24.504) & 1859.54(66.452) & 59.645(0.654) \\ \hline
\end{tabular}%
}
\end{table}
\begin{table}[!htbp]
\centering
\caption{\red{The average PDR, FDR, DR, $l_1$-norm error, $l_2$-norm error, and computation time over 100 simulations in the cases of 0.5 and 0.7 sparsity (numbers in parentheses are standard deviations).}}
\label{tab:table2}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccc}
\hline
Method & Sparsity & PDR & FDR & DR & L1 & L2 & time \\ \hline
& Equal-size & & n=150 & q=200 & p=200 & & \\ \hline
SeSS & 0.7 & 0.787(0.009) & 0.097(0.008) & 1.69(0.015) & 1239.473(37.873) & 1692.364(94.373) & 21.026(0.456) \\
SCCS & 0.7 & 0.91(0.011) & 0.047(0.01) & 1.864(0.016) & 590.978(29.409) & 516.614(50.189) & 167.753(3.565) \\
SeSS & 0.5 & 0.611(0.005) & 0.334(0.009) & 1.277(0.011) & 3660.833(57.515) & 6853.23(227.779) & 39.43(0.346) \\
SCCS & 0.5 & 0.835(0.011) & 0.028(0.004) & 1.807(0.011) & 1416.775(55.531) & 1718.264(141.828) & 227.277(3.072) \\ \hline
& Unequal-size & & n=150 & q=200 & p=200 & & \\ \hline
SeSS & 0.7 & 0.627(0.006) & 0.353(0.013) & 1.274(0.017) & 2945.466(67.337) & 5018.946(231.614) & 54.179(0.661) \\
SCCS & 0.7 & 0.795(0.017) & 0.022(0.004) & 1.773(0.019) & 1124.532(54.125) & 1503.029(149.8) & 337.738(6.832) \\
SeSS & 0.5 & 0.416(0.008) & 0.51(0.017) & 0.906(0.022) & 6914.002(219.93) & 16193.394(750.4) & 69.153(2.666) \\
SCCS & 0.5 & 0.453(0.169) & 0.019(0.004) & 1.434(0.167) & 4523.972(1235.173) & 11940.482(5496.638) & 298.133(99.075) \\ \hline
& Equal-size & & n=150 & q=200 & p=400 & & \\ \hline
SeSS & 0.7 & 0.786(0.008) & 0.073(0.006) & 1.713(0.012) & 1208.827(36.276) & 1627.983(95.574) & 28.168(0.986) \\
SCCS & 0.7 & 0.915(0.008) & 0.027(0.007) & 1.887(0.011) & 536.351(17.082) & 448.209(37.127) & 164.131(2.1) \\
SeSS & 0.5 & 0.595(0.011) & 0.335(0.011) & 1.259(0.02) & 3747.004(80.347) & 7097.174(284.317) & 58.544(1.868) \\
SCCS & 0.5 & 0.844(0.011) & 0.016(0.002) & 1.827(0.011) & 1293.958(54.144) & 1493.908(143.688) & 229.054(2.569) \\ \hline
& Unequal-size & & n=150 & q=200 & p=400 & & \\ \hline
SeSS & 0.7 & 0.609(0.01) & 0.365(0.015) & 1.244(0.022) & 3127.801(96.576) & 5428.155(305.734) & 80.475(1.763) \\
SCCS & 0.7 & 0.796(0.015) & 0.015(0.002) & 1.782(0.016) & 1092.801(52.359) & 1450.153(147.862) & 335.775(6.808) \\
SeSS & 0.5 & 0.395(0.005) & 0.549(0.008) & 0.846(0.01) & 7378.296(104.048) & 17083.66(493.544) & 106.178(2.14) \\
SCCS & 0.5 & 0.234(0.186) & 0.014(0.007) & 1.22(0.185) & 6063.39(1342.351) & 18839.385(5953.104) & 167.59(103.51) \\ \hline
\end{tabular}%
}
\end{table}
For each situation, we run 100 simulations. \red{As one can see from Table~\ref{tab:table1}, SeSS performs better than SCCS in FDR, DR, and computational time in all the cases. The other two methods, the group lasso and the elastic net, do not perform well because both have a high FDR, which means they select many wrong nonzero entries. From the BDR metric, we can see that both SeSS and SCCS almost always select features in the correct blocks while the other two methods do not. We compare only the performance of SeSS and SCCS in the cases of $0.5$ and $0.7$ sparsity to show that SeSS and SCCS are not very suitable for data with low sparsity. Table~\ref{tab:table2} describes the simulations in the low-sparsity cases of 0.5 and 0.7. We can see that both SeSS and SCCS perform poorly in these cases. However, in all the cases SCCS has a higher DR than SeSS and lower L1 and L2 errors, which shows that SCCS has the capability to deal with data over a wide range of sparsity.}
On average, the computational time used by SeSS is generally $1/10$ to $1/6$ that of SCCS. In addition, we find that in both the equal-size and unequal-size cases, when the sparsity is 0.9, the PDR of SeSS is slightly lower than that of SCCS but the joint measure DR is still comparable. When the sparsity is 0.95, SeSS in general outperforms SCCS. Both tables show that the SeSS algorithm is efficient in computation and always achieves a low FDR, two merits in terms of the quality of feature selection.
\section{Real example}
In this section, we apply the proposed method to a GeneChip (Affymetrix) microarray dataset, which was used by \citet{wille2004sparse} to reverse engineer genetic regulatory networks using GGM (graphical Gaussian modeling). The dataset consists of 118 GeneChip (Affymetrix) microarrays measuring the expression of 39 genes in the isoprenoid pathways of \textit{Arabidopsis thaliana}, of which 21 genes are in the \textit{mevalonate} pathway and 18 in the \textit{non-mevalonate} pathway. The dataset also includes 795 additional genes from 56 downstream metabolic pathways. In order to formulate the genetic regulatory network, we use a conditional Gaussian graphical model with group structures: $Y = X \mathcal{B} + \mathcal{E} $ with $(n,p,q) = (118,795,39)$.
$Y$ denotes the observation of the expression levels of genes in the isoprenoid pathways and $Y$ has a group structure $Y = (Y_1, Y_2)$. $X$ denotes the observation of the expression levels of genes in the downstream metabolic pathways which is divided into 56 groups as $X = (X_1,\dots, X_{56})$.
We use SCCS, SeSS, the group lasso, and the elastic net to estimate the coefficient matrix $\mathcal{B}$.
\red{Because the true $\mathcal{B}$ is unknown in the empirical analysis, we cannot use the PDR, the FDR, or the DR as performance measures. Instead, we use the mean-squared prediction error (MSPE) to describe the performance of the four methods.}
In each test, we randomly choose $n_0 = 70$ or $n_0 = 100$ of the 118 microarrays as the training sample and the remaining 48 or 18 microarrays, respectively, as the testing sample. We then use the training sample to select the nonzero entries of $B$ and estimate it by ordinary least squares. The prediction error is computed on the testing sample. We repeat the test 100 times; the prediction errors are averaged and reported in Table~\ref{tab:table3}. We also report the number of nonzero entries (NNE) selected by each algorithm and the computational times of SCCS and SeSS. The mean-squared error (MSE) computed on the training sample is shown as a baseline for the MSPE.
From the results reported in Table~\ref{tab:table3}, when $n_0$ equals 100, the mean MSPE of SeSS is 6.9\% less than that of SCCS. \red{The group lasso and the elastic net both have lower MSE and MSPE than SeSS and SCCS; however, both methods select a great number of nonzero entries.} There is no order-of-magnitude difference between the NNE of SeSS and SCCS. The computational merits of SeSS surpass those of SCCS: SCCS requires 195 sec while SeSS requires 29 sec, which is 15\% of the former. \red{When $n_0$ equals 70, the group lasso and the elastic net perform much worse than in the former case, with higher MSE and MSPE than SeSS and SCCS.} The mean MSPE of SeSS is 2\% less than that of SCCS. In this case, there is a significant difference in the NNE compared with the former case. SeSS remains computationally attractive compared with SCCS in that the time SeSS used is only 10.9\% of the time SCCS used.
\begin{table}[!htbp]\small%
\centering
\caption{\red{The average MSE, MSPE, NNE, and computational time (numbers in parentheses are standard deviations).}}
\label{tab:table3}
\begin{tabular}{ccccc}
\hline
& MSE & MSPE & NNE & Time \\ \hline
$n_0 = 100$ & & & & \\ \hline
SeSS & 0.419(0.089) & 0.613(0.146) & 144.950(11.452) & 29.431(14.546) \\
SCCS & 0.415(0.014) & 0.659(0.212) & 112.820(5.528) & 195.239(25.938) \\
group Lasso & 0.275(0.026) & 0.579(0.171) & 3208.512(216.136) & 46.054(2.269) \\
elastic net & 0.278(0.020) & 0.520(0.147) & 1194.323(77.400) & 5.791(0.179) \\ \hline
$n_0 = 70$ & & & & \\ \hline
SeSS & 0.295(0.072) & 0.674(0.137) & 211.03(60.458) & 39.201(13.112) \\
SCCS & 0.228(0.013) & 0.688(0.126) & 276.2(42.367) & 358.75(48.623) \\
group Lasso & 0.402(0.072) & 0.655(0.152) & 2560.473(288.380) & 36.925(3.368) \\
elastic net & 0.312(0.037) & 0.620(0.162) & 839.217(49.376) & 4.814(0.205) \\ \hline
\end{tabular}%
\end{table}
\section{Summary}
For high-dimensional multiresponse models with complex group structures, the proposed method efficiently detects the related responses and predictors.
Inspired by SCCS, SeSS is based on the canonical correlation and the extended Bayesian information criterion, and we propose a novel three-step sequential screening algorithm. Simulations and the real example analysis both show that SeSS performs better than SCCS on data with these particular group structures.
In future research, two possible improvements of SeSS are worth studying. One is the replacement of the canonical correlation and the extended Bayesian information criterion. The other is the trade-off between computational cost and accuracy: to address the accuracy loss induced by the greedy algorithm, one could try selecting an entry within the nonzero row and then switching the target nonzero row.
\red{Further, another attractive direction for dealing with models with complex group structures is to use deep learning methods. Some researchers have studied feature selection or regression using deep learning methods \citep{niu2020developing,qiu2014ensemble}. It would be worthwhile to study vast amounts of data with deep neural networks and other deep learning strategies.}
The study of high-dimensional multiresponse models with complex group structures is of great significance, and more relevant studies are desired.
\section*{Acknowledgement}
This work was supported by the National Natural Science Foundation of China (Grant No. 12001557); the Youth Talent Development Support Program (QYP202104), the Emerging Interdisciplinary Project, and the Disciplinary Funding of Central University of Finance and Economics.
\section*{Compliance with ethical standards}
The authors declare no potential conflict of interests. The authors declare no research involving human participants and/or animals. The authors declare informed consent.
\section{Introduction}
The chemically peculiar stars such as CH stars and barium stars that
exhibit enhancement of slow neutron-capture elements in their
surface chemical composition can provide observational constraints
for models of neutron-capture nucleosynthesis that occur during
the AGB phase of evolution of low and intermediate mass stars. Both
the CH stars and barium stars are known to be binary systems
(McClure 1983, 1984) with a now invisible white dwarf companion.
The companions produced the neutron-capture elements during their
AGB phase of evolution and transferred these materials to the CH
and barium stars through mass transfer mechanisms while evolving
through the AGB phase. The mass transfer mechanisms are however
not clearly understood. Although CH stars and barium stars show
enhancement of slow neutron-capture elements and are very similar
in this respect, there are a few properties that make them
distinct from each other. From kinematics, barium stars are
known to be disk objects while CH stars belong to the halo
of our Galaxy (Gomez et al. 1997, Mennessier et al. 1997). CH stars
have high radial velocities and also they are metal-deficient
(Hartwick and Cowley 1985). Also barium stars have longer
orbital period and greater eccentricities compared to the CH stars
(Vanture 1992a, Jorissen et al. 2016). Another distinguishing feature
is the C/O ratio which is less than unity in case of barium stars
(Barbuy et al. 1992, Allen and Barbuy 2006, Drake and Pereira 2008,
Pereira and Drake 2009) and greater than unity in case of CH
stars (Vanture 1992b, Pereira and Drake 2009). Luck and Bond (1991)
analyzed a few barium stars that show strong CH band but weak
C$_{2}$ band and weak metallic lines. Since the absence of a strong
C$_{2}$ band means they cannot be placed in the group
of CH stars, the authors categorized them as `metal-deficient barium
stars', and referred to them as population II analogs of classical
barium stars. This thin line of difference raises questions
concerning the evolutionary connection, and the exact
relationship between `metal-deficient barium stars' and the CH stars.
The three chemically peculiar stars that are the subject of this
present study are listed in the CH star catalogue of Bartkevicius
(1996). However, as will be seen later, our abundance analysis results
indicate that these objects are more likely barium stars. So far,
no studies of the chemical composition of the stars HD~51959
and HD~88035 exist in the literature.
In section 2, we describe the source of the spectra of the
programme stars, and in section 3 we provide the photometric temperatures
of the stars. In section 4, we discuss the abundance analysis and present
the abundance results. In section 5, we briefly discuss the kinematic
analysis of the stars. Discussion and conclusions are presented in section 6.
\section{High resolution spectra of the programme stars}
High resolution spectra of the programme stars were obtained with
FEROS (Fiber-fed Extended Range Optical Spectrograph) attached to the
1.52 m telescope at ESO, Chile. It covers the complete optical spectral
range from 3500 to 9000 \AA\, in a single exposure. The spectral resolution
is R = 48000. FEROS is equipped with a 2K x 4K, 15${\mu}$m pixel CCD.
An image slicer (Kaufer 1998) is used to achieve the high spectral
resolution. The basic data for the programme stars are listed in
Table 1. A few sample spectra of the programme stars are shown
in figure 1.
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{figure1.eps}
\caption{Sample spectra of the programme stars in the wavelength
region 5160--5190~\AA.}
\end{figure}
{\footnotesize
\begin{table*}
{\bf Table 1: Basic data for the programme stars}\\
\begin{tabular}{ccccccccccc}
\hline
Star Name. & RA(2000) & DEC(2000)& ${\pi}$(mas) & BC &B& V&J&H&K& Date of obs \\
\hline
HD~51959 & 06 59 10.09&-07 06 31.95 & 8.05 [1.04] & $-$0.25 & 10.05 & 8.92 & 7.18 & 6.74 & 6.55&11-11-1999\\
HD~88035 & 10 08 41.97&-20 18 49.64 & 2.78 [0.219]& $-$0.18 & 10.24 & 9.14 & 7.39 & 6.89 & 6.77&07-01-2000 \\
HD~121447 &13 55 46.96 & -18 14 56.48& 2.93 [0.80] & $-$0.49 & 9.61 & 7.80 & 5.12 & 4.33 & 4.15&07-01-2000 \\
\hline
\end{tabular}
The numbers within square brackets indicate errors in ${\pi}$.
\end{table*}
}
{\footnotesize
\begin{table*}
{\bf Table 2: Temperatures from photometry }\\
\begin{tabular}{llllllllllll}
\hline
Star Name & $T_{eff}$ & $T_{eff}$ & $T_{eff}$ & $T_{eff}$ & $T_{eff}$& $T_{eff}$ &$T_{eff}$&$T_{eff}$&$T_{eff}$ &$T_{eff}$ & Spectroscopic \\
& & (-0.05) & (-0.5) & (-0.05) & (-0.5) & (-1.0) &(-1.0) & (-0.05) & (-0.5) & (-1.0) & estimates \\
& (J-K) K & (J-H) K & (J-H) K & (V-K) K & (V-K) K & (J-H) K & (V-K) K & (B-V) K &(B-V) K & (B-V) K & ~~~K \\
\hline
HD~51959 & 4564 & 4925 & 4968 & 4721 & 4720 & 4994 & 4714 & 4551 & 4467 & 4403 &5020 \\
HD~88035 & 4616 & 4673 & 4711 & 4726 & 4725 & 4732 & 4719 & 4604 & 4515 & 4446 & 5300 \\
HD~121447 & 3784 & 3775 & 3796 & 3905 & 3903 & 3806 & 3888 & 3616 & 3604 & 3607 &4500 \\
\hline
\end{tabular}
The numbers in parentheses indicate the metallicity values at which the
temperatures are calculated.\\
\end{table*}
}
\section{Temperatures from photometric data}
The photometric temperatures of the programme stars are calculated
as described in Mahanta et al. (2016) by using the calibration
equations for giants given by Alonso et al. (1999, 2001).
The temperature estimates span a wide range. The photometric
temperatures are used as initial values in an iterative process
to obtain the spectroscopic temperature estimates. While the
spectroscopic temperature estimate for HD~51959 matches closely
with the temperature estimate from (J-H) colour calibration,
the spectroscopic temperatures are much higher than those
obtained from (J-H) and (J-K) calibrations in the case of other
two objects. We note however that (J-H) and (J-K) calibrations
give similar results in general as in the case of HD~88035 and
HD~121447 but these two temperature estimates differ by about
400 K in case of HD~51959.
The reason for such a large difference is difficult to understand if
the 2MASS J, H, K values for this object are considered to be as
accurate as for
the other two stars. As strong CH molecular absorption affects the B band,
we have not considered the empirical T$_{eff}$ scale for the
B-V colour. The JHK photometric temperature estimates were made
to obtain a first-hand temperature estimate for the stars. For our
analysis we relied on spectroscopic temperature estimates derived
using large numbers of clean Fe I and Fe II lines.
The estimated values along with the spectroscopic temperatures are
listed in Table 2.
\section{Spectral analysis}
We have derived the atmospheric parameters for the programme stars
by measuring the equivalent widths of clean unblended Fe I and
Fe II lines. For this purpose we have considered only those
lines that have excitation potentials in the range 0.0 eV to 5.0 eV
and equivalent widths between 20 m{\AA} and 160 m{\AA}. For the
object HD~51959 we could use 56 Fe I and 8 Fe II lines,
for HD~88035 69 Fe I and 7 Fe II lines, and for HD~121447 28 Fe I
and 2 Fe II lines for the abundance
calculation. Due to severe line blending and distortion in the
spectrum of HD~121447 we could not obtain a sufficient number of weak
Fe lines within the above ranges of excitation potential and
equivalent width. We noticed a large broadening of spectral
features throughout the spectrum of HD~121447. Hence, for this object
the atmospheric parameters. We have used the latest version of
MOOG (Sneden 1973) along with the Kurucz grid of model atmospheres
with no convective overshooting
(http://cfaku5.cfa.harvard.edu/) for the calculations. We have assumed
Local Thermodynamic Equilibrium (LTE) for the analysis. Fe lines used
for the analysis along with the measured equivalent widths are
presented in Table 3. The references for the adopted log\,gf values
are also listed in this table. The method of excitation equilibrium
is used for deriving the effective temperature T$_{eff}$, fixing it at
a value that makes the slope of the abundance versus excitation
potential of Fe I lines nearly zero (Figure 2).
The microturbulent velocity at this temperature is fixed by demanding
that there be no dependence of the derived Fe I abundances on the
equivalent width of the corresponding lines (Figure 3). The surface
gravity is fixed at a value which makes the abundance of Fe from
the Fe I and Fe II lines equal. Derived atmospheric parameters
are presented in Table 4.
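As a schematic illustration of this iterative scheme, the three conditions can be cast as a simple loop. The following Python sketch is our own illustration: the callable \texttt{abundances} stands in for a MOOG-like driver, and the data container \texttt{lines} and the step sizes are illustrative assumptions, not part of any released code.
\begin{verbatim}
import numpy as np

def solve_parameters(lines, abundances, teff, logg, xi, n_iter=50):
    # lines.fe1_ep / lines.fe1_ew: excitation potentials and
    # equivalent widths of the Fe I lines (hypothetical container)
    for _ in range(n_iter):
        ab1, ab2 = abundances(lines, teff, logg, xi)  # per-line Fe I, Fe II
        s_ep = np.polyfit(lines.fe1_ep, ab1, 1)[0]  # slope vs exc. pot.
        s_ew = np.polyfit(lines.fe1_ew, ab1, 1)[0]  # slope vs eq. width
        d_ion = ab1.mean() - ab2.mean()             # Fe I - Fe II balance
        if abs(s_ep) < 1e-3 and abs(s_ew) < 1e-4 and abs(d_ion) < 0.01:
            break
        teff += 1000.0 * s_ep   # illustrative step sizes; a positive
        xi   += 10.0 * s_ew     # slope vs exc. pot. means Teff too low
        logg += d_ion           # Fe I > Fe II means log g too low
    return teff, logg, xi
\end{verbatim}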
We have also determined the surface gravity (log\,g) using the
parallaxes (${\pi}$) from van Leeuwen (2007) for HD~51959 and
HD~121447. For the object HD~88035 we have adopted parallax value
from GAIA (http://archives.esac.esa.int/gaia). This method is
precise when parallaxes have small measurement uncertainty. The
following relation is used
\begin{equation}
\log \frac{g}{g_{\odot}} = \log \frac{M}{M_{\odot}} + 4 \log\frac{T_{eff}}{T_{eff,\odot}} + 0.4(M_{bol} - M_{bol,\odot})
\end{equation}
\vskip 0.2cm
\noindent
where
\begin{equation}
M_{bol} = V + 5 + 5\log{\pi} + BC
\end{equation}
The bolometric corrections are determined using the empirical calibration
of Alonso et al. (1999) with its erratum Alonso et al. (2001) (Table 1).
We have adopted solar values
log\,$g_{\odot}$ = 4.44, T$_{eff,\odot}$ = 5770 K and
M$_{bol,\odot}$ = 4.75 mag.
The masses of the programme stars are derived from their locations
in the Hertzsprung-Russell (HR) diagram using spectroscopic log T$_{eff}$
along the x-axis
and photometric log ($L/L_{\odot}$) along y-axis, where evolutionary tracks
(Girardi et al. 2000) are plotted for different masses (Figure 4).
The estimated
log ($L/L_{\odot}$) values of our stars are in good agreement
with estimates obtained for Barium giants by de Castro et al. (2016).
Errors in the mass estimates have maximum contribution coming from
the errors in parallaxes. The errors in parallax are 1.04, 0.219 and 0.8
mas for HD~51959, HD~88035 and HD~121447 respectively. We have considered
an error of 0.3 instead of 0.219 for HD~88035 since it is mentioned
in the GAIA website that errors below 0.3 are very optimistic. The
errors in the log (L/L$_{\odot}$) estimates due to the errors in
parallax values are 0.11, 0.09 and 0.24 for HD~51959, HD~88035
and HD~121447 respectively.
Since our objects are of near solar metallicities, we have utilized
the evolutionary tracks for the initial composition Z = 0.0198
and Y = 0.273. The estimated masses 1.3 ${\pm}$ 0.1, 2.2 ${\pm}$ 0.1 and
2.0 ${\pm}$ 0.4 M$_{\odot}$ for HD~51959, HD~88035 and HD~121447
respectively are presented in Table 5. The mass estimates are well
within the range of model predictions by Han et al. (1995) who
found that the masses of Ba stars range between 1.0 and 3.0 M$_{\odot}$.
The estimated log\,g values from this method are 3.7 ${\pm}$ 0.09 (HD~51959),
3.2 ${\pm}$ 0.03 (HD~88035) and 2.01 ${\pm}$ 0.05 (HD~121447).
The uncertainties in the log g estimates are mostly due to the uncertainties
in the parallaxes, which amount to about 12.9\% (HD~51959),
7.9\% (HD~88035) and 27.3\% (HD~121447).
Although we
have estimated log g values using parallax method for a first check,
we have used spectroscopic estimates of log g for our calculations.
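As a concrete check of equations (1) and (2), the tabulated values for HD~51959 can be reproduced with a few lines of Python (parallax converted from mas to arcsec; input values taken from Tables 1, 4 and 5):
\begin{verbatim}
import math

V, BC, plx = 8.92, -0.25, 8.05e-3   # Table 1; parallax in arcsec
M, teff = 1.3, 5020.0               # mass (Table 5), Teff (Table 4)

m_bol = V + 5 + 5 * math.log10(plx) + BC        # Eq. (2): 3.20
log_l = (4.75 - m_bol) / 2.5                    # log(L/Lsun): 0.62
log_g = (math.log10(M)                          # Eq. (1), with the
         + 4 * math.log10(teff / 5770.0)        # solar values adopted
         + 0.4 * (m_bol - 4.75) + 4.44)         # above: 3.69
print(round(m_bol, 2), round(log_l, 2), round(log_g, 2))
\end{verbatim}
in agreement with M$_{bol}$ = 3.19, log\,(L/L$_{\odot}$) = 0.62 and log\,g = 3.7 quoted above (the small differences reflect rounding).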
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{epsplot.eps}
\caption {The iron abundances of stars are shown for individual Fe I and
Fe II lines as a function of excitation potential. The solid circles
indicate Fe I lines and solid triangles indicate Fe II lines. }
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{feplot.eps}
\caption{ The iron abundances of stars are shown for individual Fe I and
Fe II lines as a function of equivalent width. The solid circles
indicate Fe I lines and solid triangles indicate Fe II lines.}
\end{figure}
{\footnotesize
\begin{table*}
{\bf Table 3: Fe lines used for deriving atmospheric parameters}\\
\begin{tabular}{lccllllc}
\hline
Wavelength& Element & E$_{low}$ & log gf & HD~51959 &HD~88035 &HD~121447 &Ref\\
~~~~{\AA} & id & ev & &~~m{\AA}(dex)&~~m{\AA}(dex)&~~m{\AA}(dex)& \\
\hline
4109.060 & Fe I & 3.290 & $-$1.56 & - & 107(7.53) & - &1 \\
4446.833 & & 3.687 & $-$1.33 & - & 99(7.47) & - &1 \\
4447.129 & & 2.198 & $-$2.59 &- & 107(7.35) & - &1 \\
4447.720 & & 2.220 & $-$1.34 & - & - & 236(6.69) &1\\
4476.019 & & 2.845 & $-$0.57 &- & - & 241(6.98)&2\\
4485.971 & & 3.654 & $-$2.35 & 41(7.41) &- & &1 \\
4489.739 & & 0.121 & $-$3.97 &- & 157(7.40) & &1 \\
4566.514 & & 3.301 & $-$2.25 & - & 60(7.29) & 68(6.60) &1 \\
4614.192 & & 3.301 & $-$2.62 & 50(7.49) & - & &1\\
4619.287 & & 3.602 & $-$1.12 & - & 106(7.27) & - &1 \\
\hline
\end{tabular}
The numbers in parentheses in columns 5 - 7 give the derived
abundances from the respective line.\\
References : 1. Fuhr et al. (1988) 2. Bridges $\&$ Kornblith (1974) \\
{\bf Note.} This table is available in its entirety in online only.
A portion is shown here for guidance regarding its form and content.\\
\end{table*}
}
\begin{table*}
{\bf Table 4: Derived atmospheric parameters for the programme stars}\\
\begin{tabular}{lccccccc}
\hline
Star Name. & HJD &V$_{r}$ & $T_{eff}$ & log g & $\zeta $ &[Fe I/H] &[Fe II/H] \\
& & km s$^{-1}$ & K & & km s$^{-1}$ & & \\
\hline
HD~51959& 2451493.837 & 38.5 & 5020 &3.65 &1.31 &0.03 &0.01\\
HD~88035& 2451550.715 & $-$ 2.5 & 5300 &3.9 &1.88 &$-$0.05 &0.01\\
HD~121447& 2451550.805 & $-$ 2.3 & 4500 &2.2 &3.02 &$-$0.65 &$-$0.65\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
{\bf Table 5: Stellar masses}\\
\begin{tabular}{lllclcl}
\hline
Star Name & $M_{bol}$ & $ \Delta M_{bol}$ & log $(L/L_{\odot}$)& $\Delta {log (L/L_{\odot})}$& Mass ($M_{\odot}$) & $\Delta Mass(M_{\odot})$\\
\hline
HD 51959 & 3.19 &$\pm$0.26 &0.62 &$\pm$0.11& 1.3&$\pm$0.10\\
HD 88035 & 1.19 &$\pm$0.20 &1.43 &$\pm$0.09& 2.2&$\pm$0.10\\
HD 121447 & -0.35 &$\pm$0.52 &2.04 &$\pm$0.24& 2.0&$\pm$0.40\\
\hline
\end{tabular}
\end{table*}
\subsection{Elemental abundances }
For most of the cases the elemental abundances
are derived by the standard method using the measured equivalent widths.
For a few elements which are known to be affected by the hyperfine splitting,
we have used the spectral synthesis calculation to derive the abundances.
The elemental abundances along with the abundance ratios are presented
in Tables 6 - 8. We have also calculated the [ls/Fe], [hs/Fe] and [hs/ls]
values (Table 9), where ls represents light s-process elements Sr, Y and
Zr and hs represents heavy s-process elements Ba, La, Ce, Nd and Sm
whenever available. The lines used along with the abundances derived
from the individual lines and the references for the log\,gf values adopted
for our calculations are presented in Table 10.
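For example, the Table 9 entries for HD~51959 follow from straight averages of the corresponding [X/Fe] values in Table 6 (a minimal sketch; the small differences from Table 9 reflect rounding of the tabulated values):
\begin{verbatim}
import numpy as np

ls = np.mean([1.28, 0.98, 1.25])              # Sr, Y, Zr: 1.17
hs = np.mean([0.81, 0.94, 1.02, 1.16, 1.02])  # Ba, La, Ce, Nd, Sm: 0.99
print(round(ls, 2), round(hs, 2), round(hs - ls, 2))  # hs - ls: -0.18
\end{verbatim}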
\subsection{Uncertainties in metallicity and elemental abundances}
We have derived the uncertainties in elemental abundances by varying
the stellar atmospheric parameters T$_{eff}$, log g and microturbulence
in the model atmosphere. The uncertainty due to temperature is determined
by varying the temperatures by ${\pm}$ 100 K and recalculating
the Fe abundance. Similarly by varying the log g value by ${\pm}$ 0.1 and
microturbulent velocity by ${\pm}$ 0.1 km s$^{-1}$, we have calculated
the corresponding uncertainties in abundances due to these changes.
The total uncertainty is calculated using the standard equation
of error calculations. We have adopted these values as the minimum
error in the derived abundances. The uncertainty in log ${\epsilon}$
is the standard error when it is derived from more than one line;
when abundances are derived using spectral synthesis calculations
the uncertainty values are taken as ${\pm}$ 0.2 dex, which gives a clear
separation from the adopted abundance value on either side of the observed
spectrum. The uncertainty in [X/H] can be considered the same as that
for log ${\epsilon}$ since the adopted solar abundances are known
to be precise and errors in the solar abundances can be neglected.
In the case of [X/Fe], the source of uncertainty includes those
from the measurement of [Fe/H] which adds up a minimum error
of ${\sim}$ 0.10 dex.
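Explicitly, the total uncertainty of Table 11 is the quadrature sum of the individual contributions; for HD~51959:
\begin{verbatim}
import math

terms = [0.12, 0.06, 0.04, 0.03]  # std. error, dTeff, dlog g, dxi
total = math.sqrt(sum(t * t for t in terms))
print(round(total, 2))            # 0.14, as in Table 11
\end{verbatim}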
\subsection {Elemental abundance ratios: analysis and interpretation }
Carbon is found to be enhanced in HD~121447 and near
solar in the other two objects (Tables 6, 7, 8). The carbon abundances
are measured using spectrum synthesis calculations of the weak
C$_2$ band at 5165 \AA\, (figure 5).
Using this carbon abundance, the abundance of nitrogen is obtained
from spectrum synthesis calculations of the CN bands at 4215 \AA\, (figure 6)
and 8005 \AA\,.
Nitrogen abundance derived from the CN red region
(8005 \AA\,) is marginally higher by ${\sim}$ 0.15 dex in HD~88035 and
HD~121447. For HD~51959, N abundance derived from 4215 \AA\, band is
0.06 dex lower than that obtained from 8005 \AA\, region.
The abundances quoted in the tables 6, 7 and 8 are averages of these two
values. Nitrogen is found to be enhanced in all
the three objects.
For the linelists of C$_{2}$ and CN bands, we have
consulted Brooke et al. (2013), Sneden et al. (2014) and Ram
et al. (2014) and used the most updated log gf values for the
C$_{2}$ lines in these regions.
The abundances of oxygen in HD~51959
and HD~88035 are estimated from spectrum synthesis calculation
of the oxygen triplet lines around 7774 \AA\, (figure 7). Although
O I triplet
is known to be affected by non-LTE conditions, the effect
decreases in objects with higher gravities. These effects are also
found to disappear for lines with equivalent widths below
about 45 m\AA\,; a discussion on these effects for K giants
is available in Eriksson \& Toft (1979).
The estimated C/O ratios for HD~51959 and HD~88035, 0.35 and
0.43 respectively, are consistent with those of barium stars.
The estimated [$\Sigma$CNO/Fe] values for HD~51959 and
HD~88035 are 0.20 and 0.39 respectively.
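These ratios follow directly from the number abundances; for HD~51959 the log ${\epsilon}$ values of Table 6 give (a sketch; the small offset from the quoted 0.20 reflects rounding):
\begin{verbatim}
import math

c, n, o = 8.40, 8.68, 8.85            # log eps (Table 6)
c0, n0, o0 = 8.43, 7.83, 8.69         # solar (Asplund et al. 2009)
fe_h = 0.03

c_over_o = 10 ** (c - o)              # C/O = 0.35
cno  = sum(10 ** x for x in (c, n, o))
cno0 = sum(10 ** x for x in (c0, n0, o0))
sig_cno_fe = math.log10(cno / cno0) - fe_h   # ~0.21
print(round(c_over_o, 2), round(sig_cno_fe, 2))
\end{verbatim}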
From estimates of CNO abundances Tomkin and Lambert (1979) have
demonstrated in a sample of barium stars that carbon shows near solar
values, oxygen is mildly
deficient ($\sim$ 0.1 dex), and nitrogen is mildly enhanced by
$\sim$ 0.3 dex. However, Barbuy et al. (1992) have found carbon
abundances in the range $-$0.25 $\leq$ [C/Fe] $\leq$ 0.3 for a
sample of barium stars; they noted that the less evolved
Ba stars show high N abundances (Barbuy et al. 1992, Allen and
Barbuy 2006).
Estimated
$^{12}$C/$^{13}$C ratios obtained using spectrum synthesis calculations
of the CN band at 8005 \AA\, (figure 8) are found to be
small with values 10.1 and 7.3 for HD~88035 and HD~121447 respectively.
The $^{12}$C/$^{13}$C ratios are derived using the set of $^{12}$CN
lines at 8003.553, 8003.910 \AA\, and $^{13}$CN features at 8004.554,
8004.728, 8004.781 \AA\, which are considered to be more reliable
for this calculation.
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{fig_mass_feros.eps}
\caption{The locations of HD~51959, HD~88035 and HD~121447 are shown
in the Hertzsprung-Russell diagram. The evolutionary tracks from Girardi et al.
(2000) are shown for masses 1.0, 1.1, 1.2, 1.3,
1.4, 1.5, 1.6, 1.7, 1.8, 2.0, 2.2, 2.5 and 3.0 M$_{\odot}$ from bottom to top. }
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{figure_carbon.eps}
\caption{
The spectral synthesis of $C_2$ band around 5165 \AA\,. In all the
panels the dotted lines indicate the synthesized spectra and the solid
lines indicate the observed line profiles. Two alternative synthetic
spectra for [C/Fe] = +0.3 (long-dashed line) and [C/Fe] = $-$0.3
(short-dashed line) are shown to demonstrate the sensitivity of
the line strength to the abundances. }
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{figure_nitrogen.eps}
\caption{ Spectral synthesis fits of CN band around 4215 \AA\,.
The best fit obtained with a carbon abundance of 8.7 dex and
$^{12}$C/$^{13}$C ${\sim}$ 10 returns a nitrogen abundance
of 7.1 dex (dotted lines). The solid line corresponds to the
observed spectrum. Two alternative plots with long-dash and
short-dash are shown with [N/Fe] = $\pm$0.3 from the adopted value.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{figure_oxygen.eps}
\caption{ The spectral synthesis plots of OI triplet lines obtained with the
adopted O abundances (dotted curve). The observed spectrum is shown
by a solid curve. Two alternative plots with long-dash and short-dash
are shown with [O/Fe] = $\pm$0.3 from the adopted value.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{fig-12c13-12dec.eps}
\caption{ The spectral synthesis fits of the CN features around 8005 \AA\,
obtained with the adopted N abundance and $^{12}$C/$^{13}$C values
(dotted curve). The observed spectrum is shown by a solid curve.
Two alternative fits with the solar value of $^{12}$C/$^{13}$C ${\sim}$ 90
(short-dashed line) and 5 (long-dashed line) are shown to illustrate
the sensitivity of the line strengths to the isotopic carbon abundance
ratios.
}
\end{figure}
The near solar Na abundances (Tables 6 \& 7)
derived using the Na I lines at 5682.65 and 5688.22 \AA\
for HD~51959 and HD~88035 are as normally seen in
field giants.
The sample of barium stars studied by de Castro
et al. (2016) show [Na/Fe] ratios in the range 0.3 to 0.6.
A single Al I line at 6783.68 \AA\, gives [Al/Fe] ${\sim}$
0.49 dex for HD~51959, a value not uncommon among barium stars;
de Castro et al.
(2016) found [Al/Fe] ${\sim}$ 0.43 for HD~43389, which is a metal-poor
star with [Fe/H] ${\sim}$ $-$0.52.
Except for Mg, which shows a mild underabundance, the abundance ratios
of ${\alpha}$-elements in HD~51959 and HD~88035
are very similar to those normally seen in giants and barium stars.
Mg is also found to be underabundant
in case of
HD~49641 and HD~58368, two strong barium stars analyzed
by Mahanta et al. (2016).
The lowest [Mg/Fe] found in the sample
of de Castro et al. (2016) is [Mg/Fe] ${\sim}$ 0.13 for HD~142751
([Fe/H] = $-$ 0.1). Our estimate of [Mg/Fe] for HD~121447 is
near solar with value of 0.04.
Silicon and calcium show near solar values for HD~51959 and HD~88035
(Tables 6 \& 7); in HD~121447 these values are marginally higher than those
normally seen in barium stars.
The abundance of Ti in the case of HD~51959 is near solar (Table 6), and
marginally higher
in HD~88035 and HD~121447. These values are consistent with
those of the de Castro et al. (2016) sample.
Abundance of Scandium is derived for HD~51959 and HD~88035
using spectrum synthesis calculation of Sc II line at 6245.63 {\AA}
considering hyperfine structure from Prochaska $\&$ McWilliam (2000).
[Sc/Fe] is near solar in HD~51959 and mildly
enhanced in HD~88035 (Tables 6, 7). The abundance of Sc could not
be measured in HD~121447.
Abundance of V is estimated using spectrum synthesis calculation
of V I line at 5727.028 {\AA\,} taking into account the hyperfine
components from Kurucz data base (Table 6).
Yang et al. (2016) find a similar value
of [V/Fe] = 0.2 for HD~81797. Their sample shows [Sc/Fe]
in the range $-$0.07 to 0.17. Vanadium is mildly enhanced in HD~88035
and HD~121447 with [V/Fe] = 0.42 (Tables 7 and 8).
Abundances of Sc and V are not reported in de Castro et al. (2016).
Abundance ratios of iron peak elements in HD~51959 and HD~88035
exhibit similar values as seen in normal giants and barium stars.
Cr shows near solar values in all the three objects (Tables 6, 7, 8).
The abundance of manganese is derived using spectrum synthesis calculation
of 6013.51 {\AA} line taking into account the
hyperfine structures from Prochaska $\&$ McWilliam (2000). Considering
the uncertainty limits, Mn is found to be near solar in HD~51959
and mildly underabundant in HD~88035 (Tables 6 \& 7).
This ratio is +0.51 for
HD~121447, which is marginally above the values normally seen
in barium stars. The abundance of Mn in Yang et al. (2016)
ranges from $-$0.27 (HD~58121) to 0.3 (HD~43232) in their sample.
While Co shows a mild enhancement in HD~51959 and HD~88035,
the abundances of Ni are near solar (Tables 6 \& 7).
A mild
enhancement is noticed in HD~121447 with a value of 0.37.
Abundance of Zn derived using a single line at 4810.528 {\AA\,}
gives [Zn/Fe] ${\sim}$ 0.21 for HD~88035.
Abundances of Mn, Co and Zn are not reported in de Castro et al. (2016),
nor are those of Co and Zn in Yang et al. (2016).
The abundance of Sr is measured for HD~51959 and HD~88035
using both equivalent width measurements and spectrum synthesis
calculation of the Sr I line at 4607.7 {\AA}, and is found to be enhanced.
Abundances of Sr are not reported for their sample in
de Castro et al. (2016) and Yang et al. (2016).
Abundance of Y derived from the Y II lines indicates enhancement
in all the three stars.
Spectrum synthesis calculation fits for Y II line
at 5289.81 {\AA\,} is shown in figure 9.
For HD~121447, the Y abundance is derived using two Y II lines at
5119.11 and 5544.61 {\AA}.
The zirconium abundances derived from Zr I lines in HD~88035 and HD~121447
and from Zr II lines at 4414.54
and 4613.97 {\AA\,} for HD~51959 show a large enhancement.
Enhancements of the Y and Zr
abundances are noticed in the sample of Yang et al. (2016) and also in the
sample of four barium stars studied by Mahanta et al. (2016).
The sample of de Castro et al. (2016) shows a wide range in abundance ratios
from near solar to [X/Fe] $>$ 1.0 for both Y and Zr.
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{figure_SrY.eps}
\caption{Spectral synthesis of Sr I line at 4607.33 {\AA} and Y II
line at 5289.81 {\AA} are shown for HD 51959 (lower panel) and HD 88035
(upper panel). The dotted lines indicate the synthesized spectra and
the solid lines indicate the observed line profiles. Two alternative
synthetic spectra for [X/Fe] = +0.3 (long-dashed line) and
[X/Fe] = $-$0.3 (short-dashed line) from the observed value are shown
to demonstrate the sensitivity of the line strength to the abundances. }
\end{figure}
We have used Ba II lines at 4934, 5853, 6141 and 6496 {\AA}
whenever available to derive the barium abundance.
Spectrum synthesis calculation of the Ba II line at 5853 {\AA}
(figure 10, left panels),
considering the hyperfine splitting contributions from McWilliam (1998),
is also performed to estimate the barium abundance.
Reported [Ba/Fe] values in Tables 6-8 are
those obtained from spectrum synthesis calculations of
Ba II feature at 5853.66 {\AA}. While
de Castro et al. (2016) have not reported abundances of Ba for their sample,
[Ba/Fe] ranges from 0.17 (HD~11658) to 1.13 (HD~49641) in the sample of
Yang et al. (2016).
Abundance of lanthanum is derived from the spectrum synthesis calculation
of La II line at 4921.77 {\AA\,}
(figure 10, right panels)
considering hyperfine components from Jonsell et al. (2006). The
derived values of La indicate large enhancements (Tables 6, 7, 8).
Estimates of [La/Fe] range from 0.26 (HD~11658) to 1.38 (HD~49641)
in the sample of Yang et al. (2016); these estimates have a wider range
for the sample of de Castro et al. (2016) from 0 (BD$-$01302) to
2.7 (HD~24035).
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{figure_BaLa.eps}
\caption{Spectral synthesis of Ba II line at 5853.66 {\AA\,} and
La II line at 4921.77 {\AA\,} are shown for HD 51959 (lower panel)
and HD 88035 (upper panel). The dotted lines indicate the synthesized
spectra and the solid lines indicate the observed line profiles.
Two alternative synthetic spectra for [X/Fe] = +0.3 (long-dashed line)
and [X/Fe] = $-$0.3 (short-dashed line) from the observed value
are shown to demonstrate the sensitivity of the line strength
to the abundances. }
\end{figure}
Ce, Pr, Nd and Sm abundances are also found to be enhanced in all
the three stars (Tables 6, 7, 8).
In the sample of
de Castro et al. (2016) the estimates of [Ce/Fe]
range from $-$0.08 (HD~212484) to
1.87 (HD~107541) and [Nd/Fe]
from 0.0 (HD~51315) to 1.83 (HD~107541).
A spectrum synthesis calculation
of Eu II line at 6645.130 {\AA\,}
by considering the hyperfine components from Worley et al.
(2013) shows enhancement of Eu in HD~121447.
Since this line is found to be slightly distorted,
our estimate may be regarded as an upper limit.
For HD~88035, the Eu abundance
derived from the spectrum synthesis calculation of Eu II line
at 6437.64 {\AA}
shows an enhancement with [Eu/Fe] ${\sim}$ 0.96.
Abundance of dysprosium is derived using Dy II lines
at 4103.310 and 4923.167 {\AA\,} for HD~51959.
For HD~88035 and HD~121447, Dy abundance was derived from a single line
at 4923.17 {\AA}.
The estimated [hs/ls] $>$ 0 for HD~88035 and HD~121447 (where hs refers
to the second peak s-process elements and ls refers to the first
peak s-process
elements) is an indication that the neutron exposures experienced
in their AGB progenitor companions are sufficiently strong to
produce the `hs' elements more abundantly than the `ls' elements.
{\footnotesize
\begin{table*}
{\bf Table 6 : Elemental abundances in HD~51959}\\
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& & & & & \\
& Z & solar $log{\epsilon}^a$ & $log{\epsilon}$& [X/H] & [X/Fe] \\
& & & dex & & \\
\hline
C {\sc i} & 6 & 8.43 & 8.40$\pm$0.20(syn) & -0.03 & -0.06 \\
N {\sc i} & 7 & 7.83 & 8.68$\pm$0.20(syn) & 0.85 & 0.82 \\
O {\sc i} & 8 & 8.69 & 8.85$\pm$0.20(syn) & 0.16 & 0.13 \\
Na {\sc i} & 11 & 6.24 & 6.29$\pm$0.17(6) & 0.05 & 0.02 \\
Mg {\sc i} & 12 & 7.60 & 7.21$\pm$0.16(2) & -0.39 & -0.42 \\
Al {\sc i} & 13 & 6.45 &6.97$\pm$0.20(1) & 0.52 & 0.49 \\
Si {\sc i} & 14 & 7.51 & 7.67$\pm$0.21(4) & 0.16 & 0.13\\
Ca {\sc i} & 20 & 6.34 & 6.11$\pm$0.16(5) & -0.23 & -0.26\\
Sc {\sc ii}* & 21 & 3.15 & 3.30$\pm$0.20(1,syn) & 0.15 & 0.14\\
Ti {\sc i} & 22 & 4.95 & 4.89$\pm$0.15(13) & -0.06 & -0.09\\
Ti {\sc ii} & 22 & 4.95 & 4.93$\pm$0.20(1) & -0.02 & -0.03 \\
V {\sc i}* & 23 & 3.93 & 4.10$\pm$0.20(1,syn) & 0.17 & 0.14\\
Cr {\sc i} & 24 & 5.64 & 5.54$\pm$0.18(10) & -0.10 & -0.13\\
Mn {\sc i}* & 25 & 5.43 & 5.20$\pm$0.20(1,syn) & -0.23 & -0.26\\
Fe {\sc i} & 26 & 7.50 & 7.53$\pm$0.12(56) & 0.03& -\\
Fe {\sc ii} & 26 & 7.50 & 7.51$\pm$0.20(8) & 0.01& -\\
Co {\sc i} & 27 & 4.99 & 5.20$\pm$0.13(6) & 0.21 & 0.24\\
Ni {\sc i} & 28 & 6.22 & 6.19$\pm$0.12(14) & -0.03 & -0.07\\
Sr {\sc i}* & 38 & 2.87 & 4.18$\pm$0.20(1,syn) & 1.31 & 1.28\\
Y {\sc ii}* & 39 & 2.21 & 3.20$\pm$0.18(5) & 0.99 & 0.98\\
Zr {\sc ii} & 40 & 2.58 & 3.86 $\pm$0.24(2) & 1.28 & 1.25\\
Ba {\sc ii}* & 56 & 2.18 & 3.00$\pm$0.20(1,syn) & 0.82 & 0.81\\
La {\sc ii}* & 57 & 1.10 & 2.05$\pm$0.20(1,syn) & 0.95 & 0.94\\
Ce {\sc ii} & 58 & 1.58 & 2.61$\pm$0.21(12) & 1.03 & 1.02\\
Pr {\sc ii} & 59 & 0.72 & 1.91$\pm$0.17(3) & 1.19 & 1.18\\
Nd {\sc ii} & 60 & 1.42 & 2.59$\pm$0.20(9) & 1.17 & 1.16\\
Sm {\sc ii} & 62 & 0.96 & 1.99$\pm$0.13(5) & 1.03 & 1.02\\
Dy {\sc ii} & 66 & 1.10 & 2.71$\pm$0.22(2) & 1.61 & 1.60\\
\hline
\end{tabular}
$^{a}$ Asplund et al. (2009) \\
$^*$ abundances are derived using spectral synthesis of respective lines.\\
\end{table*}
}
{\footnotesize
\begin{table*}
{\bf Table 7 : Elemental abundances in HD~88035}\\
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& & & & & \\
& Z & solar $log{\epsilon}^a$ & $log{\epsilon}$& [X/H] & [X/Fe]\\
& & & dex & & \\
\hline
C {\sc i} & 6 & 8.43 & 8.53$\pm$0.20(syn) & 0.10 & 0.1\\
N {\sc i} & 7 & 7.83 & 8.87$\pm$0.20(syn) & 1.04 & 1.09 \\
O {\sc i} & 8 & 8.69 & 8.90$\pm$0.20(syn) & 0.21 & 0.26 \\
Na {\sc i} & 11 & 6.24 & 6.36$\pm$0.13(4) & 0.12 & 0.17\\
Mg {\sc i} & 12 & 7.60 & 7.23$\pm$0.03(2) & -0.37 & -0.32 \\
Si {\sc i} & 14 & 7.51 & 7.69$\pm$0.13(2) & 0.18 & 0.13\\
Ca {\sc i} & 20 & 6.34 & 6.24$\pm$0.16(8) & -0.10 & -0.05\\
Sc {\sc ii}* & 21 & 3.15 & 3.50$\pm$0.20(1,syn) & 0.35 & 0.34\\
Ti {\sc i} & 22 & 4.95 & 5.06$\pm$0.16(13) & 0.11 & 0.16\\
Ti {\sc ii} & 22 & 4.95 & 5.22$\pm$0.18(8) & 0.27 & 0.26\\
V {\sc i}* & 23 & 3.93 & 4.30$\pm$0.20(1,syn) & 0.37 & 0.42\\
Cr {\sc i} & 24 & 5.64 & 5.56$\pm$0.19(6) & -0.08 & -0.03\\
Mn {\sc i}* & 25 & 5.43 & 4.95$\pm$0.20(1,syn) & -0.48 & -0.43\\
Fe {\sc i} & 26 & 7.50 & 7.45$\pm$0.11(69) & -0.05& -\\
Fe {\sc ii} & 26 & 7.50 & 7.51$\pm$0.10(7) & 0.01& -\\
Co {\sc i} & 27 & 4.99 & 5.36$\pm$0.20(5) & 0.37 & 0.42\\
Ni {\sc i} & 28 & 6.22 & 6.24$\pm$0.18(20) & 0.02 & 0.07\\
Zn {\sc i} & 30 & 4.56 & 4.30$\pm$0.20(1) & 0.26 & 0.21 \\
Sr {\sc i}* & 38 & 2.87 & 4.27$\pm$0.20(1,syn) & 1.40 & 1.45\\
Y {\sc ii}* & 39 & 2.21 & 3.42$\pm$0.14(5) & 1.21 & 1.20\\
Zr {\sc i} & 40 & 2.58 & 3.83$\pm$0.11(4) & 1.25 &1.30 \\
Ba {\sc ii}* & 56 & 2.18 & 3.70$\pm$0.20(1,syn) & 1.52 & 1.51\\
La {\sc ii}* & 57 & 1.10 & 2.80$\pm$0.20(1,syn) & 1.70 & 1.69\\
Ce {\sc ii} & 58 & 1.58 & 3.36$\pm$0.19(10) & 1.78 & 1.76\\
Pr {\sc ii} & 59 & 0.72 & 2.70$\pm$0.05(3) & 1.98 & 1.97\\
Nd {\sc ii} & 60 & 1.42 & 3.17$\pm$0.16(11) & 1.75 & 1.74\\
Sm {\sc ii} & 62 & 0.96 & 2.90$\pm$0.13(5) & 1.94 & 1.93\\
Eu {\sc ii}* & 63 & 0.52 & 1.49$\pm$0.19(1,syn) & 0.97 & 0.96\\
Dy {\sc ii} & 66 & 1.10 & 2.56$\pm$0.20(1) & 1.46 & 1.45\\
\hline
\end{tabular}
$^{a}$ Asplund et al. (2009) \\
$^*$ abundances are derived using spectral synthesis of respective lines.\\
\end{table*}
}
{\footnotesize
\begin{table*}
{\bf Table 8 : Elemental abundances in HD~121447}\\
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& & & & & \\
& Z & solar $log{\epsilon}^a$ & $log{\epsilon}$& [X/H] & [X/Fe] \\
& & & dex & & \\
\hline
C {\sc i} & 6 & 8.43 & 8.60$\pm$0.20(syn) & 0.17 & 0.82 \\
N {\sc i} & 7 & 7.83 & 8.26$\pm$0.20(syn) & 0.43 & 1.08 \\
Mg {\sc i} & 12 & 7.60 & 6.99$\pm$0.20(1) & -0.61 & 0.04 \\
Si {\sc i} & 14 & 7.51 & 7.73$\pm$0.20(1) & -0.04 & 0.61\\
Ca {\sc i} & 20 & 6.34 & 6.30$\pm$0.18(4) & -0.04 & 0.61\\
Ti {\sc i} & 22 & 4.95 & 4.59$\pm$0.02(3) & -0.36 & 0.29\\
V {\sc i}* & 23 & 3.93 & 4.30$\pm$0.20(1,syn) & 0.37 & 0.42\\
Cr {\sc i} & 24 & 5.64 & 5.15$\pm$0.05(2) & -0.49 & 0.16\\
Mn {\sc i}* & 25 & 5.43 & 5.29$\pm$0.20(1,syn) & -0.14 & 0.51\\
Fe {\sc i} & 26 & 7.50 & 6.85$\pm$0.15(28) & -0.65& -\\
Fe {\sc ii} & 26 & 7.50 & 6.85$\pm$0.06(2) & -0.65& -\\
Ni {\sc i} & 28 & 6.22 & 5.94$\pm$0.18(6) & -0.28 & 0.37\\
Y {\sc ii}* & 39 & 2.21 & 3.17$\pm$0.26(2) & 0.96 & 1.61\\
Zr {\sc i} & 40 & 2.58 & 3.62$\pm$0.12(2) & 1.04 &1.69\\
Ba {\sc ii}* & 56 & 2.18 & 4.20$\pm$0.20(1,syn) & 2.02 & 2.67\\
La {\sc ii}* & 57 & 1.10 & 2.40$\pm$0.20(1,syn) & 1.67 & 2.32\\
Ce {\sc ii} & 58 & 1.58 & 2.60$\pm$0.16(4) & 1.02 & 1.67\\
Pr {\sc ii} & 59 & 0.72 & 2.62$\pm$0.06(5) & 1.90 & 2.55\\
Nd {\sc ii} & 60 & 1.42 & 2.89$\pm$0.16(5) & 1.47 & 2.12\\
Sm {\sc ii} & 62 & 0.96 & 2.48$\pm$0.15(3) & 1.52 & 2.17\\
Eu {\sc ii}* & 63 & 0.52 & 1.30$\pm$0.20(1,syn) & 0.78 & 1.43\\
Dy {\sc ii} & 66 & 1.10 & 2.96$\pm$0.20(1) & 1.86 & 2.51\\
\hline
\end{tabular}
$^{a}$ Asplund et al. (2009) \\
$^*$ abundances are derived using spectral synthesis of respective lines.\\
\end{table*}
}
\begin{table*}
{\bf Table 9: Observed values for [Fe/H], [ls/Fe], [hs/Fe] and [hs/ls]}\\
\begin{tabular}{lcccc}
\hline
Star Name & [Fe/H] & [ls/Fe] & [hs/Fe] & [hs/ls] \\
\hline
HD 51959& 0.03& 1.18& 1.00& -0.18\\
HD 88035& -0.05& 1.29& 1.67& 0.14\\
HD 121447 & -0.65&1.63&2.39&0.76 \\
\hline
\end{tabular}
\end{table*}
{\footnotesize
\begin{table*}
{\bf Table 10: lines used for deriving elemental abundances}\\
\begin{tabular}{lccllllc}
\hline
Wavelength& Element & E$_{low}$ & log gf& HD~51959&HD~88035&HD~121447&Ref\\
~~~~~ {\AA} & id & ev & & ~~m{\AA}(dex) & ~~m{\AA}(dex) & ~~m{\AA}(dex)& \\
\hline
5682.650 & Na I& 2.100 & -0.70 & 140(6.52) & 136(6.46) & - &1\\
5688.220 & & 2.100 & -0.40 & 139(6.21) & 146(6.27) &-&1\\
6160.747 & & 2.105 & -1.26 & 85(6.31) & - &-&1\\
8194.824 & & 2.105 & 0.49 & 258(6.12) & - &-&1\\
5889.950 & & 0.000 & 0.10 & 545(5.55) & 626(5.87) &-&2\\
5895.920 & & 0.000 & -0.20 & 601(5.95) & 471(5.87) &-&2\\
4702.990 & Mg I& 4.350 & -0.67 & 229(7.37) & 214(7.26) &-&3\\
5528.400 & & 4.350 & -0.49 & 224(7.22) & 227(7.20) &265.9(6.98)&3\\
6783.680 & Al I& 4.021 & -1.44 & 45(6.98) & - &-&4\\
4782.990 & Si I& 4.954 & -2.47 & - & 38(7.85) &-&1\\
\hline
\end{tabular}
The numbers in parentheses in columns 5 - 7 give the derived
abundances from the respective line.\\
References: 1. Kurucz \& Peytremann (1975), 2. Wiese et al. (1966),
3. Lincke \& Ziegenbein (1971), 4. Kurucz (1975)\\
{\bf Note.} This table is available in its entirety in online only.
A portion is shown here for guidance regarding its form and content.\\
\end{table*}
}
\begin{table*}
{\bf Table 11: Abundance Uncertainties}\\
\begin{tabular}{llllll}
\hline
Star & Standard & $\delta T_{eff}$&$ \delta$log g &$ \delta \xi$ & Total\\
& error & $\pm$ 100K & $\pm$ 0.1 dex & $\pm$ 0.3 km s$^{-1}$ & Uncertainty \\
\hline
HD 51959 & 0.12 & 0.06 & 0.04 & 0.03&0.14\\
HD 88035 & 0.11& 0.03 & 0.02 & 0.03&0.12\\
HD 121447&0.15& 0.04 &0.02&0.01&0.16\\
\hline
\end{tabular}
\end{table*}
{\footnotesize
\begin{table*}
{\bf Table 12: Atmospheric parameters of HD 121447 }\\
\begin{tabular}{lccccc}
\hline
Star Name. & $T_{eff}$ & log g & $\xi $ &[Fe/H] &Ref \\
& K & & km s$^{-1}$ & & \\
\hline
HD~121447&4500&2.2&3.02&-0.65&1\\
&4000&1.0&2.0& -0.25&2\\
&3900&1.0&2.0&-0.50& 3\\
&4200&0.8&2.5&0.05&4\\
\hline
\end{tabular}
1. Our work, 2. Abia et al. (1998), 3. Merle et al. (2016), 4. Smith (1984)\\
\end{table*}
}
\section {Kinematic analysis}
We have calculated the space velocity for the stars with respect to
the Sun using the method of Johnson \& Soderblom (1987). The space
velocity with respect to the Local Standard of Rest (LSR) is given by \\
\begin{center}
$(U, V, W)_{LSR} =(U,V,W)+(U, V, W)_{\odot}$ km/s.
\end{center}
where $(U, V, W)_{\odot} =(11.1, 12.2, 7.3)$ km/s (Sch\"onrich et al. 2010)
is the solar motion with respect to LSR and
\begin{center}
$\left[ \begin{array}{c} U \\ V \\ W \end{array} \right] = B.\left[ \begin{array}{c} V_{r} \\ k.\mu_{\alpha}/\pi \\ k.\mu_{\delta}/\pi \end{array} \right]$
\end{center}
where $B=T.A$, T is the transformation matrix connecting the Galactic
coordinate system and the equatorial coordinate system, k = 4.74057 km s$^{-1}$
is the equivalent of 1 AU yr$^{-1}$, and $\mu_{\alpha}$ and $\mu_{\delta}$ are
the proper motions in RA and Dec.
The components of the spatial velocity, $U_{LSR}$,
$V_{LSR}$ and $W_{LSR}$, are measured along axes pointing towards
the Galactic center, the direction of Galactic rotation and the North
Galactic Pole respectively.
We have used the radial velocity estimate and the
corresponding error estimate as measured by us. The estimates for
proper motion are taken from SIMBAD. Distances are estimated using
parallax values from SIMBAD and GAIA whenever available.
\noindent
The total spatial velocity of a star is given by
$V_{spa}^{2}=U_{LSR}^{2}+V_{LSR}^{2}+W_{LSR}^{2}$.
The estimated components of spatial velocity and the total spatial
velocity are presented in Table 13. According to Chen et al. (2004)
V$_{spa}$ $\le$ 85 km s$^{-1}$ for a thin disk star. The total
spatial velocities for all the three stars are found to be below
this value.
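The total spatial velocities in Table 13 are simply the quadrature sums of the components; e.g., for HD~51959 (a quick check: the components themselves require the full Johnson \& Soderblom (1987) transformation):
\begin{verbatim}
import math

u, v, w = -25.17, -4.12, -9.58   # (U,V,W)_LSR from Table 13, km/s
v_spa = math.sqrt(u*u + v*v + w*w)
print(round(v_spa, 2))           # ~27.25 km/s (Table 13), up to rounding
\end{verbatim}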
We have also calculated the probabilities for the stars to be members
of the thin disk, the thick disk or the halo population following the
procedures of Reddy et al. (2006), Bensby et al. (2003, 2004) and
Mishenina et al. (2004).
The estimated metallicities and low spatial velocities indicate the
stars to be thin disk objects. The probability estimates for them
being members of the thin disk population are 0.99, 0.99 and 0.92
for HD~51959, HD~88035 and HD~121447 respectively (Table 13).
\begin{table*}
{\bf Table 13: Spatial velocity and probability estimates for the programme stars.}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline\hline
Star name & $U_{LSR}$ (km/s) & $V_{LSR}$ (km/s) & $W_{LSR}$ (km/s) & $V_{spa}$ (km/s) & $p_{thin}$ & $p_{thick}$ & $p_{halo}$\\
\hline
HD 51959 & $-25.17\pm0.98$ & $-4.12\pm1.2$ & $-9.58\pm2.19$ & $27.25\pm1.86$ & 0.99 & 0.009 & 0.00019\\
HD 88035 & $-36.00\pm4.06$ & $13.02\pm0.84$ & $-12.24\pm2.02$ & $40.19\pm4.52$ & 0.988 & 0.01079 & 0.0002\\
HD 121447 & $-45.19\pm15.05$ & $-17.72\pm8.54$ & $35.61\pm8.19$ & $60.21\pm18.60$ & 0.922 & 0.0755 & 0.0023\\
\hline
\end{tabular}
\end{table*}
{\footnotesize
\begin{table*}
{\bf Table 14: Abundance ratios of carbon and neutron-capture elements in
HD~121447 }\\
\begin{tabular}{lcccccccccccc}
\hline
Star Name. & [C/Fe] & [N/Fe] & [Sr/Fe] & [Y/Fe] &[Zr/Fe] & [Ba/Fe]& [La/Fe]& [Ce/Fe] & [Nd/Fe]& [Sm/Fe]& [Eu/Fe] & Ref \\
\hline
HD 121447& 0.82& 0.96 & -& 1.61&1.64&2.67&2.33&1.67&2.12&2.17&1.43&1\\
&- & -& 1.2&1.5&1.15&1.3&2.0&1.5&1.3&-&0.95&2\\
&0.55& - &0.97&0.94&1.86&2.33&1.74&1.27&-&-&-&3\\
&0.22& 0.52 & 1.22&0.76&0.66&0.57&0.70&0.73&0.65&-&0.39&4\\
\hline
\end{tabular}
1. Our work, 2. Abia et al. (1998), 3. Merle et al. (2016), 4. Smith (1984)\\
\end{table*}
}
\section{Discussions and conclusions}
Chemical analysis of three stars from the Bartkevicius (1996) catalogue
of CH stars clearly shows spectral properties that are characteristic
of barium stars. In particular, with [Ba/Fe] ${\sim}$ 0.81, the object
HD~51959 seems to be a mild barium star, as assigned by
Udry et al. (1998a, 1998b) and Jorissen et al. (1998),
HD~88035 a potential member of the strong barium stars, and HD~121447 a
metal-poor barium star.
CH stars are known to be high velocity objects with
$\mid$V$_{r}$$\mid$ $\ge$ 100 km s$^{-1}$. None of the three objects is found
to be a high velocity object (Table 4). From the kinematic analysis all the
three stars are found to be members of thin disk population with high
probability. From radial velocity variations Jorissen et al. (1998, 2016)
have measured a period of 185.7 days and eccentricity 0.01 for the object
HD~121447.
These values are consistent with those of the majority of barium and CH stars.
They have suggested this object to be an ellipsoidal variable. The
orbital information for HD~88035 is not available in literature. HD~51959
is believed to be a long period ($\sim$9488 days) object with an
eccentricity of 0.58 (Jorissen et al. in preparation). This is not
surprising as barium stars are known to have longer orbital periods
and greater eccentricities than their population II analogs, the CH stars.
All the three objects in our sample show an enhancement in nitrogen
with [N/Fe] in the range 0.6 to 1.0 (figure 11). An insightful
discussion on C, N, O abundances of barium stars can be found in
Smith (1984). For the object HD~121447, which is common to his sample,
the best values for C and O were found to be logN(C) = 8.7
and logN(O) = 8.8. Using these abundances of C and O he derived
a nitrogen abundance of logN(N) = 8.4 for HD~121447. Our C and N
abundance values are slightly lower, with log N(C) = 8.6
and log N(N) = 8.14 for this object. We note, however, that our
stellar parameters for this object differ considerably from those
of Smith (1984) (Table 12). As discussed by many authors
(Smith (1984), Luck and Lambert (1985), Barbuy et al. (1992), Smiljanic
et al. (2006), Merle et al. (2016)), higher N abundances with low C
abundances observed in some Ba stars indicate CN processing. The increase
in the [N/C] ratio as the star ascends the giant branch is attributed to
mixing processes such as the first dredge-up (FDU). Smiljanic et al. (2006)
describe a more complex kind of rotational mixing which can cause
the increase in the [N/C] ratio. From the HR diagram, our objects are on the
first ascent of the giant branch, which clearly supports an increase of the
N abundance over that of carbon.
The abundances of light elements in Ba stars are expected to scale with
metallicity. From figure 12, it is clear that our
results are consistent with those for other objects with similar
metallicity values.
The heavy-element abundance ratios observed in the three stars
match closely those of the majority of barium stars in the literature. This is clearly
evident from a comparison of the estimates of the abundance ratios with
those of de Castro et al. (2016) and Yang et al. (2016) (figure 13).
The objects HD~88035 and HD~121447 are peculiar, with very high N abundances
along with a large enhancement of heavy s-process elements. This could
possibly be due to the presence of high mass companions to these objects,
in which the neutron densities are expected to be high. The large N
enhancement in these objects could be due to hot bottom burning (HBB)
operating in the higher
mass AGB companions of these objects. Moreover, the higher enhancement of the
r-process elements Sm, Eu and Dy in these objects could also indicate
the presence of higher neutron densities in their AGB companions
(Cowan and Rose 1977). However, the low masses derived for these objects,
which represent a lower limit to the companion masses, do not support
the presence of a higher mass companion.
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{fig_CNO_lit_11sept.eps}
\caption{CNO abundance ratios observed in HD 51959 (star symbol), HD 88035 (solid triangle) and HD 121447 (solid hexagon) with respect to metallicity [Fe/H].
Solid circles represent Ba stars from literature (Smith (1984), Barbuy (1992),
Allen and Barbuy (2006), Merle et al. (2016)).}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{fig_NaNi_20july.eps}
\caption{Abundance ratios of light elements observed in
HD 51959 (star symbol), HD 88035 (solid triangle) and HD 121447 (solid hexagon)
with respect to metallicity [Fe/H]. Solid circles represent Ba stars from
literature (Allen and Barbuy (2006), Liang et al. (2003),
Smiljanic et al. (2007), Zacs (1994), de Castro et al. (2016) and
Mahanta et al. (2016)). Open squares represent
normal giants from literature (Luck and Heiter 2007,
Mishenina et al. 2006). }
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm,width=8cm]{fig_SrEu_20apr17.eps}
\caption{Abundance ratios of heavy elements observed in HD 51959
(star symbol), HD 88035 (solid triangle) and HD 121447 (solid hexagon)
with respect to metallicity [Fe/H]. Solid circles represent Ba stars from
literature (Allen and Barbuy (2006), Liang et al. (2003), Smiljanic et al.
(2007), Zacs (1994), de Castro et al. (2016) and Mahanta et al. (2016)).
Solid squares represent normal giants from literature (Van der Swaelmen
et al. (2016), Tautvaisiene et al. (2000) and Luck and Heiter (2007)).
Open circles indicate CH stars from Karinkuzhi \& Goswami (2014, 2015). }
\end{figure}
\vskip 0.3cm
\noindent
{\it Acknowledgement}\\
This work made use of the
SIMBAD astronomical database, operated at CDS, Strasbourg, France, and the
NASA ADS, USA. M. P. is a JRF in the
DST project SB/S2/HEP-010/2013; funding from this project
is gratefully acknowledged. N Sridhar, a student of integrated BS-MS
course of IISER, Bhopal, gratefully acknowledges the summer internship
at IIA, under IIA's Visiting Student Program - 2015. \\
\section{Introduction}
The brain has a great capacity for learning and memory, and the mechanisms that allow it to reliably and flexibly store information can provide new foundational mechanisms for learning in artificial networks.
Perhaps the most widely discussed mechanism associated with learning is Hebbian plasticity \citep{hebb-organization-of-behavior-1949,Markram2011}.
This theory on neural learning states that when one neuron causes repeated excitation of another, the efficiency with which the first cell excites the second is increased.
The basic idea underlying Hebbian mechanisms is the brain's ability to change: local activity changes how neurons in a network communicate with each other, in turn affecting the overall behavior.
In Hebbian plasticity, these changes are to the strength of connections between neurons.
However, experimental observations \citep{Bucher2011, Grossman1979, Hatt1976, Luscher1994} have demonstrated that local activity can affect not only the \textit{strength} of connections but also the \textit{speed} with which action potentials travel between neurons.
This alteration in transmission delays is likely an inherent part of how the brain learns and stores memories, as encoding information in time-locked sequences expands the computational capacity of a network \citep{Izhikevich2006}.
Local plasticity rules, such as spike-timing-dependent plasticity (STDP) \citep{Markram1997}, that change synaptic weights in an activity-dependent manner are of great interest in the context of unsupervised deep learning in deep spiking neural networks (SNNs) \citep{Tavanaei2019}.
But why should plasticity in SNNs be confined to synaptic weights, when we are aware of a much richer repertoire of plastic changes that occur in the brain \citep{Gittis2006plasticity,Zhang2003plasticity,Hansel2001plasticity}?
Delay plasticity in neural networks has been explored, but the majority of studies have used supervised methods \citep{Schrawen2004, Wang2019, Taherkhani2015, Johnston2006}, with one noteworthy study using an unsupervised method to train only the readout layer of a reservoir \citep{Paugam-MoisyHelene2008}.
Here, we present our novel STDP analogue for local delay learning \citep{JorgenThesis}.
In this proposed learning rule, the timing of pre- and post-synaptic spikes influences the \textit{delay} of the connection rather than its weight, causing any subsequent spike transmission between a pair of neurons to occur at a different speed.
The main mechanism of our method is to better align all pre-synaptic spikes causally related to a post-synaptic spike, with the purpose of producing a faster and stronger response in the post-synaptic neuron.
We apply our developed delay learning method to the classification of handwritten digits \citep{LeCun2005mnist} in a simple proof-of-concept and demonstrate that training delays in a feedforward SNN is an effective method for information processing and classification.
Our networks consistently outperformed their untrained counterparts and were able to generalize their training to a digit class unseen during training.
\section{Delay learning in spiking neural networks}
\label{sec:delaylearning}
This section presents the novel activity-dependent delay plasticity method developed in this study and the encoding and decoding approaches of latency coding (LC) and polychronous group pattern (PGP) clustering used in our delay learning framework\footnote{Code available upon request.}.
The goal of our proposed learning method is to consolidate the network activity associated with similar inputs that constitute a distinct input class, so that the network will produce similar patterns of activity to be read out.
With this aim in mind, the delays of pre-synaptic neurons that together produce activity in a post-synaptic neuron are adjusted to better align the arrival of their spikes at the post-synaptic neuron.
Our framework was developed using Izhikevich regular spiking (RS) neurons.
Analogous to how STDP potentiates connections between causally related neurons to enhance the post-synaptic response, our delay plasticity mechanism increases the post-synaptic response by better aligning causally related pre-synaptic spikes.
This alignment process is illustrated in Fig.~\ref{fig:spikealignment} for the case of four pre-synaptic neurons connected to one post-synaptic neuron.
As shown in this figure, the pre-synaptic spikes (purple lines) that arrive (green lines) before the post-synaptic spike (blue line) are pushed towards their average arrival time (yellow line).
The delay $d_{i,j}$ between pre-synaptic neuron $i$ and post-synaptic neuron $j$ is changed according to the following equation:
\begin{equation}
\label{eq:delayeq}
\Delta d_{i,j} =
-3 \, \mathrm{tanh}\left(\frac{t_i+d_{i,j}-\bar{t}_{\mathrm{pre}}}{3}\right),
\;\;\;\; 0 \leq \Delta t_{\mathrm{lag}} < 10~\mathrm{ms},
\end{equation}
where $t_i$ is the spike time of neuron $i$, $\bar{t}_{\mathrm{pre}}$ is the average pre-synaptic arrival time across all neurons with spikes arriving within 10~ms before the post-synaptic spike, and $\Delta t_{\mathrm{lag}} = t_{j} - (t_{i} + d_{i,j})$ is the time lag between when the pre-synaptic spike arrives at the post-synaptic neuron and when the post-synaptic neuron fires.
The time window of $10~\mathrm{ms}$ was selected because this is the window in which a pre-synaptic spike can elicit a post-synaptic response.
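As a minimal illustration of Eq.~(\ref{eq:delayeq}), the update can be written as follows (a Python/NumPy sketch under the stated assumptions; the variable names and the clipping of delays to the initial $[0,40~\mathrm{ms}]$ range are our own choices, not part of any released implementation):
\begin{verbatim}
import numpy as np

def update_delays(t_pre, d, t_post, window=10.0, d_max=40.0):
    # t_pre: pre-synaptic spike times (ms); d: current delays d_{i,j};
    # t_post: post-synaptic spike time (ms)
    arrival = t_pre + d                       # spike arrival times
    lag = t_post - arrival                    # Delta t_lag
    causal = (lag >= 0.0) & (lag < window)    # 10 ms causal window
    if not np.any(causal):
        return d
    t_bar = arrival[causal].mean()            # average arrival time
    delta = -3.0 * np.tanh((arrival - t_bar) / 3.0)   # Eq. (1)
    d_new = d.copy()
    d_new[causal] += delta[causal]
    return np.clip(d_new, 0.0, d_max)         # keep delays in range
\end{verbatim}
For the four-neuron example of Fig.~\ref{fig:spikealignment}, spikes arriving earlier than the average are delayed and later ones advanced, so repeated presentations pull the causal arrivals together.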
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/LearningMechanism.png}
\caption{Schematic overview of the delay learning mechanism. Purple vertical lines indicate presynaptic spike initiation times, green lines indicate presynaptic spike arrival times according to their delays $d_i$, and the blue line indicates the post-synaptic spike time. The learning mechanism works by pushing pre-synaptic spikes that arrive before the post-synaptic spike towards their average arrival time, indicated by the yellow line.}
\label{fig:spikealignment}
\end{figure}
The encoding and decoding approaches are illustrated in Fig.\ \ref{fig:encodingdecoding}.
In LC, inputs are encoded in the relative spike timing of the input neurons.
That is, input channels with a value of $0$ will fire first, followed by other channels in order of increasing input value.
Through experimentation, we determined that rescaling the dynamic range to relative latencies of $[0,40~\mathrm{ms}]$ produced good results.
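A sketch of this encoding (assuming input values are normalized per image; the normalization choice is ours, as the exact rescaling is not specified beyond the latency range):
\begin{verbatim}
import numpy as np

def latency_encode(values, t_max=40.0):
    # Channels with value 0 fire first; larger values fire later.
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / max(v.max() - v.min(), 1e-9)  # rescale to [0,1]
    return t_max * v                                  # latencies in ms
\end{verbatim}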
Our decoding approach of PGP clustering is based on the concept of polychronization, introduced by Izhikevich as the occurrence of ``reproducible time-locked but not synchronous firing patterns'' \citep{Izhikevich2006}.
A polychronous group is an ensemble of neurons that can produce multiple such time-locked PGPs depending on how they are activated.
Because inputs from the same class do not activate precisely the same input neurons, we also introduced a method of assigning distinct output PGPs to the same class.
In this PGP clustering method, we iteratively merge PGPs into clusters based on how closely the order of spikes matches the mean of all PGPs already in that cluster; the threshold for matching was set to 80\% and 90\% of the mean total spike count between the two PGPs being compared.
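A simplified version of this clustering can be sketched as follows (treating a PGP as a set of (neuron, spike-order) pairs and comparing each new PGP to a cluster representative rather than to the running cluster mean; both simplifications are ours):
\begin{verbatim}
def match_ratio(pgp_a, pgp_b):
    # Fraction of matching spikes, relative to the mean
    # total spike count of the two PGPs.
    matches = len(set(pgp_a) & set(pgp_b))
    return matches / (0.5 * (len(pgp_a) + len(pgp_b)))

def cluster_pgps(pgps, threshold=0.9):   # threshold: 0.8 or 0.9
    clusters = []
    for pgp in pgps:
        for cluster in clusters:
            if match_ratio(pgp, cluster[0]) >= threshold:
                cluster.append(pgp)
                break
        else:                            # no cluster matched
            clusters.append([pgp])
    return clusters
\end{verbatim}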
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{Figures/EncodingDecoding.png}
\caption{Illustration of the encoding and decoding methods. Left: Input values are encoded as spike latencies. Right: PGPs are defined as sets of sequential activity triggered by inputs, and they are clustered in a hierarchical manner by checking the ratio of matching spikes with other PGPs.}
\label{fig:encodingdecoding}
\end{figure}
\section{Proof-of-concept: Classification of handwritten digits}
\label{sec:classification}
To demonstrate the utility of our proposed delay learning method, we applied it to the classification of handwritten digits \citep{LeCun2005mnist}.
This dataset consists of images of $28 \times 28$ pixels; we scaled these images down to a size of $10 \times 10$ and assigned an input neuron to each pixel.
The details of our experimental setup are given in Table \ref{tab:architecture}.
We used feedforward networks with three layers, including the input layer, and fixed homogeneous connection weights.
\begin{table}
\caption{Network architecture and experimental parameters}
\label{tab:architecture}
\centering
\begin{tabular}{cccccccc}
\toprule
Layer & Number & Connection & & Digits & Train & Test & PGP match\\
size & of layers & probability & Weight & (unseen) & instances & instances & threshold \\
\midrule
$100$ & 3 & 0.1 & $6$ & $0,1,(2)$ & $20$ & $25$ & $80$\%, $90$\% \\
\bottomrule
\end{tabular}
\end{table}
In each iteration of the experiment, a feedforward network was generated with connectivity between layers according to the connection probability and connections assigned random initial delays in the range of $(0,40~\mathrm{ms})$ (integer values with uniform probability).
We then provided inputs from the selected digit classes to this untrained network with local plasticity switched off to give a performance baseline for random delays.
In the training phase, different inputs of the same digit classes were fed into the network with local delay plasticity switched on.
Following training, we again switched off local plasticity and provided the same set of inputs as given in the baseline test phase to assess the performance of the trained network.
One digit class was selected as an ``unseen'' class, i.e., a class presented during testing but not training, to evaluate the network's ability to generalize.
Fig.\ \ref{fig:accuracy} shows the accuracy before and after training,
calculated as the ratio of the count of the most common PGP class to the total number of presented inputs.
In nearly all cases where the network could separate the digit classes, the trained network performed better than the corresponding untrained network; however, some networks were unable to separate the classes (2.4\% and 45\% of networks for PGP thresholds $\theta=90$\% and $80$\%, respectively; see Fig.\ \ref{fig:accuracy}(a)).
Networks were also able to generalize their learning to a digit class unseen during training (Fig.\ \ref{fig:accuracy}(b)).
Here, the accuracy remained low for the more stringent $\theta=90$\% but reached up to 64\% for $\theta=80$\% (mean accuracy 32\% in 38 networks able to separate the unseen class).
Flexibility with the PGP threshold can thus allow networks to generalize their training to unseen classes while maintaining good performance on trained classes.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figures/Accuracy.png}
\caption{Results of classifying handwritten images of two digits before and after training using delay learning, for (a) 500 and (b) 100 networks initialized and tested with the parameters listed in Table \ref{tab:architecture}. Accuracy of classifying (a) two training digit classes (0,1) and (b) one unseen digit class (2). Results are plotted with jitter for the sake of visualization. Histograms show the number of networks with each given accuracy. Accuracy of 0 indicates non-separable classes.}
\label{fig:accuracy}
\end{figure}
Examples of the activity in the output layers before and after training are shown in Fig.\ \ref{fig:rasters}.
This demonstrates the way the delay learning pushes the network to produce recognizably similar patterns (PGPs) when presented with inputs from the same class, as evidenced by the greater overlap of activity patterns after training.
Prior to training, the network activity is less structured overall and sparse in the final layer (neurons 101--200), whereas after training, the final layer is more active, and consistent spiking patterns can be observed across many inputs from the same class.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/Rasters.png}
\caption{Raster plots of activity in layers 2 and 3 (neurons 1--100 and 101--200, respectively) before and after training. Digit classes 0 and 1 were used for training, and 2 is an unseen third class presented only during testing. Colors represent 25 different inputs from each class. Accuracies at PGP thresholds of 80\% and 90\% are reported in the lower right corner of each plot.}
\label{fig:rasters}
\end{figure}
\section{Discussion}
Neural networks with carefully designed spike time delays can support many time-locked patterns of activity, expanding the coding capacity when compared with traditional rate models \citep{Izhikevich2006}.
Delay learning enables such polychronization in populations of spiking neurons, and our results show that we can take advantage of this richness of activity to train networks that can generalize their training to new inputs.
Our results demonstrate that feed-forward SNNs trained with our proposed local delay plasticity rule produce similar activity patterns in their output layers that can be well classified with a strict PGP matching threshold of 90\%.
Furthermore, lowering the threshold to 80\% yielded some networks able to generalize their training to novel inputs unseen during the training period.
Our proof-of-concept shows the great potential for this local delay learning method; even with only a short training period of 20 digit presentations, PGPs emerge in the network activity that allow for improved classification accuracy.
However, there remains much room for improvement.
In the cases where the network performs poorly, this is largely due to non-separability of the input classes, frequently accompanied by a fairly high accuracy prior to training (see Fig.\ \ref{fig:accuracy}(a) with threshold 80\%).
In these cases, the networks are likely being over-trained and producing a homogeneous PGP that represents multiple groups.
An appropriate stopping criterion for training must be designed to avoid this pitfall.
In future work, networks can also be designed with heterogeneous weights and neuron types beyond the RS neuron.
In these cases, it may also be beneficial to apply optimization techniques, such as evolutionary algorithms, to design effective network layouts.
\section*{Acknowledgements}
This work was partially funded by the SOCRATES project (Research Council of Norway, IKTPLUSS grant agreement 270961) and the DeepCA project (Research Council of Norway, Young Research Talent grant agreement 286558).
\bibliographystyle{unsrtnat}
\section{Introduction}
The Higgs mechanism is introduced in the Standard Model (SM) to provide mass to fundamental particles,
through the spontaneous breaking of the electroweak symmetry. This mechanism implies the existence of a
scalar particle not yet observed experimentally, the Higgs boson, whose search has represented one of the
major goals of the high energy physics community over the last decade. The mass of the Higgs boson is a
free parameter of the theory, but its strong coupling to massive particles makes it possible to constrain its value:
a global fit, which incorporates the measurements of the top quark and W boson masses, as well as additional
precision electroweak data provided by LEP, SLD and Tevatron experiments~\cite{globalfit}, indicates that a
light Higgs is preferred, M$_H$=89$^{+35}_{-26}$ GeV/c$^2$, with a 95\% Confidence Level (C.L.) upper limit of
158 GeV/c$^2$. On the other hand, results from direct searches at LEP~\cite{LEP} set a 95\% lower limit
of 114.4 GeV/c$^2$.
In the last few years CDF and D0 have steadily increased their efforts to extend the sensitivity
of their searches: the most recent combined results~\cite{Comb_Winter_2011} exclude the existence of the Higgs boson
with a mass between 158 and 173 GeV/c$^2$. This interval is expected to extend further
as new data are included. However, a substantial chance to make a signal observation
or set an exclusion in the entire explored mass range (100$\div$200 GeV/c$^{2}$) will require several improvements
in the analysis methods, beyond the increase of statistics provided by the end of Tevatron operations: a projection
of the probability of seeing a 2$\sigma$ excess, for an integrated luminosity of 10 fb$^{-1}$ per experiment,
calculated assuming a 30$\div$40\% sensitivity increase in the analysis techniques with respect
to Summer 2010 results, is reported in figure~\ref{fig:projection}.
As of this paper, the observed upper limit at the reference
mass of 115 GeV/c$^2$ is 1.58 times the predicted SM cross section (figure~\ref{fig:limit}): this value refers to the CDF
and D0's combined measurements with up to 5.7 fb$^{-1}$ of data~\cite{Comb_Summer_2010}. The two collaborations
plan to release a new, more stringent combined limit in the low mass region by Summer 2011, when several search
channels will have almost doubled the analyzed integrated luminosity. A summary of the latest public results in the low mass
Higgs boson searches is given in this paper, focusing on the most significant improvements which are being implemented
and will allow the best sensitivity to be reached in the next Tevatron combination.
\begin{figure}
\begin{minipage}[b]{8cm}
\centering
\epsfig{figure=2sigma_exclusion_projection.eps,trim=0cm 0cm 0cm 1cm, height=1.8in}
\caption{Tevatron probability projections of seeing a 2$\sigma$ SM Higgs signal excess, for integrated luminosities of
5 fb$^{-1}$ and 10 fb$^{-1}$ per experiment, assuming improvements in the analysis techniques.\label{fig:projection}}
\end{minipage}
\hspace{2mm}
\begin{minipage}[b]{8cm}
\centering
\epsfig{figure=tevbelow150.eps,trim=0.5cm 1cm 0cm 0cm, height=2.4in}
\caption{The observed and expected 95\% C.L. upper limits on the Higgs production cross section, in units of the SM
theoretical cross section, obtained by combining all CDF and D0 analyses with up to 5.7 fb$^{-1}$ of data.\label{fig:limit}}
\end{minipage}
\end{figure}
\section{Experimental apparatus}
A detailed description of the Tevatron collider and CDF and D0 detectors can be found elsewhere~\cite{CDF,D0}.
The accelerator provides $p\bar{p}$ collisions at $\sqrt{s}$ = 1.96 TeV with
stable and well performing operating conditions: as of May 2011 about 60 pb$^{-1}$ are
produced per week, with a typical instantaneous luminosity of 3$\times$10$^{32}$ cm$^{-2}$s$^{-1}$;
since the beginning of Run II, over 10 fb$^{-1}$ of data have been delivered at the two
collision points, and more than 8 fb$^{-1}$ were recorded and made available for the
analyses by each experiment. Tevatron collisions are scheduled to stop in September 2011 and we expect that an additional
2 fb$^{-1}$ of data will be delivered by that date.
\section{Low Mass Higgs Boson at the Tevatron}
At the Tevatron center of mass energy, the dominant Higgs production mode is represented by gluon-gluon fusion,
gg$\rightarrow$H, followed by the associated production with a W or a Z boson, $q\bar{q}\rightarrow$(W/Z)H, and
the vector boson fusion, $qq\rightarrow q$H$q$. Depending on the mass,
the inclusive predicted cross section in the 100$\div$200 GeV/c$^{2}$ interval ranges from about 2 to 0.7 pb: the achievable
signal yield is therefore particularly small compared to the main SM background processes, whose rates are several orders of
magnitude larger.
The Higgs search is particularly challenging for M$_H \lesssim$ 135 GeV/c$^2$, where the decay mode into b quarks
becomes dominant ($\sim$73\% at M$_H$=115 GeV/c$^2$), making the investigation of the direct production difficult:
although the most abundant, the gg$\rightarrow$H$\rightarrow b\bar{b}$ process is experimentally
prohibitive because of the overwhelming non-resonant multijet background. It is then preferable to consider the
associated production, whose cross section is one order of magnitude smaller, but where the leptonic decays
of the W and Z bosons provide cleaner signatures, easy to trigger on and with a great reduction of the QCD background.
The most sensitive channels are represented by WH$\rightarrow l\nu b\bar{b}$, ZH$\rightarrow\nu\bar{\nu}b\bar{b}$ and
ZH$\rightarrow llb\bar{b}$, where the Higgs boson is detected through the reconstruction of the jets originating from
the b quark hadronization.
Many additional channels, although less powerful, are considered since they provide a sizeable contribution to the overall
sensitivity: these include the all-hadronic associated production, where the W and Z bosons are searched for
in their hadronic decays, and the low branching ratio (B.R.) decays into a pair of tau leptons or a pair of photons.
No single channel provides by itself the sensitivity to discover the Higgs boson. The best strategy is to perform
dedicated analyses exploiting the specific topological features of the different final states and then combine the
results into one single measurement. In order to maximize the sensitivity and optimize the analysis techniques,
each channel can be further split into subcategories according to the lepton types or the jet multiplicity in the event
selection.
\section{Analysis strategies}
\subsection{Acceptance optimization}
One of the main challenges in the Higgs searches is the need to increase the total
signal acceptance as much as possible: Tevatron experiments are pursuing this target by including new triggers in the online selection,
by relaxing the kinematic cuts and by implementing additional lepton categories or more sophisticated identification
algorithms in the event reconstruction. Nevertheless, the larger explored phase space requires an accurate understanding
of the selected data sample, whose composition has to be well described by the background models.
An example of the potential gain provided by the increased event acceptance is given by the ongoing update of CDF's
ZH$\rightarrow \mu\mu b \bar{b}$ search~\cite{ZHllbb}: the preliminary results, obtained by employing a novel muon identification
based on a neural network (NN) algorithm, as well as an extended kinematic selection, as described in figure~\ref{fig:CDF_ZHmumubb},
indicate a sensitivity improvement of the order of 30$\div$60\% beyond the luminosity scaling, in the 100$\div$150 GeV/c$^2$
mass range.
\begin{figure}[h!]
\begin{center}
\epsfig{figure=trigger_improvement_muonPt.eps,height=1.8in}%
\epsfig{figure=minDR_muon_jet.eps,height=1.9in}
\caption{Acceptance increase in the CDF's ZH$\rightarrow \mu \bar{\mu} b \bar{b}$ search. Left: implementation
of an inclusive trigger selection. Since no specific cuts on muon candidates are applied, a larger fraction of events
is recorded, compared to the standard high p$_T$ ($\ge$18 GeV/c) muon trigger. Right: removal of the spatial separation cut
($\Delta$R=$\sqrt{{\Delta\eta}^2+{\Delta\varphi}^2}>$0.4) between the muon and the closest jet.
\label{fig:CDF_ZHmumubb}}
\end{center}
\end{figure}
\subsection{b-quark identification}
When considering final states including b quarks, one fundamental ingredient is the capability of distinguishing
jets originating from b quarks from those coming from gluons, light quarks or c quarks.
Both CDF and D0 have developed specific ``b-tagging'' algorithms, which exploit the relatively long lifetime
of b-hadrons and the high position resolution of the silicon detectors. Different approaches are followed:
CDF's SecVtx~\cite{SecVtx} is based on the reconstruction of the b-hadron secondary vertex, obtained by fitting
the tracks displaced from the interaction point; CDF's JetProb~\cite{JetProb} uses the distribution of the track
impact parameters, with respect to the primary vertex, to build a probability that a jet contains a b-hadron.
More sophisticated algorithms adopted by both CDF and D0 are based on NNs~\cite{btag_NNCDF,btag_NND0} and Boosted
Decision Trees (BDT)~\cite{ZHnnbb} and combine the information provided by different taggers with the discriminating
power of additional variables, including those related to the leptonic decay of b-hadrons inside the jet.
These multivariate methods benefit from the correlations among the input variables, which help increase the
signal to background separation; in addition, they have the advantage of providing continuous outputs instead of a
simple binary one. This makes it easy to modify the definition of a b-tagged jet, by changing the cut on the
output distributions, and then alternatively maximize the sample purity or increase the signal acceptance of the
analysis selection.
Typical b-tagging efficiencies are 40$\div$70\%, with a corresponding light flavour jet mistag rate of 0.5$\div$3\%.
\subsection{Multivariate techniques}
Given the small signal to background ratio, the analyses employ multivariate techniques in order to exploit all the
event information, by collecting multiple distributions into a single and more powerful discriminating variable:
the preferred methods are based on NNs, BDTs and matrix elements (ME). The search sensitivity usually increases
by about 20\% with respect to simply using one single kinematic distribution as discriminator.
The reliability of these techniques depends on the quality of the background modeling of the input variables, which
needs to be carefully verified in dedicated control samples.
\section{Results}
In table~\ref{tab:exp} we summarize the expected and observed 95\% C.L. upper limits for the different CDF and D0 search
channels. More information can be found in the references and in the web pages of the two experiments~\cite{www_cdf,www_D0}.
The items marked with an asterisk refer to the analyses which were updated since Summer 2010 and for which a more detailed
description is given here.
\begin{table}[t]
\caption{Observed and expected upper limits at 95\% C.L. on the Higgs boson production cross section, at the reference mass
of 115 GeV/c$^2$, for the CDF and D0 experiments as of May 2011.
\label{tab:exp}}
\vspace{0.4cm}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
& \multicolumn{3}{|c|}{CDF} & \multicolumn{3}{|c|}{D0}\\
\cline{2-7}
Channel & $\mathcal{L}$ & Exp.limit & Obs.limit & $\mathcal{L}$ & Exp.limit & Obs.limit\\
& [fb$^{-1}$]& [$\sigma$/$\sigma$(SM)] &[$\sigma$/$\sigma$(SM)] &[fb$^{-1}$] &[$\sigma$/$\sigma$(SM)] & [$\sigma$/$\sigma$(SM)]\\
\hline
WH$\rightarrow l\nu b\bar{b}$~\cite{WH} &5.7 & 3.5& 3.6&5.3 & 4.8&4.1\\
ZH$\rightarrow llb\bar{b}$~\cite{ZHllbb} &5.7 & 5.5& 6.0& 6.2& 5.7& 8.0\\
ZH$\rightarrow \nu\nu b\bar{b}$~\cite{ZHnnbb} &5.7 & 4.0& 2.3& 6.2*& 4.0&3.4\\
VH/VBF$\rightarrow$ $b\bar{b}$+jets~\cite{CDF_allhadronic} &4.0 & 17.8& 9.1& -& -&-\\
H$\rightarrow$ $\tau\tau$+jets~\cite{tautau} &6.0* & 15.2& 14.7& 4.3*& 12.8&32.8\\
H$\rightarrow \gamma\gamma$~\cite{gg} & 4.2& 20.8& 24.6& 8.2*& 11.0&19.9\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{ZH$\rightarrow \nu\nu b\bar{b}$}
The signature of this search is based on two b-jets plus an imbalance of transverse energy ($\met$) due to the undetected
neutrinos coming from the invisible decay of the Z boson. The analysis is also sensitive to the WH$\rightarrow l\nu b\bar{b}$ channel,
when the charged lepton from the W escapes detection. CDF and D0 apply similar event selections and search strategies:
they both require large $\met$ and 2 or 3 jets, at least one of them b-tagged. NNs (CDF) and BDTs (D0) are implemented to
reduce the main background process, represented by QCD multijet production, with $\met$ coming from jet energy mismeasurements.
A second discriminant is then used to separate the signal from the remaining sources of background.
D0's latest search update has significantly increased the sensitivity thanks to the acceptance gain provided by loosening
the b-quark identification requirements, followed by a smarter use of the b-tagger output information. The latter has been
employed as an additional input variable for the final multivariate algorithm, thus improving the separation between signal and
background. This new approach results in a 14\% improvement in the expected limit compared to the previous version of the analysis.
The final distribution for events containing two b-tagged jets, and the corresponding observed and expected upper limits
on the Higgs boson production cross section, as a function of the mass, are shown in figure~\ref{fig:D0_ZHnunubb}.
\begin{figure}[h!]
\begin{center}
\epsfig{figure=H98F07b.eps,height=1.9in}%
\epsfig{figure=H98F09b.eps,height=1.9in}
\caption{D0's ZH$\rightarrow \nu\bar{\nu} b\bar{b}$ search. Left: final discriminant distribution for the
double b-tag channel, in the Higgs mass hypothesis of 115 GeV/c$^2$. Right: observed and expected upper limits on
the Higgs boson production cross sections, as a function of the Higgs mass.\label{fig:D0_ZHnunubb}}
\end{center}
\end{figure}
\subsection{H$\rightarrow \tau\tau$+jets}
The B.R. of H$\rightarrow \tau\tau$ is one order of magnitude smaller than that of H$\rightarrow b\bar{b}$, but the contribution
of this search is significant, since several production modes can be simultaneously investigated. In particular, the gluon
fusion becomes accessible thanks to the selection of the leptonic decay of one of the two taus, which considerably reduces
the multijet background. The requirement of jets in the final state further increases the signal to background ratio and
optimizes the search for the vector boson fusion process and the associated production, where the W and Z are allowed to
decay hadronically. However, the significance of this channel is limited by the similarity of the H$\rightarrow \tau\tau$
signal to the irreducible Z$\rightarrow \tau\tau$ background, both characterized by a resonant tau pair in the final state.
One additional challenge is the difficult discrimination of real hadronically decaying taus from quark/gluon jets:
CDF and D0 employ identification algorithms based on BDTs and NNs, respectively.
Both experiments have recently presented updates of their searches, where the most relevant improvements are related
to the refined multivariate techniques adopted to build the final discriminant. The best separation between signal and
background is achieved by following a two stage procedure: first several independent BDTs are trained to distinguish the Higgs
from the principal sources of background; the different outputs are then combined into one single distribution, chosen to
maximize the sensitivity of the search. Figure~\ref{fig:CDF_tautau} shows the CDF final discriminant for events containing 2
or more jets in the final state.
\begin{figure}
\begin{minipage}[b]{8cm}
\centering
\epsfig{figure=Ztautau_top_0_Zeemumu_01_105_120_2jets_fit.eps,height=2in}%
\caption{\label{fig:CDF_tautau}CDF's H$\rightarrow \tau\tau$+jets search: final discriminant distribution in the Higgs mass
hypothesis of 120 GeV/c$^2$.}
\end{minipage}
\hspace{2mm}
\begin{minipage}[b]{8cm}
\centering
\epsfig{figure=H99F03c.eps,height=2in}
\caption{\label{fig:D0_Hg}D0's H$\rightarrow \gamma\gamma$ search: final discriminant distribution in the Higgs mass hypothesis
of 120 GeV/c$^2$.}
\end{minipage}
\end{figure}
\subsection{H$\rightarrow \gamma\gamma$+X}
The diphoton final state suffers from a very low B.R., but it is interesting because the photon
identification efficiency and the energy resolution are much better than those of b-jets, and the narrow M$_{\gamma\gamma}$ mass peak
can be exploited to reduce backgrounds. The selection is based on the requirement of two high E$_T$ central photons.
The dominant background is the direct SM diphoton production, followed by events with misidentified electrons and jets.
CDF sets a limit by looking for a resonant peak in the M$_{\gamma\gamma}$ distribution; D0 has recently implemented a BDT which
collects five kinematic variables, improving the sensitivity by about 20\% beyond the luminosity increase
from the previous stage of the analysis.
\section{Conclusions}
We presented the latest results on the Tevatron searches for a low mass SM Higgs boson.
The update of the CDF and D0 combination, currently in progress, will benefit from the ongoing
efforts described in this paper to increase the performance beyond luminosity scaling: the projections shown in
figure~\ref{fig:projection} suggest that, with the full data expected by the end of Run II, accompanied by suitable
improvements in the analysis techniques, the Tevatron could reach the sensitivity to exclude the presence of the SM Higgs
in the entire explored mass range below 150 GeV/c$^2$, with a sizeable chance of establishing 3$\sigma$ evidence of its existence.
\section*{References}
\section{Introduction}
Protein sequence families, particularly antibodies, have both well-conserved and variable regions.
In antibodies, the heavy and light chain sequences consist of highly conserved regions known as the framework as well as an array of distinct hypervariable loops, known as complementarity-determining regions (CDRs) \citep{reczko1995prediction}.
Despite the intrinsic variability of CDRs, conditional variation is often conferred by the gene locus admitting the protein \citep{kelows_2020}. Much of an antibody's antigen-binding affinity is owed to the CDRs, while the framework remains fixed or requires minimal change \citep{kuroda2012computer}. For \textit{in silico} modeling, integrating these established aspects of structure and binding can drive the development of better \textit{in situ} antibody therapeutic design \citep{chiu2019antibody}. While work in protein language modeling suggests that models can learn these evolutionary conservation rules~\citep{Rivese2016239118,elnaggar2021prottrans,madani2020progen}, it remains an open challenge how to explicitly incorporate prior insight, such as sequence-level annotations \citep{anarci2015}, at test-time generation to restrict sampling in certain segments.
The deep manifold sampler was recently proposed as an effective method to sample novel sequences by iterative, optionally gradient-guided, steps of sequence denoising \citep{gligorijevic2021function}.
Empirically, gradient-based guided sampling was shown to selectively encourage changes in functional sites, implicitly leaving non-functional regions unperturbed.
In this work, we propose an alternative to the gradient-based guided design procedure in which predefined regions of a sequence are explicitly preserved, leaving sampling to take place in \textit{a priori} known notable sequence regions.
We conduct an experiment on antibody sequences to demonstrate the deep manifold sampler's ability to focus sampling on a subset of sequence positions. We do so by deliberately corrupting select regions of antibody sequences, that correspond to CDRs, and evaluating the length distribution and composition of sampled CDRs.
\section{Background: the Deep Manifold Sampler} \label{sec:background}
The deep manifold sampler \citep{gligorijevic2021function} is a denoising autoencoder (DAE) specialized for handling variable-length sequences.
As with a typical DAE \citep{vincent2008extracting}, the deep manifold sampler consists of three modules: a corruption process $C(\tilde{x}|x)$, an encoder $F$, and a decoder $G$. Unlike the usual DAE, however, the deep manifold sampler has an extra module that determines the change in the length, which we call the ``length conversion'' \citep{shu2020latent}.
The deep manifold sampler assumes as input a sequence of discrete tokens, $x=(x_1, x_2, \ldots, x_{|x|})$, where each token $x_t$ is an item from a finite vocabulary $V$ of unique words or subwords. In the case of protein sequence modeling, $V$ consists of all unique amino acids. The sequence $x$ is corrupted with the corruption process $C$, resulting in a {\it noisy} input sequence $\tilde{x} \sim C(\tilde{x}|x)$. This corruption process can be arbitrary as long as it is largely local and unstructured. It may even alter the length of the sequence, $|x| \neq |\tilde{x}|$.
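As an illustration, the following Python sketch implements one minimal corruption process of this kind, with independent per-token substitution, insertion, and deletion; the rates and mechanics are hypothetical choices for exposition rather than the exact process used in \citet{gligorijevic2021function}.
\begin{verbatim}
import random

def corrupt(x, vocab, p_sub=0.05, p_ins=0.02, p_del=0.02):
    # x: list of tokens; vocab: list of all amino-acid tokens.
    # Each token may be deleted, substituted, or followed by a
    # randomly inserted token, so the length of the output may
    # differ from that of the input.
    out = []
    for t in x:
        r = random.random()
        if r < p_del:
            continue
        out.append(random.choice(vocab) if r < p_del + p_sub else t)
        if random.random() < p_ins:
            out.append(random.choice(vocab))
    return out
\end{verbatim}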
The encoder $F$ turns the corrupted sequence $\tilde{x}$ into a set of hidden vectors, $h=(h_1, h_2, \ldots, h_{|\tilde{x}|})$, where $h_t \in \mathbb{R}^d$. The encoder can be implemented using any of the widely-used deep architectures, such as transformers \citep{vaswani2017attention}, convolutional networks \citep{gehring2017convolutional} and recurrent networks \citep{sutskever2014sequence,bahdanau2014neural}. In this work, we follow the original deep manifold sampler's encoder, which was implemented as a transformer.
The hidden vectors are pooled to form a single-vector representation:
\begin{align*}
\bar{h} = \frac{1}{|\tilde{x}|} \sum_{t=1}^{|\tilde{x}|} h_t.
\end{align*}
This pooled representation is used by the length conversion to predict the change in the length. At training time, this length change predictor is trained to output $\Delta l^* = |\tilde{x}| - |x|$. When we sample sequences from the deep manifold sampler after training, we use the predicted change $\Delta l$ to adjust the size of the hidden vector set. The adjusted hidden vector set consists of $|\tilde{x}|+\Delta l$ hidden vectors, $z=(z_1, \ldots, z_{|\tilde{x}|+\Delta l})$, where each vector is a weighted sum of the previous hidden vectors $h_1, h_2, \cdots, h_{|\tilde x|}$. To wit, we define
\begin{align}
\label{eq:length-conversion}
z_t = \sum_{t'=1}^{|\tilde{x}|} w_{t,t'} h_{t'}
\end{align}
with the position-based softmax weights $w_{t, t'}$ preferring $h_{t'}$ closest to the length-scaled position $|\tilde x|/(|\tilde x| + \Delta l)t$, as follows:
\begin{align}
w_{t, t'} &= \frac{\exp(q_{t, t'})}{\sum_{t''=1}^{|\tilde x|} \exp(q_{t, t''})}\\ \label{eq:sigma}
q_{t, t'} &\propto
\frac{-1}{2\sigma^2}
\left( t' - \frac{|\tilde{x}|}{|\tilde{x}| + \Delta l}t \right)^2.
\end{align}
In Eq.~\eqref{eq:sigma}, $\sigma$ is a learned smoothing parameter.
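To make Eqs.~\eqref{eq:length-conversion}--\eqref{eq:sigma} concrete, the following NumPy sketch computes the length-converted hidden vectors for a given $\Delta l$; it is a minimal reference implementation that treats $\sigma$ as a fixed scalar rather than a learned parameter.
\begin{verbatim}
import numpy as np

def length_conversion(h, delta_l, sigma=1.0):
    # h: array of shape (L, d) holding hidden vectors h_1..h_L.
    # Returns z of shape (L + delta_l, d).
    L, _ = h.shape
    L_new = L + delta_l
    t = np.arange(1, L_new + 1)[:, None]   # output positions t
    tp = np.arange(1, L + 1)[None, :]      # input positions t'
    q = -(tp - (L / L_new) * t) ** 2 / (2.0 * sigma ** 2)
    w = np.exp(q - q.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)      # softmax over t'
    return w @ h                           # z_t = sum_t' w_{t,t'} h_t'
\end{verbatim}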
The decoder $G$ then takes this transformed hidden vector sequence $z$ and outputs a corresponding sequence of logit vectors, $\tilde{y}=(\tilde{y}_1, \ldots, \tilde{y}_{|\tilde{x}|+\Delta l})$, where $\tilde{y}_t \in \mathbb{R}^{|V|}$. These logits are turned into probability distributions over the vocabulary $V$ in many different ways. The original deep manifold sampler implements a non-autoregressive approach \citep{gu2017non,lee2018deterministic}, where each logit is independently turned into a distribution:
\begin{equation}
\label{eq:non-ar}
p(y_t = v | \tilde{x}, \Delta l) = \frac{\exp \left( \tilde{y}_t^v + b^v \right)}{\sum_{v' \in V} \exp \left( \tilde{y}_t^{v'} + b^{v'} \right)},
\end{equation}
where $b^v$ is a bias for token $v$.
It is, however, also possible to use these logits together with a more powerful output module, such as a conditional random field (CRF; \citealt{lafferty2001}), as was recently done in \citep{yi2021nettime}, and autoregressive language models~\citep{mikolov2010recurrent}. For experiments in this paper, we use a variant of the deep manifold sampler with a CRF at the end of the decoder.
At training time, we minimize the negative log-probability of the original sequence $x$ given the corrupted version $\tilde{x}$ and the known $\Delta l^*$ to train the encoder and decoder, while minimizing the negative log-probability of $\Delta l^*$ to train the length change predictor. We parameterize the latter as a classifier. Once training is done, we can draw a series of samples from the deep manifold sampler by repeating the process of corruption, length conversion, and reconstruction.
While the original deep manifold sampler has an additional function predictor that can be used to guide the sampling procedure, we omit that here, as this is optional and can be replaced with another computational oracle without altering the sampling procedure that is the focus of this paper.
\section{Multi-Segment Preserving Sampling}
The deep manifold sampler was originally proposed in the context of protein design in \citet{gligorijevic2021function}. Within this setting, we often consider biological, chemical, and physical knowledge in order to impose constraints that narrow down a large, combinatorial search space \citep{street1999computational,woolfson2021brief}. The deep manifold sampler, on the other hand, stays true to the key principle of deep learning, that is, end-to-end learning, which makes it challenging to explicitly incorporate this knowledge into both learning and sampling. In this paper, we take one step towards enabling this in the sampling procedure of the deep manifold sampler. We assume the availability of knowledge of which segments of an original sequence, from which sampling starts, must be preserved in order to maintain a set of desirable properties. For example, in the case of antibody engineering, it may be desirable to only alter CDR loops while leaving all framework residues intact \citep{kuroda2012computer}.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{fig1.png}
\caption{Multi-segment preserving sampling. (A) Non-preserved segments $\bar{s}$ are corrupted using corruption process $C$, for which a given token (yellow) may be randomly perturbed (blue). This is encoded into the hidden vector set $h$. Length change predictor $p_{\theta}(\Delta l|\bar{h})$ outputs $\Delta l$, which is distributed across $\bar{s}$ (Eq. \ref{eq:nonpreserved}). (B) Segment-preserving sampling follows similar operations on preserved segment $s$ (red) with notable differences. Corruption $C$ yields an unaltered sequence $\tilde{x}$ and we carry over hidden vector $h_t$ of a token within preserved segment $\tilde{s}$ with strength $\beta$ (Eq. \ref{eq:preserved}).}
\label{fig:overview}
\end{figure}
Let $x = (x_1, x_2, \ldots, x_{|x|})$ be the initial sequence from which we run the deep manifold sampler to draw a series of samples over the sequence manifold. Instead of unconstrained sampling, we consider a scenario in which we are provided with a set of non-overlapping segments of the sequence that must be preserved in-order by their starting and ending indices (inclusive):
\begin{align*}
s = ((i_1, j_1), \ldots, (i_K, j_K))
\end{align*}
subject to $i_1>0$, $i_k \leq j_k$ for all $k$, $j_k < i_{k+1}$ for all $k$, and $j_K < |x|-1$. We refer to this set as a {\it preserved-segment set}. Likewise, we can imagine the complement segment set $\bar{s}$ that contains all the segments that are between the to-be-preserved segments in $s$:
\begin{align*}
\bar{s} = ((0, i_1-1), (j_1+1, i_2-1), \ldots, (j_K+1, |x|-1)).
\end{align*}
In order to preserve these segments while altering the remaining parts of the sequence, including their respective lengths, we make a series of modifications to the sampling procedure of the deep manifold sampler. First, we alter the corruption process $C$ such that it does not corrupt the preserved segments. For instance, if the corruption process randomly adds or removes tokens, this is only done to the segments in the complement set $\bar{s}$ but not to those in $s$. The corrupted sequence $\tilde x$ contains an indexing change due to insertions and deletions, so the description of the segment set $s$ must be updated to reflect this --- we denote the preserved segment set of $\tilde x$ by $\tilde{s}$.
The encoder still encodes $\tilde{x}$ into the hidden vector set $h$, as described in Section \ref{sec:background}. While the length change prediction steps also stay the same, the returned length change $\Delta l$ needs to be distributed across the non-preserved segments in order to avoid altering the length of any preserved segment in $\tilde{s}$. We do so proportionally to the original lengths of the non-preserved segments. Concretely, we add to the length of each non-preserved segment $(\tilde{j}_k+1, \tilde{i}_{k+1}-1)$:
\begin{equation}
\label{eq:nonpreserved}
\left\lceil \frac{(\tilde{i}_{k+1} - \tilde{j}_k-1)}
{\sum_{k'=0}^{K} (\tilde{i}_{k'+1} - \tilde{j}_{k'}-1)}
\Delta l \right\rceil,
\end{equation}
where $\tilde{j}_0 = -1$ and $\tilde{i}_{K+1} = |\tilde{x}|$, so that $k$ ranges over the $K+1$ non-preserved segments and each segment $(\tilde{j}_k+1, \tilde{i}_{k+1}-1)$ has length $\tilde{i}_{k+1} - \tilde{j}_k - 1$.
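A small helper (hypothetical, for illustration) that carries out this proportional distribution of $\Delta l$ might look as follows; note that the ceiling can make the per-segment changes sum to slightly more than $\Delta l$, a rounding artifact already present in the rule above.
\begin{verbatim}
import math

def distribute_length_change(nonpreserved, delta_l):
    # nonpreserved: list of (start, end) inclusive index pairs
    # for the non-preserved segments of the corrupted sequence.
    # Returns per-segment length changes proportional to the
    # original segment lengths.
    lengths = [end - start + 1 for (start, end) in nonpreserved]
    total = sum(lengths)
    return [math.ceil(l * delta_l / total) for l in lengths]
\end{verbatim}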
After distributing the length difference among the non-preserved segments, we can now construct the index map $o$ that tells us which segment in the new sequence corresponds to each of the preserved segments in $\tilde{x}$. In other words, $y_{o(\tilde{i}_k):o(\tilde{j}_k)} = \tilde{x}_{\tilde{i}_k:\tilde{j}_k}$. Let us use $o(\tilde{s})$ to denote the preserved-segment set derived from $\tilde{s}$ and the length distribution above.
The actual length conversion happens just like before, as in Eq.~\eqref{eq:length-conversion}. We however add an extra step after the length conversion in order to give the decoder a hint about preserved segments and their contents. This is done by carrying over the original hidden vector $h_t$ of a token within a preserved segment:
\begin{equation}
\label{eq:preserved}
z_t \leftarrow
\begin{cases}
(1-\beta) z_t + \beta h_{o^{-1}(t)}, &\text{if } t \in o(\tilde{s}) \\
z_t, &\text{if } t \notin o(\tilde{s}) \\
\end{cases}
\end{equation}
where $o^{-1}$ is the inverse index map, and $\beta \in [0, 1]$ is the strength of the carry-over.
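As a sketch, assuming the index map is materialized as a Python dictionary from new positions $t \in o(\tilde{s})$ to original positions $o^{-1}(t)$, the carry-over step can be written as:
\begin{verbatim}
import numpy as np

def carry_over(z, h, index_map, beta=0.5):
    # z: (L_new, d) length-converted hidden vectors.
    # h: (L_old, d) encoder hidden vectors of the corrupted input.
    # index_map: {t: o^{-1}(t)} for positions in preserved segments.
    z = z.copy()
    for t, t_old in index_map.items():
        z[t] = (1.0 - beta) * z[t] + beta * h[t_old]
    return z
\end{verbatim}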
The decoder turns this length-converted and segment-preserving hidden sequence $z$ into a sequence of logit vectors $\tilde{y}$, just like the original sampling procedure. We then modify the logit vector corresponding to a token with a preserved segment to force the sampled outcome to preserve the token identity:
\begin{equation}
\label{eq:logits}
\tilde{y}_t^v
\leftarrow
\begin{cases}
\infty, &\text{ if } t \in o(\tilde{s}) \text{ and } v = \tilde{x}_{o^{-1}(t)} \\
-\infty, &\text{ if } t \in o(\tilde{s}) \text{ and } v \neq \tilde{x}_{o^{-1}(t)} \\
\tilde{y}_t^v, &\text{ if } t \notin o(\tilde{s})
\end{cases}
\end{equation}
In the case of non-autoregressive modeling, this results in the Categorical distribution for a preserved token assigning the entire probability mass ($=1$) to the original token identity. If a CRF is used at the end, this prevents any sequence that violates preservation from being decoded with non-zero probability.
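A corresponding sketch of the logit modification, using large finite constants in place of $\pm\infty$ for numerical safety, is:
\begin{verbatim}
import numpy as np

def mask_preserved_logits(logits, index_map, x_tilde_ids):
    # logits: (L_new, V) decoder logit vectors.
    # index_map: {t: o^{-1}(t)} for positions in preserved segments.
    # x_tilde_ids: token ids of the corrupted input sequence.
    logits = logits.copy()
    for t, t_old in index_map.items():
        logits[t, :] = -1e9                  # forbid all tokens ...
        logits[t, x_tilde_ids[t_old]] = 1e9  # ... except the original
    return logits
\end{verbatim}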
We can repeat this sampling step with the newly sampled sequence and the corresponding preserved-segment set. This allows us to iteratively draw a series of samples while preserving the segments from the original sequence, designated by the preserved-segment set $s$. Because this iterative sampling procedure preserves multiple segments and their contents, we refer to this procedure as {\it multi-segment preserving sampling} (Figure \ref{fig:overview}).
\section{Related Work}
There are two alternative sequence-modeling paradigms that are closely related to the deep manifold sampler. We briefly describe each of them here in the context of whether and how multi-segment preservation can be implemented.
\subsection{Masked Language Models}
A masked language model is a special case of a DAE, similar to the deep manifold sampler \citep{devlin2018bert,liu2019roberta}. A corruption process $C$ in a masked language model is designed to apply one of three types of corruption: (1) replace a token with a special {\ensuremath{\left<\text{mask}\right>}} token, (2) replace a token with another, randomly-selected token, and (3) no alteration, to a random subset of the tokens within each sequence. All of these types do {\it not} alter the length of the sequence. The masked language model then reconstructs only the corrupted subset, rather than the full sequence as in the deep manifold sampler.
Masked language modeling was originally motivated as a way to pretrain a large-scale neural network rather than as a generative model from which to draw sequences. While \citet{wang2019bert} and \citet{goyal2021exposing} demonstrated that masked language models can yield well-formed sequences, they are neither as popular nor as applicable as other models --- such as autoregressive language models or the deep manifold sampler --- for sequence generation because they do not have a mechanism to automatically model the length distribution.
With the caveat that the length of a sequence cannot be altered at sampling time, a masked language model can be used for multi-segment preserving sampling. This can still be useful as a ``plug-and-play'' proposal distribution for a number of downstream tasks including rational design and directed evolution of proteins \citep{woolfson2021brief,arnold1998design,yang2019machine,meier2021language}. However, this approach is limited compared to the proposed strategy of multi-segment preserving sampling with the deep manifold sampler, as our proposal is able to both dynamically adapt the length of a sequence and preserve segments.
\subsection{Denoising Sequence-to-Sequence Models}
A denoising sequence-to-sequence (Seq2Seq) model, where the decoder is autoregressive, has been studied previously in the context of natural language processing \citep{hill2016learning,lewis2019bart}. Unlike masked language models, and similar to the deep manifold sampler, the denoising Seq2Seq model can adaptively change the length of a sequence. However, unlike the deep manifold sampler, there is less control over the direct manipulation of the intermediate hidden vectors, as some of the dependencies between the tokens in the output sequence are captured directly by the decoder without relying on the intermediate representation between the encoder and decoder. It is however an interesting future direction to compare the denoising Seq2Seq model against the deep manifold sampler.
With respect to multi-segment preservation, denoising Seq2Seq models do not readily admit such a sampling strategy. This is due to the inherent intractability in decoding from an autoregressive model with unbounded context. This intractability is often addressed by approximate decoding, such as beam search, which is known to have suboptimal behaviors despite its successful and wide use \citep{welleck2020consistency,welleck2019neural}.
Several studies have proposed to extend beam search to incorporate such constraints \citep{hokamp2017lexically,post2018fast}. Unfortunately, most of these approaches incur great computational cost, as their computational complexity often grows linearly with the number of preserved segments --- or the beam size must grow accordingly --- because the underlying algorithm decodes in a greedy left-to-right fashion with a limited hypothesis set (beam). While it is possible to modify a denoising Seq2Seq model to admit multi-segment preserving sampling by letting the decoder only reconstruct non-preserved segments, this is out of scope for this paper.
\section{Experiments}
The proposed algorithm for multi-segment preserving sampling is designed to completely preserve designated segments. Here, we demonstrate a potential application in antibody design enabled by our algorithm coupled with the deep manifold sampler. Antibodies with a particular V-gene have fixed lengths in the framework as well as in the CDR1 and CDR2 regions. As a result, antibodies display most of their diversity in length and amino acid composition in CDR3 \citep{glanville2009precise}. To demonstrate the effectiveness of our approach and the restricted variation of the preserved segments, we select all unique human antibody sequences with the \textit{IGHV1-18} gene from the Observed Antibody Space (OAS) database \citep{olsen2022observed} for multi-segment preserving sampling. Using a deep manifold sampler, we sample exclusively from the CDR3, while preserving other regions, and show that the length and log-probability (GPT-2) distributions of the generated sequences qualitatively coincide with those of the test data. Table \ref{tab:tab1} illustrates examples of sampled CDR3 regions under different settings of carry-over strength $\beta$.
\begin{table}
\centering
\begin{tabular}{c|l|c}
\textbf{$\beta$} & \textbf{Aligned CDR3 sequence} & \textbf{Edit distance}\\
\midrule
N/A (original) & \texttt{ARDPEWDPF-QANY-YYYGMDV}
& 0 \\
0.0 & \texttt{ARDPEWDPF-QAN--YYYGMDV} & 3 \\
0.1 & \texttt{ARDPEWDPFFQANYNYYYGMVD} & 3 \\
0.5 & \texttt{KRDPEWDRF-QAPY-YTVGMDV} & 5 \\
0.9 & \texttt{ARGPECDPH-QAV-DIYYGMDV} & 6 \\
\vspace{.2cm}
\end{tabular}
\caption{Example outputs of multi-segment preserving sampling when restricting variation to the CDR3 region under different settings of $\beta$. Display is restricted to the sampled region; the rest is preserved by construction.}
\label{tab:tab1}
\end{table}
\subsection{Training details}
We obtained 5,971,552 unique human antibody heavy chain sequences with the \textit{IGHV1-18} gene from the OAS database, with 2,000 and 10,000 sequences set aside for validation and test sets respectively and the remaining used for training.\footnote{
Only sequences with ``Redundancy $>$ 1'' were retained.
}
We trained a deep manifold sampler on the training set with a constant learning rate of $10^{-4}$ for 60K mini-batch steps with a batch size of 128. The model consisted of a two-layer transformer encoder and decoder, each with 8 heads, a total embedding dimension of 256, and a feed-forward layer dimension of 1024. The last layer consists of a CRF for final sequence generation. The rest of the training procedure was the same as described in \citet{gligorijevic2021function}.
In addition, we also trained an autoregressive GPT-2 model using the HuggingFace Transformers library v4.16.2 \citep{wolf-etal-2020-transformers} on the same training set in order to demonstrate that the sampler-generated sequences capture the amino-acid token distribution observed in the training set. The model consisted of 6 attention layers with 8 heads and a total embedding dimension of 512 and was trained with a constant learning rate of $4 \times 10^{-4}$ for 25K mini-batch steps with a batch size of 1024. The other parameters were set to the default values provided by the package.
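For reference, the GPT-2 log-probability of a sequence, used below as a plausibility score, can be computed as in the following sketch; the tokenizer mapping amino acids to ids is assumed to be set up separately.
\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel

def gpt2_log_probability(model, token_ids):
    # token_ids: 1-D LongTensor of amino-acid token ids.
    # HuggingFace returns the mean cross-entropy over the
    # (len - 1) predicted positions when labels are supplied.
    input_ids = token_ids.unsqueeze(0)
    with torch.no_grad():
        loss = model(input_ids, labels=input_ids).loss
    return -loss.item() * (input_ids.size(1) - 1)
\end{verbatim}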
\subsection{Sampling details and results}
For each sequence in the test set, we applied multi-segment preserving sampling for one iteration, preserving all non-CDR3 regions with four different $\beta$ values of 0, 0.1, 0.5, and 0.9.\footnote{
The region annotation was obtained from the OAS data unit files.
}
Figure \ref{fig:cd3} and Figure \ref{fig:gpt} show the CDR3 length and log-probability (GPT-2) distributions of the generated sequences and of the test data across all selected $\beta$ values. The CDR3 length distribution of the generated samples matches the natural sequence length distribution for each value of $\beta$. The GPT-2 log-probability distribution of the samples has a lower overall mean compared to that of the test distribution but is still within the same range, indicating that the samples are plausible. Both distributions vary only slightly with different values of $\beta$. These two results show the effectiveness of the sampling strategy for generating diverse antibody sequences, restricted to user-defined regions.
In Figure \ref{fig:edits}, we illustrate the distribution of the number of edits in the generated sequences relative to the input seed sequences, including substitutions, insertions, and deletions. The distributional mean increases slightly with higher values of $\beta$. For future work, we plan on a more systematic understanding of the effects of carry-over strength $\beta$ on sample quality and diversity.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{fig2.png}
\caption{The normalized distribution of the CDR3 lengths of the deep manifold sampler-generated sequences (``Samples'') and the test set sequences (``Training (OAS)'') with four different $\beta$ parameters. From top left, clockwise: samples were generated with $\beta = 0, 0.1, 0.9$, and $0.5$.}
\label{fig:cd3}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{fig3.png}
\caption{The normalized distribution of the GPT-2 scores of the deep manifold sampler-generated sequences (``Samples'') and the test set sequences (``Training (OAS)'') with four different $\beta$ parameters. From top left, clockwise: samples were generated with $\beta = 0, 0.1, 0.9$, and $0.5$.}
\label{fig:gpt}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{bar_total_edits.png}
\caption{The distribution of edit distances between generated samples and their seed sequences with varying settings of $\beta$ (0, 0.1, 0.5, and 0.9).}
\label{fig:edits}
\end{figure}
\section{Conclusion}
We have proposed a sampling procedure for the deep manifold sampler that explicitly preserves designated segments of the input sequence, allowing variation to occur only in non-preserved regions. We find that this approach, multi-segment preserving sampling, is applicable to a number of design problems in the life sciences where we often want to use prior knowledge made available in these domains. With biological sequence design for example, we want to sample new, diverse designs that avoid perturbing well-conserved regions of the input. In this way, we demonstrate the utility of multi-segment preserving sampling by restricting sampling to the CDR3 regions of a collection of antibody heavy chains with the \textit{IGHV1-18} gene and validating the resulting CDR3 designs against a separate GPT-2 model. As shown in Figure \ref{fig:gpt}, the sampled CDR3 regions admit high log-probability scores by the GPT-2 model, providing evidence that the samples are plausible.
Additionally, the CDR3 length distribution of the samples traces the observed length distribution in the training set, suggesting that the model adequately captures the variability in non-preserved segment lengths, despite the lack of explicit provision during training.
In future work, we will extend our exploration on the effect of the carry-over strength $\beta$ in terms of sample quality as well as its usage in conjunction with the function predictor for guided sampling proposed in \citet{gligorijevic2021function}.
\newpage
\vskip 0.2in
\bibliographystyle{plainnat}
\section{Introduction and Motivation}
The interference channel is a mathematical model relevant to many
communication systems where multiple uncoordinated links share a common
communication medium, such as wireless ad-hoc networks or Digital
Subscriber Lines (DSL). In this paper, we focus on the Gaussian parallel
interference channel.
A pragmatic approach that leads to an achievable region or inner bound
of the capacity region is to restrict the system to operate as a set
of independent units, i.e., not allowing multiuser encoding/decoding
or the use of interference cancelation techniques. This achievable
region is very relevant in practical systems with limitations on the
decoder complexity and simplicity of the system. With this assumption,
multiuser interference is treated as noise and the transmission strategy
for each user is simply his power allocation. The system design reduces
then to finding the optimum power distribution for each user over
the parallel channels, according to a specified performance metric.
Within this context, existing works \cite{Cendrillon-Yu}-\cite{Scutari-Palomar-Barbarossa-IT_Journal}
considered the maximization of the information rates of all the links,
subject to transmit power and (possibly) mask constraints on each
link. In \cite{Cendrillon-Yu}-\cite{Yu-Lui}, a centralized approach
based on duality theory \cite{Bazaraa-Sherali-Shetty,Boyd} was proposed
to compute, under technical conditions, the largest achievable rate
region of the system (i.e., the Pareto-optimal set of the achievable
rates).
In \cite{Hayashi-Luo}, sufficient conditions for the optimal spectrum
sharing strategy maximizing the sum-rate to be frequency division
multiple access (FDMA) were derived. However, the algorithms proposed in \cite{Cendrillon-Yu}-\cite{Hayashi-Luo} are computationally expensive, cannot be implemented in a distributed way, require full knowledge of the system parameters, and are not guaranteed to converge to the globally optimal solution.
Therefore, in \cite{Yu}-\cite{Scutari-Palomar-Barbarossa-IT_Journal}, using a game-theory framework,
the authors focused on distributed algorithms with no centralized
control. In particular, the rate maximization problem
was formulated as a strategic non-cooperative game, where every link
is a player that competes against the others by choosing the transmission
strategy that maximizes its own information rate \cite{Yu}.
Based on the celebrated
notion of Nash Equilibrium (NE) in game theory \cite{Nash-paper},
an equilibrium for the whole system is reached when every player's
reaction is ``unilaterally optimal'', i.e., when, given the rival
players' current strategies, any change in a player's own strategy
would result in a rate loss. In \cite{ChungISIT03}-\cite{Scutari-Palomar-Barbarossa-IT_Journal},
alternative sufficient conditions were derived that guarantee the
uniqueness of the NE of the rate maximization game and the convergence
of alternative distributed waterfilling based algorithms, either synchronous
$-$ sequential \cite{ChungISIT03}-\cite{LPang06} and simultaneous
\cite{Scutari-Barbarossa-GT_PII} $-$ or asynchronous \cite{Scutari-Palomar-Barbarossa-IT_Journal}.
The game theoretical formulation proposed in the cited papers is
a useful approach to devise totally distributed algorithms. However,
due to possible asymmetries of the system and the inherent selfish
nature of the optimization, the Nash equilibria of the rate maximization
game in \cite{Yu}-\cite{Scutari-Palomar-Barbarossa-IT_Journal} may
lead to inefficient and unfair rate distributions among the links
even when the game admits a unique NE. This unfairness is due to the
fact that, without any additional constraint, the optimal power allocation
corresponding to a NE of the rate maximization game is often
the one that assigns high rates to the users with the highest (equivalent)
channels; which strongly penalizes all the other users. As many realistic
communication systems require prescribed Quality of Service (QoS)
guarantees in terms of achievable rate for each user, the system design
based on the game theoretic formulation of the rate maximization might
not be adequate.
To overcome this problem, in this paper we introduce a new distributed
system design, that takes explicitly into account the rate constraints.
More specifically, we propose a novel strategic non-cooperative game,
where every link is a player that competes against the others by choosing
the power allocation over the parallel channels that attains the desired
information rate, with the minimum transmit power. We will refer to
this new game as \emph{power minimization game}. An equilibrium is
achieved when every user realizes that, given the current power allocation
of the others, any change in its own strategy would result in an increase
in transmit power. This equilibrium is referred to as \emph{Generalized}
Nash Equilibrium (GNE) and the corresponding game is called Generalized
Nash Equilibrium Problem.%
\footnote{According to recent use, we term generalized Nash equilibrium problem
a Nash game where the feasible sets of the players depend on the other
players' strategies. Such kinds of games have been called in various
different ways in the literature, for example social equilibrium problems
or just Nash equilibrium problems.}
The game theoretical formulation proposed in this paper differs significantly
from the rate maximization games studied in \cite{Yu}-\cite{Scutari-Palomar-Barbarossa-IT_Journal}.
In fact, differently from these references, where the users are allowed
to choose their own strategies independently from each other, in the
power minimization game, the rate constraints induce a coupling among
the players' admissible strategies, i.e., each player's strategy set
depends on the current strategies of all the other players. This coupling
makes the study of the proposed game much harder than that of the
rate maximization game and no previous result in \cite{Yu}-\cite{Scutari-Palomar-Barbarossa-IT_Journal}
can be used. Recently, the calculation of generalized Nash equilibria
has been the subject of a renewed attention also in the mathematical
programming community, see for example \cite{FPang03}-\cite{Facchinei-Kanzow}.
Nevertheless, in spite of several interesting advances \cite{Facchinei-Kanzow}, none of the
game results in the literature are applicable to the power minimization
game.
The main contributions of the paper are the following. We provide
sufficient conditions for the nonemptiness and boundedness of the
solution set of the generalized Nash problem. Interestingly, these
sufficient conditions suggest a simple admission control procedure
to guarantee the feasibility of a given rate profile of the users. Indeed, our existence proof uses an advanced degree-theoretic result
for a nonlinear complementarity problem in order to handle the
unboundedness of the users' rate constraints.
We also derive conditions for the uniqueness of the GNE.
Interestingly, our sufficient conditions also become necessary in the case of one subchannel.
To compute
the generalized Nash solutions, we propose two alternative totally
distributed algorithms based on the single user waterfilling solution:
The \emph{sequential} IWFA and the \emph{simultaneous} IWFA. The sequential
IWFA is an instance of the Gauss-Seidel scheme: The users update their
own strategy sequentially, one after the other, according to the single
user waterfilling solution and treating the interference generated
by the others as additive noise. The simultaneous IWFA is based on
the Jacobi scheme: The users choose their own power allocation simultaneously,
still using the single user waterfilling solution. Interestingly,
even though the rate constraints induce a coupling among the feasible
strategies of all the users, both algorithms are still totally distributed.
In fact, each user, to compute the waterfilling solution, only needs
to measure the power of the noise plus the interference generated by the other users over each subchannel. It
turns out that the conditions for the uniqueness of the GNE are sufficient
for the convergence of both algorithms. Our convergence
analysis is based on a nonlinear transformation that turns the
generalized game in the power variables into a standard game in the
rate variables. Overall, this paper offers two major contributions
to the literature of game-theoretic approaches to multiuser
communication systems: (i) a new noncooperative game model is
introduced for the first time that directly addresses the issue of
QoS in such systems, and (ii) a new line of analysis
is introduced in the literature of distributed power allocation that
is expected to be broadly applicable for other game models.
The paper is organized as follows. Section \ref{sec:systMod_GTformulation}
gives the system model and formulates the power minimization problem
as a strategic non-cooperative game. Section \ref{Sec:Existence_Uniqueness}
provides the sufficient conditions for the existence and uniqueness
of a GNE of the power minimization game. Section \ref{Sec:IWFAs}
contains the description of the distributed algorithms along with
their convergence conditions. Finally, Section \ref{Sec:Conclusions}
draws the conclusions. Proofs of the results are given in the Appendices
\ref{proof_th:existence(main_body)}--\ref{proof_th:Convergence_IWFA-SIWFA}.
\section{System Model and Problem Formulation}
\label{sec:systMod_GTformulation}In this section we clarify the assumptions
and the constraints underlying the system model and we formulate the
optimization problem explicitly.
\subsection{System model}
\label{Sec:System.Model} We consider a $Q$-user Gaussian $N$-parallel
interference channel. In this model, there are $Q$ transmitter-receiver
pairs, where each transmitter wants to communicate with its corresponding
receiver over a set of $N$ parallel subchannels. These subchannels
can model either frequency-selective or flat-fading time-selective
channels \cite{Tse}.
Since our goal is to find distributed algorithms that require
neither a centralized control nor coordination among the links, we
focus on transmission techniques where no interference cancelation
is performed and multiuser interference is treated as additive colored
noise by each receiver. Moreover, we assume perfect channel state
information at both the transmitter and receiver sides of each link;\footnote{Note that each user is only required to know its own channel, but not the channels of the other users.} each
receiver is also assumed to measure with no errors the power of the
noise plus the overall interference generated by the other users over
the $N$ subchannels. For each transmitter $q$, the total average
transmit power over the $N$ subchannels is (in units of energy per
transmitted symbol) \begin{equation}
P_{q}=\dfrac{1}{N}\sum_{k=1}^{N}p_{q}(k),\label{Tx-power}\end{equation}
where $p_{q}(k)$ denotes the power allocated by user $q$ over the
subchannel $k$.
Under these assumptions, invoking the capacity expression for the
single user Gaussian channel $-$ achievable using random Gaussian
codes from all the users $-$ the maximum information rate on link
$q$ for a specific power allocation is \cite{Cover}%
\footnote{Observe that a GNE is obtained if each user transmits using Gaussian
signaling, with a proper power allocation. However, generalized Nash
equilibria achievable using non-Gaussian codes may exist. In this paper,
we focus only on transmissions using Gaussian codebooks.%
} \begin{equation}
R_{q}(\mathbf{p}_{q},\mathbf{p}_{-q})=\sum_{k=1}^{N}\log\left(1+\mathsf{sinr}_{q}(k)\right),\label{Rate}\end{equation}
with $\mathsf{sinr}_{q}(k)$ denoting the Signal-to-Interference
plus Noise Ratio (SINR) of link $q$ on the $k$-th subchannel: \begin{equation}
\mathsf{sinr}_{q}(k)\triangleq\frac{\left\vert H_{qq}(k)\right\vert ^{2}p_{q}(k)}{\sigma_{_{q}}^{2}(k)+\sum_{\, r\neq q}\left\vert H_{qr}(k)\right\vert ^{2}p_{r}(k)},\label{SINR_q}\end{equation}
where $\left\vert H_{qr}(k)\right\vert ^{2}$ is the power gain of
the channel between destination $q$ and source $r$; $\sigma_{_{q}}^{2}(k)$
is the variance of the zero-mean Gaussian noise on subchannel $k$ of
receiver $q$; and $\mathbf{p}_{q}\triangleq\left(p_{q}(k)\right)_{k=1}^{N}$ is
the power allocation strategy of user $q$ across the $N$ subchannels,
whereas $\mathbf{p}_{-q}\triangleq\left(\mathbf{p}_{r}\right)_{r\neq q}$
contains the strategies of all the other users.
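As a numerical illustration of (\ref{Rate})--(\ref{SINR_q}) (a minimal sketch of ours, not part of the original formulation), the rate of user $q$ can be evaluated as follows:
\begin{verbatim}
import numpy as np

def rate(q, p, H2, sigma2):
    # p[r, k]: power of user r on subchannel k;
    # H2[q, r, k] = |H_qr(k)|^2; sigma2[q, k]: noise variance.
    # Multiuser interference is treated as additive noise.
    Q = p.shape[0]
    interf = sigma2[q] + sum(H2[q, r] * p[r]
                             for r in range(Q) if r != q)
    return np.log(1.0 + H2[q, q] * p[q] / interf).sum()
\end{verbatim}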
\subsection{Game theoretic formulation}
\label{Sec:Problem-Formulation}We formulate the system design within
the framework of game theory \cite{Osborne,Aubin-book}, using as
desirability criterion the concept of GNE, see for example \cite{Nash-paper,Rosen}.
Specifically, we consider a strategic non-cooperative game, in which
the players are the links and the payoff functions are the transmit
powers of the users: Each player competes
against the others by choosing the power allocation (i.e., its strategy)
that minimizes its own transmit power, given a constraint on the minimum
achievable information rate on the link. A GNE of the game is reached
when each user, given the strategy profile of the others, does not
get any power decrease by unilaterally changing its own strategy,
still keeping the rate constraint satisfied. Stated in mathematical
terms, the game has the following structure
\begin{equation}
{\mathscr{G}}=\left\{ \Omega,\left\{ {\mathscr{P}}_{q}(\mathbf{p}_{-q})\right\} _{q\in\Omega},\{{P}_{q}\}_{q\in\Omega}\right\} ,\label{Game G}\end{equation}
where $\ \Omega\triangleq\left\{ 1,2,\ldots,Q\right\} $ denotes
the set of the active links, ${\mathscr{P}}_{q}(\mathbf{p}_{-q})\subseteq\mathbb{R}_{+}^{N}$
is the set of admissible power allocation strategies $\mathbf{p}_{q}\in{\mathscr{P}}_{q}(\mathbf{p}_{-q})$
of user $q$ over the subchannels $\mathcal{N}\triangleq\{1,\ldots,N\}$,
defined as\begin{equation}
\hspace{-0.2cm}{\mathscr{P}}_{q}(\mathbf{p}_{-q})\triangleq\!\left\{ \mathbf{x}_{q}\!\!\in\!\!\mathcal{\ \mathbb{R}}_{+}^{N}:\quad R_{q}(\mathbf{x}_{q},\mathbf{p}_{-q})\geq R_{q}^{\star}\right\} .\label{admissible strategy set_user_q}\end{equation}
with $R_{q}(\mathbf{p}_{q},\mathbf{p}_{-q})$ given in (\ref{Rate}),
and $R_{q}^{\star}$ denotes the minimum transmission rate required
by user $q$, which we assume positive without loss of generality.
In the sequel we will make reference to the vector $\mathbf{R}^{\star}\triangleq(R_{q}^{\star})_{q=1}^{Q}$
as to the \emph{rate profile}. The payoff function of the $q$-th
player is its own transmit power $P_{q}$, given in (\ref{Tx-power}).
Observe that, because of the rate constraints, the set of feasible
strategies ${\mathscr{P}}_{q}(\mathbf{p}_{-q})$ of each player $q$
depends on the power allocations $\mathbf{p}_{-q}$ of all the other
users.
The optimal strategy for the $q$-th player, given the power allocation
of the others, is then the solution to the following minimization
problem \begin{equation}
\begin{array}{ll}
\operatorname*{minimize}\limits _{\mathbf{p}_{q}} & \quad{\displaystyle \sum\limits _{k=1}^{N}}p_{q}(k)\\
\operatorname*{subject}\text{ }\operatorname*{to} & \quad\mathbf{p}_{q}\in{\mathscr{P}}_{q}(\mathbf{p}_{-q})\end{array},\label{Power Game}\end{equation}
where ${\mathscr{P}}_{q}(\mathbf{p}_{-q})$\ is given in (\ref{admissible strategy set_user_q}).
Note that, for each $q$, the minimum in (\ref{Power Game}) is taken
over $\mathbf{p}_{q},$ for a \textit{fixed} but arbitrary $\mathbf{p}_{-q}.$
Interestingly, given $\mathbf{p}_{-q},$ the solution of (\ref{Power Game})
can be obtained in ``closed'' form via the solution
of a singly-constrained optimization problem; see \cite{Palomar-Fonollosa05}
for an algorithm to implement this solution in practice.
\begin{lemma} \label{Lemma_WF_solution}For any fixed and nonnegative
$\mathbf{p}_{-q},$ the optimal solution $\mathbf{p}_{q}^{\star}=\{p_{q}^{\star}(k)\}_{k=1}^{N}$
of the optimization problem (\ref{Power Game}) exists and is unique.
Furthermore, \begin{equation}
\begin{array}{c}
\mathbf{p}_{q}^{\star}=\mathsf{WF}_{q}\left(\mathbf{p}_{1},\ldots,\mathbf{p}_{q-1},\mathbf{p}_{q+1},\ldots,\mathbf{p}_{Q}\right)=\mathsf{WF}_{q}(\mathbf{p}_{-q})\end{array},\quad\label{WF_single-user}\end{equation}
where the waterfilling operator $\mathsf{WF}_{q}\left(\mathbf{\cdot}\right)$
is defined as \begin{equation}
\left[\mathsf{WF}_{q}\left(\mathbf{p}_{-q}\right)\right]_{k}\triangleq\left(\lambda_{q}-\dfrac{\sigma_{q}^{2}(k)+{\displaystyle \sum\nolimits _{\, r\neq q}}\left\vert H_{qr}(k)\right\vert ^{2}p_{r}(k)}{\left\vert H_{qq}(k)\right\vert ^{2}}\right)^{+},\quad k\in\mathcal{N},\label{WF_operator}\end{equation}
with $\left(x\right)^{+}\triangleq\max(0,x)$ and the water-level
$\lambda_{q}$ chosen to satisfy the rate constraint $R_{q}(\mathbf{p}_{q}^{\star},\mathbf{p}_{-q})=R_{q}^{\star}$,
with $R_{q}(\mathbf{p}_{q},\mathbf{p}_{-q})$ given in (\ref{Rate}).
\end{lemma}
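For concreteness, the water-level $\lambda_{q}$ in Lemma \ref{Lemma_WF_solution} can be found by bisection, since the rate achieved by the waterfilling shape is continuous and nondecreasing in $\lambda_{q}$; more refined algorithms are discussed in \cite{Palomar-Fonollosa05}. The following Python sketch is purely illustrative (all names are ours), with rates measured in nats, consistently with the $e^{R_{q}^{\star}}-1$ terms used throughout the paper.
\begin{verbatim}
import numpy as np

def waterfill(h_qq, interf, rate_target, tol=1e-10):
    # h_qq[k] = |H_qq(k)|^2 and interf[k] = sigma_q^2(k)
    # + sum_{r != q} |H_qr(k)|^2 p_r(k), for k = 1..N.
    inr = np.asarray(interf, float) / np.asarray(h_qq, float)

    def rate(lam):                            # rate achieved at water level lam
        p = np.maximum(lam - inr, 0.0)        # waterfilling shape of WF_q
        return np.sum(np.log(1.0 + p / inr))  # nats per transmission

    lo, hi = inr.min(), inr.min() + 1.0
    while rate(hi) < rate_target:             # bracket the water level
        hi *= 2.0
    while hi - lo > tol:                      # bisection on lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rate(mid) < rate_target else (lo, mid)
    return np.maximum(hi - inr, 0.0)
\end{verbatim}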
The solutions of the game ${\mathscr{G}}$ in $($\ref{Game G}$)$,
if they exist, are the Generalized Nash Equilibria, formally defined
as follows.
\begin{definition} \label{NE def} A feasible strategy profile $\mathbf{p}^{\star}=(\mathbf{p}_{q}^{\star})_{q=1}^{Q}$
is a GNE of the game ${\mathscr{G}}$
if \begin{equation}
\
{\displaystyle \sum\limits _{k=1}^{N}}p_{q}^{\star}(k)\leq{\displaystyle \sum\limits _{k=1}^{N}}p_{q}(k),\ \text{\ \ }\forall\mathbf{p}_{q}\in{\mathscr{P}}_{q}(\mathbf{p}_{-q}^{\star}),\text{ }\forall q\in\Omega.\label{pure-NE}\end{equation}
\end{definition}
According to Lemma \ref{Lemma_WF_solution}, all the Generalized Nash
Equilibria of the game must satisfy the condition expressed by the
following Corollary.
\begin{corollary} \label{Corollary_SWF_system}A feasible strategy
profile $\mathbf{p}^{\star}=(\mathbf{p}_{q}^{\star})_{q=1}^{Q}$ is
a GNE of the game ${\mathscr{G}}$
if and only if it satisfies the following system of nonlinear equations\begin{equation}
\begin{array}{c}
\mathbf{p}_{q}^{\star}=\mathsf{WF}_{q}\left(\mathbf{p}_{1}^{\star},\ldots,\mathbf{p}_{q-1}^{\star},\mathbf{p}_{q+1}^{\star},\ldots,\mathbf{p}_{Q}^{\star}\right)\end{array},\quad\forall q\in\Omega,\label{SWF_system}\end{equation}
with $\mathsf{WF}_{q}\left(\mathbf{\cdot}\right)$ defined in (\ref{WF_operator}).
\end{corollary}
Given the nonlinear system of equations (\ref{SWF_system}), the fundamental
questions we want an answer to are: i) \emph{Does a solution exist,
for any given users' rate profile}? ii) \emph{If a solution exists,
is it unique}? iii) \emph{How can such a solution be reached in a
totally distributed way}?
An answer to the above questions is given in the forthcoming sections.
\section{Existence and Uniqueness of a Generalized Nash Equilibrium}
\label{Sec:Existence_Uniqueness} In this section we first provide
sufficient conditions for the existence of a nonempty and bounded
solution set of the Nash equilibrium problem (\ref{Game G}). Then,
we focus on the uniqueness of the equilibrium.
\subsection{Existence of a generalized Nash equilibrium}
Given the rate profile ${\mathbf{R}}^{\star}=(R_{q}^{\star})_{q=1}^{Q}$,
define, for each $k\in\mathcal{N}$, the matrix $\mathbf{Z}_{k}({\mathbf{R}}^{\star})\in\,\mathbb{R}^{Q\times Q}\,$
as \begin{equation}
\mathbf{Z}_{k}({\mathbf{R}}^{\star})\,\triangleq\,\left[\begin{array}{cccc}
\left\vert H_{11}(k)\right\vert ^{2}\, & -(e^{R_{1}^{\star}}-1)\left\vert H_{12}(k)\right\vert ^{2} & \cdots & -(e^{R_{1}^{\star}}-1)\,\left\vert H_{1Q}(k)\right\vert ^{2}\\[7pt]
-(e^{R_{2}^{\star}}-1)\,\left\vert H_{21}(k)\right\vert ^{2}\, & \left\vert H_{22}(k)\right\vert ^{2}\, & \cdots & -(e^{R_{2}^{\star}}-1)\,\left\vert H_{2Q}(k)\right\vert ^{2}\,\\[7pt]
\vdots & \vdots & \ddots & \vdots\\[7pt]
-(e^{R_{Q}^{\star}}-1)\,\left\vert H_{Q1}(k)\right\vert ^{2}\, & -(e^{R_{Q}^{\star}}-1)\,\left\vert H_{Q2}(k)\right\vert ^{2}\, & \cdots & \left\vert H_{QQ}(k)\right\vert ^{2}\,\end{array}\right]\,.\label{Z_kL}\end{equation}
We also need the definition of P-matrix, as given next.
\begin{definition}
A matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$ is called a Z-matrix if its off-diagonal entries are all
nonpositive. A matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$ is called a P-matrix
if every principal minor of $\mathbf{A}$ is positive.
\end{definition}
Many equivalent characterizations for a P-matrix can be given. The
interested reader is referred to \cite{BPlemmons79,CPStone92} for
more details. Here we note only that any positive definite matrix
is a P-matrix, but the reverse does not hold.
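Since the number of links $Q$ is typically small, the P-matrix property can be verified numerically by brute force. The following Python sketch (ours, purely illustrative; it is exponential in the matrix size) checks every principal minor directly:
\begin{verbatim}
import numpy as np
from itertools import combinations

def is_p_matrix(A, tol=1e-12):
    # Direct check of the definition: every principal minor positive.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for size in range(1, n + 1):
        for idx in combinations(range(n), size):
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True
\end{verbatim}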
Sufficient conditions for the nonemptiness of a bounded solution set
for the game ${\mathscr{G}}$
are given in the following theorem.
\begin{theorem} \label{th:existence(main_body)} The game ${\mathscr{G}}$
with rate profile ${\mathbf{R}}^{\star}=(R_{q}^{\star})_{q=1}^{Q}>\mathbf{0}$
admits a nonempty and bounded solution set if $\mathbf{Z}_{k}({\mathbf{R}}^{\star})$
is a P-matrix, %
for all $k\in\mathcal{N}$, with $\mathbf{Z}_{k}({\mathbf{R}}^{\star})$
defined in (\ref{Z_kL}). Moreover, any GNE $\mathbf{p}^{\star}=(\mathbf{p}_{q}^{\ast})_{q=1}^{Q}$
is such that\begin{equation}
\left(\begin{array}{c}
p_{1}^{\ast}(k)\\[5pt]
\vdots\\[5pt]
p_{Q}^{\ast}(k)\end{array}\right)\,\leq\,\left(\begin{array}{c}
\overline{p}_{1}(k)\\[5pt]
\vdots\\[5pt]
\overline{p}_{Q}(k)\end{array}\right)\,\triangleq\,(\,\mathbf{Z}_{k}({\mathbf{R}}^{\star})\,)^{-1}\left(\begin{array}{c}
\sigma_{1}^{2}(k)\,(\, e^{R_{1}^{\star}}-1\,)\\[5pt]
\vdots\\[5pt]
\sigma_{Q}^{2}(k)\,(\, e^{R_{Q}^{\star}}-1\,)\end{array}\right),\hspace{1.0538pc}k\,\in\mathcal{N}.\label{Upper_bound_GNE}\end{equation}
\end{theorem}
\begin{proof} See Appendix \ref{proof_th:existence(main_body)}.
\end{proof}
A more general (but less easy to check) result on the existence of
a bounded solution set for the game ${\mathscr{G}}$
is given by Theorem \ref{th:main existence} in Appendix \ref{proof_th:existence(main_body)}.
We now provide alternative sufficient conditions for Theorem \ref{th:existence(main_body)}
in terms of a single matrix. To this end, we first introduce the following
matrix\begin{equation}
{\mathbf{Z}}^{\max}({\mathbf{R}}^{\star})\triangleq\,\left[\begin{array}{cccc}
1 & -(e^{R_{1}^{\star}}-1)\beta_{12}^{\max} & \cdots & -(e^{R_{1}^{\star}}-1)\,\beta_{1Q}^{\max}\\[0.2in]
-(e^{R_{2}^{\star}}-1)\,\beta_{21}^{\max} & 1 & \cdots & -(e^{R_{2}^{\star}}-1)\beta_{2Q}^{\max}\\[0.2in]
\vdots & \vdots & \ddots & \vdots\\[7pt]
-(e^{R_{Q}^{\star}}-1)\,\beta_{Q1}^{\max} & -(e^{R_{Q}^{\star}}-1)\,\beta_{Q2}^{\max} & \cdots & 1\end{array}\right]\,,\label{Z_max}\end{equation}
where \begin{equation}
\beta_{qr}^{\max}\,\triangleq\max\limits _{k\in\mathcal{N}}\,\dfrac{\left\vert H_{qr}(k)\right\vert ^{2}}{\left\vert H_{rr}(k)\right\vert ^{2}},\hspace{1.0152pc}\forall r\neq q{\,,\quad q\in\Omega.}\label{beta_max}\end{equation}
We also denote by $e^{{\mathbf{R}}^{\star}}-\mbox{\boldmath{$1$}}$
the $Q$-vector with $q$-th component $e^{R_{q}^{\star}}-1,$ for
$q=1,\ldots,Q$. Then, we have the following corollary.
\begin{corollary} \label{Corollary:SF_Existence_Z_maxL}
If ${\mathbf{Z}}^{\max}({\mathbf{R}}^{\star})$ in (\ref{Z_max})
is a P-matrix, then all the matrices $\{\mathbf{Z}_{k}({\mathbf{R}}^{\star})\}$
defined in (\ref{Z_kL}) are P-matrices. Moreover, any GNE $\mathbf{p}^{\star}=(\mathbf{p}_{q}^{\star})_{q=1}^{Q}$
of the game ${\mathscr{G}}$
satisfies\begin{equation}
p_{q}^{\ast}(k)\leq\overline{p}_{q}(k)=\left(\frac{\max\limits _{r\in\Omega}\sigma_{r}^{2}(k)}{\left\vert H_{qq}(k)\right\vert ^{2}}\right)\left[\left({\mathbf{Z}}^{\max}({\mathbf{R}}^{\star})\right)^{-1}\left(e^{{\mathbf{R}}^{\star}}-\mbox{\boldmath{$1$}}\right)\right]_{q},\quad\forall q\in\Omega,\quad\forall k\,\in\mathcal{N}.\label{eq:FFbounds}\end{equation}
\end{corollary}
\begin{proof} See Appendix \ref{proof_Corollary:SF_Existence_Z_maxL}.
\end{proof}
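As a numerical illustration of Theorem \ref{th:existence(main_body)}, the sketch below (continuing the snippets above, with a data layout of our own choosing) assembles $\mathbf{Z}_{k}({\mathbf{R}}^{\star})$ from the channel power gains and evaluates the bound (\ref{Upper_bound_GNE}); combined with the P-matrix test, it yields a simple existence check:
\begin{verbatim}
def z_matrix(H2_k, rates):
    # Z_k(R*): H2_k[q, r] = |H_qr(k)|^2, rates[q] = R_q* in nats.
    H2_k = np.asarray(H2_k, float)
    rates = np.asarray(rates, float)
    Z = -(np.exp(rates)[:, None] - 1.0) * H2_k   # off-diagonal entries, row q
    Z[np.diag_indices(len(rates))] = np.diag(H2_k)
    return Z

def gne_power_bound(H2_k, sigma2_k, rates):
    # Upper bound p_bar(k) on any GNE, valid when Z_k(R*) is a P-matrix.
    Z = z_matrix(H2_k, rates)
    rhs = np.asarray(sigma2_k) * (np.exp(np.asarray(rates)) - 1.0)
    return np.linalg.solve(Z, rhs)
\end{verbatim}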
To give additional insight into the physical interpretation of the
existence conditions of a GNE, we make explicit the dependence of
each channel (power) gain $|H_{qr}(k)|^{2}$ on its own source-destination
distance $d_{qr}$ by introducing the normalized channel gain $|\overline{H}_{qr}(k)|^{2}=|H_{qr}(k)|^{2}d_{qr}^{\gamma},$
where $\gamma$ is the path loss exponent. We have the following corollary.
\begin{corollary} \label{Corollary:SF_Existence_for_ZkL}Sufficient
conditions for the matrices $\{\mathbf{Z}_{k}({\mathbf{R}}^{\star})\}$
defined in (\ref{Z_kL}) to be P-matrices are:\begin{equation}
{\displaystyle \sum\limits _{r\neq q}}\dfrac{|\overline{H}_{qr}(k)|^{2}}{|\overline{H}_{qq}(k)|^{2}}\dfrac{d_{qq}^{\,\gamma}}{d_{qr}^{\,\gamma}}<\frac{1}{\, e^{R_{q}^{\star}}-1\,},\quad\quad\forall q\in\Omega,\quad\forall k\,\in\mathcal{N}.\label{Corollary_SF_for_Z_kL}\end{equation}
\end{corollary}
\begin{proof} The proof follows directly from the sufficiency of the
diagonal dominance property \cite[Definition 2.2.19]{CPStone92} for the matrices $\mathbf{Z}_{k}({\mathbf{R}}^{\star})$
in (\ref{Z_kL}) to be P-matrices \cite[Theorem 6.2.3]{BPlemmons79}.
\end{proof}
\begin{remark}\rm A physical interpretation of the conditions in Theorem
\ref{th:existence(main_body)} (or Corollary \ref{Corollary:SF_Existence_for_ZkL})
is the following. Given the set of channels and
the rate constraints, a GNE of $\mathscr{G}$ is guaranteed to exist
if the multiuser interference is ``sufficiently small''
(e.g., the links are sufficiently far apart). In fact, from (\ref{Corollary_SF_for_Z_kL}),
which quantifies the concept of small interference, one infers that,
for any fixed set of (normalized) channels and rate constraints, there exists a
minimum distance beyond which an equilibrium exists, corresponding
to the maximum level of interference that may be tolerated from each
user. The amount of such a tolerable multiuser interference depends
on the rate constraints: the larger the required rate from each user,
the lower the level of interference guaranteeing the existence of
a solution. The reason why an equilibrium of the game $\mathscr{G}$
might not exist for any given set of channels and rate constraints,
is that the multiuser system we consider is interference limited,
and thus not every QoS requirement is guaranteed to be feasible. In
fact, in the game $\mathscr{G}$, each user tends to increase its transmit
power to satisfy its own rate constraint, which in turn raises
the interference experienced by the other users. It turns out that increasing
the transmit power of all the users does not guarantee that an equilibrium
exists for every given rate profile.
Observe that conditions in Theorem \ref{th:existence(main_body)}
also provide a simple admission control procedure to check if a set
of rate constraints is feasible: under these conditions indeed, one
can always find a \emph{finite} power budget for all the users such
that there exists a GNE where all the rate constraints are satisfied.
\end{remark}
\subsection{ Uniqueness of the Generalized Nash Equilibrium}
Before providing conditions for the uniqueness of the GNE of the game
$\mathscr{G}$, we introduce the following intermediate definitions.
For any given rate profile ${\mathbf{R}}^{\star}=(R_{q}^{\star})_{q=1}^{Q}>\mathbf{0},$
let $\overline{\mathbf{B}}({\mathbf{R}}^{\star})\in\,\mathbb{R}^{Q\times Q}\,$
be defined as\begin{equation}
\left[\overline{\mathbf{B}}({\mathbf{R}}^{\star})\right]_{qr}\,\equiv\,\left\{ \begin{array}{ll}
e^{-R_{q}^{\star}}, & \quad\text{if }q=r,\\[5pt]
-e^{R_{q}^{\star}}\,\widehat{\beta}_{qr}^{\max}, & \quad\text{otherwise,}\end{array}\right.\label{Beta_bar_matrix}\end{equation}
\noindent where\begin{equation}
\widehat{\beta}_{qr}^{\max}\,\triangleq\max\limits _{k\,\in\mathcal{N}}\left(\,\dfrac{\left\vert H_{qr}(k)\right\vert ^{2}}{\left\vert H_{rr}(k)\right\vert ^{2}}\dfrac{\sigma_{r}^{2}(k)+\sum_{\, r^{\,\prime}\neq r}\left\vert H_{rr^{\,\prime}}(k)\right\vert ^{2}\overline{p}_{r^{\,\prime}}(k)}{\sigma_{q}^{2}(k)}\right),\label{eq:def:beta_max_hat}\end{equation}
\noindent with $\overline{p}_{r^{\,\prime}}(k)$ defined in (\ref{Upper_bound_GNE}).
We also introduce $\chi$ and $\rho,$ defined respectively as\begin{equation}
\chi\triangleq1-\max_{q\in\Omega}\left[\left(\, e^{R_{q}^{\star}}-1\right){\displaystyle \sum\limits _{r\neq q}}\beta_{qr}^{\max}\,\right],\label{eq:def:_xi}\end{equation}
\noindent with $\beta_{qr}^{\max}$ given in (\ref{beta_max}), and\begin{equation}
\rho\triangleq\frac{e^{R_{\max}^{\star}}-1}{e^{R_{\min}^{\star}}-1},\label{eq:def_rho}\end{equation}
\noindent with\begin{equation}
R_{\max}^{\star}\triangleq\max\limits _{q\in\Omega}R_{q}^{\star},\text{\quad and\quad}R_{\min}^{\star}\triangleq\min\limits _{q\in\Omega}R_{q}^{\star}.\end{equation}
Sufficient conditions for the uniqueness of the GNE of the game $\mathscr{G}$
are given in the following theorem.
\begin{theorem} \label{th:uniqueness_(main body)} Given the game
${\mathscr{G}}$
with a rate profile ${\mathbf{R}}^{\star}=(R_{q}^{\star})_{q=1}^{Q}>\mathbf{0},$
assume that the conditions of Theorem \ref{th:existence(main_body)}
are satisfied.
If, in addition, $\overline{\mathbf{B}}({\mathbf{R}}^{\star})$ in
(\ref{Beta_bar_matrix}) is a P-matrix, then the game ${\mathscr{G}}$
admits a unique GNE. \end{theorem}
\begin{proof} See Appendix \ref{proof_th:uniqueness_(main body)}.
\end{proof}
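Along the same lines, the uniqueness test of Theorem \ref{th:uniqueness_(main body)} can be made operational by assembling $\overline{\mathbf{B}}({\mathbf{R}}^{\star})$ numerically from (\ref{Beta_bar_matrix})--(\ref{eq:def:beta_max_hat}). The sketch below reuses the routines introduced above and is again only illustrative:
\begin{verbatim}
def b_bar_matrix(H2, sigma2, rates):
    # H2[q, r, k] = |H_qr(k)|^2, sigma2[q, k], rates[q] = R_q* in nats.
    Q, _, N = H2.shape
    p_bar = np.stack([gne_power_bound(H2[:, :, k], sigma2[:, k], rates)
                      for k in range(N)], axis=1)      # per-subchannel bounds
    B = np.empty((Q, Q))
    for q in range(Q):
        for r in range(Q):
            if q == r:
                B[q, q] = np.exp(-rates[q])
                continue
            tot_r = sigma2[r] + sum(H2[r, rp] * p_bar[rp]
                                    for rp in range(Q) if rp != r)
            B[q, r] = -np.exp(rates[q]) * np.max(
                H2[q, r] / H2[r, r] * tot_r / sigma2[q])
    return B    # uniqueness is guaranteed if is_p_matrix(B) holds
\end{verbatim}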
More stringent but more intuitive conditions for the uniqueness of
the GNE are given in the following corollary.
\begin{corollary} \label{Corollary:SF_Uniqueness}Given the game
${\mathscr{G}}$ with rate profile ${\mathbf{R}}^{\star}=(R_{q}^{\star})_{q=1}^{Q}>\mathbf{0},$
assume that \begin{equation}
0<\chi<1,\end{equation}
so that a GNE for the game ${\mathscr{G}}$ is guaranteed to exist,
with $\chi$ defined in (\ref{eq:def:_xi}). Then, the GNE is unique
if the following conditions hold true\begin{equation}
{\sum_{r\neq q}}\,\beta_{qr}^{\max}\,\,\left\{ \,\left({\max_{k\,\in\mathcal{N}}\frac{\sigma_{r}^{2}(k)}{\sigma_{q}^{2}(k)}}\right)\,+\left[{\max_{r^{\,\prime}\in\Omega}}\,\left({\max_{k\,\in\mathcal{N}}}\,{\frac{\sigma_{r^{\,\prime}}^{2}(k)}{\sigma_{q}^{2}(k)}}\right)\right]\,\left({\frac{\rho}{\chi}}-1\,\right)\,\right\} <\frac{1}{e^{2R_{q}^{\star}}},\quad\forall q\in\Omega,\label{SF_2_Uniqueness}\end{equation}
\noindent with $\rho$ defined in (\ref{eq:def_rho}).
In particular, when $\sigma_{r}^{2}(k)=\sigma_{q}^{2}(n),$ $\forall r,q\in\Omega$
and $\forall k,n\in\mathcal{N},$ conditions (\ref{SF_2_Uniqueness})
become\begin{equation}
{\displaystyle \sum\limits _{r\neq q}}\max\limits _{k\,\in\mathcal{N}}\left\{ \dfrac{|\overline{H}_{qr}(k)|^{2}}{|\overline{H}_{rr}(k)|^{2}}\right\} \dfrac{d_{rr}^{\,\gamma}}{d_{qr}^{\,\gamma}}\,\,<\frac{\,\bar{\gamma}}{e^{R_{q}^{\star}}-1},\quad\forall q\in\Omega,\label{eq:SF_uniq_equal_sigmas}\end{equation}
\noindent with \begin{equation}
\bar{\gamma}\triangleq\frac{{\max\limits _{q\in\Omega}}\left\{ e^{-R_{q}^{\star}}-e^{-2R_{q}^{\star}}\right\} }{\dfrac{e^{R_{\max}^{\star}}-1}{e^{R_{\min}^{\star}}-1}+{\max\limits _{q\in\Omega}}\left\{ e^{-R_{q}^{\star}}-e^{-2R_{q}^{\star}}\right\} }<1,\end{equation}
where the bar distinguishes the constant $\bar{\gamma}$ from the path loss exponent $\gamma$.
\end{corollary}
\begin{proof} See Appendix \ref{proof_Corollary:SF_Uniqueness}.
\end{proof}
\subsection{On the conditions for existence and uniqueness of the GNE}
It is natural to ask whether the sufficient conditions as given by
Theorem \ref{th:existence(main_body)} (or the more general ones given
by Theorem \ref{th:main existence} in Appendix A) are tight. In the next proposition,
we show that these conditions become
indeed necessary in the special case of $N=1$ subchannel.
\begin{proposition} \label{pr:1-tone problem} Given the rate profile
$\mathbf{R}^{\star}=(R_{q}^{\star})_{q=1}^{Q},$ the following
statements are equivalent for the game $\mathscr{G}$ when $N=1$:%
\footnote{In the case of $N=1$, the power allocation $p_{q}(k)=p_{q}$ of each
user, the channel gains $\left\vert H_{rq}(k)\right\vert ^{2}=\left\vert H_{rq}\right\vert ^{2}$
and the noise variances $\sigma_{q}^{2}(k)=\sigma_{q}^{2}$ are independent
of the index $k$. The matrix $\mathbf{Z}_{k}({\mathbf{R}}^{\star})=\mathbf{Z}({\mathbf{R}}^{\star})$
is defined as in (\ref{Z_kL}), where each $\left\vert H_{rq}(k)\right\vert ^{2}$
is replaced by $\left\vert H_{rq}\right\vert ^{2}.$ %
}
\begin{description}
\item [{\rm}] (a) The problem (\ref{Power Game}) has a solution for some
(all) $(\sigma_{q}^{2})_{q=1}^{Q}>0$.
\item [{\rm}] (b) The matrix $\mathbf{Z}({\mathbf{R}}^{\star})$ is a
P-matrix.
\end{description}
If any one of the above two statements holds, then the game $\mathscr{G}$
has a unique solution that is the unique solution to the system of
linear equations: \begin{equation}
\left\vert H_{qq}\right\vert ^{2}\, p_{q}-(\, e^{R_{q}^{\star}}-1\,)\,{\displaystyle {\sum_{r\neq q}}\,\left\vert H_{qr}\right\vert ^{2}\, p_{r}\,=\,\sigma_{q}^{2}\,(\, e^{R_{q}^{\star}}-1\,)\qquad\forall q\in\Omega.}\label{eq:1-tone}\end{equation}
\end{proposition}
\begin{proof} See Appendix \ref{proof_Proposition_G_one_tone}. \end{proof}
\begin{remark} \rm Proposition \ref{pr:1-tone problem} also shows that it is, in general, very hard to obtain improved sufficient conditions for the existence and boundedness of solutions to the problem with $N > 1$, as any such condition must be implied by condition (b) above for the 1-subchannel case, which, as shown by the proposition, is necessary for the said existence and, as it turns out, also for the uniqueness.
\end{remark}
\begin{remark} \rm Observe that, when $N=1$, the game $\mathscr{G}$
leads to classical SINR based \emph{scalar }power control problems
in flat-fading CDMA (or TDMA/FDMA) systems, where the goal of each
user is to reach a prescribed SINR (see (\ref{SINR_q})) with the
minimum transmit power $P_{q}$ \cite{Bambos}. In this case, given
the rate profile $\mathbf{R}^{\star}=(R_{q}^{\star})_{q=1}^{Q}$ and
$N=1,$ the SINR target profile $\boldsymbol{\mathsf{sinr}}^{\star}\triangleq(\mathsf{sinr}_{q}^{\star})_{q=1}^{Q},$
as required in classical power control problems \cite{Bambos}, can
be equivalently written in terms of $\mathbf{R}^{\star}$ as\begin{equation}
\mathsf{sinr}_{q}^{\star}=e^{R_{q}^{\star}}-1,\quad q\in\Omega,\end{equation}
and the Nash equilibria $\mathbf{p}^{\star}=(p_{q}^{\star})_{q=1}^{Q}$
of the game $\mathscr{G}$ become the solutions of the following system
of linear equations\begin{equation}
\,\,\mathbf{Z}({\mathbf{R}}^{\star})\mathbf{p}^{\star}=\left(\begin{array}{c}
\sigma_{1}^{2}\,\mathsf{sinr}_{1}^{\star}\\[5pt]
\vdots\\[5pt]
\sigma_{Q}^{2}\mathsf{sinr}_{Q}^{\star}\end{array}\right).\label{eq:scalar_power_control_problem}\end{equation}
Interestingly, the necessary and sufficient condition (b) given in Proposition \ref{pr:1-tone problem}
is equivalent to that known in the literature for the existence and
uniqueness of the solution of the classical SINR based power control
problem (see, e.g., \cite{Bambos}). Moreover, observe that, in the
case of $N=1,$ the solution of the game $\mathscr{G}$ coincides
with the upper bound in (\ref{Upper_bound_GNE}).\end{remark}
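In this scalar case the equilibrium can thus be computed directly by solving (\ref{eq:scalar_power_control_problem}); a minimal sketch, reusing the routines above, is:
\begin{verbatim}
def scalar_power_control(H2, sigma2, rates):
    # N = 1: the unique GNE solves Z(R*) p = sigma^2 (e^{R*} - 1),
    # provided that Z(R*) is a P-matrix.
    Z = z_matrix(H2, rates)
    if not is_p_matrix(Z):
        raise ValueError("rate profile infeasible: Z(R*) is not a P-matrix")
    rhs = np.asarray(sigma2) * (np.exp(np.asarray(rates)) - 1.0)
    return np.linalg.solve(Z, rhs)
\end{verbatim}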
\medskip
\noindent \textbf{Numerical example.}
Since the existence and uniqueness
conditions of the GNE given so far depend on the channel power gains
$\left\{ |H_{qr}(k)|^{2}\right\} $, there is a nonzero probability
that they are not satisfied for a given channel realization drawn
from a given probability space and rate profile. To quantify the adequacy
of our conditions, we tested them over a set of channel impulse responses
generated as vectors composed of $L=6$ i.i.d. complex Gaussian random
variables with zero mean and variance equal to the squared distance
between the associated transmitter and receiver (multipath Rayleigh
fading model). Each user transmits over a set of $N=32$ subcarriers.
We consider a multicell cellular network as depicted in Figure \ref{Fig:check_cond}a),
composed of $7$ (regular) hexagonal cells, sharing the same band.
Hence, the transmissions from different cells typically interfere
with each other. For simplicity of representation, we assume that
in each cell there is only one active link, corresponding to the transmission
from the base station (BS) to a mobile terminal (MT).
According to this geometry, each MT receives a useful signal that
is comparable, on average, to the interference signal transmitted
by the BSs of two adjacent cells. The overall network can be modeled
as a $7$-user interference channel, composed of $32$ parallel subchannels.
In Figure \ref{Fig:check_cond}b), we plot the probability that existence
(red line curves) and uniqueness (blue line curves) conditions as
given in Theorem \ref{th:existence(main_body)} and Theorem \ref{th:uniqueness_(main body)},
respectively, are satisfied versus the (normalized) distance $d\in[0,1)$
[see Figure \ref{Fig:check_cond}a)], between each MT and its BS (assumed
to be equal for all the MT/BS pairs). We considered two different
rate profiles, namely $R_{q}^{\star}=1$ bit/symb/subchannel (square
markers) and $R_{q}^{\star}=2$ bit/symb/subchannel (cross markers),
$\forall q\in\Omega$. As expected, the probability of existence and
uniqueness of the GNE increases as each MT approaches its BS (i.e.,
$d\rightarrow1$), corresponding to a decrease of the intercell interference.
\begin{figure}[h]
\vspace{-0.5cm}
\par
\begin{center}
\includegraphics[trim=0.000000in 0.000000in 0.000000in
-0.212435in, width=7cm,height=7cm]{Fig1.eps}
(a)
\par
\includegraphics[trim=0.000000in 0.000000in 0.000000in
-0.212435in, width=11cm,height=7.7cm]{Fig2.eps}
\vspace{-0.4cm}(b)
\end{center}
\par
\vspace{-0.8cm}\caption{{\protect{\small Probability of existence (red line curves)
and uniqueness (blue line curves) of the GNE
versus $d$ {[}subplot (b)] for a $7$-cell (downlink) cellular system
{[}subplot (a)] and rate profiles $R_{q}^{\star}=1$ bit/symb/subchannel
(square markers) and $R_{q}^{\star}=2$ bit/symb/subchannel (cross
markers), $\forall q\in\Omega$.}}}\label{Fig:check_cond}
\end{figure}
\section{Distributed Algorithms}
\label{Sec:IWFAs} The game $\mathscr{G}$ was shown to admit a GNE,
under some technical conditions, where each user attains the desired
information rate with the minimum transmit power, given the power
allocations at the equilibrium of the others. In this section, we
focus on algorithms to compute these solutions. Since we are interested
in a decentralized implementation, where no signaling among different
users is allowed, we consider totally distributed algorithms, where
each user acts independently to optimize its own power allocation
while perceiving the other users as interference. More specifically,
we propose two alternative totally distributed algorithms based on
the waterfilling solution in (\ref{WF_single-user}), and provide
a unified set of convergence conditions for both algorithms.\vspace{-0.3cm}
\subsection{Sequential iterative waterfilling algorithm}
The sequential Iterative Waterfilling Algorithm (IWFA) we propose
is an instance of the Gauss-Seidel scheme (by which each user's power
is sequentially updated \cite{Bertsekas Book-Parallel-Comp}) based
on the mapping (\ref{WF_single-user}): Each player, sequentially
and according to a fixed updating order, solves problem (\ref{Power Game}),
performing the single-user waterfilling solution in (\ref{WF_single-user}).
The sequential IWFA is described in Algorithm 1.
\input{IWFA.tex}
\bigskip{}
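Since the pseudocode of Algorithm 1 is listed separately, the following Python sketch of the Gauss-Seidel schedule may help fix the ideas; it is only illustrative and reuses the waterfill routine given after Lemma \ref{Lemma_WF_solution}:
\begin{verbatim}
def sequential_iwfa(H2, sigma2, rates, n_iter=100):
    # Gauss-Seidel schedule: H2[q, r, k] = |H_qr(k)|^2, sigma2[q, k].
    Q, _, N = H2.shape
    p = np.zeros((Q, N))
    for _ in range(n_iter):
        for q in range(Q):                    # fixed updating order
            interf = sigma2[q] + sum(H2[q, r] * p[r]
                                     for r in range(Q) if r != q)
            p[q] = waterfill(H2[q, q], interf, rates[q])
    return p
\end{verbatim}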
The convergence of the algorithm is guaranteed under the following
sufficient conditions.
\begin{theorem} \label{th:Convergence_IWFA} Assuming $\mathrm{Number\_of\_iterations}=\infty$,
the sequential IWFA, described in Algorithm 1, converges linearly
\footnote{A sequence $\{x_{n}\}$ is said to converge linearly to $x^{\star}$ if there is a constant $0<c<1$ such that $|| x_{n+1}-x^{\star}|| \leq c || x_{n}-x^{\star}||$ for all $n\geq \bar{n}$ and some $\bar{n}\in \mathbb{N}$.} to the unique GNE of the game ${\mathscr{G}}$,
if the conditions of Theorem \ref{th:uniqueness_(main body)} are
satisfied. \end{theorem}
\begin{proof} See Appendix \ref{proof_th:Convergence_IWFA-SIWFA}.
\end{proof}
\medskip{}
\begin{remark}\rm Observe that the convergence of the algorithm is guaranteed
under the same conditions obtained for the uniqueness of the solution
of the game. As expected, the convergence is ensured if the level
of interference in the network is not too high. \end{remark}
\begin{remark} \rm The main features of the proposed algorithm are its
low-complexity and distributed nature. In fact, despite the coupling
among the users' admissible strategies due to the rate constraints,
the algorithm can be implemented in a totally distributed way, since
each user, to compute the waterfilling solution (\ref{WF_single-user}),
only needs to locally measure the interference-plus-noise power over
the $N$ subchannels [see (\ref{SINR_q})] and waterfill over this
level. \end{remark}
\begin{remark} \rm Despite its appealing properties, the sequential IWFA
described in Algorithm $1$ may suffer from slow convergence if the
number of users in the network is large, as we will also show numerically
in Section \ref{Sec:SIWFA}. This drawback is due to the sequential
schedule in the users' updates, wherein each user, to choose its own
strategy, is forced to wait for all the other users scheduled before
it. It turns out that the sequential schedule, as in Algorithm $1$,
does not really gain from the distributed nature of the multiuser
system, where each user, in principle, is able to change its own strategy,
irrespective of the update times of the other users. Moreover, to
be performed, the sequential update requires a centralized synchronization
mechanism that determines the order and the update times of the users.
We address more precisely this issue in the next section. \end{remark}
\subsection{Simultaneous iterative waterfilling algorithm}
\label{Sec:SIWFA} To overcome the drawback of the possible slow speed
of convergence, we consider in this section the \emph{simultaneous}
version of the IWFA, called the simultaneous Iterative Waterfilling Algorithm.
The algorithm is an instance of the Jacobi scheme \cite{Bertsekas Book-Parallel-Comp}:
At each iteration, the users update their own power allocation \emph{simultaneously},
performing the waterfilling solution (\ref{WF_single-user}), given
the interference generated by the other users in the \emph{previous}
iteration. The simultaneous IWFA is described in Algorithm $2$.
\bigskip{}
\input{SIWFA.tex}
\bigskip{}
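Analogously to the sketch given for Algorithm 1, the Jacobi schedule can be illustrated as follows (again with our own naming conventions):
\begin{verbatim}
def simultaneous_iwfa(H2, sigma2, rates, n_iter=100):
    # Jacobi schedule: every user waterfills against the interference
    # generated by the others at the previous iteration.
    Q, _, N = H2.shape
    p = np.zeros((Q, N))
    for _ in range(n_iter):
        p_old = p.copy()
        p = np.stack([waterfill(H2[q, q],
                                sigma2[q] + sum(H2[q, r] * p_old[r]
                                                for r in range(Q) if r != q),
                                rates[q])
                      for q in range(Q)])
    return p
\end{verbatim}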
Interestingly, (sufficient) conditions for the convergence of the
simultaneous IWFA are the same as those required by the sequential
IWFA, as given in the following.
\begin{theorem} \label{th:Convergence_SIWFA} Assuming $\mathrm{Number\_of\_iterations}=\infty$,
the simultaneous IWFA, described in Algorithm 2, converges linearly
to the unique GNE of the game ${\mathscr{G}}$,
if the conditions of Theorem \ref{th:uniqueness_(main body)} are
satisfied. \end{theorem}
\begin{proof} See Appendix \ref{proof_th:Convergence_IWFA-SIWFA}.
\end{proof}
\begin{remark}\rm Since the simultaneous IWFA is still based on the waterfilling
solution (\ref{WF_single-user}), it keeps the most appealing features
of the sequential IWFA, namely its low-complexity and distributed
nature. In addition, thanks to the Jacobi-based update, all the users
are allowed to choose their optimal power allocation simultaneously.
Hence, the simultaneous IWFA is expected to be faster than the sequential
IWFA, especially if the number of active users in the network is large.
\end{remark}
\noindent \textbf{Numerical Example.} As an example, in Figure \ref{SIWFA-IWFA},
we compare the performance of the sequential and simultaneous IWFA,
in terms of convergence speed. We consider a network composed of 10
links and we show the rate evolution of three of the links corresponding
to the sequential IWFA and simultaneous IWFA as a function of the
iteration index $n$ as defined in Algorithms 1 and 2. In Figure \ref{SIWFA-IWFA}a)
we consider a rate profile for the users with two different classes
of service, whereas in Figure \ref{SIWFA-IWFA}b) the same target
rate is required for all the users. As expected, the sequential IWFA
is slower than the simultaneous IWFA, especially if the number of
active links $Q$ is large, since each user is forced to wait for
all the other users scheduled before it in order to update its power allocation.\vspace{-0.2cm}
\noindent %
\begin{figure}[h]
\vspace{-0.5cm}
\par
\begin{center}
\includegraphics[trim=0.000000in 0.000000in 0.000000in
-0.212435in, height=7cm]{Example_Two_Rates_2}
(a)
\par
\includegraphics[trim=0.000000in 0.000000in 0.000000in
-0.212435in, height=7cm]{Example_Equal_Rates_2}
(b)
\end{center}
\par
\vspace{-0.4cm}\vspace{-0.3cm}\caption{{{\small Rates of the users versus iterations: sequential IWFA (solid
line curves), simultaneous IWFA (dashed line curves), $Q=10,$ $d_{rq}/d_{qr}$,
$d_{rr}=d_{qq}=1,$ $\gamma=2.5$.}}}
\label{SIWFA-IWFA}%
\end{figure}
\section{Conclusions}
\label{Sec:Conclusions}In this paper we have considered the distributed
power allocation in Gaussian parallel interference channels,
subject to QoS\ constraints. More specifically, we have proposed
a new game theoretic formulation of the power control problem, where
each user aims at minimizing the transmit power while guaranteeing
a prescribed information rate. We have provided sufficient conditions
for the nonemptiness and the boundedness of the solution set of the
Nash problem. These conditions suggest a simple admission control
procedure to check the feasibility of any given users' rate profile.
As expected, there exists a trade-off between the performance achievable
by each user (i.e., the achievable information rate) and the maximum level
of interference that may be tolerated in the network. Under some additional
conditions we have shown that the solution of the generalized Nash
problem is unique and we have proved the convergence of two distributed
algorithms: The sequential and the simultaneous IWFAs. Interestingly,
although the rate constraints induce a coupling among the feasible
strategies of the users, both algorithms are totally distributed,
since each user, to compute the waterfilling solution, only needs
to measure the noise-plus-interference power across the subchannels.
Our results are thus appealing in all the practical distributed multipoint-to-multipoint
systems, either wired or wireless, where centralized power control
techniques are not allowed and QoS in terms of information rate must
be guaranteed for each link.
One interesting direction that is worthy of further investigation
is the generalization of the proposed algorithms to the case of asynchronous
transmission and totally asynchronous updates among the users, as
done in \cite{Scutari-Palomar-Barbarossa-IT_Journal} for the rate
maximization game.
\section{Appendices}
Kick motions are an essential part of robot soccer. In recent years, the speed of the game has increased a lot with most teams now being able to stably walk at high speeds. Thus, fights for the ball are more common. A flexible kick motion that is able to adapt to different and changing ball locations as well as to different kick speeds on the fly while keeping the robot in balance during pushes from other robots is a huge advantage in such situations.
Several methods to design and execute flexible kick motions have already been developed. For instance, Müller et al. \cite{BIKE-RoboCup-2010} model the kick foot trajectory using hand-crafted piecewise Bézier curves, which are modified on the fly to adapt to different ball positions. However, handcrafting Bézier curves is a complex and time consuming task. Wenk et al. \cite{RC-Wenk-Roefer-14} tackle this problem by automatically inferring trajectories based on the ball position, kick velocity, and kick direction. While this method works, it does not allow the user to influence the resulting trajectory, \ie creating special purpose kicks like backward kicks is not possible.
In this paper, we present a middle ground between the two above-mentioned approaches: A kick motion that can be hand-crafted easily by using kinematic teach-in or be created by a multitude of optimization algorithms while retaining the ability to adapt to different ball positions and kick velocities.
This is done by using a modified version of Dynamic Movement Primitives (DMPs)\cite{ijspeert2013dynamical} to describe the kick trajectory.
During the kick, the robot is dynamically balanced using a Linear Quadratic Regulator (LQR) with previews to keep the Zero Moment Point (ZMP) inside the support polygon. This involves a new way of estimating the ZMP based on a model of the motor behavior of the NAO.
The remainder of the paper is organized as follows: Section \ref{sec:motor} introduces the motor model that is the basis of the ZMP calculation, \secref{sec:balance} explains the ZMP estimation and introduces the balancing algorithm while \secref{sec:dmp} introduces DMPs and explains how we use them to model a kick trajectory. Sections \ref{sec:eval} and \ref{sec:conclusion} wrap up the paper with an evaluation of the kick motion and a conclusion.
\section{Model-Based Motor Position Prediction}
\label{sec:motor}
As described below, the NAO's motor response delay is usually 30 ms. Thus, the ZMP balancer needs to take into account that the motor will not be at the currently measured position when the current command reaches the motor. Our solution to this problem is to predict the current motor position using a mathematical model and to use this prediction in our control algorithms.
\subsection{Determining the NAO's Motor Response Delay}
\label{subsec:delay}
The \nao's motors are position-controlled using the proprietary NaoQi software. It processes the commands and relays them to an ARM-7 micro controller at 100 Hz. The controller distributes the commands over RS-485 to several dsPIC micro controllers which are responsible for controlling the actual motors. Measured motor positions travel back the same chain\cite{naoDesign}. This chain together with the slow control rate of 100 Hz induces a delay between sending a command and being able to measure a reaction of the motor.
To determine the actual delay, a motor is moved from a resting position into a random direction and the time between sending the command and measuring a movement is recorded. A movement is registered as soon as the measured motor position deviates from the position that the motor was in when the command was issued. No threshold is used. For measuring, the internal sensor is used.
This is done 100 times for each leg motor of four different NAOs, thus, we get 4400 measurements in total.
As shown in \figref{fig:motor_delay_histogram}, the vast majority of motor reactions occurs at 30 milliseconds.
The average position deviation measured for the reactions at 10 and 20 ms is $0.087^\circ$. This is below the maximum accuracy of the motors, which is $0.1^\circ$. Therefore, we can assume that the measurements at 10 and 20 ms are due to sensor noise. However, the measurements at 40 ms cannot be discarded as noise.
Taking a closer look, it seems that some joints in some robots are more prone to responding after 40 ms than others, suggesting that hardware wear or defects might cause a delayed measurement.
Thus, for a fully repaired robot it is safe to assume that the motor response delay is 30 ms. Actually, the delay might be anywhere between 20 and 30 ms, but due to the 100 Hz duty cycle, more precise measurements are not possible.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{hist_combined.pdf}
\caption{Histograms of motor response delay. Each bar is one motor of one robot. The zoomed part shows the motors that responded after 40ms, color-coded by robot. The experiment was conducted using the leg motors of four robots and repeating each trial 100 times. Thus, 4400 responses were measured in total.}
\label{fig:motor_delay_histogram}
\end{figure}
\subsection{A Model to Estimate the Motor Position}
We propose to model the behavior of a motor as second order dynamical system based on a mass spring damper:
\begin{align}
T^2\ddot{y}(t) + 2DT \min(\dot{y}(t), V_{max}) + y(t) = u(t), \text{~~} T, D, y, u \in \mathbb{R}
\end{align}
$T$ is the time constant, $y(t)$ is the motor position at time $t$, $D$ is a damping constant, $V_{max}$ is the maximum motor velocity that is used to limit $\dot{y}(t)$, and $u(t)$ is the requested motor position at time $t$.
The parameters $(T, D, V_{max})$ need to be set in a way that the model optimally mimics a motor. This can be achieved by minimizing the error function $J$:
\begin{align}
J = \sum_{s \in S} \sum_{i=0}^{|s|} d(i) (m(i) - s(i))^2
\end{align}
$S$ is a set of step responses for a given motor, $|s|$ the number of measurements in step response $s$, $m(i)$ the position of the model at the $i$-th step, $s(i)$ the actual motor position at the $i$-th step and $d(i) = 0.85^i$ a decay function.
The decay function $d$ emphasizes the short term model quality over the long term, \ie we prefer parameters that provide a better short term prediction over parameters that provide an overall good prediction. This is done because in our use case the model is only used to predict a short amount of time.
Sets of step responses can be generated by applying step functions, which jump from zero to their respective values instantly, with different step heights to the motor. \Figref{fig:step_response_fit} shows a set of step responses that has been recorded and the respective optimal model response. For fitting, the first three samples of the step response should be ignored to make up for the motor response delay.
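As an illustration of this fitting procedure, the following Python sketch discretizes the model with an explicit Euler step at the NAO's 100 Hz cycle and minimizes $J$ with a derivative-free method. The sketch is ours; in particular, the initial guess is an arbitrary assumption and not prescribed by the procedure above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def simulate(params, u, dt=0.01, y0=0.0):
    # Explicit Euler integration of T^2 ydd + 2 D T min(yd, Vmax) + y = u.
    T, D, Vmax = params
    y, yd, out = y0, 0.0, []
    for u_t in u:
        ydd = (u_t - y - 2.0 * D * T * min(yd, Vmax)) / T**2
        yd += ydd * dt
        y += yd * dt
        out.append(y)
    return np.array(out)

def fit_model(step_responses, x0=(0.05, 1.0, 6.0)):
    # step_responses: list of (u, s) arrays of commanded and measured
    # positions; the first three samples are dropped (30 ms delay).
    def J(params):
        err = 0.0
        for u, s in step_responses:
            m = simulate(params, u[3:], y0=s[3])
            err += np.sum(0.85 ** np.arange(len(m)) * (m - s[3:]) ** 2)
        return err
    return minimize(J, x0, method="Nelder-Mead").x
\end{verbatim}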
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.48\textwidth]{lhippitch_fit.eps} \label{fig:step_response_fit}}
\subfigure[]{\includegraphics[width=0.48\textwidth]{lhiproll_walk_unshifted.eps} \label{fig:step_response_walk_shift}}
\caption{(a) Step responses and their optimal fit. The blue lines are the step functions, red are the step responses, and green the response of the best fitting model. The model of the depicted LHipPitch joint has an error that is slightly above average (see \Tabref{tab:model_walk_error}). (b) Prediction and actual motor response of a motor while walking}
\end{figure}
\subsection{Model Evaluation}
To show that the model can be used to predict the real world motor positions of the \nao, we compared the model response and the actual motor position while executing a five second walking motion. For the comparison, the real motor values have been shifted in time to remove the measurement delay. \Figref{fig:step_response_walk_shift} shows an excerpt of the experiment of the LHipRoll motor.
The average absolute error over all leg joints is $0.185^{\circ}$ with a variance of $0.0573$. Thus, the model seems to be able to predict the motor behavior with sufficient accuracy. Detailed results for each motor can be seen in \tabref{tab:model_walk_error}.
\begin{table}[t]
\parbox{.49\linewidth}{
\centering
\begin{tabular}{|c|c|c|}
\hline
Joint & Avg. error & Variance\\\hline\hline
LAnklePitch & $0.196^{\circ}$ & 0.04\\\hline
LAnkleRoll & $0.153^{\circ}$ & 0.02\\\hline
LHipPitch & $0.216^{\circ}$ & 0.053\\\hline
LHipRoll & $0.111^{\circ}$ & 0.009\\\hline
LHipYawPitch & $0.048^{\circ}$ & 0.003\\\hline
LKneePitch & $0.363^{\circ}$ & 0.145\\\hline
RAnklePitch & $0.216^{\circ}$ & 0.054\\\hline
RAnkleRoll & $0.170^{\circ}$ & 0.025\\\hline
RHipPitch & $0.307^{\circ}$ & 0.092\\\hline
RHipRoll & $0.097^{\circ}$ & 0.007\\\hline
RKneePitch & $0.297^{\circ}$ & 0.118\\\hline
\end{tabular}
\vspace{1mm}
\caption{Error in leg motor predictions}
\label{tab:model_walk_error}
}
\parbox{.49\linewidth}{
\centering
\begin{tabular}{|c|c|c|}
\hline
Robot & \begin{tabular}[b]{@{}c@{}} Avg. error\\ over all leg joints\end{tabular} & Variance\\\hline\hline
Original & $0.185^{\circ}$ & 0.057\\\hline
NAO 1 & $0.159^{\circ}$ & 0.055\\\hline
NAO 2 & $0.183^{\circ}$ & 0.065\\\hline
NAO 3 & $0.179^{\circ}$ & 0.056\\\hline
NAO 4 & $0.181^{\circ}$ & 0.071\\\hline
NAO 5 & $0.197^{\circ}$ & 0.061\\\hline
\end{tabular}
\vspace{1mm}
\caption{Errors when applying model parameters that have been optimized for one NAO to five different NAOs}
\label{tab:model_walk_error_portability}
}
\end{table}
We were also interested in the portability of the model parameters. As documented in \tabref{tab:model_walk_error_portability}, a repetition of the experiment on different robots (always using the same model again) was successful.
\section{ZMP-based Balancing}
\label{sec:balance}
The robot needs to be kept in balance while kicking.
A good measure for the balance of a robot is the Zero Moment Point (ZMP). A robot is said to be dynamically stable if the ZMP is inside the support polygon\cite{vukobratovic2004zero}.
The center of pressure of the support polygon and the Zero Moment Point are coincident\cite{sardain2004forces}. Therefore, it is possible to use the pressure sensors under the NAO's feet to measure the ZMP, if it exists. However, the sensors are quite inaccurate and often faulty. To avoid using these sensors, we estimate the ZMP $(z_x, z_y)^T$ based on the cart-table model proposed by Kajita et al.\cite{kajitaCartTable}:
\begin{align}
\begin{pmatrix} z_x\\z_y\end{pmatrix} =
\begin{pmatrix} c_x\\c_y\end{pmatrix} - \frac{c_z}{g} \begin{pmatrix}\ddot{c_x}\\ \ddot{c_y}\end{pmatrix}
\end{align}
$(c_x, c_y, c_z)^T$ is the center of mass (COM) and $g\approx9.81\,\mathrm{m/s^2}$ the gravitational acceleration.
Due to the measurement delay and the high sensor noise, we estimate the COM using the motor model and forward kinematics $(c_x^m, c_y^m, c_z^m)^T$. The motor model is initialized using sensor readings at the beginning of the motion. While the kick motion is being executed, the model is not updated from sensor readings.
Tilting the robot over the edges of the supporting foot does not influence the estimated ZMP. To detect such situations, we calculate the scaled difference $P\Delta\Theta=P(\gamma - \phi)$ between the expected torso orientation $\gamma$ as provided by the motor model and the measured torso orientation $\phi$ as provided by the IMU and shift the COM accordingly\cite{alcaraz2013robust}, the product being taken elementwise. The unitless constant factor $P$ needs to be adjusted manually. We chose $P=(30, -30)^T$. \Figref{fig:zmp_shift:a} shows how the ZMP behaves with and without tilt detection.
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[width=0.49\textwidth]{shift_com_zmp}\label{fig:zmp_shift:a}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{graph.eps}\label{fig:balance_overview}}
\caption{(a) The red and blue lines show the estimated ZMP position with and without tilt detection while the robot is being tilted backwards. (b) The balancing process.}\label{fig:zmp_shift}
\end{figure}
To be able to measure outside influences, \eg someone pushing the robot, we replaced the COM acceleration by the acceleration of the torso $\ddot{o}$ as measured by the NAO's IMU. This can be done because the COM is usually inside the torso and thus both accelerations are similar.
Thus, the final ZMP is calculated by:
\begin{align}
\begin{pmatrix} z_x\\z_y\end{pmatrix} =
P\Delta\Theta + \begin{pmatrix} c_x^m\\c_y^m\end{pmatrix} - \frac{c_z^m}{g} \begin{pmatrix}\ddot{o_x}\\ \ddot{o_y}\end{pmatrix}
\end{align}
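Reading the product $P\Delta\Theta$ elementwise as above, the estimate can be computed as in the following sketch (purely illustrative; the names are ours):
\begin{verbatim}
def estimate_zmp(com_model, torso_acc, tilt_err, P=(30.0, -30.0), g=9.81):
    # com_model = (c_x^m, c_y^m, c_z^m) from motor model + kinematics,
    # torso_acc = measured torso acceleration (x, y) from the IMU,
    # tilt_err  = expected minus measured torso orientation (x, y).
    cx, cy, cz = com_model
    zx = P[0] * tilt_err[0] + cx - cz / g * torso_acc[0]
    zy = P[1] * tilt_err[1] + cy - cz / g * torso_acc[1]
    return zx, zy
\end{verbatim}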
To finally balance the robot, an LQR preview controller as described in \cite{tasse2010, tasse2013, RC-Wenk-Roefer-14} has been implemented.
The inputs of the controller are the current ZMP and current COM as well as the next 50 desired ZMP positions. \Figref{fig:balance_overview} shows an overview of the balancing sub-system.
\section{Describing Kicks Using Dynamic Movement Primitives}
\label{sec:dmp}
The kick trajectory is described by using Dynamic Movement Primitives (DMPs)\cite{schaal2006dynamic, ijspeert2013dynamical}.
For the sake of simplicity, this chapter only considers one-dimensional DMPs but they can be easily scaled to $n$ dimensions as long as the dimensions are independent of each other, \ie for translational movements. For rotational movements, a special DMP formulation has been introduced by Ude et al.\cite{dmpUde}. However, for kick motions it is sufficient to simply keep the foot level to the ground, thus no rotational DMP was used in this paper.
DMPs model goal-directed movements as weakly non-linear dynamical systems. They consist of the canonical system and the transformation system.
The \textit{canonical system} $s$ describes the phase of the movement:
\begin{align}
\tau\dot{s} &= -\alpha_ss \label{eq:canonical_system}
\end{align}
The phase $s$ replaces the time in the transformation system. Intuitively, it drives the transformation system much like a clock\cite{Muelling30012013}.
$s$ conventionally starts at 1 and monotonically converges to zero. $\tau$ is the execution time of the movement. As long as $\tau$ remains constant, a closed solution for the canonical system exists\cite{Muelling30012013}:
\begin{align}
s(t) = \exp\left(\frac{-\alpha_s}{\tau} t\right) \label{eq:canonicalSystemDirect}
\end{align}
$\alpha_s$ determines how fast $s$ converges. The value of $\alpha_s$ has to be chosen in a way that $s$ is sufficiently close to zero at the end of the execution. We chose $\alpha_s$ by setting $s(\tau) = 0.01$ and solving \eqref{eq:canonicalSystemDirect} for $\alpha_s$.
The \textit{transformation system} is defined by two first order differential equations:
\begin{align}
\tau\dot{z} &= \alpha_z(\beta_z(g-y)-z) + sf(s)\label{eq:transformation_system_1}\\
\tau\dot{y} &= z\label{eq:transformation_system_2}
\end{align}
$\tau$ is the execution time of the movement, $\alpha_z$ and $\beta_z$ are damping constants, $g$ is the goal position of the movement, $y$ is the current position of the movement, $s$ is the current phase as defined by \equref{eq:canonical_system} and $f(s)$ is called the forcing term.
The damping constants are set for critical damping, \ie $\beta_z = \alpha_z/4$. We chose $\alpha_z=25$ and $\beta_z=6.25$, but the exact values do not matter as long as the system is critically damped.
The forcing term defines the movement's shape. With $f(s) = 0$, the system is just a PD controller converging to $g$ and reaching it at time $\tau$. One could say that $f$ superimposes its shape onto the PD controller.
To ensure that $f$ does not keep the system from reaching the desired goal position, $f$ is scaled by the phase, thus diminishing the influence of $f$ towards the end of the movement.
To be able to express arbitrary movements, $f$ is typically chosen to be a radial basis function approximator:
\begin{align}
f(s) &= \frac{\sum_{i=1}^N \psi_i(s)w_i}{\sum_{i=1}^N \psi_i(s)}, w_i \in \mathbb{R} \label{eq:rbf}
\end{align}
$\psi_i$ is the i-th Gaussian radial basis function with mean $c_i$ and variance $\sigma_i^2$:
\begin{align}
\psi_i(s) = \exp\left(-\frac{1}{2\sigma_i^2}(s - c_i)^2\right)
\end{align}
The weights $w_i$ can be chosen to create any desired function and thus define the shape of the whole movement. Different learning and optimization approaches can be used to find the weights for a certain movement, \eg Schaal et al. \cite{schaal2003control} describe how to imitate a given trajectory.
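For instance, imitation can be implemented by inverting the transformation system along a demonstrated trajectory and fitting the weights by per-basis kernel regression. The sketch below is one simple variant of this idea (it ignores the coupling through the normalization in (\ref{eq:rbf}) and is not necessarily the exact procedure of \cite{schaal2003control}):
\begin{verbatim}
import numpy as np

def learn_weights(y, yd, ydd, tau, g, c, sigma2,
                  alpha_z=25.0, beta_z=6.25, dt=0.001):
    # y, yd, ydd: sampled demonstrated position/velocity/acceleration;
    # c, sigma2: RBF centers and variances (arrays of equal length).
    alpha_s = -np.log(0.01)                         # from s(tau) = 0.01
    s = np.exp(-alpha_s / tau * dt * np.arange(len(y)))
    # invert the transformation system:
    # s f(s) = tau^2 ydd - alpha_z (beta_z (g - y) - tau yd)
    f_target = (tau**2 * ydd - alpha_z * (beta_z * (g - y) - tau * yd)) / s
    psi = np.exp(-0.5 * (s[:, None] - c[None, :])**2 / sigma2[None, :])
    return psi.T @ f_target / (psi.sum(axis=0) + 1e-12)  # one w_i per basis
\end{verbatim}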
Since the goal position $g$ is part of \equref{eq:transformation_system_1}, the shape also depends on $g$. This means that the weights can only force the system into a certain shape for one specific value of $g$. When $g$ changes, the shape ``bends'' to reach the new goal position. In general, this is undesired behavior and is solved by scaling $f$ if $g$ changes.
\begin{figure}[tb]
\centering
\subfigure[]{
\includegraphics[width=0.475\textwidth]{shift_goal_position_no_scaling}
\label{fig:shift_goal_position_no_scaling:a}
}
\subfigure[]{
\includegraphics[width=0.475\textwidth]{dmp_time_scale}
\label{fig:shift_goal_position_no_scaling:b}
}
\caption{(a) Behavior of the kick motion when the goal position is adapted. (b) Temporal scaling capability of the DMP.}
\label{fig:shift_goal_position_no_scaling}
\end{figure}
However, we found that kick motions ``bend'' in a natural way (\figref{fig:shift_goal_position_no_scaling}), \ie if the goal position is moved closer to the start position, the robot will swing further back and vice versa. This is exactly the behavior that one would expect from a kicking motion because it ensures that the distance between the inflection point and the goal position remains the same. This is important because the kicking velocity depends on this distance.
Thus, the DMP formulation allows us to define an arbitrary movement in task space and later scale its execution time as well as to move the goal position while retaining a sane shape.
A major downside of the formulation is that the final velocity is always zero, making it unsuitable for kick motions because kick motions need to reach the target with a specific velocity. Solutions to this problem were proposed by
Kober et al. \cite{koberDmp} and Mülling et al. \cite{Muelling30012013}.
They replaced the goal $g$ in \equref{eq:transformation_system_1} with the position, velocity and acceleration of a moving target $g_p$.
\begin{align}
\tau\dot{z} = \alpha_z(\beta_z(g_p - y) + \tau\dot{g_p} - z) + \tau^2\ddot{g_p} + sf(s) \label{eq:muellingDmp}
\end{align}
While Kober et al. used a target that is moving on a straight line, Mülling et al. used a fifth order polynomial.
\begin{align}
g_p(t) = \sum_{i=0}^5b_it^i,~~~~
\dot{g_p}(t) = \sum_{i=1}^5ib_it^{i-1},~~~~
\ddot{g_p}(t) =\sum_{i=2}^5(i^2-i)b_it^{i-2}
\end{align}
The coefficients $b_i$ are calculated by applying the bounding conditions:
\begin{align}
g_p(t_0) &= y_0, ~~~ \dot{g_p}(t_0) = \dot{y_0}, ~~~ \ddot{g_p}(t_0) = \ddot{y_0}\\
g_p(\tau) &= g, ~~~ \dot{g_p}(\tau) = \dot{g}, ~~~ \ddot{g_p}(\tau) = 0
\end{align}
Due to the time dependency, the coefficients need to be recalculated if $\tau$ changes.
In this way, a new parameter $\dot{g}$ is introduced. It represents the velocity at the end of the movement. However, the weights now depend on $\dot{g}$ as well. This means that if the goal velocity is changed, the shape of the movement will change. As shown in \figref{fig:muelling_dmp_speed_scale:a}, the trajectory reacts to changes in the goal velocity with huge changes and becomes inexecutable.
\begin{figure}[tb]
\centering
\subfigure[]{
\includegraphics[width=0.475\textwidth]{muelling_dmp_bend_without_scale}
\label{fig:muelling_dmp_speed_scale:a}
}
\subfigure[]{
\includegraphics[width=0.475\textwidth]{muelling_dmp_bend_with_scale}
\label{fig:muelling_dmp_speed_scale:b}
}
\caption{Reaction of the DMP to changes in the final velocity. (a) Shows the reaction of the original DMP while (b) shows how the DMP reacts with our new scaling term $A$.}
\label{fig:muelling_dmp_speed_scale}
\end{figure}
We propose to fix this by scaling the forcing term with the novel factor $A = (\dot{g}_{new} - \dot{y}_0) / (\dot{g} - \dot{y}_0)$, where $\dot{y}_0$ is the starting velocity of the trajectory and $\dot{g}_{new}$ is the new goal velocity.
\Figref{fig:muelling_dmp_speed_scale:b} shows that this produces much better results. If the velocity is increased, the wind up phase gets longer, if it is reduced, the wind up phase gets shorter until it completely disappears if the requested goal velocity is zero. This is exactly the behavior that one would expect from a kick motion.
Thus, the final form of the DMP used in our experiments is:
\begin{align}
\tau\dot{z} &= \alpha_z(\beta_z(g_p - y) + \tau\dot{g_p} - z) + \tau^2\ddot{g_p} + sf(s)A\\
\tau\dot{y} &= z\label{eq:dmp_final}\\
\tau\dot{s} &= -\alpha_ss
\end{align}
This DMP responds well to changes in goal position and goal velocity. It is noteworthy that both parameters can be changed mid-execution without causing discontinuities. The implementation used in our experiments has been released as part of the B-Human Code Release 2015\cite{codeRel2015} and is available online\footnote{{\scriptsize \url{https://github.com/bhuman/BHumanCodeRelease/tree/master/Src/Tools/Motion}}}.
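As an illustration, a one-dimensional version of this DMP can be integrated with explicit Euler steps as sketched below. The sketch is ours and is not the released implementation; it assumes zero initial and final accelerations in the boundary conditions of the moving target, uses the values $\alpha_z=25$ and $\beta_z=6.25$ from above, and computes $\alpha_s$ from $s(\tau)=0.01$. The factor $A$ is passed in explicitly and equals $1$ when the motion is executed at the demonstrated goal velocity.
\begin{verbatim}
import numpy as np

def poly_target(y0, yd0, g, gd, tau):
    # b_0..b_5 from the six boundary conditions (zero accelerations assumed).
    M = np.zeros((6, 6))
    for j, t in enumerate((0.0, tau)):
        M[3*j]     = [t**i for i in range(6)]
        M[3*j + 1] = [i * t**(i-1) if i >= 1 else 0.0 for i in range(6)]
        M[3*j + 2] = [(i*i - i) * t**(i-2) if i >= 2 else 0.0
                      for i in range(6)]
    return np.linalg.solve(M, np.array([y0, yd0, 0.0, g, gd, 0.0]))

def run_dmp(y0, yd0, g, gd, tau, w, c, sigma2, A=1.0,
            alpha_z=25.0, beta_z=6.25, dt=0.001):
    # A = (gd_new - yd0) / (gd - yd0) when the goal velocity is changed.
    alpha_s = -np.log(0.01)
    b, i_ = poly_target(y0, yd0, g, gd, tau), np.arange(6)
    y, z, s, traj = y0, tau * yd0, 1.0, [y0]
    for step in range(int(round(tau / dt))):
        t = step * dt
        gp   = np.sum(b * t**i_)                       # moving target g_p
        gpd  = np.sum(i_[1:] * b[1:] * t**(i_[1:] - 1))
        gpdd = np.sum((i_[2:]**2 - i_[2:]) * b[2:] * t**(i_[2:] - 2))
        psi  = np.exp(-0.5 * (s - c)**2 / sigma2)
        f    = psi @ w / (psi.sum() + 1e-12)
        zd = (alpha_z * (beta_z * (gp - y) + tau * gpd - z)
              + tau**2 * gpdd + s * f * A) / tau
        z += zd * dt
        y += z / tau * dt
        s -= alpha_s * s / tau * dt
        traj.append(y)
    return np.array(traj)
\end{verbatim}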
\section{Evaluation}
\label{sec:eval}
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& B-Human left & Imitated left & B-Human right & Imitated right\\\hline\hline
Avg. distance & 5.3 m & 4.3 m & 4.61 m & 3.6 m\\
Avg. angular deviation & $5.62^\circ$ & $8.06^\circ$ & $9.69^\circ$ & $7.48^\circ$\\
Avg. ball location (cm) & (-2.53, 496.93)&(-1.63, 430.03)& (7.52, 453.53)&(4.41, 361.56)\\
Royston H-value & 4.48 & 0.407 & 1.62 & 3.08\\
Royston p-value & 0.106 & 0.812 & 0.43 & 0.000062\\
Is normally distributed & Yes & Yes & Yes & No\\\hline
\end{tabular}
\end{center}
\caption{Kick distance comparison between the B-Human kick motion of 2015 and an imitation of that motion. The data for the imitated kick with the right leg contained two outliers. If those outliers are removed, the result is normally distributed as well.}
\label{tab:imitatedKickResults}
\end{table}
Several experiments have been done to evaluate the kick motion. The setup is identical for all experiments: The robot is standing at the side line of the field and kicks the ball into the field, as depicted in \figref{fig:kick_experiments}. All experiments have been done with the official RoboCup SPL ball of 2015, a Mylec street hockey ball that is 65 mm in diameter and weighs 55 g. Each experiment consists of 30 kicks.
We used the Royston H-Test\cite{royston1983some} with a significance level of 0.05 to determine that the measured kick distances are normally distributed. Normally distributed kick results indicate that the results have only been influenced by natural noise, \ie there is probably no systematic error in the test setup or the implementation.
To compare the performance of the kick motion to an existing one, we used imitation learning \cite{schaal2003control} to learn weights that imitate the kick motion of team B-Human of 2015\cite{codeRel2015,BIKE-RoboCup-2010} and executed 30 kicks with each leg and each kick motion.
The results can be seen in \tabref{tab:imitatedKickResults} and \figref{fig:compare_imitated_kick}.
\begin{figure}[tb]
\centering
\subfigure[]{
\includegraphics[width=0.475\textwidth]{kick_left_comparision}
\label{fig:compare_imitated_kick:a}
}
\subfigure[]{
\includegraphics[width=0.475\textwidth]{kick_right_comparision}
\label{fig:compare_imitated_kick:b}
}
\caption{Kick positions of the B-Human and imitated kick motions: The red cross is the kick origin. The dots are the positions where the balls came to a halt. The blue dots originate from the B-Human kick, the red dots from the imitated kick. (a) shows the results of kicks with the left foot while (b) shows the right foot.}\label{fig:compare_imitated_kick}
\end{figure}
To test the generalization qualities of the kick motion, we conducted four experiments with different ball positions and velocities. In the first experiment, the ball is positioned 65 mm to the left. In the second experiment, it is moved 80 mm forward, and the third and fourth experiments reduced the kick velocity by $1/4$ and $1/2$ of the original kick velocity, respectively. The results can be seen in \tabref{tab:generalizationResults}. To reach the position of the left ball, the robot had to fully stretch its leg. Therefore, the knee motor could not be used to generate a forward force, thereby significantly reducing the reached kick distance. The other experiments show a reasonable scaling towards the desired kick distance. Videos showing the kick generalization can be found at \url{https://youtu.be/g73pPCWcQvw} and \url{https://youtu.be/eANtiAiMmTg}.
\begin{figure}[tb]
\centering
\subfigure[]{
\includegraphics[width=0.475\textwidth]{kick_left_versuch}
\label{fig:kick_experiments:a}
}
\subfigure[]{
\includegraphics[width=0.475\textwidth]{kick_front_versuch}
\label{fig:kick_experiments:b}
}
\caption{Ball position generalization experiments: In (a) the ball was moved 65 mm to the left, in (b) it was moved 80 mm to the front.}
\label{fig:kick_experiments}
\end{figure}
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
& Left ball & Forward ball & $3/4$ speed & $1/2$ speed\\\hline\hline
Avg. distance & 1.31 m & 3.29 m & 2.4 m & 1.83 m\\
Avg. angular deviation & $5.91^\circ$ & $4.98^\circ$ & $5.83^\circ$ & $4.73^\circ$\\
Avg. ball location (cm) & (12.63, 130.93) & (22.87, 328.0) &(5.44, 239.16)&(2.96, 182.90)\\
Royston H-value & 6.17 & 1.15 & 3.65 & 3.43\\
Royston p-value & 0.04 & 0.56 & 0.144 & 0.142\\
Is normally distributed & No & Yes & Yes & Yes \\\hline
\end{tabular}
\end{center}
\caption{Kick distance results for generalized kicks. The data of the left generalization contained one outlier. If it is removed, the result is normally distributed.}
\label{tab:generalizationResults}
\end{table}
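The summary statistics reported in \tabref{tab:generalizationResults} can be reproduced from raw rest positions as sketched below; the averaging conventions (absolute per-kick angular deviation, origin at the kick position, $y$-axis pointing into the field) are our assumptions rather than part of the released evaluation code.
\begin{verbatim}
import numpy as np

def kick_statistics(rest_positions_cm):
    # rest_positions_cm: (N, 2) array of (x, y) rest positions in cm,
    # measured from the kick origin with +y pointing into the field.
    xy = np.asarray(rest_positions_cm, dtype=float)
    dist_cm = np.hypot(xy[:, 0], xy[:, 1])
    # Per-kick deviation from the straight-ahead direction, in degrees.
    dev_deg = np.degrees(np.arctan2(xy[:, 0], xy[:, 1]))
    return {
        "avg_distance_m": dist_cm.mean() / 100.0,
        "avg_angular_deviation_deg": np.abs(dev_deg).mean(),
        "avg_ball_location_cm": tuple(xy.mean(axis=0).round(2)),
    }
\end{verbatim}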
\section{Conclusion}
\label{sec:conclusion}
We presented a kick motion for the NAO robot that can imitate arbitrary kick trajectories and adapt them to different ball positions as well as different kick velocities. The kick motion is modeled using a slightly modified variant of DMPs. While executing the kick, the robot is kept dynamically stable using a ZMP preview controller. Additionally, we proposed a model of the NAO's motors and used it to improve the calculation of the ZMP.
We have shown that this method of generating kick motions works but cannot kick the ball as far as a manually tuned motion. However, the generated motions are more versatile. The ball does not need to be placed perfectly to be kicked and the kick speed can be adjusted. Additionally, the underlying DMPs are easy to extend and lend themselves well to a multitude of optimization algorithms \cite{kober2013reinforcement}.
The kick motion presented in this paper has been successfully used in the corner kick challenge competition at RoboCup 2015.
\subsection*{Acknowledgement}
We would like to thank the members of the team B-Human for providing the
software framework for this work.
\section{Introduction}
One case of homological mirror symmetry is an equivalence between the derived category of coherent sheaves on a Fano variety $X$ and the Fukaya-Seidel category of its mirror Landau-Ginzburg, or LG, model $\mathbf{w} : X^{mir} \to \mathbb{C}$. There are many constructions of the mirror \cite{auroux}, \cite{HV} but all depend on a choice of symplectic form on $X$. Moving within the complexified K\"ahler cone of $X$ gives an open parameter space of mirror LG models. While the Fukaya-Seidel categories of any two mirror LG models from this space are equivalent, we may assign distinct exceptional collections and semi-orthogonal decompositions to certain regions. We observe that these decompositions should be related to the space of stability conditions of $D^b (X)$. For more on LG models from this vantage point, see \cite{HKK}, \cite{IKS} , \cite{KKSP} and \cite{CP}.
In \cite{DKK}, \cite{Kerr}, the authors examine this phenomenon in the toric context and compactify the space of LG models into a toric stack $\mlg{A}{A^\prime}$ using methods from \cite{GKZ}, \cite{Lafforgue}. The boundary of this stack gives degenerations of the LG models where both the fiber and the base of the model degenerate. Examining the fixed points of $\mlg{A}{A^\prime}$, we see an LG model decompose into a chain of regenerated circuit LG models. In \cite{DKK} we considered the symplectic topology of these degenerated pieces and observed that, under homological mirror symmetry, they correspond to semi-orthogonal components of $D^b(X)$ obtained by running the toric minimal model program on the mirror toric Fano $X$; in other words, the mirror symmetric decomposition is a run of the minimal model program on $X$. We expect that this type of correspondence between Mori theoretic semi-orthogonal decompositions and degenerated Landau-Ginzburg models holds in much more generality, leading to a new approach to birational geometry.
In addition to reviewing the definitions and theorems in the approach outlined above, we give a detailed account of a basic example, which is still rich in structure. Here our LG models are simply single variable degree $(n + 1)$ polynomials $f:\mathbb{C}\rightarrow\mathbb{C}$, and the Newton polytope is the interval $[0,n + 1]$. Because the fibers are finite sets, we can set aside much of the technical symplectic topology in \cite{DKK}. The secondary polytope of the interval was investigated in \cite{GKZ}, where it was shown to be a cube whose lattice structure is strongly related to $A_n$ representation theory. We investigate the universal family over the toric stack of this cube which was identified with a quotient of the Losev-Manin space in \cite{blume}. Finally, we analyze the monotone path polytope in this setting, as well as the combinatorial structure of the vanishing thimbles near the degenerated LG models. After this careful study, we observe connections with the classical representations of $A_n$ quivers and an interpretation of reflection functors as wall crossing in the space of stability conditions of the $A_n$ category.
In the final section of the paper, we explore the homological mirror of the three point blow up $X$ of $\mathbb{P}^2$. We build on the work of \cite{kalo} which studied relations between Sarkisov links. In the usual setup of Sarkisov links, earlier contractions do not play a prominent role. We explain how the toric compactification of the LG model mirror of $X$ preserves this data and gives a more complete picture of the minimal model program for $X$.
\vspace{2mm}{\em Acknowledgements:} The authors would like to thank D. Auroux, M. Ballard, C. Doran, D. Favero, M. Gross, F. Haiden, A. Iliev, S. Keel, M. Kontsevich, J. Lewis, T. Pantev, C. Prizhalkovskii, H. Rudatt, E. Scheidegger, Y. Soibelman and G. Tian for valuable comments and suggestions.
\section{\label{sec:toric} Toric Landau-Ginzburg models}
In this section we review constructions from \cite{DKK} which compactify the moduli of hypersurfaces in a toric stack and a moduli space of LG models. This is followed by a detailed definition of radar screens, which are distinguished bases for the LG models designed to preserve categories in the degenerated models. The choices involved in defining these bases are condensed into a torsor over the monotone path stack.
\subsection{\label{sec:tslg} Toric stacks and LG models}
We start by introducing the toric machinery that we need for the rest of the paper. Letting $M$ be a rank $d$ lattice and $A$ a finite subset in $ M$, we take $\du{A} \subset N$ to be the collection of primitive normal vectors to facets of $Q = {\text{Conv}} ( A)$. Here we use the usual notation of $N = {\text{Hom}} (M , \mathbb{Z})$ and write ${\text{Conv}} (A)$ for the convex hull of a set of points. The normal fan $\mathcal{F}_Q$ of $Q$ has $\du{A}$ as the set of generators for its one cones and defines an abstract simplicial complex structure on the set $\du{A}$. We take the toric variety $X_Q$ to be the variety associated to $\mathcal{F}_Q$.
To promote this to a toric stack, we follow the prescription given in \cite{BCS} and \cite{Cox}. Define $U_Q \subset \mathbb{C}^{\du{A}}$ to be the open toric variety given by taking the fan in $\mathbb{R}^{\du{A}}$ consisting of cones $ {\text{Cone}} \{e_\alpha : \alpha \in \sigma\}$ where $\sigma$ is any cone in the normal fan $\mathcal{F}_Q$ of $Q$. The map $\beta_{\du{A}} : \mathbb{Z}^{\du{A}} \to N$ is defined to take $e_\alpha$ to $\alpha$ and we write its kernel and cokernel as $L_{\du{A}}$ and $K_{\du{A}}$. Define the group $\ptgroup{Q} = (L_{\du{A}} \otimes \mathbb{C}^*) \oplus {\text{Tor}} (K_{\du{A}}, \mathbb{C}^* )$ as a subgroup of $(\mathbb{C}^*)^{\du{A}}$ using the inclusion and connecting homomorphism to obtain the stack
\begin{equation*}
\mathcal{X}_Q = [U_Q / \ptgroup{Q}] .
\end{equation*}
The variety $X_Q$ is the coarse space of $\mathcal{X}_Q$. As in the case of toric varieties defined from polytopes, the stack $\mathcal{X}_Q$ comes equipped with a line bundle $\mathcal{O}_Q (1)$. Letting $Q_\mathbb{Z} = Q \cap M$, the space of sections $H^0 (\mathcal{X}_Q , \mathcal{O}_Q (1))$ has an equivariant basis $\{s_\alpha :\alpha \in Q_\mathbb{Z} \}$ and a linear system $\linsys{A} = {\text{Span}} \{s_\alpha : \alpha \in A\}$. We distinguish two open subsets of $\linsys{A}$, the full sections
\begin{equation*} \linsysfull{A} = \{s = \sum c_\alpha s_\alpha : c_\alpha \ne 0 , \text{ for all vertices } \alpha \in Q\}, \end{equation*}
and the very full sections
\begin{equation*} \linsysveryfull{A} = \{s = \sum c_\alpha s_\alpha : c_\alpha \ne 0 \text{ for every } \alpha \in A\}. \end{equation*}
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{examples1.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3700,998)(2876,-3710)
\end{picture}%
\caption{\label{fig:examples1} The sets $\example{1}$ and $\example{2}$.}
\end{figure}
As illustrated in Figure \ref{fig:examples1}, we take the sets $\example{1} = \{0, 1, 2, 3\} \subset \mathbb{Z} $ and $\example{2} = \{(0, 0), (-1, -1), (-1, 0), (0, 1), (1, 1), (1, 0), (0, -1)\} \subset \mathbb{Z}^2$. The first example gives $X_{\examplep{1}} = \mathbb{P}^1$, with line bundle $\mathcal{O}_{\example{1}} (1) = \mathcal{O} (3)$, and the linear system $\linsys{\example{1}}$ consists of all sections. The full sections $\linsysfull{\example{1}}$ are those that do not vanish at the torus fixed points $0$ and $\infty$. The second example $X_{\examplep{2}}$ is a $3$ point blow up of $\mathbb{P}^2$. The bundle $\mathcal{O}_{\examplep{2}} (1)$ in this case is the anti-canonical bundle and the linear system again consists of all sections.
Many LG models arising in homological mirror symmetry are obtained from pencils on $\mathcal{X}_Q$ contained in $\linsys{A}$. It is common for the behavior of these pencils at infinity and zero to be prescribed. We now give a concise definition of this constraint.
\begin{defn} Let $A^\prime$ be a proper subset of $A$. An $A^\prime$-sharpened pencil on $\mathcal{X}_Q$ is a pencil $W \subset \linsys{A}$ which has a basis $\{s_1, s_\infty\}$ for which $s_1 \in \linsysveryfull{A}$ and $s_\infty = \sum_{\alpha \in A^\prime} c_\alpha s_\alpha$. Let $\shpen{A}{A^\prime}$ be the open subset of $A^\prime$-sharpened pencils in the Grassmannian $Gr_2 (H^0 (\mathcal{X}_Q , \mathcal{O}_Q (1)))$.
\end{defn}
Let us examine two other equivalent ways of defining an $A^\prime$-sharpened pencil. If $s_1 = \sum_{\alpha \in A} c_\alpha s_\alpha \in W \cap \linsysveryfull{A}$, then take $s_0 = \sum_{\alpha \not\in A^\prime} c_\alpha s_\alpha$ and $s_\infty = \sum_{\alpha \in A^\prime} c_\alpha s_\alpha$. The pair $(s_0, s_\infty) \in \mathbb{C}^{(A^\prime)^\circ} \times \mathbb{C}^{A^\prime}$ gives another basis for $W$ which is unique up to a multiple $(\lambda s_0 , \lambda s_\infty)$ for some $\lambda \in \mathbb{C}^*$. We define
\begin{equation*} \mathbf{w} = [s_0 : s_\infty] : \mathcal{X}_Q - \{ s_\infty = 0\} \to \mathbb{C} \end{equation*}
to be the Landau-Ginzburg model of the $A^\prime$-sharpened pencil $W$.
Alternatively, we may write $W \in \shpen{A}{A^\prime}$ as the closure of an equivariant map, or orbit, $i_W : \mathbb{C}^* \to \linsysveryfull{A}$. Taking the one parameter subgroup $G_{A^\prime} \subset (\mathbb{C}^*)^A$ given by the cocharacter $\gamma_{A^\prime} = \sum_{\alpha \in A^\prime} e^\vee_\alpha \in (\mathbb{Z}^A)^\vee$ and any very full section $s \in W$, observe that $W = \overline{\{\lambda \cdot s : \lambda \in G_{A^\prime}\}}$. When referring to an $A^\prime$-sharpened pencil, we may utilize any one of these three equivalent viewpoints. As we will observe in the next section, the orbit perspective turns out to be quite useful.
In general, the fibers of $\mathbf{w}$ over $0$ and $\infty$ have bad behavior which is corrected by judicious blow ups. We explain this bad behavior from a global perspective. Let $\mathcal{D}_Q = \sum \mathcal{D}_i$, where the sum is over the facets of $Q$, be the toric boundary of $\mathcal{X}_Q$ and for any subset $J$ of facets, let $\mathcal{Z}_J = \cap_{j \in J} \mathcal{D}_j$. If $s \in \linsys{A}$, write $\mathcal{Y}_s$ for the hypersurface defined by $s$ and $\mathcal{Y}_{s, J} = \mathcal{Y}_s \cap \mathcal{Z}_J$. For any subset $U \subset \linsys{A}$, we have the incidence stacks $\mathcal{I} (U) \subset U \times \mathcal{X}_Q$ and $\mathcal{I}_J (U) \subset U \times \mathcal{X}_Q$ whose points are given by pairs $\{(s, y) : s \in U, y \in \mathcal{Y}_s\}$ and $\{(s, y ) : s \in U, y \in \mathcal{Y}_{s, J} \}$ respectively.
\begin{prop} The set $U = \linsysfull{A}$ is the maximal open subset of $\linsys{A}$ for which the projection $\pi_{\linsys{A}} : \mathcal{I}_J (U) \to U$ is flat for all subsets $J$.
\end{prop}
This follows from the observation that the sections which are not full are precisely those whose hypersurfaces contain fixed points of the toric action, and thus contain the zero dimensional intersections $\mathcal{Z}_J$. For our purposes, a reasonable moduli space of sections should not exhibit this behavior. In the next subsection, we modify these sections along with their fibers in order to obtain a proper flat family.
\subsection{The secondary stack}
To remedy the fact that the incidence varieties give a poorly behaved parameter space for the hypersurface, we review the constructions of the secondary and Lafforgue stacks given in \cite{DKK}, where more details can be found. We assume the reader is familiar with material found in \cite{BCS}, \cite{Cox} and \cite{GKZ}. Given $A$ as above, the secondary polytope $\psec{A} \subset \mathbb{R}^A$ is an $(|A| - d - 1)$-dimensional polytope whose faces correspond to regular subdivisions $S = \{(Q_i, A_i) : i \in I\}$ of $A$. The normal fan $\fans{A}$ of $\psec{A}$ can be refined to a fan $\fanlt{A}$ as in \cite{hacking} and \cite{Lafforgue} by considering pairs $( S, Q^\prime )$ of a subdivision $S$ along with a set $Q^\prime$ which is a face of a subdivided polytope $(Q_i , A_i) \in S$. Then a cone $\sigma_{(S, Q^\prime )} $ in $\fanlt{A}$ is defined as all functions $\eta$ on $A$ whose lower convex hull gives the marked subdivision $S$ and whose minimum is achieved on $Q^\prime \cap A$.
\begin{prop}[\cite{DKK}] If $\Delta^{A} \subset \mathbb{R}^A$ is the unit simplex, then $\fanlt{A}$ is the normal fan of the Minkowski sum $\ptlaf{A} := \psec{A} + \Delta^A \subset \mathbb{R}^A$.
\end{prop}
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{seclaf1.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(6800,2214)(278,-3223)
\end{picture}%
\caption{\label{fig:example2} The secondary polytope and Lafforgue polytope for $\example{1}$}
\end{figure}
As the secondary polytope $\psec{A}$ lies in a translation of the kernel $L_A \otimes \mathbb{R} := \ker (\mathbb{R}^A \to M_\mathbb{R} \oplus \mathbb{R})$, it satisfies $(d + 1)$ linear constraints. Similarly, the polytope $\ptlaf{A}$, which we call the Lafforgue polytope, lies in a translation of the hyperplane $H = \{\sum_\alpha c_\alpha e_\alpha : \sum_{\alpha} c_\alpha = 0 \}$ and therefore has $(|A| - 1)$ dimensions. This drastically limits the number of examples of Lafforgue polytopes that one can visualize, but our case $\example{1}$ rendered in Figure \ref{fig:example2} gives an indication of the relationship between the original marked polytope $(Q, A)$, the secondary polytope $\psec{A}$ and the Lafforgue polytope $\ptlaf{A}$. Observe that to each vertex of the secondary polytope, there is a regular triangulation of $(Q, A)$ which can be seen as a unique subset of the facets of the Lafforgue polytope. The second example, $\example{2}$, has a $4$-dimensional secondary polytope with $32$ vertices and a $6$-dimensional Lafforgue polytope. Nevertheless, we will be able to use a polytope derived from this data to analyze the minimal model runs of the homological mirror of $\mathcal{X}_{\examplep{2}}$ in the last section.
Since $\ptlaf{A}$ is a Minkowski sum, we have maps $\tilde{\pi}_{A} : X_{\ptlaf{A}} \to X_{\psec{A}}$ and $\tilde{\pi}_Q : X_{\ptlaf{A}} \to \mathbb{P}^{|A| - 1}$. If $i: p \hookrightarrow X_{\psec{A}}$ sends a point to the orbit $\orb{S}$ associated to a subdivision $S = \{(Q_i , A_i ) : i \in I\}$, then we may define $X_S$ as the pullback in the fiber square
\begin{equation} \label{eq:fibersquare}
\begin{CD}
X_S @>j>> X_{\ptlaf{A}} @>{\tilde{\pi}_Q}>> \mathbb{P}^{|A| - 1} \\ @V{\rho_S}VV @V{\tilde{\pi}_A}VV @. \\ p @>i>> X_{\psec{A}} @.
\end{CD}
\end{equation}
One only needs to trace through the definitions to see that $\tilde{\pi}_Q \circ j$ maps $X_S$ into the union of the images of the toric varieties $X_{Q_i}$ via their $\linsys{A_i}$ maps. On the level of varieties, this gives us a simultaneous degeneration of $X_Q$ and $\mathcal{O}_Q (1)$. Taking a global section of $\tilde{\pi}_Q^* (\mathcal{O} (1))$ yields a degeneration of hypersurfaces. In this way, we have a universal space for performing the degenerations along the lines of the Mumford construction.
We would like to promote this setup to a morphism of stacks $\pi : \laf{A} \to \secon{A}$ so that $\mathcal{I} (\linsysfull{A} )$ has an \'etale map to $\laf{A}$ and the quotient $[\mathcal{I} (\linsysfull{A}) / (\mathbb{C}^*)^{d + 1}]$ is naturally an open substack of $\secon{A}$. This was done carefully in \cite{DKK} and we review the procedure here.
Both the secondary and Lafforgue polytopes have vertices in a hyperplane parallel to $H_\mathbb{Z} = \{\sum c_\alpha e_\alpha : \sum c_\alpha = 0\} \subset \mathbb{Z}^A$. We consider both polytopes to live in $H \subset \mathbb{R}^A$ and write $i_H : H_\mathbb{Z} \to \mathbb{Z}^A$ for the inclusion. As with the case of $\du{A}$, there is an exact sequence
\begin{equation} \label{eq:fundex} \begin{CD} 0 @>>> L_A @>{\delta_A}>> \mathbb{Z}^A @>{\beta_A}>> M @>>> K_A @>>> 0 \end{CD} . \end{equation}
The hyperplanes supporting $\ptlaf{A}$ can be partitioned into horizontal and vertical hyperplane sections $ \du{\ptlaf{A}} = \du{\ptlaf{A}}^h \cup \du{\ptlaf{A}}^v$ and the vertical hyperplanes are all scalar multiples of $(\beta_A \circ i_H)^\vee (\du{A})$. To define a stacky fan, one must choose generators for the one cones. As opposed to taking primitives for the one cones in $\du{\ptlaf{A}}^v$, we take the generators in the image $(\beta_A \circ i_H)^\vee (\du{A})$, while for the horizontal hyperplanes in $\du{\ptlaf{A}}^h$, we choose the primitives of the hyperplanes in $\du{\ptlaf{A}}^h$. This defines the stacky fan which gives the Lafforgue stack $\laf{A}$ associated to $A$.
To give the definition of the secondary stack, we rely on a universal colimit construction for toric stacks. It is not hard to show that, if $\tilde{\mathcal{X}}_{\psec{A}}$ is the stack given by $\psec{A}$, then there is a map $g: \laf{A} \to \tilde{\mathcal{X}}_{\psec{A}}$. The colimit stack $\secon{A}$ of $g$ comes equipped with a map $\pi : \laf{A} \to \secon{A}$. Both $\secon{A}$ and $ \pi$ can be described by the universal property that if $g$ can be factored into two flat, equivariant morphisms $h_1 \circ h_2$ where $h_1 : \laf{A} \to \mathcal{X}$ and $\mathcal{X}$ is a (good) toric stack, then there is a map $k: \secon{A} \to \mathcal{X}$ with $h_1 = k \circ \pi$. This property makes $\secon{A}$ the best toric candidate for the moduli stack of hypersurfaces in $\mathcal{X}_Q$.
\begin{thm}[\cite{DKK}] There is a hypersurface $\hyp{A} \subset \laf{A}$ for which the map $\pi: \hyp{A} \cap (\partial \laf{A})_J \to \secon{A}$ is flat for all horizontal boundary strata $J \subset \du{\ptlaf{A}}^h$. The stack $\secon{A}$ contains a dense open substack $V \approx [\mathcal{I} (\linsysfull{A} ) / (\mathbb{C}^*)^{d + 1}]$ for which $\pi : \hyp{A} (V) \to V$ is equivalent to the quotient map of $[\mathcal{I} (\linsysfull{A} ) / \mathbb{C}^*]$.
\end{thm}
This theorem shows that $\pi : \hyp{A} \to \secon{A}$ is a reasonable compactification of the universal hypersurface over the moduli stack of toric hypersurfaces. It is this compactification that allows us to degenerate LG models and understand their components.
It will be useful to identify the hypersurface of sections in $V \subset \secon{A}$ that do not transversely intersect the toric boundary of $\mathcal{X}_Q$. Recall that the principal $A$-determinant from \cite{GKZ} does this precisely and has the secondary polytope as its Newton polytope. Thus it can be written naturally as a section of $\mathcal{O}_{\secon{A}} (1)$ and we write $\mathcal{E}_A$ for its zero locus.
\subsection{The stack of Landau-Ginzburg models}
In this subsection we will review a toric compactification of the space of Landau-Ginzburg models arising from $A^\prime$-sharpened pencils. Near the fixed points of this compactification, we give a procedure for obtaining a semi-orthogonal decomposition of the directed Fukaya category of the model.
The geometry of a fiber polytope has already proven useful in the case of a secondary polytope. As it turns out, this more general notion works well in describing several moduli problems in the toric setting \cite{logstable}, \cite{KerrFP}. In particular, given two toric varieties, or stacks, $\mathcal{X}_{Q_1}$ and $\mathcal{X}_{Q_2}$ arising from marked polytopes, one may define a space of equivariant morphisms $\psi : \mathcal{X}_{Q_1} \to \mathcal{X}_{Q_2}$ for which $\psi^* (\mathcal{O}_{Q_2} (1)) = \mathcal{O}_{Q_1} (1)$ up to toric equivalence. This space has a reasonable compactification to a toric stack whose moment polytope is the fiber polytope $\Sigma (Q_2, Q_1)$.
In the previous section, we examined the case where $Q_2$ was the simplex and $Q = Q_1$. This gave the secondary polytope $\psec{A} = \Sigma (Q_2, Q_1)$ as the moment polytope of the stack $\secon{A}$, which was regarded as a compactification of the moduli stack of toric hypersurfaces of $\mathcal{O}_Q (1)$ in $\mathcal{X}_{Q}$. Prior to this construction, we considered LG models on $Q$ to be $A^\prime$-sharpened pencils $W$ which were given as equivariant maps $i_W : \mathbb{C}^* \to \linsysfull{A}$. Two $A^\prime$-sharpened pencils $W$, $\tilde{W}$ are equivalent if $W = \lambda \tilde{W}$ for some $\lambda \in (\mathbb{C}^*)^{d + 1}$. Thus, from the perspective of equivariant maps, up to toric equivalence a LG model is an equivariant map $\iota_W : \mathbb{C}^* \to [ \linsysfull{A} / (\mathbb{C}^*)^{d + 1}]$. By equivariant, we mean with respect to the torus embedding $\mathbb{C}^* \to L_A^\vee \otimes \mathbb{C}^*$ given by the cocharacter $\gamma := \delta^\vee_A (\gamma_{A^\prime})$ where $\delta_A$ is defined in equation \ref{eq:fundex} and $\gamma_{A^\prime} \in (\mathbb{Z}^A)^\vee$ in section \ref{sec:tslg}. Note that the codomain of $\iota_W$ is an open chart for the stack $\secon{A}$ implying that the collection of such maps is an open chart of equivariant maps from $\mathbb{P}^1$ to the toric stack $\secon{A}$ with respect to the character map $\gamma : L_A \to \mathbb{Z}$.
This map $\gamma : L_A \to \mathbb{Z}$ induces a map on polytopes $\psec{A} \to [0, N]$ for some $N$ determined by $A^\prime$. The fiber polytope $\Sigma (\psec{A} , [0, N])$ is known as the monotone path polytope (an example of an iterated polytope, see \cite{BS2}) and is denoted $\Sigma_{\gamma} (\psec{A} )$. The associated fiber stack $\mlg{A}{A^\prime}$ defined in \cite{KerrFP} then serves as a compactification of the open set of $G_{\gamma_{A^\prime}} := \mathbb{Z} \langle \gamma \rangle \otimes \mathbb{C}^*$ orbits contained in the dense subset of $\secon{A}$. Its coarse space is simply the toric variety associated to $\Sigma_\gamma (\psec{A})$. We codify these notions in the following proposition.
\begin{prop} The quotient stack $\qshpen{A}{A^\prime} = [\shpen{A}{A^\prime} / (\mathbb{C}^*)^{d + 1}]$ of $A^\prime$-sharpened pencils forms an open dense subset of the proper toric stack $\mlg{A}{A^\prime}$. The fixed points of $\mlg{A}{A^\prime}$ are in one to one correspondence with parametric simplex paths of $\gamma : \psec{A} \to [0, N]$ and will be called maximal degenerations of $\mathbf{w}$.
\end{prop}
Recall from \cite{BS1} that a parametric simplex path of a linear function on a polytope is an edge path which increases relative to the linear function. One consequence of the above construction is that, over any point $\psi$ in $\mlg{A}{A^\prime}$, there is a chain $\langle \psi_1, \ldots, \psi_t \rangle$ of projective lines which has a flat family of degenerated toric varieties (or stacks) lying over it. In the dense orbit, there is one such line, and the toric variety is irreducible, so we obtain a LG model. As we approach the toric boundary, we bubble $\psi$ into a stable map $\{\psi_i\}$ on several components $\cup_1^t \mathbb{P}^1$, and simultaneously degenerate the fibers of the LG model. In a maximally degenerated LG model, we have a chain $\langle C_1, \ldots, C_k \rangle$ of maps to one dimensional orbits of $\secon{A}$. Such strata correspond to the edges of the secondary polytope $\psec{A}$ which in turn correspond to circuit modifications or bistellar flips of the triangulations at the vertices. In \cite{DKK}, components of the fibers over each stable component were examined and found to reproduce well known relations in the mapping class group. They were also conjectured to represent homological mirrors to birational maps of the minimal model on $\mathcal{X}_Q^{mir}$.
\subsection{Semi-orthogonal decompositions}
Our next goal is to stratify our space of Landau-Ginzburg models so that for every stratum, we obtain a semi-orthogonal decomposition of the Fukaya-Seidel category of the associated model. The decomposition we obtain will bear a direct relationship to the monotone paths corresponding to the maximal degenerations. To do this, we start by recalling the notion of a radar screen which will yield a class of distinguished bases of paths for the LG model \cite{SeidelFPL}. The definition given here differs from that in \cite{DKK}, but generalizes it and has the advantage of being defined for a generic LG model. Before we start the definition, it is worth keeping in mind that radar screens are auxiliary concepts depending only on configurations of points in $\mathbb{C}^*$ and do not depend on any of the toric stack definitions given earlier. In fact, one can consider their definition to be a logarithmic variant of the more conventional procedure which chooses a distinguished basis of paths to be those with constant imaginary value in the positive real direction \cite{HV}.
Let $E_r = (\mathbb{C}^*)^r / \mathfrak{G}_r$ be the parameter space of $r$ unmarked points in $\mathbb{C}^*$ and $P = \{z_1, \ldots, z_r\} \in E_r$. We order the points so that $ |z_i | \geq |z_{i + 1}|$ for $1 \leq i < r$ and choose a lift $\tilde{P} = \{w_1, \ldots, w_r\}$ such that $e^{w_i} = z_i$. Inductively define paths $p_i : [0,\infty) \to \mathbb{C}$ starting at $w_{i}$ as follows. For $i = 1$, we take the path $p_1 (t) = w_1 + t$. Assume $p_{i }$ has been defined, then we take $p_{i + 1} $ to be the concatenation $p_{i} \ast \ell_i \ast \ell_i^\prime$ where $\ell_i^\prime (t) = w_{i} + t \cdot \mathrm{i} \operatorname{Im} (w_{i +1} - w_i )$ for $t \in [0, 1]$ and $\ell_i (t) = \operatorname{Re} (w_i) + \mathrm{i} \operatorname{Im} (w_{i + 1}) - t \cdot \operatorname{Re} (w_i - w_{i + 1})$ for $t \in [0, 1]$. While the paths $p_i$ overlap, it is clear that for any $\varepsilon$, we can perturb $p_i$ to $\tilde{p}_i$ so that $||p_i - \tilde{p}_i||_{L^2} < \varepsilon$ and $\{\tilde{p}_i: 1 \leq i \leq r \}$ forms a distinguished basis for $\tilde{P}$. Furthermore, if the values $|z_i|$ are distinct, the distinguished basis defined in this way is unique up to isotopy for $\varepsilon \ll 1$.
\begin{defn} With the notation above, we say that $\mathcal{B}_{\tilde{P}} = \{e^{\tilde{p}_i} : 1 \leq i \leq r \}$ is a radar screen distinguished basis and take $\mathcal{R}_{P}$ to be the collection of all such bases. If $\tilde{P} \subset \{w \in \mathbb{C} : 0 \leq Im (w) < 2\pi\}$ we write $\mathcal{B}_{P}$ and call any such basis a fundamental radar screen.
\end{defn}
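The inductive construction above is easily made algorithmic. The following sketch (our illustration, with hypothetical input points) computes the corner points of the unperturbed piecewise-linear paths $p_i$ in the logarithmic $w$-plane for a fundamental lift.
\begin{verbatim}
import numpy as np

def radar_screen_paths(points):
    # points: nonzero complex numbers z_1, ..., z_r.
    # Returns, for each point, the corners of the unperturbed path
    # p_i in the w = log(z) plane; each path continues horizontally
    # to Re(w) -> +infinity after its last listed corner.
    zs = sorted(points, key=lambda z: -abs(z))  # |z_i| >= |z_{i+1}|
    ws = [complex(np.log(abs(z)), np.angle(z) % (2 * np.pi)) for z in zs]
    paths = [[ws[0]]]  # p_1 heads straight right from w_1
    for i in range(1, len(ws)):
        w_prev, w = ws[i - 1], ws[i]
        corner = complex(w_prev.real, w.imag)  # horizontal leg ell_i,
        # then the vertical leg ell_i' up to w_prev, then the old path.
        paths.append([w, corner] + paths[i - 1])
    return paths

pts = [0.5, 0.3 + 0.2j, -0.1 + 0.05j]  # hypothetical critical values
for k, p in enumerate(radar_screen_paths(pts), 1):
    print(f"p_{k} corners:", [f"{w:.2f}" for w in p])
\end{verbatim}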
Our main application of this definition is when $P$ is the collection of critical values of an LG model $\mathbf{w} \in \shpen{A}{A^\prime}$. Let $\Delta_{A, A^\prime}$ be the variety of all $A^\prime$-sharpened pencils that do not intersect the principal $A$ determinant $\mathcal{E}_A$ transversely, regarded as a subvariety of $\shpen{A}{A^\prime}$, $\qshpen{A}{A^\prime}$ or $\mlg{A}{A^\prime}$. We denote its complement in $\shpen{A}{A^\prime}$ and $\qshpen{A}{A^\prime}$ by $V_{A, A^\prime}$ and $\mathcal{V}_{A, A^\prime}$ respectively. Take $\tilde{E}_r = E_r / \mathbb{C}^*$ to be the quotient where $\mathbb{C}^*$ acts by multiplication.
\begin{prop} Suppose $r = |i_W^{-1} (\mathcal{E}_A )|$ for some $W \in V_{A, A^\prime}$. The map $\mathbf{c} : V_{A, A^\prime} \to E_r$ given by $\mathbf{c} (W) = i_W^{-1} (\mathcal{E}_A)$ can be completed to a commutative diagram
\begin{equation*}
\begin{CD} V_{A, A^\prime} @>{\mathbf{c}}>> E_r \\
@VVV @VVV \\
\mathcal{V}_{A, A^\prime} @>{\tilde{\mathbf{c}}}>> \tilde{E}_r
\end{CD}
\end{equation*}
\end{prop}
\begin{proof} This follows from the quasi-homogeneous property of the principal $A$ determinant with respect to the $(\mathbb{C}^*)^{d + 1}$ action on $\mathbb{C}^A$ (\cite{GKZ}). Indeed, we have that if $i_W, i_{\tilde{W}} \in \shpen{A}{A^\prime}$ are equivalent, then there exists $(\lambda, \eta) \in (\mathbb{C}^*)^{d + 1} = \mathbb{C}^* \times (\mathbb{C}^* \otimes N) $ such that $\lambda (1 \otimes \beta_A)^\vee (\eta ) i_W (z) = i_{\tilde{W}} (z)$ for all $z \in \mathbb{C}^*$. But then $E_A (i_{\tilde{W}} (z) ) = 0$ if and only if $\lambda^v \cdot \beta_A (p)(\eta ) \cdot E_A (i_W (z)) = 0$ where $v = (d + 1) \text{Vol} (Q)$, $p \in \psec{A}$ and $M$ is identified with ${\text{Hom}} (\mathbb{C}^* \otimes N , \mathbb{C}^*)$.
\end{proof}
This proposition shows that a choice of radar screen for the critical values of an $A^\prime$-sharpened pencil can be consistently made on the quotient space $\mathcal{V}_{A, A^\prime}$. Now, the discriminant $\Delta : E_r \to \mathbb{C}$ given by $\Delta (z_1,\ldots, z_r) = \prod_{i < j} (z_i - z_j)^2$ is homogeneous and thus its zero locus is pulled back from $\tilde{E}_r$. The associated braid group $\tilde{B}_r = \pi_1 (\tilde{E}_r - \{\Delta = 0\} )$ is in fact a quotient of the subgroup of the braid group $B_{r + 1}$ which is pure on the strand at the origin. It is clear that the map $\tilde{\mathbf{c}}$ induces a representation of the fundamental group of $\mathcal{V}_{A, A^\prime}$ into $\tilde{B}_r$. More generally, we have a representation of fundamental groupoids
\begin{equation*} \mathbf{r} : \Pi (\mathcal{V}_{A, A^\prime} ) \to \Pi (\tilde{E}_r - \Delta ). \end{equation*}
Define $\Delta_\mathbb{R} (z_1 , \ldots, z_r ) = \prod_{i < j} (|z_i | - |z_j|)^2$ to obtain a real stratification $\tilde{\mathcal{S}} = \{R_\rho : \rho \in \mathcal{P}\}$ of $\tilde{E}_r$. Here $\mathcal{P}$ denotes the set of partitions of $\{1, \ldots, r\}$ and $R_\rho = \{\{z_1, \ldots, z_r \} : |z_i| = |z_j| \text{ for } i \sim_\rho j ,\text{ and } |z_i| \leq |z_{i + 1}|\}$.
\begin{defn} The pullback stratification
\begin{equation*} \mathcal{S} = \{R \text{ a component of } \tilde{\mathbf{c}}^{-1} (R_\rho ) : R_\rho \in \tilde{\mathcal{S}}\} \end{equation*}
on $\qshpen{A}{A^\prime}$ will be called the norm stratification.
\end{defn}
From the description of the toric boundary strata of $\mlg{A}{A^\prime}$, we may extend the norm stratification on $\mathcal{V}_{A, A^\prime}$ to the boundary. We will avoid the details of this extension, which are evident from the fact that the orbits degenerate to sequences of orbits, and refer to the resulting stratification on $\mlg{A}{A^\prime}$ as the extended norm stratification.
To use the definitions given above, we need a result that gives us a well defined category on which to work. If the sharpening set $A^\prime$ is chosen carefully, the associated LG model $\mathbf{w}$ has a sensible definition of a Fukaya-Seidel category $\fs{\mathbf{w}}$. For example, we have the following proposition.
\begin{prop}[\cite{DKK}] \label{prop:lef} Assume $A^\prime \subset A$ is contained in the interior of $Q$. If $W$ transversely intersects the principal $A$-determinant, then its LG model $\mathbf{w}$ is a Lefschetz pencil.
\end{prop}
In particular, the singularities are isolated and Morse, and parallel transport is well defined along the base so that the usual notion of Fukaya-Seidel categories applies \cite{SeidelFPL}. Along with this, we have that the collection of distinguished bases is acted on by the full braid group $B_r$ which extends to an action on exceptional collections via mutations \cite{vcm}.
From this proposition and the results in the above references, it is not difficult to obtain our main theorem for this section.
\begin{thm}\label{thm:so} The space $\mathcal{V}_{A, A^\prime}$ has a stratification $\mathcal{S}$ such that, for any component $R \in \mathcal{S}$, there exists a $\mathbb{Z}^r$ torsor $\mathcal{C}_R$ of semi-orthogonal decompositions of $\fs{\mathbf{w}}$ satisfying:
\begin{itemize}
\item[(i)] If $R_1 < R_2 \in \mathcal{S}$ then there is a bijective map $\tau :\mathcal{C}_{R_2} \to \mathcal{C}_{R_1}$ such that the semi-orthogonal decomposition $S \in \mathcal{C}_{R_2}$ refines that given by $\tau (S)$.
\item[(ii)] If $\gamma: [0,1] \to \mathcal{V}_{A, A^\prime}$ is any morphism in $\Pi (\mathcal{V}_{A, A^\prime} )$ with $\gamma (0), \gamma (1) \in R_0$ giving exceptional collections, then $\mathbf{r} (\gamma )$ acts by mutation to map $\mathcal{C}_{R_0}$ to itself.
\item[(iii)] Assume $R \in \mathcal{S}$ and $\mathbf{w} \in \overline{R}$ is in the boundary of $\mlg{A}{A^\prime}$ and corresponds to a sequence $\langle \mathbf{w}_1 , \ldots, \mathbf{w}_t \rangle$. Then every semi-orthogonal decomposition in $\mathcal{C}_R$ refines the decomposition $\langle \fs{\mathbf{w}_1} , \ldots , \fs{\mathbf{w}_t} \rangle$.
\end{itemize}
\end{thm}
The first two statements follow directly from the definition, while the third follows from the structure of the monotone path polytope. Note that this gives a direct relationship between the combinatorics of maximal degenerations, or parametric simplex paths, and decompositions of Fukaya-Seidel categories of Landau-Ginzburg models near such degenerations.
\section{$A_n$-categories}
In this section we consider the most basic possible case, the directed $A_n$-category. We give a detailed construction of the secondary, Lafforgue and monotone path stacks in this case. In particular, we describe the combinatorics of the monodromy maps around the discriminant and toric boundary. Symplectic geometry in these cases is completely absent, as the Fukaya categories are of a more combinatorial nature. Nevertheless, the structure and geometry of the decompositions and representations of this category are surprisingly rich and illustrate some of the techniques that are applied in higher dimensions.
\subsection{The Lafforgue stack of an interval}
The derived $A_n$ category can be given by the Fukaya-Seidel category of a single polynomial
\begin{equation*} \mathbf{w} (x) = c_{n + 1} x^{n + 1} + \cdots + c_1 x + c_0 \end{equation*}
whose marked Newton polytope $(Q, A)$ is clearly $Q = [0, n + 1]$, $A = \{0, 1, \ldots, n + 1\}$.
A first step to understanding this example is to characterize the stacks $\secon{A}$ and $\laf{A}$. We will derive $\laf{A}$ by obtaining its stacky fan. The secondary polytope of $A$ was examined in \cite{GKZ} and seen to be affinely equivalent to the representation theoretic polytope $P (2 \rho )$ which is the convex hull of the dominant weights $\{\omega\}$ of $A_n$ such that $\omega \leq 2\rho$ where $2\rho $ is the sum of the positive roots. We begin by reviewing their observations and establishing notation.
Take $\{e_0, \ldots, e_{n + 1} \}$ as a basis for $\mathbb{Z}^A$ and $\{\alpha_1, \ldots, \alpha_n\}$ a basis for $\Lambda_r \approx \mathbb{Z}^n$ and examine the fundamental exact sequence for $A$
\begin{equation*}
\begin{CD} 0 @>>> \Lambda_r @>{\delta_A}>> \mathbb{Z}^A @>{\beta_A}>> \mathbb{Z}^2 @>>> 0 \end{CD}
\end{equation*}
where $\beta_A (e_i) = (1, i)$ and $\delta_A (\alpha_i ) = -e_{i - 1} + 2 e_i - e_{i + 1} $. Write $C_n$ for the $A_n$ Cartan matrix
and recall that this serves as a transformation matrix from the simple roots to the fundamental weights. We will view $\Lambda_r$ as the $A_n$ root lattice with simple roots $\{ \alpha_i\}$, fundamental weights $\{\lambda_1 , \ldots, \lambda_n\} \subset \Lambda := \Lambda_r^\vee$ and $\rho = \sum_{i = 1}^n \lambda_i = \frac{1}{2} \sum_{i = 1}^n i (n + 1 - i) \alpha_i$ the Weyl vector.
\begin{prop}[\cite{GKZ}] \label{prop:secfan} The normal fan $\mathcal{F}$ of $\psec{A}$ in $\Lambda \otimes \mathbb{R}$ has $1$-cone generators $\{\alpha_1, \ldots, \alpha_n , -\lambda_1, \ldots, -\lambda_n\}$ in the weight lattice $\Lambda$ and cones
\begin{equation*} \sigma_{I, J} = {\text{Span}}_{\mathbb{R}_{\geq 0}} \{ -\lambda_i , \alpha_j : i \in I, j \in J\} \end{equation*}
for any pair of disjoint sets $I, J \subset [n]$.
\end{prop}
The vertices of $\psec{A}$ are easily seen to be in one to one correspondence with subsets $K = \{k_0 < \ldots < k_m\} \subset \{1, \ldots, n\}$ representing the triangulations $T_K = \{([k_i , k_{i + 1}], \{k_i, k_{i + 1}\})\}$ and corresponding to the vertex $\varphi_K = \sum_{i = 0}^m (k_{i + 1} - k_{i - 1} ) e_{k_i}$ of $\psec{A}$.
In order to obtain the secondary stack, we need to write out the stacky fan for the Lafforgue stack and find the limit stack. The facets of the Lafforgue polytope were shown in \cite{DKK} to correspond to pointed coarse subdivisions of $A$. Since $\ptlaf{A}$ is an $(|A| - 1)$ dimensional polytope, we define the supporting hyperplane functions $\du{\ptlaf{A}}$ as elements in $(\mathbb{Z}^A)^\vee$ but restrict them to linear functions on $ \Gamma = \{\sum c_i e_i : \sum c_i = 0\}$. Letting $f_i = \delta_A (\alpha_i)$ and $f_0 = e_{0} - e_{1}$, we take $\{f_0, f_1, \ldots, f_n\}$ as a basis for $\Gamma$ so that $\delta_A : \Lambda_r \to \mathbb{Z}^A$ lifts to $\tilde{\delta}_A : \Lambda_r \to \Gamma$. As we will show in a moment, there are $3n + 2$ facets of $\ptlaf{A}$, so the stacky fan is obtained by a fan in $\mathbb{R}^{3n + 2}$ along with a map $\xi_A : \mathbb{Z}^{3n + 2} \to \Gamma^\vee$ which gives the group $\tgroup{\ptlaf{A}} \simeq \ker (\xi_A ) \otimes \mathbb{C}^*$. We write $\{g_i : 1 \leq i \leq 3n + 2\}$ for the standard basis of $\mathbb{Z}^{3n + 2}$.
The pointed coarse subdivisions $(S, B)$ of $A$ can be classified into three types. For each $1 \leq i \leq n$, there is a pointed subdivision $( (Q, A - \{i\}), A - \{i \} )$ whose supporting hyperplane function $g_i$ is given by $e^\vee_i \in (\mathbb{Z}^A)^\vee$ so that $\xi_A (g_i) = - f_{i - 1} + 2f_i - f_{i + 1}$. Also, for every $1 \leq i \leq n$, there are two pointed subdivisions corresponding to $Q = [0, i] \cup [i , n + 1 ]$ with pointing set $\{0, \ldots, i\}$ and $\{i , \ldots , {n + 1} \}$ respectively. The supporting primitives are easily seen to be $g_{2i} = \sum_{j = i + 1}^{n + 1} (j - i) e^\vee_j$ and $g_{3i} = \sum_{j = 0 }^i (i - j) e^\vee_j$ implying $\xi_A (g_{2i} ) = -f_i $ and $\xi_A (g_{3i} ) = -f_i + f_0$. Finally, there are two vertical pointed subdivisions $( (Q, A), \{ 0 \})$ and $( (Q, A ), \{{n + 1}\})$ corresponding to the one cones for $\mathcal{X}_Q$. The linear functions corresponding to these two subdivisions are $g_{3n + 1}, g_{3n + 2}$ which map to $\xi_A (g_{3n + i}) = (-1)^{i + 1} f_0^\vee$. We can write the map $\xi_A$ as the matrix
\begin{equation*}
\xi_A = \left[ \begin{array}{c| c |c| cc }
-1 \text{ } 0 \text{ } \cdots \text{ } 0 & 0 \text{ } \cdots \text{ } 0 & 1 \text{ } \cdots \text{ } 1 & 1 & -1 \\
\hline
\text{ } C_n \text{ } & \text{ } -I \text{ } & \text{ } -I \text{ } & 0 & 0
\end{array}
\right]
\end{equation*}
where $I$ is the $n \times n$ identity matrix.
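For concreteness, this block matrix is assembled in the following sketch (with our ordering of the columns as $g_1, \ldots, g_n$, then $g_{21}, \ldots, g_{2n}$, then $g_{31}, \ldots, g_{3n}$, then $g_{3n + 1}, g_{3n + 2}$).
\begin{verbatim}
import numpy as np

def xi_matrix(n):
    # xi_A in the basis (f_0, ..., f_n) of Gamma; rows are f-coordinates.
    C = 2 * np.eye(n, dtype=int)  # A_n Cartan matrix C_n
    C -= np.eye(n, k=1, dtype=int) + np.eye(n, k=-1, dtype=int)
    top = np.concatenate([[-1], np.zeros(n - 1, int),
                          np.zeros(n, int), np.ones(n, int), [1, -1]])
    bottom = np.hstack([C, -np.eye(n, dtype=int), -np.eye(n, dtype=int),
                        np.zeros((n, 2), int)])
    return np.vstack([top, bottom])

print(xi_matrix(3))  # a (n+1) x (3n+2) integer matrix
\end{verbatim}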
The maximal cones of the Lafforgue fan $\mathcal{F}_{\ptlaf{A}}$ are indexed by pointed triangulations $\{(K, k) :K \subset [n], k \in K \cup \{0, n + 1\} \}$. For example, if $k \ne 0, n + 1$, the cone in $\mathcal{F}_{\ptlaf{A}}$ associated to $(K, k)$ is
\begin{equation*} \sigma_{(K, k)} = {\text{Cone}} ( \{g_j : j \not\in K\} \cup \{g_{2j} : j \in K, j \leq k\} \cup \{ g_{3j} : j \in K , j \geq k \} ).\end{equation*}
If $k \in \{0, n + 1\}$, we add $g_{3n + 1}, g_{3n + 2}$ respectively to the generating set above. This in particular implies that $\mathcal{F}_{\ptlaf{A}}$ is a simplicial fan.
There is a map of fans $\pi_{\mathcal{F}} : \mathcal{F}_{\ptlaf{A}} \to \mathcal{F}_{\psec{A}}$ which can be promoted to a map of canonical toric stacks. Performing a calculation of the limit stacky fan defined in \cite{DKK} then gives the following proposition.
\begin{prop} The Lafforgue stack $\laf{A}$ for $A = \{0, \ldots, n + 1 \}$ is smooth with covering $\{U_{(K,k) }\}$. The secondary stack $\secon{A}$ is smooth and is given by the stacky fan given in Proposition \ref{prop:secfan}.
\end{prop}
This implies that for $n > 1$, the secondary stack is given by taking the canonical stack of the normal fan $\mathcal{F}_{\psec{A}}$ while for $n = 1$ we have $\secon{A} = \mathbb{P} (1, 2)$. This reproduces the stacks studied in \cite{blume} which are quotients of the Losev-Manin stack. In all cases, there is a covering of $\secon{A}$ by $\{U_{K} \}_{K \subset [n]}$ where $U_K$ is the chart associated to the cone $\sigma_{K, [n] - K}$. Let us describe an open chart $U_K$ in the covering given above. For $K = \{k_1 , \ldots , k_m \} \subset [n]$, assume $k_1 < \cdots < k_m$ and write $k_0 = 0$, $k_{m + 1} = n + 1$ and $r_i = k_i - k_{i -1}$ for $i = 1, \ldots, m + 1$. Write $\mu_r$ for the group of $r$-th roots of unity and let $\mu_r$ act on $\mathbb{C}^{r }$ via $\zeta (z_1, \ldots, z_{r }) = (\zeta z_1 , \zeta^2 z_2 , \ldots, \zeta^{r -1 } z_{r - 1} , z_r )$. We also take this as an action on the first $(r - 1)$ coordinates. Then, using the basis of the open cones in the weight lattice given in \ref{prop:secfan}, the open stack $U_{K}$ is easily seen to be the quotient stack
\begin{equation*}
U_K \approx \left[ \mathbb{C}^{r_1} \times \cdots \times \mathbb{C}^{r_{m + 1} -1} / \mu_{r_1} \times \cdots \times \mu_{r_{m + 1}} \right].
\end{equation*}
This local description extends over the Lafforgue stack and the universal hypersurface. Indeed, writing $G_K$ for the group $\mu_{r_1} \times \cdots \times \mu_{r_{m + 1}}$, it is not hard to show that there is a polydisc neighborhood $V_K = [D_1 \times \cdots \times D_m / G_K ]$ near the origin of $U_K$, with
\begin{equation*} \pi_{\mathcal{H}}^{-1} (V_K ) \approx \left[ \left( \cup_{j = 1}^{m + 1} \mu_{r_j} \right) \times D_1 \times \cdots \times D_m / G_K \right] . \end{equation*}
Here $G_K$ acts in the obvious way on the set $\cup \mu_{r_j}$.
\subsection{Vanishing trees of maximal degenerations}
Having described the secondary stack and Lafforgue stack for $A_n$, we would like to consider our space of Landau-Ginzburg models which define the directed $A_n$-category. For this purpose, we choose $A^\prime = \{0\}$ and consider all $A^\prime$-sharpened pencils on $\mathcal{X}_Q = \mathbb{P}^1$. Since $A^\prime$ is not in the interior of $A$, we cannot apply Proposition \ref{prop:lef}. However, in this case we can state the following proposition whose proof is evident from the definitions in section \ref{sec:toric}.
\begin{prop} Let $A = \{0, \ldots, n + 1\}$ and $A^\prime = \{0\}$. The Landau-Ginzburg model of an $A^\prime$-sharpened pencil is a degree $(n + 1)$ polynomial $\mathbf{w}(z) = c_{n + 1} z^{n + 1} + \cdots + c_{0}$ such that $c_i \ne 0$ for $0 \leq i \leq n + 1$.
\end{prop}
Note that the Fukaya-Seidel category is extremely sensitive to the choice of sharpening point (or set). For example, if $A = \{0, 1, 2\}$ and we choose $A^\prime = \{1\}$ instead of $\{0\}$, we would obtain the homological mirror of $\mathbb{P}^1$ instead of the category of vector spaces (or the $A_1$-category).
Recall that the moment polytope of the stack $\mlg{A}{A^\prime}$ is the monotone path polytope of $\psec{A}$ relative to the function $\gamma_{A^\prime} = e_0^\vee$ and that maximally degenerated LG models correspond to monotone edge paths of $\psec{A}$. We describe this polytope in the following proposition.
\begin{prop}\label{prop:Anmon} The monotone path polytope $\Sigma_{\gamma_{A^\prime}} (\psec{A} )$ is combinatorially equivalent to an $(n - 1)$-dimensional cube.
\end{prop}
\begin{proof} By the results of \cite{BS1}, the vertices of $\Sigma_{\gamma_{A^\prime}} (\psec{A} )$ correspond to parametric simplex paths on $\psec{A}$. We recall that the vertices of $\psec{A}$ are labeled by subsets $K = \{k_0, \ldots, k_m\} \subset \{1, \ldots, n\}$ with associated triangulation $T_K = \{[k_i, k_{i + 1}]\}$. The image of $\gamma_{A^\prime}$ is easily seen to be $[1, n + 1]$, where the set of vertices of $\psec{A}$ sent to $1$ are all subdivisions $\{1,k_1 , \cdots, k_m\}$. Omitting the element $1$, we identify these with subsets $J = \{k_1, \ldots, k_m\} \subset \{2, \ldots, n\}$. Now observe that to any such vertex, there is a unique parametric simplex path on $\psec{A}$ relative to $\gamma_{A^\prime}$ which has $J$ as its minimum. Indeed, if $P = (J = K_0, K_1, \ldots, K_r)$ is a sequence of vertices in a parametric simplex path, then $\{K_i, K_{i + 1}\}$ is an edge of $\psec{A}$ and $\gamma_{A^\prime} (K_i) < \gamma_{A^\prime} (K_{i + 1})$. It is not hard to see that $K_i = \{k_{i + 1} , \ldots, k_m\}$ gives such a path, establishing the existence claim. To see that it is unique, suppose $P^\prime = (K_0, \ldots, K_i, K_{i+1}^\prime, \cdots, K_{r})$ is any other parametric simplex path. Since $\{K_i, K_{i + 1}^\prime\}$ is an edge of $\psec{A}$, we have that $K_{i + 1}^\prime$ is obtained from $K_i$ by inserting or deleting an element. As $\gamma_{A^\prime} (K_{i + 1}^\prime ) > \gamma_{A^\prime} (K_i)$, we cannot insert a point, and deleting any element besides $k_{i + 1}$ does not affect the value of $\gamma_{A^\prime}$. Therefore $K^\prime_{i + 1} = K_{i + 1}$ and the path is unique.
By the Minkowski integral description of fiber polytopes, one easily observes that any face of $\gamma^{-1}_{A^\prime} (1)$ gives a face of $\Sigma_{\gamma_{A^\prime}} (\psec{A})$. Since the vertices are in bijection, this implies that the face lattices are equal and yields the proposition.
\end{proof}
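The bijection in this proof is easy to make explicit. The following sketch (our illustration) enumerates, for a given $n$, the $2^{n - 1}$ parametric simplex paths by listing the vertex subsets $K_0, K_1, \ldots$ that each path visits.
\begin{verbatim}
from itertools import combinations

def monotone_paths(n):
    # Each path is determined by its minimal vertex: a subset J of
    # {2, ..., n} together with 1; it then proceeds by repeatedly
    # deleting the smallest remaining element.
    paths = []
    for m in range(n):
        for J in combinations(range(2, n + 1), m):
            K = [1, *J]
            paths.append([tuple(K[i:]) for i in range(len(K) + 1)])
    return paths

for path in monotone_paths(3):  # 2^(3-1) = 4 maximal degenerations
    print(" -> ".join(str(set(K)) if K else "{}" for K in path))
\end{verbatim}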
From the proof of this proposition we obtain a combinatorial description of the sequence of circuits associated to maximal degenerations. Our next goal is to give a complete description of the semi-orthogonal decompositions connected to such sequences. We first must recall the degeneration and regeneration procedure from \cite{DKK}. Consider a monotone path specified by $J = \{0, 1 = k_0, k_1, \ldots, k_m = n + 1\}$ and a function $\eta : A \to \mathbb{Z}$ which defines the triangulation given by $J$ (see \cite{BFS} or \cite{GKZ}). Briefly recall that, for $a < b \in A$, if we denote $\eta^\prime_{a,b} = (\eta(b ) - \eta (a)) / (b - a)$, then this means that $\eta^\prime_{k_i, k_{i + 1}}$ is increasing relative to $i$ and that $\eta (a)$ lies above the under-graph of $\eta$ for $a \not\in J$. To simplify the treatment, we also assume that $\eta^\prime_{k_i, k_{i + 1}} \in \mathbb{Z}$. Fix any $\mathbf{c} = (c_{n + 1}, \ldots, c_0) \in \mathbb{C}^{n + 2}$ such that $c_i = 1$ for $i \in J$, and define the family of polynomials
\begin{equation*}
\psi (\mathbf{c},s, t) (z) = \left( \sum_{i=1}^{n +1} c_i s^{\eta (i)} z^i \right) + s^{\eta (0)} t
\end{equation*}
which, for $s \ne 0$ give very full sections $\psi (\mathbf{c},s, t) \in \linsysveryfull{A}$.
Notice that this gives an $s$ parameterized family of $A^\prime$-sharpened pencils $\psi (\mathbf{c}, s, \_ ) $. After quotienting with the appropriate group, we can think of $\psi$ as a function from $\mathbb{C}^*$ to $\mlg{A}{A^\prime}$, or as a function from $(\mathbb{C}^*)^2 $ to $\secon{A}$. We will shift between perspectives in what follows.
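As a concrete illustration, the following sketch (with a hypothetical choice of $\eta$ and $\mathbf{c}$ for $A = \{0, \ldots, 4\}$ and $J = \{0, 1, 2, 4\}$) computes the roots of $\psi (\mathbf{c}, s, t)$ for small $s$; the moduli of the roots cluster at the scales $s^{\eta^\prime_{k_i, k_{i + 1}}}$ read off from the Newton polygon of $\eta$, a numerical shadow of the degeneration of the fiber discussed below.
\begin{verbatim}
import numpy as np

# Lower hull of eta has breakpoints at J = {0, 1, 2, 4} and
# integer slopes -3, -1, 1; eta(3) lies above the hull.
eta = {0: 3, 1: 0, 2: -1, 3: 1, 4: 1}
c = {0: 1.0, 1: 1.0, 2: 1.0, 3: 0.7, 4: 1.0}  # c_i = 1 for i in J

def psi_roots(s, t):
    # Coefficients of psi(c, s, t)(z), highest degree first.
    coeffs = [c[i] * s ** eta[i] for i in range(4, 0, -1)]
    coeffs.append(s ** eta[0] * t)
    return np.roots(coeffs)

s = 1e-3
for z in sorted(psi_roots(s, t=1.0), key=abs):
    # log_s |z| recovers the slope of the Newton polygon segment the
    # root belongs to: expect one root near 3, one near 1, two near -1.
    print(f"|z| = {abs(z):.3e},  log_s|z| = {np.log(abs(z))/np.log(s):.2f}")
\end{verbatim}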
As was seen in the proof of Proposition \ref{prop:Anmon}, the sequence $\langle C_1, \ldots, C_m \rangle$ of circuit modifications associated to $J$ is supported on $C_i = \{0, k_{i - 1} , k_{i } \}$ and corresponds to edges of $\psec{A}$ with vertices $T_{K_{i -1}}$ and $T_{K_i}$ where $K_i = \{0, k_i , k_{i + 1} , \cdots , k_m\}$. For any $1 \leq i \leq m$, we may reparameterize $\psi$ so as to obtain a regeneration of $C_i$. First recall that such a regeneration of $C_i$ is a map $\tilde{\psi}_i$ completing a diagram
\begin{equation} \label{diag:reg}
\begin{tikzcd} \mathbb{C}^* \arrow[hook]{r}\arrow{d}{\rho_i} & \mathbb{C} \times \mathbb{C}^* \arrow{d}{\tilde{\psi}_i} \\
\secon{C_i} \arrow[hook]{r} & \secon{A}
\end{tikzcd}
\end{equation}
where $\rho_i$ is \'etale onto the complement of $\{0, \infty\} \subset \secon{C_i}$, the top arrow is the inclusion into $\{0\} \times \mathbb{C}^*$ and $\tilde{\psi}_i$ is a finite map. Explicitly, this is given by reparameterizing $\tilde{\psi}_i (s, t) = \psi \left( \mathbf{c}, s, s^{\eta (k_{i + 1} ) -\eta (0) - k_{i + 1} \eta^\prime_{k_i, k_{i + 1}}} t \right)$ and completing to $s = 0$. Indeed, letting $z = s^{\eta^\prime_{k_i, k_{i + 1} }} u$, we see that $\tilde{\psi}_i (s, t)$ converges as $s \to 0$ to the circuit pencil which can be written in the $u$ coordinate as
\begin{equation} \label{eq:wi} \mathbf{w}_i (u) = u^{k_{i + 1}} + u^{k_i} + t . \end{equation}
The functions $\mathbf{w}_i$ are precisely those yielding the subcategories in the semi-orthogonal decomposition from Theorem \ref{thm:so}. One can think of the reparameterization as giving an asymptotic prescription for the $i$-th bubble in the stable map limit of $\psi$ as $s$ tends to $0$. Moreover, it is important to remember that the fibers of $\pi :\laf{A} \to \secon{A}$ over $\tilde{\psi}$ themselves degenerate into reducible chains of projective lines $\cup_{j = i + 1}^m \mathbb{P}^1$ as in Figure \ref{fig:degeneration}.
\begin{figure}[t]
\begin{picture}(0,0)%
\includegraphics{Anbase2.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5349,3820)(301,-3599)
\put(1165,-3350){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_1$}%
}}}}
\put(2821,-3557){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_2$}%
}}}}
\put(4475,-3350){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_3$}%
}}}}
\put(2737,-2811){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\secon{A}$}%
}}}}
\put(2737,-123){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\laf{A}$}%
}}}}
\put(316,-2750){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\{0,1,2,4,8\}$}%
}}}}
\put(5145,-2736){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\{0,8\}$}%
}}}}
\put(3669,-3473){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\{0,4,8\}$}%
}}}}
\put(1758,-3473){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\{0,2,4,8\}$}%
}}}}
\end{picture}%
\caption{\label{fig:degeneration} The pullback of $\secon{A}$ and $\laf{A}$ for the monotone path $J = \{0, 1, 2, 4, 8\}$}
\end{figure}
Letting $s$ and $t$ tend to $0$ in $\tilde{\psi}_{i + 1} (s, t)$, one approaches the fixed point of $\secon{A}$ associated to the triangulation $T_{K_i}$ in the monotone path. The $u$ roots $\fib{A}{q} := \pi_{\mathcal{H}}^{-1} (q)$ of $ q = \tilde{\psi}_{i + 1} (s, t ) \in \secon{A}$ converge to the degenerated hyperplane section. As described at the end of the previous subsection, this hypersurface degeneration results in a partition of the fiber $\fib{A}{q}$ into $(m - i)$ subsets $F_{i,i + 1} (q), \ldots, F_{i, m} (q)$. For every $j > i + 1$, the set $F_{i, j}(q)$ converges to $(k_j - k_{j - 1})$ roots of unity of the $j$-th component of the degeneration of the fiber, while the component $F_{i, i + 1} (q)$ converges to the $k_i$ roots of unity. Thus all of the $F_{i,j}(q)$ are cyclically ordered sets. For the example illustrated in Figure \ref{fig:degeneration}, the sets $F_{0,1} (q) , F_{0,2} (q), F_{0,3}(q), F_{0,4}(q)$ are colored green, purple, red and blue, respectively.
For $j > i + 2$ the subsets $F_{i,j} (q)$ experience no monodromy as $q$ varies near the $C_i$ component, regardless of the path. Thus for $j > i + 2 $ there is a collection of unique monodromy isomorphisms $\tau_{i,j} : F_{i, j} (q) \to F_{i + 1,j} (q^\prime )$ where $q$ and $q^\prime$ approach $0$ and $\infty$ of $C_i$, respectively. To identify the remaining sets, we must choose a radar screen $\mathcal{B}$ for $\tilde{\psi}_{i + 1} (s, t)$ to obtain the isomorphism
\begin{equation*} \tau_{i, i + 2} :F_{i , i + 1} (q) \cup F_{i, i + 2}(q) \to F_{i + 1, i + 2} (q^\prime) . \end{equation*}
To define $\tau_{i, i + 2}$, we take the first path $p_j \in \mathcal{B}$ which does not end on a point in $C_i$, degenerate, and reparameterize the component $\gamma_i$ of $p_j$ so that it is a path from $0$ to $\infty$ in $C_i$ (if $i = 1$, take the last path and concatenate to extend it to $0$). Then $\tau_{i, i + 2}$ is defined as the monodromy along $\gamma_i : [0, 1] \to C_i$.
Note that in the $1$ dimensional case, vanishing thimbles of a polynomial $\mathbf{w}$ are simply paths in $\mathbb{C}$ with endpoints on a fiber $\mathbf{w}^{-1} (q)$. Labeling them according to which path in the radar screen they are defined by, we obtain an edge labeled tree which we refer to as the vanishing tree of $\mathbf{w}$ with respect to $\mathcal{B}$. If we omit the grading, this tree encodes all of the data necessary to compute the algebra of the morphisms between exceptional objects in $\fs{\mathbf{w}}$. We would like to give a concrete combinatorial formulation of this vanishing tree.
Towards this end, suppose $S_1, S_2$ and $S_3$ are finite sets such that $S_1$ and $S_3$ have a cyclic order and $|S_3|= |S_1 | + |S_2|$. We call a bijection $\sigma : S_1 \cup S_2 \to S_3$ a cyclic $|S_2|$ insertion if $\sigma|_{S_1}$ preserves the cyclic order. Now, assume $S_2$ comes equipped with a total order $<$ and label $S_2 = \{s_1, \cdots, s_{|S_2|}\}$. Extend this to a partial order on $S_1 \cup S_2$ by taking $s < s^\prime$ if $s \in S_1$ and $s^\prime \in S_2$. If $\sigma$ is a cyclic $|S_2|$ insertion and $s_k \in S_2$, define $m_\sigma (s_k ) = s^\prime \in S_1 \cup S_2$ to be the unique element less than $s_k$ such that every element $s \in S_3$ in the cyclic interval between $\sigma (m_\sigma (s_k))$ and $ \sigma (s_k ) $ satisfies $s_k < \sigma^{-1} (s)$. We define the incidence graph of this function
\begin{equation*}
I_{\sigma , <} = \{ (\sigma (s) , \sigma (m_{\sigma} (s))) : s \in S_2 \} \subset S_3 \times S_3 .
\end{equation*}
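A short sketch may help unpack this definition. The implementation below (our reading, with the clockwise walking convention taken from the proof of Theorem \ref{thm:an} below) computes $I_{\sigma, <}$ for a toy cyclic insertion.
\begin{verbatim}
def incidence_graph(S3_cycle, sigma, rank):
    # S3_cycle: elements of S_3 in (counter-clockwise) cyclic order.
    # sigma:    dict S_1 u S_2 -> S_3, a cyclic |S_2| insertion.
    # rank:     0 on S_1; the index k >= 1 of s_k on S_2.
    inv = {v: k for k, v in sigma.items()}
    n, edges = len(S3_cycle), set()
    for s, r in rank.items():
        if r == 0:
            continue  # only elements of S_2 contribute edges
        j = S3_cycle.index(sigma[s])
        for step in range(1, n):
            # Walk clockwise from sigma(s); the first point inserted
            # earlier (or lying in S_1) is sigma(m_sigma(s)).
            t = S3_cycle[(j - step) % n]
            if rank[inv[t]] < r:
                edges.add((sigma[s], t))
                break
    return edges

# Toy example: S_1 = {A, B}, S_2 = {x1 < x2}, S_3 = Z/4.
sigma = {"A": 0, "B": 2, "x1": 1, "x2": 3}
rank = {"A": 0, "B": 0, "x1": 1, "x2": 2}
print(sorted(incidence_graph([0, 1, 2, 3], sigma, rank)))  # [(1, 0), (3, 2)]
\end{verbatim}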
Now, let $\tilde{P} (s)$ be a choice of logarithms of the critical values of $\psi (\mathbf{c}, s, \_ )$.
\begin{thm} \label{thm:an} For $s \ll 1$ and a radar screen distinguished basis $\{p_1, \ldots, p_n\} = \mathcal{B}_{\tilde{P} (s)}$, the map $\tau_{i, i + 2}$ is a cyclic $(k_{i + 1} - k_i)$ insertion. There is a unique total order $<$ on $F_{i, i + 2}$ such that the vanishing graph associated to $\{p_{k_i}, \ldots, p_{k_{i + 1}}\}$ is $ I_{\tau_{i, i + 2}, <}$. Furthermore, every cyclic $(k_{i + 1} - k_i)$ insertion $\sigma$ and total order $<$ arises as a monodromy map for some radar screen and regeneration of $\mathbf{w}_i$.
\end{thm}
The dictionary to use for the input structures of this theorem is as follows. The sets $F_{i , i + 1} (q), F_{i, i + 2}(q), F_{i + 1, i +2} (q^\prime)$ give $S_1, S_2, S_3$ respectively, the perturbation coefficients $\mathbf{c}$ give a total order on $S_2$ and the radar screen gives $\sigma = \tau_{i, i + 2}$.
\begin{proof} We sketch a proof. Let $a = k_{i + 1} - k_i$, $b = k_i$ and recall that the LG-model $\tilde{\psi}_i (s, t)$ corresponding to $C_i$ converges to $\mathbf{w}_i (u) = u^{a + b} + u^b + t$. As an $A^\prime$ sharpened pencil on $\mathbb{C}^*$, this is $[f(u):1] := [u^{a+b} + u^b : 1] = [-t : 1]$. We take a moment to understand the geometry of this elementary polynomial $f$. First observe that $f$ has $a$ nondegenerate critical points at the scaled roots of unity $\rho \zeta$ for $\rho = (b / (a + b))^{1/a}$ and $\zeta \in \mu_a$, whose critical values share the common modulus $R = \frac{a}{a + b} \rho^b$, as well as a $(b - 1)$-fold ramified critical value at $0$. Let $S^1_r$ be the radius $r$ circle and examine the contour $S_r := f^{-1} (S^1_r) \subset \mathbb{C}$ as we vary $r$. It is not hard to see that for $r > R$, $S_r$ is a circle which is an $(a + b)$-fold cover of $S^1_r$, while for $r < R$ it is a union of $a + 1$ circles, $a$ of which cover $S^1_r$ once and the remaining circle covering it $b$ times. For $r = R$, $S_r$ is a circle with $a$ pinched pairs of points. This is illustrated in Figure \ref{fig:cont}.
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{cont2.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(1726,1726)(758,-969)
\put(2001,-472){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{.82,0,0}$r=R$}%
}}}}
\put(997,409){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,0,0}$r>R$}%
}}}}
\put(1377, 56){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{.56,0,0}$r<R$}%
}}}}
\end{picture}%
\caption{\label{fig:cont} Contours for $|f (u)| = r$}
\end{figure}
Note that $\mathbf{w}_i$ is degenerate in the sense that it lies in the closure of the discriminant $\Delta_{A, A^\prime}$. Nevertheless, the sets $F_{i, i + 1} (q)$ and $F_{i, i + 2} (q)$ converge, up to a phase, to the $b$ points of the fiber on the inner circle of Figure \ref{fig:cont} and to one point in each of the $a$ outer circles, respectively. Were we to regenerate a straight line path from $q$ to $q^\prime$, it is not hard to see that the monodromy would then give a cyclic $a$-insertion $F_{i, i + 1} (q) \cup F_{i , i + 2} (q) \to F_{i + 1, i + 2} (q^\prime )$.
To obtain the actual monodromy map $\tau_{i, i + 2}$, we need to define a radar screen $\mathcal{B}$ for the regeneration of $\mathbf{w}_i / \mu_{a}$ along $\tilde{\psi}_i (s, t)$. Recall from section \ref{sec:toric} that a radar screen is a distinguished basis of paths $ \mathcal{B} = \{p_1, \ldots, p_a, p_{a + 1}\}$ ending on the $(a + 1)$ critical values $\{q_1, \ldots, q_a, 0\}$ of $\tilde{\psi}_i (s, \_ )$, ordered so that $|q_i| > |q_{i + 1}|$. It is determined uniquely by the regeneration $\tilde{\psi}_i$ and a choice of logarithmic lifts $\{\tilde{q}_1, \ldots, \tilde{q}_a\}$ of the critical values. We note that, to first order, only the $(c_{a + b - 1 }, \ldots, c_{b + 1} )$ projection of the coefficient $\mathbf{c} = (c_{n + 1}, \ldots, c_0)$ matters in determining the norm ordering of the critical values for $\tilde{\psi}_i (s, \_ )$. It is easy to see that one may prescribe any ordering with a judicious choice of such coefficients.
The key point in the proof is that for $s \ll 1$, the picture in Figure \ref{fig:cont} is only mildly modified, so that instead of pinching off $a$ circles at once when $r = R$, we pinch off circles one by one, each time $r = |q_i|$. Through the identification of $F_{i, i + 2}$ with the outer contours given above, we see that the ordering of critical values gives a total ordering $<$ of $F_{i, i + 2}$. Between each pinch, the choice of logarithmic branch $\tilde{q}_i = \log (q_i)$ for the radar screen $\mathcal{B}$ has the effect of rotating the circle $S_r$ as one performs monodromy along $p_{i + 1}$. Note that this monodromy preserves the cyclic ordering of the $b$ points that survive the pinching, so that the total monodromy of $F_{i, i + 1} (q)$ along $\delta_{a + 1}^{-1}$ also preserves the cyclic order. These observations show that $\tau_{i, i + 2}$ is a cyclic $a$ insertion and $F_{i, i + 2}(q)$ is totally ordered.
To establish the claim about the vanishing tree, simply observe that, for each pinch, we add a vanishing cycle connecting two points of the fiber $f^{-1} (p_j (z))$ on the central component of $S_r$. One of the points will always be the point pinched off, and the other will be one of its cyclic neighbors. The fact that this is always the clockwise neighbor corresponds to our choice of counter-clockwise orientation for the radar screen distinguished basis. It is left as an exercise to see that the resulting collection of pairs of points is $I_{\tau_{i, i + 2}, <}$.
\end{proof}
Applying this to the example $J = \{0,1,2,4,8\}$ illustrated in Figure \ref{fig:degeneration} with the fundamental radar screen gives the vanishing tree in Figure \ref{fig:vtr}. One starts with the unique $(1,1)$ cyclic insertion yielding the green vanishing thimble, proceeds to a $(2, 2)$ insertion which gives two red vanishing thimbles, and completes the tree with a $(4, 4)$ insertion producing the $4$ blue vanishing thimbles. In general, the proof above shows that the insertions can be chosen arbitrarily. However, if we choose the fundamental radar screen distinguished basis for an exponential sequence $J = \{0\} \cup \{2^i : 0\leq i \leq m\}$, it can be shown that each insertion is a perfect shuffle \cite{perfectshuffle}. This reflects a general phenomenon that the fundamental radar screen gives insertions $\tau_{i, i + 2}$ that maximally separate the points in $F_{i, i+ 1} (q)$ and $F_{i, i + 2} (q)$.
\begin{figure}[h]
\begin{picture}(0,0)%
\includegraphics{A8b.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(1469,1469)(1359,-1313)
\end{picture}%
\caption{\label{fig:vtr} A vanishing tree for $J = \{0, 1, 2, 4, 8\}$.}
\end{figure}
\subsection{Interpretations of $A_n$ degenerations}
We conclude this section with some observations and corollaries of Theorem \ref{thm:an}. First note that, were we to find the actual exceptional collection associated to the vanishing tree, we would need to include gradings on each of the edges. For ease of exposition, we neglect these gradings, commenting only that choosing different logarithmic branches in a radar screen that yield equivalent vanishing trees will generally alter the graded version.
Now, recall that Gabriel's theorem classifies quivers of finite type as directed Dynkin diagrams \cite{gabriel}. The set of quivers $\mathcal{Q}$ whose underlying graph is $A_n$ can be identified with the power set of $ \{ 2, \cdots, n\}$. To obtain a precise correspondence, order the vertices of the quiver by $\{v_1, \ldots, v_n\}$ and the edges by $\{e_2, \ldots, e_n\}$ where $e_i = \{v_{i - 1}, v_i\}$. We say $\mathfrak{o} (e_i ) = \pm 1$ if $e_i$ is directed towards $v_i$ or $v_{i -1}$, respectively. Then map $J = \{k_1, \ldots, k_{m - 1}\} \subset \{2, \ldots, n\}$ to the unique quiver $\Gamma_J$ which satisfies $\mathfrak{o} (e_{i}) = -1$ if and only if $i \in J$.
Given such a quiver, we propose that there is a natural $T$-structure on the derived category $\mathcal{D}$ of right modules over the path algebra $\mathcal{P} (\Gamma_J )$ (recall that, up to equivalence, $\mathcal{D}$ is independent of $J$). First, write $P_i$ for the projective module of all paths with target $v_i$. Let $p_J : \{1, \cdots, n \} \to \mathbb{Z}$ be the function $p_J (j) = \frac{j }{2} - \frac{1}{2} \sum_{i = 1}^j \mathfrak{o} (e_{i + 1})$. We view $p_J$ as a perversity function and define $(\mathcal{D}^{\geq 0}_J , \mathcal{D}^{\leq 0}_J )$ as subcategories for which a bounded chain complex $C^*$ of right $\mathcal{P} (\Gamma_J)$ modules is in $\mathcal{D}^{\geq 0}_J$ if $H^k ({\text{Hom}} (P_i , C^*)) = 0$ for all $k < p_J (i)$ and likewise for $\mathcal{D}^{\leq 0}_J$.
Let us now construct a complete exceptional collection $\mathbf{E}_J = \langle E_1, \ldots, E_n \rangle$ in the heart $\mathcal{D}^{\geq 0}_J \cap \mathcal{D}^{\leq 0}_J$ for a given $J$. Define $E_i = P_i[p_J (i)]$ if $\mathfrak{o} (e_{i + 1}) = 1$ and
\begin{equation*} E_i = (0 \leftarrow P_i[p_J(i)] \stackrel{e_{i + 1}}{\longleftarrow} P_{i + 1}[p_J(i + 1)] \leftarrow 0) \end{equation*}
otherwise.
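The following small sketch (hypothetical code, not taken from the references) tabulates $p_J$; note that $p_J(j)$ simply counts the backwards edges among $e_2, \ldots, e_{j+1}$, that is, $p_J(j) = |J \cap \{2, \ldots, j + 1\}|$.
\begin{verbatim}
# Hypothetical sketch: orientations o(e_i) and the perversity p_J.
# p_J(j) = j/2 - (1/2) * sum_{i=1}^{j} o(e_{i+1}) counts the backwards
# edges among e_2, ..., e_{j+1}, i.e. |J intersect {2, ..., j+1}|.
def p_J(J, n, j):
    o = {i: (-1 if i in J else 1) for i in range(2, n + 1)}
    return sum(1 for i in range(2, j + 2) if o.get(i) == -1)

n, J = 5, {2, 4}
print([p_J(J, n, j) for j in range(1, n + 1)])   # [1, 1, 2, 2, 2]
\end{verbatim}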
Given an exceptional collection $\mathbf{E} = \langle E_1, \ldots, E_n \rangle $, write $R_{\mathbf{E}} = {\text{Ext}}^* (\oplus_1^n E_i , \oplus_1^n E_i )$ for its Yoneda algebra. A simple computation gives the following proposition.
\begin{prop} The collection $\mathbf{E}_J$ is a complete, strong exceptional collection for $\mathcal{D}$. The heart of $(\mathcal{D}^{\geq 0}_J, \mathcal{D}^{\leq 0}_J)$ is equivalent to the category of finitely generated right modules over $R_{\mathbf{E}_J}$. \end{prop}
To connect this collection to maximal degenerations of Landau-Ginzburg models, we approach the fixed point $\psi_J \in \mlg{A}{A^\prime}$ associated to $J$ via a degeneration path $\psi (\mathbf{c}, s, \_)$. Using Theorem \ref{thm:an}, we can describe the vanishing tree of a radar screen $\mathcal{B}$ through a sequence of totally ordered, cyclic $(k_{i + 1} - k_i)$-insertions. Partitioning $\{1, \cdots , n + 1\}$ into the ordered sets $S_i = \{k_{i - 1} + 1, \ldots, k_{i}\}$ for $1 \leq i \leq m$, we identify
\begin{equation*}
F_{i, i + 1} = \{1, k_1, k_1 - 1, \ldots, k_0 + 1, k_2, \ldots, k_1 + 1, \ldots, k_{i - 1} , \ldots , k_{i - 2} + 1 \}
\end{equation*}
where the cyclic order is as written. We define the cyclic insertions $\sigma_i : F_{i, i + 1} \cup S_{i + 1} \to F_{i + 1, i + 2}$ as the inclusion. Let $R(J)$ be the radar screen which yields this data and write the resulting exceptional collection as $\mathbf{E}_{R (J)}$.
\begin{prop} For every $J \subset \{2, \cdots , n\}$, $R_{\mathbf{E}_{R (J)}} \approx R_{\mathbf{E}_J}$.
\end{prop}
This proposition suggests the space $\mlg{A}{A^\prime}$ is connected to the space of stability conditions for $\mathcal{D}$. In the $A_n$ case, near maximal degeneration points, we obtain $T$-structures for the triangulated category of the LG-model which relate directly to abelian categories of directed $A_n$ quivers. In other words, we have categorified the bijection between fixed points of the monotone path stack and directed $A_n$ quivers to an equivalence between an exceptional collection of a degeneration near the fixed point and an exceptional collection naturally associated to the directed quiver. Moving from one fixed point to another along certain edges of the monotone path polytope crosses a wall in the norm stratification which results in Coxeter functors, or tiltings, of the ambient triangulated category.
We end the section on the $A_n$ case with a brief comment on homological mirror symmetry. It is known that the homological mirror category for $A_n$ is the graded derived category of singularities for $z^n : \mathbb{C} \to \mathbb{C}$ \cite{HV}. One can view this category as a weighted divisor blow-up of the origin in $\mathbb{C}$ with weight $n$. The monotone path associated to $J$ may then be viewed as mirror to a sequence of $m$ blow ups with weights $(k_{i + 1} - k_i)$. This perspective fits well with the birational mirror symmetry landscape discussed in \cite{DKK} and \cite{katzarkov}.
\section{Three point blow up of $\mathbb{P}^2$}
We conclude this paper with an example of a different flavor than previous sections. Throughout, let $X_3$ denote a smooth del Pezzo surface of degree 6; that is, a blow up of $\mathbb{P}^2$ at three distinct non-collinear points. This space is mirror (and isomorphic) to $X_{\examplep{2}}$ given in section \ref{sec:toric}. The case of degree 7 was considered in \cite[Section 5]{DKK}. Recall that $\textrm{Pic}(X_3)\otimes\mathbb{R}$ is spanned by the pull-back of the hyperplane class and the exceptional divisors $E_1$, $E_2$, $E_3$ corresponding to the blown-up points, and that the effective cone $\textrm{Eff}(X_3)$ is generated by $E_1$, $E_2$, $E_3$, along with the pull-backs of the lines through the pairs of points, $E_{12}$, $E_{13}$, $E_{23}$. The effective cone admits a chamber decomposition into Zariski chambers, with each maximal chamber corresponding to a birational model obtained from $X_3$ by birational contractions; moreover, the codimension $1$ external walls of $\textrm{Eff}(X_3)$, equipped with this decomposition, correspond to Mori fibrations obtained from $X_3$, and the codimension 2 external walls correspond to Sarkisov links between the fibrations. We refer to \cite{hacon} for a general discussion of Mori fibrations and Sarkisov links from the perspective of chambers.
The structure of the external walls of $\textrm{Eff}(X_3)$ was considered in particular by Kaloghiros in \cite[Example 4.7]{kalo}, as a special case of a substantially more general result concerning codimension 3 external walls and relations amongst Sarkisov links. It is convenient to consider a dual graph $\Gamma_3$, with vertices corresponding to the codimension 1 external walls (i.e. the Mori fibrations) and edges corresponding to the codimension 2 external walls (i.e. Sarkisov links). A picture of this graph appears in \cite[Figure 6]{kalo}. By inspection this graph is observed to be the edge graph of the 3-dimensional associahedron. This is consistent with toric mirror symmetry and the results of \cite[Section 5]{DKK}, as the associahedron appears as a facet of the secondary polytope of the point configuration $\example{2} \subset \mathbb{Z}^2$ which is the Batyrev mirror of $X_3$.
\begin{figure}
\includegraphics[scale=.23]{mpp4.png}
\caption{\label{fig:mpp}The monotone path polytope for $(\examplep{2}, \example{2})$}
\end{figure}
As noted in \cite{kalo} the graph $\Gamma_3$ has 14 vertices, which correspond to Mori fibrations as follows:
\begin{itemize}
\item[(i)] 2 vertices correspond to the trivial fibration $\mathbb{P}^2\rightarrow\{\textrm{pt}\}$, where $\mathbb{P}^2$ is obtained from $X_3$ by blowing down $E_1,E_2,E_3$, respectively $E_{12},E_{13},E_{23}$.
\item[(ii)] 6 vertices each correspond to the fibration $\mathbb{F}_1\rightarrow\mathbb{P}^1$, where the map $X_3\rightarrow \mathbb{F}_1$ factors through the blow down of one of $E_1,E_2,E_3,E_{12},E_{13},E_{23}$.
\item[(iii)] 6 vertices each correspond to the fibration $\mathbb{P}^1\times\mathbb{P}^1\rightarrow\mathbb{P}^1$, where the map $X_3\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ factors through a blow down of one of $E_1,E_2,E_3$, and one of the two projections $\mathbb{P}^1\times\mathbb{P}^1\rightarrow\mathbb{P}^1$ is fixed.
\end{itemize}
On the other hand, the monotone path polytope of the secondary polytope of $A$ with respect to the $\{0\}$-sharpening is of small enough complexity to be constructed via software. A picture of the resulting truncated associahedron appears in Figure \ref{fig:mpp}. We observe that it has 36 vertices. Qualitatively, they correspond to the possible choices in the above description of $\Gamma_3$.
\begin{itemize}
\item[(i)] 12 vertices correspond to the trivial fibration $\mathbb{P}^2\rightarrow\{\textrm{pt}\}$, where $X_3\rightarrow\mathbb{P}^2$ is one of the six ordered blow-downs of $E_1,E_2,E_3$, or one of the six ordered blow-downs of $E_{12},E_{13},E_{23}$.
\item[(ii)] 12 vertices correspond to the fibration $\mathbb{F}_1\rightarrow\mathbb{P}^1$, where the map $X_3\rightarrow\mathbb{F}_1$ is given by an ordered blow-down of two of $E_1,E_2,E_3$, or an ordered blow-down of two of $E_{12},E_{13},E_{23}$.
\item[(iii)] 12 vertices correspond to the fibration $\mathbb{P}^1\times\mathbb{P}^1\rightarrow\mathbb{P}^1$, where the map $X_3\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ factors through a blow down of one of $E_1,E_2,E_3$, and a blow down of one of $E_{12},E_{13},E_{23}$ not disjoint from $E_i$, and one of the two projections $\mathbb{P}^1\times\mathbb{P}^1\rightarrow\mathbb{P}^1$ is fixed. \end{itemize}
We observe that a vertex of the dual graph representing a Mori fibration is replaced with the collection of full runs of the minimal model program on $X_3$ whose last birational map is that Mori fibration. As conjectured in \cite{DKK}, the semi-orthogonal decompositions of $D^b (X_3)$ arising from such runs are then expected to yield subcategories equivalent to those arising from the maximal degenerations of the mirror LG model.
We conclude with a brief discussion of prospects for extending beyond the toric case, and in particular to del Pezzo surfaces $X_k$ of degrees 1 through 6. The birational geometry of these surfaces is classical, though intricately structured \cite{manin}. Motivated by the Hori-Vafa ansatz, \cite{AKO} posited the mirror in each case to be a Landau-Ginzburg model $f_k : Y_k \rightarrow \mathbb{P}^1$ of a rational elliptic surface $Y_k$ with prescribed fiber at $\infty$. They verified homological mirror symmetry in the form $D^b(X_k)\cong \fs{f_k}$; however, the identification of the K\"{a}hler moduli of $X_k$ with the complex moduli of $Y_k$ was not pursued. This identification was completed in unpublished work of Pantev \cite{pantev}.
In general, if $f: Y \to \mathbb{P}^1$ is a compactified LG model, results from \cite{KKSP} show that the complex $T_{Y,Y_\infty} \to f^*(T_{\mathbb{P}^1,\infty})$ defining perturbations of $f$ that fix the fiber at infinity can be integrated to produce a smooth moduli stack $\mathcal{M}$ of LG models. It is not hard to see that, when $f$ arises as a sharpened pencil, the quotient of $\mathcal{M}$ by the action of $\mathbb{C}^* \times \mathbb{C}$ naturally embeds as a substack of $\mlg{A}{A^\prime}$. In the toric cases, $Y_k$ can be obtained from the Batyrev mirror family by explicit blow-ups and we have seen that $\mlg{A}{A^\prime}$ is a natural geometric compactification of the complex moduli of the Batyrev mirror. While we suspect a similar nested compactification exists in the non-toric cases, they do not appear to have been studied from this vantage point. However, see \cite{looijenga} for a thorough study of compact moduli of rational elliptic surfaces. The recent investigations of Donaldson \cite{donaldson} regarding $K$- and $b$-stability of Fano manifolds will be relevant, replacing the role that classical geometric invariant theory plays in constructing the chamber decomposition on the effective cone.
The above considerations provide a convenient way of studying surfaces whose derived category is close to being generated by an exceptional collection. In particular our analysis suggests the following conjecture.
\begin{conj} The derived category of the Barlow surface is not generated by an exceptional collection.
\end{conj}
The proof of this conjecture will lead to examples of nontrivial categories with trivial $K$-theory.
\section{\label{sec:Introduction}Introduction}
In May 2019, the quantum Hall effect \cite{Klitzing1980a,Stormer1982} was formally included among the select group of high-precision experiments to form the basis of a new SI system of units based on the "Planck constant $h$, the elementary charge $e$, the Boltzmann constant $k$, and the Avogadro constant $N_A$" \cite{vonKlitzing2019}. This had long been awaited and certainly represents a great achievement and fitting fulfillment of the vision for "nat\"{u}rliche Masseinheiten" \cite{translation}
proposed by Max Planck \cite{Planck1900}. In his essay to celebrate this achievement \cite{vonKlitzing2019}, von Klitzing also points out "that a microscopic picture of the quantum Hall effect for real devices with electrical contacts and finite current flow is still missing."
Prominent examples of such microscopic details are so-called "bubble" and "stripe" phases \cite{Fogler2002}. They have been identified, e.g., by transport experiments in higher Landau levels (LLs) of ultra-high mobility samples \cite{Lilly1999a,Du1999,Du2000b} and are characterized by strong transport anisotropies (stripes) or reentrance effects (bubbles). It is believed that the phases correspond to density modulations with characteristic geometric non-uniformities due to the interplay of Coulomb interaction and the wave functions in higher Landau levels.
Early work in modelling density modulations in the quantum Hall regime, starting from the celebrated Chklovskii, Shklovskii and Glazman picture \cite{Shklovskii1992}, assumed uni-directional charge-density waves (CDWs) \cite{Koulakov1995,Fogler1996d} while mean field treatments established the possibility of anisotropic phases in a Fermi liquid \cite{Fradkin1999,Spivak2006}. However, spatially resolved information does not yet exist of these phases. Experimentally this is due to the intrinsic challenge of using local scanning probes in low temperatures for such remotely doped systems \cite{Hashimoto2008,HasCFS12,Friess2014}. Nevertheless, much indirect experimental evidence for the existence of bubble and stripe phases has now been accumulated \cite{Kukushkin2011a,Liu2013a,Pan2014,Friess2014,Wang2015,Pollanen2015a,Msall2015b,Mueed2016a,Shi2017b,Friess2017b,Friess2018,Bennaceur2018b}. Theoretical modelling has likewise concentrated on transport signatures of these phases \cite{Ettouhami2006,Ettouhami2007,Cote2002,Cote2016} while spatially resolved models of bubbles and stripes are only available in clean systems \cite{Cote2003}.
The period of the stripe patterns has been predicted to follow $d \propto R_c$ with $R_c= l_B \sqrt{2 n +1}$ the cyclotron radius in LL $n$ at the Fermi energy \cite{Goerbig2004a,Koulakov1995} and $l_B= \sqrt{\hbar/eB}$ the magnetic length.
Experimentally, $1.5 R_c$ \cite{Friess2014} and $3.6 R_c$ \cite{Kukushkin2011a} have been reported while $\sim 2.7 R_c$ is predicted theoretically \cite{Goerbig2004a}.
\begin{figure*}[t]
(a)\includegraphics[width=0.63\columnwidth,clip=true,trim=20 30 85 50]{Fig1a_CD_nu6200.png}
(b)\includegraphics[width=0.63\columnwidth,clip=true,trim=20 30 85 50]{Fig1b_CD_nu6548.png}
(c)\includegraphics[width=0.63\columnwidth,clip=true,trim=20 30 85 50]{Fig1c_CD_nu4535.png}
\caption{
\label{fig-nur}
\label{fig-bubble_big_nu_626}
\label{fig-stripes_big_nu65}
\label{fig-stripes_big_nu45}
Spatially resolved filling factor distribution $\nu_{\downarrow}(\vec{r})$ for different total filling factors (a) $\nu = 6.20$, (b) $\nu = 6.54$ and (c) $\nu = 4.54$ with $B=\SI{1.5}{Tesla}$. The colours denote different $\nu_{\downarrow}(\vec{r})$ values as indicated in the legends. The thin black lines are contours.}
\end{figure*}
In the present work, we show how stripes and bubbles emerge at weak disorder as self-consistent solutions of the Hartree-Fock (HF) equations, i.e.\ in the experimentally relevant regime and without any ad hoc assumptions beyond a smooth disorder. We provide the full spatial resolution of both phases from the length scale of $l_B$ to near macroscopic sample sizes. This high resolution allows \emph{quantitative} comparison with current experimental efforts \cite{Friess2014,Friess2017b,Friess2018,Mueed2016a,Kukushkin2011a}.
A central insight provided by our work is the importance of many-body aspects. It should be clear that a fully self-consistent HF approach in a disordered environment goes well beyond earlier Thomas-Fermi-based (non-)linear screening models. The inclusion of a converged exchange interaction term essentially alters the physics. This not only changes the spatial distribution of stripes and bubbles, but is in fact the main reason for their emergence: neither pure Hartree nor a non-interacting model leads to emerging stripes/bubbles unless coupled with additional assumptions.
The key mechanism is a Hund's rule behaviour for the occupation of the spin-split LLs. The resulting $g$-factor enhancement is then a local quantity depending on the local filling factor $\nu(\vec{r}) = 2 \pi l_B^2 \rho(\vec{r})$ \cite{OswaldEPL2017,OswaldPRB2017} with $\rho(\vec{r})$ the local carrier density. This \emph{exchange-enhanced} $g$-factor is a concept that allows one to discuss the exchange interaction within the single electron picture \cite{Wiegers1997,Kendirlik2013}. By doing so, we find that the local variation of the enhancement of the Zeeman energy due to $\nu(\vec{r})$ has to be considered in addition to the laterally varying Hartree potential, leading to a modified effective potential for the electrons that also strongly modifies the screening behaviour. In addition to the largely repulsive Hartree part of the self-consistent Thomas-Fermi screening, the $\nu(\vec{r})$ dependence of the enhanced Zeeman energy leads to a positive feedback loop in the self-consistent carrier redistribution and produces an instability of the electron density $\rho$, which may lead to jumps either to a locally full or locally empty LL \cite{OswaldEPL2017,OswaldPRB2017}, resulting in a clustering of the filling factor that is triggered by the disorder or edge potential. The boundaries of those clusters finally create narrow channels that align mainly along the edge or random potential fluctuations. In the case of a very clean high-mobility electron system such a trigger effect for the cluster formation by the random potential is missing and the electron system has to find such a cluster structure by self-organization, resulting in the formation of stripes or bubbles.
The numerical simulations are performed as described in Refs.\ \cite{OswaldEPL2017,OswaldPRB2017,Sohrmann2007} via a variational minimization of the self-consistent Hartree-Fock-Roothaan equation \cite{Aok79,YosF79,MacA86,MacG88}. To simulate ultra-high mobility samples, the random potential strength is kept low. The filling factor and the lateral system size are made as large as possible with respect to the available computing power. Configurations up to $\SI{1}{\mu m^2}$ are achievable as shown in Fig.\ \ref{fig-bubble_big_nu_626} with a spatial resolution of $\sim \SI{4.4}{nm}$ well below $l_B$ for a magnetic field $B$ varying from, e.g., $1$ ($l_B\sim \SI{26}{nm}$) to $\SI{6.5}{Tesla}$ ($l_B\sim \SI{10}{nm}$). The random potential is generated by Gaussian impurity potentials of radius $\SI{40}{nm}$; the number of impurities is $N=2000$, and their random placement results in a fluctuating potential of $V_\text{max} = \SI{0.43}{mV}$ and $V_\text{min} = \SI{-0.50}{mV}$ \cite{supplement}. At a total filling factor of, e.g., $\nu = 6.54$ ($= \nu_\downarrow + \nu_\uparrow = 3.54 + 3.0$) and $B = \SI{1.5}{Tesla}$, this corresponds to more than $2000$ electrons. In order to generate transport data, we employ a non-equilibrium network model (NNM) introduced previously \cite{OswO06}. A very large number of step-by-step calculations are required in the NNM \cite{SohOR09}. Hence, to keep within the available computing time, the transport simulations have been performed for a smaller sample size such as $500 \times \SI{500}{nm^2}$.
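As a rough illustration of this disorder model, a comparable random potential can be generated as in the following minimal sketch (our own reconstruction under simplifying assumptions: an equal mix of attractive and repulsive scatterers of fixed strength and a simple rescaling to the quoted amplitude; the actual implementation may differ in these details).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

L, ngrid = 1000.0, 229        # box size (nm) and grid, ~4.4 nm resolution
N_imp, r_imp = 2000, 40.0     # impurity number and Gaussian radius (nm)

x = np.linspace(0.0, L, ngrid)
X, Y = np.meshgrid(x, x)
V = np.zeros_like(X)
for x0, y0, s in zip(rng.uniform(0, L, N_imp), rng.uniform(0, L, N_imp),
                     rng.choice([-1.0, 1.0], N_imp)):
    V += s * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * r_imp ** 2))

V *= 0.45 / np.abs(V).max()   # rescale to the ~0.5 mV range quoted above
print(f"V_min = {V.min():.2f} mV, V_max = {V.max():.2f} mV")
\end{verbatim}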
In Fig.\ \ref{fig-bubble_big_nu_626} we depict the variation in $\nu_{\downarrow}(\vec{r})$ for three different densities at fixed magnetic field.
Fig.\ \ref{fig-bubble_big_nu_626}(a) shows the situation far from any half-odd average $\nu_{\downarrow}$.
The required area of the filled (spin-down) clusters is only a minor part of the total area. As seen in the figure, this can be achieved by a nearly evenly spaced distribution of "bubble"-like clusters. Since we are already considering the $4$th partly filled spin-split LL, the boundaries of the bubbles consist of three sub-stripes because of the $3$ nodes of the Landau basis function for the $4$th LL \cite{HasCFS12}. A boundary with such an internal structure takes up a substantial area as well and even tends to dominate the region of a single bubble as a whole. In other words, the area that one bubble needs is dominated by the width of the boundaries and not by the region of an assumed idealized full LL.
When increasing the filling factor towards half filling, as shown in Fig.\ \ref{fig-stripes_big_nu65}(b), it becomes clear that such a bubble-like geometry of the clusters is impossible to achieve with the given total $\nu$, because round bubbles would leave too much unfilled space between the bubbles even if touching each other. The only way to remove the unfilled space between bubbles is a change of the geometry so that the boundaries of different clusters arrange almost in parallel, which means a transition to a "stripe"-like geometry.
In order to demonstrate that the width of the boundaries of the stripes and bubbles depends on the LL index, Fig.\ \ref{fig-stripes_big_nu65}(c) shows the half-filled LL at filling factor $\nu \approx 4.5$ instead of $\nu = 6.5$ and one can see that there is one sub-stripe less at the boundary, allowing a denser arrangement of stripes or bubbles than in the higher LLs.
These results suggest that the higher the filling $\nu$, the higher the tendency for creating the stripe pattern near a half-filled LL. Overall, the figures show that our HF calculations can provide a fascinating insight into the spatial behaviour of bubble and stripe configurations.
Fig.\ \ref{transport} shows the transport data obtained by applying the NNM at different carrier densities.
\begin{figure}[tb]
\includegraphics[width=\columnwidth,clip=false,trim=40 20 100 50]{Fig2_transport.png}
\caption{
\label{transport}
Longitudinal resistance $R_{xx}$ for easy (horizontal) and hard (vertical) direction (cp.\ Fig.\ \ref{fig-nur}) as a function of carrier density $\rho$.
The sample size is $500\times\SI{500}{nm^2}$, $B = \SI{1.5}{Tesla}$, the impurity radius $\SI{40}{nm}$, and the number of impurities $N=100$. The random impurity placement results in a fluctuating potential from $V_\text{min} = \SI{-1.7}{mV}$ to $V_\text{max} = \SI{1.7}{mV}$.}
\end{figure}
Since the stripe alignment tends to be more horizontal, i.e.\ along the $x$-direction as shown in Fig.\ \ref{fig-nur}, the longitudinal resistance $R_{xx}$ appears higher for vertical sample current and lower for horizontal current flow \cite{supplement}. This is consistent with experimental observations resulting in large $R_{xx}$ peaks for vertical current, while for a horizontal current the $R_{xx}$ peaks are hardly visible. When the difference between horizontal and vertical $R_{xx}$ is no longer prominent, we find that we have reached a bubble phase.
The characteristics of the microscopic structure of the stripes consist, on the one hand, of the periodicity of the stripe pattern and, on the other hand, of the microscopic details of the boundaries of the stripes. Together with the geometric shape of the stripes (and, indeed, the bubbles), these determine the relation between the areas of full and empty LLs.
Focusing first on the periodicity, Fig.\ \ref{fig-fourier} presents the 2D-Fourier transformation of the lateral carrier density at various magnetic fields from $B=\SI{1}{Tesla}$ to $B=\SI{6.5}{Tesla}$ at a fixed filling factor of $\nu = 4.5$.
\begin{figure}[thb]
(a)\hspace*{0ex}
\includegraphics[angle=0,keepaspectratio=true,width=0.95\columnwidth,clip=false,trim=20 140 650 0]{Fig3a_Fourier_plots.png}\\
(b)\includegraphics[width=0.95\columnwidth,clip=true]{Fig3b_sqrtB.png}
\caption{\label{fig-fourier}\label{fig-FFT}
(a) Two-dimensional Fourier spectra at $\nu = 4.5$ for different $B$ fields. The different spectra are shifted to the right for better visibility.
For a sample size of $1000 \times \SI{1000}{nm^2}$, the frequency in units of $\SI{0.001}{nm^{-1}}$ (left axis) equals the number of stripes that can be accommodated within the boundaries. The corresponding period length is shown on the right axis and the color bar indicates the Fourier intensity in arb.\ units.
The horizontal red lines indicate selected values.
(b) Variation of the reciprocal stripe period $1/d$ ($\blacksquare$) as function of $\sqrt{B}$. The solid lines correspond to $d = \alpha R_c = \alpha \sqrt{2 n + 1}\, l_B$ for LL index $n=3$ with $\alpha=2.7$ (blue line, theory) \cite{Goerbig2004a}, $3.6$ (red line, experiment) \cite{Kukushkin2011a} and $2.9$ (black line, our fit), respectively. The grey dotted grid lines highlight selected $B$ and $d$ values.}
\end{figure}
While Fig.\ \ref{fig-fourier} (a) shows only a selection of a few spectra, Fig.\ \ref{fig-fourier} (b) displays the trend of all evaluated spectra. As can be seen, in the $x$-direction there is a smooth and somewhat smeared out distribution that starts from zero, indicating that there is no clear periodicity in the $x$-direction. In contrast, in the $y$-direction there is a well pronounced maximum which matches the reciprocal stripe period in the $y$-direction as also seen in Figs.\ \ref{fig-stripes_big_nu65} (b+c). For $B=\SI{1.5}{Tesla}$ the reciprocal period (wave number) appears to be close to $6$ \cite{scale} and for $B=\SI{6}{Tesla}$ it matches $12$, which is consistent with a $\sqrt{B}$ dependence. Furthermore, we extract the mean period of the corresponding stripe patterns, similar to those shown in Fig.\ \ref{fig-nur}, to be approximately $\SI{175}{nm}$ for $B=\SI{1.5}{Tesla}$ and $\SI{81}{nm}$ for $B=\SI{6}{Tesla}$. Extending the analysis to other $B$ values, we indeed find a clear $\sqrt{B}$ behaviour as shown in Fig.\ \ref{fig-fourier} (b).
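A minimal sketch of such a period extraction (our own reconstruction for a single direction; the analysis in the text is based on the full 2D spectra) reads:
\begin{verbatim}
import numpy as np

def stripe_period_y(nu, L):
    # dominant period (units of L) of stripes running along x,
    # from the 1D spectrum of the x-averaged density profile
    profile = nu.mean(axis=1) - nu.mean()
    amp = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=L / profile.size)
    return 1.0 / freqs[np.argmax(amp[1:]) + 1]   # skip the zero mode

# toy check: 6 stripes in a 1000 nm box -> period ~167 nm
y = np.linspace(0, 1000, 229, endpoint=False)
nu = np.tile((0.5 + 0.5 * np.sin(2 * np.pi * 6 * y / 1000))[:, None],
             (1, 229))
print(stripe_period_y(nu, 1000.0))
\end{verbatim}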
Previous theoretical results on the period $d$ of the stripes have led to the expectation $d = \alpha R_c =\alpha \sqrt{(2n+1)\hbar /(eB)}$
\cite{Goerbig2004a,Koulakov1995}. Experimentally, $\alpha=1.5$ \cite{Friess2014} and $3.6$ \cite{Kukushkin2011a} have been estimated. For $n=3$ and $B=\SI{1.5}{Tesla}$, these give $\SI{88}{nm}$ and $\SI{210}{nm}$, respectively. Obviously, only the latter one is compatible with our result at $\SI{1.5}{Tesla}$. Conversely, from Fig.\ \ref{fig-fourier} (b), we extract a value $\alpha=2.9 \pm 0.1$. This agrees very well with previous straight-line CDW-based predictions of $2.7$ \cite{Goerbig2004a} and $2.8$ \cite{Fogler1996d}.
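For reference, the fitted relation can be evaluated numerically as in the following simple sketch (SI constants; the printed values are of the order of the periods quoted above):
\begin{verbatim}
import numpy as np
hbar, e = 1.054571817e-34, 1.602176634e-19   # SI units

def stripe_period(B, n=3, alpha=2.9):
    l_B = np.sqrt(hbar / (e * B))            # magnetic length (m)
    R_c = l_B * np.sqrt(2 * n + 1)           # cyclotron radius, LL index n
    return alpha * R_c

for B in (1.5, 6.0):                         # Tesla
    print(f"B = {B} T: d = {stripe_period(B) * 1e9:.0f} nm")
# -> about 161 nm and 80 nm
\end{verbatim}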
Friess et al.\ \cite{Friess2014} investigated the Knight shift in nuclear magnetic resonance (NMR) spectra to demonstrate the co-existence of regions with different spin polarization due to the periodic variation of the filling factor in stripe and bubble phases. Their method provides direct information about the area fractions, although there is no direct information about geometry and periodicity. In order to extract also microscopic details of the structural information, they use a semiclassical model \cite{Tiemann2012,Friess2014}
based on superpositions of the \emph{single} electron densities obtained from the Landau basis functions.
In our case, we can similarly model the NMR intensity as
$I_{\nu}(f)=\int \mathcal{G}\{f-[f_{0,\nu-1}-\nu(\vec{r})\cdot K_{\text{max},\nu}]\}\ dr^2$ with the Gaussian $\mathcal{G}$ describing the absorption spectrum of individual nuclei \cite{Tiemann2012}, $f$ the NMR frequency,
$K_\text{max}$ the maximal Knight shift for the fully spin-polarized LL at odd $\nu$ and $f_0$ the frequency of the non-shifted NMR line for the non-spin-polarized situation at even filling factor $\nu-1$. Numerically, $I_{\nu}(f)$ is calculated by evaluating our \emph{interacting} $\nu(\vec{r})$ at each $\vec{r}$ and summing over all points of a typically $229 \times 229$ grid.
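A schematic implementation of this spectral sum (our sketch with arbitrary toy parameters; in the actual evaluation the input $\nu(\vec{r})$ comes from the HF calculation) is:
\begin{verbatim}
import numpy as np

def nmr_intensity(freqs, nu_local, f0, K_max, width):
    # one Gaussian line per grid point, Knight-shifted by the local
    # filling nu_local(r) of the polarized level
    shift = f0 - nu_local.ravel()[None, :] * K_max
    return np.exp(-(freqs[:, None] - shift) ** 2
                  / (2 * width ** 2)).sum(axis=1)

# toy stripe pattern: alternating full and empty columns, 229 x 229 grid
nu = np.zeros((229, 229)); nu[:, ::2] = 1.0
freqs = np.linspace(-20e3, 5e3, 200)        # Hz, arbitrary scale
I = nmr_intensity(freqs, nu, f0=0.0, K_max=15e3, width=1.5e3)
# two peaks: at f0 (nu = 0 regions) and at f0 - K_max (nu = 1 regions)
\end{verbatim}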
In Fig.\ \ref{fig-NMR}, we show that $I_{\nu}(f)$ exhibits features that red-shift to lower frequencies when increasing $\nu$, e.g., from the non-spin polarized $\nu=4$ to a fully spin polarized $\nu=5$ (see supplement \cite{supplement} for $\nu=2 \rightarrow 3$).
\begin{figure}[bt]
(a)\hspace*{0ex}\includegraphics[width=0.34\columnwidth,clip,trim=120 40 60 60]{Fig4a_NMRspec00075.png}
(b)\vspace*{-1ex}\hspace*{0ex}\includegraphics[width=0.54\columnwidth,clip=true,trim=20 70 0 0]{Fig4b_NMR_fein2.png} \\
\caption{
\label{fig-NMR}
(a) Simulation results of $I_{\nu}(f)$ using our HF results for $\nu_{\downarrow}(\vec{r})$ as input. The traces are shifted vertically for better visibility and $\nu$ is varied in steps of $\Delta\nu=0.1$.
(b) Color plot of $I_{\nu}(f)$ for very fine resolution $\Delta\nu=0.01$. The color legend gives the numerical values of $I_{\nu}(f)$.
}
\end{figure}
At intermediate filling factors the spectral response splits, mainly into double/triple peak structures. This indicates the co-existence of regions with different filling factor as expected from a stripe or bubble like electron distribution (see supplement \cite{supplement}). Around $\nu \sim 4.4$ one can recognise \emph{three} peaks with changing weights in the superposition while varying the total filling factor. This is in good agreement with the experimental results \cite{Friess2014}, but naturally less so when compared to a single-particle modelling.
Intuitively, the driving force for the formation of bubbles and stripes can be understood as follows:
the Hund's rule behaviour causes a $g$-factor enhancement that maintains the tendency to fill up just one spin-level as much as possible before starting to fill up the next spin-level. When passing from outside a stripe/bubble (e.g.\ $\nu = 4.0$, not spin polarized) to inside (e.g.\ $\nu = 5.0$, fully spin polarized), the $g$-factor enhancement drastically increases. Inside the stripe/bubble the spin-levels are strongly pushed apart, corresponding to a significantly lower energy of the occupied lower spin state, while outside there is minimal $g$-factor enhancement at even filling factor $\nu = 4.0$. In terms of an effective single particle picture, the electrons in the stripes/bubbles encounter a potential well established by the local variation of the $g$-factor enhancement, which dominates over the repulsive Hartree interaction. From a qualitative point of view, the stripes can be understood as leaky non-coupled electron wave guides that consist of self-assembled one-dimensional potential wells, and the leakage shows up as the boundary region as discussed before.
In conclusion, we have computed the microscopic picture of the stripe and bubble phases in the IQH regime at high LLs and in weak disorder. Our results rely on spatially-resolved, self-consistent HF calculations of nearly macroscopic sizes of $\mathcal{O}(\SI{1}{\mu m^2})$ and, for transport calculations, are coupled to device contacts with finite currents.
The existence of microscopic stripes and bubbles at weak disorder is thus confirmed. Their spatial features show intriguing extended oscillations along the stripes and surrounding the bubbles. These are clearly due to the structure of the underlying Landau states.
We find that the stripe period scales with ${B}^{-1/2}$ as expected and agrees in detail with previous experimental measurements.
Overall, together with results from HF calculations in the strong disorder regime \cite{Sohrmann2007,OswaldPRB2017,OswaldEPL2017}, this shows that the IQH regime can now be described, spatially resolved and with high accuracy, from microscopic to near-macroscopic length scales. Our results shine new light on the understanding of the microscopic picture of the IQH effect
and demonstrate the pervasive role of many-particle physics in quantum Hall physics. The demonstrated Hund's rule behavior in the context of the $g$-factor enhancement allows one to incorporate the exchange interaction into an intuitive understanding of the major effects driving weakly and strongly disordered quantum Hall systems.
This work received funding by the CY Initiative of Excellence (grant "Investissements d'Avenir" ANR-16-IDEX-0008) and developed during RAR's stay at the CY Advanced Studies, whose support is gratefully acknowledged. We thank Warwick's Scientific Computing Research Technology Platform for computing time and support. UK research data statement: Data accompanying this publication are available from the corresponding authors.
\bibliographystyle{apsrev4-1}
\section{Dispersion for the clean case}
In Fig.\ \ref{fig-bubble_strips} we show the variation in $\nu(\vec{r})$ for three different densities at fixed magnetic field $B$. These results are identical to Fig.\ 1 of the main text for $\nu_{\downarrow}(\vec{r})$ but also show $\nu_{\uparrow}(\vec{r})$ in the left column. The choice of colors is as in Refs.\ \cite{SOswaldEPL2017,SOswaldPRB2017}.
\begin{figure*}[tb]
(a) \includegraphics[width=0.95\columnwidth]{sFig1aL_CD_nu_6200_sp1.jpg
\hspace*{-0.8\columnwidth} $\uparrow$ \hspace{0.8\columnwidth}$\downarrow$
\includegraphics[width=0.95\columnwidth]{sFig1aR_CD_nu_6200_sp2.jpg}\\% Here is how to import EPS art
(b) \includegraphics[width=0.95\columnwidth]{sFig1bL_CD_nu_6548_sp1.jpg
\hspace*{-0.8\columnwidth} $\uparrow$ \hspace{0.8\columnwidth}$\downarrow$
\includegraphics[width=0.95\columnwidth]{sFig1bR_CD_nu_6548_sp2.jpg}\\% Here is how to import EPS art
(c) \includegraphics[width=0.95\columnwidth]{sFig1cL_CD_nu_4535_sp1.jpg
\hspace*{-0.8\columnwidth} $\uparrow$ \hspace{0.8\columnwidth}$\downarrow$
\includegraphics[width=0.95\columnwidth]{sFig1cR_CD_nu_4535_sp2.jpg
\caption{
\label{fig-bubble_strips}
Lateral carrier density distribution mapped on the filling factor scale $\nu(\vec{r})$ for different total filling factors (a) $\nu = 6.20$, (b) $\nu = 6.54$ and (c) $\nu = 4.54$. The left and right columns contains results for $\nu_\uparrow$ and $\nu_\downarrow$, respectively.
The color shades represent the filling factor range, where
green indicates the second LL for $\nu_{\downarrow,\uparrow} = 1 \rightarrow 2$, red the third LL, light blue the fourth and yellow the fifth. These colours are as in Refs.\ \onlinecite{SOswaldEPL2017,SOswaldPRB2017} and the values indicated for $\rho$ and $B$ denote the electron density in units of $\SI{e11}{cm^{-2}}$ and the magnetic field in units of $\SI{1}{Tesla}$, respectively.
}
\end{figure*}
In Fig.\ \ref{CD_nu_450-Sx-HH-00}(a) and (c) one can see that for pure Hartree interaction there is no stripe formation. The density modulation in $\nu(\vec{r})$ is much less than in Figs.\ \ref{fig-nur} and \ref{fig-bubble_strips} and roughly follows the random potential shown in Fig.\ \ref{random_pot}. Furthermore, the charge density modulation in the spin-up and spin-down levels "repel" each other due to the Hartree interaction.
In Fig.\ \ref{CD_nu_450-Sx-HH-00}(b) and (d) we show the situation without interaction. Clearly, there is also no stripe formation. The $\nu(\vec{r})$ modulation rather closely follows the random potential of Fig.\ \ref{random_pot}. The charge densities in the spin-up and spin-down levels do not influence each other and follow the disorder potential nearly identically because of the missing interaction.
\begin{figure*}[tb]
\mbox{ } \hfill Hartree \hfill non-interacting \hfill \mbox{ }\\
(a) $\uparrow$ \includegraphics[width=0.45\textwidth]{CD_nu_450S2-HH_.png}
(b) $\uparrow$\includegraphics[width=0.45\textwidth]{CD_nu_450S2-00_.png}\\
(c) $\downarrow$ \includegraphics[width=0.45\textwidth]{CD_nu_450S1-HH_.png}
(d) $\downarrow$\includegraphics[width=0.45\textwidth]{CD_nu_450S1-00_.png}
\caption{
\label{CD_nu_450-Sx-HH-00}
Lateral charge density of the partly filled top Landau levels at total filling factor $\nu=4.5$. The top row with (a) and (b) gives the spin-up levels while the bottom row shows the spin-down levels in (c) and (d). The left column with (a) and (c) has been calculated for pure Hartree interaction while the right column with (b) and (d) shows a non-interacting situation. The color shades represent the filling factor $\nu(\vec{r})$ as given in the scales and the lines denote equal heights in $\nu(\vec{r})$.
}
\end{figure*}
\begin{figure*}[tb]
\includegraphics[width=0.95\textwidth]{random_pot.png}
\caption{
\label{random_pot}
Lateral random disorder potential $V(\vec{r})$ visualized in a false color plot. The lines denote equipotentials while the potential energies are indicated by the colors as in the color scale provided.
}
\end{figure*}
Fig.\ \ref{transport_Rxy} complements Fig.\ \ref{transport} by showing in addition the Hall resistance $R_{xy}$.
\begin{figure*}[tb]
\includegraphics[width=0.95\textwidth]{transport_Rxy.png}
\caption{
\label{transport_Rxy}
Hall resistance $R_{xy}$ for the longitudinal resistance $R_{xx}$ shown in Fig.\ 2 with the same set of parameters.
}
\end{figure*}
In order to show that the presence of the remnants of the LL wave functions around each stripe is significant, we perform the calculations of $I_{\nu}(f)$ for three test patterns for $\nu(\vec{r})$. The results are given in Fig.\ \ref{fig-NMR-test}. We find that only the variation given by $\nu(\vec{r})$ as calculated in HF can reproduce essential global features of the experimental NMR results presented in Ref.\ \cite{SFriess2014}.
\begin{figure*}[tb]
\includegraphics[width=0.95\textwidth]{sFig2a_test-pattern1.png}
\includegraphics[width=0.95\textwidth]{sFig2b_test-pattern2.png}
\includegraphics[width=0.95\textwidth]{sFig2c_test-pattern3.png}
\includegraphics[width=0.95\textwidth]{sFig2d_test-pattern4.png}
\caption{
\label{fig-NMR-test}
Calculations of the NMR intensity $I_{\nu}(f)$ for 4 different stripe-like variations of $\nu(\vec{r})$ at $\nu=4.5$. Rows 1--4 correspond to (a) a simple square modulation, (b) a sinusoidal modulation, (c) a sinusoidal variation with left and right shoulders, and (d) the modulation as computed from HF. The first column shows the spatial variations in $\nu(\vec{r})$ as given by the color scales. The second column represents a typical cross-section for each situation and the third column shows the estimated NMR intensity $I_{4.5}(f)$.
}
\end{figure*}
In Fig.\ \ref{fig-NMR-stripes-bubbles} we show the behavior of $I_{\nu}(f)$ for stripes and bubble-like charge density waves. As in Fig.\ \ref{fig-NMR-test}, the HF results for $\nu(\vec{r})$ lead to a reasonable qualitative agreement with the non-interacting model used in Ref.\ \cite{SFriess2014}. Nevertheless, the details around, e.g., $\nu=2.5$ are rather different, highlighting the importance of interactions.
\begin{figure*}[tb]
\mbox{ } \hfill $\nu=2$--$3$ \hfill $\nu=4$--$5$ \hfill \mbox{ }\\
(a)\includegraphics[width=0.47\textwidth]{NMR_fein-2-3square.png}
(b)\includegraphics[width=0.47\textwidth]{NMR_fein-4-5square.png}\\
\mbox{ } \hfill $\nu=2.5$ \hfill $\nu=4.5$ \hfill \mbox{ }\\
(c)\includegraphics[width=0.47\textwidth]{CD_nu_250_.png}
(d)\includegraphics[width=0.47\textwidth]{CD_nu_450_.png}
\caption{
\label{fig-NMR-stripes-bubbles}
NMR intensities $I_{\nu}(f)$ for (a) $\nu=2-3$ and (b) $\nu=4-5$ and local $\nu(\vec{r})$ at (c) $\nu=2.5$ and (d) $\nu=4.5$ in left and right columns, respectively. (b+d) The right column reproduces results already shown in Figs.\ \ref{fig-nur} and \ref{fig-NMR}. The color shades represent $I_{\nu}(f)$ and $\nu(\vec{r})$ as given by the scales. Lines in (c+d) connect equal height in $\nu(\vec{r})$.
}
\end{figure*}
We note that normally stripes appear only starting with filling factor $\nu=4.5$. This is known also experimentally, but for experimental reasons the authors of Ref.\ \onlinecite{SFriess2014} could not go to that filling factor. Instead, they used filling factor $\nu=2.5$ and forced, by using an in-plane component of the magnetic field, the electron system to form a stripe pattern. Clearly, there is no need for our simulations to also model this experimental "trick".
In order to compare the effect of stripe patterns on the NMR Knight shift, we therefore use the stripe pattern in the "correct" range $\nu=4$--$5$. The result in Fig.\ \ref{fig-NMR-stripes-bubbles} (b) has striking similarities but seems indeed a bit richer in features than the experimental curve for $\nu=2$--$3$.
However, since the Knight shift spectrum loses its local information due to the spatial integration, we can also evaluate the range $\nu = 2$--$3$ as shown in Fig.\ \ref{fig-NMR-stripes-bubbles}. Indeed, the agreement with the experiments of Ref.\ \onlinecite{SFriess2014} becomes even better in this filling factor range. We can still see in total 3 peaks, two of them clearly separated and a third one as a shoulder on the high-frequency flank, just as shown for the experiments in Fig. 2b of Ref.\ \onlinecite{SFriess2014}. This makes the agreement with the experiments almost perfect, as shown in Fig.\ \ref{fig:NMRspec-2-3}, albeit not with the non-interacting modelling of Fig.\ 2c \cite{SFriess2014}. Filling factor $6$--$7$ requires much more computing time and, although a most interesting question, there are currently no experiments available for comparison.
\begin{figure*}[tb]
\includegraphics[width=0.95\textwidth]{NMRspec-2-3.png}
\caption{
\label{fig:NMRspec-2-3}
NMR intensities $I_{\nu}(f)$ as shown in Fig.\ \ref{fig-NMR}(a) but for the filling factor range $\nu=2 - 3$.
}
\end{figure*}
\section*{Introduction}
Quasi-hereditary algebras were introduced by Cline, Parshall and Scott to study highest weight categories which arise in the representation theory of semisimple complex Lie algebras and algebraic groups \cite{{CPS}, {S}}.
Dlab and Ringel intensely studied quasi-hereditary algebras from the viewpoint of the representation theory of artin algebras \cite{{DR2}, {DR}, {DR6}}.
Motivated by Iyama's finiteness theorem, Ringel introduced the notion of left-strongly quasi-hereditary algebras in terms of highest weight categories \cite{R}.
One of the advantages of left-strongly quasi-hereditary algebras is that they have a better upper bound for global dimension than general quasi-hereditary algebras.
Moreover, Ringel studied a special class of left-strongly quasi-hereditary algebras called strongly quasi-hereditary algebras.
Let $A$ be an artin algebra with Loewy length $m$.
In \cite{A}, Auslander studied the endomorphism algebra $B:=\End_{A}(\bigoplus_{j=1}^{m}A/J(A)^{j})$ and proved that $B$ has finite global dimension.
Furthermore, Dlab and Ringel showed that $B$ is a quasi-hereditary algebra \cite{DR3}.
Hence $B$ is called an Auslander--Dlab--Ringel (ADR) algebra.
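For instance, if $A = K[x]/(x^m)$, then $\bigoplus_{j=1}^{m}A/J(A)^j \cong \bigoplus_{j=1}^{m}K[x]/(x^j)$ is an additive generator of $\mod A$, and hence the ADR algebra of $A$ coincides with the Auslander algebra of $A$.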
Recently, Conde gave a left-strongly quasi-hereditary structure on ADR algebras \cite{C}.
Moreover, ADR algebras were studied in \cite{{C2}, {CEr}} and appeared in \cite{{Co}, {KK}}.
In this paper, we study ADR algebras of semilocal modules introduced by Lin and Xi \cite{LX}.
Recall that a module $M$ is called semilocal if $M$ is a direct sum of modules which have a simple top.
Since any artin algebra is a semilocal module, the ADR algebras of semilocal modules are a generalization of the original ADR algebras.
In \cite{LX}, they proved that ADR algebras of semilocal modules are quasi-hereditary.
We refine this result in Section \ref{ADR}.
\begin{mainthm} [Theorem \ref{thm1}] \label{thma}
The Auslander--Dlab--Ringel algebra of any semilocal module is left-strongly quasi-hereditary.
\end{mainthm}
As an application, we give a tight upper bound for the global dimension of an ADR algebra (see Corollary \ref{cor}).
In Section \ref{SQHADR}, we study a connection between ADR algebras and strongly quasi-hereditary algebras.
An ADR algebra is a left-strongly quasi-hereditary algebra but not necessarily strongly quasi-hereditary.
We give characterizations of original ADR algebras to be strongly quasi-hereditary.
\begin{mainthm} [Theorem \ref{thm2}] \label{thmb}
Let $A$ be an artin algebra with Loewy length $m \geq 2$ and $J$ the Jacobson radical of $A$.
Let $B:=\End_A(\bigoplus_{j=1}^{m} A/J^j)$ be the ADR algebra of $A$.
Then the following statements are equivalent.
\begin{itemize}
\item[{\rm (i)}] $B$ is a strongly quasi-hereditary algebra.
\item[{\rm (ii)}] $\gl B =2$.
\item[{\rm (iii)}] $J \in \add (\bigoplus_{j=1}^{m} A/J^j)$.
\end{itemize}
\end{mainthm}
It is known that if $B$ is strongly quasi-hereditary, then the global dimension of $B$ is at most two \cite[Proposition A.2]{R}.
We note that algebras with global dimension at most two are not necessarily strongly quasi-hereditary.
However, for original ADR algebras, the converse is also true.
\section{Preliminaries}
\medskip
\subsection*{Notation}
Let $A$ be an artin algebra, $J(A)$ the Jacobson radical of $A$ and $\mathrm{D}$ the Matlis dual.
We denote by $\gl A$ the global dimension of $A$.
We fix a complete set of representatives of isomorphism classes of simple $A$-modules $\{ S(i) \; | \; i \in I \}$.
We denote by $P(i)$ the projective cover of $S(i)$ and $E(i)$ the injective hull of $S(i)$ for any $i \in I$.
We write $\mod A$ for the category of finitely generated right $A$-modules and $\proj A$ for the full subcategory of $\mod A$ consisting of finitely generated projective $A$-modules.
For $M \in \mod A$, we denote by $\add M$ the full subcategory of $\mod A$ whose objects are direct summands of finite direct sums of $M$.
The composition of two maps $f:X \to Y$ and $g: Y \to Z$ is denoted by $g \circ f$.
For a quiver $Q$, we denote by $\alpha \beta$ the composition of two arrows $\alpha: x \to y$ and $\beta: y \to z$ in $Q$.
We denote by $K$ an algebraically closed field.
\medskip
In this section, we quickly review a relationship between strongly quasi-hereditary algebras and rejective chains.
For more detail, we refer to \cite{{I2}, {T}}.
We start this section with recalling the definition of left-strongly quasi-hereditary algebras.
Let $\leq$ be a partial order on the index set $I$ of simple $A$-modules.
For each $i \in I$, we denote by $\nabla(i)$ the maximal submodule of $E(i)$ whose composition factors have the form $S(j)$ for some $j \leq i$.
The module $\nabla(i)$ is called the {\it costandard module} corresponding to $i$.
Let $\nabla:= \{ \nabla(i) \; | \; i \in I \}$ be the set of costandard modules.
We denote by $\mathcal{F}(\nabla)$ the full subcategory of $\mod A$ whose objects are the modules which have a $\nabla$-filtration, that is, $M \in \mathcal{F}(\nabla)$ if and only if there exists a chain of submodules
\[
M=M_0 \supseteq M_1 \supseteq \cdots \supseteq M_l=0
\]
such that $M_i/M_{i+1}$ is isomorphic to a module in $\nabla$.
For $M \in \mathcal{F}(\nabla)$, we denote by $(M: \nabla(i))$ the filtration multiplicity of $\nabla(i)$, which does not depend on the choice of $\nabla$-filtrations.
\begin{definition}[{\cite[\S 4]{R}}]
Let $A$ be an artin algebra and $\leq$ a partial order on $I$.
\begin{itemize}
\item[{\rm (1)}] A pair $(A, \leq)$ (or simply $A$) is called {\it left-strongly quasi-hereditary} if there exists a short exact sequence
\begin{equation*}
0 \to \nabla(i) \to E(i) \to E(i) /\nabla(i) \to 0
\end{equation*}
for any $i \in I$ with the following properties:
\begin{enumerate}
\item [{\rm (a)}] $E(i)/\nabla(i) \in \mathcal{F}(\nabla)$ for any $i \in I$;
\item [{\rm (b)}] if $(E(i)/\nabla(i) :\nabla(j)) \not= 0$, then we have $i < j$;
\item [{\rm (c)}] $E(i)/\nabla(i)$ is an injective $A$-module, or equivalently, $\nabla(i)$ has injective dimension at most one.
\end{enumerate}
\item[{\rm (2)}] We say that a pair $(A, \leq)$ (or simply $A$) is {\it right-strongly quasi-hereditary} if $(A^{\op}, \leq)$ is left-strongly quasi-hereditary.
\item[{\rm (3)}] We say that a pair $(A, \leq)$ (or simply $A$) is {\it strongly quasi-hereditary} if $(A, \leq)$ is left-strongly quasi-hereditary and right-strongly quasi-hereditary.
\end{itemize}
\end{definition}
By definition, strongly quasi-hereditary algebras are left-strongly quasi-hereditary algebras.
Since a pair $(A, \leq)$ satisfying the conditions (a) and (b) is a quasi-hereditary algebra, left-strongly quasi-hereditary algebras are quasi-hereditary.
Left-strongly (resp.\ right-strongly) quasi-hereditary algebras are characterized by total left (resp.\ right) rejective chains, which are chains of certain left (resp.\ right) rejective subcategories.
We recall the notion of left (resp.\ right) rejective subcategories.
Let $\mathcal{C}$ be an additive category, and put $\mathcal{C}(X, Y):=\Hom_{\mathcal{C}}(X,Y)$.
In this section, {\it we assume that any subcategory is full and closed under isomorphisms, direct sums and direct summands.}
\begin{definition} [{\cite[2.1(1)]{I}}] \label{rejsub}
Let $\mathcal{C}$ be an additive category.
A subcategory $\mathcal{C}'$ of $\mathcal{C}$ is called
\begin{itemize}
\item[{\rm (1)}] a \emph{left $($resp.\ right$)$ rejective subcategory} of $\mathcal{C}$ if, for any $X\in\mathcal{C}$, there exists an epic left $($resp.\ monic right$)$ $\mathcal{C}'$-approximation $f^X \in \mathcal{C}\left(X,Y\right)$ $($resp.\ $f_X \in \mathcal{C}\left(Y,X\right))$ of $X$,
\item[{\rm (2)}] a \emph{rejective subcategory} of $\mathcal{C}$ if $\mathcal{C}'$ is a left and right rejective subcategory of $\mathcal{C}$.
\end{itemize}
\end{definition}
To define a total left (resp.\ right) rejective chain, we need the notion of cosemisimple subcategories.
Let $\mathcal{J}_{\mathcal{C}}$ be the Jacobson radical of $\mathcal{C}$.
For a subcategory $\mathcal{C}'$ of $\mathcal{C}$, we denote by $[\mathcal{C}']$ the ideal of $\mathcal{C}$ consisting of morphisms which factor through some object of $\mathcal{C}'$, and by $\mathcal{C}/[\mathcal{C}']$ the factor category (\emph{i.e.}, $\mathit{ob}(\mathcal{C}/[\mathcal{C}']):=\mathit{ob}(\mathcal{C})$ and $(\mathcal{C}/[\mathcal{C}'])(X,Y):= \mathcal{C}(X,Y)/[\mathcal{C}'](X,Y)$ for any $X, Y \in \mathcal{C}$).
Recall that an additive category $\mathcal{C}$ is called a {\it Krull--Schmidt} category if any object of $\mathcal{C}$ is isomorphic to a finite direct sum of objects whose endomorphism rings are local.
We denote by $\ind \mathcal{C}$ the set of isoclasses of indecomposable objects in $\mathcal{C}$.
\begin{definition}
Let $\mathcal{C}$ be a Krull--Schmidt category.
A subcategory $\mathcal{C}'$ of $\mathcal{C}$ is called {\it cosemisimple} in $\mathcal{C}$ if $\mathcal{J}_{\mathcal{C}/[\mathcal{C'}]}=0$ holds.
\end{definition}
We give a characterization of cosemisimple left rejective subcategories.
\begin{proposition} [{\cite[1.5.1]{I2}}] \label{crrs}
Let $\mathcal{C}$ be a Krull--Schmidt category and let $\mathcal{C}'$ be a subcategory of $\mathcal{C}$.
Then $\mathcal{C}'$ is a cosemisimple left $($resp.\ right$)$ rejective subcategory of $\mathcal{C}$ if and only if, for any $X \in \ind \mathcal{C} \setminus \ind \mathcal{C}'$,
there exists a morphism $\varphi: X \to Y$ $($resp.\ $\varphi : Y \to X)$ such that $Y \in \mathcal{C}'$ and $\mathcal{C}(Y, -) \xrightarrow{-\circ \varphi} \mathcal{J}_{\mathcal{C}}(X, -)$ $($resp.\ $\mathcal{C}(-, Y) \xrightarrow{\varphi \circ -} \mathcal{J}_{\mathcal{C}}(-, X))$ is an isomorphism on $\mathcal{C}$.
\end{proposition}
Now, we introduce the following key notion in this paper.
\begin{definition}[{\cite[2.1(2)]{I}}] \label{rejch}
Let $\mathcal{C}$ be a Krull--Schmidt category.
A chain
\begin{equation*}
\mathcal{C}= \mathcal{C}_0 \supset \mathcal{C}_1 \supset \cdots \supset \mathcal{C}_n =0
\end{equation*}
of subcategories of $\mathcal{C}$ is called
\begin{itemize}
\item[{\rm (1)}] a \emph{rejective chain} if $\mathcal{C}_i$ is a cosemisimple rejective subcategory of $\mathcal{C}_{i-1}$ for $1 \leq i \leq n$,
\item[{\rm (2)}] a \emph{total left $(${\rm resp.}\ right$)$ rejective chain} if the following conditions hold for $1 \leq i \leq n$:
\begin{enumerate}
\item[(a)] $\mathcal{C}_i$ is a left (resp.\ right) rejective subcategory of $\mathcal{C}$;
\item[(b)] $\mathcal{C}_{i}$ is a cosemisimple subcategory of $\mathcal{C}_{i-1}$.
\end{enumerate}
\end{itemize}
\end{definition}
The following proposition gives a connection between left-strongly quasi-hereditary algebras and total left rejective chains.
\begin{proposition} [{\cite[Theorem 3.22]{T}}] \label{thm0}
Let $A$ be an artin algebra.
Let $M$ be a right $A$-module and $B:= \End_A(M)$.
Then the following conditions are equivalent.
\begin{itemize}
\item[{\rm (i)}] $B$ is a left-strongly $($resp.\ right-strongly$)$ quasi-hereditary algebra.
\item[{\rm (ii)}] $\proj B$ has a total left $($resp.\ right$)$ rejective chain.
\item[{\rm (iii)}] $\add M$ has a total left $($resp.\ right$)$ rejective chain.
\end{itemize}
In particular, $B$ is strongly quasi-hereditary if and only if $\add M$ has a rejective chain.
\end{proposition}
We end this section with recalling a special total left rejective chain, which plays an important role in this paper.
\begin{definition}[{\cite[Definition 2.2]{I2}}]
Let $A$ be an artin algebra and $\mathcal{C}$ a subcategory of $\mod A$.
A chain
\begin{equation*}
\mathcal{C}= \mathcal{C}_0 \supset \mathcal{C}_1 \supset \cdots \supset \mathcal{C}_n =0
\end{equation*}
of subcategories of $\mathcal{C}$ is called an \emph{$A$-total left $(${\rm resp.}\ right$)$ rejective chain of length $n$} if the following conditions hold for $1 \leq i \leq n$:
\begin{enumerate}
\item[(a)] for any $X \in \mathcal{C}_{i-1}$, there exists an epic $(${\rm resp.}\ monic$)$ in $\mod A$ left $(${\rm resp.}\ right$)$ $\mathcal{C}_i$-approximation of $X$;
\item[(b)] $\mathcal{C}_{i}$ is a cosemisimple subcategory of $\mathcal{C}_{i-1}$.
\end{enumerate}
\end{definition}
Every $A$-total left rejective chain of $\mathcal{C}$ is a total left rejective chain.
Moreover, if $\mathrm{D}A \in \mathcal{C}$, then the converse also holds.
We can give an upper bound for global dimension by using $A$-total left rejective chains.
\begin{proposition}[{\cite[Theorem 2.2.2]{I2}}] \label{iygl}
Let $A$ be an artin algebra and $M$ a right $A$-module.
If $\add M$ has an $A$-total left $($resp.\ right$)$ rejective chain of length $n>0$, then $\gl \End_A(M) \leq n$ holds.
\end{proposition}
\section{ADR algebras of semilocal modules} \label{ADR}
The aim of this section is to show Theorem \ref{thma}.
First, we recall the definition of semilocal modules.
\begin{definition}
Let $M$ be an $A$-module.
\begin{itemize}
\item[(1)] $M$ is called a {\it local} module if $\top M$ is isomorphic to a simple $A$-module.
\item[(2)] $M$ is called a {\it semilocal} module if $M$ is a direct sum of local modules.
\end{itemize}
\end{definition}
Clearly, any local module is indecomposable and any projective module is semilocal.
Throughout this section, suppose that $M$ is a semilocal module with Loewy length $\ell \ell (M)=m$.
We denote by $\widetilde{M}$ the basic module of $\oplus_{i=1}^{m} M/MJ(A)^i$ and call $\End_A(\widetilde{M})$ the {\it Auslander--Dlab--Ringel algebra} (ADR algebra) of $M$.
Note that $\End_A(\widetilde{A})$ is an ADR algebra in the sense of \cite{C}.
Lin and Xi showed that the ADR algebras of semilocal modules are quasi-hereditary (see \cite[Theorem]{LX}).
In this section, we refine this result.
\begin{theorem}\label{thm1}
The ADR algebra of any semilocal module is left-strongly quasi-hereditary.
\end{theorem}
Observe that Theorem \ref{thm1} gives a better upper bound for global dimension of ADR algebras (see Remark \ref{rem}).
In the following, we give a proof of Theorem \ref{thm1}.
Let $\mathsf{F}$ be the set of pairwise non-isomorphic indecomposable direct summands of $\widetilde{M}$ and $\mathsf{F}_i$ the subset of $\mathsf{F}$ consisting of all modules with Loewy length $m-i$.
We denote by $\mathsf{F}_{i, 1}$ the subset of $\mathsf{F}_i$ consisting of all modules $X$ admitting no surjection in $\mathcal{J}_{\mod A}(X,N)$ for any module $N \in \mathsf{F}_i$. For any integer $j>1$, we inductively define the subsets $\mathsf{F}_{i,j}$ of $\mathsf{F}_{i}$ as follows: $\mathsf{F}_{i, j}$ consists of all modules $X\in\mathsf{F}_{i}\setminus \bigcup_{1 \leq k \leq j-1}\mathsf{F}_{i,k}$ admitting no surjection in $\mathcal{J}_{\mod A}(X,N)$ for any module $N\in\mathsf{F}_{i}\setminus \bigcup_{1 \leq k \leq j-1}\mathsf{F}_{i,k}$.
We set $n_{i}:=\min\{j\mid \mathsf{F}_{i}=\bigcup_{1 \leq k \leq j}\mathsf{F}_{i,k}\}$ and $n_{M}:=\sum_{i=0}^{m-1} n_i$.
For $0 \leq i \leq m-1$ and $1 \leq j \leq n_i$, we set
\begin{align}
\mathsf{F}_{>(i,j)}&:=\mathsf{F}\setminus ((\cup_{-1\le k \le i-1}\mathsf{F}_{k})\cup (\cup_{1\le l \le j}\mathsf{F}_{i,l})),\notag \\
\mathcal{C}_{i,j} &:= \add \bigoplus_{N \in \mathsf{F}_{> (i,j)}} N, \notag
\end{align}
where $\mathsf{F}_{-1}:=\emptyset$.
Now, we display an example to explain how the subsets $\mathsf{F}_{i, j}$ are given.
\begin{example} \label{eg}
Let $A$ be the $K$-algebra defined by the quiver
\[
\xymatrix@=15pt{ 1 \ar[r] & 2 \ar[r] \ar [d] & 3 \\
& 4
}
\]
and $M:= P(1) \oplus P(1)/S(3) \oplus P(1)/S(4) \oplus P(2)/S(3)$.
We can easily check that $M$ is a semilocal module.
The ADR algebra $B$ of $M$ is given by the quiver
\[
\xymatrix@=15pt{ P(1)/S(4) \ar[r]^{a} & P(1) & P(1)/S(3) \ar[l]_{b} \ar[rd]^{c} \\
& P(1)/ P(1) J(A)^2 \ar[lu]^{d} \ar[ru]_{e} \ar[rd]_{f} & &P(2)/ S(3) \\
& S(1) \ar[u]^{g} & S(2) \ar [ru]_{h}
}
\]
with relations $da-eb, ec-fh$ and $gf$.
Then $\mathsf{F}_{0,1}=\{ P(1)/S(4), P(1)/S(3) \}$, $\mathsf{F}_{0,2}=\{ P(1) \}$, $\mathsf{F}_{1,1}=\{ P(1)/P(1)J(A)^2, P(2)/S(3) \}$, $\mathsf{F}_{2,1} = \{ S(1), S(2) \}$.
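In particular, $n_0 = 2$ and $n_1 = n_2 = 1$, so that $n_{M} = 4$.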
\end{example}
To prove Theorem \ref{thm1}, we first show the following proposition.
\begin{proposition} \label{prop}
Let $A$ be an artin algebra and $M$ a semilocal $A$-module.
Then $\add \widetilde{M}$ has the following $A$-total left rejective chain with length $n_{M}$.
\begin{equation*}
\add \widetilde{M} =:\mathcal{C}_{0,0}\supset \mathcal{C}_{0,1} \supset \cdots \supset \mathcal{C}_{0,n_0} \supset \mathcal{C}_{1,1} \supset \cdots \supset \mathcal{C}_{m-1,n_{m-1}}=0.
\end{equation*}
\end{proposition}
To show Proposition \ref{prop}, we need the following lemma.
\begin{lemma} \label{lem2}
For any $M' \in \mathsf{F}_{0,1}$, the canonical surjection $\rho : M' \twoheadrightarrow M'/M' J(A)^{m-1}$ induces an isomorphism
\begin{equation*}
\varphi : \Hom_A (M'/M' J(A)^{m-1}, \widetilde{M}) \xrightarrow{- \circ \rho} \mathcal{J}_{\mod A}(M', \widetilde{M}).
\end{equation*}
\end{lemma}
\begin{proof}
Since $\varphi$ is a well-defined injective map, it suffices to show that $\varphi$ is surjective.
Let $N$ be an indecomposable summand of $\widetilde{M}$ with Loewy length $k$ and let $f:M' \to N$ be any morphism in $\mathcal{J}_{\mod A}(M', N)$.
We claim that $f(M' J(A)^{m-1})=0$.
(i) Assume that $\top M' \not \cong \top N$ or $k=m$.
Then we have $\im f\subset NJ(A)$, and hence
\begin{align}
f(M' J(A)^{m-1})=f(M')J(A)^{m-1}\subset (NJ(A))J(A)^{m-1}=0. \notag
\end{align}
(ii) Assume that $\top M' \cong \top N$ and $k<m$.
Since $m-k>0$ holds, we obtain
\begin{align}
f(M' J(A)^{m-1})=f(M')J(A)^{m-1}\subset NJ(A)^{m-1}= (NJ(A)^{k})J(A)^{m-k-1}=0. \notag
\end{align}
Since $f(M' J(A)^{m-1})=0$ holds, there exists $g: M' /M' J(A)^{m-1}\to N$ such that $f=g \circ \rho$.
\begin{align}
\xymatrix{
0\ar[r]&M' J(A)^{m-1}\ar[r]\ar[rd]_{0}&M' \ar[r]^{\rho\hspace{10mm}}\ar[d]^{f}&M' /M' J(A)^{m-1}\ar[r]\ar@{-->}[dl]^{\exists{g}}&0\\
&&N&&
}\notag
\end{align}
Hence the assertion follows.
\end{proof}
Now, we are ready to prove Proposition \ref{prop}.
\begin{proof}[Proof of Proposition \ref{prop}]
We proceed by induction on $n_{M}$.
If $n_{M}=1$, then the assertion is clear.
Assume that $n_{M} >1$.
By Proposition \ref{crrs} and Lemma \ref{lem2}, $\mathcal{C}_{0,1}$ is a cosemisimple left rejective subcategory of $\add \widetilde{M}$.
Since $N:=M/(\oplus_{X \in \mathsf{F}_{0,1}} X) \oplus (\oplus_{X \in \mathsf{F}_{0,1}} X/ X J(A)^{m-1})$ is a semilocal module satisfying $\widetilde{N}=\widetilde{M}/\oplus_{X\in\mathsf{F}_{0,1}}X$ and $n_{N}<n_{M}$, we obtain that
\begin{align}
\add \widetilde{N}= \mathcal{C}_{0,1} \supset \cdots \supset \mathcal{C}_{0,n_{0}}\supset \mathcal{C}_{1,1} \supset \cdots \supset \mathcal{C}_{m-1, n_{m-1}} =0\notag
\end{align}
is an $A$-total left rejective chain by induction hypothesis.
Composing it with $\mathcal{C}_{0,0} \supset \mathcal{C}_{0,1}$, we obtain the desired $A$-total left rejective chain.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1}]
By Proposition \ref{thm0}, it is enough to show that $\add \widetilde{M}$ has a total left rejective chain.
Hence the assertion follows from Proposition \ref{prop}.
\end{proof}
We give some remarks on partial orders for left-strongly quasi-hereditary algebras.
\begin{remark}
We define two partial orders on the isomorphism classes of simple $B$-modules.
One is $\{ \mathsf{F}_{0,1} < \cdots < \mathsf{F}_{0,n_0} < \mathsf{F}_{1,1} < \cdots < \mathsf{F}_{m-1, n_{m-1}} \}$, called the {\it ADR order}.
Another one is $\{ \mathsf{F}_{0} < \mathsf{F}_{1} < \cdots < \mathsf{F}_{m-1} \}$, called the {\it length order}.
By Proposition \ref{prop}, ADR algebras of semilocal modules are left-strongly quasi-hereditary with respect to the ADR order.
On the other hand, Conde shows that original ADR algebras are left-strongly quasi-hereditary with respect to the length order \cite{C}.
Since, for an original ADR algebra, the length order coincides with the ADR order, we can recover Conde's result.
However, the ADR algebra of a semilocal module is not necessarily left-strongly quasi-hereditary with respect to the length order, as shown by the following example.
\end{remark}
\begin{example}
Let $A$ and $M$ be in Example \ref{eg}.
Then we can check that the ADR algebra $B$ of $M$ is left-strongly quasi-hereditary with respect to the ADR order
\begin{equation*}
\{ \mathsf{F}_{0,1} < \mathsf{F}_{0,2} < \mathsf{F}_{1,1} < \mathsf{F}_{2,1} \}.
\end{equation*}
However, we can also check that $B$ is not left-strongly quasi-hereditary with respect to the length order
\begin{align}
\{ \{P(1)/S(3), P(1)/S(4), P(1)\} < \{P(1)/P(1)J(A)^2, P(2)/S(3)\} < \{S(1), S(2)\} \}.\notag
\end{align}
\end{example}
As an application, we give an upper bound for global dimension of ADR algebras.
\begin{corollary} \label{cor}
Let $A$ be an artin algebra and $M$ a semilocal $A$-module.
Then
\begin{align}
\gl \End_{A}(\widetilde{M})\le n_{M}. \notag
\end{align}
\end{corollary}
\begin{proof}
By Proposition \ref{prop}, $\add\widetilde{M}$ has an $A$-total left rejective chain with length $n_{M}$.
Hence the assertion follows from Proposition \ref{iygl}.
\end{proof}
\begin{remark} \label{rem}
In \cite{LX}, Lin and Xi showed that the ADR algebra of a semilocal module $M$ is quasi-hereditary.
This implies $\gl \End_{A}(\widetilde{M})\le 2(n_{M}-1)$ by \cite[Statement 9]{DR}.
By Corollary \ref{cor}, we can obtain a better upper bound for global dimension of ADR algebras.
This can be seen by the following example.
\end{remark}
The following example shows that the upper bound for the global dimension in Corollary \ref{cor} is tight.
Let $n \geq 2$. Let $A$ be the $K$-algebra defined by the quiver
\[
\def\objectstyle{\scriptstyle}
\def\labelstyle{\scriptstyle}
\vcenter{
\hbox{
$
\xymatrix@C=6pt{
& & 1 \ar[lld] \ar[ld] \ar@{}[d]|(.6){\dots} \ar[rd] \ar[rrd]
& & \\
2 & 3 & \dots\dots & n-1 & n
}
$
}
}
\]
and $M$ a direct sum of all factor modules of $P(1)$. Clearly, $M$ is semilocal and $n_M=n$.
Let $B$ be its ADR algebra. Then we have
\begin{align*}
\gl B =
\begin{cases}
n-1 & (n \geq 3)\\
2 & (n=2).
\end{cases}
\end{align*}
Indeed, the assertion for $n=2$ clearly holds.
Assume $n \geq 3$.
It is easy to check that, for $X \in \mathsf{F}_{0,l}$ ($1 \leq l \leq n_0$),
\begin{align*}
\pd_{B^{\op}} \top (\Hom_A(X, \widetilde{M})) =l. \notag
\end{align*}
Thus we have
\begin{align*}
\max \{ \pd_{B^{\op}} \top (\Hom_A(X, \widetilde{M})) \mid X \in \mathsf{F} \} = n_0 = n_M -1. \notag
\end{align*}
Hence the assertion for $n \geq 3$ holds.
\section{Strongly quasi-hereditary ADR algebras} \label{SQHADR}
In this section, we prove Theorem \ref{thmb}.
We keep the notation of the previous section.
Throughout this section, $A$ is an artin algebra with Loewy length $m$ and $B:=\End_{A}(\widetilde{A})$ the ADR algebra of $A$.
Then $n_j =1$ holds for any $0 \leq j \leq m-1$; indeed, distinct modules in $\mathsf{F}_j$ have non-isomorphic tops, so any surjection in $\mathcal{J}_{\mod A}(X,N)$ with $X, N \in \mathsf{F}_j$ would be a surjective endomorphism, hence an isomorphism, which is impossible.
Hence we obtain the following $A$-total left rejective chain by Proposition \ref{prop}.
\begin{equation} \label{rej}
\add \widetilde{A} \supset \mathcal{C}_{0,1} \supset \mathcal{C}_{1,1} \supset \cdots \supset \mathcal{C}_{m-1,1} = 0.
\end{equation}
Note that if $m=1$, then $B$ is semisimple.
Hence we always assume $m\ge 2$ in the rest of this section.
\begin{theorem}\label{thm2}
Let $A$ be an artin algebra with Loewy length $m \geq 2$ and $B$ the ADR algebra of $A$.
Then the following statements are equivalent.
\begin{itemize}
\item[{\rm (i)}] $B$ is a strongly quasi-hereditary algebra.
\item[{\rm (ii)}] The chain \eqref{rej} is a rejective chain of $\add \widetilde{A}$.
\item[{\rm (iii)}] $\gl B =2$.
\item[{\rm (iv)}] $J(A) \in \add \widetilde{A}$.
\end{itemize}
\end{theorem}
To prove Theorem \ref{thm2}, we need the following lemma.
\begin{lemma}\label{lem3}
Let $A$ be an artin algebra.
If $P(i)J(A) \in \add \widetilde{A}$ for any $i \in I$, then $P(i)J(A)/P(i)J(A)^j \in \add \widetilde{A}$ for $1 \leq j \leq m$.
\end{lemma}
\begin{proof}
Since $P(i)J(A) \in \add \widetilde{A}$, we have $P(i)J(A) \cong \displaystyle{\bigoplus_{k, l} P(k)/P(k)J(A)^l}$.
For simplicity, we write $P(i)J(A) \cong P(k)/P(k)J(A)^l$.
Then we have $P(i)J(A)/P(i)J(A)^j \cong (P(k)/P(k)J(A)^l)/(P(k)J(A)^j/P(k)J(A)^l) \cong P(k)/P(k)J(A)^j \in \add \widetilde{A}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm2}]
(ii) $\Rightarrow$ (i): The assertion follows from Proposition \ref{thm0}.
(i) $\Rightarrow$ (iii): It follows from \cite[Proposition A.2]{R} that the global dimension of $B$ is at most two.
It is enough to show that there exists a $B$-module such that its projective dimension is two.
Let $S$ be a simple $A$-module.
Then we have the following short exact sequence.
\begin{equation*}
0 \to \mathcal{J}_{\mod A}(\widetilde{A},S) \to \Hom_A (\widetilde{A},S) \to \top \Hom_A(\widetilde{A},S) \to 0.
\end{equation*}
Assume that $\mathcal{J}_{\mod A}(\widetilde{A},S)$ is a projective right $B$-module.
Then there exists an $A$-module $Y \in \add \widetilde{A}$ such that $\mathcal{J}_{\mod A}(\widetilde{A},S) \cong \Hom_A(\widetilde{A},Y)$.
Since $S \in \add \widetilde{A}$, there exists a non-zero morphism $f : Y \to S$ such that $\Hom_A(\widetilde{A},f) : \Hom_A(\widetilde{A}, Y) \to \Hom_A(\widetilde{A}, S)$ is an injective map.
Since the functor $\Hom_{A}(\widetilde{A},-)$ is faithful, $f$ is an injective map.
Hence $f$ is an isomorphism.
This is a contradiction: it implies $\mathcal{J}_{\mod A}(\widetilde{A},S) \cong \Hom_A(\widetilde{A},S)$, whereas the short exact sequence above shows that these modules have different lengths, since $\top \Hom_A(\widetilde{A},S) \neq 0$.
Therefore, we obtain the assertion.
(iii) $\Leftrightarrow$ (iv): This follows from \cite[Proposition 2]{Sm}.
(iv) $\Rightarrow$ (ii):
First, we show that $\mathcal{C}_{0,1}$ is a cosemisimple rejective subcategory of $\add \widetilde{A}$.
By Proposition \ref{prop}, it is enough to show that $\mathcal{C}_{0,1}$ is a right rejective subcategory of $\add \widetilde{A}$.
For any $X \in \ind(\add \widetilde{A}) \setminus \ind(\mathcal{C}_{0,1})$, there exists an inclusion map $\varphi : XJ(A) \hookrightarrow X$ with $XJ(A) \in \mathcal{C}_{0,1}$ by the condition (iv).
Since $X$ is a projective $A$-module whose Loewy length coincides with the Loewy length of $A$, the map $\varphi$ induces an isomorphism
\begin{equation*}
\Hom_A(\widetilde{A}, XJ(A)) \xrightarrow{\varphi \circ -} \mathcal{J}_{\mod A}(\widetilde{A}, X).
\end{equation*}
It follows from Proposition \ref{crrs} that $\mathcal{C}_{0,1}$ is a cosemisimple right rejective subcategory of $\add \widetilde{A}$.
Hence we obtain that $\mathcal{C}_{0,1}$ is a cosemisimple rejective subcategory of $\add \widetilde{A}$.
Next, we prove that $\add \widetilde{A}$ has a rejective chain
\begin{equation*}
\add \widetilde{A} \supset \mathcal{C}_{0,1} \supset \mathcal{C}_{1,1} \supset \cdots \supset \mathcal{C}_{m-1,1} = 0
\end{equation*}
by induction on $m$.
If $m=2$, then the assertion holds.
Assume that $m \geq 3$.
Let $X \in \ind(\mathcal{C}_{0,1}) \setminus \ind(\mathcal{C}_{1,1})$.
Then $X=P(i)/P(i)J(A)^{m-1}$ for some $i \in I$ and we have
\begin{equation*}
(P(i)/P(i)J(A)^{m-1})J(A/J^{m-1}(A)) \cong P(i)J(A)/P(i)J(A)^{m-1}.
\end{equation*}
Since $P(i)J(A) \in \add \widetilde{A}$, we obtain $P(i)J(A)/P(i)J(A)^{m-1} \in \mathcal{C}_{0,1}$ by Lemma \ref{lem3}.
By induction hypothesis, $\mathcal{C}_{0,1}$ has the following rejective chain.
\begin{equation*}
\mathcal{C}_{0,1} \supset \mathcal{C}_{1,1} \supset \cdots \supset \mathcal{C}_{m-1,1} = 0.
\end{equation*}
Composing it with $\add \widetilde{A} \supset \mathcal{C}_{0,1}$, we obtain a rejective chain of $\add \widetilde{A}$.
\end{proof}
By Theorem \ref{thm2}(i) $\Rightarrow$ (ii), a strongly quasi-hereditary structure of the ADR algebra $B$ can always be realized by the ADR order. However, for a semilocal module, such an assertion does not necessarily hold. In fact, we give an example in which the ADR algebra of a semilocal module is strongly quasi-hereditary, but not with respect to the ADR order.
\begin{example}
Let $A$ be the $K$-algebra defined by the quiver
\[
\xymatrix@=15pt{ 1 \ar@(lu,ld)_{\alpha} \ar[r]^{\beta} & 2
}
\]
with relations $\alpha \beta$ and $\alpha^3$.
Clearly, $M:= P(1) \oplus P(1)/\soc P(1) \oplus P(2)$ is a semilocal module.
The ADR algebra $B$ of $M$ is given by the quiver
\[
\xymatrix@=15pt{ & P(1) \ar@<-1.5ex>[ldd]_{a} \ar[rd]^{b} & \\
& P(1)/P(1)J(A)^2 \ar[u]^{c} \ar[d]_{d} &P(1)/\soc P(1) \ar[l]^{e} \\
P(2) &S(1) \ar[ru]_{f} &
}
\]
with relations $eca, fed$ and $cb-df$.
Then $B$ is not strongly quasi-hereditary with respect to the ADR order $\{ \mathsf{F}_{0,1} < \mathsf{F}_{1,1} < \mathsf{F}_{1,2} < \mathsf{F}_{2,1} \}$, but $B$ is strongly quasi-hereditary with respect to $\{ P(1) < P(1)/P(1)J(A)^2 < P(1)/\soc P(1) < \{P(2), S(1) \} \}$.
\end{example}
\subsection*{Acknowledgment}
The author wishes to express her sincere gratitude to Takahide Adachi and Professor Osamu Iyama.
The author thanks Teresa Conde and Aaron Chan for informing her about the reference \cite[Proposition 2]{Sm}, which greatly shortened her original proof.
\chapter{Background material}\label{ch:back}
In this chapter, we review the necessary background material for this thesis.
We begin by introducing in section \ref{sec:HEfree} the problem we wish to solve, that is, the \emph{free-space problem} for the Helmholtz equation. This problem is defined on an unbounded domain. We will explain how \emph{absorbing boundary conditions} (ABCs) and the \emph{exterior Dirichlet-to-Neumann map} (DtN map) are related, and can be used to reformulate the free-space problem from an unbounded domain to a bounded one. This reformulation allows us to solve the free-space problem.
We then briefly review existing techniques for constructing ABCs for the Helmholtz equation in section \ref{sec:abc}. From this, we will understand why, when solving the Helmholtz equation in a heterogeneous medium, ABCs are computationally expensive. This suggests an approach where we compress an ABC, or the related exterior DtN map. In other words, we find a way to apply the ABC which allows for a faster solve of the free-space problem.
To attack this problem, we first look at the \emph{half-space} DtN map, which is known analytically in a uniform medium. In some respects, this half-space DtN map is quite similar to the exterior DtN map. To understand this similarity, we introduce in section \ref{sec:HEhalf} the concepts of the \emph{exterior} and \emph{half-space} problems for the Helmholtz equation, and the related half-space DtN map. In the next chapters, we will prove facts about the half-space DtN map which will inform our compression scheme for the exterior DtN map. %
Before we explain our work, we end this chapter of background material with section \ref{sec:strip}. We first describe how we may eliminate unknowns in the discretized Helmholtz system to obtain a Riccati equation governing the half-space DtN map. We then explain why the most straightforward way of compressing the exterior DtN map, namely eliminating the exterior unknowns in the discretized Helmholtz system, which we call \emph{layer-stripping}, is prohibitively slow. This will also explain why, even if we have a more efficient way of obtaining the DtN map, applying it at every solve of the Helmholtz equation might still be slow. This is why we have developed the two-step procedure presented in this thesis: first an expansion of the DtN map, then a fast algorithm for the application of the DtN map.
\section{The Helmholtz equation: free-space problem}\label{sec:HEfree}
We consider the scalar Helmholtz equation in $\mathbb{R}^2$,
\begin{equation}\label{eq:HE}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2),
\end{equation}
with compactly supported $f$. We shall call this $f$ the \emph{right-hand side} or the \emph{source}. Here, the solution we seek to this equation is $u$. The function $c$ in \eqref{eq:HE} is called the \emph{medium of propagation}, or simply \emph{medium}. When $c$ is a constant, we say the medium is \emph{uniform}. When $c$ varies, we say the medium is \emph{heterogeneous}. We call $\omega$ the frequency, and note (this shall be explained later) that a high frequency makes the problem harder to solve numerically.
Throughout this work we consider the unique solution $u$ to \eqref{eq:HE} determined by the Sommerfeld radiation condition (SRC) at infinity: when $c(\mathbf{x})$ extends to a uniform $c$ outside of a bounded set\footnote{If the medium $c(\mathbf{x})$ does not extend to a uniform value outside of a bounded set, it is possible one could use the limiting absorption principle to define uniqueness of the solution to the Helmholtz equation. Start from the wave equation solution $u(\mathbf{x},t)$. We would like to take the Fourier transform in time of $u$ to obtain the Helmholtz solution. We may take the Laplace transform of $u$ instead to obtain $\hat{u}(\mathbf{x},s)$, and ask that the path of integration in the $s$ variable approach the imaginary axis from the decaying side. This could then be used to define the Helmholtz solution.}, the SRC is \cite{McLean}
\begin{equation}\label{eq:src}
\lim_{r \rightarrow \infty} r^{1/2} \left( \frac{\partial u}{\partial r} - ik u \right) = 0, \qquad k = \frac{\omega}{c},
\end{equation}
where $r$ is the radial coordinate. We call the problem of finding a solution $u$ to \eqref{eq:HE} and \eqref{eq:src} the \emph{free-space problem}.
\subsection{Solution in a uniform medium using the Green's function}\label{sec:Gfsol}
When the medium $c$ is uniform, there exists an analytical solution to the free-space problem, using the \emph{Green's function}.
\begin{definition}\label{def:Green}
The free-space \emph{Green's function} for the Helmholtz equation \eqref{eq:HE} is the unique function $G(\mathbf{x},\mathbf{y})$ which solves the free-space problem with right-hand side a delta function, $f(\mathbf{x})=\delta(|\mathbf{x}-\mathbf{y}|)$, for every fixed $\mathbf{y}$.
\end{definition}
It is well-known (p. 19 of \cite{bookChew}) that the Green's function $G$ of the uniform free-space problem \eqref{eq:HE}, \eqref{eq:src} is the following:
\begin{equation}\label{eq:Gfree}
G(\mathbf{x},\mathbf{y})=\frac{i}{4} H_0^{(1)}(k|\mathbf{x}-\mathbf{y}|)
\end{equation}
where $H_0^{(1)}$ is the Hankel function of zeroth order of the first kind. Then, one can compute the solution $u(\mathbf{x})$ for any $\mathbf{x} \in \mathbb{R}^2$, now with any given right-hand side $f$ supported on some bounded domain, using the following formula (see p. 62 of \cite{FollandIntroPDE}):
\begin{equation}\label{eq:Gsol}
u(\mathbf{x})=\int_{\mathbb{R}^2} G(\mathbf{x},\mathbf{y})f(\mathbf{y}) \ d\mathbf{y}.
\end{equation}
To rapidly and accurately evaluate the previous expression at many $\mathbf{x}$'s is not easy: $G(\mathbf{x},\mathbf{y})$ has a singularity at $\mathbf{x}=\mathbf{y}$, and so care must be taken in the numerical evaluation of the integral in \eqref{eq:Gsol}. Since the issue of evaluating the solution to the free-space problem when the Green's function is known is not directly relevant to this thesis, we refer the reader to \cite{bem}.
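As a concrete illustration (a minimal sketch of ours, not code from \cite{bem}), the following Python snippet evaluates \eqref{eq:Gfree} and approximates \eqref{eq:Gsol} by naive quadrature; the singular cell $\mathbf{x}=\mathbf{y}$ is simply dropped, so this is low-order and $O(N^4)$, suitable only for very small grids:
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def green_free(x, y, k):
    # free-space Green's function G(x,y) = (i/4) H_0^(1)(k|x-y|)
    r = np.hypot(x[0] - y[0], x[1] - y[1])
    return 0.25j * hankel1(0, k * r)

def solve_free_space_naive(f, k, h):
    # crude midpoint-rule evaluation of u(x) = int G(x,y) f(y) dy;
    # the log-singular self-interaction x = y is skipped
    n1, n2 = f.shape
    pts = [(i * h, j * h) for i in range(n1) for j in range(n2)]
    u = np.zeros(len(pts), dtype=complex)
    for a, xa in enumerate(pts):
        for b, yb in enumerate(pts):
            if a != b:
                u[a] += green_free(xa, yb, k) * f.flat[b] * h * h
    return u.reshape(f.shape)
\end{verbatim}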
\subsection{Solution in a heterogeneous medium using Absorbing Boundary Conditions}
For a number of practical applications of the Helmholtz equation, as was mentioned in the introductory chapter, one cannot find an analytical solution to this problem because the medium is not uniform, that is, $c$ is not a constant. In particular, the Green's function is not known, and one expects that calculating this Green's function numerically is just as hard as calculating the solution with numerical methods.
To obtain a numerical solution to an equation on an unbounded domain then, one must first reformulate this equation to obtain a possibly modified equation on a bounded domain. Hence, we pick a domain $\Omega$ in $\mathbb{R}^2$, with the compact support of $f$ contained in $\Omega$, such that $\Omega$ contains the area of interest, that is, where we care to obtain a solution. We now seek to reformulate the SRC on the boundary $\partial \Omega$, so that the resulting solution inside $\Omega$ matches that of the free-space problem. This leads us to define an \emph{Absorbing Boundary Condition}:
\begin{definition}
An \emph{Absorbing Boundary Condition} for the Helmholtz equation \eqref{eq:HE} is a condition on $\partial \Omega$, the boundary of a closed, bounded domain $\Omega \in \mathbb{R}^2$, which uniquely defines a solution to the Helmholtz equation restricted to $\Omega$, such that this unique solution matches the solution to the free-space problem \eqref{eq:HE}, \eqref{eq:src}.
\end{definition}
Clearly, if we can reformulate the SRC on the boundary $\partial \Omega$, so that the resulting solution inside $\Omega$ matches that of the free-space problem, we will obtain an ABC.
ABCs are extremely important to the numerical solution of the free-space problem because, as alluded to previously, they allow us to restrict the computational domain to a bounded domain $\Omega$, where a solution can be computed in finite time with finite memory. We will discuss ABCs in more detail in the next section, section \ref{sec:abc}.
We will explain in the next chapter the particular, quite rudimentary, solver we used in our numerical experiments. We note here that a better solver should be used for treating larger problems or obtaining better accuracy, for two reasons. First of all, as we explain in more detail in section \ref{sec:strip}, the cost of solving the Helmholtz problem with a standard solver in two dimensions is $O(N^4)$, which is prohibitive. Secondly, as we discuss in section \ref{sec:compABC}, a higher frequency means we need more points per wavelength in our discretization -- this is known as the \emph{pollution effect}. To treat larger problems, there exist better solvers such as the sweeping preconditioner of Engquist and Ying \cite{Hsweep,Msweep}, the shifted Laplacian preconditioner of Erlangga \cite{erlangga,er}, the domain decomposition method of Stolk \cite{stolk}, or the direct solver with spectral collocation of Martinsson, Gillman and Barnett \cite{dirfirst,dirstab}. The problem of appropriate numerical solvers for the Helmholtz equation in high frequency is very much a subject of ongoing research and not the purpose of this thesis, hence we do not discuss this further.
\subsection{The Dirichlet-to-Neumann map as an ABC}\label{sec:dtn}
We now seek to reformulate the SRC \eqref{eq:src} on $\partial \Omega$. There are many ways to do that numerically, as we shall see in section \ref{sec:abc}, but we wish to highlight this particular, analytical way because it introduces a fundamental concept, the \emph{Dirichlet-to-Neumann} map.
Let $G(\mathbf{x},\mathbf{y})$ be the Green's function for the free-space problem. Define the single and double layer potentials, respectively, on some closed contour $\Gamma$ by the following, for $\psi, \ \phi$ on $\Gamma$ (see details in \cite{McLean}, \cite{CK}):
\[
S \psi (\mathbf{x})=\int_{\Gamma} G(\mathbf{x},\mathbf{y}) \ \psi(\mathbf{y}) \ dS_y, \qquad T \phi (\mathbf{x}) =\int_{\Gamma} \frac{\partial G}{\partial \nu_{\mathbf{y}}} (\mathbf{x},\mathbf{y}) \ \phi(\mathbf{y}) \ dS_{\mathbf{y}},
\]
where $\nu$ is the outward pointing normal to the curve $\Gamma$, and $\mathbf{x}$ is not on $\Gamma$.
Now let $u^+$ satisfy the Helmholtz equation \eqref{eq:HE} in the exterior domain $\mathbb{R}^2 \setminus \overline{\Omega}$, along with the SRC \eqref{eq:src}. Then Green's third identity is satisfied in the exterior domain: using $\Gamma= \partial \Omega$, we get
\begin{equation}\label{eq:GRF}
T u^+ - S \frac{\partial u}{\partial \nu}^+ = u^+, \qquad \mathbf{x} \in \mathbb{R}^2 \setminus \overline{\Omega}.
\end{equation}
Finally, using the jump condition of the double layer $T$, we obtain Green's identity on the boundary $\partial \Omega$:
\[
(T - \frac{1}{2} I ) \, u^+ - S \frac{\partial u}{\partial \nu}^+ = 0, \qquad \mathbf{x} \in \partial \Omega.
\]
When the single-layer potential $S$ is invertible\footnote{This is the case when there is no interior resonance at frequency $\omega$, which could be circumvented by the use of combined field integral equations as in \cite{CK}. The existence and regularity of $D$ ultimately do not depend on the invertibility of $S$.}, we can let $D = S^{-1} (T - \frac{1}{2} I )$, and equivalently write (dropping the $+$ in the notation)
\begin{equation}\label{eq:dtn-abc}
\frac{\partial u}{\partial \nu} = D u, \qquad \mathbf{x} \in \partial \Omega.
\end{equation}
The operator $D$ is called the \emph{exterior Dirichlet-to-Neumann map} (or DtN map), because it maps the Dirichlet data $u$ to the Neumann data $\partial u/\partial \nu$ with $\nu$ pointing outward. The DtN map is independent of the right-hand side $f$ of \eqref{eq:HE} as long as $f$ is supported in $\Omega$. The notion that \eqref{eq:dtn-abc} can serve as an exact ABC was made clear in a uniform medium, e.g., in \cite{engmaj} and in \cite{kelgiv}. Equation \eqref{eq:dtn-abc} continues to hold even when $c(\mathbf{x})$ is heterogeneous in the vicinity of $\partial \Omega$, provided the correct (often unknown) Green's function is used. The medium is indeed heterogeneous near $\partial \Omega$ in many situations of practical interest, such as in geophysics.
The DtN map $D$ is symmetric; a proof of the symmetry, in a slightly different setting, is given in \cite{symm} and can be adapted to our situation. Much more is known about DtN maps, such as the many boundedness and coercivity theorems between adequate fractional Sobolev spaces (mostly in free space, with various smoothness assumptions on the boundary). We did not attempt to leverage these properties of $D$ in the scheme presented here.
We only compress the exterior DtN map in this work, and often refer to it as the DtN map for simplicity, unless there could be confusion with another concept, for example with the \emph{half-space DtN map}. We shall talk more about the half-space DtN map soon, but first, we review in the upcoming section existing methods for discrete absorbing boundary conditions.%
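To make \eqref{eq:dtn-abc} concrete, here is a minimal one-dimensional sketch of ours (an illustration, not taken from the cited works). On a half-line in a uniform medium, the DtN map reduces to multiplication by $ik$, so $\partial u/\partial \nu = iku$ is an \emph{exact} ABC. The script below solves $u''+k^2u=f$ on $[0,1]$ with this condition at both endpoints, a point source at $x_0=0.5$, and a first-order discretization of the boundary condition, then compares against the outgoing solution $e^{ik|x-x_0|}/(2ik)$:
\begin{verbatim}
import numpy as np

k, n = 30.0, 1000
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
j0 = n // 2                              # source at x0 = 0.5

A = np.zeros((n + 1, n + 1), dtype=complex)
f = np.zeros(n + 1, dtype=complex)
for j in range(1, n):                    # interior: second-order stencil
    A[j, j - 1] = A[j, j + 1] = 1.0 / h**2
    A[j, j] = -2.0 / h**2 + k**2
f[j0] = 1.0 / h                          # discrete delta source
# exact 1D ABC du/dnu = ik*u, discretized to first order:
A[0, 0], A[0, 1] = 1j * k - 1.0 / h, 1.0 / h       # u'(0) = -ik u(0)
A[n, n], A[n, n - 1] = 1.0 / h - 1j * k, -1.0 / h  # u'(1) = +ik u(1)
u = np.linalg.solve(A, f)
u_exact = np.exp(1j * k * np.abs(x - x[j0])) / (2j * k)
print(np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact))
\end{verbatim}
The printed relative error is small and decreases under grid refinement, which is the defining property of an exact ABC: no spurious reflections are generated at the artificial boundary.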
\section{Discrete absorbing boundary conditions}\label{sec:abc}
There are many ways to realize an absorbing boundary condition for the Helmholtz equation, and we briefly describe the main ones in this section. We start with ABCs that are surface-to-surface, and move on to ABCs which involve surrounding the computational domain $\Omega$ by an absorbing layer. The later approach is often more desirable because the parameters of the layer can usually be adjusted to obtain a desired accuracy. We then discuss the complexity of ABCs in heterogeneous media.%
\subsection{Surface-to-surface ABCs}
An early seminal work in absorbing boundary conditions is that of Engquist and Majda, who in \cite{engmaj} consider the half-space problem ($x_1 \geq 0$) for the wave equation, in a uniform medium, and write down the form of a general wave packet traveling to the left (towards the negative $x_1$ direction).
%
From this, they calculate the boundary condition which exactly annihilates those wave packets, and obtain \footnote{We are omitting details here for brevity, including a discussion of pseudo-differential operators and their symbols.}: %
%
\begin{equation} \label{sym}
\frac{\partial}{\partial x} - i\sqrt{\omega^2-\xi^2}
\end{equation}
where $(\omega,\xi)$ are the dual variables to $(t,y)$ in Fourier space. They can then approximate the square root in various ways in order to obtain usable, i.e. local in both space and time, boundary conditions, recalling that $i\omega$ corresponds to $\partial/\partial t$ and $i\xi$ corresponds to $\partial/\partial y$.
Hagstrom and Warburton in \cite{haglap} also consider the half-space problem, take the transverse Fourier-Laplace transforms of the solution and use a given Dirichlet data on the boundary of the half-space to obtain what they call a complete wave representation of the solution, valid away from the boundary. They then use this representation to obtain approximate local boundary condition sequences to be used as ABC's. Again, their method was developed for the uniform case.
Keller and Givoli, in \cite{kelgiv}, use a different technique: they assume a circular or spherical $\Omega$, and a uniform medium outside of this $\Omega$. They can then use the Green's function of the exterior problem, which is known for a circle, in order to know the solution anywhere outside $\Omega$, given the boundary data $u$ on $\partial \Omega$. They can then differentiate this solution in the radial (which is the normal) coordinate, and evaluate it on $\partial \Omega$ to obtain the exterior DtN map. They can now use this DtN map as an ABC in a Helmholtz solver. This technique can be seen as \emph{eliminating the exterior unknowns}: as we do not care to know the solution outside of $\Omega$, we can use the information we have on the exterior solution to reduce the system to one only on the inside of $\Omega$. This meaning of \emph{eliminating the exterior unknowns} shall become more obvious when we apply this to the discretized Helmholtz equation in section \ref{sec:strip}.
Somersalo et al. (\cite{somer}) also use the DtN map, this time to solve an interior problem related to the Helmholtz equation. They use a differential equation of Riccati type to produce the DtN map. When we introduce the elimination of unknowns in section \ref{sec:strip}, we shall demonstrate the connection we have found between the \emph{half-space} problem and a Riccati equation for the DtN map.
The aforementioned ABCs are not meant to be a representative sample of all the existing techniques. Unfortunately, these techniques either do not apply to heterogeneous media, or do not perform very well in that situation. In contrast, various types of absorbing layers can be used in heterogeneous media, with caution, and are more flexible.
\subsection{Absorbing layer ABCs}\label{sec:layers}
Another approach to ABCs is to surround the domain of interest by an \emph{absorbing layer}. %
While a layer should preferably be as thin as possible, to reduce computational complexity, its design involves at least two different factors: 1) waves that enter the layer must be significantly damped before they re-enter the computational domain, and 2) reflections created when waves cross the domain-layer interface must be minimized. The Perfectly Matched Layer of B\'erenger (called PML, see \cite{berenger}) is a convincing solution to this problem in a uniform acoustic medium. Its performance often carries through in a general heterogeneous acoustic medium $c(\mathbf{x})$, though its derivation strictly speaking does not.
PML consists of analytically continuing the solution to the complex plane for points inside the layer. This means we apply a coordinate change of the type $x \rightarrow x + i \int^x \sigma(\tilde{x}) \ d\tilde{x}$, say for a layer in the positive $x$ direction, with $\sigma=0$ in the interior $\Omega$, and $\sigma$ positive and increasing inside the layer. Hence the equations are unchanged in the interior, so the solution there is the desired solution to the Helmholtz equation, but waves inside the layer will be more and more damped, the deeper they go into the layer. This is because the solution is a superposition of complex exponentials, which become decaying exponentials under the change of variables. Then we simply put zero Dirichlet boundary conditions at the end of the layer. Whatever waves reflect there will be tiny when they come back out of the layer into the interior. For a heterogeneous medium, we may still define a layer-based scheme from a transformation of the spatial derivatives which mimics the one done for the PML in a uniform medium, by replacing the Laplacian operator $\Delta$ by some $\Delta_{layer}$ inside the PML, but this layer will not be perfectly matched anymore and is called a \emph{pseudo-PML} (pPML). In this case, reflections from the interface between $\Omega$ and the layer are usually not small. It has been shown in \cite{adiabatic} that, in some cases of interest to the optics community with nonuniform media, pPML for Maxwell's equations can still work, but the layer needs to be made very thick in order to minimize reflections at the interface. In this case, the Helmholtz equation has to be solved in a very large computational domain, where most of the work will consist in solving for the pPML. In fact, the layer might even cause the solution to grow exponentially inside it, instead of forcing it to decay (\cite{diazjoly}, \cite{back}), because the group and phase velocities have an opposite sign. %
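As a small illustration of this coordinate change, the following Python sketch (with an assumed quadratic absorption profile and illustrative parameters of our choosing, not a tuned layer) evaluates the stretched coordinate $\tilde{x}(x) = x + i\int^x \sigma(\tilde{x})\,d\tilde{x}$ and shows that a plane wave $e^{ik\tilde{x}}$ decays with depth into the layer:
\begin{verbatim}
import numpy as np

def stretched_coordinate(x, x0, sigma_max, width):
    # complex PML coordinate x + i * int_{x0}^{x} sigma(t) dt, with the
    # quadratic profile sigma(t) = sigma_max * ((t - x0)/width)^2
    d = np.clip(x - x0, 0.0, None)
    return x + 1j * sigma_max * d**3 / (3.0 * width**2)

k, x0, width = 20.0, 1.0, 0.5            # layer occupies [1.0, 1.5]
x = np.linspace(1.0, 1.5, 6)
wave = np.exp(1j * k * stretched_coordinate(x, x0, 10.0, width))
print(np.abs(wave))                      # amplitude decays into the layer
\end{verbatim}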
An ABC scheme which is more stable by construction is the one of Appel\"o and Colonius \cite{appcol}. They use a smooth coordinate transform to reduce an unbounded domain to a bounded one with a slowing-down layer, and damp the spurious waves thus created by artificial viscosity (high-order undivided differences). The stability of this scheme follows from its construction, so that it can be used in problems for which the pPML is unstable. However, this method is not ideal because it requires discretizing higher and higher order space derivatives in order to obtain better and better results.
\subsection{Complexity of ABCs in heterogeneous media}\label{sec:compABC}
Unfortunately, discrete absorbing layers such as the pPML may need to be quite wide in practice, or may be otherwise computationally costly (because, for example, of the high-order artificial viscosity in \cite{appcol}). Call $L$ this width (in meters). Although this is not a limitation of the framework presented in this thesis, we discretize the Helmholtz operator in the most elementary way using the standard five-point difference stencil. Put $h = 1/N$ for the grid spacing, where $N$ is the number of points per dimension for the interior problem, inside the unit square $\Omega = [0,1]^2$. While $\Omega$ contains $N^2$ points, the total number of unknowns is $O\left((N+2w)^2\right)$ in the presence of the layer, where $w=L/h$ is its width in number of grid points. In a uniform medium, the PML width $L$ needed is a fraction of the wavelength, i.e. $L \sim \lambda=\frac{2\pi}{\omega} \sim \frac{1}{N}$, so that we need a constant number of points independently of $N$: $w=L/h=LN \sim 1$. However, in nonuniform media, the heterogeneity of $c(\mathbf{x})$ can limit the accuracy of the layer. If we consider an otherwise uniform medium with an embedded scatterer outside of $\Omega$, then the pPML will have to be large enough to enclose this scatterer, no matter $N$. For more general, heterogeneous media such as the ones considered in this thesis, we often observe that convergence as a function of $L$ or $w$ is delayed compared to a uniform medium. That means that we have $L \sim L_0$ so that $w \sim NL_0$ or $w = O(N)$, as we assume in the sequel.
The authors of \cite{appcol} have not investigated the computational complexity of their layer for a given accuracy, but it is clear that a higher accuracy will require higher-order derivatives for the artificial viscosity, and those are quite costly. Fortunately, the framework to be developed over the next chapters also applies to the compression of such a layer, just as it does to any other ABC.
In the case of a second-order discretization, the rate at which one must increase $N$ in order to preserve a constant accuracy in the solution, as $\omega$ grows, is about $N =O(\omega^{1.5})$. This unfortunate phenomenon, called the \emph{pollution effect}, is well-known: it begs to \emph{increase} the resolution, or number of points per wavelength, of the scheme as $\omega$ grows \cite{nvsom,BabPollut}. As we saw, the width of the pPML may be as wide as a constant value $L_0$ independent of $N$, hence its width generally needs to scale as $O(\omega^{1.5})$ grid points.
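For instance, doubling $\omega$ multiplies the required $N$ by a factor of about $2^{1.5} \approx 2.8$; with $w = O(N)$, the total number of unknowns $O\left((N+2w)^2\right)$ then grows by roughly a factor of $8$.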
Next, we introduce the exterior and half-space problems for the Helmholtz equation. We explain how those are related, and how knowledge from the solution of one will help us with the solution to the other.
\section{The Helmholtz equation: exterior and half-space problems}\label{sec:HEhalf}
The previous two sections addressed the fact that we wish to obtain the exterior DtN map in order to approximate the free-space solution in $\Omega$, and how to do that using ABCs and the DtN map. However, ABCs can be computationally intensive. To obtain the exterior DtN map numerically in a feasible way, we will need to solve the exterior problem in chapter \ref{ch:probing}, and so we define it here. The \emph{half-space problem} for the Helmholtz equation is also interesting to us because we can write down an analytical formula for the DtN map, and use that to gain knowledge that might prove to be more general and apply to the exterior DtN map. Hence we begin by explaining the exterior problem. Then we introduce the half-space problem and its DtN map, and why this might give us insights into the exterior DtN map. We then state important results which will be used in the next chapters.
\subsection{The exterior problem}\label{sec:extprob}
The exterior problem consists of solving the free-space problem, but outside of some domain $\Omega$, given a Dirichlet boundary condition $g$ on $\partial \Omega$ and the SRC \eqref{eq:src}. That is, the following has to hold:
\begin{equation}\label{eq:HEext}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = 0, \ \mathbf{x} = (x_1, x_2) \in \Omega^c,
\end{equation}
where $\Omega^c=\mathbb{R}^2 \setminus \Omega$, with the boundary data $u(\mathbf{x})=g(\mathbf{x})$ for $\mathbf{x} \in \partial \Omega$ and a given $g$. Again, we require the SRC to hold. We call this the \emph{exterior problem}, since we solve for the solution $u$ outside of the domain $\Omega$. We are interested in this problem because, if we can find the solution $u$, then we can take its derivative, normal to $\partial \Omega$, and obtain the exterior Dirichlet-to-Neumann map. This is how we will calculate the DtN map numerically in the next chapter. Then, of course, the DtN map can be used to solve the free-space problem reformulated on $\Omega$, which is our goal. This may sound like a circular way of solving the free-space problem, but it is not: as we shall see in more detail in the next chapter, solving the exterior problem only a few times gives us the exterior DtN map, which then speeds up every subsequent free-space solve.
In the next chapter, we will use $\Omega=\left[0,1\right]^2$, a square of side 1. For practical computations, a rectangular domain made to fit tightly around the scatterer of interest is often used, especially if we have a thin and long scatterer. As we will see in section \ref{sec:abc}, numerous ABCs have been designed for the rectangular domain, so choosing a rectangular domain here is not a limitation. Then, the numerical DtN map is a matrix which, when multiplied by a vector of Dirichlet values $u$ on $\partial \Omega$, outputs Neumann values $\partial_\nu u$ on $\partial \Omega$. In particular, we can consider the submatrix of the DtN map corresponding to Dirichlet values on one particular side of $\Omega$, and Neumann values on that same side. As we shall see next, this submatrix should be quite similar to the half-space DtN map.
\subsection{The half-space problem}\label{sec:half}
We consider again the scalar Helmholtz equation, but this time we take $\Omega$ to be the top half-space $\mathbb{R}_+^2=\left\{ (x_1,x_2) : x_2 \geq 0 \right\}$, so that now the boundary $\partial \Omega$ is the $x_1$-axis, that is, when $x_2=0$. And we consider the exterior problem for this $\Omega$:
\begin{equation}\label{eq:hsHE}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = 0, \qquad \mathbf{x} = (x_1, x_2), \qquad x_2 \leq 0
\end{equation}
with given boundary condition $g$:
\begin{equation}\label{eq:bchalf}
u(x_1,0)=g(x_1,0),
\end{equation}
requiring some decay on $g(x_1,0)$ as $x_1 \rightarrow \pm \infty$ and the SRC \eqref{eq:src} to hold in the bottom half-space $\mathbb{R}_-^2=\left\{ (x_1,x_2) : x_2 \leq 0 \right\}$. We shall explain how to find this analytical DtN map for the half-space problem in uniform medium, but first, we discuss the relevance of the half-space DtN map for the exterior DtN map.
Let us use again $\Omega=\left[0,1\right]^2$ as we introduced earlier, and let us call $S_1$ the bottom side of $\partial \Omega$: $S_1=\left\{(x_1,x_2): 0 \leq x_1 \leq 1, x_2=0 \right\}$. For the exterior problem, we prescribe boundary values $g$ on $\partial \Omega$, solve the exterior problem and obtain values of $u_\text{ext}$ everywhere outside of $\Omega$. From those, we know the exterior DtN map $D_\text{ext}:\partial \Omega \rightarrow \partial \Omega$. Consider now the values of $u_\text{ext}$ we have just found, along the $x_1$ axis: we can use those to define the boundary condition \eqref{eq:bchalf} of the half-space problem. The solution $u_\text{half}$ we find for this half-space problem on the bottom half-space $\mathbb{R}_-^2$ coincides with $u_\text{ext}$ on that half-space. Similarly, the exterior DtN map $D_\text{ext}$ restricted to the bottom side of $\Omega$, $D_\text{ext}: S_1 \rightarrow S_1$ coincides with the half-space DtN map $D_\text{half}$, restricted to this same side $D_\text{half}: S_1 \rightarrow S_1$.
This relationship between the half-space and exterior DtN maps remains when solving those problems numerically. To solve the exterior problem numerically, we proceed very similarly to how we would for the free-space problem. As we saw in the previous section on ABCs, we may place an ABC on $\partial \Omega$ for the free-space problem. For the exterior problem, we place this ABC just a little outside of $\partial \Omega$, as in Figure \ref{fig:ext}. We then enforce the boundary condition $u(\mathbf{x})=g(\mathbf{x})$ for $\mathbf{x}$ on $\partial \Omega$, and solve the Helmholtz equation outside of $\Omega$, inside the domain delimited by the ABC. We thus obtain the solution $u$ on a thin strip just outside of $\Omega$, and we can compute from that $u$ the DtN map $\partial_\nu u$ on $\partial \Omega$. This is why we need to put the ABC just a little outside of $\Omega$, and not exactly on $\partial \Omega$.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.45]{./figs/diag_ext.pdf}
\caption{The exterior problem: $\Omega$ is in grey, there is a thin white strip around $\Omega$, then an absorbing layer in blue.}
\label{fig:ext}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.45]{./figs/diag_half.pdf}
\caption{The half-space problem: there is thin white strip below the bottom side of $\Omega$, and an absorbing layer (in blue) surrounds that strip on three sides.}
\label{fig:half}
\end{minipage}
\end{figure}
To solve the half-space problem numerically, we will again need an ABC, in order to reformulate the problem on a smaller, bounded domain. We can prescribe values of $u$ only along the bottom edge $S_1$ as in Figure \ref{fig:half}, leave a thin strip just below that edge, surround that thin strip by an ABC on all three other sides, and solve for the solution $u$. We can then, from the solution $u$ in this thin strip, calculate an approximate half-space DtN map on $S_1$. And we see how this approximate half-space DtN map restricted to $S_1$ will be similar (though not identical) to the exterior DtN map restricted to that same edge $S_1$, given the same boundary data on $S_1$ and 0 boundary data on the other edges of $\partial \Omega$. The main difference between the two maps is that in the exterior case, the two corners of $S_1$ will cause scattering, some of which will affect the solution in the top half-space $\mathbb{R}_+^2$ as well.%
It is because of this connection between the half-space and the exterior DtN maps that we analyze further the half-space DtN map, and use insights obtained from this analysis to treat the exterior DtN map.
\subsection{Solution to the half-space problem in a uniform medium using the Green's function}\label{sec:hsG}
In section \ref{sec:Gfsol} we used the free-space Green's function $G$ to obtain the solution $u$ anywhere from integrating $G$ with the right-hand side $f$. We can do the same for the half-space problem. First, we define the Green's function of the half-space problem:
\begin{definition}\label{def:Greenhalf}
The half-space \emph{Green's function} for the Helmholtz equation \ref{eq:hsHE} is the unique function $G(\mathbf{x},\mathbf{y})$ which solves the half-space problem with zero boundary data, that is $g=0$ in \eqref{eq:bchalf}, and with right-hand side the delta function $f(\mathbf{x})=\delta(|\mathbf{x}-\mathbf{y}|)$, for every fixed $\mathbf{y}$.
\end{definition}
This half-space Green's function, which we shall call $G_\text{half}$, is once again well-known, and can be obtained from the free-space Green's function by the reflection principle (p. 110 of \cite{FollandIntroPDE}), with $\mathbf{x}=(x_1,x_2)$, $\mathbf{y}=(y_1,y_2)$ and $\mathbf{x}'=(x_1,-x_2)$:
\begin{equation}\label{eq:Greenhalf}
G_\text{half}(\mathbf{x},\mathbf{y})=G(\mathbf{x},\mathbf{y})- G(\mathbf{x}',\mathbf{y}).
\end{equation}
Then, the solution $u$ to the half-space problem with $g=0$ and right-hand side $f$ supported in the bottom half-space $\mathbb{R}_-^2$ is, as expected,
\begin{equation}\label{eq:Gsolhalf}
u(\mathbf{x})=\int_{\mathbb{R}_-^2} G_\text{half}(\mathbf{x},\mathbf{y})f(\mathbf{y}) \ d\mathbf{y}, \ \mathbf{x} \in \mathbb{R}_-^2,
\end{equation}
where $S=\left\{(x_1,x_2):x_2=0\right\}$ denotes the $x_1$ axis, on which $G_\text{half}(\cdot,\mathbf{y})$ vanishes.
where $S=\left\{(x_1,x_2):x_2=0\right\}$ is the $x_1$ axis. This half-space Green's function will be useful to us, and in particular, we are interested in the following result of \cite{Hsweep}, slightly reformulated for our purposes:
\begin{lemma}\label{Glowrank}
\emph{Theorem 2.3 of \cite{Hsweep}.} Let $G_\text{half}$ be the half-space Green's function as defined above. Let $n \in \mathbb{N}$ be some discretization parameter, $n$ even, and let $h=1/n$. Let $Y=\left\{\mathbf{y}_j=(jh,-h), j=1, \ldots, n/2 \right\}$ and $X=\left\{\mathbf{x}_j=(jh,-h), j=n/2+1, \ldots, n \right\}$. Then $\left(G_\text{half}(\mathbf{x},\mathbf{y})\right)_{\mathbf{x} \in X, \ \mathbf{y} \in Y}$ is numerically low-rank. More precisely, for any $\varepsilon >0$, there exist a constant $J=O(\log k |\log\varepsilon |^2)$, functions $\left\{\alpha_p(\mathbf{x})\right\}_{1\leq p \leq J}$ for $\mathbf{x} \in X$ and functions $\left\{\beta_p(\mathbf{y})\right\}_{1\leq p \leq J}$ for $\mathbf{y} \in Y$ such that
\[ \left| G_\text{half}(\mathbf{x},\mathbf{y})-\sum_{p=1}^J \alpha_p(\mathbf{x})\beta_p(\mathbf{y}) \right| \leq \varepsilon \ \text{for} \ \mathbf{x} \in X, \mathbf{y} \in Y.\]
\end{lemma}
We provide this lemma here because the concept of an operator being \emph{numerically low-rank} away from its diagonal will be useful later.
For now, we define the half-space DtN map. We first want to find a kernel that gives us the solution on the bottom half-space when integrated against the boundary data $g$ rather than the right-hand side $f$, which from now on we assume to be $0$.
Such a kernel exists, and is related to $G_\text{half}$. From equation (2.37) of \cite{FollandIntroPDE}, we may write
$$ u(\mathbf{x})=\int_{S} \partial_{\nu_\mathbf{y}}G_\text{half}(\mathbf{x},\mathbf{y}) \ u(\mathbf{y}) \ dS_\mathbf{y}, \ \mathbf{x} \in \mathbb{R}^2 \setminus S $$
where $\nu$ is the outward- (thus downward-) pointing normal to $S$. Hence we obtain
\begin{equation}\label{eq:halfsol}
u(\mathbf{x})=-\int \left. \partial_{y_2}G_\text{half}\left(\mathbf{x},(y_1,y_2)\right)\right|_{y_2=0} \ u(y_1,0) \ dy_1, \ \mathbf{x} \in \mathbb{R}^2 \setminus S.
\end{equation}
We have found a different way of expressing the solution $u$ to the half-space problem using the half-space Green's function $G_\text{half}$.
\subsection{The analytical half-space DtN map}
Since we wish to consider the exterior Dirichlet-to-Neumann map for $u$ on $x_2=0$, $Du(x_1,0)=\left. \partial_{\nu_\mathbf{x}}u(x_1,x_2) \right|_{x_2=0}=-\left. \partial_{x_2} u(x_1,x_2) \right|_{x_2=0}$, we take the normal derivative in \eqref{eq:halfsol} and evaluate at $x_2=0$ (using the fact that $u(y_1,0)=g(y_1,0)$ is the boundary data):
$$Du(x_1,0)= \int \left. \partial_{x_2} \partial_{y_2} G_\text{half}\left((x_1,x_2),(y_1,y_2)\right) \right|_{x_2=0,y_2=0} \ g(y_1,0) \ dy_1, \ x_1 \in \mathbb{R}.$$
Hence we have found the kernel of the half-space DtN map to be, essentially, two derivatives of the half-space Green's function. Since we know that Green's function, we can use it to obtain an analytical expression for the half-space DtN map. Notice that $\frac{\partial}{\partial z} H_0^{(1)}(z)=-H_1^{(1)}(z)$. We have
$$\partial_{x_2} \partial_{y_2} G(\mathbf{x},\mathbf{y}) = \frac{i}{4} \partial_{x_2} \partial_{y_2} H_0^{(1)}(z)$$
with
$$z=k|\mathbf{x}-\mathbf{y}|=k\sqrt{(x_1-y_1)^2+(x_2-y_2)^2},$$
so that
$$\partial_{y_2} z = k^2\frac{y_2-x_2}{z}, \ \partial_{x_2} z = k^2\frac{x_2-y_2}{z}.$$
Hence
$$\partial_{y_2} G(\mathbf{x},\mathbf{y}) = \frac{i}{4} \frac{\partial z}{\partial y_2} \partial_z H_0^{(1)}(z) = \frac{i k^2 }{4} \frac{x_2-y_2}{z} H_1^{(1)}(z)$$
and
\begin{equation}\label{eq:Gxy}
\partial_{x_2} \partial_{y_2} G(\mathbf{x},\mathbf{y}) = \frac{i k^2}{4} \left( \frac{z-k^2(x_2-y_2)^2/z}{z^2} H_1^{(1)}(z) + k^2\left(\frac{x_2-y_2}{z}\right)^2 \partial_z H_1^{(1)}(z) \right).
\end{equation}
Also, we have
$$\partial_{x_2} \partial_{y_2} G(\mathbf{x}',\mathbf{y}) = \frac{i}{4} \partial_{x_2} \partial_{y_2} H_0^{(1)}(z')$$
with
$$z'=k\sqrt{(x_1-y_1)^2+(-x_2-y_2)^2},$$
so that
$$\partial_{y_2} z' = k^2\frac{y_2+x_2}{z'}, \ \partial_{x_2} z' = k^2\frac{y_2+x_2}{z'}.$$
Hence
$$\partial_{y_2} G(\mathbf{x}',\mathbf{y}) = \frac{i}{4} \frac{\partial z'}{\partial y_2} \partial_{z'} H_0^{(1)}(z') = \frac{i k^2}{4} \frac{-x_2-y_2}{z'} H_1^{(1)}(z')$$
and
\begin{equation}\label{eq:Gxpy}
\partial_{x_2} \partial_{y_2} G(\mathbf{x}',\mathbf{y}) = \frac{i k^2}{4} \left( \frac{-z'+k^2(x_2+y_2)^2/z'}{z'^2} H_1^{(1)}(z') -k^2\left(\frac{x_2+y_2}{z'}\right)^2 \partial_{z'} H_1^{(1)}(z') \right).
\end{equation}
Now let $x_2=0,y_2=0$ but $0 \neq k|\mathbf{x}-\mathbf{y}|= k|x_1-y_1|=\left. z \right|_{x_2=0,y_2=0}=\left. z' \right|_{x_2=0,y_2=0}$ in \eqref{eq:Gxy} and \eqref{eq:Gxpy}, so that
\begin{eqnarray*}
\partial_{x_2} \partial_{y_2} \left. G_\text{half}(\mathbf{x},\mathbf{y}) \right|_{x_2=0,y_2=0} &=& \partial_{x_2} \partial_{y_2} \left. G(\mathbf{x},\mathbf{y}) \right|_{x_2=0,y_2=0} - \partial_{x_2} \partial_{y_2} \left. G(\mathbf{x}',\mathbf{y}) \right|_{x_2=0,y_2=0} \\
&=& \left. \frac{i k^2}{2k|\mathbf{x}-\mathbf{y}|} H_1^{(1)}(k|\mathbf{x}-\mathbf{y}|) \right|_{x_2=0,y_2=0} \\
&=& \frac{i k^2}{2k|x_1-y_1|} H_1^{(1)}(k|x_1-y_1|).
\end{eqnarray*}
Thus we have found that the half-space DtN map kernel is:
\begin{equation}\label{eq:hsDtNkernel}
K(r)=\frac{i k^2}{2kr} H_1^{(1)}(kr)
\end{equation}
and thus the half-space DtN map is:
\begin{equation}\label{eq:hsDtNmap}
Du(x_1,0)= \int K(|x_1-y_1|) g(y_1,0) \ dy_1.
\end{equation}
As we shall prove in chapter \ref{ch:plr}, the half-space DtN map kernel \eqref{eq:hsDtNkernel} for a uniform medium is numerically low-rank away from its singularity, just as the half-space Green's function is from Lemma \ref{Glowrank}. This means that, if $x$ and $y$ are coordinates along the infinite boundary $S$, and $|x-y|\geq r_0$ for some constant quantity $r_0$, then the DtN map, a function of $x$ and $y$, can be approximated up to error $\varepsilon$ as a sum of products of functions of $x$ only with functions of $y$ only (\emph{separability}), and this sum is finite and in fact involves only a small number of summands (\emph{low-rank}). Hence, a numerical realization of that DtN map in matrix form should be compressible. In particular, blocks of that matrix which are away from the diagonal should have low column rank. We shall make all of this precise in chapter \ref{ch:plr}.
Recall that the goal of this thesis, to compress ABCs, will be achieved by approximating the DtN map in more general cases than the uniform half-space problem. Before we explain our approach for compressing an ABC in the next two chapters, we first explain the most straightforward way of obtaining a DtN map from an ABC, by eliminating the unknowns in the absorbing layer in order to obtain a reduced system on the interior nodes. This solution, however, is computationally impractical. It only serves to make explicit the relationship between ABCs and the DtN map.
\section{Eliminating the unknowns: from any ABC to the DtN map}\label{sec:strip}
We now wish to explain the fundamental relationship between discrete ABCs and the discrete DtN map: any discrete ABC can be reformulated as a discrete DtN map on $\partial \Omega$. We present two similar ways of obtaining that relationship using Schur complements: one for the half-space problem, and one for the free-space problem.
\subsection{Eliminating the unknowns in the half-space problem}
We consider the half-space problem, in which we care about the solution in the top half-plane. We assume from the SRC that the solution far away from $x_2=0$ in the bottom half is small. We want to eliminate unknowns from far away on the bottom side, where the solution is so small we ignore it as zero, towards the positive $x_2$ direction in order to obtain an outgoing Dirichlet-to-Neumann map in the $-x_2$ direction. We assume $f=0$ everywhere in the bottom half-plane. Let $u_1$ denote the first line of unknowns at the far bottom, $u_2$ the next line, and so on. We first use a Schur complement to eliminate $u_1$ from the discretized (using the standard five-point stencil) Helmholtz system which is as follows:
\[ \frac{1}{h^2}
\begin{pmatrix} \\
& S_1 & P_1 & \\
& & & \\
& P_1^T& C_1 & \\
& & & \\
\end{pmatrix} \quad
\begin{pmatrix} \\ u_{1} \\ \vdots \\ \\ \end{pmatrix}
= \begin{pmatrix} \\ 0 \\ \vdots \\ \\ \end{pmatrix}.
\]
We then define similarly the matrices $S_k$, $P_k$ and $C_k$ corresponding to having eliminated lines $u_1$ through $u_{k-1}$ from the system. Only the upper block of $C_1$, or of $C_k$ when subsequently eliminating line $u_k$, will be modified by the Schur complement. Indeed, since we eliminate from bottom to top and we have a five-point stencil, the matrices $P_k$ will be
\[ P_k=
\begin{pmatrix} & I & 0 & \cdots \\
\end{pmatrix}
\]
and so we may write the following recursion for the Schur complements:
\begin{equation}\label{Srel}
S_{k+1}=M-S_k^{-1},
\end{equation}
where $M$ is the main block of the 2D Helmholtz operator multiplied by $h^2$, that is,
\[ M=
\begin{pmatrix}
& -4+h^2k^2 & 1 & & \ & \\
& 1 & -4+h^2k^2 & \ddots & \ & \\
& & \ddots & \ddots & \ & \\
& & & & \ & \\
\end{pmatrix} = -2I + Lh^2.\]
Here $L$ is the 2nd order discretization of the 1D Helmholtz operator in the $x_1$ direction, $\partial_{x_1}\partial_{x_1}+k^2$. Now, at each step we denote
\begin{equation}\label{StoD}
S_k=hD_k-I.
\end{equation}
Indeed, looking at the first block row of the reduced system at step $k-1$ we have
\[S_ku_k +Iu_{k+1}=0\]
or
\[(hD_k-I)u_k+Iu_{k+1}=0.\]
From this we obtain the DtN map $D_k$ from a backward difference:
\[ \frac{u_k-u_{k+1}}{h}=D_ku_k. \]
Now we use (\ref{StoD}) inside (\ref{Srel}) to obtain
\[ hD_{k+1}-I=M+(I-hD_k)^{-1}=M+I+hD_k+h^2D_k^2+O(h^3)\]
or
\[hD_{k+1}-hD_k=Lh^2+h^2D_k^2+ O(h^3), \]
which we may rewrite to obtain a discretized Riccati equation (something similar was done in \cite{keller}):
\begin{equation}\label{DR}
\frac{D_{k+1}-D_k}{h}=L+D_k^2+ O(h),
\end{equation}
of which we may take the limit as $h \rightarrow 0$ to obtain the Riccati equation for the DtN map $D$ in the $-x_2$ direction:
\begin{equation}\label{R}
D_{x_2}=(\partial_{x_1}\partial_{x_1}+k^2)+D^2.
\end{equation}
This equation is to be evolved in the $+x_2$ direction, starting far away in the bottom half-space. Looking at the steady state, $D_{x_2}=0$, we get back $D^2=-\partial_{x_1}\partial_{x_1}-k^2$, which is the Helmholtz equation with 0 right-hand side $f$ (which we have assumed to hold in the bottom half-space). Hence we conclude that the Riccati equation for the DtN map could be used to obtain the DtN map in the half-space case, and maybe even more complicated problems. We leave this to future work, and turn to a very similar way of eliminating unknowns, but for the exterior DtN map with $\Omega=[0,1]^2$ this time. This technique will not give rise to a Riccati equation, but will help us understand how the DtN map can be used numerically to solve the free-space Helmholtz equation reformulated on $\Omega$.
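Before moving on, here is a minimal numerical sketch of the recursion \eqref{Srel} together with the change of variables \eqref{StoD}. The starting value $S_1=M$, i.e.\ a homogeneous Dirichlet closure at the far bottom, is an assumption made for illustration only, and we make no claim about the stability of this naive iteration; in practice one would start inside an absorbing layer.
\begin{verbatim}
# Sketch of the layer-stripping recursion (Srel): S_{k+1} = M - S_k^{-1},
# with M = -2I + h^2 L, then D_k = (S_k + I)/h from (StoD).
import numpy as np

def layer_strip_dtn(N, h, k, steps):
    main = (-4.0 + (h * k)**2) * np.ones(N)
    M = (np.diag(main) + np.diag(np.ones(N - 1), 1)
                       + np.diag(np.ones(N - 1), -1))
    S = M.copy()                     # assumed closure: u vanishes below
    for _ in range(steps):
        S = M - np.linalg.inv(S)     # the Schur complement recursion
    return (S + np.eye(N)) / h       # discrete DtN map D_k

D = layer_strip_dtn(N=128, h=1.0 / 128, k=30.0, steps=200)
\end{verbatim}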
\subsection{Eliminating the unknowns in the exterior problem}
In this subsection, we assume we need an absorbing layer of large width, $w \geq N$ in number of points. We write the system for the discrete Helmholtz equation as
\begin{equation}\label{HEsys}
\begin{pmatrix} \\
& A & P & \\
& & & \\
& P^T& C & \\
& & &
\end{pmatrix} \quad
\begin{pmatrix} \\ u_{out} \\ \\ u_{\Omega} \\ \\ \end{pmatrix}
= \begin{pmatrix} \\ 0 \\ \\ f_\Omega \\ \\ \end{pmatrix},
\end{equation}
with $A=\Delta_{layer} + k^2 I$ and $C=\Delta + k^2 I$, with $\Delta$ overloaded to denote the discretization of the Laplacian operator, and $\Delta_{layer}$ the discretization of the Laplacian operator inside the absorbing layer. We wish to eliminate the exterior unknowns $u_{out}$ from this system in order to have a new system which only depends on the interior unknowns $u_{\Omega}$. The most obvious way of eliminating those unknowns is to form the Schur complement $S=C-P^TA^{-1}P$ of $A$ by any kind of Gaussian elimination. For instance, in the standard raster scan ordering of the unknowns, the computational cost of this method\footnote{The cost of the Schur complement procedure is dominated by that of Gaussian elimination to apply $A^{-1}$ to $P$. Gaussian elimination on a sparse banded matrix of size $s$ and band $b$ is $O(sb^2)$, as can easily be inferred from Algorithm 20.1 of \cite{tref}.} is $O(w^4)$ --- owing to the fact that $A$ is a sparse banded matrix of size $4Nw+4w^2$, which is $O(w^2)$, and band $N+2w$. Alternatively, elimination of the unknowns can be performed by layer-stripping, starting with the outermost unknowns from $u_{out}$, until we eliminate the layer of points that is just outside of $\partial \Omega$. The computational cost will be $O(w^4)$ in this case as well. To see this, let $u_{w}$ be the points on the outermost layer, $u_{w-1}$ the points in the layer just inside of $u_{w}$, etc. Then we have the following system:
\[
\begin{pmatrix} \\
& A_w & P_w & \\
& & & \\
& P_w^T& C_w & \\
& & & \\
\end{pmatrix} \quad
\begin{pmatrix} \\ u_{w} \\ \\ \vdots \\ \\ \end{pmatrix}
= \begin{pmatrix} \\ 0 \\ \\ \vdots \\ \\ \end{pmatrix}
\]
Note that, because of the five-point stencil, $P_w$ has non-zeros exactly on the columns corresponding to $u_{w-1}$. Hence the matrix $P_w^TA_w^{-1}P_w$ in the first Schur complement $S_w=C_w-P_w^TA_w^{-1}P_w$ is non-zero exactly at the entries corresponding to $u_{w-1}$. It is then clear that, in the next Schur complement, to eliminate the next layer of points, the matrix $A_{w-1}$ (the block of $S_w$ corresponding to the points $u_{w-1}$) to be inverted will be full. For the same reason, every matrix $A_j$ to be inverted thereafter, for every subsequent layer to be eliminated, will be a full matrix. Hence at every step the cost of forming the corresponding Schur complement is at least on the order of $m^3$, where $m$ is the number of points in that layer. Hence the total cost of eliminating the exterior unknowns by layer stripping is approximately
\[ \sum_{j=1}^{w} (4(N+2j))^3 = O(w^4). \]
Similar arguments can be used for the Helmholtz equation in three dimensions. In this case, the computational complexity of the Schur complement or layer-stripping methods would be $O(w^3 (w^2)^2)=O(w^7)$ or $\sim \sum_{j=1}^{w} (6(N+2j)^2)^3=O(w^7)$, respectively. Therefore, direct elimination of the exterior unknowns is quite costly. Some new insight will be needed to construct the DtN map more efficiently.
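To fix ideas, the one-pass elimination can be written in a few lines; the sketch below forms $S=C-P^TA^{-1}P$ with SciPy's sparse LU. The blocks are random stand-ins of toy size, since assembling the actual stencil matrices of \eqref{HEsys} is beside the point here.
\begin{verbatim}
# Sketch: eliminate exterior unknowns in one pass via the Schur
# complement S = C - P^T A^{-1} P, using SciPy's sparse LU. The blocks
# below are random stand-ins; assembling the true stencil is omitted.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def eliminate_exterior(A, P, C):
    lu = spla.splu(A.tocsc())         # sparse factorization of A
    AinvP = lu.solve(P.toarray())     # A^{-1} P, column by column
    return C.toarray() - P.T @ AinvP  # dense reduced operator on u_Omega

n_out, n_in = 300, 100                # toy sizes only
A = -4.0 * sp.eye(n_out) + sp.random(n_out, n_out, density=0.01)
P = sp.random(n_out, n_in, density=0.01)
C = -4.0 * sp.eye(n_in) + sp.random(n_in, n_in, density=0.05)
S = eliminate_exterior(A, P, C)
\end{verbatim}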
We now remark that, whether we eliminate exterior unknowns in one pass or by layer-stripping, we obtain a reduced system. It looks just like the original Helmholtz system on the interior unknowns $u_{\Omega}$, except for the top left block, corresponding to $u_{0}$ the unknowns on $\partial \Omega$, which has been modified by the elimination procedure. Hence with the help of some dense matrix $D$ we may write the reduced, $N^2$ by $N^2$ system as
\begin{equation}\label{HEred}
Lu=
\begin{pmatrix}
(hD-I)/h^2 & I/h^2 & 0 & & \cdots & \\
& & & \\
I/h^2 & & & \\
& & & \\
0 & & [ \; \Delta + k^2 I \; ] & \\
& & & \\
\vdots & & & \\
\\
\end{pmatrix} \quad
\begin{pmatrix} u_{0} \\ \\ u_{-1} \\ \\ u_{-2} \\ \\ \vdots \\ \\ \end{pmatrix}
= \begin{pmatrix} 0 \\ \\ f_{-1} \\ \\ f_{-2} \\ \\ \vdots \\ \\ \end{pmatrix}
\end{equation}
and we have thus obtained an absorbing boundary condition which we may use on the boundary of $\Omega$, independent of the right-hand side $f$. Indeed, if we call $u_{-1}$ the first layer of points inside $\Omega$, we have $ (I-hD)u_{0} = u_{-1} $, or
\[
\frac{u_{0} - u_{-1}}{h}=Du_{0} ,
\]
a numerical realization of the DtN map in \eqref{eq:dtn-abc}, using the ABC of choice. Indeed, elimination can be used to reformulate any computationally intensive ABC, not just absorbing layers, into a realization of \eqref{eq:dtn-abc}. Any ABC is equivalent to a set of equations relating unknowns on the surface to unknowns close to the surface, and possibly auxiliary variables. Again, elimination can reduce those equations to relations involving only unknowns on the boundary and on the first layer inside the boundary, to obtain a numerical DtN map $D$. A drawback is that forming this matrix $D$ by elimination is prohibitive, as we have just seen.%
As for the complexity of solving the Helmholtz equation, reducing the ABC confers the advantage of making the number of nonzeros in the matrix $L$ (of Section \ref{sec:strip}) independent of the width of the absorbing layer or complexity of the ABC. After elimination of the layer, it is easy to see that $L$ has about $20N^2$ nonzero entries, instead of the $5N^2$ one would expect from a five-point stencil discretization of the Helmholtz equation, because the matrix $D$ (part of a small block of $L$) is full. Although obtaining a fast matrix-vector product for our approximation of $D$ could reduce the application cost of $L$ from $20N^2$ to something closer to $5N^2$, it should be noted that the asymptotic complexity does not change -- only the constant does.%
This thesis addresses those two problems, obtaining the DtN map and applying it fast. The next chapter, chapter \ref{ch:probing}, suggests adapting the framework of matrix probing in order to obtain $D$ in reasonable complexity. Subsequently, chapter \ref{ch:plr} presents a compression method which leads to a fast application of the DtN map.
\chapter{Conclusion}\label{ch:conclusion}
In this thesis, we have compressed the Dirichlet-to-Neumann map for the Helmholtz equation in two steps, using matrix probing followed by the partitioned low-rank matrix framework. This approach is useful for applications in heterogeneous media because absorbing boundary conditions can be very costly. Especially when one needs to solve the Helmholtz equation with many different sources, it makes sense to perform the precomputation required of our two-step scheme, to speed up subsequent solves.
Probing the DtN map $D$ ultimately makes sense in conjunction with a fast algorithm for its application. In full matrix form, $D$ costs $N^2$ operations to apply. With the help of a compressed representation, this count becomes $p$ times the application complexity of any atomic basis function $B_j$, which may or may not be advantageous depending on the particular expansion scheme. The better solution for a fast algorithm, however, is to post-process the compressed expansion from probing into a slightly less compressed, but more algorithmically favorable one, such as hierarchical or partitioned low-rank matrices. These types of matrix structures are not parametrized enough to lend themselves to efficient probing -- see for instance \cite{Hmatvec} for an illustration of the large number of probing vectors required -- but give rise to faster algorithms for matrix-vector multiplication. Hence the feasibility of probing, and the availability of a fast algorithm for matrix-vector multiplication, are two different goals that require different expansion schemes.
We found that in order to obtain an efficient compression algorithm, we need to perform some precomputations. The leading cost of those is equivalent to a small number of solves of the original problem, which we can afford if we plan to make many solves in total. Then, a matrix-vector application of the DtN map in a Helmholtz solver is nearly linear in the dimension of the DtN map. The worst-case complexity of a matrix-vector application is in fact super-linear. General oscillatory integrals can often be handled in optimal complexity with the butterfly algorithm \cite{butterflyFIO}. We did not consider this algorithm because currently published methods are not adaptive, nor can they handle diagonal singularities in a kernel-dependent fashion.
\newline
\newline
Let us summarize the different complexity scalings of our method, recalling that without our compressed ABC, a thousand solves of the free-space problem would cost a thousand times the complexity of one solve, and that complexity depends on the solver used and on the fact that the ABC used might be very costly. By contrast, the cost of precomputation for matrix probing is of a few (1 to 50) solves of the exterior problem -- roughly equivalent to a solve of the free-space problem with the costly ABC. The precomputation cost for PLR compression is $O(N R^2 |\mathcal{B}|)$ where $|\mathcal{B}|$ is the number of leaves in the compressed DtN map -- empirically $|\mathcal{B}|=O(\log N)$ with a worst case of $|\mathcal{B}|=O(\sqrt{N})$. Finally, the application cost of the compressed ABC is $O(N R |\mathcal{B}|)$, which typically leads to a speed-up of a factor of 30 to 100 for $N \approx 1000$. Using the compressed ABC in a solver, the computational cost of making a thousand solves is then reduced to that of a thousand solves where the ABC is not so costly anymore, but only amounts to a matrix-vector multiplication of nearly linear cost.
Of course, the method presented here has some limitations. The main ones come from the first step of the scheme, matrix probing, which requires a careful design of basis matrices. This is harder to do when the wavelength is comparable to the size of features in the medium, or when the medium has discontinuities.
\newline
\newline
In addition to the direct applications the proposed two-step scheme has in making a typical (iterative) Helmholtz solver faster in the presence of a heterogeneous medium, it could also improve solvers based on Domain Decomposition. Some interesting recent work on fast solvers for the Helmholtz equation has been along the direction of the Domain Decomposition Method (DDM). This method splits up the domain $\Omega$ into multiple subdomains. Depending on the method, the subdomains might or might not overlap. In non-overlapping DDM, transmission conditions are used to transfer information about the solution from one domain to the next. Transmission conditions are crucial for the convergence of the algorithm. After all, if there is a source in one subdomain, this will create waves which should travel to other parts of $\Omega$, and this information needs to be transmitted to other subdomains as precisely as possible.%
In particular, Stolk in \cite{stolk} has developed a nearly linear complexity solver for the Helmholtz equation, with transmission conditions based on the PML. Of course, this relies on the PML being an accurate absorbing boundary condition. One may assume that, for faster convergence in heterogeneous media, thicker PMLs might be necessary. In this case, a precomputation to compress the PML might prove useful. Other solvers which rely on transmission conditions are those of Engquist and Ying \cite{Hsweep,Msweep}, Zepeda-N\'u\~nez et al. \cite{zepeda}, Vion and Geuzaine \cite{Vion}.
Another way in which the two-step numerical scheme presented in this thesis could be useful is for compressing the Helmholtz solution operator itself. Indeed, we have compressed ABCs to the DtN map, by using knowledge of the DtN map kernel. If one knows another kernel, one could expand that kernel using matrix probing then compress it with PLR matrices, and obtain a fast application of that solution operator.
\section*{Acknowledgments}
I would like to thank my advisor, Laurent Demanet, for his advice and support throughout my time as a graduate student.
\newline
\newline
Also, I would like to acknowledge professors I took classes from, including my thesis committee members Steven Johnson and Jacob White, and qualifying examiners Ruben Rosales and again Steven Johnson. I appreciate your expertise, availability and help.
\newline
\newline
I would like to thank collaborators and researchers with whom I discussed various ideas over the years; it was a pleasure to learn and discover with you.
\newline
\newline
Thanks also to my colleagues here at MIT in the mathematics PhD program, including office mates. %
It was a pleasure to get through this together.
\newline
\newline
Thanks to my friends, here and abroad, who supported me. From my decision to come to MIT, all the way to graduation, it was good to have you.
\newline
\newline
Finally, I thank my loved ones for being there for me always, and especially when I needed them.
\chapter{Introduction}\label{ch:intro}
This work investigates arbitrarily accurate realizations of absorbing (a.k.a. open, radiating) boundary conditions (ABC), including absorbing layers, for the 2D acoustic high-frequency Helmholtz equation in certain kinds of heterogeneous media. Instead of considering a specific modification of the partial differential equation, such as a perfectly matched layer, we study the broader question of compressibility of the nonlocal kernel that appears in the exact boundary integral form $Du=\partial_\nu u$ of the ABC. The operator $D$ is called the \emph{Dirichlet-to-Neumann} (DtN) map. This boundary integral viewpoint invites us to rethink ABCs as a two-step numerical scheme, where
\begin{enumerate}
\item a precomputation sets up an expansion of the kernel of the boundary integral equation, then
\item a fast algorithm is used for each application of this integral kernel at the open boundaries in a Helmholtz solver.
\end{enumerate}
This two-step approach may pay off in scenarios when the precomputation is amortized over a large number of solves of the original equation with different data. This framework is, interestingly, half-way between a purely analytical or physical method and a purely numerical one. It uses both the theoretical grounding of analytic knowledge and the intuition from understanding the physics of the problem in order to obtain a useful solution.
The numerical realization of ABC typically involves absorbing layers that become impractical for difficult $c(\mathbf{x})$, or for high accuracy. We instead propose to realize the ABC by directly compressing the integral kernel of $D$, so that the computational cost of its setup and application would become competitive when (\ref{eq:HE}) is to be solved multiple times. Hence this thesis is not concerned with the design of a new ABC, but rather with the reformulation of existing ABCs that otherwise require a lot of computational work per solve. In many situations of practical interest we show that it is possible to ``learn'' the integral form of $D$, as a precomputation, from a small number of solves of the exterior problem with the expensive ABC. By ``small number'', we mean a quantity essentially independent of the number of discretization points $N$ along one dimension -- in practice as small as 1 or as large as 50. We call this strategy matrix probing. To show matrix probing is efficient, we prove a result on approximating $D$ in a special case, with inverse powers multiplied by a complex exponential. This leads us to the successful design of a basis for a variety of heterogeneous media.
Once we obtain a matrix realization $\tilde{D}$ of the ABC from matrix probing, we can use it in a Helmholtz solver. However, a solver would use matrix-vector multiplications to apply the dense matrix $\tilde{D}$. Hence the second step of our numerical scheme: we compress $\tilde{D}$ using partitioned low-rank (PLR) matrices to acquire a fast matrix-vector product. This second step can only come after the first, since it is the first step that gives us access to the entries in $D$ and allows us to use the compression algorithms of interest to us. We know we can use hierarchical or partitioned-low-rank matrices to compress the DtN map because we prove the numerical low-rank property of off-diagonal blocks of the DtN map, and those algorithms exploit favorably low-rank blocks. Since PLR matrices are more flexible than hierarchical matrices, we use them to compress $\tilde{D}$ into $\overline{D}$ and apply it to vectors in complexity ranging from $O(N\log N)$ (more typical) to $O(N^{3/2})$ (worst case). %
The precomputation necessary to set up the PLR approximation is of similar complexity. This can be compared to the complexity of a dense matrix-vector product, which is $O(N^2)$.
In this introduction, we first motivate in section \ref{sec:motivate} the study of the Helmholtz equation in an unbounded domain by presenting important applications. We then give more details on the steps of our numerical scheme in section \ref{sec:struct}, which will make explicit the structure of this thesis.%
\section{Applications of the Helmholtz equation in an unbounded domain}\label{sec:motivate}
We consider the scalar Helmholtz equation in $\mathbb{R}^2$,
\begin{equation}\label{eq:HE0}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2),
\end{equation}
with compactly supported $f$, the \emph{source}. We seek the solution $u$. The function $c$ in \eqref{eq:HE0} is called the \emph{medium}, $\omega$ the frequency. %
We consider the unique solution $u$ to \eqref{eq:HE0} determined by the Sommerfeld radiation condition (SRC), which demands that the solution be outgoing. %
We call the problem of finding a solution $u$ to \eqref{eq:HE0} with the SRC the \emph{free-space problem}. There are many applications that require a solution to the free-space Helmholtz problem, or the related free-space Maxwell's equations problem. To solve the free-space problem numerically, we reformulate the problem to a bounded domain $\Omega$ in which we shall find the solution $u$. One must then impose an \emph{absorbing boundary condition} (ABC) on the boundary $\partial \Omega$. ABCs are designed to absorb waves impinging on $\partial \Omega$ so the waves do not reflect back in $\Omega$ and pollute the solution there.
ABCs in heterogeneous media are often too costly, but the two-step scheme of this thesis addresses this problem. %
Let us now discuss applications of this two-step scheme.%
The main application is wave-based imaging, an inverse problem for the wave equation
\begin{equation}\label{eq:WE}
\Delta U(\mathbf{x},t) - \frac{1}{c^2(\mathbf{x})}\frac{\partial^2 U(\mathbf{x},t)}{\partial t^2} = F(\mathbf{x},t), \qquad
\end{equation}
and related equations. The Helmholtz equation is equivalent to the wave equation because we can decompose the solution $U$ and the source $F$ into time-harmonic components by a Fourier transform in time. %
Maxwell's equations, another type of wave equation, can also be reformulated as Helmholtz equations on the various components of the electric and magnetic fields.%
An inverse problem is as follows: instead of trying to find the solution $u$ of \eqref{eq:HE0} given $\omega$, $f$ and $c$, we do the opposite. In other words, we are given the solution $u$ at a set of receiver locations for some $\omega$ and various sources $f$. We try to determine the medium $c$ from that information. We are usually interested in (or can only afford) $c(\mathbf{x})$ for $\mathbf{x}$ in some part of the whole space, say some bounded domain $\Omega$, and absorbing boundary conditions are then necessary. To solve the inverse problem in $\Omega$, we need to use a lot of sources, say a thousand. The details of why many sources are useful, and how to solve the inverse problem, are not relevant to this thesis. What is relevant is that, in the course of solving the inverse problem for say the Helmholtz equation, it is needed to solve the free-space Helmholtz problem for all these sources, with ABCs, in heterogeneous media. We list here some imaging applications where the Helmholtz equation, or other types of wave equations, are used to solve inverse problems with ABCs in heterogeneous media, that is, where the numerical scheme in this thesis might prove useful.
\begin{itemize}
\item In seismic imaging, we acquire knowledge of the rock formations under the earth's surface. That is, we want to know the medium $c$ in which the waves propagate. The domain $\Omega$ might be a region surrounding a seismic fault \cite{seismic} where one wants to assess earthquake hazards, or a place where one might like to find hydrocarbons, other minerals, or even geothermal energy \cite{geothermal}. ABCs are needed to simulate the much larger domain in which $\Omega$ is embedded, that is, the Earth. Sources (which might be acoustic or electromagnetic) and receivers might be placed on the surface of the Earth or inside a well, or might be towed by a boat or placed at the bottom of the sea. An earthquake can also be used as a source.
\item Ultrasonic testing \cite{ultrasonic} is a form of non-destructive testing where very short ultrasonic pulses are sent inside an object. The object might be, say, a pipe that is being tested for cracks or damage from rust, or a weld being tested for defects. The received reflections or refractions from those ultrasonic pulses are used for diagnosis on the object of interest. $\Omega$ might be the object itself, or a part of it. Ultrasonic imaging \cite{3dmedimag} is also used in medicine to visualize fetuses, muscles, tendons and organs. The domain $\Omega$ is then the body part of interest.
\item Synthetic-aperture radar imaging is used to visualize a scene by sending electromagnetic pulses from an antenna aboard a plane or satellite \cite{radar}. It is also used to detect the presence of an object \cite{borden} far away such as a planet, or through clouds or foliage.
\end{itemize}
An entirely different application of the Helmholtz equation resides in photonics \cite{JohnsonPhot}. The field of photonics studies the optical properties of materials. In particular, one tries to construct photonic crystals, a periodic medium with desired properties depending on the specific application. It is of course useful to first test photonic crystals numerically to observe their properties before actually building them. However, since crystals are assumed to be infinite, absorbing boundary conditions need to be used to reformulate the problem on a bounded domain. This is where our two-step numerical scheme can be used.
\section{A two-step numerical scheme for compressing ABCs}\label{sec:struct}
The next chapter, chapter \ref{ch:back}, will present theoretical facts about the Helmholtz equation and related concepts, which will be useful at various points in later chapters for developing the proposed framework. Then, chapter \ref{ch:probing} will present the first step of the scheme: matrix probing and how it is used for providing a rapidly converging expansion of the DtN map. Next, chapter \ref{ch:plr} will contain the second part of the proposed two-step scheme: the compression of the matrix probing expansion, for fast evaluation, using partitioned low-rank matrices. After the precomputations of matrix probing and PLR matrices, we finally obtain a fast and accurate compressed absorbing boundary condition. Chapter \ref{ch:conclusion} concludes this thesis with a review of the main new ideas presented, identification of further research directions and open problems, and also an overview of how the presented framework could be used in other contexts. A summary of the steps involved in the presented two-step numerical scheme is available in appendix \ref{ch:steps}.
\subsection{Background material}
We begin by introducing again the problem we wish to solve, the \emph{free-space problem} for the Helmholtz equation. This problem is defined on an unbounded domain such as $\mathbb{R}^2$, but can be solved by reformulating to a bounded domain we will call $\Omega$. One way to do this is to impose what we call the \emph{exterior Dirichlet-to-Neumann map} (DtN map) on $\partial \Omega$. The DtN map $D$ relates Dirichlet data $u$ on $\partial \Omega$ to the normal derivative $\partial_\nu u$ of $u$, where $\nu$ is the unit outward vector to $\partial \Omega$: $Du=\partial_\nu u$. This allows us to solve the Helmholtz equation on $\Omega$ to obtain a solution $u$ which coincides, inside $\Omega$, with the solution to the free-space problem.
We shall see that any \emph{absorbing boundary condition} (ABC) or Absorbing Layer (AL) can be used to obtain the DtN map. As we mentioned before, an ABC is a special condition on the boundary $\partial \Omega$ of $\Omega$, which should minimize reflections of waves reaching $\partial \Omega$. %
Different ABCs work differently however, and so we will review existing techniques for constructing ABCs for the Helmholtz equation, and see again how they are computationally expensive in a variable medium. This suggests finding a different, more efficient, way of obtaining the exterior DtN map from an ABC. To attack this problem, we consider the \emph{half-space} DtN map, known analytically in a constant medium. This half-space DtN map is actually quite similar to the exterior DtN map, at least when $\Omega$ is a rectangular domain. Indeed, the interface along which the DtN map is defined for the half-space problem is a straight infinite line $x_2=0$. For the exterior problem on a rectangle, the DtN map is defined on each of the straight edges of $\Omega$. Hence the restriction of the exterior DtN map to one such edge of $\Omega$, which say happens to be on $x_2=0$, behaves similarly to the restriction of the half-space DtN map to that same edge. The main difference between those two maps is created by scattering from corners of $\partial \Omega$. Both in chapter \ref{ch:probing} and \ref{ch:plr}, we will prove facts about the half-space DtN map which will inform our numerical scheme for the exterior DtN map.
We end the chapter of background material with how the most straightforward way of obtaining the exterior DtN map from an ABC, \emph{layer-stripping}, is prohibitively slow, especially in variable media. This will also explain how, even if we have an efficient way of obtaining the DtN map from an ABC, applying it at every solve of the Helmholtz equation will be slow. In fact, once we have the map $D$ in $Du=\partial_\nu u$, we need to apply this map to vectors inside a Helmholtz solver. But $D$ has dimension approximately $N$, if $N$ is the number of points along each direction in $\Omega$, so matrix-vector products with $D$ have complexity $N^2$. This is why we have developed the two-step procedure presented in this thesis: first an expansion of the DtN map using matrix probing, then a fast algorithm using partitioned low-rank matrices for the application of the DtN map.
\subsection{Matrix probing}
Chapter \ref{ch:probing} is concerned with the first step of this procedure, namely, setting up an expansion of the exterior DtN map kernel, in a precomputation. This will pave the way for compression in step two, presented in chapter \ref{ch:plr}.
Matrix probing is used to find an expansion of a matrix $M$. For this, we assume an expansion of the type
\[ M \approx \tilde{M} = \sum_{j=1}^{p} c_jB_j, \]
where the \emph{basis matrices} $\left\{B_j\right\}$ are known, and we wish to find the coefficients $\left\{c_j\right\}$. We do not have access to $M$ itself, but only to products of $M$ with vectors. In particular, we can multiply $M$ with a random vector $z$ to obtain
\[ w=Mz \approx \sum_{j=1}^p c_j B_j z = \Psi_z \, \mathbf{c}.\]
We can thus obtain the vector of coefficients $\mathbf{c}$ by applying the pseudoinverse of $\Psi_z$ to $Mz=w$.
For matrix probing to be an efficient expansion scheme, we need to carefully choose the basis matrices $\left\{B_j\right\}$. Here, we use knowledge of the half-space DtN map to inform our choices: a result on approximating the half-space DtN map with a particular set of functions, inverse powers multiplied by a complex exponential. Hence our basis matrices are typically a discretization of the kernels
\begin{equation}%
B_j(x,y)= \frac{e^{ik|x-y|}}{(h+|x-y|)^{j/2}},
\end{equation}
where $h$ is our discretization parameter, $h=1/N$. The need for a careful design of the basis matrices can however be a limitation of matrix probing. In this thesis, we have also used insights from geometrical optics to derive basis matrices which provide good convergence in a variety of cases. Nonetheless, in a periodic medium such as a photonic crystal, where the wavelength is as large as the features of the medium, the geometrical optics approximation breaks down. Instead, we use insights from the solution in a periodic medium, which we know behaves like a Bloch wave, to design basis matrices. However, the periodic factor of a Bloch wave does not lead to very efficient basis matrices since it is easily corrupted by numerical error. Another limitation is that a medium which has discontinuities creates discontinuities in the DtN map as well, forcing again a more careful design of basis matrices.
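To make the probing mechanics concrete, the following sketch builds discretized versions of these basis kernels and recovers the coefficients from a single random product. The target matrix $M$ is a synthetic stand-in constructed exactly in the span of the $B_j$'s, so recovery is exact; for the true DtN map one only has the product $w=Mz$ from an exterior solve, and the fit is approximate.
\begin{verbatim}
# Sketch of matrix probing with the basis kernels above. M is a
# synthetic stand-in lying exactly in the span of the B_j's; in the
# real scheme, only the product w = M z (one exterior solve) is known.
import numpy as np

N, p, k = 256, 8, 40.0
h = 1.0 / N
x = np.arange(N) * h
R = np.abs(x[:, None] - x[None, :])
B = [np.exp(1j * k * R) / (h + R) ** (j / 2.0) for j in range(1, p + 1)]

c_true = np.random.randn(p) + 1j * np.random.randn(p)
M = sum(c * Bj for c, Bj in zip(c_true, B))

z = np.random.randn(N)                       # random probing vector
w = M @ z                                    # the only access to M
Psi = np.column_stack([Bj @ z for Bj in B])  # Psi_z, of size N by p
c, *_ = np.linalg.lstsq(Psi, w, rcond=None)  # pseudoinverse of Psi_z
print(np.linalg.norm(c - c_true))            # tiny: exact recovery
\end{verbatim}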
Once we do have basis matrices, we can probe the DtN map $D$, block by block. Indeed, we choose $\Omega=[0,1]^2$, and so $\partial \Omega$ is the four edges of the square\footnote{The framework does carry over to polygonal domains easily, but we do not cover this here.}, numbered counter-clockwise. Hence, $D$ has a 4 by 4 structure corresponding to restrictions of the DtN map to each pair of edges. To obtain the product of the $(i_M,j_M)$ block $M$ of $D$ with a random vector, we need to solve what we call the \emph{exterior problem}: we put a random Dirichlet boundary condition on edge $j_M$ of $\partial \Omega$, solve the Helmholtz equation outside of $\Omega$ using an ABC, and take the normal derivative of the solution on edge $i_M$ of $\partial \Omega$.
In situations of practical interest, we obtain the DtN map as a precomputation using matrix probing with leading complexity of about 1 to 50 solves of the exterior problem with the expensive ABC -- a number of solves essentially independent of the number of discretization points $N$. A solve of the exterior problem is essentially equivalent to a solve of the original problem with that same expensive ABC.
We then present a careful study of using matrix probing to expand the DtN map in various media, use the matrix probing expansion to solve the Helmholtz equation, and document the complexity of the method.
In the next chapter, we will present a fast algorithm for applying the DtN map's expansion found by matrix probing.
\subsection{Partitioned low-rank matrices}
In chapter \ref{ch:plr}, we produce a fast algorithm for applying the DtN map $D$ to vectors, since this is an operation that a Helmholtz solver needs. Again, we needed matrix probing first, to obtain an explicit approximation of $D$, to be compressed in order to obtain a fast matrix-vector product. Indeed, we do not have direct access to the entries of $D$ at first, but rather we need to solve a costly problem, the exterior problem, every time we need a multiplication of $D$ with a vector. Now that we have an explicit representation of $D$ from matrix probing, with cost $O(N^2)$ for matrix-vector products, we can compress that representation to obtain a faster product.
We have mentioned before how the half-space DtN map is similar to the exterior DtN map. In chapter \ref{ch:plr}, we will prove that the half-space DtN map kernel $K$ in constant medium is numerically low rank. This means that, away from the singularity of $K$, the function $K(|x-y|)$ can be written as a short sum of functions of $x$ multiplied by functions of $y$:
\[ K(|x-y|)=\sum_{j=1}^J \Psi_j(x)\chi_j(y) + E(x,y)\]
with error $E$ small. The number of terms $J$ depends logarithmically on the error tolerance and on the frequency $k$. This behavior carries over to some extent to the exterior DtN map. Thus we use a compression algorithm which can exploit the low-rank properties of blocks of $D$ that are not on the diagonal, that is, that are away from the singularity.
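This claim is easy to check numerically. The sketch below samples the constant-medium kernel of chapter \ref{ch:back} on two well-separated intervals and counts the singular values above a tolerance; the count stays small compared to the block dimension.
\begin{verbatim}
# Numerical check: sample K(|x-y|) on well-separated blocks and count
# singular values above a tolerance -- the numerical rank J stays small.
import numpy as np
from scipy.special import hankel1

k = 100.0
x = np.linspace(0.0, 0.5, 512)       # one block of the boundary
y = np.linspace(1.0, 1.5, 512)       # a well-separated block
R = np.abs(x[:, None] - y[None, :])
K = 1j * k**2 / (2.0 * k * R) * hankel1(1, k * R)

s = np.linalg.svd(K, compute_uv=False)
print(np.sum(s / s[0] > 1e-8))       # small compared to N = 512
\end{verbatim}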
A well-known such compression framework is called \emph{hierarchical matrices}. Briefly, hierarchical matrices adaptively divide blocks along the diagonal and compress blocks away from it. We start by dividing the matrix in 4 blocks of half the original matrix's dimension. The two off-diagonal blocks are compressed: we express them by their singular value decomposition (SVD), truncated after $R$ terms. The two diagonal blocks are divided in four again, and we recurse: off-diagonal blocks are compressed, diagonal blocks are divided, etc. We do not divide a block if compressing it will result in less error than our error tolerance $\varepsilon$ -- this is adaptivity. The parameters $R$ and $\varepsilon$ are chosen judiciously to provide a fast matrix-vector multiplication with small error.
However, the hierarchical matrix framework can only apply to matrices with a singularity along the diagonal. This is not useful to us, since for example a block of $D$ corresponding to two consecutive edges of $\partial \Omega$ will have the singularity in a corner. We thus decide to use partitioned low-rank (PLR) matrices for compressing and applying $D$. PLR matrices are more flexible than hierarchical matrices: when we divide a block, or the original matrix, in 4 sub-blocks, any of those four sub-blocks can be divided again. Any block that is not divided any further is called a \emph{leaf}, and $\mathcal{B}$ is the set of all leaves. If a singularity is in a corner, then the PLR compression algorithm will automatically divide blocks close to that corner, but will compress farther blocks since they have lower numerical rank. Note that we use the randomized SVD \cite{randomSVD} to speed up the compression, so that its complexity is of order $O(NR^2|\mathcal{B}|)$, where $|\mathcal{B}|$ is often on the order of $\log N$ but will be $\sqrt{N}$ in the worst case. Similarly, the complexity of a matrix-vector product is usually $O(NR|\mathcal{B}|)$, which for $N\approx 1000$ provides a speed-up over a dense matrix-vector product of a factor of 30 to 100. We also show that the worst case complexity of a matrix-vector product in the PLR framework is $O(N^{3/2})$. This should be compared to the complexity of a dense matrix-vector product, which is $O(N^2)$.
We may then use PLR matrices to compress the DtN map, and use this compressed map in a Helmholtz solver. We verify the complexity of the method, and present results on the solution of the Helmholtz equation with a probed and compressed DtN map.
\subsection{Summary of steps}
In appendix \ref{ch:steps}, we present a summary of the various operations involved in implementing the presented two-step algorithm.
\chapter{Partitioned low-rank matrices for compressing the Dirichlet-to-Neumann map}\label{ch:plr}
In the previous chapter, we explained in detail the first step of two in our numerical scheme for compressing ABCs. This first step consisted in approximating the DtN map $D$ by $\tilde{D}$ using matrix probing. To do this, we considered each block $M$ of $D$ separately, corresponding to an edge-to-edge restriction of $D$. We approximated each $M$ by a matrix probing expansion. %
We saw how we could obtain an accurate approximation of $M$ using appropriate basis matrices $B_j$. We were left with the task of producing a fast algorithm for applying the resulting $\tilde{M}$ to vectors, since this is an operation that a Helmholtz solver needs. This is what we do in the current chapter, by compressing $\tilde{M}$ into a new $\overline{M}$ which can then be applied fast.
We also explained in the previous chapter why we needed probing: to obtain an explicit approximation of $D$, to be compressed in order to obtain a fast matrix-vector product. Indeed, we do not have direct access to the entries of $D$, but rather we need to solve the costly exterior problem every time we need a multiplication of $D$ with a vector. We have already mentioned how the approach of Lin et al. \cite{Hmatvec}, for example, would require $O(\log N)$ such solves, with a large constant.
We alluded, when we presented background material in chapter \ref{ch:back}, to the fact that we might be able to compress each block $M$ (or $\tilde{M}$) of $D$ (or $\tilde{D}$). Indeed, we discussed the fact that the half-space Green's function $G_\text{half}$ in constant medium is separable and low-rank away from its singularity. Because the half-space DtN map kernel $K$ is simply two derivatives of $G_\text{half}$, we expect $K$ to also be separable and low-rank, and we prove this at the end of the present chapter, in section \ref{sec:septheo}. See also the numerical verification of that theorem in section \ref{sec:sepnum}. Because the half-space DtN map is strongly related to the exterior DtN map as we mentioned in chapter \ref{ch:back}, we expect the exterior DtN map kernel to also be separable and low rank, at least in favorable conditions such as a constant medium. But first, as in the previous chapter, we begin by explaining the technique we use, partitioned low-rank (PLR) matrices, in section \ref{sec:introplr}. Compression of an $N$ by $N$ matrix into the PLR framework is nearly linear in $N$, and so is matrix-vector multiplication. We then show the results of using this PLR technique on test cases in section \ref{sec:usingplr}. %
\section{Partitioned low-rank matrices}\label{sec:introplr}
As we have discussed in chapter \ref{ch:back}, when an operator is separable and low-rank, we expect its numerical realization to have low-rank blocks under certain conditions. In our case, the DtN map $K(x-y)$ is separable and low-rank away from the singularity $x=y$ and so we expect its numerical realization to have low-rank blocks away from its diagonal. This calls for a compression scheme such as the hierarchical matrices of Hackbush et al. \cite{hmat1}, \cite{hmat2}, \cite{hmatlect}, to compress off-diagonal blocks. However, because we expect higher ranks away from the singularity in variable media, and because different blocks of the DtN map will show a singularity elsewhere than on the diagonal, we decide to use a more flexible scheme called \emph{partitioned low rank} matrices, or PLR matrices, from \cite{Jones}.
\subsection{Construction of a PLR matrix}
PLR matrices are constructed recursively, using a given tolerance $\epsilon$ and a given maximal rank $R_{\text{max}}$. We start at the top level, level 0, with the matrix $M$ which is $N$ by $N$, where $N$ is a power of two\footnote{Having a square matrix with dimensions that are powers of two is not necessary, but makes the discussion easier.}. We wish to compress $M$ (in the next sections we will use this compression scheme on probed blocks $\tilde{M}$ of the DtN map, but we use $M$ here for notational simplicity). We first ask for the numerical rank $R$ of $M$. The numerical rank is defined by the Singular Value Decomposition and the tolerance $\epsilon$, as the number $R$ of singular values that are larger than or equal to the tolerance. If $R>R_{\text{max}}$, we split the matrix in four blocks and recurse to the next level, level 1, where blocks are $N/2$ by $N/2$. If instead $R \leq R_{\text{max}}$, we express $M$ in its low-rank form by truncating the SVD of $M$ after $R$ terms. That is, writing the SVD of $M$ as $M=U\Sigma V^*=\sum_{j=1}^{N} U_j \sigma_j V_j^*$, where $U$ and $V$ are orthonormal matrices with columns $\left\{U_j\right\}_{j=1}^{N}$ and $\left\{V_j\right\}_{j=1}^{N}$, and $\Sigma=\text{diag}(\sigma_1, \sigma_2, \ldots, \sigma_N)$ is the diagonal matrix of decreasing singular values, we compress $M$ to $\overline{M}=\sum_{j=1}^{R} U_j \sigma_j V_j^*$.
If we need to split $M$ and recurse down to the next level, we do the following. First, we split $M$ in four square blocks of the same size: we take the first $N/2$ rows and columns to make the first block, then the first $N/2$ rows and the last $N/2$ columns to make the second block, etc. We then apply the step described in the previous paragraph to each block of $M$, checking the block's numerical rank and compressing it or splitting it depending on that numerical rank. Whenever we split up a block, we label it as ``hierarchical'', and call its four sub-blocks its \emph{children}. Whenever a block was not divided, and hence compressed instead, we label it as ``compressed'', and we may call it a ``leaf'' as well.
If a block has dimension $R_{\text{max}}$ by $R_{\text{max}}$, then its numerical rank will be at most $R_{\text{max}}$, and so once blocks have dimensions smaller than or equal to the maximal desired rank $R_{\text{max}}$, we can stop recursing and store the blocks directly. However, especially if $R_{\text{max}}$ is large, we might still be interested in compressing those blocks using the SVD. This is what we do in our code, and we label such blocks as ``compressed'' as well. When we wish to refer to how blocks of a certain matrix $M$ have been divided when $M$ was compressed in the PLR framework, or in particular to the set of all leaf blocks and their positions in $M$, we refer to the ``structure'' of $M$. We see then that the structure of a PLR matrix will have at most $L$ levels, where $N/R_\text{max}=2^L$ so $L=\log_2{(N/R_\text{max})}$.
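In code, the rank test just described is nothing more than a count of singular values, as in the following small helper (shown with a full SVD for clarity; our implementation uses the randomized SVD discussed next).
\begin{verbatim}
# The rank test of the text, with a full SVD for clarity (our code
# uses the randomized SVD instead):
import numpy as np

def numerical_rank(M, eps):
    """Number of singular values of M that are >= eps."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s >= eps))
\end{verbatim}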
\subsubsection{Implementation details}
Algorithm \ref{alg:PLR_matrix} presents pseudocode for the construction of a PLR matrix from a dense matrix. In practice, when we compute the SVD of a block, we use the randomized SVD\footnote{Theoretically, this randomized SVD has a failure probability, but we can choose a parameter to make this probability on the order of $10^{-16}$, and so we ignore the fact that the randomized SVD could fail.} \cite{randomSVD}. This allows us to use only a few matrix-vector multiplies between the block (or its transpose) and random vectors to form an approximate reduced SVD. This is a faster way of producing the SVD, and thus also of finding out whether the numerical rank of the block is larger than $R_{\text{max}}$. The randomized SVD requires about 10 more random matrix-vector multiplies than the desired maximal rank $R_{\text{max}}$. This is why, in algorithm \ref{alg:PLR_matrix}, the call to \emph{svd} has two arguments: the block we want to find the SVD of, and the maximal desired rank $R_{\text{max}}$. The randomized SVD algorithm then uses 10 more random vectors than the quantity $R_{\text{max}}$ and returns an SVD of rank $R_{\text{max}}+1$. We use the $(R_{\text{max}}+1)^{\text{st}}$ singular value in $\Sigma$ to test whether we need to split the block and recurse or not.
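For reference, here is a bare-bones Python rendition of a randomized SVD in the spirit of \cite{randomSVD}, given only products with the block and its adjoint; the oversampling parameter \texttt{q} plays the role of the ten extra random vectors mentioned above. This is a sketch of the generic algorithm, not our exact implementation.
\begin{verbatim}
# Bare-bones randomized SVD: sample the range of A with rank+q random
# vectors, orthonormalize, then take a small dense SVD.
import numpy as np

def randomized_svd(apply_A, apply_At, n, rank, q=10):
    Omega = np.random.randn(n, rank + q)
    Y = apply_A(Omega)                 # capture the range of A
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for range(Y)
    B = apply_At(Q).conj().T           # B = Q^* A, a small matrix
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, S, Vt               # A ~ (Q Ub) diag(S) Vt

# Usage on an explicit matrix with decaying singular values:
M = np.random.randn(400, 400) * (0.5 ** np.arange(400))
U, S, Vt = randomized_svd(lambda X: M @ X, lambda X: M.conj().T @ X,
                          n=400, rank=12)
\end{verbatim}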
\begin{algorithm} Compression of matrix $M$ into Partitioned Low Rank form, with maximal rank $R_{\text{max}}$ and tolerance $\epsilon$ \label{alg:PLR_matrix}
\begin{algorithmic}[1]
\Function{H = PLR}{$M$, $R_{\text{max}}$, $\epsilon$}
\State $[U, \Sigma ,V] = \texttt{svd}(M, R_{\text{max}})$ \Comment{Randomized SVD}
\If{ $\exists R \in \left\{1,2,\ldots,R_{\text{max}}\right\} : \Sigma(R+1,R+1) < \epsilon$}
\State Let $R$ be the smallest such integer.
\State \texttt{H.data = \{$U(:,1:R)\cdot \Sigma(1:R,1:R)$, $V(:,1:R)^{*}$\} }
\State \texttt{H.id = 'c' } \Comment{This block is ``compressed''}
\Else \Comment{The $M_{ij}$'s are defined in the text}
\For{i = 1:2}
\For{j = 1:2}
\State \texttt{H.data\{i,j\} = } PLR($M_{ij}$, $R_{\text{max}}$, $\epsilon$ ) \Comment Recursive call
\EndFor
\EndFor
\State \texttt{H.id = 'h' } \Comment{This block is ``hierarchical''}
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsubsection{Complexity}
The complexity of this algorithm depends on the complexity of the SVD algorithm we use. The randomized SVD has complexity $O(N_B R_\text{max}^2)$ where $N_B$ is the dimension of block $B$ whose SVD we are calculating. This can be much better, especially for larger blocks, than standard SVD algorithms which have complexity $O(N_B^3)$. The total complexity of the compression algorithm will then depend on how many blocks of which size and rank we find the randomized SVD of. We shall discuss this in more detail when we discuss also the complexity of a matrix-vector product for special structures.
\subsubsection{Error analysis}
To understand the error we make by compressing blocks in the PLR framework, we first note that, by compressing a block $M$ to its rank-$R$ approximation $\overline{M}=\sum_{j=1}^{R} U_j \sigma_j V_j^*$ obtained from truncating its SVD, we make an error in the $L_2$ norm of $\sigma_{R+1}$. That is,
\[ \|M-\overline{M} \|_2 = \| \sum_{j=R+1}^{N} U_j \sigma_j V_j^* \|_2 = \sigma_{R+1} . \]
Hence, by compressing a block, we make an $L_2$ error for that block of at most $\varepsilon$ because we make sure that $\sigma_{R+1} \leq \varepsilon$. Of course, errors from various blocks will compound to affect the total error we make on matrix $M$. We shall mention this in more detail when we discuss particular structures.
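As a sanity check, the identity above is easy to verify numerically:
\begin{verbatim}
# Truncating the SVD after R terms gives an L2 error of exactly the
# (R+1)-st singular value:
import numpy as np

M = np.random.randn(64, 64)
U, s, Vt = np.linalg.svd(M)
R = 10
M_R = U[:, :R] @ np.diag(s[:R]) @ Vt[:R, :]
print(np.linalg.norm(M - M_R, 2), s[R])   # the two numbers agree
\end{verbatim}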
The relative Frobenius error we make between $M$ and $\overline{M}$, the compressed approximation of $M$, will usually be larger than $\epsilon$ because of two factors. First of all, as we just saw, the PLR compression algorithm uses the $L_2$ norm. To use the Frobenius norm when deciding whether to compress or divide a block, we would need access to all the singular values of each block. This would be possible using a typical SVD algorithm, but quite costly. Hence we use the randomized SVD, which is faster but with which we are forced to use the $L_2$ norm. Another factor we have yet to mention is that errors from different blocks will compound to make the total error between $M$ and $\overline{M}$ larger than the error between any individual blocks of $M$ and $\overline{M}$. This of course depends on how many blocks there are in the structure of any particular PLR matrix. We will talk more about this in subsection \ref{sec:structncomp}, where we explore the complexity of the compression and matrix-vector algorithms of the PLR framework. But first, we introduce matrix-vector products.
\subsection{Multiplication of a PLR matrix with a vector}
To multiply a PLR matrix $M$ with a vector $v$, we again use recursion. Starting at the highest level block, the whole matrix $M$ itself, we ask whether this block has been divided into sub-blocks. If not, we multiply the block directly with the vector $v$. If the block has been subdivided, then we ask for each of its children whether those have been subdivided. If not, we multiply the sub-block with the appropriate restriction of $v$, and add the result to the correct restriction of the output vector. If so, we recurse again. Algorithm \ref{alg:PLR_matvec} presents the pseudocode for multiplying a vector by a PLR matrix. The algorithm to left-multiply a vector by a matrix is similar; we do not show it here.
\begin{algorithm} Multiplication of a PLR matrix $H$ with column vectors $x$ \label{alg:PLR_matvec}
\begin{algorithmic}[1]
\Function{y = matvec}{H,x}
\If{ \texttt{H.id == 'c' } }
\State \texttt{y = H.data\{1\}$\cdot$(H.data\{2\}$\cdot$x) }
\Else
\State $\texttt{y}_{1}\texttt{ = }\text{matvec}\texttt{(H.data\{1,1\},}\texttt{x}(\texttt{1:end/2,:}))$
\State $\qquad +\text{matvec}\texttt{(H.data\{1,2\},}\texttt{x}(\texttt{end/2:end,:}))$
\State $\texttt{y}_{2}\texttt{ = }\text{matvec}\texttt{(H.data\{2,1\},}\texttt{x}(\texttt{1:end/2,:}))$
\State $\qquad +\text{matvec}\texttt{(H.data\{2,2\},}\texttt{x}(\texttt{end/2:end,:}))$
\State \texttt{y = $ \left[ \begin{array}{c}
\texttt{y}_1\\
\texttt{y}_2
\end{array}
\right ]$ }
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsubsection{Complexity}
The complexity of this algorithm is easily understood. Recall that we store a block that is not divided not as a full matrix, but as the two factors of its truncated SVD: the orthonormal matrix $U$, and the product $\Sigma V^*$. Every time we multiply such an $N_B$ by $N_B$ block $B$ that has not been subdivided with the corresponding restriction of $v$, we first multiply the restriction $\tilde{v}$ of $v$ with $\Sigma V^*$, and then multiply the result with $U$. Let $R_B \leq R_\text{max}$ be the numerical rank of that block. Then, we first make $N_BR_B$ multiplication operations and $(N_B-1)R_B$ addition operations for the product $\Sigma V^* \tilde{v}$, and then $R_BN_B$ multiplication operations and $(R_B-1)N_B$ addition operations for the product of that with $U$. Hence we make approximately $4N_BR_B$ operations per block, where again $N_B$ is the dimension of the block $B$ and $R_B$ is its numerical rank.
The total number of operations for multiplying a vector $v$ with a PLR matrix $M$, then, is about
\begin{equation}\label{eq:comp}
\sum_{B \text{ is compressed}} 4 N_B R_B ,
\end{equation}
where we sum over all ``compressed'' blocks.
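The count \eqref{eq:comp} is easy to tabulate by traversing the tree of blocks. The helper below assumes a hypothetical Python rendering of the \texttt{H.id}/\texttt{H.data} layout of Algorithm \ref{alg:PLR_matrix}; the \texttt{Node} type is introduced here only for illustration.
\begin{verbatim}
# Tabulate the operation count (eq:comp) over the leaves of a PLR tree.
# Node is a hypothetical Python rendering of the H.id / H.data layout.
from dataclasses import dataclass
from typing import Any

@dataclass
class Node:
    id: str     # 'c' = compressed leaf, 'h' = hierarchical block
    data: Any   # leaf: (USigma, Vstar); else: 2-by-2 list of Nodes

def matvec_cost(node):
    if node.id == 'c':
        USigma, Vstar = node.data
        N_B, R_B = USigma.shape          # block size, numerical rank
        return 4 * N_B * R_B             # per-leaf count from (eq:comp)
    return sum(matvec_cost(child) for row in node.data for child in row)
\end{verbatim}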
Evidently there is a trade-off here. Asking for a small maximal rank $R_\text{max}$ may force blocks to be subdivided a lot. We then have small $R_B$'s and small $N_B$'s, but a lot of blocks. On the other hand, having a larger $R_\text{max}$ means a lot of blocks can remain large. We then have large $R_B$'s and $N_B$'s, but very few blocks. We shall use the complexity count \eqref{eq:comp} later on to decide on which $R_\text{max}$ to choose for each block of the DtN map. To have an idea of whether such a matrix-vector multiplication might result in a fast algorithm, we need to introduce new terminology regarding the structure of PLR matrices, and we do so in the next subsection.
\subsection{Structure and complexity}\label{sec:structncomp}
As we did with matrix probing, we again split up the probed exterior DtN map $\tilde{D}$ in submatrices corresponding to the different sides of $\partial \Omega$. We called those \emph{blocks} of $\tilde{D}$ in the previous chapter, but we now call them \emph{submatrices}, to differentiate them from the blocks in the structure we obtain from compressing to a PLR matrix. So $\tilde{D}$ is split up in submatrices $\tilde{M}$ that represent edge-to-edge restrictions of $\tilde{D}$. We then only have to compress once each unique submatrix $\tilde{M}$ to obtain an approximation $\overline{M}$ of $\tilde{M}$. We can use those compressed $\overline{M}$'s to define $\overline{D}$, an approximation of $\tilde{D}$. If we need to multiply our final approximation $\overline{D}$ by a vector, we may then split that vector in blocks corresponding to the sides of $\partial \Omega$ and use the compressed submatrices and the required PLR matrix algebra to obtain the result with low complexity.
What is particular about approximating the DtN map on a square boundary $\partial \Omega$ is that distinct submatrices are very different. Those that correspond to the restriction of $D$ from one edge to that same edge, and as such are on the diagonal of $D$, are harder to probe because of the diagonal singularity, as we saw in the previous chapter. For the same reason, they might be well-suited for compression by hierarchical matrices \cite{bebendorf}.
However, submatrices of $D$ corresponding to two edges that are side by side (for example, the bottom and right edges of the boundary of $[0,1]^2$) see the effects of the diagonal of $D$ in their upper-right or lower-left corners, and entries of such submatrices decay in norm away from that corner. Thus a hierarchical matrix would be ill-suited to compress such a submatrix. This is why the PLR framework is so useful to us: it automatically adapts to the submatrix at hand, and to whether there is a singularity in the submatrix, and where that singularity might be.
Similarly, when dealing with a submatrix of $D$ corresponding to opposite edges of $\partial \Omega$, we see that entries with higher norm are in the upper-right and bottom-left corners, so again PLR matrices are more appropriate than hierarchical ones. However, note that because such submatrices have very small relative norm compared to $D$, and were probed with only one or two basis matrices in the previous chapter, their PLR structure is often trivial.
In order to help us understand the complexity of PLR compression and matrix-vector products, we first study typical structures of hierarchical \cite{hmatlect}, \cite{Hweak} and PLR matrices.
\subsubsection{Weak hierarchical matrices}
\begin{definition}
A matrix is said to have \emph{weak hierarchical structure} when a block is compressed if and only if its row and column indices do not overlap.
\end{definition}
The weak hierarchical structure of a matrix is shown in Figure \ref{fig:weak}. For example, let the matrix $M$ be $8 \times 8$. Then the block at level 0 is $M$ itself. The row indices of $M$ are $\left\{1, 2, \ldots, 8\right\}$, and so are its column indices. Since those overlap, we divide the matrix in four. We are now at level 1, with four blocks. The $(1,1)$ block has row indices $\left\{1, 2, 3, 4\right\}$, and its column indices are the same. This block will have to be divided. The same holds for block $(2,2)$. However, block $(1,2)$ has row indices $\left\{1, 2, 3, 4\right\}$ and column indices $\left\{5, 6, 7, 8\right\}$. Those two sets do not overlap, hence this block is compressed. The same will be true of block $(2,1)$.
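The subdivision rule is easy to state in code. The following Python sketch uses half-open index ranges, a convention of our own, and reproduces the walk-through above; the parameter \texttt{min\_size} plays the role of the $R_\text{max}$-sized diagonal leaves in Figure \ref{fig:weak}.
\begin{verbatim}
def overlap(rows, cols):
    # Half-open ranges (lo, hi) overlap iff neither ends before the other.
    return rows[0] < cols[1] and cols[0] < rows[1]

def weak_structure(rows, cols, min_size=1):
    # Yield (rows, cols, decision) for the weak hierarchical criterion.
    if not overlap(rows, cols) or rows[1] - rows[0] <= min_size:
        yield rows, cols, 'compress'
        return
    yield rows, cols, 'divide'
    rm, cm = (rows[0] + rows[1]) // 2, (cols[0] + cols[1]) // 2
    for r in ((rows[0], rm), (rm, rows[1])):
        for c in ((cols[0], cm), (cm, cols[1])):
            yield from weak_structure(r, c, min_size)

for r, c, decision in weak_structure((0, 8), (0, 8), min_size=2):
    print(r, c, decision)  # blocks (1,2), (2,1) of level 1 are compressed
\end{verbatim}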
We note that, if the matrix $M$ has a weak hierarchical structure, we have a fast matrix-vector product. We may use the heuristic \eqref{eq:comp} to obtain the complexity of that product, assuming for simplicity that all $R_B$'s are $R_\text{max}$. Hence we account for all compressed blocks, starting from the 2 larger blocks on level 1, of size $N/2$ by $N/2$ (one on each side of the diagonal): they correspond to a maximum of $4 \times N/2 \times R_\text{max}$ operations (multiplications and additions) each, and there are two of them, so they correspond to a total of $4 N R_\text{max}$ operations. Then, the next larger blocks are of size $N/4$ by $N/4$, and there are 4 of them (two on each side of the diagonal). Hence they correspond to a total of $4 \times 4 \times N/4 \times R_\text{max}=4 N R_\text{max}$ operations. Since we have $L=\log_2{N/R_\text{max}}$ levels, or different block sizes, and as we can see each of those block sizes contributes at most $4 N R_\text{max}$ operations, we have about $4 N R_\text{max} \log{(N/R_\text{max})}$ operations from off-diagonal blocks. We are left with the diagonal blocks. Those have size $R_\text{max}$ and there are $N/R_\text{max}$ of them, so the complexity of multiplying them by a vector is at most $ 4 (N/R_\text{max}) R^2_\text{max} = 4N R_\text{max}$ operations. Hence the total complexity of a matrix-vector multiplication with a weak hierarchical matrix is
\begin{equation}\label{eq:comweak}
4N R_\text{max} \log{\frac{2N}{R_\text{max}}} .
\end{equation}
This is clearly faster, asymptotically, than the typical complexity of a dense matrix-vector product which is of $2N^2$ operations.
As to the complexity of the compression algorithm, we do a similar calculation, but here the cost per block is $O(N_B R_\text{max}^2)$, so all we need is to sum the dimensions of all blocks that we used the SVD on. We start by taking the SVD of the matrix itself, then of all the blocks on level 1, then half of the blocks on level 2, then a quarter of the blocks on level 3, etc. Hence the complexity of compression is
\[ R_\text{max}^2 \left( N + \sum_{l=1}^{L-1} \frac{N}{2^l} \frac{4^l}{2^{l-1}} \right) = R_\text{max}^2 (N+2N(L-1)) \leq 2N R_\text{max}^2 \log{\frac{N}{R_\text{max}}}\]
which is nearly linear in $N$.
Finally, we address briefly the error in the matrix $M$ that is made when it is compressed. We especially care about the error in a matrix-vector multiplication $w=Mv$. We can see in this case that, for any entry $j$ in $w$, there will be error coming from all multiplications of the appropriate restriction of $v$ with the corresponding block intersecting row $j$ of $M$. Since there are about $\log\frac{N}{R_\text{max}}$ such blocks in row $j$, we can estimate that by giving a tolerance $\varepsilon$ to the PLR compression algorithm, we will obtain an error in matrix-vector multiplications of about $\varepsilon \log N$. As we will see in section \ref{sec:usingplr}, dividing the ``desired'' error by a factor of 1 to 25 to obtain the necessary $\varepsilon$ will work quite well for our purposes.
\begin{figure}[H]
\begin{minipage}[t]{0.32\linewidth}
\includegraphics[scale=.2]{./figs/output.pdf}
\caption{Weak hierarchical structure, $\frac{N}{R_\text{max}}=8$.}\label{fig:weak}
\end{minipage}
\begin{minipage}[t]{0.32\linewidth}
\includegraphics[scale=.2]{./figs/outputstr.pdf}
\caption{Strong hierarchical structure, $\frac{N}{R_\text{max}}=16$.}\label{fig:strong}
\end{minipage}
\begin{minipage}[t]{0.32\linewidth}
\includegraphics[scale=.2]{./figs/outputcorn.pdf}
\caption{Corner PLR structure, $\frac{N}{R_\text{max}}=8$.}\label{fig:corner}
\end{minipage}
\end{figure}
\subsubsection{Strong hierarchical matrices}
Next, we define a matrix with a \emph{strong hierarchical structure}. This will be useful for matrices with a singularity on the diagonal.
\begin{definition}\label{def:strong}
A matrix is said to have \emph{strong hierarchical structure} when a block is compressed if and only if its row and column indices are separated by at least the width of the block.
\end{definition}
The strong hierarchical structure of a matrix is shown in Figure \ref{fig:strong}. We can see that, the condition for compression being stronger than in the weak case, more blocks will have to be divided. For example, let the matrix $M$ be $8 \times 8$ again. Then the block at level 0 is $M$ itself, and again its row and column indices overlap, so we divide the matrix in four. We are now at level 1, with four blocks. The $(1,1)$ block will still have to be divided, its row and column indices being equal. The same holds for block $(2,2)$. Now, block $(1,2)$ has row indices $\left\{1, 2, 3, 4\right\}$ and column indices $\left\{5, 6, 7, 8\right\}$. Those two sets do not overlap, but the distance between them, defined as the minimum of $|i-j|$ over all row indices $i$ and column indices $j$ for that block, is 1. Since the width of the block is 4, which is greater than 1, we have to divide the block following Definition \ref{def:strong}. However, at level 2 which has 16 blocks of width 2, we can see that multiple blocks will be compressed: $(1,3), (1,4), (2,4), (3,1), (4,1), (4,2)$.
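In code, only the predicate changes relative to the weak case; a sketch in the same half-open range convention follows. On the $8\times 8$ example, \texttt{strong\_compress((0,2),(4,6))} returns \texttt{True} for block $(1,3)$ of level 2, while \texttt{strong\_compress((0,4),(4,8))} returns \texttt{False} for block $(1,2)$ of level 1, matching the walk-through.
\begin{verbatim}
def distance(rows, cols):
    # min |i - j| over row indices i, column indices j; 0 if ranges overlap.
    if rows[0] < cols[1] and cols[0] < rows[1]:
        return 0
    return cols[0] - rows[1] + 1 if cols[0] >= rows[1] else rows[0] - cols[1] + 1

def strong_compress(rows, cols):
    # Strong hierarchical criterion: compress iff separation >= block width.
    return distance(rows, cols) >= rows[1] - rows[0]
\end{verbatim}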
The matrix-vector multiplication complexity of matrices with a strong hierarchical structure is
\begin{equation}\label{eq:compstrong}
12N R_\text{max} \log{\frac{N}{2 R_\text{max}}}.
\end{equation}
Again this is faster, asymptotically, than the typical $2N^2$ operations of a dense matrix-vector product. We can obtain this number once again by accounting for all the blocks and using \eqref{eq:comp}. More precisely, we have $3\sum_{j=1}^{l-1} 2^j=3(2^l-2)=6(2^{l-1}-1)$ compressed blocks at level $l$, and those blocks have size $N/2^l$. This is true for $l=2, \ldots, L-1$, where again $L=\log_2{N/R_\text{max}}$ is the number of levels. Notice that, as expected, we do not have any compressed blocks, or leaves, at level 1. The contribution of those blocks to the matrix-vector complexity will be
\begin{eqnarray*}
4R_\text{max} \sum_{l=2}^{L-1} \left(6(2^{l-1}-1) \ \frac{N}{2^l} \right) &=&12 N R_\text{max} \sum_{l=2}^{L-1} \frac{2^l-2}{2^l}\\
&=& 12 N R_\text{max} (L-2 - \sum_{l=0}^{L-3} \frac{1}{2} \frac{1}{2^l} ) \\
&=& 12 N R_\text{max} (L-2 - (1-1/2^{L-2}) ) \\
&=& 12 N R_\text{max} (L-3+1/2^{L-2}).
\end{eqnarray*}
We need to add to this quantity the complexity coming from the smallest blocks, of size $N/2^L$. There are
\[ 6(2^{L-1}-1) + 2^L + 2(2^L-1)=6\cdot 2^L-8\]
such blocks, and so the corresponding complexity is
\[ 4R_\text{max} (6 \cdot 2^L-8)(N/2^L)=4 R_\text{max} N (6-8/2^L).\]
Adding this to our previous result, we obtain the final complexity of a matrix-vector multiplication:
\begin{eqnarray*}
& & 12 N R_\text{max} (L-3+1/2^{L-2})+ 4 R_\text{max} N (6-8/2^L)\\
&=& 12 N R_\text{max} \left(L-1+\frac{1}{3\cdot2^{L-2}}\right) \\
&=& 12 N R_\text{max} (L-1)+16R_\text{max}^2,
\end{eqnarray*}
which is $12 N R_\text{max} \log{\frac{N}{2R_\text{max}}}$ plus a negligible term, as stated previously.
For the complexity of the compression algorithm, again we sum the dimensions of all blocks whose SVD we calculated: the matrix itself, the 4 blocks of level 1, the 16 blocks of level 2, 40 blocks in level 3, etc. Hence the complexity of compression is
\begin{eqnarray*}
R_\text{max}^2 \left( N + \frac{N}{2}\, 4 + \sum_{l=2}^L \frac{N}{2^l} (6\cdot 2^l-8) \right) &=& R_\text{max}^2 \left( N + 2N + N \sum_{l=2}^{L} \left(6-\frac{8}{2^l} \right) \right) \\
&=& R_\text{max}^2 N \left(3+6(L-1)-8\left(\frac{1}{2}-\frac{1}{2^{L}}\right) \right) \\
&=& R_\text{max}^2 N \left(6L-7+\frac{8}{2^{L}} \right) \\
&\leq & R_\text{max}^2 N \left(6L-6 \right), \qquad L \geq 3,
\end{eqnarray*}
or
\[6N R_\text{max}^2 \log{\frac{N}{2R_\text{max}}}. \]
This again is nearly linear. Using similar arguments as in the weak case, we can estimate that by giving a tolerance $\varepsilon$ to the PLR compression algorithm, we will obtain an error in matrix-vector multiplications of about $\varepsilon \log N$ again.
\subsubsection{Corner PLR matrices}
One final structure we wish to define, now useful for matrices with a singularity in a corner, is the following:
\begin{definition}\label{def:corner}
A matrix is said to have \emph{corner PLR structure}, with reference to a specific corner of the matrix, when a block is divided if and only if both its row and column indices contain the row and column indices of the entry corresponding to that specific corner.
\end{definition}
Figure \ref{fig:corner} shows a top-right corner PLR structure. Again, we take an $8 \times 8$ matrix $M$ as an example. The top-right entry has row index 1 and column index 8. We see that the level 0 block, $M$ itself, certainly contains the indices $(1,8)$, so we divide it. On the next level, we have four blocks. Block $(1,2)$ is the only one that has both row indices that contain the index 1, and column indices that contain the index 8, so this is the only one that is divided. At level 2, only the four children of block $(1,2)$ appear, each of size 2, and of those, block $(1,4)$ (in the indexing of the grid of size-2 blocks) is the only one that contains the corner indices, hence the only one divided.
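The corresponding predicate, for a corner at row index \texttt{i} and column index \texttt{j}, is again a short sketch in our half-open range convention:
\begin{verbatim}
def corner_divide(rows, cols, corner):
    # Corner PLR criterion: divide iff the block contains the corner entry.
    i, j = corner
    return rows[0] <= i < rows[1] and cols[0] <= j < cols[1]

# Top-right corner of an 8x8 matrix (0-based entry (0, 7)): at each level,
# only the block containing that entry is divided.
print(corner_divide((0, 8), (0, 8), (0, 7)))   # True  (level 0)
print(corner_divide((0, 4), (4, 8), (0, 7)))   # True  (level 1, block (1,2))
print(corner_divide((0, 4), (0, 4), (0, 7)))   # False (level 1, block (1,1))
\end{verbatim}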
As for the corner PLR matrices, their matrix-vector multiplication complexity is:
\begin{equation}\label{eq:compcorn}
12N R_\text{max}.
\end{equation}
Indeed, we have 3 compressed blocks of size $N/2^l$ at each level $l=1,2, \ldots, L-1$. This is a constant number of blocks per level, which means that matrix-vector multiplication will be even faster. We also have 4 blocks at the lowest level, of size $N/2^L$. The complexity is then
\begin{eqnarray*}
4 R_\text{max}\, 3 \sum_{l=1}^{L-1} N/2^l \ +4R_\text{max}\, 4 N/2^L &=& 4 R_\text{max} N \left(3(1-1/2^{L-1}) +2/2^{L-1}\right)\\
&=& 4 N R_\text{max} \left(3 - 1/2^{L-1}\right) \ \leq \ 12 N R_\text{max}.
\end{eqnarray*}
For the complexity of the compression algorithm, we sum the dimensions of all blocks whose SVD we calculated: the matrix itself, the 4 blocks of level 1, 4 blocks in level 2, 4 blocks in level 3, etc. Hence the complexity of compression is
\begin{eqnarray*}
R_\text{max}^2 \left( N + 4 \sum_{l=1}^L \frac{N}{2^l} \right) &=& R_\text{max}^2 N \left( 1+4 \ \frac{1}{2} \ \frac{1-1/2^L}{1-1/2} \right) \\
&=& R_\text{max}^2 N \left(1+4-4/2^L \right) \\
&\leq & R_\text{max}^2 N \left(5 \right)
\end{eqnarray*}
or
\[5N R_\text{max}^2 . \]
Hence the complexity of compression for corner PLR matrices is linear. And again, we estimate that by giving a tolerance $\varepsilon$ to the PLR compression algorithm, we will obtain an error in matrix-vector multiplications of about $\varepsilon \log N$.
\newline
\newline
Now that we have explained these three special structures, and how they provide a fast matrix-vector product, we are ready to discuss using PLR matrices specifically for the exterior DtN map.
\section{Using PLR matrices for the DtN map's submatrices}\label{sec:usingplr}
As we recall, obtaining the full DtN map from solving the exterior problem $4N$ times is too costly, and so we use matrix probing to approximate the DtN map $D$ by $\tilde{D}$ using only a few exterior solves. If we were to try to use PLR matrices directly on $D$, we would have to find the SVD of many blocks. Since we do not have access to the blocks themselves, we would need to use the randomized SVD, and hence to solve the exterior problem, on random vectors restricted to the block at hand. As we mentioned before, Lin et al. have done something similar in \cite{Hmatvec}, which required $O(\log{N})$ matrix-vector multiplies (in our case, exterior solves) with a large constant. This is too costly, and this is why we use matrix probing first to obtain an approximate $\tilde{D}$ with a small, nearly constant number of exterior solves.
Now that we have access to $\tilde{D}$ from matrix probing, we can approximate it using PLR matrices. Indeed, we have access to the full matrix $\tilde{D}$, and so finding the SVD of a block is not a problem. In fact, we use the randomized SVD for speed, not because we only have access to matrix-vector multiplies.
Compressing one of those edge-to-edge submatrices under the PLR matrix framework requires that we pick both a tolerance $\epsilon$ and a maximal desired rank $R_\text{max}$. We explain in the next subsections how to choose appropriate values for those parameters.
\subsection{Choosing the tolerance}
Because our submatrices come from probing, they already have some error attached to them, that is, the relative probing error as defined in equation \ref{acterr} of chapter \ref{ch:probing}. Therefore, it would be wasteful to ask for the PLR approximation to do any better than that probing error.
Also, when we compress blocks in the PLR compression algorithm, we make an absolute error in the $L_2$ norm. However, because of the high norm of the DtN map, it makes more sense to consider the relative error. We can thus multiply the relative probing error we made on each submatrix $\tilde{M}$ by the norm of the DtN map $D$ to know the absolute error we need to ask of the PLR compression algorithm. And since the $L_2$ norm is smaller than the Frobenius norm, and errors from each block compound, we have found empirically that asking for a tolerance $\epsilon$ which is a factor of $1$ to $1/100$ of the absolute probing error of a submatrix works for obtaining a similar Frobenius error from the PLR approximation. As a rule of thumb, this factor needs to be smaller for diagonal submatrices $M$ of $D$, but can be equal to 1 for submatrices corresponding to opposite edges of $\partial \Omega$.
Of course, we do not want to use an $\varepsilon$ which is too small either. That might force the PLR compression algorithm to divide blocks more than needed, and make the matrix-vector multiplications slower than necessary.
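To summarize the recipe, a small sketch follows; the helper name and the default factor are ours, and the factor range is the empirical $1$ to $1/100$ quoted above.
\begin{verbatim}
def plr_tolerance(rel_probing_err, norm_D, safety=0.1):
    # Absolute tolerance for PLR compression of one submatrix: scale the
    # relative probing error by ||D|| and by an empirical factor in
    # [1/100, 1] (smaller for diagonal submatrices, up to 1 for
    # submatrices corresponding to opposite edges).
    return safety * rel_probing_err * norm_D
\end{verbatim}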
\subsection{Minimizing the matrix-vector application time}
Our main objective in this chapter is to obtain a fast algorithm. To this end, we try to compress probed submatrices of the DtN map using various values of the parameter $R_\text{max}$, and choose the value that will give us the fastest matrix-vector multiplies. We use the known complexity of a matrix-vector multiplication \eqref{eq:comp} to find the rank $R_\text{max}$ that minimizes the complexity, by doing a few tests, and we use the compressed submatrix corresponding to that particular maximal rank in our Helmholtz solver. A different measure of complexity might be used depending on the operating system and coding language, since slowdowns might occur because of cache size, memory operations, matrix and vector operations, etc.
However, we note that we may compare the actual complexity from the particular structure obtained by PLR compression to the ``ideal'' complexities coming from the special structures we have mentioned before. Indeed, for a submatrix on the diagonal, we can compare its matrix-vector complexity to that of weak and strong hierarchical matrices. That will give us an idea of whether we have a fast algorithm. One thing we notice in most of our experiments is that, for diagonal blocks, the actual complexity usually becomes smaller as $R_\text{max}$ increases, until we arrive at a minimum with the $R_\text{max}$ that gives us the best compromise between too many blocks and blocks of too high a rank. Then, the complexity increases again. However, it increases more slowly than that of both weak and strong hierarchical matrices.
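The selection procedure can be summarized as follows. This is a simplified sketch of our own (a plain truncated SVD instead of the randomized SVD used in the thesis, recursion stopped at size $2R_\text{max}$, square power-of-two sizes assumed); it reuses \texttt{complexity} from the earlier sketch.
\begin{verbatim}
import numpy as np

def compress(M, eps, r_max):
    # Simplified PLR compression: truncate the SVD at absolute tolerance
    # eps; keep the block as a leaf if its eps-rank is <= r_max, else
    # divide in four. We accept whatever rank remains at size 2 * r_max.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > eps))
    n = M.shape[0]
    if r <= r_max or n <= 2 * r_max:
        return ('c', (U[:, :r], s[:r, None] * Vt[:r]))
    h = n // 2
    return ('h', [[compress(M[i:i+h, j:j+h], eps, r_max) for j in (0, h)]
                  for i in (0, h)])

def best_rmax(M, eps, candidates=(2, 4, 8, 16, 32)):
    # Keep the R_max whose compressed tree gives the cheapest matvec,
    # measured by the operation count of equation (eq:comp).
    return min(candidates, key=lambda r: complexity(compress(M, eps, r)))
\end{verbatim}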
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{comps-c1-block1-errsm5m6}.pdf}
\caption{Matrix-vector complexity for submatrix $(1,1)$ for $c \equiv 1$, various $R_\text{max}$. Probing errors of $10^{-5}$, $10^{-6}$.}
\label{fig:compscomp1}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{comps-c1-block2-errsm5m6}.pdf}
\caption{Matrix-vector complexity for submatrix $(2,1)$ for $c \equiv 1$, various $R_\text{max}$. Probing errors of $10^{-5}$, $10^{-6}$.}
\label{fig:compscomp2}
\end{minipage}
\end{figure}
Figure \ref{fig:compscomp1} confirms this phenomenon for the $(1,1)$ block of the constant medium. From this figure, we would then pick $R_\text{max}=8$ since this is the value of $R_\text{max}$ that corresponds to the smallest actual complexity of a matrix-vector product, both for a relative probing error of $10^{-5}$ and $10^{-6}$. Figure \ref{fig:compscomp2} confirms this phenomenon as well for the $(2,1)$ block of the constant medium. From this figure, we would then pick $R_\text{max}=4$ again both for a relative probing error of $10^{-5}$ and $10^{-6}$, since this is the value of $R_\text{max}$ that corresponds to the smallest actual complexity of a matrix-vector product (it is hard to tell from the figure, but the complexity for $R_\text{max}=2$ is just larger than that for $R_\text{max}=4$ in both cases).
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_1_1-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(1,1)$ for $c \equiv 1$, $R_\text{max}=2$. Each block is colored by its numerical rank.}
\label{fig:c1d1b1}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_3_1-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(1,1)$ for $c \equiv 1$, $R_\text{max}=8$. Each block is colored by its numerical rank.}
\label{fig:c1d3b1}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_5_1-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(1,1)$ for $c \equiv 1$, $R_\text{max}=32$. Each block is colored by its numerical rank.}
\label{fig:c1d5b1}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_2_2-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(2,1)$ for $c \equiv 1$, $R_\text{max}=4$. Each block is colored by its numerical rank.}
\label{fig:c1d2b2}
\end{minipage}
\end{figure}
It is also informative to have a look at the structures we obtain, and the ranks of each block. Figures \ref{fig:c1d1b1}, \ref{fig:c1d3b1} and \ref{fig:c1d5b1} refer to block $(1,1)$ again of $c \equiv 1$, for different values of $R_\text{max}$. As we expect, having $R_\text{max}=2$ in Figure \ref{fig:c1d1b1} in this case forces blocks to be very small, which is wasteful. On the other hand, a larger $R_\text{max}=32$ in Figure \ref{fig:c1d5b1} is not better because then we have fewer blocks, but they have large rank. Still, because the structure is that of a weak hierarchical matrix, with blocks that in fact have rank smaller than $R_\text{max}$, we obtain a fast matrix-vector product. However, the ideal is $R_\text{max}=8$ in Figure \ref{fig:c1d3b1}, which minimizes the complexity of a matrix-vector product by striking the correct balance between the number of blocks and their ranks. We see it almost has a strong hierarchical structure, but in fact with more large blocks and fewer small blocks. As for the $(2,1)$ block, we see its PLR structure in Figure \ref{fig:c1d2b2}: it is actually a corner PLR structure, but the numerical ranks of blocks are always lower than $R_\text{max}=4$, so the matrix-vector multiplication of that submatrix will be even faster than for a generic corner PLR structure, as we knew from Figure \ref{fig:compscomp2}.
\subsection{Numerical results}
\begin{table}[ht]
\caption{PLR compression results, $c\equiv 1$}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$4.2126e-01$} &{$6.5938e-01$} & $115$ \\ \hline
{$2$} & {$2$} & {$4.2004e-02$} &{$7.3655e-02$} & $93$ \\ \hline
{$2$} & {$2$} & {$1.2517e-03$} &{$2.4232e-03$} & $55$ \\ \hline
{$4$} & {$2$} & {$1.1210e-04$} &{$4.0003e-04$} & $42$ \\ \hline
{$8$} & {$4$} & {$1.0794e-05$} &{$1.4305e-05$} & $32$ \\ \hline
{$8$} & {$4$} & {$6.5496e-07$} &{$2.1741e-06$} & $29$ \\ \hline
\end{tabular}
\end{center}
\label{c1solveplr}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the Gaussian waveguide.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,2)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up \\ \hline
{$2$} & {$2$} & {$2$} & {$6.6034e-02$} &{$1.4449e-01$} & $105$ \\ \hline
{$2$} & {$2$} & {$2$} & {$1.8292e-02$} &{$7.4342e-02$} & $74$ \\ \hline
{$2$} & {$2$} & {$2$} & {$2.0948e-03$} &{$1.1014e-02$} & $59$ \\ \hline
{$4$} & {$4$} & {$2$} & {$2.3740e-04$} &{$1.6023e-03$} & $47$ \\ \hline
{$8$} & {$4$} & {$4$} & {$1.5369e-05$} &{$8.4841e-05$} & $36$ \\ \hline
{$8$} & {$8$} & {$4$} & {$3.4148e-06$} &{$1.7788e-05$} & $30$ \\ \hline
\end{tabular}
\end{center}
\label{c3solveplr}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the Gaussian slow disk.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$9.2307e-02$} &{$1.2296e+00$} & $97$ \\ \hline
{$2$} & {$2$} & {$8.1442e-03$} &{$4.7922e-02$} & $69$ \\ \hline
{$4$} & {$2$} & {$1.2981e-03$} &{$3.3540e-02$} & $44$ \\ \hline
{$4$} & {$2$} & {$1.1680e-04$} &{$1.0879e-03$} & $39$ \\ \hline
{$4$} & {$2$} & {$2.5651e-05$} &{$1.4303e-04$} & $37$ \\ \hline
\end{tabular}
\end{center}
\label{c5solveplr}
\end{table}
We have compressed probed DtN maps and used them in a Helmholtz solver with success. We have used the same probed matrices as in the previous chapter, and so we refer the reader to Tables \ref{FDPMLerr}, \ref{c1solve}, \ref{c3solve}, \ref{c5solve}, \ref{c16solve}, \ref{c18solve}, \ref{c33solve} for all the parameters we used then.
\begin{table}[ht]
\caption{PLR compression results, $c$ is the vertical fault, sources on the left and on the right.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,2)$ & $\frac{\|D-\overline{D}\|_F}{\|D\|_F}$ & $\frac{\|u-\overline{u}\|_F}{\|u\|_F}$, left & $\frac{\|u-\overline{u}\|_F}{\|u\|_F}$, right & Speed-up\\ \hline
{$2$} & {$2$} & {$2.6972e-01$} &{$5.8907e-01$} &{$4.6217e-01$} & $105$\\ \hline
{$2$} & {$2$} & {$9.0861e-03$} &{$3.9888e-02$} &{$2.5051e-02$} & $67$ \\ \hline
{$1$} & {$4$} & {$8.7171e-04$} &{$3.4377e-03$} &{$2.4279e-03$} & $53$\\ \hline
\end{tabular}
\end{center}
\label{c16solveplrl}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the diagonal fault.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,2)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$1.4281e-01$} &{$5.3553e-01$} & $98$ \\ \hline
{$2$} & {$2$} & {$1.9108e-02$} &{$7.8969e-02$} & $76$ \\ \hline
{$2$} & {$4$} & {$2.5602e-03$} &{$8.7235e-03$} & $49$ \\ \hline
\end{tabular}
\end{center}
\label{c18solveplr}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the periodic medium.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$1.2967e-01$} &{$2.1162e-01$} & $32$ \\ \hline
{$2$} & {$2$} & {$3.0606e-02$} &{$5.9562e-02$} & $22$ \\ \hline
{$8$} & {$2$} & {$9.0682e-03$} &{$2.6485e-02$} & $11$ \\ \hline
\end{tabular}
\end{center}
\label{c33solveplr}
\end{table}
We now present results for PLR compression in a Helmholtz solver in Tables \ref{c1solveplr}, \ref{c3solveplr}, \ref{c5solveplr}, \ref{c16solveplrl}, \ref{c18solveplr}, \ref{c33solveplr}. For each medium, we show the chosen $R_\text{max}$ of the most important (in norm) submatrices. For all other submatrices, $R_\text{max} \leq 2$. We then show the relative norm of the error between the PLR compression $\overline{D}$ and the actual DtN map $D$. We also show the relative error between the solution $\overline{u}$ computed using $\overline{D}$ in the Helmholtz solver and the actual solution $u$ using $D$ as the DtN map. Finally, we show the ``speed-up'' obtained from taking the ratio of the complexity of using a dense matrix-vector product for $\tilde{D}$, which would be of about $2\times(4N)^2=32N^2$, to the total complexity of a matrix-vector product of $\overline{D}$. This ratio tells us how much faster than a dense product the PLR compression is. We see that this ratio ranges from about 30 to 115 for all media except the periodic medium, with a smaller ratio associated with asking for a higher accuracy, as expected. The speed-up ratio is between 10 and 30 for the periodic medium, but as we recall the value of $N$ here is smaller: $N=320$. Larger values of $N$ should lead to a better speed-up.
\section{The half-space DtN map is separable and low rank: theorem}\label{sec:septheo}
As we have mentioned before, the Green's function for the half-space Helmholtz equation is separable and low rank \cite{Hsweep}. We investigate here the half-space DtN map kernel, which is related to the Green's function through two derivatives, as we saw in section \ref{sec:hsG}, and we obtain a similar result to that of \cite{Hsweep}. We state the result here, and prove it in the next subsections. We then end this section with a discussion on generalizing our theorem for heterogeneous media.
Let $\mathbf{x}=(x,0)$ and $\mathbf{y}=(y,0)$ be points along the half-space boundary, $x\neq y$. Recall the Dirichlet-to-Neumann map kernel for the half-space Helmholtz equation \eqref{eq:hsHE} with homogeneous medium $c \equiv 1$ and $k=\omega/c$ is:
\begin{equation}\label{eq:hsDtN}
K(|x-y|)= \frac{ik^2}{2} \frac{H_1^{(1)}(k|x-y|)}{k|x-y|}.
\end{equation}
\begin{theorem}\label{theo:sep}
Let $0 <\epsilon \leq 1/2$, and $0<r_0<1$, $r_0=\Theta(1/k)$. There exists an integer $J$, functions $\left\{\Phi_j,\chi_j\right\}_{j=1}^J$ and a number $C$ such that we can approximate $K(|x-y|)$ for $r_0\leq |x-y| \leq 1$ with a short sum of smooth separated functions:
\begin{equation}\label{eq:sep}
K(|x-y|)=\sum_{j=1}^J \Phi_j(x)\chi_j(y) + E(x,y)
\end{equation}
where $|E(x,y)| \leq \epsilon$, and $J \leq C \left(\log k \, \max(|\log\epsilon|,\log k)\right)^2$ with $C$ which does not depend on $k$ or $\epsilon$. $C$ does depend weakly on $r_0$ through the constant quantity $r_0k$; for more details see Remark \ref{rem:C_0}.
\end{theorem}
To prove this, we shall first consider the Hankel function $H_1^{(1)}$ in \eqref{eq:hsDtN}, and see that it is separable and low rank. Then, we shall look at the factor of $1/k|x-y|$ in \eqref{eq:hsDtN}, and see the need for using quadratures on a dyadic partition of the interval $\left[r_0,1\right]$ in order to prove that this factor is also separable and low rank. Finally, we prove Theorem \ref{theo:sep} and make a few remarks.
\subsection{Treating the Hankel function}
From Lemmas 7 and 8 of \cite{mr} we know that $H_1^{(1)}(k|x-y|)$ is separable and low-rank as a function of $x$ and $y$. Looking in particular at Lemma 7 from \cite{mr}, we make a slight modification of the proof to obtain the following lemma.
\begin{lemma}\label{lem:7n8}
Slight modification of Lemmas 7 and 8, \cite{mr}. Let $0<\epsilon \leq 1/2$, $k>0$ and $r_0>0$, $r_0=\Theta(1/k)$. Let $|x-y|>r_0$. Then there exists an integer $J_1$, a number $C_1$, and functions $\left\{\Phi^{(1)}_j,\chi^{(1)}_j\right\}_{j=1}^{J_1}$ such that
\begin{equation}\label{eq:h1}
H_1^{(1)}(k|x-y|)=\sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) + E^{(1)}_{J_1}(x,y),
\end{equation}
where $|E^{(1)}_{J_1}(x,y)| \leq \epsilon$, and $J_1 \leq C_1 \log k |\log\epsilon|$ with $C_1$ which does not depend on $k$, or $\epsilon$. $C_1$ does depend weakly on $r_0$ through the quantity $r_0k$. Again, see Remark \ref{rem:C_0}.
\end{lemma}
\subsection{Treating the $1/kr$ factor}
We now show that the $1/kr$ factor is also separable and low rank\footnote{A different technique than the one presented here would be to expand $1/|x-y|$ using the Taylor expansion for $x>y>0$: $$\frac{1}{x} \frac{1}{1-y/x} = \frac{1}{x} \left(1+\frac{y}{x}+\left(\frac{y}{x}\right)^2 + \ldots \right).$$ However, making an error of $\varepsilon$ requires $k|\log\varepsilon|$ terms in the expansion because the error is large when $y/x \approx 1$, that is, when $y\approx x$ or $r\approx r_0$.}. Notice:
\begin{equation}\label{eq:1okr}
\int_0^\infty e^{-krt} dt = \left. \frac{e^{-krt}}{-kr} \right|_0^\infty = 0 + \frac{e^{-kr\cdot 0}}{kr} = \frac{1}{kr}
\end{equation}
and
\begin{equation}\label{eq:split}
\int_0^\infty e^{-krt} dt = \int_0^{T} e^{-krt} dt + \int_{T}^\infty e^{-krt} dt,
\end{equation}
where
\[ \int_{T}^\infty e^{-krt} dt = \frac{e^{-krT}}{kr}. \]
Equation \eqref{eq:1okr} means we can write $1/kr$ as an integral, and equation \eqref{eq:split} means we can split this integral in two, one part being a definite integral, the other indefinite. But we can choose $T$ so that the indefinite part is smaller than our error tolerance:
\[ \left| \int_{T}^\infty e^{-krt} dt \right| \leq \epsilon . \]
For this, we consider that $\frac{e^{-krT}}{kr} \leq \frac{e^{-krT}}{C_0} \leq \epsilon $ and so we need $krT \geq |\log C_0| + |\log \epsilon |$ or $T \geq (|\log C_0| + |\log \epsilon|)/C_0$ or
\begin{equation}\label{eq:T}
T = O(| \log \epsilon |).
\end{equation}
If we assume \eqref{eq:T} holds, then we can use a Gaussian quadrature to obtain a low-rank, separable expansion of $1/kr$:
\[ \frac{1}{kr} \approx \int_0^{T} e^{-krt} dt \approx \sum_{p=1}^n w_p e^{-krt_p} \]
where the $w_p$ are the Gaussian quadrature weights and the $t_p$ are the quadrature points. To determine $n$, the number of quadrature weights and points we need for an accuracy of order $\epsilon$, we can use the following Gaussian quadrature error estimate \cite{na} on the interval $[a,b]$:
\begin{equation}\label{eq:quaderr}
\frac{(b-a)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi)
\end{equation}
where $f$ is the integrand, and $\xi$ is within the bounds of integration: $f(\xi)=e^{-kr\xi}$ and $a \leq \xi \leq b$ where here $a=0$ and $b=T$. Clearly
\[ f^{(2n)}(\xi) = (-kr)^{2n}e^{-kr\xi}. \]
The worst case will be when $\xi=0$ and $r=1$ so
\[\max_{0\leq \xi \leq T}\left| f^{(2n)}(\xi) \right|= (k)^{2n}.\]
We can put this back in the error estimate \eqref{eq:quaderr}, using Stirling's approximation \cite{ans}
$$ \sqrt{2\pi n}\ n^n e^{-n} \leq n! \leq e \sqrt{n} \ n^n e^{-n}, $$
for the factorials, to get:
\begin{eqnarray*}
\left| \frac{(b-a)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi) \right| &\leq& \frac{T^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} (k)^{2n} \\
&\leq& \frac{T^{2n+1} e^4(n)^2 (n)^{4n} e^{6n}} {(2n+1)e^{4n}(2\pi 2n)^{3/2} (2n)^{6n} } (k)^{2n} \\
&\leq& \frac{T^{2n+1} e^4 (n)^{1/2} (ke)^{2n}} {(2n+1) \pi^{3/2} (2)^{6n+3} n^{2n}} \\
&\leq& \frac{Te^4}{16\sqrt{n} \pi^{3/2}} \left( \frac{Tke}{8n}\right)^{2n} .
\end{eqnarray*}
This is problematic because in order for the quadrature scheme to converge, we are forced to have $n > Tke/8 \approx k |\log \epsilon|$, which is prohibitively large. This can be understood as the difficulty of accurately integrating a function with large higher derivatives, such as this sharp exponential, over a large domain such as the interval $[0,T]$ with $T=O(|\log{\epsilon}|)$. To solve this problem, we make a dyadic partition of the $[0,T]$ interval in $O(\log{k})$ subintervals, each of which will require $O(|\log{\epsilon}|)$ quadrature points.
Before we get to the details, let us redo the above error analysis for a dyadic interval $[a,2a]$. The maximum of $\left| f^{(2n)}(\xi) \right|= (kr)^{2n}e^{-kr\xi}$ as a function of $kr$ occurs when $kr=2n/\xi$, and the maximum of that as a function of $\xi$ is when $\xi=a$, so the maximum is $\left| f^{(2n)}(a) \right|= (2n/a)^{2n}e^{-(2n/a)\cdot a}=(2n/a)^{2n}e^{-2n}$. We can put this back in the error estimate \eqref{eq:quaderr} to get%
\begin{eqnarray*}
\left| \frac{(2a-a)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi) \right| &\leq& \frac{a^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} (2n/a)^{2n} e^{-2n} \\
&\leq& \frac{a e^4(n)^2 (n)^{4n} e^{6n}} {(2n+1)e^{4n}(2\pi 2n)^{3/2} (2n)^{6n} } (2n)^{2n} e^{-2n} \\
&\leq& \frac{ 1.22 a (n)^{1/2} } {(2n+1) (2)^{4n} } \\
&\leq& \frac{a } {\sqrt{n} (2)^{4n} } .
\end{eqnarray*}
To have this error less than $\epsilon$, we thus need
\begin{equation}\label{eq:n}
4n \log 2 \geq |\log\epsilon | + \log{a/\sqrt{n}},
\end{equation}
with $a \leq T \approx |\log\epsilon |$, and we see that in fact
\begin{equation}\label{eq:nval}
n=|\log \epsilon|
\end{equation}
will work.
\begin{remark}
We found the maximum of $\left| f^{(2n)}(\xi) \right|= (kr)^{2n}e^{-kr\xi}$ to be when both $kr=2n/\xi$ and $\xi=a$. However, we need $kr\leq k$, so that we need $a\geq 2n/k=2|\log\varepsilon|/k$. %
In the next subsection, we make sure that $a \geq 2|\log\varepsilon|/k$ by having the lower endpoint of the interval $I_1$ equal to $2|\log\varepsilon|/k$.%
\end{remark}
\subsection{Dyadic interval for the Gaussian quadrature}
Now we are ready to get into the details of how we partition the interval. The subintervals are:
\begin{eqnarray*}
I_0&=&\left[0,\frac{2|\log\epsilon|}{k}\right] \\
I_j&=&\left[\frac{2^{j}|\log\epsilon|}{k},\frac{2^{j+1}|\log\epsilon|}{k}\right], \qquad j=1, \dots , M-1
\end{eqnarray*}
where $T=\frac{2^M|\log\epsilon|}{k}=O(|\log{\epsilon}|)$ which implies that
\begin{equation}\label{eq:M}
M=O(\log{k}).
\end{equation}
Then, for each interval $I_j$ with $j \geq 1$, we apply a Gaussian quadrature as explained above, and we need $n=|\log \epsilon|$ quadrature points to satisfy the error tolerance of $\epsilon$.
As for interval $I_0$, we return to the Gaussian quadrature error analysis, where this time again $k^{2n}$ is the maximum of $\left| f^{(2n)}(\xi) \right|$ for $\xi \in I_0$. Thus we have that the quadrature error is:
\begin{eqnarray*}
\left| \frac{(2|\log\epsilon|/k-0)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi) \right| &\leq& \frac{(2|\log\epsilon|/k)^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} k^{2n} \\
&\leq& \frac{(2|\log\epsilon|/k)^{2n+1} e^4 (n)^2 (n)^{4n} e^{6n}} {(2n+1)e^{4n}(2\pi 2n)^{3/2} (2n)^{6n} } k^{2n} \\
&\leq& \frac{2^{2n+1}|\log\epsilon|^{2n+1} e^{2n} e^4 n^{1/2} } {k(2n+1) (2)^{6n+3} n^{2n}} \\
&\leq& \frac{2|\log\epsilon| e^4 \sqrt{n}}{8k(2n+1)} \left( \frac{2|\log\epsilon| e}{8n} \right)^{2n}
\end{eqnarray*}
and $n=O(|\log\epsilon|)$ will satisfy the error tolerance.
To recap, we have approximated the function $1/kr$, as a function of $r$, by a low-rank separable expansion with error $\epsilon$:
\[ \frac{1}{k|x-y|} = \sum_{j=1}^{J_2} w_j e^{-k|x-y|t_j} + E^{(2)}_{J_2}(x,y),\]
where $J_2=O(\log k |\log\epsilon|)$ (again, from using $O(\log k)$ intervals with $O(|\log\epsilon|)$ quadrature points on each interval), $C_0 \leq k|x-y| \leq k$, and $|E^{(2)}_{J_2}(x,y)|<\epsilon$.
Clearly this expansion is separable: depending on the sign of $(x-y)$, we have $e^{-k|x-y|t_j}=e^{-kxt_j}e^{kyt_j}$ or $e^{-k|x-y|t_j}=e^{kxt_j}e^{-kyt_j}$. Either way, the exponential has been expressed as a product of a function of $x$ only and a function of $y$ only. Thus we have the following lemma.
\begin{lemma}\label{lem:1overkr}
Let $0<\epsilon $, $k>0$ and $r_0>0$, $r_0=\Theta(1/k)$. Let $|x-y|>r_0$. Then there exists an integer $J_2$, a number $C_2$, and functions $\left\{\Phi^{(2)}_j,\chi^{(2)}_j\right\}_{j=1}^{J_2}$ such that
\begin{equation}\label{eq:kr}
\frac{1}{k|x-y|}=\sum_{j=1}^{J_2} \Phi^{(2)}_j(x)\chi^{(2)}_j(y) + E^{(2)}_{J_2}(x,y),
\end{equation}
where $|E^{(2)}_{J_2}(x,y)| \leq \epsilon$, and $J_2 \leq C_2 \log k |\log\epsilon|$ with $C_2$ which does not depend on $k$, or $\epsilon$. $C_2$ does depend weakly on $r_0$ through the constant quantity $r_0k$. Again, see Remark \ref{rem:C_0}.
\end{lemma}
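As a quick numerical sanity check of Lemma \ref{lem:1overkr}, the following Python sketch assembles the dyadic Gauss--Legendre discretization of $\int_0^T e^{-krt}\,dt$ described above and measures the resulting error in $1/kr$. The constants (points per subinterval, endpoints of the $I_j$) follow the proof but are not tuned.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss

def inv_kr_quadrature(k, eps):
    # Nodes t_j and weights w_j with 1/(kr) ~ sum_j w_j * exp(-k r t_j).
    n = int(np.ceil(abs(np.log(eps))))     # O(|log eps|) points per interval
    T = abs(np.log(eps))                   # truncation point, O(|log eps|)
    edges = [0.0, 2 * abs(np.log(eps)) / k]
    while edges[-1] < T:                   # dyadic intervals I_0, I_1, ...
        edges.append(2 * edges[-1])
    x, w = leggauss(n)                     # Gauss-Legendre on [-1, 1]
    ts = [0.5 * (b - a) * x + 0.5 * (b + a) for a, b in zip(edges, edges[1:])]
    ws = [0.5 * (b - a) * w for a, b in zip(edges, edges[1:])]
    return np.concatenate(ts), np.concatenate(ws)

k, eps = 256.0, 1e-6
t, w = inv_kr_quadrature(k, eps)
r = np.linspace(4.0 / k, 1.0, 1000)        # r0 = Theta(1/k), here k*r0 = 4
err = np.exp(-k * np.outer(r, t)) @ w - 1.0 / (k * r)
print(len(t), np.max(np.abs(err)))   # O(log k |log eps|) terms, error O(eps)
\end{verbatim}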
\subsection{Finalizing the proof}
We now come back to the DtN map kernel $K$ in \eqref{eq:hsDtN}. %
Using Lemmas \ref{lem:7n8} and \ref{lem:1overkr}, we can write each factor of $K$ in its separable expansion:
\begin{eqnarray*}
\frac{K(|x-y|)}{ik^2/2}&=&\ H_1^{(1)}(k|x-y|) \ \frac{1}{k|x-y|} \\
&=&\left( \sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) + E^{(1)}_{J_1}(x,y) \right) \left( \sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y) + E^{(2)}_{J_2}(x,y) \right)\\
&=& \frac{K_{(J_1,J_2)}(|x-y|)}{ik^2/2} + E^{(2)}_{J_2}(x,y)\sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) + E^{(1)}_{J_1}(x,y) \sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y) \\
&+& E^{(1)}_{J_1}(x,y) E^{(2)}_{J_2}(x,y)
\end{eqnarray*}
where
\begin{eqnarray}\label{eq:hssepDtN}
K_{(J_1,J_2)}(|x-y|)&=&\frac{ik^2}{2}\left( \sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) \right) \left( \sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y) \right)\\
&=& \frac{ik^2}{2}\sum_{j=1}^{J_1 J_2} \Phi_j(x)\chi_j(y) .
\end{eqnarray}
It follows that
\begin{equation}\label{eq:errDs}
\left| \frac{K-K_{(J_1,J_2)}}{ik^2/2} \right| \leq \left|E^{(2)}_{J_2}\right| \left| \sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y)\right| + \left|E^{(1)}_{J_1}\right| \left|\sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y)\right|+ \left|E^{(1)}_{J_1}\right| \left|E^{(2)}_{J_2}\right|
\end{equation}
Now clearly
\[ \max_{C_0 \leq kr \leq k} \frac{1}{kr} = \frac{1}{C_0}. \]
We also have from Lemma 3 of \cite{flatland} that
\[ \left| e^{-ikr}H_1^{(1)}(kr) \right| \leq C(kr)^{-1/2}, \qquad kr \geq C_0 \]
for some constant $C$ which does not depend on $k$. Then, we have that
\[\max_{kr \geq C_0} \left| H_1^{(1)}(kr) \right| \leq \frac{C}{C_0^{1/2}}. \]
What we have shown is that the quantities $1/kr$ and $H_1^{(1)}(kr)$ are bounded by some constant, call it $\tilde{C}$, for the range of $kr$ we are interested in, that is, $C_0 \leq kr \leq k$. We can now go back to \eqref{eq:errDs}, using our approximations from Lemmas \ref{lem:7n8} and \ref{lem:1overkr} which make an absolute error of no more than $\epsilon$, and see that
\begin{equation*}
\left| \frac{K-K_{(J_1,J_2)}}{ik^2/2} \right| \leq 2\epsilon (\tilde{C}+\epsilon) + \epsilon^2
\end{equation*}
or
\begin{equation}\label{eq:errDsrel}
\left| \frac{K-K_{(J_1,J_2)}}{ik^2/2} \right|=O(\epsilon).
\end{equation}
Note that the expansion of $K_{(J_1,J_2)}$ in \eqref{eq:hssepDtN} now contains $J_1 J_2 = O\left((\log k |\log\epsilon| )^2\right)$ terms. In order to obtain an absolute error bound, we need to multiply through with $ik^2/2$ in \eqref{eq:errDsrel}. Replacing $\epsilon$ by $\epsilon/k^2$, we have now finally shown that
\begin{equation*}
\left| K-K_{(J_1,J_2)}\right| \leq \epsilon
\end{equation*}
with the expansion of $K_{(J_1,J_2)}$ in \eqref{eq:hssepDtN} containing
\[J=J_1 J_2 =O\left( (\log k (|\log\epsilon|+2\log k) )^2 \right)\]
terms. We thus conclude that the DtN map is low-rank and separable away from the diagonal, with the prescriptions of Theorem \ref{theo:sep}:
\begin{equation}%
K(|x-y|)= \frac{ik^2}{2} \frac{H_1^{(1)}(k|x-y|)}{k|x-y|}= \sum_{j=1}^J \Phi_j(x)\chi_j(y) + E(x,y)
\end{equation}
where $|E(x,y)| \leq \epsilon$ for $C_0 \leq kr \leq k$ and there is a number $C$ which does not depend on $k$ or $\epsilon$ such that $J \leq C (\log k \max(|\log\epsilon|,\log k))^2$.
\begin{remark}\label{rem:C_0}
We can understand the number $J$ of ranks as made of two terms, one which is $(C_1\log k) (C_2|\log\epsilon|)$, the other $(C_1\log k)(\log C_3 k)$. The numbers $C_1$ and $C_3$, but not $C_2$, also weakly depend on the separation $C_0$, in the sense that the larger the separation is, the smaller those numbers are. First, we note from the discussion before equation $\eqref{eq:T}$ that $T$ is smaller when $C_0$ is bigger. Then, a smaller $C_1$ comes from the discussion before equation $\eqref{eq:M}$. The fact that $C_2$ does \emph{not} depend on $C_0$ comes from the discussion after equation \eqref{eq:n}. As for $C_3$, we can understand its \emph{very weak} dependence on $C_0$ by looking at equation \eqref{eq:n} and plugging in $a=T$, remembering how $T$ depends on $C_0$. Thus both terms in $J$ should behave somewhat similarly as $C_0$ changes. Physically, we know a greater separation means we are farther away from the singularity of the DtN map, and so we expect the map to be smoother there, and hence have lower rank.
\end{remark}
\begin{remark}\label{rem:highpow}
In our numerical verifications, we have not noticed the square power in $J$. Rather, we observe in general that $J \sim \log k |\log\epsilon|$. The only exceptions to this behavior that we observed were for larger $\epsilon$, such as $\epsilon=1/10$ or sometimes $1/100$, where $J \sim \log k \log k$. From Remark \ref{rem:C_0}, we know $J$ is made up of two terms, and it makes sense that the term $\sim \log k \log k$ might become larger than the term $\sim \log k |\log \varepsilon|$ when $\varepsilon$ is large. %
\end{remark}
\begin{remark}\label{rem:h}
We also note that in our numerical verifications, we use $r_0$ as small as $h$, which is smaller than the $r_0\sim 1/k \sim h^{2/3}$ we prove the theorem with. If we used $r_0\sim h$ in the theorem, this would mean $C_0\sim N^{-1/3}<1$. By Remark \ref{rem:C_0}, this would affect $J$: the power of the $|\log\epsilon |$ factor would go from 2 to 3. Again, we do not notice such a higher power in the numerical simulations.
\end{remark}
\subsection{The numerical low-rank property of the DtN map kernel for heterogeneous media}
We would like to know as well if we can expect the half-space DtN map kernel to be numerically low-rank in heterogeneous media. We saw in section \ref{sec:BasisPf} how the half-space DtN map, in constant medium, consists of the amplitude $H(r)$, singular at $r=0$, multiplied by the complex exponential $e^{ikr}$. Because of the geometrical optics expansion \eqref{eq:geoopts} of the Green's function $G$ for the Helmholtz equation free-space problem in heterogeneous media, we expect $G$ to have an amplitude $A(\mathbf{x},\mathbf{y})$, which is singular at $\mathbf{x}=\mathbf{y}$, multiplied by a complex exponential with a phase corresponding to the traveltime between points $\mathbf{x}$ and $\mathbf{y}$: $e^{i\omega \tau(\mathbf{x},\mathbf{y})}$. We can expect to be able to treat the amplitude in the same way as we did before, and approximate it away from the singularity with a low-rank separable expansion. However, the complex exponential is harder to analyze because of the phase $\tau(\mathbf{x},\mathbf{y})$, which is not so simple as $|\mathbf{x}-\mathbf{y}|$.
However, a result of \cite{butterflyFIO} still allows us to find a separable low-rank approximation of a function such as $e^{i\omega \tau(\mathbf{x},\mathbf{y})}$. We refer the reader to Theorem 3.1 of \cite{butterflyFIO} for the details of the proof, and simply note here the main result. Let $X$, $Y$ be bounded subsets of $\mathbf{R}^2$ such that we only consider $\mathbf{x} \in X$ and $\mathbf{y} \in Y$. The width of $X$ (or $Y$) is defined as the maximal Euclidean distance between any two points in that set. Then Theorem 3.1 of \cite{butterflyFIO} states that $e^{i\omega \tau(\mathbf{x},\mathbf{y})}$, $\mathbf{x} \in X$ and $\mathbf{y} \in Y$, is numerically low-rank with rank $O(|\log \varepsilon|^4)$, given that the product of the widths of $X$ and $Y$ is less than $1/k$.
This translates into a restriction on how large the off-diagonal blocks of the matrix $D$ can be while still being low-rank. Since we use square blocks in the PLR compression algorithm, we expect blocks to have to remain smaller than $1/\sqrt{k}$, equivalent to $N/\sqrt{k}$ points, in variable media. If $N^{2/3} \sim k$ as we have in this thesis because of the pollution effect, this translates into a maximum expected number of blocks of $N^{1/3}$. If we kept instead $N \sim k$, then the maximal number of blocks would be $\sqrt{N}$.
This is why, again, using PLR matrices for compressing the DtN map makes more sense than using hierarchical matrices: the added flexibility means blocks will be divided only where needed, in other words only where the traveltime $\tau$ requires blocks to be smaller in order to have low ranks. And as we saw in section \ref{sec:usingplr}, where we presented our numerical results, ranks indeed remain very small, between 2 and 8, for off-diagonal blocks of the submatrices of the exterior DtN map, even in heterogeneous media.
\section{The half-space DtN map is separable and low rank: numerical verification}\label{sec:sepnum}
We first compute the half-space DtN map for various $k \sim N^{2/3}$, which ensures a constant error from the finite difference discretization (FD error) as we saw in section \ref{sec:compABC}. We also choose a pPML width consistent with the FD error level. Then, we compute the maximum off-diagonal ranks for various fixed separations from the diagonal, that is, various $r_0$ such that $r \geq r_0$. To compute the ranks of a block, we fix a tolerance $\epsilon$, find the Singular Value Decomposition of that block, and discard all singular values smaller than that tolerance. The number of remaining singular values is the numerical rank of that block with tolerance $\epsilon$ (the error we thus make, in spectral norm, is not larger than $\epsilon$). Then, the maximum off-diagonal rank for a given separation $r_0$ is the maximum rank of any block whose entries correspond to $r\geq r_0$. %
Hence we consider all blocks that have $|i-j| \geq r_0/h$, or $i-j \geq r_0/h$ with $i>j$ since the DtN map is symmetric (and so is its numerical realization, up to machine precision).
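The rank computation just described is straightforward; a sketch follows, where \texttt{sep} stands for $r_0/h$ and the block size is an assumption of ours.
\begin{verbatim}
import numpy as np

def eps_rank(B, eps):
    # Number of singular values of B larger than the tolerance eps.
    return int(np.sum(np.linalg.svd(B, compute_uv=False) > eps))

def max_offdiag_rank(D, eps, sep, block=32):
    # Max eps-rank over square blocks of D below the diagonal whose entries
    # all satisfy i - j >= sep (D is symmetric, so this side suffices).
    best = 0
    for i in range(0, D.shape[0] - block + 1, block):
        for j in range(0, i - sep - block + 2, block):
            best = max(best, eps_rank(D[i:i+block, j:j+block], eps))
    return best
\end{verbatim}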
\subsection{Slow disk and vertical fault}
Figures \ref{c5-vsN-r1}, \ref{c5-vsN-r4}, \ref{c5-vsEps-r1} and \ref{c5-vsEps-r4} show the relationship between the ranks and $N$ or $\varepsilon$ for the slow disk, an FD error of $10^{-3}$, and separations of $r_0=h$ and $r_0=4h$.
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks with $N$ for the slow disk, various $\epsilon$. FD error of $10^{-3}$, $r_0=h$.}
\label{c5-vsN-r1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsN_r4}.pdf}
\caption{Maximum off-diagonal ranks with $N$ for the slow disk, various $\epsilon$. FD error of $10^{-3}$, $r_0=4h$.}
\label{c5-vsN-r4}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks with $\varepsilon$ for the slow disk, various $N$. FD error of $10^{-3}$, $r_0=h$.}
\label{c5-vsEps-r1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsEps_r4}.pdf}
\caption{Maximum off-diagonal ranks with $\varepsilon$ for the slow disk, various $N$. FD error of $10^{-3}$, $r_0=4h$.}
\label{c5-vsEps-r4}
\end{minipage}
\end{figure}
We expect ranks to be slightly smaller for a larger separation $r_0$ (hence larger $C_0$) because of Remark \ref{rem:C_0}. This is indeed the case in our numerical simulations, as we can see by comparing Figures \ref{c5-vsN-r1} ($r_0=h$) and \ref{c5-vsN-r4} ($r_0=4h$), or Figures \ref{c5-vsEps-r1} ($r_0=h$) and \ref{c5-vsEps-r4} ($r_0=4h$). We can clearly see also how the maximum ranks behave as in the previous theorem, except for the missing square power in $J$, as alluded to in Remark \ref{rem:highpow}: they vary logarithmically with $k$ (or $N$) when the tolerance $\epsilon$ is fixed. We expect the slope in a graph of the ranks versus $\log N$ to increase slowly as $\epsilon$ becomes smaller, and we see that in fact the slope barely does (from slightly smaller than $2$ to slightly larger than $2$) as $\epsilon$ goes from $10^{-1}$ to $10^{-6}$. Similarly, when we fix $N$, we expect the ranks to grow logarithmically with $1/\epsilon$, and this is the case. Once again, the slope of the graph with a logarithmic scale for $1/\epsilon$ grows, but only from $1$ to $2$ or so, as $N$ goes from $128$ to $2048$.
The off-diagonal ranks of the DtN map for the slow disk behave very similarly to the above for an FD error of $10^{-2}$, and also for various other separations $r_0$. The same is true for the vertical fault, and so we do not show those results.
\subsection{Constant medium, waveguide, diagonal fault}
As for the constant medium, waveguide and diagonal fault, it appears that the term $O(\log^2 k)$ we expect in the size of the ranks is larger than the term $O(\log k |\log\epsilon|)$, especially when the FD error is $10^{-2}$. This was mentioned in Remark \ref{rem:highpow}. As we can see in Figure \ref{c18-vsN-r1} for the diagonal fault, the dependence of the ranks with $\log N$ seems almost quadratic, not linear. This can also be seen in Figure \ref{c18-vsEps-r1}: here we still see a linear dependence of the ranks with $\log\epsilon$, but we can see that the ranks jump up more and more between different $N$, as $N$ grows, than they do for the slow disk for example (compare to Figure \ref{c5-vsEps-r1}). %
This phenomenon disappears for a smaller FD error (Figures \ref{c18-vsN-r1-FDm3}, \ref{c18-vsEps-r1-FDm3}).%
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks with $N$ for the diagonal fault, various $\epsilon$. FD error of $10^{-2}$, $r_0=h$. }
\label{c18-vsN-r1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks with $\varepsilon$ for the diagonal fault, various $N$. FD error of $10^{-2}$, $r_0=h$.}
\label{c18-vsEps-r1}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.071429_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of $N$, various tolerances $\epsilon$. The separation is $r_0=h$. FD error of $10^{-3}$.}
\label{c18-vsN-r1-FDm3}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.071429_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of the tolerance $\epsilon$, various $N$. The separation is $r_0=h$. FD error of $10^{-3}$.}
\label{c18-vsEps-r1-FDm3}
\end{minipage}
\end{figure}
Finally, we also notice that the term $O(\log^2 k)$ seems to remain important compared to the term $O(\log k |\log\epsilon|)$ as the separation $r_0$ (or $C_0$) grows, as is somewhat expected from Remark \ref{rem:C_0}. This can be seen by comparing Figures \ref{c18-vsN-r8} and \ref{c18-vsN-r1}, or Figures \ref{c18-vsEps-r8} and \ref{c18-vsEps-r1}.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsN_r8}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of $N$, various tolerances $\epsilon$. The separation is $r_0=8h$. FD error of $10^{-2}$.}
\label{c18-vsN-r8}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsEps_r8}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of the tolerance $\epsilon$, various $N$. The separation is $r_0=8h$. FD error of $10^{-2}$.}
\label{c18-vsEps-r8}
\end{minipage}
\end{figure}
\subsection{Focusing and defocusing media}
We have also tested a smooth defocusing medium, that is, a medium where the value of $c$ decreases away from the half-space boundary, tending to a small value far away from the boundary. The equation we have used is $c(x,y)=1+\frac{1}{\pi}\arctan(4(y-1/2))$. This means that as $y \rightarrow \infty$, $c \rightarrow 3/2$ and that as $y \rightarrow -\infty$, $c \rightarrow 1/2$. Choosing the half-space to be $y<0$, we see that $c$ decreases away from $y=0$ into the negative $y$'s: this is a defocusing medium. We expect the waves in this case to never come back, and so we expect the off-diagonal ranks of the DtN map to remain small, just as in the constant medium case, and this is indeed what happens. We could not see any significant difference between the defocusing medium and the constant medium in terms of off-diagonal ranks.
We have also looked at a focusing medium, that is, one in which $c$ increases away from the interface. This forces waves to come back toward the interface. With the same medium $c(x,y)=1+\frac{1}{\pi}\arctan(4(y-1/2))$ as above, but choosing now $y>1$ as our half-space, we see that $c$ increases away from $y=1$ into the large positive $y$'s. This is a focusing medium. We have noticed that the off-diagonal ranks of the DtN map for this medium are the same or barely larger than for the constant medium.
This might only mean that the medium we chose did not have many returning waves. A more interesting medium is the following:
\begin{equation}\label{eq:focus}
c(x,y)=1/2+|y-1/2|.
\end{equation}
This linear $c$ has a first derivative bounded away from 0. Of course, this means that solving the Helmholtz equation in this case is much harder, and in particular, the pPML layer needs to be made thicker than for other media. Still, we notice that the ranks are very similar to the previous cases, as we can see in Figures \ref{c8-vsN-r1-FD2}, \ref{c8-vsEps-r1-FD2}, \ref{c8-vsN-r1-FD3}, \ref{c8-vsEps-r1-FD3}.
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.1_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of $N$, various tolerances $\epsilon$. Separation is $r_0=h$, FD error of $10^{-2}$.}
\label{c8-vsN-r1-FD2}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.1_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of the tolerance $\epsilon$, various $N$. Separation is $r_0=h$, FD error of $10^{-2}$.}
\label{c8-vsEps-r1-FD2}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.045455_bf0.5_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of $N$, various tolerances $\epsilon$. Separation is $r_0=h$, FD error of $10^{-3}$.}
\label{c8-vsN-r1-FD3}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.045455_bf0.5_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of the tolerance $\epsilon$, various $N$. Separation is $r_0=h$, FD error of $10^{-3}$.}
\label{c8-vsEps-r1-FD3}
\end{minipage}
\end{figure}
This could be explained by the fact that we do not reach high enough accuracies to resolve those returning waves. Put another way, the relative amplitude of the returning waves might be too small to notice at lower accuracies. We were not able to construct an example with returning waves that affected the off-diagonal ranks of the DtN map. We thus conclude that Theorem \ref{theo:sep} holds in a broad range of contexts, at least when $\epsilon$ is not very small.
\chapter{Matrix probing for expanding the Dirichlet-to-Neumann map}\label{ch:probing}
Recall that the goal of this thesis is to introduce a new compression scheme for ABCs. This scheme consists of two steps:
\begin{enumerate}
\item a precomputation sets up an expansion of the Dirichlet-to-Neumann map, then
\item a fast algorithm is used to apply the DtN map in a Helmholtz solver.
\end{enumerate}
This chapter is concerned with the first step of this procedure, namely, setting up an expansion of the exterior DtN map in a precomputation. This will pave the way for compression in step two, presented in the next chapter.
The main strategy we use in this chapter is matrix probing, which we introduce in section \ref{sec:introprobe}. For matrix probing to be an efficient expansion scheme, we need to carefully choose the basis for this expansion. We present our choices and their rationales in section \ref{sec:basis}. In particular, inverse powers multiplied by a complex exponential work quite well as kernels for the basis. We then present a detailed study of using matrix probing to expand the DtN map in various media, use that expansion to solve the Helmholtz equation, and document the complexity of the method, all in section \ref{sec:numexp}. Then, we prove in section \ref{sec:BasisPf} a result on approximating the half-space DtN map with the particular set of functions mentioned before, inverse powers multiplied by a complex exponential. We also present a numerical confirmation of that result in section \ref{sec:NumPf}. %
\section{Introduction to matrix probing}\label{sec:introprobe}
The idea of matrix probing is that a matrix $D$ with adequate structure can sometimes be recovered from the knowledge of a fixed, small number of matrix-vector products $Dg_j$, where $g_j$ are typically random vectors. In the case where $D$ is the numerical DtN map (with a slight abuse of notation), each $g_j$ consists of Dirichlet data on $\partial \Omega$, and each application $Dg_j$ requires solving an exterior Helmholtz problem to compute the derivative of the solution normal to $\partial \Omega$. We first explain how to obtain the matrix-vector multiplication of the DtN map with any vector, without having to use the costly procedure of layer-stripping. We then introduce matrix probing.
\subsection{Setup for the exterior problem}\label{sec:ext}
Recall the exterior problem of section \ref{sec:extprob}: solving the heterogeneous-medium Helmholtz equation at frequency $\omega$, outside $\Omega=[0,1]^2$, with Dirichlet boundary condition $u=g$ on $\partial \Omega$. This problem is solved numerically with the five-point stencil of finite differences (FD), using $h$ to denote the grid spacing and $N$ the number of points across one dimension of $\Omega$. We use a Perfectly Matched Layer (PML) or pPML, introduced in section \ref{sec:layers}, as our ABC. The layer starts at a fixed, small distance away from $\Omega$, so that we keep a small strip around $\Omega$ where the equations are unchanged. Recall that the width of the layer is in general as large as $O(\omega^{1.5})$ grid points. We number the edges of $\partial \Omega$ counter-clockwise starting from $(0,0)$, hence side 1 is the bottom edge $(x,0)$, $0\leq x \leq 1$, side 2 is the right edge, etc. The exterior DtN map for this problem is defined from $\partial \Omega$ to itself. Thus its numerical realization, which we also call $D$ by a slight abuse of notation, has a $4\times 4$ block structure. Hence the numerical DtN map $D$ has 16 sub-blocks, and is $n \times n$ where $n=4N$. As an integral kernel, $D$ would have singularities at the junctions between these blocks (due to the singularities in $\partial \Omega)$, so we shall respect this feature by probing $D$ sub-block by sub-block. We shall denote a generic such sub-block by $M$, or as the $(i_M,j_M)$ sub-block of $D$, referring to its indices in the $4 \times 4$ sub-block structure.
The method by which the system for the exterior problem is solved is immaterial in the scope of this thesis, though for reference, the experiments presented here use UMFPACK's sparse direct solver \cite{UMFPACK}. For treating large problems, a better solver should be used, such as the sweeping preconditioner of Engquist and Ying \cite{Hsweep,Msweep}, the shifted Laplacian preconditioner of Erlangga \cite{erlangga}, the domain decomposition method of Stolk \cite{stolk}, or the direct solver with spectral collocation of Martinsson, Gillman and Barnett \cite{dirfirst,dirstab}. This in itself is a subject of ongoing research which we shall not discuss further.
For a given boundary condition $g$, we solve the system and obtain a solution $u$ in the exterior computational domain. In particular we consider $u_{1}$, the solution in the layer just outside of $\partial \Omega$. We are using the same notation as in section \ref{sec:strip}, where as we recall $u_0$ was the solution on the boundary, hence here $u_0=g$. We know from Section \ref{sec:strip} that $u_1$ and $g$ are related by
\begin{equation}\label{eq:D}
\frac{u_{1} - g}{h}=Dg.
\end{equation}
The matrix $D$ that this relation defines need not be interpreted as a first-order approximation of the continuous DtN map: it is the algebraic object of interest that will be ``probed'' from repeated applications to different vectors $g$.
Similarly, for probing the $(i_M,j_M)$ block $M$ of $D$, one needs matrix-vector products of $D$ with vectors $g$ of the form $[z, 0, 0, 0]^T$, $[0, z, 0, 0]^T$, etc., to indicate that the Dirichlet boundary condition is $z$ on the side indexed by $j_M$, and zero on the other sides. The application $Dg$ is then restricted to side $i_M$.
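In code, forming these matrix-vector products is straightforward once an exterior solver is available. The following Python sketch is purely illustrative: \texttt{exterior\_solve} is a hypothetical black box standing for the FD exterior solve with an absorbing layer (it is not part of any library), and the two helpers implement relation \eqref{eq:D} and the block restriction just described.
\begin{verbatim}
import numpy as np

def apply_D(g, exterior_solve, h):
    # Relation (eq:D): one application of D costs one exterior solve.
    # exterior_solve is an assumed black box returning u_1, the discrete
    # solution on the grid layer just outside the boundary, for data g.
    u1 = exterior_solve(g)
    return (u1 - g) / h

def apply_block(z, j_M, i_M, exterior_solve, h, N):
    # Product of the (i_M, j_M) block M with z: Dirichlet data z on
    # side j_M, zero on the other sides; restrict the result to side i_M.
    g = np.zeros(4 * N, dtype=complex)
    g[(j_M - 1) * N : j_M * N] = z
    return apply_D(g, exterior_solve, h)[(i_M - 1) * N : i_M * N]
\end{verbatim}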
\subsection{Matrix probing}\label{sec:probe}
The dimensionality of $D$ needs to be limited for recovery from a few $Dg_j$ to be possible, but matrix probing is \emph{not} an all-purpose low-rank approximation technique. Instead, it is the property that $D$ has an efficient representation in some adequate pre-set basis that makes recovery from probing possible. As opposed to the randomized SVD method which requires the number of matrix-vector applications to be greater than the rank \cite{Halko-randomSVD}, matrix probing can recover interesting structured operators from a single matrix-vector application \cite{Chiu-probing, Demanet-probing}.
We now describe a model for $M$, any $N \times N$ block of $D$, that will sufficiently lower its dimensionality to make probing possible. Assume we can write $M$ as
\begin{equation}\label{eq:Dexp}
M \approx \sum_{j=1}^p c_j B_j
\end{equation}
where the $B_j$'s are fixed, known basis matrices that need to be chosen carefully in order to give an accurate approximation of $M$. In the case when the medium $c$ is uniform, we typically let $B_j$ be a discretization of the integral kernel
\begin{equation}\label{eq:Bj}
B_j(x,y)= \frac{e^{ik|x-y|}}{(h+|x-y|)^{j/2}},
\end{equation}
where again $h=1/N$ is the discretization parameter. We usually add another index to the $B_j$, and a corresponding multiplicative factor, to allow for a smooth dependence on $x+y$ as well. We shall further detail our choices and discuss their rationales in Section \ref{sec:basis}. For now, we note that the advantage of the specific choice of basis matrix \eqref{eq:Bj}, and its generalizations explained in Section \ref{sec:basis}, is that it results in accurate expansions with a number of parameters $p$ which is ``essentially independent'' of $N$, namely one that grows either logarithmically in $N$, or at most like a very sublinear fractional power law (such as $N^{0.12}$, see section \ref{sec:pwN}). This is in sharp contrast to the scaling for the layer width, $w = O(N)$ grid points, discussed earlier. The form of $B_j$ suggested in equation \eqref{eq:Bj} is motivated by the fact that such matrices provide a good expansion basis for the uniform-medium half-space DtN map in $\mathbb{R}^2$. This will be proved in section \ref{sec:BasisPf}.
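For concreteness, here is a minimal numpy sketch of how the kernel \eqref{eq:Bj} can be discretized on the grid; the wavenumber \texttt{k} and grid size \texttt{N} are the only inputs, and the choices mirror the formula above rather than any canonical implementation.
\begin{verbatim}
import numpy as np

def basis_matrix(j, N, k):
    # Discretize B_j(x,y) = exp(ik|x-y|) / (h + |x-y|)^(j/2)
    # on the N-point grid of [0,1] with spacing h = 1/N.
    h = 1.0 / N
    x = np.arange(N) * h
    r = np.abs(x[:, None] - x[None, :])   # |x - y| on the grid
    return np.exp(1j * k * r) / (h + r) ** (j / 2.0)
\end{verbatim}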
Given a random vector $z^{(1)} \sim N(0,I_N)$ (other choices are possible), the product $w^{(1)}=Mz^{(1)}$ and the expansion \eqref{eq:Dexp}, we can now write
\begin{equation}\label{mp}
w^{(1)}=Mz^{(1)} \approx \sum_{j=1}^p c_j B_j z^{(1)} = \Psi_{z^{(1)}} \, \mathbf{c}.
\end{equation}
Multiplying this equation on the left by the pseudo-inverse of the $N$ by $p$ matrix $\Psi_{z^{(1)}}$ will give an approximation to $\mathbf{c}$, the coefficient vector for the expansion \eqref{eq:Dexp} of $M$. More generally, if several applications $w^{(j)} = M z^{(j)}$, $j = 1,\ldots, q$ are available, a larger system is formed by concatenating the $\Psi_{z^{(j)}}$ into a tall-and-thin $Nq$ by $p$ matrix ${\bm \Psi}$. The computational work is dominated, here and in other cases \cite{Chiu-probing, Demanet-probing}, by the matrix-vector products $Dg^{(j)}$, or $Mz^{(j)}$. Note that both $\Psi_{z^{(j)}}$ and the resulting coefficient vector $\mathbf{c}$ depend on the vectors $z^{(j)}$. In the sequel we let the $z^{(j)}$ be i.i.d. Gaussian random vectors.
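The recovery step itself is a small least-squares solve. Below is a minimal sketch, assuming the orthonormalized basis matrices are available as dense arrays and that \texttt{apply\_M} stands for the black-box product with $M$ (one exterior solve per call).
\begin{verbatim}
import numpy as np

def probe(apply_M, Bs, N, q=3, seed=0):
    # Stack the blocks [B_1 z, ..., B_p z] for q random vectors z into
    # the Nq-by-p matrix Psi, then solve for the coefficients c in the
    # least-squares sense (pseudo-inverse).
    rng = np.random.default_rng(seed)
    p = len(Bs)
    Psi = np.zeros((N * q, p), dtype=complex)
    w = np.zeros(N * q, dtype=complex)
    for l in range(q):
        z = rng.standard_normal(N)          # z^(l) ~ N(0, I_N)
        w[l * N:(l + 1) * N] = apply_M(z)
        for j, B in enumerate(Bs):
            Psi[l * N:(l + 1) * N, j] = B @ z
    c, *_ = np.linalg.lstsq(Psi, w, rcond=None)
    return c                                # M ~ sum_j c_j B_j
\end{verbatim}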
In a nutshell, recovery of $\mathbf{c}$ works under mild assumptions on the $B_j$, and when $p$ is a small fraction of $Nq$ up to log factors. In order to improve the conditioning in taking the pseudo-inverse of the matrix $\Psi_z$ and reduce the error in the coefficient vector $\mathbf{c}$, one may use applications of $M$ to $q > 1$ random vectors. %
There is a limit to the range of $p$ for which this system is well-posed: past work by Chiu and Demanet \cite{Chiu-probing} gives precise conditions on $p$, $N$, and the following two parameters, called \emph{weak condition numbers}, under which the recovery of $\mathbf{c}$ is accurate with high probability.
\begin{definition}
\emph{Weak condition number $\lambda$.}
\[ \lambda = \max_j \frac{\| B_j \|_2 \sqrt{N}}{\| B_j \|_F} \]
\end{definition}
\begin{definition}\label{kap}
\emph{Weak condition number $\kappa$.}
\[ \kappa = \mbox{cond}( {\bf B}), \ {\bf B}_{j \ell} = \mbox{Tr} \, (B_j^T B_\ell)\]
\end{definition}
It is desirable to have a small $\lambda$, which translates into a high rank condition on the basis matrices, and a small $\kappa$, which translates into a Riesz basis condition on the basis matrices. Having small weak condition numbers will guarantee a small failure probability of matrix probing and a bound on the condition number of ${\bf \Psi}$, i.e.\ guaranteed accuracy in solving for $\mathbf{c}$. Also, using $q > 1$ allows the use of a larger $p$, hence greater accuracy. These results are contained in the following theorem.
\begin{theorem} (Chiu-Demanet, \cite{Chiu-probing}) Let $z$ be a Gaussian i.i.d. random vector of length $qN$, and ${\bf \Psi}$ as above. Then $\mbox{cond}({\bf \Psi}) \leq 2\kappa + 1$ with high probability provided that $p$ is not too large, namely
\[
q N \geq C \, p \, (\kappa \lambda \log N)^2,
\]
for some number $C > 0$.
\end{theorem}
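Both weak condition numbers are cheap to evaluate once the basis matrices are formed. A minimal numpy sketch follows; we use the conjugate transpose in the Gram matrix so that the formula also covers complex-valued basis matrices.
\begin{verbatim}
import numpy as np

def weak_condition_numbers(Bs):
    # lambda = max_j ||B_j||_2 sqrt(N) / ||B_j||_F  (high-rank condition)
    # kappa  = cond(G), G_{jl} = Tr(B_j^* B_l)      (Riesz basis condition)
    N = Bs[0].shape[0]
    lam = max(np.linalg.norm(B, 2) * np.sqrt(N) / np.linalg.norm(B, 'fro')
              for B in Bs)
    V = np.column_stack([B.ravel() for B in Bs])    # vectorized basis
    kap = np.linalg.cond(V.conj().T @ V)            # Gram matrix condition
    return lam, kap
\end{verbatim}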
As noted previously, the work necessary for probing the matrix $M$ is on the order of $q$ solves of the original problem. Indeed, computing $Mz^{(1)}, \ldots , Mz^{(q)}$ means solving the exterior problem $q$ times with the AL. This is roughly equivalent to solving the original Helmholtz problem with the AL $q$ times, assuming the AL width $w$ is at least as large as $N$. Then, computing the $qp$ products of the $p$ basis matrices with the $q$ random vectors amounts to a total of at most $qpN^2$ work, or less if the basis matrices have a fast matrix-vector product. And finally, computing the pseudo-inverse of ${\bf \Psi}$ has cost $Nqp^2$. Hence, as long as $p,q \ll N$, the dominant cost of matrix probing\footnote{We will see later that we also need to perform a QR factorization on the basis matrices, and this has cost $N^2p^2$. This precomputation has a cost similar to or smaller than that of an exterior solve using current Helmholtz solvers. It might also be possible to avoid the QR factorization if basis matrices closer to orthonormal are used.} comes from solving the exterior problem $q$ times with a random Dirichlet boundary condition. In our experiments, $q=O(1)$ and $p$ can be as large as a few hundred for high accuracy.
Finally, we note that the information from the $q$ solves can be re-used for any other block which is in the same block column as $M$. However, if we need to probe blocks of $D$ which are not all in the same block column, then another $q$ solves need to be performed, with a Dirichlet boundary condition on the appropriate side of $\partial \Omega$. This of course increases the total number of solves. Another option would be to probe all of $D$ at once, using a combination of basis matrices that have the same size as $D$, but that are 0 except on the support of each distinct block in turn. In this case, $\kappa$ remains the same because we still orthogonalize our basis matrices, but $\lambda$ doubles ($\| B_j \|_2 $ and $\| B_j \|_F$ do not change but $N \rightarrow 4N$) and this makes the conditioning worse; in particular, a higher value of $q$ is needed for the same accuracy, given by $p$. Hence we have decided not to investigate this approach further, though it might become more advantageous in the case of a more complicated polygonal domain.
\subsection{Solving the Helmholtz equation with a compressed ABC}
Once we have obtained approximations $\tilde{M}$ of each block $M$ in compressed form through the coefficients $\mathbf{c}$ using matrix probing, we construct block by block the approximation $\tilde{D}$ of $D$ and use it in a solver for the Helmholtz equation on the domain $\Omega=[0,1]^2$, with the boundary condition
$$\frac{\partial u}{\partial \nu} = \tilde{D}u , \qquad x \in \partial \Omega.$$
\section{Choice of basis matrices for matrix probing}\label{sec:basis}
The essential information of the DtN map needs to be summarized in broad strokes in the basis matrices $B_j$, with the details of the numerical fit left to the probing procedure. In the case of $D$, most of its physics is contained in its \emph{diagonal singularity} and \emph{oscillations}, as predicted by geometrical optics.
A heuristic argument to obtain the form of $D$ starts from the Green's formula \eqref{eq:GRF}, which we differentiate one more time in the normal direction. After accounting for the correct jump condition, we get an alternative Steklov--Poincar\'e identity, namely
\[
D = (T^* + \frac{1}{2} I)^{-1} H,
\]
where $H$ is the hypersingular integral operator with kernel $\frac{\partial^2 G}{\partial \nu_{\mathbf{x}} \partial \nu_{\mathbf{y}}}$; here $G(\mathbf{x},\mathbf{y})$ is again the free-space Green's function, and $\nu_{\mathbf{x}}$, $\nu_{\mathbf{y}}$ are the normals to $\partial \Omega$ at $\mathbf{x}$ and $\mathbf{y}$ respectively. The presence of $(T^* + \frac{1}{2} I)^{-1}$ is somewhat inconsequential to the form of $D$, as it involves solving a well-posed second-kind integral equation. As a result, the properties of $D$ are qualitatively similar to those of $H$. (The exact construction of $D$ from $G$ is of course already known in a few special cases, such as the uniform-medium half-space problem considered earlier.)
\subsection{Oscillations and traveltimes for the DtN map}
Geometrical optics will reveal the form of $G$. In a context where there is no multi-pathing, that is, where there is a single traveltime $\tau(\mathbf{x},\mathbf{y})$ between any two points $\mathbf{x},\mathbf{y} \in \Omega$, one may write a high-$\omega$ asymptotic series for $G$ as
\begin{equation}\label{eq:geoopts}
G(\mathbf{x},\mathbf{y}) \sim e^{i\omega \tau(\mathbf{x},\mathbf{y})} \sum_{j\geq 0} A_j(\mathbf{x},\mathbf{y}) \omega^{-j},
\end{equation}
where $\tau(\mathbf{x},\mathbf{y})$ is the traveltime between points $\mathbf{x}$ and $\mathbf{y}$, found by solving the Eikonal equation
\begin{equation} \label{eq:tau}
\| \nabla_{\mathbf{x}} \tau(\mathbf{x},\mathbf{y}) \| = \frac{1}{c(\mathbf{x})},
\end{equation}
and the amplitudes $A_j$ satisfy transport equations. In the case of multi-pathing (possible multiple traveltimes between any two points), the representation \eqref{eq:geoopts} of $G$ becomes instead
\[
G(\mathbf{x},\mathbf{y}) \sim \sum_j e^{ i \omega \tau_j(\mathbf{x},\mathbf{y})} \sum_{k \geq 0} A_{jk}(\mathbf{x},\mathbf{y}) \omega^{-k},
\]
where the $\tau_j$'s are the traveltimes, each obeying \eqref{eq:tau} away from caustic curves. The amplitudes are singular at caustic curves in addition to the diagonal $\mathbf{x}=\mathbf{y}$, and contain the information of the Maslov indices. Note that traveltimes are symmetric: $\tau_j(\mathbf{x},\mathbf{y})=\tau_j(\mathbf{y},\mathbf{x})$, and so is the kernel of $D$.
The singularity of the amplitude factor in \eqref{eq:geoopts}, at $\mathbf{x} = \mathbf{y}$, is $O \left( \log | \mathbf{x} - \mathbf{y}| \right)$ in 2D and $O \left( | \mathbf{x} - \mathbf{y} |^{-1} \right)$ in 3D. After differentiating twice to obtain $H$, the homogeneity on the diagonal becomes $O \left( | \mathbf{x} - \mathbf{y}|^{-2} \right)$ in 2D and $O \left( | \mathbf{x} - \mathbf{y} |^{-3} \right)$ in 3D. For the decay at infinity, the scalings are different and can be obtained from Fourier analysis of square root singularities; the kernel of $H$ decays like $O \left(| \mathbf{x} - \mathbf{y}|^{-3/2} \right)$ in 2D, and $O \left(| \mathbf{x} - \mathbf{y}|^{-5/2} \right)$ in 3D. In between, the amplitude is smooth as long as the traveltime is single-valued.
As mentioned before, much more is known about DtN maps, such as boundedness and coercivity theorems. Again, we did not attempt to leverage these properties of $D$ in the scheme presented here.
For all these reasons, we define the basis matrices $B_j$ as follows. Assume $\tau$ is single-valued. Denote the tangential component of $\mathbf{x}$ by $x$, and similarly that of $\mathbf{y}$ by $y$, in coordinates local to each edge, with $0 \leq x,y \leq 1$. Each block $M$ of $D$ relates to a couple of edges of the square domain. Let $j = (j_1, j_2)$ with $j_1, j_2$ nonnegative integers. The general forms that we consider are
\[
\beta_j(x,y) = e^{i \omega \tau(x,y)} (h + |x-y|)^{-\frac{j_1}{\alpha}} (h + \theta(x,y))^{-\frac{j_2}{\alpha}}
\]
and
\[
\beta_j(x,y) = e^{i \omega \tau(x,y)} (h + |x-y|)^{-\frac{j_1}{\alpha}} (h + \theta(x,y))^{j_2},
\]
where again $h$ is the grid spacing of the FD scheme, and $\theta(x,y)$ is an adequate function of $x$ and $y$ that depends on the particular block of interest. The more favorable choices for $\theta$ are those that respect the singularities created at the vertices of the square; we typically let $\theta(x,y) = \min(x+y, 2-x-y)$. The parameter $\alpha$ can be taken to be equal to 2, a good choice in view of the numerics and in the light of the asymptotic behaviors on the diagonal and at infinity discussed earlier.
If several traveltimes are needed for geometrical reasons, then different sets of $\beta_j$ are defined for each traveltime. (More about this in the next subsection.) The $B_j$ are then obtained from the $\beta_j$ by QR factorization within each block\footnote{Whenever a block of $D$ has symmetries, we enforce those in the QR factorization by using appropriate weights on a subset of the entries of that block. This also reduces the complexity of the QR factorization.}, where orthogonality is defined in the sense of the Frobenius inner product $\< A, B \> = \mbox{tr}(A B^T)$. This automatically sets the $\kappa$ number of probing to 1.
In many of our test cases it appears that the ``triangular'' condition $j_1 + 2 j_2 < $ \emph{constant} works well. The number of couples $(j_1,j_2)$ satisfying this relation will be $p/T$, where $p$ is the number of basis matrices in the matrix probing algorithm and $T$ is the number of distinct traveltimes. The eventual ordering of the basis matrices $B_j$ respects the increase of $j_1 + 2 j_2$.
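To summarize the construction, the following sketch builds the raw kernels $\beta_j$ of the first general form above for indices satisfying the triangular condition, then orthonormalizes them through a QR factorization of their vectorizations, which realizes the Frobenius inner product and sets $\kappa = 1$. The traveltime \texttt{tau} and the function \texttt{theta} are passed in as assumed callables.
\begin{verbatim}
import numpy as np

def beta(j1, j2, tau, theta, N, omega, alpha=2.0):
    # Raw kernel e^{i omega tau} (h+|x-y|)^{-j1/alpha} (h+theta)^{-j2/alpha}
    h = 1.0 / N
    x = np.arange(N) * h
    X, Y = np.meshgrid(x, x, indexing='ij')
    return (np.exp(1j * omega * tau(X, Y))
            * (h + np.abs(X - Y)) ** (-j1 / alpha)
            * (h + theta(X, Y)) ** (-j2 / alpha))

def build_basis(tau, theta, N, omega, cutoff=6):
    # Keep indices with j1 + 2*j2 < cutoff; a QR factorization of the
    # vectorized kernels orthonormalizes the family in the Frobenius
    # inner product.
    betas = [beta(j1, j2, tau, theta, N, omega)
             for j1 in range(cutoff) for j2 in range(cutoff)
             if j1 + 2 * j2 < cutoff]
    V = np.column_stack([b.ravel() for b in betas])
    Q, _ = np.linalg.qr(V)
    return [Q[:, j].reshape(N, N) for j in range(Q.shape[1])]
\end{verbatim}
For the uniform medium one would take, for instance, \texttt{tau = lambda X, Y: np.abs(X - Y)} (unit speed) and \texttt{theta = lambda X, Y: np.minimum(X + Y, 2 - X - Y)}.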
\subsection{More on traveltimes}\label{sec:tt}
Determining the traveltime(s) $\tau(\mathbf{x},\mathbf{y})$ is the more ``supervised'' part of this method, but is needed to keep the number $p$ of parameters small in the probing expansion. A few different scenarios can arise.
\begin{itemize}
\item In the case when $\nabla c(\mathbf{x})$ is perpendicular to a straight segment of the boundary, locally, then this segment is itself a ray and the waves can be labeled as interfacial, or ``creeping''. The direct traveltime between any two points $\mathbf{x}$ and $\mathbf{y}$ on this segment is then simply given by the line integral of $1/c(\mathbf{x})$. An infinite sequence of additional interfacial waves results from successive reflections at the endpoints of the segment, with traveltimes predicted as follows.
We still consider the exterior problem for $[0,1]^2$. We are interested in the traveltimes between points $\mathbf{x}, \mathbf{y}$ on the same side of $\partial \Omega$ -- for illustration, let $\mathbf{x}=(x,0)$ and $\mathbf{y}=(y,0)$ on the bottom side of $\Omega=[0,1]^2$, with $x \leq y$ (this is sufficient since traveltimes are symmetric). Assume that all the waves are interfacial. The first traveltime $\tau_1$ corresponds to the direct path from $\mathbf{x}$ to $\mathbf{y}$. The second arrival time $\tau_2$ will be the minimum traveltime corresponding to: either starting at $\mathbf{x}$, going left, reflecting off of the $(0,0)$ corner, and coming back along the bottom side of $\partial \Omega$, past $\mathbf{x}$ to finally reach $\mathbf{y}$; or starting at $\mathbf{x}$, going past $\mathbf{y}$, reflecting off of the $(1,0)$ corner and coming straight back to $\mathbf{y}$. The third arrival time $\tau_3$ is the maximum of those two choices. The fourth arrival time then corresponds to starting at $\mathbf{x}$, going left, reflecting off of the $(0,0)$ corner, travelling all the way to the $(1,0)$ corner, and then back to $\mathbf{y}$. The fifth arrival time corresponds to leaving $\mathbf{x}$, going to the $(1,0)$ corner this time, then back to the $(0,0)$ corner, then on to $\mathbf{y}$. And so on. To recap, we have the following formulas:
\begin{eqnarray*}
\tau_1(\mathbf{x},\mathbf{y})&=& \int_x^y \frac{1}{c(t,0)} \ dt, \\
\tau_2(\mathbf{x},\mathbf{y})&=& \tau_1(\mathbf{x},\mathbf{y}) + 2\min \left( \int_0^x \frac{1}{c(t,0)} \ dt, \int_y^1 \frac{1}{c(t,0)} \ dt \right), \\
\tau_3(\mathbf{x},\mathbf{y})&=& \tau_1(\mathbf{x},\mathbf{y}) + 2\max \left( \int_0^x \frac{1}{c(t,0)} \ dt, \int_y^1 \frac{1}{c(t,0)} \ dt \right) = 2\int_0^1 \frac{1}{c(t,0)} \ dt - \tau_2(\mathbf{x},\mathbf{y}), \\
\tau_4(\mathbf{x},\mathbf{y})&=& 2\int_0^1 \frac{1}{c(t,0)} \ dt - \tau_1(\mathbf{x},\mathbf{y}), \\
\tau_5(\mathbf{x},\mathbf{y})&=& 2\int_0^1 \frac{1}{c(t,0)} \ dt + \tau_1(\mathbf{x},\mathbf{y}), \qquad \mbox{etc.} \\
\end{eqnarray*}
All first five traveltimes can be expressed as a sum of $\pm \tau_1$, $\pm \tau_2$ and the constant phase $2\int_0^1 \frac{1}{c(t,0)} \ dt$, which does not depend on $\mathbf{x}$ or $\mathbf{y}$. In fact, one can see that any subsequent traveltime corresponding to traveling solely along the bottom boundary of $\partial \Omega$ should be again a combination of those quantities. This means that if we use $\pm \tau_1$ and $\pm \tau_2$ in our basis matrices, we are capturing all the traveltimes relative to a single side, which helps to obtain higher accuracy for probing the diagonal blocks of $D$.
This simple analysis can be adapted to deal with creeping waves that start on one side of the square and terminate on another side, which is important for the nondiagonal blocks of $D$. (A short numerical sketch of these line-integral traveltimes is given after this list.)
\item In the case when $c(\mathbf{x})$ increases outward in a smooth fashion, we are also often in the presence of body waves, going off into the exterior and coming back to $\partial \Omega$. The traveltime for these waves needs to be found either by a Lagrangian method (solving the ODE for the rays), or by an Eulerian method (solving the Eikonal PDE shown earlier). In this thesis we used the fast marching method of Sethian \cite{sethart} to deal with these waves in the case that we label ``slow disk'' in the next section.
\item In the case when $c(\mathbf{x})$ has singularities in the exterior domain, each additional reflection creates a traveltime that should (ideally) be predicted. Such is the case of the ``diagonal fault'' example introduced in the next section, where a straight jump discontinuity of $c(\mathbf{x})$ intersects $\partial \Omega$ at a non-normal angle: we can construct by hand the traveltime corresponding to a path leaving the boundary at $\mathbf{x}$, reflecting off of the discontinuity and coming back to the boundary at $\mathbf{y}$. More precisely, we consider again $\mathbf{x}=(x,0)$, $\mathbf{y}=(y,0)$ and $x \leq y$, with $x$ larger than or equal to the $x$ coordinate of the point where the reflector intersects the bottom side of $\partial \Omega$. We then reflect the point $\mathbf{y}$ across the discontinuity into the new point $\mathbf{y}'$, and calculate the Euclidean distance between $\mathbf{x}$ and $\mathbf{y}'$. To obtain the traveltime, we then divide this distance by the value $c(\mathbf{x})=c(\mathbf{y})$ of $c$ on the right side of the discontinuity, assuming that value is constant. This body traveltime is used in the case of the ``diagonal fault'', replacing the quantity $\tau_2$ that was described above. This increased accuracy by an order of magnitude, as mentioned in the numerical results of the next section.
\end{itemize}
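As announced in the first item of the list, here is a short numerical sketch of the interfacial traveltimes $\tau_1$ and $\tau_2$, computed by trapezoidal quadrature of the line integrals of $1/c$; \texttt{c\_bottom} is an assumed callable giving the medium along the bottom side, vectorized over its argument.
\begin{verbatim}
import numpy as np

def interfacial_traveltimes(x, y, c_bottom, M=2000):
    # First two interfacial traveltimes between (x,0) and (y,0), x <= y.
    def seg(a, b):
        t = np.linspace(a, b, M)
        return np.trapz(1.0 / c_bottom(t), t)
    tau1 = seg(x, y)
    tau2 = tau1 + 2.0 * min(seg(0.0, x), seg(y, 1.0))
    return tau1, tau2
\end{verbatim}
The remaining arrival times then follow from $\tau_1$, $\tau_2$ and the constant phase \texttt{2 * seg(0.0, 1.0)}, as in the formulas above.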
\section{Numerical experiments}\label{sec:numexp}
Our benchmark media $c(\mathbf{x})$ are as follows:
\begin{enumerate}
\item a uniform wave speed of 1, $c \equiv 1$ (Figure \ref{c1}),
\item a ``Gaussian waveguide" (Figure \ref{wg}),
\item a ``Gaussian slow disk" (Figure \ref{slow}) large enough to encompass $\Omega$ -- this will cause some waves going out of $\Omega$ to come back in,
\item a ``vertical fault" (Figure \ref{fault}),
\item a ``diagonal fault" (Figure \ref{diagfault}),
\item and a discontinuous periodic medium (Figure \ref{period}). The periodic medium consists of square holes of velocity 1 in a background of velocity $1/\sqrt{12}$.
\end{enumerate}
\begin{figure}[H]
\begin{minipage}[t]{0.30\linewidth}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c1med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the uniform medium.}\label{c1}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c3med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the Gaussian waveguide.}\label{wg}
\end{minipage}
\begin{minipage}[t]{0.05\linewidth}
\end{minipage}
\begin{minipage}[t]{0.30\linewidth}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c5med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the Gaussian slow disk.}\label{slow}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c16med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the vertical fault.}\label{fault}
\end{minipage}
\begin{minipage}[t]{0.05\linewidth}
\end{minipage}
\begin{minipage}[t]{0.30\linewidth}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c18med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the diagonal fault.}\label{diagfault}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c33med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the periodic medium.}\label{period}
\end{minipage}
\end{figure}
All media used are continued in the obvious way outside of the domain in which they are shown in the figures, when needed (i.e., they are \emph{not} set to a uniform constant). The outline of the $[0,1]^2$ box is shown in black.
We can use a standard Helmholtz equation solver to estimate the relative error in the Helmholtz equation solution caused by the Finite Difference discretization (the \emph{FD error}\footnote{To find this FD error, we use a large pseudo-PML, and compare the solution $u$ for different values of $N$. What we call the FD error is the relative $\ell_2$ error in $u$ inside $\Omega$.}), and also the error caused by using the specified pPML width\footnote{To obtain the error caused by the absorbing layer, we fix $N$ and compare the solution $u$ for different layer widths $w$, and calculate the relative $\ell_2$ error in $u$ inside $\Omega$.}. Those errors are presented in Table \ref{FDPMLerr}, along with the main parameters used in the remainder of this section, including the position of the point source or right-hand side $f$. We note that, whenever possible, we try to use an AL with error smaller than the precision we seek with matrix probing, hence with a width $w$ greater than that shown in Table \ref{FDPMLerr}. This makes probing easier, i.e.\ $p$ and $q$ can be smaller.
\begin{table}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
Medium &$N$ &$\omega/2\pi$ &FD error &$w$ &$P$ &Source position \\ \hline
$c \equiv 1$ &1023 &51.2 &{$2.5e-01$} &{$4$} &8 &$(0.5,0.25)$ \\ \hline
waveguide &1023 &51.2 &{$2.0e-01$} &{$4$} &56 &$(0.5,0.5)$ \\ \hline
slow disk &1023 &51.2 &{$1.8e-01$} &{$4$} &43 &$(0.5,0.25)$ \\ \hline
fault, left source &1023 &51.2 &{$1.1e-01$} &{$4$} &48 &$(0.25,0.5)$ \\ \hline
fault, right source &1023 &51.2 &{$2.2e-01$} &{$4$} &48 &$(0.75,0.5)$ \\ \hline
diagonal fault &1023 &51.2 &{$2.6e-01$} &{$256$} &101 &$(0.5,0.5)$ \\ \hline
periodic medium &319 &6 &{$1.0e-01$} &{$1280$} &792 &$(0.5,0.5)$ \\ \hline
\end{tabular}
\end{center}
\caption{For each medium considered, we show the parameters $N$ and $\omega/2\pi$, along with the resulting discretization error caused by the Finite Difference (FD error) formulation. We also show the width $w$ of the pPML needed, in number of points, to obtain an error caused by the pPML of less than $1e-1$. Furthermore, we show the total number $P$ of basis matrices needed to probe the entire DtN map with an accuracy of about $1e-1$ as found in Section \protect\ref{sec:tests}. Finally, we show the position of the point source used in calculating the solution $u$.}
\label{FDPMLerr}
\end{table}
Consider now a block $M$ of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. We note that some blocks in $D$ are the same up to transpositions or flips (inverting the order of columns or rows) if the medium $c$ has symmetries.
\begin{definition}
\emph{Multiplicity of a block of $D$.} Let $M$ be a block of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. The \emph{multiplicity} $m(M)$ of $M$ is the number of copies of $M$ appearing in $D$, up to transpositions or flips.
\end{definition}
Only the distinct blocks of $D$ need to be probed. Once we have chosen a block $M$, we may calculate the \emph{true probing coefficients}.
\begin{definition}
\emph{True probing coefficients of block $M$.} Let $M$ be a block of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. Assume orthonormal probing basis matrices $\left\{B_j \right\}$. The true coefficients $c^t_j$ in the probing expansion of $M$ are the inner products $c^t_j = \< B_j, M\>$.
\end{definition}
We may now define the \emph{$p$-term approximation error} for the block $M$.%
\begin{definition}
\emph{The $p$-term approximation error of block $M$.} Let $M$ be a block of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. For orthonormal probing basis matrices $\left\{B_j \right\}$, we have the true coefficients $c^t_j$ in the probing expansion of $M$. Let $M_p =\sum_{j=1}^p c^t_j B_j$ be the probing $p$-term approximation to $M$. The $p$-term approximation error for $M$ is
\begin{equation}\label{apperr}
\sqrt{m(M)} \frac{\|M-M_p\|_F}{\|D\|_F},
\end{equation}
using the matrix Frobenius norm.
\end{definition}
Because the blocks on the diagonal of $D$ have a singularity, their Frobenius norm can be a few orders of magnitude greater than that of other blocks, and so it is more important to approximate those well. This is why we consider the error relative to $D$, not to the block $M$, in the $p$-term approximation error. Also, we multiply by the square root of the multiplicity of the block to give us a better idea of how big the total error on $D$ will be. For brevity, we shall refer to (\ref{apperr}) simply as the approximation error when it is clear from the context what $M$, $p$, $\left\{B_j\right\}$, and $D$ are.
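With orthonormal basis matrices, the true coefficients and the $p$-term approximation error are computed directly; the following is a minimal sketch, with $M$ and the $B_j$ assumed available as dense arrays.
\begin{verbatim}
import numpy as np

def p_term_error(M, Bs, p, mult, normD_F):
    # True coefficients c^t_j = <B_j, M> in the Frobenius inner product
    # (np.vdot conjugates its first argument), the p-term approximation
    # M_p, and the error (apperr) relative to ||D||_F.
    c = np.array([np.vdot(B, M) for B in Bs[:p]])
    Mp = sum(cj * B for cj, B in zip(c, Bs[:p]))
    return np.sqrt(mult) * np.linalg.norm(M - Mp, 'fro') / normD_F
\end{verbatim}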
Then, using matrix probing, we will recover a coefficient vector $\mathbf{c}$ close to $\mathbf{c}^t$, which gives an approximation $\tilde{M}=\sum_{j=1}^p c_j B_j$ to $M$. %
We now define the \emph{probing error} (which depends on $q$ and the random vectors used), for the block $M$.%
\begin{definition}
\emph{Probing error of block $M$.} Let $\mathbf{c}$ be the probing coefficients for $M$ obtained with $q$ random realizations $z^{(1)}$ through $z^{(q)}$. Let $\tilde{M}=\sum_{j=1}^p c_j B_j$ be the probing approximation to $M$. The probing error of $M$ is
\begin{equation}\label{acterr}
\sqrt{m(M)}\frac{\|M-\tilde{M}\|_F}{\|D\|_F}.
\end{equation}
\end{definition}
Again, for brevity, we refer to (\ref{acterr}) as the probing error when other parameters are clear from the context. Once all distinct blocks of $D$ have been probed, we can consider the \emph{total probing error}.
\begin{definition}
\emph{Total probing error.} The total probing error is defined as the total error made on $D$ by concatenating all probed blocks $\tilde{M}$ to produce an approximate $\tilde{D}$, and is equal to
\begin{equation}\label{eq:Derr}
\frac{\|D-\tilde{D}\|_F}{\|D\|_F}.
\end{equation}
\end{definition}
In order to get a point of reference for the accuracy benchmarks, for small problems only, the actual matrix $D$ is computed explicitly by solving the exterior problem $4N$ times using the standard basis as Dirichlet boundary conditions, and from this we can calculate \eqref{eq:Derr} exactly. For larger problems, we only have access to a black-box that outputs the product of $D$ with some input vector by solving the exterior problem. We can then estimate \eqref{eq:Derr} by comparing the products of $D$ and $\tilde{D}$ with a few random vectors different from those used in matrix probing.
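The random estimate of \eqref{eq:Derr} can be sketched as follows, where \texttt{apply\_D} again stands for the black-box exterior solve and the random vectors are drawn independently of those used in probing.
\begin{verbatim}
import numpy as np

def estimate_total_error(apply_D, apply_Dtilde, n, trials=5, seed=1):
    # Monte Carlo estimate of ||D - Dtilde||_F / ||D||_F from the action
    # of both operators on a few random vectors.
    rng = np.random.default_rng(seed)
    num = den = 0.0
    for _ in range(trials):
        z = rng.standard_normal(n)
        d = apply_D(z)                      # one exterior solve
        num += np.linalg.norm(d - apply_Dtilde(z)) ** 2
        den += np.linalg.norm(d) ** 2
    return np.sqrt(num / den)
\end{verbatim}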
We shall present results on the approximation and probing errors for various media, along with related condition numbers, and then we shall verify that using an approximate $\tilde{D}$ (constructed from approximate $\tilde{M}$'s for each block $M$ in $D$) does not affect the accuracy of the new solution to the Helmholtz equation, using the \emph{solution error from probing}.
\begin{definition}
\emph{Solution error from probing.} Once we have obtained an approximation $\tilde{D}$ to $D$ from probing the distinct blocks of $D$, we may use this $\tilde{D}$ in a Helmholtz solver to obtain an approximate solution $\tilde{u}$, and compare that to the true solution $u$ using $D$ in the solver. The solution error from probing is the $\ell_2$ error on $u$ inside $\Omega$:
\begin{equation}\label{eq:solerr}
\frac{\|u-\tilde{u}\|_2}{\|u\|_2} \text{ in } \Omega.
\end{equation}
\end{definition}
\subsection{Probing tests}\label{sec:tests}
As we saw in Section \ref{sec:probe}, randomness plays a role in the value of $\mbox{cond}({\bf \Psi})$ and of the probing error. Hence, whenever we show plots for those quantities in this section, we have done 10 trials for each value of $q$ used. The error bars show the minimum and maximum of the quantity over the 10 trials, and the line is plotted through the average value over the 10 trials. As expected, we will see in all experiments that increasing $q$ gives a better conditioning, and consequently a better accuracy and smaller failure probability. The following probing results will then be used in Section \ref{sec:insolver} to solve the Helmholtz equation.
\subsubsection{Uniform medium}
For a uniform medium, $c \equiv 1$, we have three blocks with the following multiplicities: $m((1,1))=4$ (same edge), $m((2,1))=8$ (neighboring edges), and $m((3,1))=4$ (opposite edges). Note that we do not present results for the $(3,1)$ block: this block has negligible Frobenius norm\footnote{We can use probing with $q=1$ and a single basis matrix (a constant multiplied by the correct oscillations) and have a probing error of less than $10^{-6}$ for that block.} compared to $D$. First, let us look at the conditioning for blocks $(1,1)$ and $(2,1)$. Figures \ref{cond11_1024_c1} and \ref{cond21_1024_c1} show the three relevant conditioning quantities: $\kappa$, $\lambda$ and $\mbox{cond}({\bf \Psi})$ for each block. As expected, $\kappa=1$ because we orthogonalize the basis functions. Also, we see that $\lambda$ does not grow very much as $p$ increases; it remains on the order of 10. As for $\mbox{cond}({\bf \Psi})$, it increases as $p$ increases for a fixed $q$ and $N$, as expected. This will affect probing in terms of the failure probability (the odds that the matrix ${\bf \Psi}$ is far from the expected value) and accuracy (taking the pseudo-inverse will introduce larger errors in $\mathbf{c}$). We notice these two phenomena in Figure \ref{erb1023_c1}, where we show the approximation and probing errors in probing the $(1,1)$ block for various $p$, using different $q$ and making 10 tests for each $q$ value as explained previously. As expected, as $p$ increases, the variations between trials get larger. Also, the probing error, always larger than the approximation error, drifts farther and farther away from it. Comparing Figure \ref{erb1023_c1} with Table \ref{c1solve} of the next section, we see that in Table \ref{c1solve} we are able to achieve higher accuracies. This is because we use the first two traveltimes (so four different types of oscillations, as explained in Section \ref{sec:basis}) to obtain those higher accuracies. But we do not use four types of oscillations for lower accuracies because this demands a larger number of basis matrices $p$ and of solves $q$ for the same error level.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/condnums-c1-b1-med.pdf}
\caption{Condition numbers for the $(1,1)$ block, $c\equiv 1$.}
\label{cond11_1024_c1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/condnums-c1-b2-med-alt.pdf}
\caption{Condition numbers for the $(2,1)$ block, $c\equiv 1$.}
\label{cond21_1024_c1}
\end{minipage}
\end{figure}
\subsubsection{The waveguide}
For a waveguide as a velocity field, we have more blocks compared to the uniform medium case, with different multiplicities: $m((1,1))=2$, $m((2,2))=2$, $m((2,1))=8$, $m((3,1))=2$, $m((4,2))=2$. Note that block $(2,2)$ will be easier to probe than block $(1,1)$ since the medium is smoother on that interface. Also, we can probe blocks $(3,1)$ and $(4,2)$ with $q=1$, $p=2$ and have a probing error less than $10^{-7}$. Hence we only show results for the probing and approximation errors of blocks $(1,1)$ and $(2,1)$, in Figure \ref{erb1023_c2}. Results for using probing in a solver can be found in Section \ref{sec:insolver}.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c1-blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the blocks of $D$, $c\equiv 1$. Circles are for $q=3$, squares for $q=5$, stars for $q=10$. }
\label{erb1023_c1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c3-blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the blocks of $D$, $c$ is the waveguide. Circles are for $q=3$, squares for $q=5$, stars for $q=10$.}
\label{erb1023_c2}
\end{minipage}
\end{figure}
\subsubsection{The slow disk}
Next, we consider the slow disk. Here, we have a choice to make for the traveltime upon which the oscillations depend. We may consider interfacial waves, traveling in straight line segments along $\partial \Omega$, with traveltime $\tau$. There is also the first arrival time of body waves, $\tau_f$, which for some points on $\partial \Omega$ involves taking a path that goes away from $\partial \Omega$, into the exterior where $c$ is higher, and back towards $\partial \Omega$. We have approximated this $\tau_f$ using the fast marching method of Sethian \cite{sethart}. For this example, it turns out that using either $\tau$ or $\tau_f$ to obtain oscillations in our basis matrices does not significantly alter the probing accuracy or conditioning, although it does seem that, for higher accuracies at least, the fast marching traveltime makes convergence slightly faster. Figures \ref{c5fastvsnorm1} and \ref{c5fastvsnorm2} demonstrate this for blocks $(1,1)$ and $(2,1)$ respectively. We omit plots of the probing and approximation errors, and refer the reader to Section \ref{sec:insolver} for final probing results and their use in a solver.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{fast_tau_b1}.pdf}
\caption{Approximation error for the $(1,1)$ blocks of $D$, $c$ is the slow disk, comparing the use of the normal traveltime (circles) to the fast marching traveltime (squares).}
\label{c5fastvsnorm1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{fast_tau_b2}.pdf}
\caption{Approximation error for the $(2,1)$ blocks of $D$, $c$ is the slow disk, comparing the use of the normal traveltime (circles) to the fast marching traveltime (squares).}
\label{c5fastvsnorm2}
\end{minipage}
\end{figure}
\subsubsection{The vertical fault}
Next, we look at the case of the medium $c$ which has a vertical fault. We note that this case is harder because some of the blocks will themselves have a 2 by 2 or 1 by 2 structure caused by the discontinuity in the medium. Ideally, as we shall see, each sub-block should be probed separately. There are 7 distinct blocks, with different multiplicities: $m((1,1))=2$, $m((2,2))=1$, $m((4,4))=1$, $m((2,1))=4$, $m((4,1))=4$, $m((3,1))=2$, $m((4,2))=2$. Blocks $(2,2)$ and $(4,4)$ are easier to probe than block $(1,1)$ because they do not exhibit a sub-structure. Also, since the velocity is smaller on the right side of the fault, the spatial frequency there is higher, which means that blocks involving side 2 are slightly harder to probe than those involving side 4. Hence we first present results for the blocks $(1,1)$, $(2,2)$ and $(2,1)$ of $D$. In Figure \ref{erb1023_c16} we see the approximation and probing errors for those blocks. Then, in Figure \ref{erb1023_c16_sub}, we present results for the errors related to probing the 3 distinct sub-blocks of the $(1,1)$ block of $D$. We can see that probing the $(1,1)$ block by sub-blocks helps achieve greater accuracy. We could have split other blocks too to improve the accuracy of their probing (for example, block $(2,1)$ has a 1 by 2 structure because side 1 has a discontinuity in $c$), but the accuracy of the overall DtN map was still limited by the accuracy of probing the $(1,1)$ block, so we do not show results for other splittings.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c16-blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the blocks of $D$, $c$ is the fault. Circles are for $q=3$, squares for $q=5$, stars for $q=10$.}
\label{erb1023_c16}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c16-sub_blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the sub-blocks of the $(1,1)$ block of $D$, $c$ is the fault. Circles are for $q=3$, squares for $q=5$, stars for $q=10$.}
\label{erb1023_c16_sub}
\end{minipage}
\end{figure}
\subsubsection{The diagonal fault}
Now, we look at the case of the medium $c$ which has a diagonal fault. Again, some of the blocks will themselves have a 2 by 2 or 1 by 2 structure. There are 6 distinct blocks, with different multiplicities: $m((1,1))=2$, $m((2,2))=2$, $m((2,1))=4$, $m((4,1))=2$, $m((3,2))=2$, $m((3,1))=4$. Again, we split up block $(1,1)$ into 4 sub-blocks and probe each of those sub-blocks separately for greater accuracy, but do not split other blocks. We then use two traveltimes for the $(2,2)$ sub-block of block $(1,1)$. Using as the second arrival time the geometrical traveltime consisting of leaving the boundary and bouncing off the fault, as mentioned in Section \ref{sec:tt}, allowed us to increase accuracy by an order of magnitude compared to using only the first arrival traveltime, or compared to using as a second arrival time the usual bounce off the corner (or here, bounce off the fault where it meets $\partial \Omega$). We omit plots of the probing and approximation errors, and refer the reader to Section \ref{sec:insolver} for final probing results and their use in a solver.
\subsubsection{The periodic medium}
Finally, we look at the case of the periodic medium presented earlier. There are 3 distinct blocks, with different multiplicities: $m((1,1))=4$, $m((2,1))=8$, $m((3,1))=4$. We expect the corresponding DtN map to be harder to probe because its structure will reflect that of the medium, i.e.\ it will exhibit sharp transitions at points corresponding to sharp transitions in $c$ (just as with the faults). First, we notice that, in all the previous media we tried, plotting the norm of the anti-diagonal entries of diagonal blocks (or sub-blocks for the faults) shows a rather smooth decay away from the diagonal. However, that is not the case for the periodic medium: there is decay away from the diagonal, but variations from that decay can be of relative order 1. This prevents our usual strategy, using basis matrices containing terms that decay away from the diagonal such as $(h+|x-y|)^{-j_1/\alpha}$, from working adequately. Instead, we use polynomials along anti-diagonals, as well as polynomials along diagonals as we previously did.
It is known that solutions to the Helmholtz equation in a periodic medium are Bloch waves with a particular structure \cite{JohnsonPhot}. However, using that structure in the basis matrices proved not to be robust: it did not succeed very well, probably because our discretization was not accurate enough, so that $D$ exhibited that structure only to a very rough degree. Hence we did not use Bloch waves for probing the periodic medium. Others have successfully used the known structure of the solution in this setting to approximate the DtN map. In \cite{Fliss}, the authors solve local cell problems and Riccati equations to obtain discrete DtN operators for media which are a perturbation of a periodic structure. In \cite{antoine}, the authors develop a DtN map eigenvalue formulation for wave propagation in periodic media. We did not attempt to use those formulations here.
For this reason, we tried basis matrices with no oscillations, but with polynomials in both directions as explained previously, and obtained the results of Section \ref{sec:insolver}.
Now that we have probed the DtN map and obtained compressed blocks to form an approximation $\tilde{D}$ of $D$, we may use this $\tilde{D}$ in a Helmholtz solver as an absorbing boundary condition.
\subsection{Using the probed DtN map in a Helmholtz solver}\label{sec:insolver}
In Figures \ref{solc5}, \ref{solc3}, \ref{solc16l}, \ref{solc16r}, \ref{solc18} and \ref{solc33} we can see the standard solutions to the Helmholtz equation on $[0,1]^2$ using a large PML or pPML for the various media we consider, except for the uniform medium, where the solution is well-known. We use those as our reference solutions.
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c5-pres.pdf}
\caption{Real part of the solution, $c$ is the slow disk.}
\label{solc5}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c3-pres.pdf}
\caption{Real part of the solution, $c$ is the waveguide.}
\label{solc3}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c16l-pres.pdf}
\caption{Real part of the solution, $c$ is the vertical fault with source on the left.}
\label{solc16l}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c16r-pres.pdf}
\caption{Real part of the solution, $c$ is the vertical fault with source on the right.}
\label{solc16r}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c18-pres.pdf}
\caption{Real part of the solution, $c$ is the diagonal fault.}
\label{solc18}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c33-pres.pdf}
\caption{Imaginary part of the solution, $c$ is the periodic medium.}
\label{solc33}
\end{minipage}
\end{figure}
We have tested the solver with the probed $\tilde{D}$ as an absorbing boundary condition with success. See Tables \ref{c1solve}, \ref{c3solve}, \ref{c5solve}, \ref{c16solve}, \ref{c18solve} and \ref{c33solve} for results corresponding to each medium. For each accuracy level, we show the number $p$ of basis matrices required for some blocks, the number of solves $q$ of the exterior problem for those blocks, the total number of solves $Q$, the total probing error \eqref{eq:Derr} in $D$ and the solution error from probing \eqref{eq:solerr}. %
As we can see from the tables, the solution error from probing \eqref{eq:solerr} in the solution $u$ is no more than an order of magnitude greater than the total probing error \eqref{eq:Derr} in the DtN map $D$, for a source position as described in Table \ref{FDPMLerr}. Grazing waves, which can arise when the source is close to the boundary of the computational domain, will be discussed in the next subsection, \ref{sec:graz}. We note again that, for the uniform medium, using the second arrival traveltime as well as the first for the $(1,1)$ block allowed us to achieve accuracies of 5 and 6 digits in the DtN map, which was not possible otherwise. Using a second arrival time for the cases of the faults was also useful. Those results show that probing works best when the medium $c$ is rather smooth. For non-smooth media such as a fault, it becomes harder to probe the DtN map to a good accuracy, so that the solution to the Helmholtz equation also contains more error.
\begin{table}
\caption{$c\equiv 1$}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$p$ for $(1,1)$ & $p$ for $(2,1)$ & $q=Q$ & $\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ & $\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$6$} & {$1$} & {$1$} & {$2.0130e-01$} & {$3.3191e-01$} \\ \hline
{$12$} & {$2$} & {$1$} & {$9.9407e-03$} & {$1.9767e-02$} \\ \hline
{$20$} & {$12$} & {$3$} & {$6.6869e-04$} & {$1.5236e-03$} \\ \hline
{$72$} & {$20$} & {$5$} & {$1.0460e-04$} & {$5.3040e-04$} \\ \hline
{$224$} & {$30$} & {$10$} & {$8.2892e-06$} & {$9.6205e-06$} \\ \hline
{$360$} & {$90$} & {$10$} & {$7.1586e-07$} & {$1.3044e-06$} \\ \hline
\end{tabular}
\end{center}
\label{c1solve}
\end{table}
\begin{table}
\caption{$c$ is the waveguide}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
$p$ for $(1,1)$ &$p$ for $(2,1)$&$q$ &$p$ for $(2,2)$&$q$ &$Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
$40$ &$2$ &$1$ &$12$ &$1$ &$2$ &$9.1087e-02$ &$1.2215e-01$ \\ \hline
$40$ &$2$ &$3$ &$20$ &$1$ &$4$ &$1.8685e-02$ &$7.6840e-02$ \\ \hline
$60$ &$20$ &$5$ &$20$ &$3$ &$8$ &$2.0404e-03$ &$1.3322e-02$ \\ \hline
$112$ &$30$ &$10$ &$30$ &$3$ &$13$ &$2.3622e-04$ &$1.3980e-03$ \\ \hline
$264$ &$72$ &$20$ &$168$ &$10$ &$30$ &$1.6156e-05$ &$8.9911e-05$ \\ \hline
$1012$ &$240$ &$20$ &$360$ &$10$ &$30$ &$3.3473e-06$ &$1.7897e-05$ \\ \hline
\end{tabular}
\end{center}
\label{c3solve}
\end{table}
\begin{table}
\caption{$c$ is the slow disk}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$p$ for $(1,1)$ & $p$ for $(2,1)$ &$q=Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ & $\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$40$} & {$2$} &{$3$} & {$1.0730e-01$} & {$5.9283e-01$} \\ \hline
{$84$} & {$2$} &{$3$} & {$8.0607e-03$} & {$4.5735e-02$} \\ \hline
{$180$} & {$12$} &{$3$} & {$1.2215e-03$} & {$1.3204e-02$} \\ \hline
{$264$} & {$30$} &{$5$} & {$1.5073e-04$} & {$7.5582e-04$} \\ \hline
{$1012$} & {$132$} &{$20$} & {$2.3635e-05$} & {$1.5490e-04$} \\ \hline
\end{tabular}
\end{center}
\label{c5solve}
\end{table}
\begin{table}
\caption{$c$ is the fault}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|} \hline
Q &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$, left source &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$, right source\\ \hline
{$5$} &{$2.8376e-01$} &{$6.6053e-01$} &{$5.5522e-01$} \\ \hline
{$5$} &{$8.2377e-03$} &{$3.8294e-02$} &{$2.4558e-02$} \\ \hline
{$30$} &{$1.1793e-03$} &{$4.0372e-03$} &{$2.9632e-03$} \\ \hline
\end{tabular}
\end{center}
\label{c16solve}
\end{table}
\begin{table}
\caption{$c$ is the diagonal fault}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|} \hline
Q &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$4$} &{$1.6030e-01$} &{$4.3117e-01$} \\ \hline
{$6$} &{$1.7845e-02$} &{$7.1500e-02$} \\ \hline
{$23$} &{$4.2766e-03$} &{$1.2429e-02$} \\ \hline
\end{tabular}
\end{center}
\label{c18solve}
\end{table}
\begin{table}
\caption{$c$ is the periodic medium}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|} \hline
Q &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$50$} &{$1.8087e-01$} &{$1.7337e-01$} \\ \hline
{$50$} &{$3.5714e-02$} &{$7.1720e-02$} \\ \hline
{$50$} &{$9.0505e-03$} &{$2.0105e-02$} \\ \hline
\end{tabular}
\end{center}
\label{c33solve}
\end{table}
\subsection{Grazing waves}\label{sec:graz}
It is well-known that ABCs often have difficulties when a source is close to a boundary of the domain, or in general when waves incident to the boundary are almost parallel to it. We wish to verify that the solution $\tilde{u}$ using the result $\tilde{D}$ of probing $D$ does not degrade as the source becomes closer and closer to some side of $\partial \Omega$. For this, we use a right-hand side $f$ to the Helmholtz equation which is a point source, located at the point $(x_0,y_0)$, where $x_0=0.5$ is fixed and $y_0>0$ becomes smaller and smaller, until it is a distance $2h$ away from the boundary (the point source's stencil has width $h$, so a source at a distance $h$ from the boundary does not make sense). We see in Figure \ref{c1graz} that, for $c \equiv 1$, the solution remains quite good until the source is a distance $2h$ away from the boundary. In this figure, we have used the probed maps we obtained in each row of Table \ref{c1solve}. %
We obtain very similar results for the waveguide, slow disk and faults (for the vertical fault we locate the source at $(x_0,y_0)$, where $y_0=0.5$ is fixed and $x_0$ goes to $0$ or $1$). This shows that the probing process itself does not significantly affect how well grazing waves are absorbed.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.5]{./figs/images/c1-graz-out-flat.pdf}
\caption{Error in solution $u$, $c\equiv 1$, moving point source. Each line in the plot corresponds to using $\tilde{D}$ from a different row of Table \ref{c1solve}.}
\label{c1graz}
\end{center}
\end{figure}
\subsection{Variations of $p$ with $N$}\label{sec:pwN}
We now discuss how the number of basis matrices $p$ needed to achieve a desired accuracy depends on $N$ or $\omega$. To do this, we pick 4 consecutive powers of 2 as values for $N$, and find the appropriate $\omega$ such that the finite discretization error remains constant at $10^{-1}$, so that in fact $N \sim \omega^{1.5}$ as we have previously mentioned. We then probe the $(1,1)$ block of the corresponding DtN map, using the same parameters for all $N$, and observe the required $p$ to obtain a fixed probing error. The worst case we have seen in our experiments came from the slow disk. As we can see in Figure \ref{fig:c5pvsn}, $p$ seems to follow a very weak power law with $N$, close to $p \sim 15N^{.12}$ for a probing error of $10^{-1}$ or $p \sim 15N^{.2}$ for a probing error of $10^{-2}$. In all other cases, $p$ is approximately constant with increasing $N$, or seems to follow a logarithmic law with $N$ as for the waveguide (see Figure \ref{fig:c3pvsn}).
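To make the fitting procedure concrete, here is a minimal Python sketch of how such an empirical power law can be estimated from $(N,p)$ pairs by least squares in log-log coordinates. The data below are synthetic, generated from the law quoted above purely to illustrate the procedure; they are not the measured values.
\begin{verbatim}
# Sketch: fit p ~ C * N^beta from (N, p) pairs via log-log least squares.
import numpy as np

N = np.array([128, 256, 512, 1024])
p = 15 * N ** 0.12        # placeholder data; replace with measured values

beta, logC = np.polyfit(np.log(N), np.log(p), 1)
print("fitted law: p ~ %.2f * N^%.3f" % (np.exp(logC), beta))
\end{verbatim}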
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/pvsn-c5-b1.pdf}
\caption{Probing error of the $(1,1)$ block of the DtN map for the slow disk, fixed FD error level of $10^{-1}$, increasing $N$. This is the worst case, where $p$ follows a weak power law with $N$.}
\label{fig:c5pvsn}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/pvsn-c3-b1.pdf}
\caption{Probing error of the $(1,1)$ block of the DtN map for the waveguide, fixed FD error level of $10^{-1}$, increasing $N$. Here $p$ follows a logarithmic law.}
\label{fig:c3pvsn}
\end{minipage}
\end{figure}
\section{Convergence of probing for the half-space DtN map: theorem}\label{sec:BasisPf}
In this section, we consider the half-space DtN map kernel in uniform medium
$K(r) = \frac{ik}{2r} H_1^{(1)}(kr)$
that we found in section \ref{sec:hsG}. We wish to approximate this kernel for values of $r$ that are relevant to our numerical scheme. Because we take $\Omega$ in our numerical experiments to be the $[0,1]^2$ box, $r=|x-y|$ will be between 0 and 1, in increments of $h$, as coordinates $x$ and $y$ along edges of $\partial \Omega$ vary between 0 and 1 in increments of $h$. However, as we know, $K(r)$ is singular at $r=0$, and since discretization effects dominate near the diagonal in the matrix representation of the DtN map, we shall consider only values of $r$ in the range $r_0 \leq r \leq 1$, with $0 < r_0 \leq 1/k$ (hence $r_0$ can be on the order of $h$). Since we know to expect oscillations $e^{ikr}$ in this kernel, we can remove those from $K$ to obtain%
\begin{equation}\label{eq:Hofr}
H(r) = \frac{ik}{2r} H_1^{(1)}(kr) e^{-ikr}
\end{equation}
(not to be confused with the hypersingular kernel $H$ of section \ref{sec:basis}), a smoother function which will be easier to approximate. Equivalently, we can add those oscillations to the terms in an approximation of $H$, to obtain an approximation of $K$.
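As a concrete illustration (not part of the experiments of this thesis), both kernels can be evaluated with off-the-shelf special-function routines; the wavenumber and the value of $r_0$ below are illustrative choices satisfying $r_0 \leq 1/k$. The demodulated kernel $H$ should vary much more slowly than $K$.
\begin{verbatim}
# Sketch: evaluate K(r) = (ik/2r) H_1^{(1)}(kr) and H(r) = K(r) e^{-ikr}.
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi * 10.0                 # wavenumber (illustrative)
r0 = 0.01                            # satisfies r0 <= 1/k here
r = np.linspace(r0, 1.0, 2000)

K = 1j * k / (2 * r) * hankel1(1, k * r)       # DtN kernel
H = K * np.exp(-1j * k * r)                    # oscillations removed

# Compare first differences as a crude proxy for smoothness:
print("max |dK|:", np.max(np.abs(np.diff(K))))
print("max |dH|:", np.max(np.abs(np.diff(H))))
\end{verbatim}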
For this section only, we denote by $\tilde{D}$ the corresponding operator with integral kernel $H$, while we use $D$ for the half-space Dirichlet-to-Neumann map, that is, the operator with kernel $K$.
\begin{theorem}\label{teo:main}
Let $\alpha > \frac{2}{3}$, $0 < r_0 < 1/k$, and let $K_p(r)$ be the best uniform approximation of $K(r)$ in
$$\mbox{span} \{ \frac{e^{ikr}}{r^{j/\alpha}} : j = 1, \ldots, p, \mbox{ and } r_0 \leq r \leq 1 \}.$$
Denote by $D_p$ the operator defined with $K_p$ in place of $K$. Then, in the operator norm,
$$\| D - D_p \| \leq C_\alpha \, p^{1 - \lfloor 3\alpha/2 \rfloor} \, \| K \|_{L^\infty[r_0,1]},$$
for some $C_\alpha > 0$ depending only on $\alpha$.
\end{theorem}
The important point of the theorem is that the quality of approximation is otherwise independent of $k$, i.e., the number $p$ of basis functions does not need to grow like $k$ for the error to be small. In other words, it is unnecessary to ``mesh at the wavelength level" to spell out the degrees of freedom that go in the representation of the DtN map's kernel.
\begin{remark}
Increasing $\alpha$ does not automatically result in a better approximation error, because a careful analysis of the proof shows that $C_\alpha$ grows factorially with $\alpha$. This behavior translates into a slower onset of convergence in $p$ when $\alpha$ is taken large, as the numerics show in the next section. This can in turn be interpreted as the result of ``overcrowding" of the basis by nearly indistinguishable functions.
\end{remark}
\begin{remark}
It is easy to see that the operator norm of $D$ grows like $k$, for instance by applying $D$ to the function $e^{-ikx}$. %
The uniform norms of $K$ and $H$ once we cut out the diagonal, however, grow like $k^{1/2}/r_0^{3/2}$, so the result above shows that we incur an additional factor $k^{-1/2}r_0^{-3/2}$ in the error (somewhat akin to numerical pollution) in addition to the factor $k$ that we would have gotten from $\| D \|$.
\end{remark}
The result in Theorem \ref{teo:main} points the way for the design of basis matrices to be used in matrix probing, for the more general case of the exterior DtN map in heterogeneous media. We prove Theorem \ref{teo:main} in the next subsections, and present a numerical verification in the next section.
\subsection{Chebyshev expansion}
We mentioned that the domain of interest for the $r$ variable is $[r_0,1]$. Again, expanding $K(r)$ in the system of Theorem \ref{teo:main} is equivalent to expanding $H(r)$ in polynomials of $r^{-1/\alpha}$ over $[r_0,1]$. It will be useful to perform the affine rescaling
\[
\xi(r) = \frac{2}{r_0^{-1/\alpha} - 1} (r^{-1/\alpha} - 1 ) - 1 \qquad \Leftrightarrow \qquad r(\xi) = \left( \frac{\xi+1}{2} (r_0^{-1/\alpha} - 1) + 1 \right)^{-\alpha}
\]
so that the bounds $r \in [r_0,1]$ turn into $\xi \in [-1,1]$. We further write $\xi = \cos \theta$ with $\theta \in [0,\pi]$. Our strategy is to expand $H$ in Chebyshev polynomials $T_n(\xi)$. By definition, the best $p$-term approximation of $H(r)$ in polynomials of $r^{-1/\alpha}$ (best in a uniform sense over $[r_0,1]$) will result in a lower uniform approximation error than that associated with the $p$-term approximation of $H(r(\xi))$ in the $T_n(\xi)$ system. Hence in the sequel we overload notation and write $H_p$ for the $p$-term approximant of $H$ in our Chebyshev system.
We write out the Chebyshev series for $H(r(\xi))$ as
$$H(r(\xi)) = \sum^{\infty}_{j=0} c_j T_j(\xi), \qquad c_j = \frac{2}{\pi} \int_{-1}^1 \frac{H(r(\xi)) T_j(\xi)}{(1-\xi^2)^{1/2}} \ d\xi, $$
with $T_j(\xi)=\cos{(j(\cos^{-1}\xi))}$ (and with the usual convention that the $j=0$ coefficient carries a factor $\frac{1}{\pi}$ instead of $\frac{2}{\pi}$), and $c_j$ alternatively written as
$$ c_j = \frac{2}{\pi} \int_0^\pi H(r(\cos{\theta})) \cos{j\theta} \ d \theta = \frac{1}{\pi} \int_0^{2\pi} H(r(\cos{\theta})) \cos{j\theta} \ d \theta. $$
The expansion will converge fast because we can integrate by parts in $\theta$ and afford to take a few derivatives of $H$, say $M$ of them, as done in \cite{tadmor}. After noting that the boundary terms cancel out because of periodicity in $\theta$, we express the coefficients $c_j$ for $j > 0$, up to a sign, as
$$ c_j = \pm \frac{1}{\pi j^M} \int_0^{2\pi} \sin{j\theta} \frac{d^M}{d\theta^M} H(r(\cos{\theta})) \ d\theta, \qquad \ M \ \text{odd,} $$
$$ c_j = \pm \frac{1}{\pi j^M} \int_0^{2\pi} \cos{j\theta} \frac{d^M}{d\theta^M} H(r(\cos{\theta})) \ d\theta, \qquad \ M \ \text{even.} $$
It follows that, for $j > 0$, and for all $M > 0$,
\[
\left| c_j \right| \leq \frac{2}{j^M} \max_\theta \left| \frac{d^M}{d\theta^M} H(r(\cos{\theta})) \right|.
\]
Let $B_M$ be a bound on this $M$-th order derivative. The uniform error we make by truncating the Chebyshev series to $H_p=\sum^{p}_{j=0} c_j T_j$ is then bounded by %
\begin{equation}\label{eq:bndderiv}
\left\| H-H_p \right\|_{L^\infty[r_0,1]} \leq \sum_{j=p+1}^\infty \left|c_j \right| \leq 2 B_M \sum_{j=p+1}^\infty \frac{1}{j^{M}} \leq \frac{2B_M}{(M-1) p^{M-1}}, \qquad \ p > 1.
\end{equation}
The final step is a simple integral comparison test.
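The rapid decay of the Chebyshev coefficients predicted by this bound is easy to check numerically. The following Python sketch (with illustrative parameters $k$, $r_0$ and $\alpha$, not those used in our experiments) performs the change of variables and fits a Chebyshev expansion of $H(r(\xi))$; the printed coefficient magnitudes should decay as the analysis predicts.
\begin{verbatim}
# Sketch: Chebyshev expansion of H(r(xi)) after the affine rescaling.
import numpy as np
from scipy.special import hankel1
from numpy.polynomial import chebyshev as C

k, r0, alpha = 2 * np.pi * 10.0, 0.01, 2.0

def r_of_xi(xi):
    # Inverse of the affine rescaling xi(r) above.
    return ((xi + 1) / 2 * (r0 ** (-1 / alpha) - 1) + 1) ** (-alpha)

def H(r):
    return 1j * k / (2 * r) * hankel1(1, k * r) * np.exp(-1j * k * r)

xi = np.cos(np.pi * (np.arange(200) + 0.5) / 200)   # Chebyshev-Gauss nodes
vals = H(r_of_xi(xi))
deg = 40
c = C.chebfit(xi, vals.real, deg) + 1j * C.chebfit(xi, vals.imag, deg)
print("|c_j| at j = 0, 10, 20, 30, 40:", np.abs(c[::10]))
\end{verbatim}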
\subsection{Bound on the derivatives of the DtN map kernel with oscillations removed}
The question is now to find a favorable estimate for $B_M$, from studying successive $\theta$ derivatives of $H(r)$ in \eqref{eq:Hofr}. From the bound on the derivatives of Hankel functions in Lemma 1 of \cite{flatland}, we have, for any given $C > 0$,
\begin{equation}\label{eq:derivH}
\left| \frac{d^m}{dr^m} \left( H_1^{(1)}(kr) e^{-ikr} \right) \right| \leq C_m (kr)^{-1/2} r^{-m} \qquad \ \text{for} \ kr \geq C.
\end{equation}
The change of variables from $r$ to $\theta$ results in
\begin{eqnarray*}
\frac{dr}{d\theta}&=& \frac{d\xi}{d\theta} \frac{dr}{d\xi} = \left(-\sin \theta \right) \left( -\alpha \left( \frac{\xi+1}{2} (r_0^{-1/\alpha} - 1) + 1 \right)^{-\alpha-1} \frac{\left(r_0^{-1/\alpha}-1\right)}{2} \right) \\
&=&\left(-\sin \theta \right) \left( -\alpha \ r^{1+1/\alpha} \ \frac{r_0^{-1/\alpha}(1-r_0^{1/\alpha})}{2} \right).
\end{eqnarray*}
Hence
\begin{equation}\label{eq:drdtheta}
\frac{dr}{d\theta} = r (r/r_0)^{1/\alpha} \ \frac{\alpha \sin \theta (1-r_0^{1/\alpha})}{2}.
\end{equation}
Derivatives of higher powers of $r$ are handled by the chain rule, resulting in
\begin{equation}\label{eq:derivr}
\frac{d}{d \theta} (r^p) = p r^p (r/r_0)^{1/\alpha} \ \frac{\alpha \sin \theta (1-r_0^{1/\alpha})}{2}.
\end{equation}
We see that the action of a $\theta$ derivative is essentially equivalent to multiplication by $(r/r_0)^{1/\alpha}$. As for higher derivatives of powers of $r$, it is easy to see by induction that the product rule has them either hit a power of $r$, or a trigonometric polynomial of $\theta$, resulting in a growth of at most $(r/r_0)^{1/\alpha}$ for each derivative:
\[
| \frac{d^m}{d\theta^m} r^p | \leq C_{m,p,\alpha} \, r^p (r/r_0)^{m/\alpha}.
\]
These estimates can now be combined to bound $\frac{d^m}{d \theta^m} \left( H_1^{(1)}(kr) e^{-ikr} \right)$. One of two scenarios occur when applying the product rule:
\begin{itemize}
\item either $\frac{d}{d\theta}$ hits $\frac{d^{m_2}}{d\theta^{m_2}} \left( H_1^{(1)}(kr) e^{-ikr} \right)$ for some $m_2 < m$. In this case, one negative power of $r$ results from $\frac{d}{dr}$ as we saw in \eqref{eq:derivH}, and a factor $r (r/r_0)^{1/\alpha}$ results from $\frac{dr}{d\theta}$ as we saw in \eqref{eq:drdtheta};
\item or $\frac{d}{d\theta}$ hits some power of $r$, possibly multiplied by some trigonometric polynomial in $\theta$, resulting in a growth of an additional factor $(r/r_0)^{1/\alpha}$ as we saw in \eqref{eq:derivr}.
\end{itemize}
Thus, we get at most a $(r/r_0)^{1/\alpha}$ growth factor per derivative in every case. The situation is completely analogous when dealing with the slightly more complex expression $\frac{d^m}{d \theta^m} \left( \frac{1}{r} H_1^{(1)}(kr) e^{-ikr} \right)$. The number of terms is itself at most factorial in $m$, hence we get
\begin{equation}\label{eq:derivbnd}
| \frac{d^m}{d \theta^m} \frac{k}{r} \left( H_1^{(1)}(kr) e^{-ikr} \right) | \leq C_{m, \alpha} \ \frac{k}{r} \left( \frac{r}{r_0} \right)^{\frac{m}{\alpha} - \frac{1}{2}} \leq C_{m,\alpha} \ \frac{k}{r_0} \left( \frac{r}{r_0} \right)^{\frac{m}{\alpha} - \frac{3}{2}}.
\end{equation}
We now pick $m \leq M = \lfloor 3 \alpha /2 \rfloor$, so that the max over $\theta$ is realized when $r = r_0$, and $B_M$ is on the order of $k/r_0$. It follows from \eqref{eq:bndderiv} and \eqref{eq:derivbnd} that
$$ \left\| H-H_p \right\|_{L^\infty[r_0,1]} \leq C_\alpha \ \frac{k}{r_0} \ \frac{1}{p^{\lfloor 3\alpha/2 \rfloor - 1}}, \qquad \ p > 1, \ \alpha > 2/3.$$
The kernel of interest, $K(r) = H(r) e^{ikr}$ obeys the same estimate if we let $K_p$ be the $p$-term approximation of $K$ in the Chebyshev system modulated by $e^{ikr}$.
\subsection{Bound on the error of approximation}
For ease of writing, we now let $D^0$, $D^0_p$ be the operators with respective kernels $K^0(r) = K(r) \chi_{[r_0,1]}(r)$ and $K^0_p(r) = K_p(r) \chi_{[r_0,1]}(r)$. We now turn to the operator norm of $D^0 - D^0_p$ with kernel $K^0 -K^0_p$:
$$(D^0-D^0_p)g(x)=\int_0^1 (K^0-K^0_p)(|x-y|)g(y) \ dy, \qquad x \in [0,1]. $$
We use the Cauchy-Schwarz inequality to bound
\begin{eqnarray*}
\|(D^0-D^0_p)g\|_2 &=& \left(\int_{0\leq x \leq 1} \left|\int_{0\leq y \leq 1, \ |x-y|\geq r_0} (K^0-K^0_p)(|x-y|)g(y) \ dy \right|^2 dx \right)^{1/2} \\
& \leq & \left(\int_{0\leq x \leq 1} \int_{0\leq y \leq 1, \ |x-y|\geq r_0}\left|(K^0-K^0_p)(|x-y|)\right|^2 \ dy dx \right)^{1/2} \|g\|_2 \\
& \leq & \left( \int_{0\leq x \leq 1} \int_{0\leq y \leq 1, \ |x-y|\geq r_0} 1 \ dy \ dx \right)^{1/2} \|g\|_2 \ \max_{0\leq x,y \leq 1, \ |x-y| \geq r_0} |(K^0-K^0_p)(|x-y|)| \\
& \leq & \|g\|_2 \ \| K^0-K^0_p \|_{L^{\infty}[r_0,1]}.
\end{eqnarray*}
Assembling the bounds, we have
\[
\| D^0-D^0_p \|_2 \leq \| K^0-K^0_p \|_{L^{\infty}[r_0,1]} \leq C_{\alpha} \, p^{1 - \lfloor 3 \alpha / 2 \rfloor} \, \frac{k}{r_0}.
\]
It suffices therefore to show that $\| K^0 \|_\infty = \| K \|_{L^\infty[r_0,1]}$ is bigger than $k/r_0$ to complete the proof. Letting $z = kr$, we see that
$$\max_{r_0 \leq r \leq 1} |K(r)|=\frac{k}{2r_0} \max_{kr_0 \leq z \leq k} \left|H_1^{(1)}(z) \right| \geq C \frac{k^{1/2}}{r_0^{3/2}}.$$
The last inequality follows from the fact that there exists a positive constant $c_1$ such that $c_1 z^{-1/2} \leq \left|H_1^{(1)}(z) \right|$, from Lemma 3 of \cite{flatland}. But $k^{1/2}/r_0^{3/2} \geq k/r_0$ precisely when $r_0 \leq 1/k$. Hence we have proved the statement of Theorem \ref{teo:main}.
In the next section, we proceed to a numerical confirmation of Theorem \ref{teo:main}.
\section{Convergence of probing for the half-space DtN map: numerical confirmation}\label{sec:NumPf}
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/half-c1-errs.pdf}
\caption{Probing error of the half-space DtN map ($q=1$, 10 trials, circle markers and error bars) compared to the approximation error (line), $c\equiv 1$, $L=1/4$, $\alpha=2$, $n=1024$, $\omega=51.2$.}
\label{q1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/half-c1-conds.pdf}
\caption{Condition numbers for probing the half-space DtN map, $c\equiv 1$, $L=1/4$, $\alpha=2$, $n=1024$, $\omega=51.2$, $q=1$, 10 trials.}
\label{p2}
\end{minipage}
\end{figure}
In order to use Theorem \ref{teo:main} to obtain convergent basis matrices, we start with the set $\left\{r^{-j/\alpha} \right\}_{j=0}^{p-1}$. We have proved the theorem for the interval $r_0 \leq r \leq 1$, but here we consider $h \leq r \leq 1$, which is a larger interval than in the theorem. We then put in oscillations, orthonormalize, and use this new set as a basis for probing the DtN map. Thus we have pre-basis matrices ($0 \leq j \leq p-1$)
$$(\beta_j)_{\ell m}=\frac{e^{ikh|\ell-m|}}{|\ell-m|^{j/\alpha}} \ \text{for} \ \ell \neq m,$$
with $(\beta_j)_{\ell \ell}=0$. We add to this set the identity matrix in order to capture the diagonal of $D$, and orthonormalize the resulting collection to get the $B_j$. Alternatively, we have noticed that orthonormalizing the set of $\beta_j$'s with
\begin{equation}\label{halfbasis}
(\beta_j)_{\ell m}=\frac{e^{ikh|\ell-m|}}{(h+h|\ell-m|)^{j/\alpha}}
\end{equation}
works just as well, and is simpler because there is no need to treat the diagonal separately. We use this same technique for the probing basis matrices of the exterior problem.
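For concreteness, here is a minimal Python sketch (with illustrative sizes and wavenumber, not the ones used in our experiments) of the construction of the pre-basis matrices in \eqref{halfbasis} and their orthonormalization in the Frobenius inner product, done here by a thin QR factorization of the vectorized matrices.
\begin{verbatim}
# Sketch: pre-basis matrices of (halfbasis), orthonormalized via QR.
import numpy as np

n, p, alpha = 128, 8, 2.0
h = 1.0 / n
k = 2 * np.pi * 10.0

idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])          # |l - m|
betas = [np.exp(1j * k * h * dist) / (h + h * dist) ** (j / alpha)
         for j in range(p)]

# Orthonormalize in the Frobenius inner product: vectorize each beta_j
# as a column and take a reduced QR factorization.
A = np.stack([b.ravel() for b in betas], axis=1)
Q, _ = np.linalg.qr(A)
B = [Q[:, j].reshape(n, n) for j in range(p)]       # orthonormal B_j
print("Gram matrix close to identity:",
      np.allclose(Q.conj().T @ Q, np.eye(p)))
\end{verbatim}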
The convergent basis matrices in \eqref{halfbasis} have been used to obtain a numerical confirmation of Theorem \ref{teo:main}, again for the half-space DtN map. To obtain the DtN map in this setup, instead of solving the exterior problem with a PML or pPML on all sides, we solve a problem on a thin strip, with a random Dirichlet boundary condition (for probing) on one of the long edges, and a PML or pPML on the other three sides. This method for a numerical approximation of the solution to the half-space problem was introduced in section \ref{sec:half} and Figure \ref{fig:half}.
\subsection{Uniform medium}
In Figure \ref{q1}, we show the approximation error, which we expect will behave as in Theorem \ref{teo:main}. We also plot error bars for the probing error, corresponding to ten trials of probing, with $q=1$. The probing results are about as good as the approximation error, because the relevant condition numbers are all well-behaved, as we see in Figure \ref{p2} for our chosen value $\alpha=2$. Back to the approximation error, we notice in Figure \ref{q1} that increasing $\alpha$ delays the onset of convergence as expected, because of the factor in the error of Theorem \ref{teo:main} which is factorial in $\alpha$. We can also see that, for small $\alpha$, we are taking very high inverse powers of $r$, an ill-conditioned operation. Hence the appearance of a convergence plateau for smaller $\alpha$ is explained by ill-conditioning of the basis matrices, and the absence of data points is due to computational overflow.
Finally, increasing $\alpha$ from $1/8$ to $2$ gives a higher rate of convergence, as it should: the error bound contains the factor $p^{1 - \lfloor 3\alpha/2 \rfloor}$, corresponding to a convergence rate of $\lfloor 3\alpha/2 \rfloor - 1$, roughly $3\alpha/2$ for larger $\alpha$. This is roughly what we obtain numerically. As discussed, further increasing $\alpha$ is not necessarily advantageous since the constant $C_\alpha$ in Theorem \ref{teo:main} grows quickly with $\alpha$.
\chapter{Summary of steps}\label{ch:steps}
In this appendix, we summarize the various operations needed in each step of the numerical scheme presented in this thesis.
\subsubsection{Preliminary remarks}
We need two solvers for the numerical scheme: one which does exterior problem solves with boundary condition $g$ on $\partial \Omega$, and one which solves the reformulated problem inside $\Omega$ using $D$ or $\overline{D}$ as a boundary condition. Each solver should be built with the other in mind, so that their discretization points agree on $\partial \Omega$.
For the exterior solves, we shall impose a boundary condition $g$ on $\partial \Omega$ (called $u_0$ in our discussion of layer-stripping), and find the solution on the layer of points just outside of $\Omega$ (called $u_1$ in our discussion of layer-stripping). Note that $u_1$ has eight more points than $u_0$ does. However, the four corner points of $u_1$ are not needed in the normal derivative of $u$ on $\partial \Omega$ (again, because we use the five-point stencil). Also, the four corner points of $u_0$ need to be used twice each. For example, the solution $u_0$ at point $(0,0)$ is needed for the normal derivative in the negative $x_1$ direction (going left) and the normal derivative in the negative $x_2$ direction (going down). Hence we obtain a DtN operator which takes the $4N$ solution points $u_0$ (with corners counted twice) to the $4N$ normal derivatives $(u_1-u_0)/h$ (with corners omitted in $u_1$). In this way, one can impose a (random) boundary condition on the $N$ points of any one side of $\partial \Omega$ and obtain the resulting Neumann data on the $N$ points of any side of $\partial \Omega$.
Once we have probed and compressed the DtN map $D$ into $\overline{D}$, we will need to use this $\overline{D}$ in a Helmholtz solver. We do this using ghost points $u_1$ just outside of $\partial \Omega$ (but not at the corners), which we can eliminate using $\overline{D}$. For best results, it is important to use $\overline{D}$ for the same solution points it was obtained. In other words, here we defined $\overline{D}$ as
$$\overline{D}u_0=\frac{u_1-u_0}{h}.$$
If instead we use the DtN map as
$$\overline{D}u_1=\frac{u_1-u_0}{h} \qquad \text{ or } \qquad \overline{D}u_0=\frac{u_0-u_{-1}}{h},$$
we lose accuracy. In a nutshell, one needs to be careful about designing the exterior solver and the Helmholtz solver so they agree with each other. A small one-dimensional illustration of this ghost-point elimination follows, after which we consider the steps of the numerical scheme.
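The Python sketch below is a 1D analogue (purely illustrative, not the code of this thesis) of the elimination just described: using the map $\overline{D}u_0=(u_1-u_0)/h$, here the exact 1D outgoing map $\overline{D}=ik$, the ghost unknown is replaced by $u_1 = u_0 + h\,\overline{D}u_0$, which folds $\overline{D}$ into the boundary row of the discretized operator.
\begin{verbatim}
# 1D analogue: eliminate the ghost value with u_ghost = u_b + h*Dbar*u_b.
import numpy as np

n = 300; h = 1.0 / n; k = 30.0
Dbar = 1j * k     # exact outgoing map in 1D: u' = ik u at the right end

A = np.zeros((n, n), complex)
for i in range(n):
    A[i, i] = -2.0 / h**2 + k**2
    if i > 0:
        A[i, i - 1] = 1.0 / h**2
    if i < n - 1:
        A[i, i + 1] = 1.0 / h**2
# Right boundary row: fold in the ghost value u_ghost = u_b + h*Dbar*u_b.
A[n - 1, n - 1] += (1.0 + h * Dbar) / h**2

f = np.zeros(n, complex)
f[n // 2] = 1.0                     # point source in the middle
u = np.linalg.solve(A, f)

# Right of the source the wave is purely outgoing, so |u| should be nearly
# flat; the small ripple comes from the first-order discrete ghost map.
tail = np.abs(u[n // 2 + 5:])
print("relative ripple:", np.std(tail) / np.mean(tail))
\end{verbatim}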
\section{Matrix probing: $D \rightarrow \tilde{D}$}
\begin{enumerate}
\item \label{steporg} Organize information about the various submatrices of $D$, their multiplicity and the location and ``orientation'' of each of their copies in $D$. For example, %
in the waveguide medium case, the $(1,1)$ submatrix has multiplicity 2, and appears as itself (because of symmetries in $c$) also in position $(3,3)$. However, the $(2,1)$ submatrix has multiplicity 8, but appears as itself elsewhere in $D$ only in position $(4,3)$. It appears as its transpose (\emph{not} conjugate transpose) in positions $(1,2)$ and $(3,4)$. It appears with column order flipped in positions $(4,1)$ and $(2,3)$, and finally as its transpose with column order flipped in positions $(1,4)$ and $(3,2)$.
\item \label{steprep}Pick a representative for each distinct submatrix of $D$. To do this, think of which block columns of $D$ will be used for probing. Minimizing the distinct block columns used will minimize the cost of matrix probing by minimizing $Q$, the sum of all solves needed. See step \ref{stepq} as well.
\item If the medium $c$ has discontinuities, it might be necessary to split up submatrices further, and to keep track of the ``sub-submatrices'' and their positions, multiplicities and orientations inside the representative submatrix.
\item \label{stepq} Pick a $q$ for each block column, keeping in mind that diagonal submatrices are usually the hardest to probe (hence need a higher $p$ and $q$), and that submatrices the farthest from the diagonal are typically very easy to probe. It might be wise to pick representatives, in step \ref{steprep}, knowing that some will require a higher $q$ than others.
\item \label{stepext} Solve the exterior problem $q$ times on each block column, saving the restriction of the result to the required block rows depending on the representative submatrices you chose. Also, save the random vectors used.
\item \label{steperror} For error checking in step \ref{steperrcheck}, also solve the exterior problem a fixed number of times, say 15, with different random vectors. Again, save both the random vectors and the exterior solves. Use those results to approximate the norm of $D$ as well.
\item For each representative submatrix $M$ (and representative sub-submatrix, if needed), do the following:
\begin{enumerate}
\item Pick appropriate basis matrices $B_j$.
\item Orthonormalize the basis matrices if necessary. If this step is needed, it is useful to use symmetries in the basis matrices to both reduce the complexity of the orthonormalization and enforce those symmetries in the orthonormalized basis matrices.
\item Multiply each basis matrix by the random vectors used in step \ref{stepext} in solving the exterior problem corresponding to the correct block column of $M$. Organize results in the matrix ${\bf \Psi}$.
\item Take the pseudo-inverse of ${\bf \Psi}$ on the results of the exterior solves from step \ref{stepext}, corresponding to the correct block row of $M$, to obtain the probing coefficients $\mathbf{c}$ and $\tilde{M}=\sum c_j B_j$ (a minimal sketch of this least-squares step is given after this list).
\item \label{steperrcheck} To check the probing error, multiply $\tilde{M}$ with the random vectors used in the exterior solves for error checking purposes, in step \ref{steperror}. Compare to the results of the corresponding exterior solves. Multiply that error by the square root of the multiplicity, and divide by the estimated norm of $D$.
\end{enumerate}
\item If satisfied with the probing errors of each submatrix, move to next step: PLR compression.
\end{enumerate}
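The following self-contained Python sketch illustrates the least-squares step above in isolation: a matrix $M$ known to lie in the span of basis matrices $B_j$ is recovered from $q$ products with random vectors. The basis matrices here are random and purely illustrative, a hypothetical toy setup standing in for the orthonormalized basis matrices and the exterior solves of the actual scheme.
\begin{verbatim}
# Toy sketch of the pseudo-inverse (least-squares) step of matrix probing.
import numpy as np

n, p, q = 64, 5, 2
rng = np.random.default_rng(0)

# Illustrative basis matrices and an "unknown" matrix in their span.
B = [rng.standard_normal((n, n)) for _ in range(p)]
c_true = rng.standard_normal(p)
M = sum(c * Bj for c, Bj in zip(c_true, B))

Z = rng.standard_normal((n, q))        # random probing vectors
Y = M @ Z                              # stands in for the q exterior solves

# Psi has one column per basis matrix, stacking the products B_j z_i;
# applying the pseudo-inverse is the least-squares solve below.
Psi = np.stack([(Bj @ Z).ravel() for Bj in B], axis=1)
c_hat, *_ = np.linalg.lstsq(Psi, Y.ravel(), rcond=None)
print("coefficient error:", np.linalg.norm(c_hat - c_true))
\end{verbatim}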
\section{PLR compression: $\tilde{D} \rightarrow \overline{D}$}
\begin{enumerate}
\item For each probed representative submatrix $\tilde{M}$ (and representative sub-submatrix, if needed), do the following:
\begin{enumerate}
\item Pick a tolerance $\varepsilon$ which is smaller than the probing error for that $\tilde{M}$; a factor of 25 smaller usually works well. Also, pick a maximal desired rank $R_\text{max}$. Usually $R_\text{max} \leq 8$ for a diagonal submatrix, $R_\text{max} \leq 4$ for a submatrix just off the diagonal, and $R_\text{max} = 2$ for a submatrix furthest away from the diagonal work well.
\item Compress $\tilde{M}$ using the PLR compression algorithm (a toy sketch of such a compression is given after this list). Keep track of the dimensions and ranks of each block, to compare the matrix-vector complexity with that of a dense product.
\item Check the error made by the PLR compression by comparing $\tilde{M}$ and $\overline{M}$, again multiply that error by the square root of the multiplicity, and divide by the estimated norm of $D$.
\end{enumerate}
\item If satisfied with the PLR errors of each submatrix, move to next step: using the PLR compression of probed submatrices into a Helmholtz solver.
\end{enumerate}
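As an illustration of the compression step, here is a toy Python sketch of one possible PLR compression by recursive $2\times 2$ block splitting: each block is kept as a truncated SVD when a rank at most $R_\text{max}$ meets the tolerance, and is split further otherwise. This is a simplified stand-in for the PLR algorithm of this thesis, shown only to fix ideas, tested on a smooth kernel matrix which is numerically low-rank away from its diagonal.
\begin{verbatim}
import numpy as np

def plr(M, eps, R_max, n_min=8):
    # Truncated SVD of this block; keep a low-rank leaf if the rank
    # needed for relative accuracy eps is at most R_max (or block small).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > eps * s[0]))
    if r <= R_max or min(M.shape) <= n_min:
        return ("leaf", U[:, :r] * s[:r], Vt[:r])
    # Otherwise split into 2x2 blocks and recurse.
    i, j = M.shape[0] // 2, M.shape[1] // 2
    return ("split", plr(M[:i, :j], eps, R_max), plr(M[:i, j:], eps, R_max),
                     plr(M[i:, :j], eps, R_max), plr(M[i:, j:], eps, R_max))

def plr_matvec(node, x):
    # Apply the compressed matrix to a vector (assumes even splits).
    if node[0] == "leaf":
        _, Us, Vt = node
        return Us @ (Vt @ x)
    half = len(x) // 2
    top = plr_matvec(node[1], x[:half]) + plr_matvec(node[2], x[half:])
    bot = plr_matvec(node[3], x[:half]) + plr_matvec(node[4], x[half:])
    return np.concatenate([top, bot])

n = 256
t = (np.arange(n) + 0.5) / n
M = 1.0 / (0.05 + np.abs(t[:, None] - t[None, :]))
tree = plr(M, eps=1e-8, R_max=8)
v = np.random.default_rng(1).standard_normal(n)
err = np.linalg.norm(plr_matvec(tree, v) - M @ v) / np.linalg.norm(M @ v)
print("relative matvec error:", err)
\end{verbatim}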
\section{Using $\overline{D}$ in a Helmholtz solver}
Using the Helmholtz solver described in the preliminary remarks of this appendix, obtain the approximate solution $\overline{u}$ from solving with the appropriate boundary condition using $\overline{D}$. Every time a product of a vector $v$ with $\overline{D}$ is needed, build the result from all submatrices of $\overline{D}$. For each submatrix, multiply the correct restriction of that vector $v$ by the correct probed and compressed representative submatrix $\overline{M}$, taking into account the orientation of the submatrix as discussed in step \ref{steporg} of the matrix probing part of the numerical scheme.
\chapter{Background material}\label{ch:back}
In this chapter, we review the necessary background material for this thesis.
We begin by introducing in section \ref{sec:HEfree} the problem we wish to solve, that is, the \emph{free-space problem} for the Helmholtz equation. This problem is defined on an unbounded domain. We will explain how \emph{absorbing boundary conditions} (ABCs) and the \emph{exterior Dirichlet-to-Neumann map} (DtN map) are related, and can be used to reformulate the free-space problem from an unbounded domain to a bounded one. This reformulation allows us to solve the free-space problem.
We then briefly review existing techniques for constructing ABCs for the Helmholtz equation in section \ref{sec:abc}. From this, we will understand why, when solving the Helmholtz equation in a heterogeneous medium, ABCs are computationally expensive. This suggests an approach where we compress an ABC, or the related exterior DtN map. In other words, we find a way to apply the ABC which allows for a faster solve of the free-space problem.
To attack this problem, we first look at the \emph{half-space} DtN map, which is known analytically in uniform medium. In some respect, this half-space DtN map is quite similar to the exterior DtN map. To understand this similarity, we introduce in section \ref{sec:HEhalf} the concepts of the \emph{exterior} and \emph{half-space} problems for the Helmholtz equation, and the related half-space DtN map. In the next chapters, we will prove facts about the half-space DtN map which will inform our compression scheme for the exterior DtN map. %
Before we explain our work, we end this chapter of background material with section \ref{sec:strip}. We first describe how we may eliminate unknowns in the discretized Helmholtz system to obtain a Riccati equation governing the half-space DtN map. We then describe how the most straightforward way of compressing the exterior DtN map, by eliminating the exterior unknowns in the discretized Helmholtz system, which we call \emph{layer-stripping}, is prohibitively slow. This will also explain why, even if we have a more efficient way of obtaining the DtN map, applying it at every solve of the Helmholtz equation might still be slow. This is why we have developed the two-step procedure presented in this thesis: first an expansion of the DtN map, then a fast algorithm for the application of the DtN map.
\section{The Helmholtz equation: free-space problem}\label{sec:HEfree}
We consider the scalar Helmholtz equation in $\mathbb{R}^2$,
\begin{equation}\label{eq:HE}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2),
\end{equation}
with compactly supported $f$. We shall call this $f$ the \emph{right-hand side} or the \emph{source}. Here, the solution we seek to this equation is $u$. The function $c$ in \eqref{eq:HE} is called the \emph{medium of propagation}, or simply \emph{medium}. When $c$ is a constant, we say the medium is \emph{uniform}. When $c$ varies, we say the medium is \emph{heterogeneous}. We call $\omega$ the frequency, and note (this shall be explained later) that a high frequency makes the problem harder to solve numerically.
Throughout this work we consider the unique solution $u$ to \eqref{eq:HE} determined by the Sommerfeld radiation condition (SRC) at infinity: when $c(\mathbf{x})$ extends to a uniform $c$ outside of a bounded set\footnote{If the medium $c(\mathbf{x})$ does not extend to a uniform value outside of a bounded set, it is possible one could use the limiting absorption principle to define uniqueness of the solution to the Helmholtz equation. Start from the wave equation solution $u(\mathbf{x},t)$. We would like to take the Fourier transform in time of $u$ to obtain the Helmholtz solution. We may take the Laplace transform of $u$ instead to obtain $\hat{u}(\mathbf{x},s)$, and ask that the path of integration in the $s$ variable approach the imaginary axis from the decaying side. This could then be used to define the Helmholtz solution.}, the SRC is \cite{McLean}
\begin{equation}\label{eq:src}
\lim_{r \rightarrow \infty} r^{1/2} \left( \frac{\partial u}{\partial r} - ik u \right) = 0, \qquad k = \frac{\omega}{c},
\end{equation}
where $r$ is the radial coordinate. We call the problem of finding a solution $u$ to \eqref{eq:HE} and \eqref{eq:src} the \emph{free-space problem}.
\subsection{Solution in a uniform medium using the Green's function}\label{sec:Gfsol}
When the medium $c$ is uniform, there exists an analytical solution to the free-space problem, using the \emph{Green's function}.
\begin{definition}\label{def:Green}
The free-space \emph{Green's function} for the Helmholtz equation \eqref{eq:HE} is the unique function $G(\mathbf{x},\mathbf{y})$ which solves the free-space problem with right-hand side a delta function, $f(\mathbf{x})=\delta(|\mathbf{x}-\mathbf{y}|)$, for every fixed $\mathbf{y}$.
\end{definition}
It is well-known (p. 19 of \cite{bookChew}) that the Green's function $G$ of the uniform free-space problem \eqref{eq:HE}, \eqref{eq:src} is the following:
\begin{equation}\label{eq:Gfree}
G(\mathbf{x},\mathbf{y})=\frac{i}{4} H_0^{(1)}(k|\mathbf{x}-\mathbf{y}|)
\end{equation}
where $H_0^{(1)}$ is the Hankel function of zeroth order of the first kind. Then, one can compute the solution $u(\mathbf{x})$ for any $\mathbf{x} \in \mathbb{R}^2$, now with any given right-hand side $f$ supported on some bounded domain, using the following formula (see p. 62 of \cite{FollandIntroPDE}):
\begin{equation}\label{eq:Gsol}
u(\mathbf{x})=\int_{\mathbb{R}^2} G(\mathbf{x},\mathbf{y})f(\mathbf{y}) \ d\mathbf{y}.
\end{equation}
To rapidly and accurately evaluate the previous expression at many $\mathbf{x}$'s is not easy: $G(\mathbf{x},\mathbf{y})$ has a singularity at $\mathbf{x}=\mathbf{y}$, and so care must be taken in the numerical evaluation of the integral in \eqref{eq:Gsol}. Since the issue of evaluating the solution to the free-space problem when the Green's function is known is not directly relevant to this thesis, we refer the reader to \cite{bem}.
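As a crude illustration of \eqref{eq:Gsol} (and of why the singularity requires care), the following self-contained Python sketch evaluates the convolution by simple midpoint quadrature for a smooth bump $f$, zeroing out the (near-)singular quadrature point; a production code would instead use a proper singular quadrature rule as in \cite{bem}. All parameters are illustrative.
\begin{verbatim}
# Sketch: u(x) = sum over grid of G(x,y) f(y) h^2, singular point dropped.
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi * 5.0
n = 200
h = 1.0 / n
grid = (np.arange(n) + 0.5) * h
Y1, Y2 = np.meshgrid(grid, grid, indexing="ij")
f = np.exp(-200.0 * ((Y1 - 0.5) ** 2 + (Y2 - 0.5) ** 2))  # smooth bump

def u_at(x1, x2):
    r = np.hypot(x1 - Y1, x2 - Y2)
    G = 0.25j * hankel1(0, k * np.maximum(r, 1e-12))
    G[r < 0.5 * h] = 0.0   # crude: drop the (near-)singular point
    return np.sum(G * f) * h * h

print("u(0.9, 0.9) ~", u_at(0.9, 0.9))
\end{verbatim}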
\subsection{Solution in a heterogeneous medium using Absorbing Boundary Conditions}
For a number of practical applications of the Helmholtz equation, as was mentioned in the introductory chapter, one cannot find an analytical solution to this problem because the medium is not uniform, that is, $c$ is not a constant. In particular, the Green's function is not known, and one expects that calculating this Green's function numerically is just as hard as calculating the solution with numerical methods.
To obtain a numerical solution to an equation on an unbounded domain then, one must first reformulate this equation to obtain a possibly modified equation on a bounded domain. Hence, we pick a domain $\Omega$ in $\mathbb{R}^2$, with the compact support of $f$ contained in $\Omega$, such that $\Omega$ contains the area of interest, that is, where we care to obtain a solution. We now seek to reformulate the SRC on the boundary $\partial \Omega$, so that the resulting solution inside $\Omega$ matches that of the free-space problem. This leads us to define an \emph{Absorbing Boundary Condition}:
\begin{definition}
An \emph{Absorbing Boundary Condition} for the Helmholtz equation \eqref{eq:HE} is a condition on $\partial \Omega$, the boundary of a closed, bounded domain $\Omega \subset \mathbb{R}^2$, which uniquely defines a solution to the Helmholtz equation restricted to $\Omega$, such that this unique solution matches the solution to the free-space problem \eqref{eq:HE}, \eqref{eq:src}.
\end{definition}
Clearly, if we can reformulate the SRC on the boundary $\partial \Omega$, so that the resulting solution inside $\Omega$ matches that of the free-space problem, we will obtain an ABC.
ABCs are extremely important to the numerical solution of the free-space problem because, as alluded to previously, they allow us to restrict the computational domain to a bounded domain $\Omega$, where a solution can be computed in finite time with finite memory. We will discuss ABCs in more detail in the next section, section \ref{sec:abc}.
We will explain in the next chapter the particular, quite rudimentary, solver we used in our numerical experiments. We note here that a better solver should be used for treating larger problems or obtaining better accuracy, for two reasons. First of all, as we explain in more detail in section \ref{sec:strip}, the cost of solving the Helmholtz problem with a standard solver in two dimensions is $O(N^4)$, which is prohibitive. Secondly, as we discuss in section \ref{sec:compABC}, a higher frequency means we need more points per wavelength in our discretization -- this is known as the \emph{pollution effect}. To treat larger problems, there exist better solvers such as the sweeping preconditioner of Engquist and Ying \cite{Hsweep,Msweep}, the shifted Laplacian preconditioner of Erlangga \cite{erlangga,er}, the domain decomposition method of Stolk \cite{stolk}, or the direct solver with spectral collocation of Martinsson, Gillman and Barnett \cite{dirfirst,dirstab}. The problem of appropriate numerical solvers for the Helmholtz equation in high frequency is very much a subject of ongoing research and not the purpose of this thesis, hence we do not discuss this further.
\subsection{The Dirichlet-to-Neumann map as an ABC}\label{sec:dtn}
We now seek to reformulate the SRC \eqref{eq:src} on $\partial \Omega$. There are many ways to do that numerically, as we shall see in section \ref{sec:abc}, but we wish to highlight this particular, analytical way because it introduces a fundamental concept, the \emph{Dirichlet-to-Neumann} map.
Let $G(\mathbf{x},\mathbf{y})$ be the Green's function for the free-space problem. Define the single and double layer potentials, respectively, on some closed contour $\Gamma$ by the following, for $\psi, \ \phi$ on $\Gamma$ (see details in \cite{McLean}, \cite{CK}):
\[
S \psi (\mathbf{x})=\int_{\Gamma} G(\mathbf{x},\mathbf{y}) \ \psi(\mathbf{y}) \ dS_y, \qquad T \phi (\mathbf{x}) =\int_{\Gamma} \frac{\partial G}{\partial \nu_{\mathbf{y}}} (\mathbf{x},\mathbf{y}) \ \phi(\mathbf{y}) \ dS_{\mathbf{y}},
\]
where $\nu$ is the outward pointing normal to the curve $\Gamma$, and $\mathbf{x}$ is not on $\Gamma$.
Now let $u^+$ satisfy the Helmholtz equation \eqref{eq:HE} in the exterior domain $\mathbb{R}^2 \setminus \overline{\Omega}$, along with the SRC \eqref{eq:src}. Then Green's third identity is satisfied in the exterior domain: using $\Gamma= \partial \Omega$, we get
\begin{equation}\label{eq:GRF}
T u^+ - S \frac{\partial u}{\partial \nu}^+ = u^+, \qquad \mathbf{x} \in \mathbb{R}^2 \setminus \overline{\Omega}.
\end{equation}
Finally, using the jump condition of the double layer $T$, we obtain Green's identity on the boundary $\partial \Omega$:
\[
(T - \frac{1}{2} I ) \, u^+ - S \frac{\partial u}{\partial \nu}^+ = 0, \qquad \mathbf{x} \in \partial \Omega.
\]
When the single-layer potential $S$ is invertible\footnote{This is the case when there is no interior resonance at frequency $\omega$, which could be circumvented by the use of combined field integral equations as in \cite{CK}. The existence and regularity of $D$ ultimately do not depend on the invertibility of $S$.}, we can let $D = S^{-1} (T - \frac{1}{2} I )$, and equivalently write (dropping the $+$ in the notation)
\begin{equation}\label{eq:dtn-abc}
\frac{\partial u}{\partial \nu} = D u, \qquad \mathbf{x} \in \partial \Omega.
\end{equation}
The operator $D$ is called the \emph{exterior Dirichlet-to-Neumann map} (or DtN map), because it maps the Dirichlet data $u$ to the Neumann data $\partial u/\partial \nu$ with $\nu$ pointing outward. The DtN map is independent of the right-hand side $f$ of \eqref{eq:HE} as long as $f$ is supported in $\Omega$. The notion that \eqref{eq:dtn-abc} can serve as an exact ABC was made clear in a uniform medium, e.g., in \cite{engmaj} and in \cite{kelgiv}. Equation \eqref{eq:dtn-abc} continues to hold even when $c(\mathbf{x})$ is heterogenous in the vicinity of $\partial \Omega$, provided the correct (often unknown) Green's function is used. The medium is indeed heterogeneous near $\partial \Omega$ in many situations of practical interest, such as in geophysics.
The DtN map $D$ is symmetric; the symmetry was proved in a slightly different setting in \cite{symm}, and the proof can be adapted to our situation. Much more is known about DtN maps, such as the many boundedness and coercivity theorems between adequate fractional Sobolev spaces (mostly in free space, with various smoothness assumptions on the boundary). We did not attempt to leverage these properties of $D$ in the scheme presented here.
We only compress the exterior DtN map in this work, and often refer to it as the DtN map for simplicity, unless there could be confusion with another concept, for example with the \emph{half-space DtN map}. We shall talk more about the half-space DtN map soon, but first, we review in the upcoming section existing methods for discrete absorbing boundary conditions.%
\section{Discrete absorbing boundary conditions}\label{sec:abc}
There are many ways to realize an absorbing boundary condition for the Helmholtz equation, and we briefly describe the main ones in this section. We start with ABCs that are surface-to-surface, and move on to ABCs which involve surrounding the computational domain $\Omega$ by an absorbing layer. The latter approach is often more desirable because the parameters of the layer can usually be adjusted to obtain a desired accuracy. We then discuss the complexity of ABCs in heterogeneous media.%
\subsection{Surface-to-surface ABCs}
An early seminal work in absorbing boundary conditions was from Engquist and Majda, who in \cite{engmaj} consider the half-space problem ($x_1 \geq 0$) for the wave equation, with uniform medium, and write down the form of a general wave packet traveling to the left (towards the negative $x_1$ direction).
%
From this, they calculate the boundary condition which exactly annihilates those wave packets, and obtain \footnote{We are omitting details here for brevity, including a discussion of pseudo-differential operators and their symbols.}: %
%
\begin{equation} \label{sym}
d/dx - i\sqrt{\omega^2-\xi^2}
\end{equation}
where $(\omega,\xi)$ are the dual variables to $(t,y)$ in Fourier space. They can then approximate the square root in various ways in order to obtain usable, i.e. local in both space and time, boundary conditions, recalling that $i\omega$ corresponds to $\partial/\partial t$ and $i\xi$ corresponds to $\partial/\partial y$.
Hagstrom and Warburton in \cite{haglap} also consider the half-space problem, take the transverse Fourier-Laplace transforms of the solution and use a given Dirichlet data on the boundary of the half-space to obtain what they call a complete wave representation of the solution, valid away from the boundary. They then use this representation to obtain approximate local boundary condition sequences to be used as ABC's. Again, their method was developed for the uniform case.
Keller and Givoli, in \cite{kelgiv}, use a different technique: they assume a circular or spherical $\Omega$, and a uniform medium outside of this $\Omega$. They can then use the Green's function of the exterior problem, which is known for a circle, in order to know the solution anywhere outside $\Omega$, given the boundary data $u$ on $\partial \Omega$. They can then differentiate this solution in the radial (which is the normal) coordinate, and evaluate it on $\partial \Omega$ to obtain the exterior DtN map. They can now use this DtN map as an ABC in a Helmholtz solver. This technique can be seen as \emph{eliminating the exterior unknowns}: as we do not care to know the solution outside of $\Omega$, we can use the information we have on the exterior solution to reduce the system to one only on the inside of $\Omega$. This meaning of \emph{eliminating the exterior unknowns} shall become more obvious when we apply this to the discretized Helmholtz equation in section \ref{sec:strip}.
Somersalo et al. (\cite{somer}) also use the DtN map, this time to solve an interior problem related to the Helmholtz equation. They use a differential equation of Riccati type to produce the DtN map. When we introduce the elimination of unknowns in section \ref{sec:strip}, we shall demonstrate the connection we have found between the \emph{half-space} problem and a Riccati equation for the DtN map.
The aforementioned ABCs are not meant to be a representative sample of all the existing techniques. Unfortunately, these techniques either do not apply to heterogeneous media, or do not perform very well in that situation. In contrast, various types of absorbing layers can be used in heterogeneous media, with caution, and are more flexible.
\subsection{Absorbing layer ABCs}\label{sec:layers}
Another approach to ABCs is to surround the domain of interest by an \emph{absorbing layer}. %
While a layer should preferably be as thin as possible, to reduce computational complexity, its design involves at least two different factors: 1) waves that enter the layer must be significantly damped before they re-enter the computational domain, and 2) reflections created when waves cross the domain-layer interface must be minimized. The Perfectly Matched Layer of B\'erenger (called PML, see \cite{berenger}) is a convincing solution to this problem in a uniform acoustic medium. Its performance often carries through in a general heterogeneous acoustic medium $c(\mathbf{x})$, though its derivation, strictly speaking, does not.
PML consists of analytically continuing the solution to the complex plane for points inside the layer. This means we apply a coordinate change of the type $x \rightarrow x + i \int^x \sigma(\tilde{x}) \ d\tilde{x}$, say for a layer in the positive $x$ direction, with $\sigma=0$ in the interior $\Omega$, and $\sigma$ positive and increasing inside the layer. Hence the equations are unchanged in the interior, so the solution there is the desired solution to the Helmholtz equation, but waves inside the layer will be more and more damped, the deeper they go into the layer. This is because the solution is a superposition of complex exponentials, which become decaying exponentials under the change of variables. Then we simply put zero Dirichlet boundary conditions at the end of the layer. Whatever waves reflect there will be tiny when they come back out of the layer into the interior.

For a heterogeneous medium, we may still define a layer-based scheme from a transformation of the spatial derivatives which mimics the one done for the PML in a uniform medium, by replacing the Laplacian operator $\Delta$ by some $\Delta_{layer}$ inside the PML, but this layer will not be perfectly matched anymore and is called a \emph{pseudo-PML} (pPML). In this case, reflections from the interface between $\Omega$ and the layer are usually not small. It has been shown in \cite{adiabatic} that, in some cases of interest to the optics community with nonuniform media, pPML for Maxwell's equations can still work, but the layer needs to be made very thick in order to minimize reflections at the interface. In this case, the Helmholtz equation has to be solved in a very large computational domain, where most of the work will consist in solving for the pPML. In fact, the layer might even cause the solution to grow exponentially inside it, instead of forcing it to decay (\cite{diazjoly}, \cite{back}), because the group and phase velocities have opposite signs. %
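To fix ideas, here is a self-contained one-dimensional Python illustration of the complex stretching just described (all parameters are illustrative): we discretize $\frac{1}{s}\frac{d}{dx}\big(\frac{1}{s}\frac{du}{dx}\big) + k^2 u = f$ with $s(x) = 1 + i\sigma(x)/\omega$, a quadratic absorption ramp in layers at both ends, and homogeneous Dirichlet conditions at the outer ends of the layers. The solution should be essentially outgoing in the interior and tiny at the outer boundaries.
\begin{verbatim}
import numpy as np

omega = 2 * np.pi * 20.0; k = omega        # c = 1, so k = omega
n = 800
h = 1.0 / (n + 1)
x = (np.arange(n) + 1) * h                 # interior nodes of (0,1)
L = 0.15                                   # layer width at each end

def sigma(x):
    # Quadratic absorption ramp inside the layers, zero in the interior.
    d = np.maximum(0.0, np.maximum(L - x, x - (1.0 - L))) / L
    return 150.0 * d ** 2

s = 1.0 + 1j * sigma(x) / omega            # stretching at the nodes
xm = (np.arange(n + 1) + 0.5) * h          # midpoints
sm = 1.0 + 1j * sigma(xm) / omega

# Second-order scheme for (1/s) d/dx( (1/s) du/dx ) + k^2 u = f,
# with zero Dirichlet conditions at the outer ends of the layers.
A = np.zeros((n, n), complex)
for i in range(n):
    A[i, i] = -(1 / sm[i] + 1 / sm[i + 1]) / (s[i] * h ** 2) + k ** 2
    if i > 0:
        A[i, i - 1] = 1 / (s[i] * sm[i] * h ** 2)
    if i < n - 1:
        A[i, i + 1] = 1 / (s[i] * sm[i + 1] * h ** 2)

f = np.zeros(n, complex)
f[n // 2] = 1.0 / h                        # point source at the center
u = np.linalg.solve(A, f)
umax = np.abs(u).max()
print("relative |u| at outer ends:", abs(u[0]) / umax, abs(u[-1]) / umax)
\end{verbatim}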
An ABC scheme which is more stable by construction is the one of Appel\"o and Colonius \cite{appcol}. They use a smooth coordinate transform to reduce an unbounded domain to a bounded one with a slowing-down layer, and damp the spurious waves thus created by artificial viscosity (high-order undivided differences). The stability of this scheme follows from its construction, so that it can be used in problems for which the pPML is unstable. However, this method is not ideal because it requires discretizing higher and higher order space derivatives in order to obtain better and better results.
\subsection{Complexity of ABCs in heterogeneous media}\label{sec:compABC}
Unfortunately, discrete absorbing layers such as the pPML may need to be quite wide in practice, or may be otherwise computationally costly (because, for example, of the high-order artificial viscosity in \cite{appcol}). Call $L$ this width (in meters). Although this is not a limitation of the framework presented in this thesis, we discretize the Helmholtz operator in the most elementary way using the standard five-point difference stencil. Put $h = 1/N$ for the grid spacing, where $N$ is the number of points per dimension for the interior problem, inside the unit square $\Omega = [0,1]^2$. While $\Omega$ contains $N^2$ points, the total number of unknowns is $O\left((N+2w)^2\right)$ in the presence of the layer, where $w=L/h$ is its width in number of grid points. In a uniform medium, the PML width $L$ needed is a fraction of the wavelength, i.e. $L \sim \lambda=\frac{2\pi}{\omega} \sim \frac{1}{N}$, so that we need a constant number of points independently of $N$: $w=L/h=LN \sim 1$. However, in nonuniform media, the heterogeneity of $c(\mathbf{x})$ can limit the accuracy of the layer. If we consider an otherwise uniform medium with an embedded scatterer outside of $\Omega$, then the pPML will have to be large enough to enclose this scatterer, no matter the size of $N$. For more general, heterogeneous media such as the ones considered in this thesis, we often observe that convergence as a function of $L$ or $w$ is delayed compared to a uniform medium. That means that we have $L \sim L_0$ so that $w \sim NL_0$ or $w = O(N)$, as we assume in the sequel.
The authors of \cite{appcol} have not investigated the computational complexity of their layer for a given accuracy, but it is clear that a higher accuracy will require higher-order derivatives for the artificial viscosity, and those are quite costly. Fortunately, the framework to be developed over the next chapters also applies to the compression of such a layer, just as it does to any other ABC.
In the case of a second-order discretization, the rate at which one must increase $N$ in order to preserve a constant accuracy in the solution, as $\omega$ grows, is about $N =O(\omega^{1.5})$. This unfortunate phenomenon, called the \emph{pollution effect}, is well-known: it forces us to \emph{increase} the resolution, or number of points per wavelength, of the scheme as $\omega$ grows \cite{nvsom,BabPollut}. As we saw, the width of the pPML may be as wide as a constant value $L_0$ independent of $N$, hence its width generally needs to scale as $O(\omega^{1.5})$ grid points.
Next, we introduce the exterior and half-space problems for the Helmholtz equation. We explain how those are related, and how knowledge from the solution of one will help us with the solution to the other.
\section{The Helmholtz equation: exterior and half-space problems}\label{sec:HEhalf}
The previous two sections addressed the fact that we wish to obtain the exterior DtN map in order to approximate the free-space solution in $\Omega$, and how to do that using ABCs and the DtN map. However, ABCs can be computationally intensive. To obtain the exterior DtN map numerically in a feasible way, we will need to solve the exterior problem in chapter \ref{ch:probing}, and so we define it here. The \emph{half-space problem} for the Helmholtz equation is also interesting to us because we can write down an analytical formula for its DtN map, and use that to gain knowledge that might prove to be more general and apply to the exterior DtN map. Hence we begin by explaining the exterior problem. Then we introduce the half-space problem and its DtN map, and why this might give us insights into the exterior DtN map. We then state important results which will be used in the next chapters.
\subsection{The exterior problem}\label{sec:extprob}
The exterior problem consists of solving the free-space problem, but outside of some domain $\Omega$, given a Dirichlet boundary condition $g$ on $\partial \Omega$ and the SRC \eqref{eq:src}. That is, the following has to hold:
\begin{equation}\label{eq:HEext}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = 0, \ \mathbf{x} = (x_1, x_2) \in \Omega^c,
\end{equation}
where $\Omega^c=\mathbb{R}^2 \setminus \Omega$, with the boundary data $u(\mathbf{x})=g(\mathbf{x})$ for $\mathbf{x} \in \partial \Omega$ and a given $g$. Again, we require the SRC to hold. We call this the \emph{exterior problem}, since we solve for the solution $u$ outside of the domain $\Omega$. We are interested in this problem because, if we can find the solution $u$, then we can take its derivative, normal to $\partial \Omega$, and obtain the exterior Dirichlet-to-Neumann map. This is how we will calculate the DtN map numerically in the next chapter. Then, of course, the DtN map can be used to solve the free-space problem reformulated on $\Omega$, which is our goal. This may sound like a circular way of solving the free-space problem, but it is not: as we shall see in more detail in the next chapter, solving the exterior problem a few times gives us the exterior DtN map once and for all, which then speeds up every subsequent free-space solve.
In the next chapter, we will use $\Omega=\left[0,1\right]^2$, a square of side 1. For practical computations, a rectangular domain made to fit tightly around the scatterer of interest is often used, especially if we have a thin and long scatterer. As we will see in section \ref{sec:abc}, numerous ABCs have been designed for the rectangular domain, so choosing a rectangular domain here is not a limitation. Then, the numerical DtN map is a matrix which, when multiplied by a vector of Dirichlet values $u$ on $\partial \Omega$, outputs Neumann values $\partial_\nu u$ on $\partial \Omega$. In particular, we can consider the submatrix of the DtN map corresponding to Dirichlet values on one particular side of $\Omega$, and Neumann values on that same side. As we shall see next, this submatrix should be quite similar to the half-space DtN map.
\subsection{The half-space problem}\label{sec:half}
We consider again the scalar Helmholtz equation, but this time we take $\Omega$ to be the top half-space $\mathbb{R}_+^2=\left\{ (x_1,x_2) : x_2 \geq 0 \right\}$, so that now the boundary $\partial \Omega$ is the $x_1$-axis, that is, when $x_2=0$. And we consider the exterior problem for this $\Omega$:
\begin{equation}\label{eq:hsHE}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = 0, \qquad \mathbf{x} = (x_1, x_2), \qquad x_2 \leq 0
\end{equation}
with given boundary condition $g$:
\begin{equation}\label{eq:bchalf}
u(x_1,0)=g(x_1,0),
\end{equation}
requiring some decay on $g(x_1,0)$ as $x_1 \rightarrow \pm \infty$ and the SRC \eqref{eq:src} to hold in the bottom half-space $\mathbb{R}_-^2=\left\{ (x_1,x_2) : x_2 \leq 0 \right\}$. We shall explain how to find this analytical DtN map for the half-space problem in uniform medium, but first, we discuss the relevance of the half-space DtN map for the exterior DtN map.
Let us use again $\Omega=\left[0,1\right]^2$ as we introduced earlier, and let us call $S_1$ the bottom side of $\partial \Omega$: $S_1=\left\{(x_1,x_2): 0 \leq x_1 \leq 1, x_2=0 \right\}$. For the exterior problem, we prescribe boundary values $g$ on $\partial \Omega$, solve the exterior problem and obtain values of $u_\text{ext}$ everywhere outside of $\Omega$. From those, we know the exterior DtN map $D_\text{ext}:\partial \Omega \rightarrow \partial \Omega$. Consider now the values of $u_\text{ext}$ we have just found, along the $x_1$ axis: we can use those to define the boundary condition \eqref{eq:bchalf} of the half-space problem. The solution $u_\text{half}$ we find for this half-space problem on the bottom half-space $\mathbb{R}_-^2$ coincides with $u_\text{ext}$ on that half-space. Similarly, the exterior DtN map $D_\text{ext}$ restricted to the bottom side of $\Omega$, $D_\text{ext}: S_1 \rightarrow S_1$ coincides with the half-space DtN map $D_\text{half}$, restricted to this same side $D_\text{half}: S_1 \rightarrow S_1$.
This relationship between the half-space and exterior DtN maps remains when solving those problems numerically. To solve the exterior problem numerically, we proceed very similarly to how we would for the free-space problem. As we saw in the previous section on ABCs, we may place an ABC on $\partial \Omega$ for the free-space problem. For the exterior problem, we place this ABC just a little outside of $\partial \Omega$, as in Figure \ref{fig:ext}. We then enforce the boundary condition $u(\mathbf{x})=g(\mathbf{x})$ for $\mathbf{x}$ on $\partial \Omega$, and solve the Helmholtz equation outside of $\Omega$, inside the domain delimited by the ABC. We thus obtain the solution $u$ on a thin strip just outside of $\Omega$, and we can compute from that $u$ the DtN map $\partial_\nu u$ on $\partial \Omega$. This is why we need to put the ABC just a little outside of $\Omega$, and not exactly on $\partial \Omega$.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.45]{./figs/diag_ext.pdf}
\caption{The exterior problem: $\Omega$ is in grey, there is a thin white strip around $\Omega$, then an absorbing layer in blue.}
\label{fig:ext}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.45]{./figs/diag_half.pdf}
\caption{The half-space problem: there is a thin white strip below the bottom side of $\Omega$, and an absorbing layer (in blue) surrounds that strip on three sides.}
\label{fig:half}
\end{minipage}
\end{figure}
To solve the half-space problem numerically, we will need again an ABC, in order to reformulate the problem on a smaller, bounded domain. We can prescribe values of $u$ only along the bottom edge $S_1$ as in Figure \ref{fig:half}, leave a thin strip just below that edge, surround that thin strip by an ABC on all three other sides, and solve for the solution $u$. We can then, from the solution $u$ in this thin strip, calculate an approximate half-space DtN map on $S_1$. And we see how this approximate half-space DtN map restricted to $S_1$ will be similar (but not exactly the same) to the exterior DtN map restricted to that same edge $S_1$, given the same boundary data on $S_1$ and 0 boundary data on the other edges of $\partial \Omega$. The main difference between the two maps is that in the exterior case, the two corners of $S_1$ will cause scattering, some of which will affect the solution in the top half-space $\mathbb{R}_+^2$ as well.%
It is because of this connection between the half-space and the exterior DtN maps that we analyze further the half-space DtN map, and use insights obtained from this analysis to treat the exterior DtN map.
\subsection{Solution to the half-space problem in a uniform medium using the Green's function}\label{sec:hsG}
In section \ref{sec:Gfsol} we used the free-space Green's function $G$ to obtain the solution $u$ anywhere from integrating $G$ with the right-hand side $f$. We can do the same for the half-space problem. First, we define the Green's function of the half-space problem:
\begin{definition}\label{def:Greenhalf}
The half-space \emph{Green's function} for the Helmholtz equation \eqref{eq:hsHE} is the unique function $G(\mathbf{x},\mathbf{y})$ which solves the half-space problem with zero boundary data, that is $g=0$ in \eqref{eq:bchalf}, and with right-hand side the delta function $f(\mathbf{x})=\delta(|\mathbf{x}-\mathbf{y}|)$, for every fixed $\mathbf{y}$.
\end{definition}
This half-space Green's function, which we shall call $G_\text{half}$, is once again well-known, and can be obtained from the free-space Green's function by the reflection principle (p. 110 of \cite{FollandIntroPDE}), with $ \mathbf{x}=(x_1,x_2)$, $\mathbf{y}=(y_1,y_2)$ and the reflected point $\mathbf{x}'=(x_1,-x_2)$:
\begin{equation}\label{eq:Greenhalf}
G_\text{half}(\mathbf{x},\mathbf{y})=G(\mathbf{x},\mathbf{y})- G(\mathbf{x}',\mathbf{y}).
\end{equation}
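As a sanity check of \eqref{eq:Greenhalf}, the following short Python snippet (a mere illustration; the wavenumber and the points are arbitrary test values) verifies numerically that $G_\text{half}(\mathbf{x},\mathbf{y})$ vanishes as $\mathbf{x}$ approaches the interface $S=\{x_2=0\}$:
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

k = 25.0                                        # arbitrary test wavenumber
y = np.array([0.3, -0.7])                       # source in the bottom half-space
G = lambda x: 0.25j * hankel1(0, k * np.linalg.norm(x - y))
G_half = lambda x: G(x) - G(np.array([x[0], -x[1]]))  # reflect x across x2 = 0

for x2 in [-0.5, -1e-2, -1e-4]:
    print(x2, abs(G_half(np.array([0.6, x2]))))  # tends to 0 as x2 -> 0
\end{verbatim}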
Then, the solution $u$ to the half-space problem with $g=0$ is as expected:
\begin{equation}\label{eq:Gsolhalf}
u(\mathbf{x})=\int_{\mathbb{R}_-^2} G_\text{half}(\mathbf{x},\mathbf{y})f(\mathbf{y}) \ d\mathbf{y}, \ \mathbf{x} \in \mathbb{R}_-^2,
\end{equation}
where $\mathbb{R}_-^2$ is the bottom half-space, whose boundary $S=\left\{(x_1,x_2):x_2=0\right\}$ is the $x_1$ axis. This half-space Green's function will be useful to us, and in particular, we are interested in the following result of \cite{Hsweep}, slightly reformulated for our purposes:
\begin{lemma}\label{Glowrank}
\emph{Theorem 2.3 of \cite{Hsweep}.} Let $G_\text{half}$ be the half-space Green's function as defined above. Let $n \in \mathbb{N}$ be some discretization parameter, $n$ even, and let $h=1/n$. Let $Y=\left\{\mathbf{y}_j=(jh,-h), j=1, \ldots, n/2 \right\}$ and $X=\left\{\mathbf{x}_j=(jh,-h), j=n/2+1, \ldots, n \right\}$. Then $\left(G_\text{half}(\mathbf{x},\mathbf{y})\right)_{\mathbf{x} \in X, \ \mathbf{y} \in Y}$ is numerically low-rank. More precisely, for any $\varepsilon >0$, there exist a constant $J=O(\log k |\log\varepsilon |^2)$, functions $\left\{\alpha_p(\mathbf{x})\right\}_{1\leq p \leq J}$ for $\mathbf{x} \in X$ and functions $\left\{\beta_p(\mathbf{y})\right\}_{1\leq p \leq J}$ for $\mathbf{y} \in Y$ such that
\[ \left| G_\text{half}(\mathbf{x},\mathbf{y})-\sum_{p=1}^J \alpha_p(\mathbf{x})\beta_p(\mathbf{y}) \right| \leq \varepsilon \ \text{for} \ \mathbf{x} \in X, \mathbf{y} \in Y.\]
\end{lemma}
We provide this lemma here because the concept of an operator being \emph{numerically low-rank} away from its diagonal will be useful later.
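For illustration, one can observe this low-rank property numerically with a few lines of Python (the values of $n$, $k$ and the tolerance below are arbitrary test choices):
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

n, k, eps = 256, 50.0, 1e-6                  # arbitrary test values
h = 1.0 / n
G = lambda x, y: 0.25j * hankel1(0, k * np.hypot(x[0]-y[0], x[1]-y[1]))
G_half = lambda x, y: G(x, y) - G((x[0], -x[1]), y)

Y = [(j * h, -h) for j in range(1, n // 2 + 1)]
X = [(j * h, -h) for j in range(n // 2 + 1, n + 1)]
A = np.array([[G_half(x, y) for y in Y] for x in X])

s = np.linalg.svd(A, compute_uv=False)
print(np.sum(s > eps * s[0]), 'singular values above tolerance, out of', len(s))
\end{verbatim}
One observes a numerical rank significantly smaller than $n/2$, growing only slowly with $k$, in agreement with the lemma.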
For now, we define the half-space DtN map. We first want to find a kernel that gives us the solution on the bottom half-space when integrated against the boundary data $g$, rather than against the right-hand side $f$, which from now on we assume to be $0$.
Such a kernel exists, and is related to $G_\text{half}$. From equation (2.37) of \cite{FollandIntroPDE}, we may write
$$ u(\mathbf{x})=\int_{S} \partial_{\nu_\mathbf{y}}G_\text{half}(\mathbf{x},\mathbf{y}) \ u(\mathbf{y}) \ dS_\mathbf{y}, \ \mathbf{x} \in \mathbb{R}_-^2, $$
where $\nu$ is the downward-pointing normal to $S$ (outward with respect to the region above $S$, where $\Omega$ sits). Hence we obtain
\begin{equation}\label{eq:halfsol}
u(\mathbf{x})=-\int \left. \partial_{y_2}G_\text{half}\left(\mathbf{x},(y_1,y_2)\right)\right|_{y_2=0} \ u(y_1,0) \ dy_1, \ \mathbf{x} \in \mathbb{R}_-^2.
\end{equation}
We have found a different way of expressing the solution $u$ to the half-space problem using the half-space Green's function $G_\text{half}$.
\subsection{The analytical half-space DtN map}
Since we wish to mirror the exterior Dirichlet-to-Neumann map, for which the normal on the bottom side of $\Omega$ points downward, we define the half-space DtN map for $u$ on $x_2=0$ as $Du(x_1,0)=\left. \partial_{\nu_\mathbf{x}}u(x_1,x_2) \right|_{x_2=0}=-\left. \partial_{x_2} u(x_1,x_2) \right|_{x_2=0}$. We take the normal derivative in \eqref{eq:halfsol} and evaluate at $x_2=0$ (using the fact that $u(y_1,0)=g(y_1,0)$ is the boundary data):
$$Du(x_1,0)= \int \left. \partial_{x_2} \partial_{y_2} G_\text{half}\left((x_1,x_2),(y_1,y_2)\right) \right|_{x_2=0,y_2=0} \ g(y_1,0) \ dy_1, \ x_1 \in \mathbb{R}.$$
Hence we have found the kernel of the half-space DtN map to be, essentially, two derivatives of the half-space Green's function. Since we know that Green's function, we can use it to obtain an analytical expression for the half-space DtN map. Using the fact that $\frac{\partial}{\partial z} H_0^{(1)}(z)=-H_1^{(1)}(z)$, we have
$$\partial_{x_2} \partial_{y_2} G(\mathbf{x},\mathbf{y}) = \frac{i}{4} \partial_{x_2} \partial_{y_2} H_0^{(1)}(z)$$
with
$$z=k|\mathbf{x}-\mathbf{y}|=k\sqrt{(x_1-y_1)^2+(x_2-y_2)^2},$$
so that
$$\partial_{y_2} z = k^2\frac{y_2-x_2}{z}, \ \partial_{x_2} z = k^2\frac{x_2-y_2}{z}.$$
Hence
$$\partial_{y_2} G(\mathbf{x},\mathbf{y}) = \frac{i}{4} \frac{\partial z}{\partial y_2} \partial_z H_0^{(1)}(z) = \frac{i k^2 }{4} \frac{x_2-y_2}{z} H_1^{(1)}(z)$$
and
\begin{equation}\label{eq:Gxy}
\partial_{x_2} \partial_{y_2} G(\mathbf{x},\mathbf{y}) = \frac{i k^2}{4} \left( \frac{z-k^2(x_2-y_2)^2/z}{z^2} H_1^{(1)}(z) + k^2\left(\frac{x_2-y_2}{z}\right)^2 \partial_z H_1^{(1)}(z) \right).
\end{equation}
Also, we have
$$\partial_{x_2} \partial_{y_2} G(\mathbf{x}',\mathbf{y}) = \frac{i}{4} \partial_{x_2} \partial_{y_2} H_0^{(1)}(z')$$
with
$$z'=k\sqrt{(x_1-y_1)^2+(-x_2-y_2)^2},$$
so that
$$\partial_{y_2} z' = k^2\frac{y_2+x_2}{z'}, \ \partial_{x_2} z' = k^2\frac{y_2+x_2}{z'}.$$
Hence
$$\partial_{y_2} G(\mathbf{x}',\mathbf{y}) = \frac{i}{4} \frac{\partial z'}{\partial y_2} \partial_{z'} H_0^{(1)}(z') = \frac{i k^2}{4} \frac{-x_2-y_2}{z'} H_1^{(1)}(z')$$
and
\begin{equation}\label{eq:Gxpy}
\partial_{x_2} \partial_{y_2} G(\mathbf{x}',\mathbf{y}) = \frac{i k^2}{4} \left( \frac{-z'+k^2(x_2+y_2)^2/z'}{z'^2} H_1^{(1)}(z') -k^2\left(\frac{x_2+y_2}{z'}\right)^2 \partial_{z'} H_1^{(1)}(z') \right).
\end{equation}
Now set $x_2=y_2=0$ with $x_1 \neq y_1$, so that in \eqref{eq:Gxy} and \eqref{eq:Gxpy} we have $0 \neq k|\mathbf{x}-\mathbf{y}|= k|x_1-y_1|=\left. z \right|_{x_2=0,y_2=0}=\left. z' \right|_{x_2=0,y_2=0}$. Then
\begin{eqnarray*}
\partial_{x_2} \partial_{y_2} \left. G_\text{half}(\mathbf{x},\mathbf{y}) \right|_{x_2=0,y_2=0} &=& \partial_{x_2} \partial_{y_2} \left. G(\mathbf{x},\mathbf{y}) \right|_{x_2=0,y_2=0} - \partial_{x_2} \partial_{y_2} \left. G(\mathbf{x}',\mathbf{y}) \right|_{x_2=0,y_2=0} \\
&=& \left. \frac{i k^2}{2k|\mathbf{x}-\mathbf{y}|} H_1^{(1)}(k|\mathbf{x}-\mathbf{y}|) \right|_{x_2=0,y_2=0} \\
&=& \frac{i k^2}{2k|x_1-y_1|} H_1^{(1)}(k|x_1-y_1|).
\end{eqnarray*}
Thus we have found that the half-space DtN map kernel is:
\begin{equation}\label{eq:hsDtNkernel}
K(r)=\frac{i k^2}{2kr} H_1^{(1)}(kr)
\end{equation}
and thus the half-space DtN map is:
\begin{equation}\label{eq:hsDtNmap}
Du(x_1,0)= \int K(|x_1-y_1|) g(y_1,0) \ dy_1.
\end{equation}
As we shall prove in chapter \ref{ch:plr}, the half-space DtN map kernel \eqref{eq:hsDtNkernel} for a uniform medium is numerically low-rank away from its singularity, just as the half-space Green's function is from Lemma \ref{Glowrank}. This means that, if $x$ and $y$ are coordinates along the infinite boundary $S$, and $|x-y|\geq r_0$ for some constant $r_0$, then the DtN map kernel, a function of $x$ and $y$, can be approximated up to error $\varepsilon$ as a sum of products of functions of $x$ only with functions of $y$ only (\emph{separability}), and this sum is finite and in fact involves only a small number of summands (\emph{low-rank}). Hence, a numerical realization of that DtN map in matrix form should be compressible. In particular, blocks of that matrix which are away from the diagonal should have low column rank. We shall make all of this precise in chapter \ref{ch:plr}.
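The following Python sketch illustrates this compressibility on the kernel \eqref{eq:hsDtNkernel} itself: we sample $K(|x-y|)$ on two well-separated clusters of points (so that $|x-y| \geq r_0$ on the whole block) and inspect the singular values. All parameter values are arbitrary test choices.
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

N, k = 512, 100.0                            # arbitrary test values
h = 1.0 / N
x = np.arange(1, N + 1) * h
K = lambda r: 0.5j * k**2 * hankel1(1, k * r) / (k * r)

X, Y = x[x >= 0.75], x[x <= 0.25]            # well-separated clusters, r0 = 0.5
A = K(np.abs(X[:, None] - Y[None, :]))       # an off-diagonal block of the map
s = np.linalg.svd(A, compute_uv=False)
print(np.sum(s > 1e-8 * s[0]), 'significant singular values out of', len(s))
\end{verbatim}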
Recall that the goal of this thesis, to compress ABCs, will be achieved by approximating the DtN map in more general cases than the uniform half-space problem. Before we explain our approach for compressing an ABC in the next two chapters, we first explain the most straightforward way of obtaining a DtN map from an ABC, by eliminating the unknowns in the absorbing layer in order to obtain a reduced system on the interior nodes. This solution, however, is computationally impractical. It only serves to make explicit the relationship between ABCs and the DtN map.
\section{Eliminating the unknowns: from any ABC to the DtN map}\label{sec:strip}
We now wish to explain the fundamental relationship between discrete ABCs and the discrete DtN map: any discrete ABC can be reformulated as a discrete DtN map on $\partial \Omega$. We present two similar ways of obtaining that relationship using Schur complements: one for the half-space problem, and one for the free-space problem.
\subsection{Eliminating the unknowns in the half-space problem}
We consider the half-space problem, in which we care about the solution in the top half-plane. We assume from the SRC that the solution far away from $x_2=0$ in the bottom half is small. We want to eliminate unknowns starting from far away on the bottom side, where the solution is so small that we may ignore it as zero, and move towards the positive $x_2$ direction, in order to obtain an outgoing Dirichlet-to-Neumann map in the $-x_2$ direction. We assume $f=0$ everywhere in the bottom half-plane. Let $u_1$ denote the first line of unknowns at the far bottom, $u_2$ the next line, and so on. We first use a Schur complement to eliminate $u_1$ from the discretized (using the standard five-point stencil) Helmholtz system, which is as follows:
\[ \frac{1}{h^2}
\begin{pmatrix}
S_1 & P_1 \\
P_1^T & C_1
\end{pmatrix}
\begin{pmatrix} u_{1} \\ \vdots \end{pmatrix}
= \begin{pmatrix} 0 \\ \vdots \end{pmatrix}.
\]
We then define similarly the matrices $S_k$, $P_k$ and $C_k$ corresponding to having eliminated lines $u_1$ through $u_{k-1}$ from the system. Only the upper block of $C_1$, or of $C_k$ when subsequently eliminating line $u_k$, is modified by the Schur complement. Indeed, since we eliminate from bottom to top and we have a five-point stencil, the matrices $P_k$ will be
\[ P_k=
\begin{pmatrix} I & 0 & \cdots \end{pmatrix},
\]
and so we may write the following recursion for the Schur complements:
\begin{equation}\label{Srel}
S_{k+1}=M-S_k^{-1},
\end{equation}
where $M$ is the main block of the 2D Helmholtz operator multiplied by $h^2$, that is,
\[ M=
\begin{pmatrix}
-4+h^2k^2 & 1 & & \\
1 & -4+h^2k^2 & \ddots & \\
 & \ddots & \ddots & \\
\end{pmatrix} = -2I + Lh^2.\]
Here $L$ is the 2nd order discretization of the 1D Helmholtz operator in the $x_1$ direction, $\partial_{x_1}\partial_{x_1}+k^2$. Now, at each step we denote
\begin{equation}\label{StoD}
S_k=hD_k-I.
\end{equation}
Indeed, looking at the first block row of the reduced system once lines $u_1$ through $u_{k-1}$ have been eliminated, we have
\[S_ku_k +Iu_{k+1}=0\]
or
\[(hD_k-I)u_k+Iu_{k+1}=0.\]
From this we obtain the DtN map $D_k$ from a backward difference:
\[ \frac{u_k-u_{k+1}}{h}=D_ku_k. \]
Now we use (\ref{StoD}) inside (\ref{Srel}) to obtain
\[ hD_{k+1}-I=M+(I-hD_k)^{-1}=M+I+hD_k+h^2D_k^2+O(h^3)\]
or
\[hD_{k+1}-hD_k=Lh^2+h^2D_k^2+ O(h^3), \]
which we may rewrite to obtain a discretized Riccati equation (something similar was done in \cite{keller}):
\begin{equation}\label{DR}
\frac{D_{k+1}-D_k}{h}=L+D_k^2+ O(h),
\end{equation}
of which we may take the limit as $h \rightarrow 0$ to obtain the Riccati equation for the DtN map $D$ in the $-x_2$ direction:
\begin{equation}\label{R}
D_{x_2}=(\partial_{x_1}\partial_{x_1}+k^2)+D^2.
\end{equation}
This equation is to be evolved in the $+x_2$ direction, starting far away in the bottom half-space. Looking at the steady state, $D_{x_2}=0$, we get back $D^2=-\partial_{x_1}\partial_{x_1}-k^2$, which is consistent with the Helmholtz equation with $0$ right-hand side $f$ (which we have assumed to hold in the bottom half-space). Hence we conclude that the Riccati equation for the DtN map could be used to obtain the DtN map in the half-space case, and maybe even for more complicated problems; a small numerical illustration of the discrete recursion follows below. We leave this to future work, and turn to a very similar way of eliminating unknowns, but for the exterior DtN map with $\Omega=[0,1]^2$ this time. This technique will not give rise to a Riccati equation, but will help us understand how the DtN map can be used numerically to solve the free-space Helmholtz equation reformulated on $\Omega$.
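Here is the announced numerical illustration of the recursion \eqref{Srel} and of the extraction \eqref{StoD}, as a Python sketch. We stress that this is only an illustration: taking $S_1=M$ corresponds to zero data below the first line, the small imaginary shift added to $k$ stands in for the absorption assumed far below (without it, propagating modes make the iteration only marginally stable), and all parameter values are arbitrary.
\begin{verbatim}
import numpy as np

n = 100                                      # points per horizontal line
h = 1.0 / n
k = 30.0 * (1 + 0.05j)                       # complex shift mimics absorption
d = (-4 + (h * k) ** 2) * np.ones(n)
M = np.diag(d) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

S = M.copy()                                 # S_1 = M: zero data below line 1
for it in range(500):                        # iterate S_{k+1} = M - S_k^{-1}
    S_new = M - np.linalg.inv(S)
    incr, S = np.linalg.norm(S_new - S), S_new
print('last increment:', incr)               # small: a fixed point is reached
D = (S + np.eye(n)) / h                      # DtN map from S_k = h D_k - I
\end{verbatim}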
\subsection{Eliminating the unknowns in the exterior problem}
In this subsection, we assume we need an absorbing layer of large width, $w \geq N$ in number of points. We write the system for the discrete Helmholtz equation as
\begin{equation}\label{HEsys}
\begin{pmatrix}
A & P \\
P^T & C
\end{pmatrix}
\begin{pmatrix} u_{out} \\ u_{\Omega} \end{pmatrix}
= \begin{pmatrix} 0 \\ f_\Omega \end{pmatrix},
\end{equation}
with $A=\Delta_{layer} + k^2 I$ and $C=\Delta + k^2 I$, where $\Delta$ is overloaded to denote the discretization of the Laplacian operator, and $\Delta_{layer}$ the discretization of the Laplacian operator inside the absorbing layer. We wish to eliminate the exterior unknowns $u_{out}$ from this system in order to have a new system which only depends on the interior unknowns $u_{\Omega}$. The most obvious way of eliminating those unknowns is to form the Schur complement $S=C-P^TA^{-1}P$ of $A$ by any kind of Gaussian elimination. For instance, in the standard raster scan ordering of the unknowns, the computational cost of this method\footnote{The cost of the Schur complement procedure is dominated by that of Gaussian elimination to apply $A^{-1}$ to $P$. Gaussian elimination on a sparse banded matrix of size $s$ and band $b$ costs $O(sb^2)$, as can easily be inferred from Algorithm 20.1 of \cite{tref}.} is $O(w^4)$ --- owing to the fact that $A$ is a sparse banded matrix of size $4Nw+4w^2$, which is $O(w^2)$, and band $N+2w$. Alternatively, elimination of the unknowns can be performed by layer-stripping, starting with the outermost unknowns from $u_{out}$, until we eliminate the layer of points that is just outside of $\partial \Omega$. The computational cost will be $O(w^4)$ in this case as well. To see this, let $u_{w}$ be the points on the outermost layer, $u_{w-1}$ the points in the layer just inside of $u_{w}$, etc. Then we have the following system:
\[
\begin{pmatrix}
A_w & P_w \\
P_w^T & C_w
\end{pmatrix}
\begin{pmatrix} u_{w} \\ \vdots \end{pmatrix}
= \begin{pmatrix} 0 \\ \vdots \end{pmatrix}.
\]
Note that, because of the five-point stencil, $P_w$ has non-zeros exactly on the columns corresponding to $u_{w-1}$. Hence the matrix $P_w^TA_w^{-1}P_w$ in the first Schur complement $S_w=C_w-P_w^TA_w^{-1}P_w$ is non-zero exactly at the entries corresponding to $u_{w-1}$. It is then clear that, in the next Schur complement, to eliminate the next layer of points, the matrix $A_{w-1}$ (the block of $S_w$ corresponding to the points $u_{w-1}$) to be inverted will be full. For the same reason, every matrix $A_j$ to be inverted thereafter, for every subsequent layer to be eliminated, will be a full matrix. Hence at every step the cost of forming the corresponding Schur complement is at least on the order of $m^3$, where $m$ is the number of points in that layer. Hence the total cost of eliminating the exterior unknowns by layer stripping is approximately
\[ \sum_{j=1}^{w} (4(N+2j))^3 = O(w^4). \]
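(As a quick check of the arithmetic, the following Python lines evaluate this sum and confirm that doubling $w$ multiplies the cost by about $2^4=16$; the value of $N$ is illustrative.)
\begin{verbatim}
N = 100                                      # illustrative value
cost = lambda w: sum((4 * (N + 2 * j)) ** 3 for j in range(1, w + 1))
for w in [250, 500, 1000]:
    print(w, cost(2 * w) / cost(w))          # ratios approach 2**4 = 16
\end{verbatim}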
Similar arguments can be used for the Helmholtz equation in 3 dimensions. In this case, the computational complexity would be $O(w^3 (w^2)^2)=O(w^7)$ for the Schur complement method, or $\sim \sum_{j=1}^{w} (6(N+2j)^2)^3=O(w^7)$ for layer-stripping. Therefore, direct elimination of the exterior unknowns is quite costly. Some new insight will be needed to construct the DtN map more efficiently.
We now remark that, whether we eliminate exterior unknowns in one pass or by layer-stripping, we obtain a reduced system. It looks just like the original Helmholtz system on the interior unknowns $u_{\Omega}$, except for the top left block, corresponding to $u_{0}$ the unknowns on $\partial \Omega$, which has been modified by the elimination procedure. Hence with the help of some dense matrix $D$ we may write the reduced, $N^2$ by $N^2$ system as
\begin{equation}\label{HEred}
Lu=
\begin{pmatrix}
(hD-I)/h^2 & I/h^2 & 0 & \cdots \\
I/h^2 & & & \\
0 & & \left[ \; \Delta + k^2 I \; \right] & \\
\vdots & & & \\
\end{pmatrix}
\begin{pmatrix} u_{0} \\ u_{-1} \\ u_{-2} \\ \vdots \end{pmatrix}
= \begin{pmatrix} 0 \\ f_{-1} \\ f_{-2} \\ \vdots \end{pmatrix}
\end{equation}
and we have thus obtained an absorbing boundary condition which we may use on the boundary of $\Omega$, independent of the right-hand side $f$. Indeed, if we call $u_{-1}$ the first layer of points inside $\Omega$, we have $ (I-hD)u_{0} = u_{-1} $, or
\[
\frac{u_{0} - u_{-1}}{h}=Du_{0} ,
\]
a numerical realization of the DtN map in \eqref{eq:dtn-abc}, using the ABC of choice. Indeed, elimination can be used to reformulate any computationally intensive ABC, not just absorbing layers, into a realization of \eqref{eq:dtn-abc}. Any ABC is equivalent to a set of equations relating unknowns on the surface to unknowns close to the surface, and possibly auxiliary variables. Again, elimination can reduce those equations to relations involving only unknowns on the boundary and on the first layer inside the boundary, to obtain a numerical DtN map $D$. A drawback is that forming this matrix $D$ by elimination is prohibitive, as we have just seen.%
As for the complexity of solving the Helmholtz equation, reducing the ABC confers the advantage of making the number of nonzeros in the matrix $L$ (of Section \ref{sec:strip}) independent of the width of the absorbing layer or complexity of the ABC. After elimination of the layer, it is easy to see that $L$ has about $20N^2$ nonzero entries, instead of the $5N^2$ one would expect from a five-point stencil discretization of the Helmholtz equation, because the matrix $D$ (part of a small block of $L$) is full. Although obtaining a fast matrix-vector product for our approximation of $D$ could reduce the application cost of $L$ from $20N^2$ to something closer to $5N^2$, it should be noted that the asymptotic complexity does not change -- only the constant does.%
This thesis addresses those two problems, obtaining the DtN map and applying it fast. The next chapter, chapter \ref{ch:probing}, suggests adapting the framework of matrix probing in order to obtain $D$ in reasonable complexity. Subsequently, chapter \ref{ch:plr} presents a compression method which leads to a fast application of the DtN map.
\chapter{Conclusion}\label{ch:conclusion}
In this thesis, we have compressed the Dirichlet-to-Neumann map for the Helmholtz equation in two steps, using matrix probing followed by the partitioned low-rank matrix framework. This approach is useful for applications in heterogeneous media because absorbing boundary conditions can be very costly. Especially when one needs to solve the Helmholtz equation with many different sources, it makes sense to perform the precomputation required of our two-step scheme, to speed up subsequent solves.
Probing the DtN map $D$ ultimately makes sense in conjunction with a fast algorithm for its application. In full matrix form, $D$ costs $N^2$ operations to apply. With the help of a compressed representation, this count becomes $p$ times the application complexity of any atomic basis function $B_j$, which may or may not be advantageous depending on the particular expansion scheme. The better solution for a fast algorithm, however, is to post-process the compressed expansion from probing into a slightly less compressed, but more algorithmically favorable one, such as hierarchical or partitioned low-rank matrices. These types of matrix structures are not parametrized enough to lend themselves to efficient probing -- see for instance \cite{Hmatvec} for an illustration of the large number of probing vectors required -- but give rise to faster algorithms for matrix-vector multiplication. Hence the feasibility of probing, and the availability of a fast algorithm for matrix-vector multiplication, are two different goals that require different expansion schemes.
We found that in order to obtain an efficient compression algorithm, we need to perform some precomputations. The leading cost of those is equivalent to a small number of solves of the original problem, which we can afford if we plan to make many solves in total. Then, a matrix-vector application of the DtN map in a Helmholtz solver is nearly linear in the dimension of the DtN map. The worst-case complexity of a matrix-vector application is in fact super-linear. General oscillatory integrals can often be handled in optimal complexity with the butterfly algorithm \cite{butterflyFIO}. We did not consider this algorithm because currently published methods are not adaptive, nor can they handle diagonal singularities in a kernel-dependent fashion.
\newline
\newline
Let us summarize the different complexity scalings of our method, recalling that without our compressed ABC, each of, say, a thousand solves of the free-space problem would bear the full cost of the possibly very expensive ABC, with the precise complexity depending on the solver used. By contrast, the cost of precomputation for matrix probing is of a few (1 to 50) solves of the exterior problem -- roughly equivalent to a solve of the free-space problem with the costly ABC. The precomputation cost for PLR compression is $O(N R^2 |\mathcal{B}|)$ where $|\mathcal{B}|$ is the number of leaves in the compressed DtN map -- empirically $|\mathcal{B}|=O(\log N)$ with a worst case of $|\mathcal{B}|=O(\sqrt{N})$. Finally, the application cost of the compressed ABC is $O(N R |\mathcal{B}|)$, which typically leads to a speed-up of a factor of 30 to 100 for $N \approx 1000$. Using the compressed ABC in a solver, the computational cost of making a thousand solves is then reduced to that of a thousand solves where the ABC is not so costly anymore, but only amounts to a matrix-vector multiplication of nearly linear cost.
Of course, the method presented here has some limitations. The main ones come from the first step of the scheme, matrix probing, which requires a careful design of basis matrices. This is harder to do when the wavelength is comparable to the size of features in the medium, or when the medium has discontinuities.
\newline
\newline
In addition to the direct applications the proposed two-step scheme has in making a typical (iterative) Helmholtz solver faster in the presence of heterogeneous media, it could also improve solvers based on Domain Decomposition. Some interesting recent work on fast solvers for the Helmholtz equation has been along the direction of the Domain Decomposition Method (DDM). This method splits up the domain $\Omega$ into multiple subdomains. Depending on the method, the subdomains might or might not overlap. In non-overlapping DDM, transmission conditions are used to transfer information about the solution from one domain to the next. Transmission conditions are crucial for the convergence of the algorithm. After all, if there is a source in one subdomain, this will create waves which should travel to other parts of $\Omega$, and this information needs to be transmitted to other subdomains as precisely as possible.%
In particular, Stolk in \cite{stolk} has developed a nearly linear complexity solver for the Helmholtz equation, with transmission conditions based on the PML. Of course, this relies on the PML being an accurate absorbing boundary condition. One may expect that, for faster convergence in heterogeneous media, thicker PMLs might be necessary. In this case, a precomputation to compress the PML might prove useful. Other solvers which rely on transmission conditions are those of Engquist and Ying \cite{Hsweep,Msweep}, Zepeda-N\'u\~nez et al. \cite{zepeda}, and Vion and Geuzaine \cite{Vion}.
Another way in which the two-step numerical scheme presented in this thesis could be useful is for compressing the Helmholtz solution operator itself. Indeed, we have compressed ABCs to the DtN map, by using knowledge of the DtN map kernel. If one knows another kernel, one could expand that kernel using matrix probing then compress it with PLR matrices, and obtain a fast application of that solution operator.
\section*{Acknowledgments}
I would like to thank my advisor, Laurent Demanet, for his advice and support throughout my time as a graduate student.
\newline
\newline
Also, I would like to acknowledge professors I took classes from, including my thesis committee members Steven Johnson and Jacob White, and qualifying examiners Ruben Rosales and again Steven Johnson. I appreciate your expertise, availability and help.
\newline
\newline
I would like to thank collaborators and researchers with whom I discussed various ideas over the years; it was a pleasure to learn and discover with you.
\newline
\newline
Thanks also to my colleagues here at MIT in the mathematics PhD program, including office mates. %
It was a pleasure to get through this together.
\newline
\newline
Thanks to my friends, here and abroad, who supported me. From my decision to come to MIT, all the way to graduation, it was good to have you.
\newline
\newline
Finally, I thank my loved ones for being there for me always, and especially when I needed them.
\chapter{Introduction}\label{ch:intro}
This work investigates arbitrarily accurate realizations of absorbing (a.k.a. open, radiating) boundary conditions (ABC), including absorbing layers, for the 2D acoustic high-frequency Helmholtz equation in certain kinds of heterogeneous media. Instead of considering a specific modification of the partial differential equation, such as a perfectly matched layer, we study the broader question of compressibility of the nonlocal kernel that appears in the exact boundary integral form $Du=\partial_\nu u$ of the ABC. The operator $D$ is called the \emph{Dirichlet-to-Neumann} (DtN) map. This boundary integral viewpoint invites us to rethink ABCs as a two-step numerical scheme, where
\begin{enumerate}
\item a precomputation sets up an expansion of the kernel of the boundary integral equation, then
\item a fast algorithm is used for each application of this integral kernel at the open boundaries in a Helmholtz solver.
\end{enumerate}
This two-step approach may pay off in scenarios when the precomputation is amortized over a large number of solves of the original equation with different data. This framework is, interestingly, half-way between a purely analytical or physical method and a purely numerical one. It uses both the theoretical grounding of analytic knowledge and the intuition from understanding the physics of the problem in order to obtain a useful solution.
The numerical realization of ABCs typically involves absorbing layers that become impractical for difficult $c(\mathbf{x})$, or for high accuracy. We instead propose to realize the ABC by directly compressing the integral kernel of $D$, so that the computational cost of its setup and application becomes competitive when \eqref{eq:HE} is to be solved multiple times. Hence this thesis is not concerned with the design of a new ABC, but rather with the reformulation of existing ABCs that otherwise require a lot of computational work per solve. In many situations of practical interest we show that it is possible to ``learn'' the integral form of $D$, as a precomputation, from a small number of solves of the exterior problem with the expensive ABC. By ``small number'', we mean a quantity essentially independent of the number of discretization points $N$ along one dimension -- in practice as small as 1 or as large as 50. We call this strategy matrix probing. To show matrix probing is efficient, we prove a result on approximating $D$ in a special case, with inverse powers multiplied by a complex exponential. This leads us to the successful design of a basis for a variety of heterogeneous media.
Once we obtain a matrix realization $\tilde{D}$ of the ABC from matrix probing, we can use it in a Helmholtz solver. However, a solver would use matrix-vector multiplications to apply the dense matrix $\tilde{D}$. Hence the second step of our numerical scheme: we compress $\tilde{D}$ using partitioned low-rank (PLR) matrices to acquire a fast matrix-vector product. This second step can only come after the first, since it is the first step that gives us access to the entries of $D$ and allows us to use the compression algorithms of interest to us. We know we can use hierarchical or partitioned low-rank matrices to compress the DtN map because we prove the numerical low-rank property of off-diagonal blocks of the DtN map, and those algorithms favorably exploit low-rank blocks. Since PLR matrices are more flexible than hierarchical matrices, we use them to compress $\tilde{D}$ into $\overline{D}$ and apply it to vectors in complexity ranging from $O(N\log N)$ (more typical) to $O(N^{3/2})$ (worst case). %
The precomputation necessary to set up the PLR approximation is of similar complexity. This can be compared to the complexity of a dense matrix-vector product, which is $O(N^2)$.
In this introduction, we first motivate in section \ref{sec:motivate} the study of the Helmholtz equation in an unbounded domain by presenting important applications. We then give more details on the steps of our numerical scheme in section \ref{sec:struct}, which will make explicit the structure of this thesis.%
\section{Applications of the Helmholtz equation in an unbounded domain}\label{sec:motivate}
We consider the scalar Helmholtz equation in $\mathbb{R}^2$,
\begin{equation}\label{eq:HE0}
\Delta u(\mathbf{x})+\frac{\omega^2}{c^2(\mathbf{x})} u(\mathbf{x}) = f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2),
\end{equation}
with compactly supported $f$, the \emph{source}. We seek the solution $u$. The function $c$ in \eqref{eq:HE0} is called the \emph{medium}, $\omega$ the frequency. %
We consider the unique solution $u$ to \eqref{eq:HE0} determined by the Sommerfeld radiation condition (SRC), which demands that the solution be outgoing. %
We call the problem of finding a solution $u$ to \eqref{eq:HE0} with the SRC the \emph{free-space problem}. There are many applications that require a solution to the free-space Helmholtz problem, or the related free-space Maxwell's equations problem. To solve the free-space problem numerically, we reformulate the problem to a bounded domain $\Omega$ in which we shall find the solution $u$. One must then impose an \emph{absorbing boundary condition} (ABC) on the boundary $\partial \Omega$. ABCs are designed to absorb waves impinging on $\partial \Omega$ so the waves do not reflect back in $\Omega$ and pollute the solution there.
ABCs in heterogeneous media are often too costly, but the two-step scheme of this thesis addresses this problem. %
Let us now discuss applications of this two-step scheme.%
The main application is wave-based imaging, an inverse problem for the wave equation
\begin{equation}\label{eq:WE}
\Delta U(\mathbf{x},t) - \frac{1}{c^2(\mathbf{x})}\frac{\partial^2 U(\mathbf{x},t)}{\partial t^2} = F(\mathbf{x},t),
\end{equation}
and related equations. The Helmholtz equation is equivalent to the wave equation because we can decompose the solution $U$ and the source $F$ into time-harmonic components by a Fourier transform in time. %
Maxwell's equations, another form of wave equation, can also be reformulated as Helmholtz equations on the various components of the electric and magnetic fields.%
An inverse problem is as follows: instead of trying to find the solution $u$ of \eqref{eq:HE0} given $\omega$, $f$ and $c$, we do the opposite. In other words, we are given the solution $u$ at a set of receiver locations for some $\omega$ and various sources $f$. We try to determine the medium $c$ from that information. We are usually interested in (or can only afford) $c(\mathbf{x})$ for $\mathbf{x}$ in some part of the whole space, say some bounded domain $\Omega$, and absorbing boundary conditions are then necessary. To solve the inverse problem in $\Omega$, we need to use a lot of sources, say a thousand. The details of why many sources are useful, and how to solve the inverse problem, are not relevant to this thesis. What is relevant is that, in the course of solving the inverse problem for say the Helmholtz equation, it is needed to solve the free-space Helmholtz problem for all these sources, with ABCs, in heterogeneous media. We list here some imaging applications where the Helmholtz equation, or other types of wave equations, are used to solve inverse problems with ABCs in heterogeneous media, that is, where the numerical scheme in this thesis might prove useful.
\begin{itemize}
\item In seismic imaging, we acquire knowledge of the rock formations under the earth's surface. That is, we want to know the medium $c$ in which the waves propagate. The domain $\Omega$ might be a region surrounding a seismic fault \cite{seismic} where one wants to assess earthquake hazards, or a place where one might like to find hydrocarbons, other minerals, or even geothermal energy \cite{geothermal}. ABCs are needed to simulate the much larger domain in which $\Omega$ is embedded, that is, the Earth. Sources (which might be acoustic or electromagnetic) and receivers might be placed on the surface of the Earth or inside a well, or might be towed by a boat or placed at the bottom of the sea. An earthquake can also be used as a source.
\item Ultrasonic testing \cite{ultrasonic} is a form of non-destructive testing where very short ultrasonic pulses are sent inside an object. The object might be, say, a pipe that is being tested for cracks or damage from rust, or a weld being tested for defects. The received reflections or refractions from those ultrasonic pulses are used for diagnosis on the object of interest. $\Omega$ might be the object itself, or a part of it. Ultrasonic imaging \cite{3dmedimag} is also used in medicine to visualize fetuses, muscles, tendons and organs. The domain $\Omega$ is then the body part of interest.
\item Synthetic-aperture radar imaging is used to visualize a scene by sending electromagnetic pulses from an antenna aboard a plane or satellite \cite{radar}. It is also used to detect the presence of an object \cite{borden} far away such as a planet, or through clouds or foliage.
\end{itemize}
An entirely different application of the Helmholtz equation resides in photonics \cite{JohnsonPhot}. The field of photonics studies the optical properties of materials. In particular, one tries to construct photonic crystals, periodic media with desired properties depending on the specific application. It is of course useful to first test photonic crystals numerically to observe their properties before actually building them. However, since the crystals are modeled as infinite, absorbing boundary conditions need to be used to reformulate the problem on a bounded domain. This is where our two-step numerical scheme can be used.
\section{A two-step numerical scheme for compressing ABCs}\label{sec:struct}
The next chapter, chapter \ref{ch:back}, will present theoretical facts about the Helmholtz equation and related concepts, which will be useful at various points in later chapters for developing the proposed framework. Then, chapter \ref{ch:probing} will present the first step of the scheme: matrix probing and how it is used for providing a rapidly converging expansion of the DtN map. Next, chapter \ref{ch:plr} will contain the second part of the proposed two-step scheme: the compression of the matrix probing expansion, for fast evaluation, using partitioned low-rank matrices. After the precomputations of matrix probing and PLR matrices, we finally obtain a fast and accurate compressed absorbing boundary condition. Chapter \ref{ch:conclusion} concludes this thesis with a review of the main new ideas presented, identification of further research directions and open problems, and also an overview of how the presented framework could be used in other contexts. A summary of the steps involved in the presented two-step numerical scheme is available in appendix \ref{ch:steps}.
\subsection{Background material}
We begin by introducing again the problem we wish to solve, the \emph{free-space problem} for the Helmholtz equation. This problem is defined on an unbounded domain such as $\mathbb{R}^2$, but can be solved by reformulating to a bounded domain we will call $\Omega$. One way to do this is to impose what we call the \emph{exterior Dirichlet-to-Neumann map} (DtN map) on $\partial \Omega$. The DtN map $D$ relates Dirichlet data $u$ on $\partial \Omega$ to the normal derivative $\partial_\nu u$ of $u$, where $\nu$ is the unit outward vector to $\partial \Omega$: $Du=\partial_\nu u$. This allows us to solve the Helmholtz equation on $\Omega$ to obtain a solution $u$ which coincides, inside $\Omega$, with the solution to the free-space problem.
We shall see that any \emph{absorbing boundary condition} (ABC) or Absorbing Layer (AL) can be used to obtain the DtN map. As we mentioned before, an ABC is a special condition on the boundary $\partial \Omega$ of $\Omega$, which should minimize reflections of waves reaching $\partial \Omega$. %
Different ABCs work differently however, and so we will review existing techniques for constructing ABCs for the Helmholtz equation, and see again how they are computationally expensive in a variable medium. This suggests finding a different, more efficient, way of obtaining the exterior DtN map from an ABC. To attack this problem, we consider the \emph{half-space} DtN map, known analytically in a constant medium. This half-space DtN map is actually quite similar to the exterior DtN map, at least when $\Omega$ is a rectangular domain. Indeed, the interface along which the DtN map is defined for the half-space problem is a straight infinite line $x_2=0$. For the exterior problem on a rectangle, the DtN map is defined on each of the straight edges of $\Omega$. Hence the restriction of the exterior DtN map to one such edge of $\Omega$, which say happens to be on $x_2=0$, behaves similarly to the restriction of the half-space DtN map to that same edge. The main difference between those two maps is created by scattering from corners of $\partial \Omega$. Both in chapter \ref{ch:probing} and \ref{ch:plr}, we will prove facts about the half-space DtN map which will inform our numerical scheme for the exterior DtN map.
We end the chapter of background material with how the most straightforward way of obtaining the exterior DtN map from an ABC, \emph{layer-stripping}, is prohibitively slow, especially in variable media. This will also explain how, even if we have an efficient way of obtaining the DtN map from an ABC, applying it at every solve of the Helmholtz equation will be slow. In fact, once we have the map $D$ in $Du=\partial_\nu u$, we need to apply this map to vectors inside a Helmholtz solver. But $D$ has dimension approximately $N$, if $N$ is the number of points along each direction in $\Omega$, so matrix-vector products with $D$ have complexity $N^2$. This is why we have developed the two-step procedure presented in this thesis: first an expansion of the DtN map using matrix probing, then a fast algorithm using partitioned low-rank matrices for the application of the DtN map.
\subsection{Matrix probing}
Chapter \ref{ch:probing} is concerned with the first step of this procedure, namely, setting up an expansion of the exterior DtN map kernel, in a precomputation. This will pave the way for compression in step two, presented in chapter \ref{ch:plr}.
Matrix probing is used to find an expansion of a matrix $M$. For this, we assume an expansion of the type
\[ M \approx \tilde{M} = \sum_{j=1}^{p} c_jB_j, \]
where the \emph{basis matrices} $\left\{B_j\right\}$ are known, and we wish to find the coefficients $\left\{c_j\right\}$. We do not have access to $M$ itself, but only to products of $M$ with vectors. In particular, we can multiply $M$ with a random vector $z$ to obtain
\[ w=Mz \approx \sum_{j=1}^p c_j B_j z = \Psi_z \, \mathbf{c}.\]
We can thus obtain the vector of coefficients $\mathbf{c}$ by applying the pseudoinverse of $\Psi_z$ to $Mz=w$.
For matrix probing to be an efficient expansion scheme, we need to carefully choose the basis matrices $\left\{B_j\right\}$. Here, we use knowledge of the half-space DtN map to inform our choices: a result on approximating the half-space DtN map with a particular set of functions, inverse powers multiplied by a complex exponential. Hence our basis matrices are typically a discretization of the kernels
\begin{equation}%
B_j(x,y)= \frac{e^{ik|x-y|}}{(h+|x-y|)^{j/2}},
\end{equation}
where $h$ is our discretization parameter, $h=1/N$. The need for a careful design of the basis matrices can however be a limitation of matrix probing. In this thesis, we have also used insights from geometrical optics to derive basis matrices which provide good convergence in a variety of cases. Nonetheless, in a periodic medium such as a photonic crystal, where the wavelength is as large as the features of the medium, the geometrical optics approximation breaks down. Instead, we use insights from the solution in a periodic medium, which we know behaves like a Bloch wave, to design basis matrices. However, the periodic factor of a Bloch wave does not lead to very efficient basis matrices since it is easily corrupted by numerical error. Another limitation is that a medium which has discontinuities creates discontinuities in the DtN map as well, again forcing a more careful design of basis matrices.
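To make the probing procedure concrete, here is a toy Python sketch: we build a synthetic matrix as an exact combination of basis matrices of the form above, probe it with a single random vector, and recover the coefficients by least squares. In practice the product $Mz$ comes from an exterior solve, and the sizes and coefficients below are made up for the demonstration.
\begin{verbatim}
import numpy as np

N, p, k = 200, 6, 40.0                       # made-up sizes and wavenumber
h = 1.0 / N
x = np.arange(1, N + 1) * h
r = np.abs(x[:, None] - x[None, :])
B = [np.exp(1j * k * r) / (h + r) ** (j / 2.0) for j in range(1, p + 1)]

c_true = np.random.randn(p) + 1j * np.random.randn(p)
M = sum(c * Bj for c, Bj in zip(c_true, B))  # stand-in for a DtN block

z = np.random.randn(N)                       # the probing vector
w = M @ z                                    # in practice: one exterior solve
Psi = np.column_stack([Bj @ z for Bj in B])
c = np.linalg.lstsq(Psi, w, rcond=None)[0]   # pseudoinverse applied to w
print(np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
\end{verbatim}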
Once we do have basis matrices, we can probe the DtN map $D$, block by block. Indeed, we choose $\Omega=[0,1]^2$, and so $\partial \Omega$ consists of the four edges of the square\footnote{The framework does carry over to polygonal domains easily, but we do not cover this here.}, numbered counter-clockwise. Hence, $D$ has a 4 by 4 block structure corresponding to restrictions of the DtN map to each pair of edges. To obtain the product of the $(i_M,j_M)$ block $M$ of $D$ with a random vector, we need to solve what we call the \emph{exterior problem}: we put a random Dirichlet boundary condition on edge $j_M$ of $\partial \Omega$, solve the Helmholtz equation outside of $\Omega$ using an ABC, and take the normal derivative of the solution on edge $i_M$ of $\partial \Omega$.
In situations of practical interest, we obtain the DtN map as a precomputation using matrix probing with leading complexity of about 1 to 50 solves of the exterior problem with the expensive ABC -- a number of solves essentially independent of the number of discretization points $N$. A solve of the exterior problem is essentially equivalent to a solve of the original problem with that same expensive ABC.
We then present a careful study of using matrix probing to expand the DtN map in various media, use the matrix probing expansion to solve the Helmholtz equation, and document the complexity of the method.
In the next chapter, we will present a fast algorithm for applying the DtN map's expansion found by matrix probing.
\subsection{Partitioned low-rank matrices}
In chapter \ref{ch:plr}, we produce a fast algorithm for applying the DtN map $D$ to vectors, since this is an operation that a Helmholtz solver needs. Again, we needed matrix probing first, to obtain an explicit approximation of $D$, to be compressed in order to obtain a fast matrix-vector product. Indeed, we do not have direct access to the entries of $D$ at first, but rather we need to solve a costly problem, the exterior problem, every time we need a multiplication of $D$ with a vector. Now that we have an explicit representation of $D$ from matrix probing, with cost $O(N^2)$ for matrix-vector products, we can compress that representation to obtain a faster product.
We have mentioned before how the half-space DtN map is similar to the exterior DtN map. In chapter \ref{ch:plr}, we will prove that the half-space DtN map kernel $K$ in constant medium is numerically low rank. This means that, away from the singularity of $K$, the function $K(|x-y|)$ can be written as a short sum of functions of $x$ multiplied by functions of $y$:
\[ K(|x-y|)=\sum_{j=1}^J \Psi_j(x)\chi_j(y) + E(x,y)\]
with error $E$ small. The number of terms $J$ depends logarithmically on the error tolerance and on the frequency $k$. This behavior carries over to some extent to the exterior DtN map. Thus we use a compression algorithm which can exploit the low-rank properties of blocks of $D$ that are not on the diagonal, that is, that are away from the singularity.
A well-known such compression framework is called \emph{hierarchical matrices}. Briefly, hierarchical matrices adaptively divide or compress diagonal blocks. We start by dividing the matrix in 4 blocks of half the original matrix's dimension. The two off-diagonal blocks are compressed: we express them by their singular value decomposition (SVD), truncated after $R$ terms. The two diagonal blocks are divided in four again, and we recurse: off-diagonal blocks are compressed, diagonal blocks are divided, etc. We do not divide a block if compressing it will result in less error than our error tolerance $\varepsilon$ -- this is adaptivity. The parameters $R$ and $\varepsilon$ are chosen judiciously to provide a fast matrix-vector multiplication with small error.
However, the hierarchical matrix framework can only apply to matrices with a singularity along the diagonal. This is not useful to us, since for example a block of $D$ corresponding to two consecutive edges of $\partial \Omega$ will have the singularity in a corner. We thus decide to use partitioned low-rank (PLR) matrices for compressing and applying $D$. PLR matrices are more flexible than hierarchical matrices: when we divide a block, or the original matrix, into 4 sub-blocks, any of those four sub-blocks can be divided again. Any block that is not divided any further is called a \emph{leaf}, and $\mathcal{B}$ is the set of all leaves. If a singularity is in a corner, then the PLR compression algorithm will automatically divide blocks close to that corner, but will compress farther blocks since they have lower numerical rank. Note that we use the randomized SVD \cite{randomSVD} to speed up the compression, so that its complexity is of order $O(NR^2|\mathcal{B}|)$, where $|\mathcal{B}|$ is often on the order of $\log N$ but will be $\sqrt{N}$ in the worst case. Similarly, the complexity of a matrix-vector product is usually $O(NR|\mathcal{B}|)$, which for $N\approx 1000$ provides a speed-up over a dense matrix-vector product of a factor of 30 to 100. We also show that the worst case complexity of a matrix-vector product in the PLR framework is $O(N^{3/2})$. This should be compared to the complexity of a dense matrix-vector product, which is $O(N^2)$.
We may then use PLR matrices to compress the DtN map, and use this compressed map in a Helmholtz solver. We verify the complexity of the method, and present results on the solution of the Helmholtz equation with a probed and compressed DtN map.
\subsection{Summary of steps}
In appendix \ref{ch:steps}, we present a summary of the various operations involved in implementing the presented two-step algorithm.
\chapter{Partitioned low-rank matrices for compressing the Dirichlet-to-Neumann map}\label{ch:plr}
In the previous chapter, we explained in detail the first step of two in our numerical scheme for compressing ABCs. This first step consisted in approximating the DtN map $D$ by $\tilde{D}$ using matrix probing. To do this, we considered each block $M$ of $D$ separately, corresponding to an edge-to-edge restriction of $D$. We approximated each $M$ by a matrix probing expansion. %
We saw how we could obtain an accurate approximation of $M$ using appropriate basis matrices $B_j$. We were left with the task of producing a fast algorithm for applying the resulting $\tilde{M}$ to vectors, since this is an operation that a Helmholtz solver needs. This is what we do in the current chapter, by compressing $\tilde{M}$ into a new $\overline{M}$ which can then be applied fast.
We also explained in the previous chapter why we needed probing: to obtain an explicit approximation of $D$, to be compressed in order to obtain a fast matrix-vector product. Indeed, we do not have direct access to the entries of $D$, but rather we need to solve the costly exterior problem every time we need a multiplication of $D$ with a vector. We have already mentioned how the approach of Lin et al. \cite{Hmatvec}, for example, would require $O(\log N)$ such solves, with a large constant.
We alluded to the possibility of compressing each block $M$ (or $\tilde{M}$) of $D$ (or $\tilde{D}$) when we presented background material in chapter \ref{ch:back}. Indeed, we discussed the fact that the half-space Green's function $G_\text{half}$ in a constant medium is separable and low-rank away from its singularity. Because the half-space DtN map kernel $K$ is simply two derivatives of $G_\text{half}$, we expect $K$ to also be separable and low-rank, and we prove this at the end of the present chapter, in section \ref{sec:septheo}. See also the numerical verification of that theorem in section \ref{sec:sepnum}. Because the half-space DtN map is strongly related to the exterior DtN map as we mentioned in chapter \ref{ch:back}, we expect the exterior DtN map kernel to also be separable and low-rank, at least in favorable conditions such as a constant medium. But first, as in the previous chapter, we begin by explaining the technique we use, partitioned low-rank (PLR) matrices, in section \ref{sec:introplr}. Compression of an $N$ by $N$ matrix into the PLR framework is nearly linear in $N$, and so is matrix-vector multiplication. We then show the results of using this PLR technique on test cases in section \ref{sec:usingplr}. %
\section{Partitioned low-rank matrices}\label{sec:introplr}
As we have discussed in chapter \ref{ch:back}, when an operator is separable and low-rank, we expect its numerical realization to have low-rank blocks under certain conditions. In our case, the DtN map kernel $K(x-y)$ is separable and low-rank away from the singularity $x=y$ and so we expect its numerical realization to have low-rank blocks away from its diagonal. This calls for a compression scheme such as the hierarchical matrices of Hackbusch et al. \cite{hmat1}, \cite{hmat2}, \cite{hmatlect}, to compress off-diagonal blocks. However, because we expect higher ranks away from the singularity in variable media, and because different blocks of the DtN map will show a singularity elsewhere than on the diagonal, we decide to use a more flexible scheme called \emph{partitioned low-rank} matrices, or PLR matrices, from \cite{Jones}.
\subsection{Construction of a PLR matrix}
PLR matrices are constructed recursively, using a given tolerance $\epsilon$ and a given maximal rank $R_{\text{max}}$. We start at the top level, level 0, with the matrix $M$, which is $N$ by $N$ where $N$ is a power of two\footnote{Having a square matrix with dimensions that are powers of two is not necessary, but makes the discussion easier.}. We wish to compress $M$ (in the next sections we will use this compression scheme on probed blocks $\tilde{M}$ of the DtN map, but we use $M$ here for notational simplicity). We first ask for the numerical rank $R$ of $M$. The numerical rank is defined through the Singular Value Decomposition and the tolerance $\epsilon$, as the number $R$ of singular values that are larger than or equal to the tolerance. If $R>R_{\text{max}}$, we split the matrix into four blocks and recurse to the next level, level 1, where blocks are $N/2$ by $N/2$. If instead $R \leq R_{\text{max}}$, we compress $M$ by truncating its SVD after $R$ terms. That is, writing the SVD of $M$ as $M=U\Sigma V^*=\sum_{j=1}^{N} U_j \sigma_j V_j^*$, where $U$ and $V$ are orthonormal matrices with columns $\left\{U_j\right\}_{j=1}^{N}$ and $\left\{V_j\right\}_{j=1}^{N}$, and $\Sigma$ is the diagonal matrix of decreasing singular values $\Sigma=\text{diag}(\sigma_1, \sigma_2, \ldots, \sigma_N)$, we compress $M$ to $\overline{M}=\sum_{j=1}^{R} U_j \sigma_j V_j^*$.
If we need to split $M$ and recurse down to the next level, we do the following. First, we split $M$ into four square blocks of the same size: take the first $N/2$ rows and columns to make the first block, then take the first $N/2$ rows and last $N/2$ columns to make the second block, etc. We then apply the step described in the previous paragraph to each block of $M$, checking the block's numerical rank and compressing it or splitting it depending on that numerical rank. Whenever we split up a block, we label it as ``hierarchical'', and call its four sub-blocks its \emph{children}. Whenever a block was not divided, and hence compressed instead, we label it as ``compressed'', and we may call it a ``leaf'' as well.
If a block has dimension $R_{\text{max}}$ by $R_{\text{max}}$, then its numerical rank is at most $R_{\text{max}}$, and so once blocks have dimensions smaller than or equal to the maximal desired rank $R_{\text{max}}$, we can stop recursing and store the blocks directly. However, especially if $R_{\text{max}}$ is large, we might still be interested in compressing those blocks using the SVD. This is what we do in our code, and we label such blocks as ``compressed'' as well. When we wish to refer to how blocks of a certain matrix $M$ have been divided when $M$ was compressed in the PLR framework, or in particular to the set of all leaf blocks and their positions in $M$, we refer to the ``structure'' of $M$. We see then that the structure of a PLR matrix will have at most $L$ levels, where $N/R_\text{max}=2^L$, so $L=\log_2{(N/R_\text{max})}$.
\subsubsection{Implementation details}
Algorithm \ref{alg:PLR_matrix} presents pseudocode for the construction of a PLR matrix from a dense matrix. In practice, when we compute the SVD of a block, we use the randomized SVD\footnote{Theoretically, this randomized SVD has a failure probability, but we can choose a parameter to make this probability on the order of $10^{-16}$, and so we ignore the fact that the randomized SVD could fail.} \cite{randomSVD}. This allows us to use only a few matrix-vector multiplies between the block (or its transpose) and random vectors to form an approximate reduced SVD. This is a faster way of producing the SVD, and thus also of finding out whether the numerical rank of the block is larger than $R_{\text{max}}$. The randomized SVD requires about 10 more random matrix-vector multiplies than the desired maximal rank $R_{\text{max}}$. This is why, in Algorithm \ref{alg:PLR_matrix}, the call to \emph{svd} has two arguments: the block we want to find the SVD of, and the maximal desired rank $R_{\text{max}}$. The randomized SVD algorithm then uses 10 more random vectors than the quantity $R_{\text{max}}$ and returns an SVD of rank $R_{\text{max}}+1$. We use the $(R_{\text{max}}+1)^{\text{st}}$ singular value in $\Sigma$ to test whether we need to split the block and recurse or not.
\begin{algorithm} Compression of matrix $M$ into Partitioned Low Rank form, with maximal rank $R_{\text{max}}$ and tolerance $\epsilon$ \label{alg:PLR_matrix}
\begin{algorithmic}[1]
\Function{H = PLR}{$M$, $R_{\text{max}}$, $\epsilon$}
\State $[U, \Sigma ,V] = \texttt{svd}(M, R_{\text{max}})$ \Comment{Randomized SVD}
\If{ $\exists R \in \left\{1,2,\ldots,R_{\text{max}}\right\} : \Sigma(R+1,R+1) < \epsilon$}
\State Let $R$ be the smallest such integer.
\State \texttt{H.data = \{$U(:,1:R)\cdot \Sigma(1:R,1:R)$, $V(:,1:R)^{*}$\} }
\State \texttt{H.id = 'c' } \Comment{This block is ``compressed''}
\Else \Comment{The $M_{ij}$'s are defined in the text}
\For{i = 1:2}
\For{j = 1:2}
\State \texttt{H.data\{i,j\} = } PLR($M_{ij}$, $R_{\text{max}}$, $\epsilon$ ) \Comment Recursive call
\EndFor
\EndFor
\State \texttt{H.id = 'h' } \Comment{This block is ``hierarchical''}
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
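For concreteness, here is a plain-Python transcription of Algorithm \ref{alg:PLR_matrix}. It is a sketch, not the code used for the experiments in this thesis: it calls the full (rather than randomized) SVD, and stores the tree as nested dictionaries.
\begin{verbatim}
import numpy as np

def plr(M, Rmax, eps):
    """Compress M into PLR form: leaves ('c') hold truncated SVD factors,
    hierarchical nodes ('h') hold four children."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    R = int(np.sum(s >= eps))                # numerical rank at tolerance eps
    if R <= Rmax:                            # leaf: keep U*Sigma and V^*
        return {'id': 'c', 'U': U[:, :R] * s[:R], 'Vh': Vh[:R, :]}
    m = M.shape[0] // 2                      # else split into 4 and recurse
    return {'id': 'h',
            'kids': [[plr(M[:m, :m], Rmax, eps), plr(M[:m, m:], Rmax, eps)],
                     [plr(M[m:, :m], Rmax, eps), plr(M[m:, m:], Rmax, eps)]]}
\end{verbatim}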
\subsubsection{Complexity}
The complexity of this algorithm depends on the complexity of the SVD algorithm we use. The randomized SVD has complexity $O(N_B R_\text{max}^2)$ where $N_B$ is the dimension of block $B$ whose SVD we are calculating. This can be much better, especially for larger blocks, than standard SVD algorithms which have complexity $O(N_B^3)$. The total complexity of the compression algorithm will then depend on how many blocks of which size and rank we find the randomized SVD of. We shall discuss this in more detail when we discuss also the complexity of a matrix-vector product for special structures.
\subsubsection{Error analysis}
To understand the error we make by compressing blocks in the PLR framework, we first note that by compressing a block $M$ to its rank-$R$ approximation $\overline{M}=\sum_{j=1}^{R} U_j \sigma_j V_j^*$, obtained by truncating its SVD, we make an error of $\sigma_{R+1}$ in the $L_2$ norm. That is,
\[ \|M-\overline{M} \|_2 = \| \sum_{j=R+1}^{N} U_j \sigma_j V_j^* \|_2 = \sigma_{R+1} . \]
Hence, by compressing a block, we make an $L_2$ error for that block of at most $\varepsilon$ because we make sure that $\sigma_{R+1} \leq \varepsilon$. Of course, errors from various blocks will compound to affect the total error we make on matrix $M$. We shall mention this in more detail when we discuss particular structures.
The relative Frobenius error we make between $M$ and $\overline{M}$, the compressed approximation of $M$, will usually be larger than $\epsilon$ because of two factors. First of all, as we just saw, the PLR compression algorithm uses the $L_2$ norm. To use the Frobenius norm when deciding whether to compress or divide a block, we would need access to all the singular values of each block. This would be possible using a typical SVD algorithm, but quite costly. Hence we use the randomized SVD, which is faster but with which we are forced to use the $L_2$ norm. Another factor we have yet to mention is that errors from different blocks will compound to make the total error between $M$ and $\overline{M}$ larger than the error between any individual blocks of $M$ and $\overline{M}$. This of course depends on how many blocks there are in the structure of any particular PLR matrix. We will talk more about this in subsection \ref{sec:structncomp}, where we explore the complexity of the compression and matrix-vector algorithms of the PLR framework. But first, we introduce matrix-vector products.
\subsection{Multiplication of a PLR matrix with a vector}
To multiply a PLR matrix $M$ with a vector $v$, we again use recursion. Starting at the highest level block, the whole matrix $M$ itself, we ask whether this block has been divided into sub-blocks. If not, we multiply the block directly with the vector $v$. If the block has been subdivided, then we ask for each of its children whether those have been subdivided. If not, we multiply the sub-block with the appropriate restriction of $v$, and add the result to the correct restriction of the output vector. If so, we recurse again. Algorithm \ref{alg:PLR_matvec} presents the pseudocode for multiplying a vector by a PLR matrix. The algorithm to left-multiply a vector by a matrix is similar; we do not show it here.
\begin{algorithm} Multiplication of a PLR matrix $H$ with column vectors $x$ \label{alg:PLR_matvec}
\begin{algorithmic}[1]
\Function{y = matvec}{H,x}
\If{ \texttt{H.id == 'c' } }
\State \texttt{y = H.data\{1\}$\cdot$(H.data\{2\}$\cdot$x) }
\Else
\State $\texttt{y}_{1}\texttt{ = }\text{matvec}\texttt{(H.data\{1,1\},}\texttt{x}(\texttt{1:end/2,:}))$
\State $\qquad +\text{matvec}\texttt{(H.data\{1,2\},}\texttt{x}(\texttt{end/2+1:end,:}))$
\State $\texttt{y}_{2}\texttt{ = }\text{matvec}\texttt{(H.data\{2,1\},}\texttt{x}(\texttt{1:end/2,:}))$
\State $\qquad +\text{matvec}\texttt{(H.data\{2,2\},}\texttt{x}(\texttt{end/2+1:end,:}))$
\State \texttt{y = $ \left[ \begin{array}{c}
\texttt{y}_1\\
\texttt{y}_2
\end{array}
\right ]$ }
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
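Continuing the Python sketch from earlier (with the same, purely illustrative, data layout), the recursion of Algorithm \ref{alg:PLR_matvec} becomes:
\begin{verbatim}
import numpy as np

def matvec(node, x):
    # x is an (n, k) array of column vectors.
    if node.kind == 'c':
        US, Vh = node.data            # the factors U*Sigma and V^*
        return US @ (Vh @ x)
    n2 = x.shape[0] // 2
    top = matvec(node.data[0][0], x[:n2]) + matvec(node.data[0][1], x[n2:])
    bot = matvec(node.data[1][0], x[:n2]) + matvec(node.data[1][1], x[n2:])
    return np.concatenate([top, bot])
\end{verbatim}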
\subsubsection{Complexity}
The complexity of this algorithm is easily understood. Recall that we store a block that is not divided not as a full matrix, but as two factors: the orthonormal matrix $U$ of its SVD, and the product $\Sigma V^*$ of its SVD. Every time we multiply such an undivided $N_B$ by $N_B$ block $B$ with the corresponding restriction $\tilde{v}$ of $v$, we first multiply $\tilde{v}$ with $\Sigma V^*$, and then multiply the result with $U$. Let $R_B \leq R_\text{max}$ be the numerical rank of that block. Then, we first make $N_BR_B$ multiplication operations and $(N_B-1)R_B$ addition operations for the product $\Sigma V^* \tilde{v}$, and then $R_BN_B$ multiplication operations and $(R_B-1)N_B$ addition operations for the product of that with $U$. Hence we make approximately $4N_BR_B$ operations per block, where again $N_B$ is the dimension of the block $B$ and $R_B$ is its numerical rank.
The total number of operations for multiplying a vector $v$ with a PLR matrix $M$, then, is about
\begin{equation}\label{eq:comp}
\sum_{B \text{ is compressed}} 4 N_B R_B ,
\end{equation}
where we sum over all ``compressed'' blocks.
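As a sketch, using the illustrative \texttt{PLRNode} class from before, the count \eqref{eq:comp} can be evaluated directly on a compressed tree:
\begin{verbatim}
def matvec_cost(node, n):
    # Sum 4 * N_B * R_B over all compressed blocks of an n-by-n PLR matrix.
    if node.kind == 'c':
        r = node.data[1].shape[0]     # numerical rank R_B of this block
        return 4 * n * r
    return sum(matvec_cost(node.data[i][j], n // 2)
               for i in range(2) for j in range(2))
\end{verbatim}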
Evidently there is a trade-off here. Asking for a small maximal rank $R_\text{max}$ may force blocks to be subdivided a lot. We then have small $R_B$'s and small $N_B$'s, but a lot of blocks. On the other hand, having a larger $R_\text{max}$ means a lot of blocks can remain large. We then have large $R_B$'s and $N_B$'s, but very few blocks. We shall use the complexity count \eqref{eq:comp} later on to decide on which $R_\text{max}$ to choose for each block of the DtN map. To have an idea of whether such a matrix-vector multiplication might result in a fast algorithm, we need to introduce new terminology regarding the structure of PLR matrices, and we do so in the next subsection.
\subsection{Structure and complexity}\label{sec:structncomp}
As we did with matrix probing, we again split up the probed exterior DtN map $\tilde{D}$ in submatrices corresponding to the different sides of $\partial \Omega$. We called those \emph{blocks} of $\tilde{D}$ in the previous chapter, but we now call them \emph{submatrices}, to differentiate them from the blocks in the structure we obtain from compressing to a PLR matrix. So $\tilde{D}$ is split up in submatrices $\tilde{M}$ that represent edge-to-edge restrictions of $\tilde{D}$. We then only have to compress each unique submatrix $\tilde{M}$ once to obtain an approximation $\overline{M}$ of $\tilde{M}$. We can use those compressed $\overline{M}$'s to define $\overline{D}$, an approximation of $\tilde{D}$. If we need to multiply our final approximation $\overline{D}$ by a vector, we may then split that vector in blocks corresponding to the sides of $\partial \Omega$ and use the compressed submatrices and the required PLR matrix algebra to obtain the result with low complexity.
What is particular about approximating the DtN map on a square boundary $\partial \Omega$ is that distinct submatrices are very different. Those that correspond to the restriction of $D$ from one edge to that same edge, and as such are on the diagonal of $D$, are harder to probe, as we saw in the previous chapter, because of the diagonal singularity. That same singularity, however, makes them potentially well-suited for compression by hierarchical matrices \cite{bebendorf}.
However, submatrices of $D$ corresponding to two edges that are side by side (for example, the bottom and right edges of the boundary of $[0,1]^2$) see the effects of the diagonal of $D$ in their upper-right or lower-left corners, and entries of such submatrices decay in norm away from that corner. Thus a hierarchical matrix would be ill-suited to compress such a submatrix. This is why the PLR framework is so useful to us: it automatically adapts to the submatrix at hand, and to whether there is a singularity in the submatrix, and where that singularity might be.
Similarly, when dealing with a submatrix of $D$ corresponding to opposite edges of $\partial \Omega$, we see that entries with higher norm are in the upper-right and bottom-left corners, so again PLR matrices are more appropriate than hierarchical ones. However, note that because such submatrices have very small relative norm compared to $D$, and were probed with only one or two basis matrices in the previous chapter, their PLR structure is often trivial.
In order to help us understand the complexity of PLR compression and matrix-vector products, we first study typical structures of hierarchical \cite{hmatlect}, \cite{Hweak} and PLR matrices.
\subsubsection{Weak hierarchical matrices}
\begin{definition}
A matrix is said to have \emph{weak hierarchical structure} when a block is compressed if and only if its row and column indices do not overlap.
\end{definition}
The weak hierarchical structure of a matrix is shown in Figure \ref{fig:weak}. For example, let the matrix $M$ be $8 \times 8$. Then the block at level 0 is $M$ itself. The row indices of $M$ are $\left\{1, 2, \ldots, 8\right\}$, and so are its column indices. Since those overlap, we divide the matrix in four. We are now at level 1, with four blocks. The $(1,1)$ block has row indices $\left\{1, 2, 3, 4\right\}$, and its column indices are the same. This block will have to be divided. The same holds for block $(2,2)$. However, block $(1,2)$ has row indices $\left\{1, 2, 3, 4\right\}$ and column indices $\left\{5, 6, 7, 8\right\}$. Those two sets do not overlap, hence this block is compressed. The same will be true of block $(2,1)$.
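In code, the weak admissibility test is a one-liner; here is a hypothetical version using 0-based Python ranges for the block's index sets:
\begin{verbatim}
def is_compressed_weak(rows, cols):
    # Compress iff the row and column index sets do not overlap.
    return rows.stop <= cols.start or cols.stop <= rows.start

# The example from the text, 0-based: block (1,2) of an 8 x 8 matrix.
assert is_compressed_weak(range(0, 4), range(4, 8))      # compressed
assert not is_compressed_weak(range(0, 4), range(0, 4))  # divided
\end{verbatim}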
We note that, if the matrix $M$ has a weak hierarchical structure, we have a fast matrix-vector product. We may use the heuristic \eqref{eq:comp} to obtain the complexity of that product, assuming for simplicity that all $R_B$'s are $R_\text{max}$. Hence we account for all compressed blocks, starting from the 2 larger blocks on level 1, of size $N/2$ by $N/2$ (one on each side of the diagonal): they correspond to a maximum of $4 \times N/2 \times R_\text{max}$ operations (multiplications and additions) each, and there are two of them, so they correspond to a total of $4 N R_\text{max}$ operations. Then, the next larger blocks are of size $N/4$ by $N/4$, and there are four of them (two on each side of the diagonal). Hence they correspond to a total of $4 \times 4 \times N/4 \times R_\text{max}=4 N R_\text{max}$ operations. Since we have $L=\log_2{N/R_\text{max}}$ levels, or different block sizes, and as we can see each of those block sizes will contribute at most $4 N R_\text{max}$ operations, we have about $4 N R_\text{max} \log_2{(N/R_\text{max})}$ operations from off-diagonal blocks. We are left with the diagonal blocks. Those have size $R_\text{max}$ and there are $N/R_\text{max}$ of them, so the complexity of multiplying them by a vector is at most $ 4 (N/R_\text{max}) R^2_\text{max} = 4N R_\text{max}$ operations. Hence the total complexity of a matrix-vector multiplication with a weak hierarchical matrix is
\begin{equation}\label{eq:comweak}
4N R_\text{max} \log{\frac{2N}{R_\text{max}}} .
\end{equation}
This is clearly faster, asymptotically, than the typical complexity of a dense matrix-vector product which is of $2N^2$ operations.
As to the complexity of the compression algorithm, we do a similar calculation, but here the cost per block is $O(N_B R_\text{max}^2)$, so all we need is to sum the dimensions of all blocks that we used the SVD on. We start by taking the SVD of the matrix itself, then of all the blocks on level 1, then half of the blocks on level 2, then a quarter of the blocks on level 3, etc. Hence the complexity of compression is
\[ R_\text{max}^2 \left( N + \sum_{l=1}^{L-1} \frac{N}{2^l} \frac{4^l}{2^{l-1}} \right) = R_\text{max}^2 (N+2N(L-1)) \leq 2N R_\text{max}^2 \log{\frac{N}{R_\text{max}}}\]
which is nearly linear in $N$.
Finally, we address briefly the error in the matrix $M$ that is made when it is compressed. We especially care about the error in a matrix-vector multiplication $w=Mv$. We can see in this case that, for any entry $j$ in $w$, there will be error coming from all multiplications of the appropriate restriction of $v$ with the corresponding block intersecting row $j$ of $M$. Since there are about $\log\frac{N}{R_\text{max}}$ such blocks in row $j$, we can estimate that by giving a tolerance $\varepsilon$ to the PLR compression algorithm, we will obtain an error in matrix-vector multiplications of about $\varepsilon \log N$. As we will see in section \ref{sec:usingplr}, dividing the ``desired'' error by a factor of 1 to 25 to obtain the necessary $\varepsilon$ will work quite well for our purposes.
\begin{figure}[H]
\begin{minipage}[t]{0.32\linewidth}
\includegraphics[scale=.2]{./figs/output.pdf}
\caption{Weak hierarchical structure, $\frac{N}{R_\text{max}}=8$.}\label{fig:weak}
\end{minipage}
\begin{minipage}[t]{0.32\linewidth}
\includegraphics[scale=.2]{./figs/outputstr.pdf}
\caption{Strong hierarchical structure, $\frac{N}{R_\text{max}}=16$.}\label{fig:strong}
\end{minipage}
\begin{minipage}[t]{0.32\linewidth}
\includegraphics[scale=.2]{./figs/outputcorn.pdf}
\caption{Corner PLR structure, $\frac{N}{R_\text{max}}=8$.}\label{fig:corner}
\end{minipage}
\end{figure}
\subsubsection{Strong hierarchical matrices}
Next, we define a matrix with a \emph{strong hierarchical structure}. This will be useful for matrices with a singularity on the diagonal.
\begin{definition}\label{def:strong}
A matrix is said to have \emph{strong hierarchical structure} when a block is compressed if and only if its row and column indices are separated by at least the width of the block.
\end{definition}
The strong hierarchical structure of a matrix is shown in Figure \ref{fig:strong}. We can see that, the condition for compression being stronger than in the weak case, more blocks will have to be divided. For example, let the matrix $M$ be $8 \times 8$ again. Then the block at level 0 is $M$ itself, and again its row and column indices overlap, so we divide the matrix in four. We are now at level 1, with four blocks. The $(1,1)$ block will still have to be divided, its row and column indices being equal. The same holds for block $(2,2)$. Now, block $(1,2)$ has row indices $\left\{1, 2, 3, 4\right\}$ and column indices $\left\{5, 6, 7, 8\right\}$. Those two sets do not overlap, but the distance between them, defined as the minimum of $|i-j|$ over all row indices $i$ and column indices $j$ for that block, is 1. Since the width of the block is 4, which is greater than 1, we have to divide the block following Definition \ref{def:strong}. However, at level 2 which has 16 blocks of width 2, we can see that multiple blocks will be compressed: $(1,3), (1,4), (2,4), (3,1), (4,1), (4,2)$.
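The corresponding test for Definition \ref{def:strong}, in the same hypothetical 0-based notation as before:
\begin{verbatim}
def is_compressed_strong(rows, cols):
    if not (rows.stop <= cols.start or cols.stop <= rows.start):
        return False                  # overlapping indices: divide
    dist = max(cols.start - rows.stop, rows.start - cols.stop) + 1
    return dist >= len(rows)          # separation at least the block width

# Level 1, block (1,2): distance 1 < width 4, so it is divided;
# level 2, block (1,3): distance 3 >= width 2, so it is compressed.
assert not is_compressed_strong(range(0, 4), range(4, 8))
assert is_compressed_strong(range(0, 2), range(4, 6))
\end{verbatim}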
The matrix-vector multiplication complexity of matrices with a strong hierarchical structure is
\begin{equation}\label{eq:compstrong}
12N R_\text{max} \log{\frac{N}{2 R_\text{max}}}.
\end{equation}
Again this is faster, asymptotically, than the typical $2N^2$ operations of a dense matrix-vector product. We can obtain this number once again by accounting for all the blocks and using \eqref{eq:comp}. More precisely, we have $3\sum_{j=1}^{l-1} 2^j=3(2^l-2)=6(2^{l-1}-1)$ compressed blocks at level $l$, and those blocks have size $N/2^l$. This is true for $l=2, \ldots, L-1$, where again $L=\log_2{N/R_\text{max}}$ is the number of levels. Notice that, as expected, we do not have any compressed blocks, or leaves, at level 1. The contribution of those blocks to the matrix-vector complexity will be
\begin{eqnarray*}
4R_\text{max} \sum_{l=2}^{L-1} \left(6(2^{l-1}-1) \ \frac{N}{2^l} \right) &=&12 N R_\text{max} \sum_{l=2}^{L-1} \frac{2^l-2}{2^l}\\
&=& 12 N R_\text{max} (L-2 - \sum_{l=0}^{L-3} \frac{1}{2} \frac{1}{2^l} ) \\
&=& 12 N R_\text{max} (L-2 - (1-1/2^{L-2}) ) \\
&=& 12 N R_\text{max} (L-3+1/2^{L-2}).
\end{eqnarray*}
We need to add to this quantity the complexity coming from the smallest blocks, of size $N/2^L$. There are
\[ 6(2^{L-1}-1) + 2^L + 2(2^L-1)=6\cdot 2^L-8\]
such blocks, and so the corresponding complexity is
\[ 4R_\text{max} (6 \cdot 2^L-8)(N/2^L)=4 R_\text{max} N (6-8/2^L).\]
Adding this to our previous result, we obtain the final complexity of a matrix-vector multiplication:
\begin{eqnarray*}
& & 12 N R_\text{max} (L-3+1/2^{L-2})+ 4 R_\text{max} N (6-8/2^L)\\
&=& 12 N R_\text{max} (L-1+\frac{1}{2^{L-2}}-\frac{2}{3\cdot2^{L-2}}) \\
&\leq& 12 N R_\text{max} L,
\end{eqnarray*}
which agrees with the count \eqref{eq:compstrong} up to a lower-order term.
For the complexity of the compression algorithm, again we sum the dimensions of all blocks whose SVD we calculated: the matrix itself, the 4 blocks of level 1, the 16 blocks of level 2, 40 blocks in level 3, etc. Hence the complexity of compression is
\begin{eqnarray*}
R_\text{max}^2 \left( N + \frac{N}{2}\, 4 + \sum_{l=2}^L \frac{N}{2^l} (6\cdot 2^l-8) \right) &=& R_\text{max}^2 \left( N + 2N + N \sum_{l=2}^{L} \left(6-\frac{8}{2^l} \right) \right) \\
&=& R_\text{max}^2 N \left(3+6(L-1)-8\left(\frac{1}{2}-\frac{1}{2^L}\right) \right) \\
&=& R_\text{max}^2 N \left(6L-7+\frac{8}{2^L} \right) \\
&\leq & R_\text{max}^2 N \left(6L-6 \right) \qquad (L\geq 3)
\end{eqnarray*}
or
\[6N R_\text{max}^2 \log{\frac{N}{2R_\text{max}}}. \]
This again is nearly linear. Using similar arguments as in the weak case, we can estimate that by giving a tolerance $\varepsilon$ to the PLR compression algorithm, we will obtain an error in matrix-vector multiplications of about $\varepsilon \log N$ again.
\subsubsection{Corner PLR matrices}
One final structure we wish to define, now useful for matrices with a singularity in a corner, is the following:
\begin{definition}\label{def:corner}
A matrix is said to have \emph{corner PLR structure}, with reference to a specific corner of the matrix, when a block is divided if and only if both its row and column indices contain the row and column indices of the entry corresponding to that specific corner.
\end{definition}
Figure \ref{fig:corner} shows a top-right corner PLR structure. Again, we take an $8 \times 8$ matrix $M$ as an example. The top-right entry has row index 1 and column index 8. We see that the level 0 block, $M$ itself, certainly contains the indices $(1,8)$, so we divide it. On the next level, we have four blocks. Block $(1,2)$ is the only one that has both row indices that contain the index 1, and column indices that contain the index 8, so this is the only one that is divided. Again, on level 2, we have 16 blocks of size 2, and block $(1,4)$ is the only one divided.
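Finally, the test for Definition \ref{def:corner}, for a top-right corner as in Figure \ref{fig:corner}, again as a hypothetical 0-based sketch:
\begin{verbatim}
def is_divided_corner(rows, cols, n):
    # Divide iff the block contains the top-right entry (0, n - 1).
    return 0 in rows and (n - 1) in cols

# 8 x 8 example: on level 1, only block (1,2) is divided.
assert is_divided_corner(range(0, 4), range(4, 8), 8)
assert not is_divided_corner(range(4, 8), range(4, 8), 8)
\end{verbatim}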
As for the corner PLR matrices, their matrix-vector multiplication complexity is:
\begin{equation}\label{eq:compcorn}
12N R_\text{max}.
\end{equation}
Indeed, we have 3 compressed blocks of size $N/2^l$ for $l=1,2, \ldots, L-1$. This is a constant number of blocks per level, which means that matrix-vector multiplication will be even faster. We also have 4 blocks at the lowest level, of size $N/2^L$. The complexity is then
\begin{eqnarray*}
4 R_\text{max}\, 3 \sum_{l=1}^{L-1} \frac{N}{2^l} \ +4R_\text{max}\, 4\, \frac{N}{2^L} &=& 4 R_\text{max} N \left(3(1-1/2^{L-1}) +2/2^{L-1}\right)\\
&=& 4 R_\text{max} N \left(3-1/2^{L-1}\right) \ \leq \ 12 N R_\text{max}.
\end{eqnarray*}
For the complexity of the compression algorithm, we sum the dimensions of all blocks whose SVD we calculated: the matrix itself, the 4 blocks of level 1, 4 blocks in level 2, 4 blocks in level 3, etc. Hence the complexity of compression is
\begin{eqnarray*}
R_\text{max}^2 \left( N + 4 \sum_{l=1}^L \frac{N}{2^l} \right) &=& R_\text{max}^2 N \left( 1+4 \ \frac{1}{2} \ \frac{1-1/2^L}{1-1/2} \right) \\
&=& R_\text{max}^2 N \left(5-4/2^L \right) \\
&\leq & R_\text{max}^2 N \left(5 \right)
\end{eqnarray*}
or
\[5N R_\text{max}^2 . \]
Hence the complexity of compression for corner PLR matrices is linear. And again, we estimate that by giving a tolerance $\varepsilon$ to the PLR compression algorithm, we will obtain an error in matrix-vector multiplications of about $\varepsilon \log N$.
\newline
\newline
Now that we have explained these three special structures, and how they provide a fast matrix-vector product, we are ready to discuss using PLR matrices specifically for the exterior DtN map.
\section{Using PLR matrices for the DtN map's submatrices}\label{sec:usingplr}
As we recall, obtaining the full DtN map from solving the exterior problem $4N$ times is too costly, and so we use matrix probing to approximate the DtN map $D$ by $\tilde{D}$ using only a few exterior solves. If we were to try to use PLR matrices directly on $D$, we would have to find the SVD of many blocks. Since we do not have access to the blocks themselves, we would need to use the randomized SVD, and hence to solve the exterior problem, on random vectors restricted to the block at hand. As we mentioned before, Lin et al. have done something similar in \cite{Hmatvec}, which required $O(\log{N})$ matrix-vector multiplies with a large constant, or in our case exterior solves. This is too costly, and this is why we use matrix probing first to obtain an approximate $\tilde{D}$ with a small, nearly constant number of exterior solves.
Now that we have access to $\tilde{D}$ from matrix probing, we can approximate it using PLR matrices. Indeed, we have access to the full matrix $\tilde{D}$, and so finding the SVD of a block is not a problem. In fact, we use the randomized SVD for speed, not because we only have access to matrix-vector multiplies.
Compressing one of those edge-to-edge submatrices under the PLR matrix framework requires that we pick both a tolerance $\epsilon$ and a maximal desired rank $R_\text{max}$. We explain in the next subsections how to choose appropriate values for those parameters.
\subsection{Choosing the tolerance}
Because our submatrices come from probing, they already have some error attached to them, that is, the relative probing error as defined in equation \eqref{acterr} of chapter \ref{ch:probing}. Therefore, it would be wasteful to ask for the PLR approximation to do any better than that probing error.
Also, when we compress blocks in the PLR compression algorithm, we make an absolute error in the $L_2$ norm. However, because of the high norm of the DtN map, it makes more sense to consider the relative error. We can thus multiply the relative probing error we made on each submatrix $\tilde{M}$ by the norm of the DtN map $D$ to know the absolute error we need to ask of the PLR compression algorithm. And since the $L_2$ norm is smaller than the Frobenius norm, and errors from each block compound, we have found empirically that asking for a tolerance $\epsilon$ which is a factor of $1$ to $1/100$ of the absolute probing error of a submatrix works for obtaining a similar Frobenius error from the PLR approximation. As a rule of thumb, this factor needs to be smaller for diagonal submatrices $M$ of $D$, but can be equal to 1 for submatrices corresponding to opposite edges of $\partial \Omega$.
Of course, we do not want to use an $\varepsilon$ which is too small either. That might force the PLR compression algorithm to divide blocks more than needed, and make the matrix-vector multiplications slower than needed.
\subsection{Minimizing the matrix-vector application time}
Our main objective in this chapter is to obtain a fast algorithm. To this end, we try to compress probed submatrices of the DtN map using various values of the parameter $R_\text{max}$, and choose the value that will give us the fastest matrix-vector multiplies. That is, from doing a few tests, we use the known complexity of a matrix-vector multiplication \eqref{eq:comp} to find the rank $R_\text{max}$ that minimizes the complexity, and we use the compressed submatrix corresponding to that particular maximal rank in our Helmholtz solver. A different complexity model might be appropriate depending on the operating system and coding language used, since slowdowns might occur because of cache size, operations related to memory, matrix and vector operations, etc.
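In terms of the earlier sketches (and with an illustrative candidate list and a fixed tolerance), this selection could look as follows:
\begin{verbatim}
def best_rmax(M_sub, eps, candidates=(2, 4, 8, 16, 32)):
    # Compress the probed submatrix for each candidate R_max and keep the
    # one whose tree minimizes the operation count of a matrix-vector product.
    n = M_sub.shape[0]
    return min(candidates,
               key=lambda r: matvec_cost(plr(M_sub, r, eps), n))
\end{verbatim}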
However, we note that we may compare the actual complexity from the particular structure obtained by PLR compression to the ``ideal'' complexities coming from the special structures we have mentioned before. Indeed, for a submatrix on the diagonal, we can compare its matrix-vector complexity to that of weak and strong hierarchical matrices. That will give us an idea of whether we have a fast algorithm. One thing we notice in most of our experiments is that, for diagonal blocks, the actual complexity usually becomes smaller as $R_\text{max}$ increases, until we arrive at a minimum with the $R_\text{max}$ that gives us the best compromise between having too many blocks and having blocks of too high a rank. Then, the complexity increases again. However, the complexity increases more slowly than that of both weak and strong hierarchical matrices.
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{comps-c1-block1-errsm5m6}.pdf}
\caption{Matrix-vector complexity for submatrix $(1,1)$ for $c \equiv 1$, various $R_\text{max}$. Probing errors of $10^{-5}$, $10^{-6}$.}
\label{fig:compscomp1}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{comps-c1-block2-errsm5m6}.pdf}
\caption{Matrix-vector complexity for submatrix $(2,1)$ for $c \equiv 1$, various $R_\text{max}$. Probing errors of $10^{-5}$, $10^{-6}$.}
\label{fig:compscomp2}
\end{minipage}
\end{figure}
Figure \ref{fig:compscomp1} confirms this phenomenon for the $(1,1)$ block of the constant medium. From this figure, we would then pick $R_\text{max}=8$ since this is the value of $R_\text{max}$ that corresponds to the smallest actual complexity of a matrix-vector product, both for a relative probing error of $10^{-5}$ and $10^{-6}$. Figure \ref{fig:compscomp2} confirms this phenomenon as well for the $(2,1)$ block of the constant medium. From this figure, we would then pick $R_\text{max}=4$ again both for a relative probing error of $10^{-5}$ and $10^{-6}$, since this is the value of $R_\text{max}$ that corresponds to the smallest actual complexity of a matrix-vector product (it is hard to tell from the figure, but the complexity for $R_\text{max}=2$ is just larger than that for $R_\text{max}=4$ in both cases).
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_1_1-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(1,1)$ for $c \equiv 1$, $R_\text{max}=2$. Each block is colored by its numerical rank.}
\label{fig:c1d1b1}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_3_1-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(1,1)$ for $c \equiv 1$, $R_\text{max}=8$. Each block is colored by its numerical rank.}
\label{fig:c1d3b1}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_5_1-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(1,1)$ for $c \equiv 1$, $R_\text{max}=32$. Each block is colored by its numerical rank.}
\label{fig:c1d5b1}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/ims/{draw-c1-hmats1_2_2-errm5}.pdf}
\caption{PLR structure of the probed submatrix $(2,1)$ for $c \equiv 1$, $R_\text{max}=4$. Each block is colored by its numerical rank.}
\label{fig:c1d2b2}
\end{minipage}
\end{figure}
It is also informative to have a look at the structures we obtain, and the ranks of each block. Figures \ref{fig:c1d1b1}, \ref{fig:c1d3b1} and \ref{fig:c1d5b1} refer to block $(1,1)$ again of $c \equiv 1$, for different values of $R_\text{max}$. As we expect, having $R_\text{max}=2$ in Figure \ref{fig:c1d1b1} in this case forces blocks to be very small, which is wasteful. On the other hand, a larger $R_\text{max}=32$ in Figure \ref{fig:c1d5b1} is not better because then we have fewer blocks, but they have large rank. Still, because the structure is that of a weak hierarchical matrix, with blocks that have in fact rank smaller than $R_\text{max}$, we obtain a fast matrix-vector product. However, the ideal is $R_\text{max}=8$ in Figure \ref{fig:c1d3b1}, which minimizes the complexity of a matrix-vector product by finding the correct balance between having fewer blocks and keeping their ranks small. We see it almost has a strong hierarchical structure, but in fact with more large blocks and fewer small blocks. As for the $(2,1)$ block, we see its PLR structure in Figure \ref{fig:c1d2b2}: it is actually a corner PLR structure, but the numerical ranks of blocks are always lower than $R_\text{max}=4$, so the matrix-vector multiplication of that submatrix will be even faster than for a generic corner PLR structure, as we knew from Figure \ref{fig:compscomp2}.
\subsection{Numerical results}
\begin{table}[ht]
\caption{PLR compression results, $c\equiv 1$}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$4.2126e-01$} &{$6.5938e-01$} & $115$ \\ \hline
{$2$} & {$2$} & {$4.2004e-02$} &{$7.3655e-02$} & $93$ \\ \hline
{$2$} & {$2$} & {$1.2517e-03$} &{$2.4232e-03$} & $55$ \\ \hline
{$4$} & {$2$} & {$1.1210e-04$} &{$4.0003e-04$} & $42$ \\ \hline
{$8$} & {$4$} & {$1.0794e-05$} &{$1.4305e-05$} & $32$ \\ \hline
{$8$} & {$4$} & {$6.5496e-07$} &{$2.1741e-06$} & $29$ \\ \hline
\end{tabular}
\end{center}
\label{c1solveplr}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the Gaussian waveguide.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,2)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up \\ \hline
{$2$} & {$2$} & {$2$} & {$6.6034e-02$} &{$1.4449e-01$} & $105$ \\ \hline
{$2$} & {$2$} & {$2$} & {$1.8292e-02$} &{$7.4342e-02$} & $74$ \\ \hline
{$2$} & {$2$} & {$2$} & {$2.0948e-03$} &{$1.1014e-02$} & $59$ \\ \hline
{$4$} & {$4$} & {$2$} & {$2.3740e-04$} &{$1.6023e-03$} & $47$ \\ \hline
{$8$} & {$4$} & {$4$} & {$1.5369e-05$} &{$8.4841e-05$} & $36$ \\ \hline
{$8$} & {$8$} & {$4$} & {$3.4148e-06$} &{$1.7788e-05$} & $30$ \\ \hline
\end{tabular}
\end{center}
\label{c3solveplr}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the Gaussian slow disk.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$9.2307e-02$} &{$1.2296e+00$} & $97$ \\ \hline
{$2$} & {$2$} & {$8.1442e-03$} &{$4.7922e-02$} & $69$ \\ \hline
{$4$} & {$2$} & {$1.2981e-03$} &{$3.3540e-02$} & $44$ \\ \hline
{$4$} & {$2$} & {$1.1680e-04$} &{$1.0879e-03$} & $39$ \\ \hline
{$4$} & {$2$} & {$2.5651e-05$} &{$1.4303e-04$} & $37$ \\ \hline
\end{tabular}
\end{center}
\label{c5solveplr}
\end{table}
We have compressed probed DtN maps and used them in a Helmholtz solver with success. We have used the same probed matrices as in the previous chapter, and so we refer the reader to Tables \ref{FDPMLerr}, \ref{c1solve}, \ref{c3solve}, \ref{c5solve}, \ref{c16solve}, \ref{c18solve}, \ref{c33solve} for all the parameters we used then.
\begin{table}[ht]
\caption{PLR compression results, $c$ is the vertical fault, sources on the left and on the right.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,2)$ & $\frac{\|D-\overline{D}\|_F}{\|D\|_F}$ & $\frac{\|u-\overline{u}\|_F}{\|u\|_F}$, left & $\frac{\|u-\overline{u}\|_F}{\|u\|_F}$, right & Speed-up\\ \hline
{$2$} & {$2$} & {$2.6972e-01$} &{$5.8907e-01$} &{$4.6217e-01$} & $105$\\ \hline
{$2$} & {$2$} & {$9.0861e-03$} &{$3.9888e-02$} &{$2.5051e-02$} & $67$ \\ \hline
{$1$} & {$4$} & {$8.7171e-04$} &{$3.4377e-03$} &{$2.4279e-03$} & $53$\\ \hline
\end{tabular}
\end{center}
\label{c16solveplrl}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the diagonal fault.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,2)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$1.4281e-01$} &{$5.3553e-01$} & $98$ \\ \hline
{$2$} & {$2$} & {$1.9108e-02$} &{$7.8969e-02$} & $76$ \\ \hline
{$2$} & {$4$} & {$2.5602e-03$} &{$8.7235e-03$} & $49$ \\ \hline
\end{tabular}
\end{center}
\label{c18solveplr}
\end{table}
\begin{table}[ht]
\caption{PLR compression results, $c$ is the periodic medium.}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$R_\text{max}$ for $(1,1)$ & $R_\text{max}$ for $(2,1)$ & $\|D-\overline{D}\|_F/\|D\|_F$ & $\|u-\overline{u}\|_F/\|u\|_F$ & Speed-up\\ \hline
{$2$} & {$2$} & {$1.2967e-01$} &{$2.1162e-01$} & $32$ \\ \hline
{$2$} & {$2$} & {$3.0606e-02$} &{$5.9562e-02$} & $22$ \\ \hline
{$8$} & {$2$} & {$9.0682e-03$} &{$2.6485e-02$} & $11$ \\ \hline
\end{tabular}
\end{center}
\label{c33solveplr}
\end{table}
We now present results for PLR compression in a Helmholtz solver in Tables \ref{c1solveplr}, \ref{c3solveplr}, \ref{c5solveplr}, \ref{c16solveplrl}, \ref{c18solveplr}, \ref{c33solveplr}. For each medium, we show the chosen $R_\text{max}$ of the most important (in norm) submatrices. For all other submatrices, $R_\text{max} \leq 2$. We then show the relative norm of the error between the PLR compression $\overline{D}$ and the actual DtN map $D$. We also show the relative error between the solution $\overline{u}$ computed using $\overline{D}$ in the Helmholtz solver and the actual solution $u$ using $D$ as the DtN map. Finally, we show the ``speed-up'' obtained from taking the ratio of the complexity of using a dense matrix-vector product for $\tilde{D}$, which would be of about $2\times 16N^2$, to the total complexity of a matrix-vector product of $\bar{D}$. This ratio tells us how much faster than a dense product the PLR compression is. We see that this ratio ranges from about 50 to 100 for all media except the periodic medium, with a smaller ratio associated with asking for a higher accuracy, as expected. The speed-up ratio is between 10 and 30 for the periodic medium, but as we recall the value of $N$ here is smaller: $N=320$. Larger values of $N$ should lead to a better speed-up.
\section{The half-space DtN map is separable and low rank: theorem}\label{sec:septheo}
As we have mentioned before, the Green's function for the half-space Helmholtz equation is separable and low rank \cite{Hsweep}. We investigate here the half-space DtN map kernel, which is related to the Green's function through two derivatives, as we saw in section \ref{sec:hsG}, and we obtain a similar result to that of \cite{Hsweep}. We state the result here, and prove it in the next subsections. We then end this section with a discussion on generalizing our theorem for heterogeneous media.
Let $\mathbf{x}=(x,0)$ and $\mathbf{y}=(y,0)$ be points along the half-space boundary, $x\neq y$. Recall the Dirichlet-to-Neumann map kernel for the half-space Helmholtz equation \eqref{eq:hsHE} with homogeneous medium $c \equiv 1$ and $\omega=k/c$ is:
\begin{equation}\label{eq:hsDtN}
K(|x-y|)= \frac{ik^2}{2} \frac{H_1^{(1)}(k|x-y|)}{k|x-y|}.
\end{equation}
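This kernel is straightforward to evaluate numerically, for instance with \texttt{scipy} (the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def dtn_kernel(k, r):
    # K(r) = (i k^2 / 2) H_1^(1)(k r) / (k r), for r = |x - y| > 0.
    return 0.5j * k**2 * hankel1(1, k * r) / (k * r)
\end{verbatim}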
\begin{theorem}\label{theo:sep}
Let $0 <\epsilon \leq 1/2$, and $0<r_0<1$, $r_0=\Theta(1/k)$. There exists an integer $J$, functions $\left\{\Phi_j,\chi_j\right\}_{j=1}^J$ and a number $C$ such that we can approximate $K(|x-y|)$ for $r_0\leq |x-y| \leq 1$ with a short sum of smooth separated functions:
\begin{equation}\label{eq:sep}
K(|x-y|)=\sum_{j=1}^J \Phi_j(x)\chi_j(y) + E(x,y)
\end{equation}
where $|E(x,y)| \leq \epsilon$, and $J \leq C \left(\log k \, \max(|\log\epsilon|,\log k)\right)^2$ with $C$ which does not depend on $k$ or $\epsilon$. $C$ does depend weakly on $r_0$ through the constant quantity $r_0k$; for more details see Remark \ref{rem:C_0}.
\end{theorem}
To prove this, we shall first consider the Hankel function $H_1^{(1)}$ in \eqref{eq:hsDtN}, and see that it is separable and low rank. Then, we shall look at the factor of $1/k|x-y|$ in \eqref{eq:hsDtN}, and see the need for using quadratures on a dyadic partition of the interval $\left[r_0,1\right]$ in order to prove that this factor is also separable and low rank. Finally, we prove Theorem \ref{theo:sep} and make a few remarks.
\subsection{Treating the Hankel function}
From Lemmas 7 and 8 of \cite{mr} we know that $H_1^{(1)}(k|x-y|)$ is separable and low-rank as a function of $x$ and $y$. Looking in particular at Lemma 7 from \cite{mr}, we make a slight modification of the proof to obtain the following lemma.
\begin{lemma}\label{lem:7n8}
Slight modification of Lemmas 7 and 8, \cite{mr}. Let $0<\epsilon \leq 1/2$, $k>0$ and $r_0>0$, $r_0=\Theta(1/k)$. Let $|x-y|>r_0$. Then there exists an integer $J_1$, a number $C_1$, and functions $\left\{\Phi^{(1)}_j,\chi^{(1)}_j\right\}_{j=1}^{J_1}$ such that
\begin{equation}\label{eq:h1}
H_1^{(1)}(k|x-y|)=\sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) + E^{(1)}_{J_1}(x,y),
\end{equation}
where $|E^{(1)}_{J_1}(x,y)| \leq \epsilon$, and $J_1 \leq C_1 \log k |\log\epsilon|$ with $C_1$ which does not depend on $k$ or $\epsilon$. $C_1$ does depend weakly on $r_0$ through the quantity $r_0k$. Again, see Remark \ref{rem:C_0}.
\end{lemma}
\subsection{Treating the $1/kr$ factor}
We now show that the $1/kr$ factor is also separable and low rank\footnote{A different technique than the one presented here would be to expand $1/|x-y|$ using the Taylor expansion for $x>y>0$: $$\frac{1}{x} \frac{1}{1-y/x} = \frac{1}{x} \left(1+\frac{y}{x}+\left(\frac{y}{x}\right)^2 + \ldots \right).$$ However, making an error of $\varepsilon$ requires $k|\log\varepsilon|$ terms in the expansion because the error is large when $y/x \approx 1$, that is, $y\approx x$ or $r\approx r_0$.}. Notice:
\begin{equation}\label{eq:1okr}
\int_0^\infty e^{-krt} dt = \left. \frac{e^{-krt}}{-kr} \right|_0^\infty = 0 + \frac{e^{-kr\cdot 0}}{kr} = \frac{1}{kr}
\end{equation}
and
\begin{equation}\label{eq:split}
\int_0^\infty e^{-krt} dt = \int_0^{T} e^{-krt} dt + \int_{T}^\infty e^{-krt} dt,
\end{equation}
where
\[ \int_{T}^\infty e^{-krt} dt = \frac{e^{-krT}}{kr}. \]
Equation \eqref{eq:1okr} means we can write $1/kr$ as an integral, and equation \eqref{eq:split} means we can split this integral in two, one part being a definite integral, the other indefinite. But we can choose $T$ so that the indefinite part is smaller than our error tolerance:
\[ \left| \int_{T}^\infty e^{-krt} dt \right| \leq \epsilon . \]
For this, we consider that $\frac{e^{-krT}}{kr} \leq \frac{e^{-krT}}{C_0} \leq \epsilon $ and so we need $krT \geq |\log C_0| + |\log \epsilon |$ or $T \geq (|\log C_0| + |\log \epsilon|)/C_0$ or
\begin{equation}\label{eq:T}
T = O(| \log \epsilon |).
\end{equation}
If we assume \eqref{eq:T} holds, then we can use a Gaussian quadrature to obtain a low-rank, separable expansion of $1/kr$:
\[ \frac{1}{kr} \approx \int_0^{T} e^{-krt} dt \approx \sum_{p=1}^n w_p e^{-krt_p} \]
where the $w_p$ are the Gaussian quadrature weights and the $t_p$ are the quadrature points. To determine $n$, the number of quadrature weights and points we need for an accuracy of order $\epsilon$, we can use the following Gaussian quadrature error estimate \cite{na} on the interval $[a,b]$:
\begin{equation}\label{eq:quaderr}
\frac{(b-a)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi)
\end{equation}
where $f$ is the integrand, and $\xi$ is within the bounds of integration: $f(\xi)=e^{-kr\xi}$ and $a \leq \xi \leq b$ where here $a=0$ and $b=T$. Clearly
\[ f^{(2n)}(\xi) = (-kr)^{2n}e^{-kr\xi}. \]
The worst case will be when $\xi=0$ and $r=1$ so
\[\max_{0\leq \xi \leq T}\left| f^{(2n)}(\xi) \right|= (k)^{2n}.\]
We can put this back in the error estimate \eqref{eq:quaderr}, using Stirling's approximation \cite{ans}
$$ \sqrt{2\pi n}\ n^n e^{-n} \leq n! \leq e \sqrt{n} \ n^n e^{-n}, $$
for the factorials, to get:
\begin{eqnarray*}
\left| \frac{(b-a)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi) \right| &\leq& \frac{T^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} (k)^{2n} \\
&\leq& \frac{T^{2n+1} e^4(n)^2 (n)^{4n} e^{6n}} {(2n+1)e^{4n}(2\pi 2n)^{3/2} (2n)^{6n} } (k)^{2n} \\
&\leq& \frac{T^{2n+1} e^4 (n)^{1/2} (ke)^{2n}} {(2n+1) \pi^{3/2} (2)^{6n+3} n^{2n}} \\
&\leq& \frac{Te^4}{16\sqrt{n} \pi^{3/2}} \left( \frac{Tke}{8n}\right)^{2n} .
\end{eqnarray*}
This is problematic because in order for the quadrature scheme to converge, we are forced to have $n > Tke/8 \approx k |\log \epsilon|$, which is prohibitively large. This can be understood as the difficulty of integrating accurately a function with large higher derivatives, such as this sharp exponential, over a large domain such as the interval $[0,|\log{\epsilon}|]$. To solve this problem, we make a dyadic partition of the $[0,T]$ interval in $O(\log{k})$ subintervals, each of which will require $O(|\log{\epsilon}|)$ quadrature points.
Before we get to the details, let us redo the above error analysis for a dyadic interval $[a,2a]$. The maximum of $\left| f^{(2n)}(\xi) \right|= (kr)^{2n}e^{-kr\xi}$ as a function of $kr$ occurs when $kr=2n/\xi$, and the maximum of that as a function of $\xi$ is when $\xi=a$, so the maximum is $\left| f^{(2n)}(a) \right|= (2n/a)^{2n}e^{-2n}$. We can put this back in the error estimate \eqref{eq:quaderr} to get%
\begin{eqnarray*}
\left| \frac{(2a-a)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi) \right| &\leq& \frac{a^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} (2n/a)^{2n} e^{-2n} \\
&\leq& \frac{a e^4(n)^2 (n)^{4n} e^{6n}} {(2n+1)e^{4n}(2\pi 2n)^{3/2} (2n)^{6n} } (2n)^{2n} e^{-2n} \\
&\leq& \frac{ 1.22 a (n)^{1/2} } {(2n+1) (2)^{4n} } \\
&\leq& \frac{a } {\sqrt{n} (2)^{4n} } .
\end{eqnarray*}
To have this error less than $\epsilon$, we thus need
\begin{equation}\label{eq:n}
4n \log 2 \geq |\log\epsilon | + \log{a/\sqrt{n}},
\end{equation}
with $a \leq T \approx |\log\epsilon |$, and we see that in fact
\begin{equation}\label{eq:nval}
n=|\log \epsilon|
\end{equation}
will work.
\begin{remark}
We found the maximum of $\left| f^{(2n)}(\xi) \right|= (kr)^{2n}e^{-kr\xi}$ to be when both $kr=2n/\xi$ and $\xi=a$. However, we need $kr\leq k$, so that we need $a\geq 2n/k=2|\log\varepsilon|/k$. %
In the next subsection, we make sure that $a \geq 2|\log\varepsilon|/k$ by having the lower endpoint of the interval $I_1$ equal to $2|\log\varepsilon|/k$.%
\end{remark}
\subsection{Dyadic interval for the Gaussian quadrature}
Now we are ready to get into the details of how we partition the interval. The subintervals are:
\begin{eqnarray*}
I_0&=&\left[0,\frac{2|\log\epsilon|}{k}\right] \\
I_j&=&\left[\frac{2^{j}|\log\epsilon|}{k},\frac{2^{j+1}|\log\epsilon|}{k}\right], \qquad j=1, \dots , M-1
\end{eqnarray*}
where $T=\frac{2^M|\log\epsilon|}{k}=O(|\log{\epsilon}|)$ which implies that
\begin{equation}\label{eq:M}
M=O(\log{k}).
\end{equation}
Then, for each interval $I_j$ with $j \geq 1$, we apply a Gaussian quadrature as explained above, and we need $n=|\log \epsilon|$ quadrature points to satisfy the error tolerance of $\epsilon$.
As for interval $I_0$, we return to the Gaussian quadrature error analysis, where this time again $k^{2n}$ is the maximum of $\left| f^{(2n)}(\xi) \right|$ for $\xi \in I_0$. Thus we have that the quadrature error is:
\begin{eqnarray*}
\left| \frac{(2|\log\epsilon|/k-0)^{2n+1} (n!)^4 }{(2n+1)[(2n)!]^3} f^{(2n)}(\xi) \right| &\leq& \frac{(2|\log\epsilon|/k)^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} k^{2n} \\
&\leq& \frac{(2|\log\epsilon|/k)^{2n+1} e^4 (n)^2 (n)^{4n} e^{6n}} {(2n+1)e^{4n}(2\pi 2n)^{3/2} (2n)^{6n} } k^{2n} \\
&\leq& \frac{2^{2n+1}|\log\epsilon|^{2n+1} e^{2n} e^4 n^{1/2} } {k(2n+1) (2)^{6n+3} n^{2n}} \\
&\leq& \frac{2|\log\epsilon| e^4 \sqrt{n}}{8k(2n+1)} \left( \frac{2|\log\epsilon| e}{8n} \right)^{2n}
\end{eqnarray*}
and $n=O(|\log\epsilon|)$ will satisfy the error tolerance.
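A small numerical sketch of this construction follows; the function name and the specific parameter choices are ours, but they track the estimates above ($T=|\log\epsilon|$, first interval $[0,2|\log\epsilon|/k]$, dyadic doubling, $n=|\log\epsilon|$ nodes per subinterval).
\begin{verbatim}
import numpy as np

def inv_kr_quadrature(k, eps):
    n = int(np.ceil(abs(np.log(eps))))          # nodes per subinterval
    T = abs(np.log(eps))
    edges = [0.0, min(2.0 * abs(np.log(eps)) / k, T)]
    while edges[-1] < T:                        # dyadic partition of [0, T]
        edges.append(min(2.0 * edges[-1], T))
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    ts = np.concatenate([0.5*(b-a)*x + 0.5*(a+b)
                         for a, b in zip(edges, edges[1:])])
    ws = np.concatenate([0.5*(b-a)*w for a, b in zip(edges, edges[1:])])
    return ws, ts

k, eps = 64.0, 1e-8
w, t = inv_kr_quadrature(k, eps)
r = np.linspace(1.0 / k, 1.0, 9)                # r_0 = 1/k <= r <= 1
approx = np.exp(-np.outer(k * r, t)) @ w        # sum_p w_p exp(-k r t_p)
print(np.max(np.abs(approx - 1.0 / (k * r))))   # on the order of eps
\end{verbatim}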
To recap, we have approximated the function $1/kr$, as a function of $r$, by a low-rank separable expansion with error $\epsilon$:
\[ \frac{1}{k|x-y|} = \sum_{j=1}^{J_2} w_j e^{-k|x-y|t_j} + E^{(2)}_{J_2}(x,y),\]
where $J_2=O(\log k |\log\epsilon|)$ (again, from using $O(\log k)$ intervals with $O(|\log\epsilon|)$ quadrature points on each interval), $C_0 \leq k|x-y| \leq k$, and $|E^{(2)}_{J_2}(x,y)|<\epsilon$.
Clearly this expansion is separable: depending on the sign of $(x-y)$, we have $e^{-k|x-y|t_j}=e^{-kxt_j}e^{kyt_j}$ or $e^{-k|x-y|t_j}=e^{kxt_j}e^{-kyt_j}$. Either way, the exponential has been expressed as a product of a function of $x$ only and a function of $y$ only. Thus we have the following lemma.
\begin{lemma}\label{lem:1overkr}
Let $0<\epsilon $, $k>0$ and $r_0>0$, $r_0=\Theta(1/k)$. Let $|x-y|>r_0$. Then there exists an integer $J_2$, a number $C_2$, and functions $\left\{\Phi^{(2)}_j,\chi^{(2)}_j\right\}_{j=1}^{J_2}$ such that
\begin{equation}\label{eq:kr}
\frac{1}{k|x-y|}=\sum_{j=1}^{J_2} \Phi^{(2)}_j(x)\chi^{(2)}_j(y) + E^{(2)}_{J_2}(x,y),
\end{equation}
where $|E^{(2)}_{J_2}(x,y)| \leq \epsilon$, and $J_2 \leq C_2 \log k |\log\epsilon|$ with $C_2$ which does not depend on $k$ or $\epsilon$. $C_2$ does depend weakly on $r_0$ through the constant quantity $r_0k$. Again, see Remark \ref{rem:C_0}.
\end{lemma}
\subsection{Finalizing the proof}
We now come back to the DtN map kernel $K$ in \eqref{eq:hsDtN}. %
Using Lemmas \ref{lem:7n8} and \ref{lem:1overkr}, we can write each factor of $K$ in its separable expansion:
\begin{eqnarray*}
\frac{K(|x-y|)}{ik^2/2}&=&\ H_1^{(1)}(k|x-y|) \ \frac{1}{k|x-y|} \\
&=&\left( \sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) + E^{(1)}_{J_1}(x,y) \right) \left( \sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y) + E^{(2)}_{J_2}(x,y) \right)\\
&=& \frac{K_{(J_1,J_2)}(|x-y|)}{ik^2/2} + E^{(2)}_{J_2}(x,y)\sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) + E^{(1)}_{J_1}(x,y) \sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y) \\
&+& E^{(1)}_{J_1}(x,y) E^{(2)}_{J_2}(x,y)
\end{eqnarray*}
where
\begin{eqnarray}\label{eq:hssepDtN}
K_{(J_1,J_2)}(|x-y|)&=&\frac{ik^2}{2}\left( \sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y) \right) \left( \sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y) \right)\\
&=& \frac{ik^2}{2}\sum_{j=1}^{J_1 J_2} \Phi_j(x)\chi_j(y) .
\end{eqnarray}
It follows that
\begin{equation}\label{eq:errDs}
\left| \frac{K-K_{(J_1,J_2)}}{ik^2/2} \right| \leq \left|E^{(2)}_{J_2}\right| \left| \sum_{j=1}^{J_1} \Phi^{(1)}_j(x)\chi^{(1)}_j(y)\right| + \left|E^{(1)}_{J_1}\right| \left|\sum_{j=1}^{J_2} \Phi^{(2)}_j(x) \chi^{(2)}_j(y)\right|+ \left|E^{(1)}_{J_1}\right| \left|E^{(2)}_{J_2}\right|
\end{equation}
Now clearly
\[ \max_{C_0 \leq kr \leq k} \frac{1}{kr} = \frac{1}{C_0}. \]
We also have from Lemma 3 of \cite{flatland} that
\[ \left| e^{-ikr}H_1^{(1)}(kr) \right| \leq C(kr)^{-1/2}, \qquad kr \geq C_0 \]
for some constant $C$ which does not depend on $k$. Then, we have that
\[\max_{kr \geq C_0} \left| H_1^{(1)}(kr) \right| \leq \frac{C}{C_0^{1/2}}. \]
What we have shown is that the quantities $1/kr$ and $H_1^{(1)}(kr)$ are bounded by some constant, call it $\tilde{C}$, for the range of $kr$ we are interested in, that is, $C_0 \leq kr \leq k$. We can now go back to \eqref{eq:errDs}, using our approximations from Lemmas \ref{lem:7n8} and \ref{lem:1overkr} which make an absolute error of no more than $\epsilon$, and see that
\begin{equation*}
\left| \frac{K-K_{(J_1,J_2)}}{ik^2/2} \right| \leq 2\epsilon (\tilde{C}+\epsilon) + \epsilon^2
\end{equation*}
or
\begin{equation}\label{eq:errDsrel}
\left| \frac{K-K_{(J_1,J_2)}}{ik^2/2} \right|=O(\epsilon).
\end{equation}
Note that the expansion of $K_{(J_1,J_2)}$ in \eqref{eq:hssepDtN} now contains $J_1 J_2 = O\left((\log k |\log\epsilon| )^2\right)$ terms. In order to obtain an absolute error bound, we need to multiply through with $ik^2/2$ in \eqref{eq:errDsrel}. Replacing $\epsilon$ by $\epsilon/k^2$ in the lemmas, we have now finally shown that
\begin{equation*}
\left| K-K_{(J_1,J_2)}\right| \leq \epsilon
\end{equation*}
with the expansion of $K_{(J_1,J_2)}$ in \eqref{eq:hssepDtN} containing
\[J=J_1 J_2 =O\left( (\log k (|\log\epsilon|+2\log k) )^2 \right)\]
terms. We thus conclude that the DtN map is low-rank and separable away from the diagonal, with the prescriptions of Theorem \ref{theo:sep}:
\begin{equation}%
K(|x-y|)= \frac{ik^2}{2} \frac{H_1^{(1)}(k|x-y|)}{k|x-y|}= \sum_{j=1}^J \Phi_j(x)\chi_j(y) + E(x,y)
\end{equation}
where $|E(x,y)| \leq \epsilon$ for $C_0 \leq kr \leq k$ and there is a number $C$ which does not depend on $k$ or $\epsilon$ such that $J \leq C (\log k \max(|\log\epsilon|,\log k))^2$.
\begin{remark}\label{rem:C_0}
We can understand the number $J$ of ranks as made of two terms, one which is $(C_1\log k) (C_2|\log\epsilon|)$, the other $(C_1\log k)(\log C_3 k)$. The numbers $C_1$ and $C_3$, but not $C_2$, also weakly depend on the separation $C_0$, in the sense that the larger the separation is, the smaller those numbers are. First, we note from the discussion before equation \eqref{eq:T} that $T$ is smaller when $C_0$ is bigger. Then, a smaller $C_1$ comes from the discussion before equation \eqref{eq:M}. The fact that $C_2$ does \emph{not} depend on $C_0$ comes from the discussion after equation \eqref{eq:n}. As for $C_3$, we can understand its \emph{very weak} dependence on $C_0$ by looking at equation \eqref{eq:n} and plugging in $a=T$, remembering how $T$ depends on $C_0$. Thus both terms in $J$ should behave somewhat similarly as $C_0$ changes. Physically, we know a greater separation means we are farther away from the singularity of the DtN map, and so we expect the map to be smoother there, and hence have lower rank.
\end{remark}
\begin{remark}\label{rem:highpow}
In our numerical verifications, we have not noticed the square power in $J$. Rather, we observe in general that $J \sim \log k |\log\epsilon|$. The only exceptions to this behavior that we observed were for larger $\epsilon$, such as $\epsilon=1/10$ or sometimes $1/100$, where $J \sim \log k \log k$. From Remark \ref{rem:C_0}, we know $J$ is made up of two terms, and it makes sense that the term $\sim \log k \log k$ might become larger than the term $\sim \log k |\log \varepsilon|$ when $\varepsilon$ is large. %
\end{remark}
\begin{remark}\label{rem:h}
We also note that in our numerical verifications, we use $r_0$ as small as $h$, which is smaller than the $r_0\sim 1/k \sim h^{2/3}$ we prove the theorem with. If we used $r_0\sim h$ in the theorem, this would mean $C_0\sim N^{-1/3}<1$. By Remark \ref{rem:C_0}, this would affect $J$: the power of the $|\log\epsilon |$ factor would go from 2 to 3. Again, we do not notice such a higher power in the numerical simulations.
\end{remark}
\subsection{The numerical low-rank property of the DtN map kernel for heterogeneous media}
We would like to know as well if we can expect the half-space DtN map kernel to be numerically low-rank in heterogeneous media. We saw in section \ref{sec:BasisPf} how the half-space DtN map, in constant medium, consists of the amplitude $H(r)$, singular at $r=0$, multiplied by the complex exponential $e^{ikr}$. Because of the geometrical optics expansion \eqref{eq:geoopts} of the Green's function $G$ for the Helmholtz equation free-space problem in heterogeneous media, we expect $G$ to have an amplitude $A(\mathbf{x},\mathbf{y})$, which is singular at $\mathbf{x}=\mathbf{y}$, multiplied by a complex exponential with a phase corresponding to the traveltime between points $\mathbf{x}$ and $\mathbf{y}$: $e^{i\omega \tau(\mathbf{x},\mathbf{y})}$. We can expect to be able to treat the amplitude in the same way as we did before, and approximate it away from the singularity with a low-rank separable expansion. However, the complex exponential is harder to analyze because of the phase $\tau(\mathbf{x},\mathbf{y})$, which is not so simple as $|\mathbf{x}-\mathbf{y}|$.
However, a result of \cite{butterflyFIO} still allows us to find a separable low-rank approximation of a function such as $e^{i\omega \tau(\mathbf{x},\mathbf{y})}$. We refer the reader to Theorem 3.1 of \cite{butterflyFIO} for the details of the proof, and simply note here the main result. Let $X$, $Y$ be bounded subsets of $\mathbf{R}^2$ such that we only consider $\mathbf{x} \in X$ and $\mathbf{y} \in Y$. The width of $X$ (or $Y$) is defined as the maximal Euclidean distance between any two points in that set. Then Theorem 3.1 of \cite{butterflyFIO} states that $e^{i\omega \tau(\mathbf{x},\mathbf{y})}$, $\mathbf{x} \in X$ and $\mathbf{y} \in Y$, is numerically low-rank with rank $O(|\log \varepsilon|^4)$, given that the product of the widths of $X$ and $Y$ is less than $1/k$.
This translates into a restriction on how large the off-diagonal blocks of the matrix $D$ can be while still being low-rank. Since we use square blocks in the PLR compression algorithm, we expect blocks to have to remain smaller than $1/\sqrt{k}$, equivalent to $N/\sqrt{k}$ points, in variable media. If $N^{2/3} \sim k$ as we have in this thesis because of the pollution effect, this translates into a maximum expected number of blocks of $N^{1/3}$. If we kept instead $N \sim k$, then the maximal number of blocks would be $\sqrt{N}$.
This is why, again, using PLR matrices for compressing the DtN map makes more sense than using hierarchical matrices: the added flexibility means blocks will be divided only where needed, in other words only where the traveltime $\tau$ requires blocks to be smaller in order to have low ranks. And as we saw in section \ref{sec:usingplr}, where we presented our numerical results, ranks indeed remain very small, between 2 and 8, for off-diagonal blocks of the submatrices of the exterior DtN map, even in heterogeneous media.
\section{The half-space DtN map is separable and low rank: numerical verification}\label{sec:sepnum}
We first compute the half-space DtN map for various $k \sim N^{2/3}$, which ensures a constant error from the finite difference discretization (FD error) as we saw in section \ref{sec:compABC}. We also choose a pPML width consistent with the FD error level. Then, we compute the maximum off-diagonal ranks for various fixed separations from the diagonal, that is, various $r_0$ such that $r \geq r_0$. To compute the ranks of a block, we fix a tolerance $\epsilon$, find the Singular Value Decomposition of that block, and discard all singular values smaller than that tolerance. The number of remaining singular values is the numerical rank of that block with tolerance $\epsilon$ (the error we make in Frobenius norm is not larger than $\epsilon$). Then, the maximum off-diagonal rank for a given separation $r_0$ is the maximum rank of any block whose entries correspond to $r\geq r_0$. %
Hence we consider all blocks that have $|i-j| \geq r_0/h$, or $i-j \geq r_0/h$ with $i>j$ since the DtN map is symmetric (and so is its numerical realization, up to machine precision).
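One simple way to realize this scan, given the dense map $D$ and assuming a fixed block size (the names are ours):
\begin{verbatim}
import numpy as np

def numerical_rank(block, eps):
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.count_nonzero(s > eps))

def max_offdiag_rank(D, sep, eps, bsize):
    # Max rank over square blocks below the diagonal whose entries all
    # satisfy i - j >= sep (indices in units of the grid spacing h).
    n, best = D.shape[0], 0
    for i0 in range(0, n - bsize + 1, bsize):
        for j0 in range(0, n - bsize + 1, bsize):
            if i0 - (j0 + bsize - 1) >= sep:
                best = max(best,
                           numerical_rank(D[i0:i0+bsize, j0:j0+bsize], eps))
    return best
\end{verbatim}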
\subsection{Slow disk and vertical fault}
Figures \ref{c5-vsN-r1}, \ref{c5-vsN-r4}, \ref{c5-vsEps-r1} and \ref{c5-vsEps-r4} show the relationship between the ranks and $N$ or $\varepsilon$ for the slow disk, with an FD error of $10^{-3}$ and separations of $r_0=h$ and $r_0=4h$.
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks with $N$ for the slow disk, various $\epsilon$. FD error of $10^{-3}$, $r_0=h$.}
\label{c5-vsN-r1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsN_r4}.pdf}
\caption{Maximum off-diagonal ranks with $N$ for the slow disk, various $\epsilon$. FD error of $10^{-3}$, $r_0=4h$.}
\label{c5-vsN-r4}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks with $\varepsilon$ for the slow disk, various $N$. FD error of $10^{-3}$, $r_0=h$.}
\label{c5-vsEps-r1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c5_ompow0.66667_coefr0.125_bf0.25_a_R1em16_all_ranks_scale_vsEps_r4}.pdf}
\caption{Maximum off-diagonal ranks with $\varepsilon$ for the slow disk, various $N$. FD error of $10^{-3}$, $r_0=4h$.}
\label{c5-vsEps-r4}
\end{minipage}
\end{figure}
We expect ranks to be slightly smaller for a larger separation $r_0$ (hence larger $C_0$) because of Remark \ref{rem:C_0}. This is indeed the case in our numerical simulations, as we can see by comparing Figures \ref{c5-vsN-r1} ($r_0=h$) and \ref{c5-vsN-r4} ($r_0=4h$), or Figures \ref{c5-vsEps-r1} ($r_0=h$) and \ref{c5-vsEps-r4} ($r_0=4h$). We can clearly see also how the maximum ranks behave as in the previous theorem, except for the missing square power in $J$, as alluded to in Remark \ref{rem:highpow}: they vary logarithmically with $k$ (or $N$) when the tolerance $\epsilon$ is fixed. We expect the slope in a graph of the ranks versus $\log N$ to increase slowly as $\epsilon$ becomes smaller, and indeed the slope barely increases (from slightly smaller than $2$ to slightly larger than $2$) as $\epsilon$ goes from $10^{-1}$ to $10^{-6}$. Similarly, when we fix $N$, we expect the ranks to grow logarithmically with $1/\epsilon$, and this is the case. Once again, the slope of the graph with a logarithmic scale for $1/\epsilon$ grows, but only from $1$ to $2$ or so, as $N$ goes from $128$ to $2048$.
The off-diagonal ranks of the DtN map for the slow disk behave very similarly to the above for an FD error of $10^{-2}$, and also for various other separations $r_0$. The same is true for the vertical fault, and so we do not show those results.
\subsection{Constant medium, waveguide, diagonal fault}
As for the constant medium, waveguide and diagonal fault, it appears that the term $O(\log^2 k)$ we expect in the size of the ranks is larger than the term $O(\log k |\log\epsilon|)$, especially when the FD error is $10^{-2}$. This was mentioned in Remark \ref{rem:highpow}. As we can see in Figure \ref{c18-vsN-r1} for the diagonal fault, the dependence of the ranks on $\log N$ seems almost quadratic, not linear. This can also be seen in Figure \ref{c18-vsEps-r1}: here we still see a linear dependence of the ranks on $\log\epsilon$, but we can see that the ranks jump up more and more between different $N$, as $N$ grows, than they do for the slow disk for example (compare to Figure \ref{c5-vsEps-r1}). %
This phenomenon disappears for a smaller FD error (Figures \ref{c18-vsN-r1-FDm3}, \ref{c18-vsEps-r1-FDm3}).%
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks with $N$ for the diagonal fault, various $\epsilon$. FD error of $10^{-2}$, $r_0=h$. }
\label{c18-vsN-r1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks with $\epsilon$ for the diagonal fault, various $N$. FD error of $10^{-2}$, $r_0=h$.}
\label{c18-vsEps-r1}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.071429_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of $N$, various tolerances $\epsilon$. The separation is $r_0=h$. FD error of $10^{-3}$.}
\label{c18-vsN-r1-FDm3}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.071429_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of the tolerance $\epsilon$, various $N$. The separation is $r_0=h$. FD error of $10^{-3}$.}
\label{c18-vsEps-r1-FDm3}
\end{minipage}
\end{figure}
Finally, we also notice that the term $O(\log^2 k)$ seems to remain important compared to the term $O(\log k |\log\epsilon|)$ as the separation $r_0$ (or $C_0$) grows, as is somewhat expected from Remark \ref{rem:C_0}. This can be seen by comparing Figures \ref{c18-vsN-r8} and \ref{c18-vsN-r1}, or Figures \ref{c18-vsEps-r8} and \ref{c18-vsEps-r1}.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsN_r8}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of $N$, various tolerances $\epsilon$. The separation is $r_0=8h$. FD error of $10^{-2}$.}
\label{c18-vsN-r8}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c18_ompow0.66667_coefr0.16667_bf0.25_a_R1em16_all_ranks_scale_vsEps_r8}.pdf}
\caption{Maximum off-diagonal ranks for the diagonal fault as a function of the tolerance $\epsilon$, various $N$. The separation is $r_0=8h$. FD error of $10^{-2}$.}
\label{c18-vsEps-r8}
\end{minipage}
\end{figure}
\subsection{Focusing and defocusing media}
We have also tested a smooth defocusing medium, that is, a medium where the value of $c$ decreases away from the half-space boundary, tending to a small value far away from the boundary. The wave speed we have used is $c(x,y)=1+\frac{1}{\pi}\arctan(4(y-1/2))$. This means that as $y \rightarrow \infty$, $c \rightarrow 3/2$ and that as $y \rightarrow -\infty$, $c \rightarrow 1/2$. Choosing the half-space to be $y<0$, we see that $c$ decreases away from $y=0$ into the negative $y$'s: this is a defocusing medium. We expect the waves in this case to never come back, and so we expect the off-diagonal ranks of the DtN map to remain small, just as in the constant medium case, and this is indeed what happens. We could not see any significant difference between the defocusing medium and the constant medium in terms of off-diagonal ranks.
We have also looked at a focusing medium, that is, one in which $c$ increases away from the interface. This forces waves to come back toward the interface. With the same medium $c(x,y)=1+\frac{1}{\pi}\arctan(4(y-1/2))$ as above, but choosing now $y>1$ as our half-space, we see that $c$ increases away from $y=1$ into the large positive $y$'s. This is a focusing medium. We have noticed that the off-diagonal ranks of the DtN map for this medium are the same or barely larger than for the constant medium.
This might only mean that the medium we chose did not have many returning waves. A more interesting medium is the following:
\begin{equation}\label{eq:focus}
c(x,y)=1/2+|y-1/2|.
\end{equation}
This piecewise-linear $c$ has a first derivative bounded away from $0$. Of course, this means that solving the Helmholtz equation in this case is much harder, and in particular, the pPML layer needs to be made thicker than for other media. Still, we notice that the ranks are very similar to the previous cases, as we can see in Figures \ref{c8-vsN-r1-FD2}, \ref{c8-vsEps-r1-FD2}, \ref{c8-vsN-r1-FD3}, \ref{c8-vsEps-r1-FD3}.
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.1_bf0.25_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of $N$, various tolerances $\epsilon$. Separation is $r_0=h$, FD error of $10^{-2}$.}
\label{c8-vsN-r1-FD2}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.1_bf0.25_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of the tolerance $\epsilon$, various $N$. Separation is $r_0=h$, FD error of $10^{-2}$.}
\label{c8-vsEps-r1-FD2}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.045455_bf0.5_a_R1em16_all_ranks_scale_vsN_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of $N$, various tolerances $\epsilon$. Separation is $r_0=h$, FD error of $10^{-3}$.}
\label{c8-vsN-r1-FD3}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/ims/{u_dtn_ord1_c8_ompow0.66667_coefr0.045455_bf0.5_a_R1em16_all_ranks_scale_vsEps_r1}.pdf}
\caption{Maximum off-diagonal ranks for the focusing medium \eqref{eq:focus} as a function of the tolerance $\epsilon$, various $N$. Separation is $r_0=h$, FD error of $10^{-3}$.}
\label{c8-vsEps-r1-FD3}
\end{minipage}
\end{figure}
This could be explained by the fact that we do not go to high enough accuracies to really see those returning waves. Said another way, the relative amplitude of the returning waves might be too small to notice at lower accuracies. We were not able to construct an example with returning waves that affected the off-diagonal ranks of the DtN map. Thus we conclude that Theorem \ref{theo:sep} holds in a broad range of contexts, at least when $\epsilon$ is not very small.
\chapter{Matrix probing for expanding the Dirichlet-to-Neumann map}\label{ch:probing}
Recall that the goal of this thesis is to introduce a new compression scheme for ABCs. This scheme consists of two steps:
\begin{enumerate}
\item a precomputation sets up an expansion of the Dirichlet-to-Neumann map, then
\item a fast algorithm is used to apply the DtN map in a Helmholtz solver.
\end{enumerate}
This chapter is concerned with the first step of this procedure, namely, setting up an expansion of the exterior DtN map in a precomputation. This will pave the way for compression in step two, presented in the next chapter.
The main strategy we use in this chapter is matrix probing, which we introduce in section \ref{sec:introprobe}. For matrix probing to be an efficient expansion scheme, we need to carefully choose the basis for this expansion. We present our choices and their rationales in section \ref{sec:basis}. In particular, inverse powers multiplied by a complex exponential work quite well as kernels for the basis. We then present a detailed study of using matrix probing to expand the DtN map in a variety of media, use that expansion to solve the Helmholtz equation, and document the complexity of the method, all in section \ref{sec:numexp}. Then, we prove in section \ref{sec:BasisPf} a result on approximating the half-space DtN map with the particular set of functions mentioned before, inverse powers multiplied by a complex exponential. We also present a numerical confirmation of that result in section \ref{sec:NumPf}. %
\section{Introduction to matrix probing}\label{sec:introprobe}
The idea of matrix probing is that a matrix $D$ with adequate structure can sometimes be recovered from the knowledge of a fixed, small number of matrix-vector products $Dg_j$, where the $g_j$ are typically random vectors. In the case where $D$ is the numerical DtN map (with a slight abuse of notation), each $g_j$ consists of Dirichlet data on $\partial \Omega$, and each application $Dg_j$ requires solving an exterior Helmholtz problem to compute the derivative of the solution normal to $\partial \Omega$. We first explain how to obtain the matrix-vector product of the DtN map with any vector, without having to use the costly procedure of layer-stripping. We then introduce matrix probing.
\subsection{Setup for the exterior problem}\label{sec:ext}
Recall the exterior problem of section \ref{sec:extprob}: solving the heterogeneous-medium Helmholtz equation at frequency $\omega$, outside $\Omega=[0,1]^2$, with Dirichlet boundary condition $u=g$ on $\partial \Omega$. This problem is solved numerically with the five-point stencil of finite differences (FD), using $h$ to denote the grid spacing and $N$ the number of points across one dimension of $\Omega$. We use a Perfectly Matched Layer (PML) or pPML, introduced in section \ref{sec:layers}, as our ABC. The layer starts at a fixed, small distance away from $\Omega$, so that we keep a small strip around $\Omega$ where the equations are unchanged. Recall that the width of the layer is in general as large as $O(\omega^{1.5})$ grid points. We number the edges of $\partial \Omega$ counter-clockwise starting from $(0,0)$, hence side 1 is the bottom edge $(x,0)$, $0\leq x \leq 1$, side 2 is the right edge, etc. The exterior DtN map for this problem is defined from $\partial \Omega$ to itself. Thus its numerical realization, which we also call $D$ by a slight abuse of notation, has a $4\times 4$ block structure: the numerical DtN map $D$ has 16 sub-blocks, and is $n \times n$ where $n=4N$. As an integral kernel, $D$ would have singularities at the junctions between these blocks (due to the singularities in $\partial \Omega$), so we shall respect this feature by probing $D$ sub-block by sub-block. We shall denote a generic such sub-block by $M$, or as the $(i_M,j_M)$ sub-block of $D$, referring to its indices in the $4 \times 4$ sub-block structure.
The method by which the system for the exterior problem is solved is immaterial in the scope of this thesis, though for reference, the experiments here use UMFPACK's sparse direct solver \cite{UMFPACK}. For treating large problems, a better solver should be used, such as the sweeping preconditioner of Engquist and Ying \cite{Hsweep,Msweep}, the shifted Laplacian preconditioner of Erlangga \cite{erlangga}, the domain decomposition method of Stolk \cite{stolk}, or the direct solver with spectral collocation of Martinsson, Gillman and Barnett \cite{dirfirst,dirstab}. This in itself is a subject of ongoing research which we shall not discuss further.
For a given boundary condition $g$, we solve the system and obtain a solution $u$ in the exterior computational domain. In particular we consider $u_{1}$, the solution in the layer just outside of $\partial \Omega$. We are using the same notation as in section \ref{sec:strip}, where, as we recall, $u_0$ was the solution on the boundary; hence here $u_0=g$. We know from Section \ref{sec:strip} that $u_1$ and $g$ are related by
\begin{equation}\label{eq:D}
\frac{u_{1} - g}{h}=Dg.
\end{equation}
The matrix $D$ that this relation defines need not be interpreted as a first-order approximation of the continuous DtN map: it is the algebraic object of interest that will be ``probed'' from repeated applications to different vectors $g$.
Similarly, for probing the $(i_M,j_M)$ block $M$ of $D$, one needs matrix-vector products of $D$ with vectors $g$ of the form $[z, 0, 0, 0]^T$, $[0, z, 0, 0]^T$, etc., to indicate that the Dirichlet boundary condition is $z$ on the side indexed by $j_M$, and zero on the other sides. The application $Dg$ is then restricted to side $i_M$.
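For concreteness, here is a minimal Python sketch of this procedure. The routine \texttt{solve\_exterior} is a stand-in for whatever exterior Helmholtz solver is available (its name and signature are placeholders, not part of any particular library); the indexing assumes the $4N$ boundary points are numbered side by side, counter-clockwise as above.
\begin{verbatim}
import numpy as np

def apply_D(g, solve_exterior, h):
    """Apply the numerical DtN map to g via (u_1 - g)/h = D g.

    solve_exterior(g) is a placeholder for an exterior Helmholtz
    solve returning u_1, the solution on the first layer of grid
    points outside the boundary."""
    u1 = solve_exterior(g)
    return (u1 - g) / h

def apply_block(z, i_M, j_M, N, solve_exterior, h):
    """Apply the (i_M, j_M) sub-block M of D to a vector z of length N."""
    g = np.zeros(4 * N, dtype=complex)
    g[(j_M - 1) * N : j_M * N] = z      # Dirichlet data z on side j_M only
    Dg = apply_D(g, solve_exterior, h)
    return Dg[(i_M - 1) * N : i_M * N]  # restrict the result to side i_M
\end{verbatim}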
\subsection{Matrix probing}\label{sec:probe}
The dimensionality of $D$ needs to be limited for recovery from a few $Dg_j$ to be possible, but matrix probing is \emph{not} an all-purpose low-rank approximation technique. Instead, it is the property that $D$ has an efficient representation in some adequate pre-set basis that makes recovery from probing possible. As opposed to the randomized SVD method which requires the number of matrix-vector applications to be greater than the rank \cite{Halko-randomSVD}, matrix probing can recover interesting structured operators from a single matrix-vector application \cite{Chiu-probing, Demanet-probing}.
We now describe a model for $M$, any $N \times N$ block of $D$, that will sufficiently lower its dimensionality to make probing possible. Assume we can write $M$ as
\begin{equation}\label{eq:Dexp}
M \approx \sum_{j=1}^p c_j B_j
\end{equation}
where the $B_j$'s are fixed, known basis matrices that need to be chosen carefully in order to give an accurate approximation of $M$. In the case when the medium $c$ is uniform, we typically let $B_j$ be a discretization of the integral kernel
\begin{equation}\label{eq:Bj}
B_j(x,y)= \frac{e^{ik|x-y|}}{(h+|x-y|)^{j/2}},
\end{equation}
where again $h=1/N$ is the discretization parameter. We usually add another index to the $B_j$, and a corresponding multiplicative factor, to allow for a smooth dependence on $x+y$ as well. We shall further detail our choices and discuss their rationales in Section \ref{sec:basis}. For now, we note that the advantage of the specific choice of basis matrix \eqref{eq:Bj}, and its generalizations explained in Section \ref{sec:basis}, is that it results in accurate expansions with a number of parameters $p$ which is ``essentially independent'' of $N$, namely one that grows either logarithmically in $N$, or at most like a very sublinear fractional power law (such as $N^{0.12}$, see section \ref{sec:pwN}). This is in sharp contrast to the scaling for the layer width, $w = O(N)$ grid points, discussed earlier. The form of $B_j$ suggested in equation \eqref{eq:Bj} is motivated by the fact that such kernels provide a good expansion basis for the uniform-medium half-space DtN map in $\mathbb{R}^2$. This will be proved in section \ref{sec:BasisPf}.
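As an illustration, a discretization of the kernel \eqref{eq:Bj} takes only a few lines; in this Python sketch the grid points are placed at cell centers, which is an arbitrary choice made here for definiteness.
\begin{verbatim}
import numpy as np

def basis_matrix(j, N, k):
    """Discretize B_j(x,y) = exp(ik|x-y|) / (h + |x-y|)^(j/2)
    on an N-point grid of [0,1], with h = 1/N."""
    h = 1.0 / N
    x = h * (np.arange(N) + 0.5)         # assumed cell-centered grid
    r = np.abs(x[:, None] - x[None, :])  # |x - y| for all pairs of points
    return np.exp(1j * k * r) / (h + r) ** (j / 2.0)
\end{verbatim}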
Given a random vector $z^{(1)} \sim N(0,I_N)$ (other choices are possible), the product $w^{(1)}=Mz^{(1)}$ and the expansion \eqref{eq:Dexp}, we can now write
\begin{equation}\label{mp}
w^{(1)}=Mz^{(1)} \approx \sum_{j=1}^p c_j B_j z^{(1)} = \Psi_{z^{(1)}} \, \mathbf{c}.
\end{equation}
Multiplying this equation on the left by the pseudo-inverse of the $N$ by $p$ matrix $\Psi_{z^{(1)}}$ will give an approximation to $\mathbf{c}$, the coefficient vector for the expansion \eqref{eq:Dexp} of $M$. More generally, if several applications $w^{(j)} = M z^{(j)}$, $j = 1,\ldots, q$ are available, a larger system is formed by concatenating the $\Psi_{z^{(j)}}$ into a tall-and-thin $Nq$ by $p$ matrix ${\bm \Psi}$. The computational work is dominated, here and in other cases \cite{Chiu-probing, Demanet-probing}, by the matrix-vector products $Dg^{(j)}$, or $Mz^{(j)}$. Note that both $\Psi_{z^{(j)}}$ and the resulting coefficient vector $\mathbf{c}$ depend on the vectors $z^{(j)}$. In the sequel we let the $z^{(j)}$ be Gaussian i.i.d. random vectors.
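The following self-contained Python sketch illustrates the recovery procedure on synthetic data: it forms ${\bm \Psi}$ from $q$ Gaussian vectors and solves the resulting least-squares system. The random basis matrices here merely stand in for the $B_j$ above, so that the example runs on its own; in practice each call to \texttt{apply\_M} costs one exterior solve, which is why keeping $q$ small matters.
\begin{verbatim}
import numpy as np

def probe(apply_M, basis, q, rng):
    """Recover c with M ~ sum_j c_j B_j from q products M z^(j),
    z^(j) Gaussian i.i.d., by least squares on the stacked system."""
    N, p = basis[0].shape[0], len(basis)
    Psi_blocks, w_blocks = [], []
    for _ in range(q):
        z = rng.standard_normal(N)                  # z ~ N(0, I_N)
        Psi_blocks.append(np.column_stack([B @ z for B in basis]))
        w_blocks.append(apply_M(z))
    Psi = np.vstack(Psi_blocks)                     # Nq-by-p matrix
    w = np.concatenate(w_blocks)
    c, *_ = np.linalg.lstsq(Psi, w, rcond=None)     # pseudo-inverse solve
    return c

# Sanity check on synthetic data: M built exactly from the basis.
rng = np.random.default_rng(0)
N, p = 200, 8
basis = [rng.standard_normal((N, N)) for _ in range(p)]
c_true = rng.standard_normal(p)
M = sum(c * B for c, B in zip(c_true, basis))
c_rec = probe(lambda z: M @ z, basis, q=2, rng=rng)
print(np.linalg.norm(c_rec - c_true))               # ~ machine precision
\end{verbatim}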
In a nutshell, recovery of $\mathbf{c}$ works under mild assumptions on the $B_j$, and when $p$ is a small fraction of $Nq$ up to log factors. In order to improve the conditioning in taking the pseudo-inverse of the matrix ${\bm \Psi}$ and reduce the error in the coefficient vector $\mathbf{c}$, one may use $q > 1$ random vectors $z^{(j)}$. %
There is a limit to the range of $p$ for which this system is well-posed: past work by Chiu and Demanet \cite{Chiu-probing} covers the precise conditions on $p$, $N$, and the following two parameters, called \emph{weak condition numbers}, for which recoverability of $\mathbf{c}$ is accurate with high probability.
\begin{definition}
\emph{Weak condition number $\lambda$.}
\[ \lambda = \max_j \frac{\| B_j \|_2 \sqrt{N}}{\| B_j \|_F} \]
\end{definition}
\begin{definition}\label{kap}
\emph{Weak condition number $\kappa$.}
\[ \kappa = \mbox{cond}( {\bf B}), \ {\bf B}_{j \ell} = \mbox{Tr} \, (B_j^T B_\ell)\]
\end{definition}
It is desirable to have a small $\lambda$, which translates into a high rank condition on the basis matrices, and a small $\kappa$, which translates into a Riesz basis condition on the basis matrices. Having small weak condition numbers will guarantee a small failure probability of matrix probing and a bound on the condition number of ${\bf \Psi}$, i.e., guaranteed accuracy in solving for $\mathbf{c}$. Also, using $q > 1$ allows one to use a larger $p$, and hence to achieve greater accuracy. These results are contained in the following theorem.
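Both weak condition numbers are computable directly from the basis matrices; a minimal Python sketch (real case):
\begin{verbatim}
import numpy as np

def weak_condition_numbers(basis):
    """lambda = max_j ||B_j||_2 sqrt(N) / ||B_j||_F, and
    kappa = cond(Gram) with Gram_jl = Tr(B_j^T B_l)."""
    N = basis[0].shape[0]
    lam = max(np.linalg.norm(B, 2) * np.sqrt(N) / np.linalg.norm(B, 'fro')
              for B in basis)
    Gram = np.array([[np.trace(Bj.T @ Bl) for Bl in basis] for Bj in basis])
    return lam, np.linalg.cond(Gram)
\end{verbatim}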
\begin{theorem} (Chiu-Demanet, \cite{Chiu-probing}) Let $z$ be a Gaussian i.i.d. random vector of length $qN$, and ${\bf \Psi}$ as above. Then $\mbox{cond}({\bf \Psi}) \leq 2\kappa + 1$ with high probability provided that $p$ is not too large, namely
\[
q N \geq C \, p \, (\kappa \lambda \log N)^2,
\]
for some number $C > 0$.
\end{theorem}
As noted previously, the work necessary for probing the matrix $M$ is on the order of $q$ solves of the original problem. Indeed, computing $Mz^{(1)}, \ldots , Mz^{(q)}$ means solving the exterior problem with the AL $q$ times. This is roughly equivalent to solving the original Helmholtz problem with the AL $q$ times, assuming the AL width $w$ is at least as large as $N$. Then, computing the $qp$ products of the $p$ basis matrices with the $q$ random vectors amounts to a total of at most $qpN^2$ work, or less if the basis matrices have a fast matrix-vector product. And finally, computing the pseudo-inverse of ${\bf \Psi}$ has cost $Nqp^2$. Hence, as long as $p,q \ll N$, the dominant cost of matrix probing\footnote{We will see later that we also need to perform a QR factorization on the basis matrices, and this has cost $N^2p^2$. This precomputation has a cost similar to, or smaller than, that of an exterior solve using current Helmholtz solvers. It might also be possible to avoid the QR factorization altogether if basis matrices closer to orthonormal are used.} comes from solving the exterior problem $q$ times with a random Dirichlet boundary condition. In our experiments, $q=O(1)$ and $p$ can be as large as a few hundred for high accuracy.
Finally, we note that the information from the $q$ solves can be re-used for any other block which is in the same block column as $M$. However, if we need to probe blocks of $D$ which are not all in the same block column, then another $q$ solves need to be performed, with a Dirichlet boundary condition on the appropriate side of $\partial \Omega$. This of course increases the total number of solves. Another option would be to probe all of $D$ at once, using a combination of basis matrices that have the same size as $D$, but that are 0 except on the support of each distinct block in turn. In this case, $\kappa$ remains the same because we still orthogonalize our basis matrices, but $\lambda$ doubles ($\| B_j \|_2 $ and $\| B_j \|_F$ do not change but $N \rightarrow 4N$) and this makes the conditioning worse; in particular a higher value of $q$ is needed for the same accuracy, given by $p$. Hence we have decided not to investigate this approach further, though it might become more advantageous in the case of a more complicated polygonal domain.
\subsection{Solving the Helmholtz equation with a compressed ABC}
Once we have obtained approximations $\tilde{M}$ of each block $M$ in compressed form through the coefficients $\mathbf{c}$ using matrix probing, we construct block by block the approximation $\tilde{D}$ of $D$ and use it in a solver for the Helmholtz equation on the domain $\Omega=[0,1]^2$, with the boundary condition
$$\frac{\partial u}{\partial \nu} = \tilde{D}u , \qquad x \in \partial \Omega.$$
\section{Choice of basis matrices for matrix probing}\label{sec:basis}
The essential information of the DtN map needs to be summarized in broad strokes in the basis matrices $B_j$, with the details of the numerical fit left to the probing procedure. In the case of $D$, most of its physics is contained in its \emph{diagonal singularity} and \emph{oscillations}, as predicted by geometrical optics.
A heuristic argument to obtain the form of $D$ starts from the Green's formula \eqref{eq:GRF}, which we differentiate one more time in the normal direction. After accounting for the correct jump condition, we get an alternative Steklov-Poincar\'e identity, namely
\[
D = (T^* + \frac{1}{2} I)^{-1} H,
\]
where $H$ is the hypersingular integral operator with kernel $\frac{\partial^2 G}{\partial \nu_{\mathbf{x}} \partial \nu_{\mathbf{y}}}$; here $G(\mathbf{x},\mathbf{y})$ is again the free-space Green's function, and $\nu_{\mathbf{x}}$, $\nu_{\mathbf{y}}$ are the normals to $\partial \Omega$ at $\mathbf{x}$ and $\mathbf{y}$ respectively. The presence of $(T^* + \frac{1}{2} I)^{-1}$ is somewhat inconsequential to the form of $D$, as it involves solving a well-posed second-kind integral equation. As a result, the properties of $D$ are qualitatively similar to those of $H$. (The exact construction of $D$ from $G$ is of course already known in a few special cases, such as the uniform medium half-space problem considered earlier.)
\subsection{Oscillations and traveltimes for the DtN map}
Geometrical optics will reveal the form of $G$. In a context where there is no multi-pathing, that is, where there is a single traveltime $\tau(\mathbf{x},\mathbf{y})$ between any two points $\mathbf{x},\mathbf{y} \in \Omega$, one may write a high-$\omega$ asymptotic series for $G$ as
\begin{equation}\label{eq:geoopts}
G(\mathbf{x},\mathbf{y}) \sim e^{i\omega \tau(\mathbf{x},\mathbf{y})} \sum_{j\geq 0} A_j(\mathbf{x},\mathbf{y}) \omega^{-j},
\end{equation}
where $\tau(\mathbf{x},\mathbf{y})$ is the traveltime between points $\mathbf{x}$ and $\mathbf{y}$, found by solving the Eikonal equation
\begin{equation} \label{eq:tau}
\| \nabla_{\mathbf{x}} \tau(\mathbf{x},\mathbf{y}) \| = \frac{1}{c(\mathbf{x})},
\end{equation}
and the amplitudes $A_j$ satisfy transport equations. In the case of multi-pathing (possible multiple traveltimes between any two points), the representation \eqref{eq:geoopts} of $G$ becomes instead
\[
G(\mathbf{x},\mathbf{y}) \sim \sum_j e^{ i \omega \tau_j(\mathbf{x},\mathbf{y})} \sum_{k \geq 0} A_{jk}(\mathbf{x},\mathbf{y}) \omega^{-k},
\]
where the $\tau_j$'s are the traveltimes, each obeying \eqref{eq:tau} away from caustic curves. The amplitudes are singular at caustic curves in addition to the diagonal $\mathbf{x}=\mathbf{y}$, and contain the information of the Maslov indices. Note that traveltimes are symmetric: $\tau_j(\mathbf{x},\mathbf{y})=\tau_j(\mathbf{y},\mathbf{x})$, and so is the kernel of $D$.
The singularity of the amplitude factor in \eqref{eq:geoopts}, at $\mathbf{x} = \mathbf{y}$, is $O \left( \log | \mathbf{x} - \mathbf{y}| \right)$ in 2D and $O \left( | \mathbf{x} - \mathbf{y} |^{-1} \right)$ in 3D. After differentiating twice to obtain $H$, the homogeneity on the diagonal becomes $O \left( | \mathbf{x} - \mathbf{y}|^{-2} \right)$ in 2D and $O \left( | \mathbf{x} - \mathbf{y} |^{-3} \right)$ in 3D. For the decay at infinity, the scalings are different and can be obtained from Fourier analysis of square root singularities; the kernel of $H$ decays like $O \left(| \mathbf{x} - \mathbf{y}|^{-3/2} \right)$ in 2D, and $O \left(| \mathbf{x} - \mathbf{y}|^{-5/2} \right)$ in 3D. In between, the amplitude is smooth as long as the traveltime is single-valued.
As mentioned before, much more is known about DtN maps, such as boundedness and coercivity theorems. Again, we did not attempt to leverage these properties of $D$ in the scheme presented here.
For all these reasons, we define the basis matrices $B_j$ as follows. Assume $\tau$ is single-valued. In 1D, denote the tangential component of $\mathbf{x}$ by $x$, and similarly that of $\mathbf{y}$ by $y$, in coordinates local to each edge with $0 \leq x,y \leq 1$. Each block $M$ of $D$ relates to a couple of edges of the square domain. Let $j = (j_1, j_2)$ with $j_1, j_2$ nonnegative integers. The general forms that we consider are
\[
\beta_j(x,y) = e^{i \omega \tau(x,y)} (h + |x-y|)^{-\frac{j_1}{\alpha}} (h + \theta(x,y))^{-\frac{j_2}{\alpha}}
\]
and
\[
\beta_j(x,y) = e^{i \omega \tau(x,y)} (h + |x-y|)^{-\frac{j_1}{\alpha}} (h + \theta(x,y))^{j_2},
\]
where again $h$ is the grid spacing of the FD scheme, and $\theta(x,y)$ is an adequate function of $x$ and $y$ that depends on the particular block of interest. The more favorable choices for $\theta$ are those that respect the singularities created at the vertices of the square; we typically let $\theta(x,y) = \min(x+y, 2-x-y)$. The parameter $\alpha$ can be taken to be equal to 2, a good choice in view of the numerics and in the light of the asymptotic behaviors on the diagonal and at infinity discussed earlier.
If several traveltimes are needed for geometrical reasons, then different sets of $\beta_j$ are defined for each traveltime. (More about this in the next subsection.) The $B_j$ are then obtained from the $\beta_j$ by QR factorization within each block\footnote{Whenever a block of $D$ has symmetries, we enforce those in the QR factorization by using appropriate weights on a subset of the entries of that block. This also reduces the complexity of the QR factorization.}, where orthogonality is defined in the sense of the Frobenius inner product $\< A, B \> = \mbox{tr}(A B^T)$. This automatically sets the $\kappa$ number of probing to 1.
In many of our test cases it appears that the ``triangular'' condition $j_1 + 2 j_2 < $ \emph{constant} works well. The number of couples $(j_1,j_2)$ satisfying this relation will be $p/T$, where $p$ is the number of basis matrices in the matrix probing algorithm and $T$ is the number of distinct traveltimes. The eventual ordering of the basis matrices $B_j$ respects the increase of $j_1 + 2 j_2$.
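In practice the QR step amounts to a QR factorization of the vectorized $\beta_j$; here is a minimal Python sketch (real case, ignoring the symmetry weights of the footnote), after which the Gram matrix is the identity and hence $\kappa=1$ by construction.
\begin{verbatim}
import numpy as np

def orthonormalize(betas):
    """Orthonormalize basis matrices in the Frobenius inner product
    <A,B> = Tr(A B^T) by QR factorization of their vectorizations."""
    shape = betas[0].shape
    V = np.column_stack([b.ravel() for b in betas])  # N^2-by-p matrix
    Q, _ = np.linalg.qr(V)                           # orthonormal columns
    return [Q[:, j].reshape(shape) for j in range(V.shape[1])]
\end{verbatim}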
\subsection{More on traveltimes}\label{sec:tt}
Determining the traveltime(s) $\tau(\mathbf{x},\mathbf{y})$ is the more ``supervised'' part of this method, but is needed to keep the number $p$ of parameters small in the probing expansion. A few different scenarios can arise.
\begin{itemize}
\item In the case when $\nabla c(\mathbf{x})$ is perpendicular to a straight segment of the boundary, locally, then this segment is itself a ray and the waves can be labeled as interfacial, or ``creeping''. The direct traveltime between any two points $\mathbf{x}$ and $\mathbf{y}$ on this segment is then simply given by the line integral of $1/c(\mathbf{x})$. An infinite sequence of additional interfacial waves results from successive reflections at the endpoints of the segment, with traveltimes predicted as follows.
We still consider the exterior problem for $[0,1]^2$. We are interested in the traveltimes between points $\mathbf{x}, \mathbf{y}$ on the same side of $\partial \Omega$ -- for illustration, let $\mathbf{x}=(x,0)$ and $\mathbf{y}=(y,0)$ on the bottom side of $\Omega=[0,1]^2$, with $x \leq y$ (this is sufficient since traveltimes are symmetric). Assume that all the waves are interfacial. The first traveltime $\tau_1$ corresponds to the direct path from $\mathbf{x}$ to $\mathbf{y}$. The second arrival time $\tau_2$ will be the minimum traveltime corresponding to: either starting at $\mathbf{x}$, going left, reflecting off of the $(0,0)$ corner, and coming back along the bottom side of $\partial \Omega$, past $\mathbf{x}$, to finally reach $\mathbf{y}$; or starting at $\mathbf{x}$, going past $\mathbf{y}$, reflecting off of the $(1,0)$ corner, and coming straight back to $\mathbf{y}$. The third arrival time $\tau_3$ is the maximum of those two choices. The fourth arrival time then corresponds to starting at $\mathbf{x}$, going left, reflecting off of the $(0,0)$ corner, travelling all the way to the $(1,0)$ corner, and then back to $\mathbf{y}$. The fifth arrival time corresponds to leaving $\mathbf{x}$, going to the $(1,0)$ corner this time, then back to the $(0,0)$ corner, then on to $\mathbf{y}$. And so on. To recap, we have the following formulas:
\begin{eqnarray*}
\tau_1(\mathbf{x},\mathbf{y})&=& \int_x^y \frac{1}{c(t,0)} \ dt, \\
\tau_2(\mathbf{x},\mathbf{y})&=& \tau_1(\mathbf{x},\mathbf{y}) + 2\min \left( \int_0^x \frac{1}{c(t,0)} \ dt, \int_y^1 \frac{1}{c(t,0)} \ dt \right), \\
\tau_3(\mathbf{x},\mathbf{y})&=& \tau_1(\mathbf{x},\mathbf{y}) + 2\max \left( \int_0^x \frac{1}{c(t,0)} \ dt, \int_y^1 \frac{1}{c(t,0)} \ dt \right) = 2\int_0^1 \frac{1}{c(t,0)} \ dt - \tau_2(\mathbf{x},\mathbf{y}), \\
\tau_4(\mathbf{x},\mathbf{y})&=& 2\int_0^1 \frac{1}{c(t,0)} \ dt - \tau_1(\mathbf{x},\mathbf{y}), \\
\tau_5(\mathbf{x},\mathbf{y})&=& 2\int_0^1 \frac{1}{c(t,0)} \ dt + \tau_1(\mathbf{x},\mathbf{y}), \qquad \mbox{etc.} \\
\end{eqnarray*}
The first five traveltimes can all be expressed as a sum of $\pm \tau_1$, $\pm \tau_2$ and the constant phase $2\int_0^1 \frac{1}{c(t,0)} \ dt$, which does not depend on $\mathbf{x}$ or $\mathbf{y}$. In fact, one can see that any subsequent traveltime corresponding to traveling solely along the bottom boundary of $\partial \Omega$ is again a combination of those quantities. This means that if we use $\pm \tau_1$ and $\pm \tau_2$ in our basis matrices, we are capturing all the traveltimes relative to a single side, which helps to obtain higher accuracy for probing the diagonal blocks of $D$ (a numerical sketch of these traveltime formulas appears after this list).
This simple analysis can be adapted to deal with creeping waves that start on one side of the square and terminate on another side, which is important for the nondiagonal blocks of $D$.
\item In the case when $c(\mathbf{x})$ increases outward in a smooth fashion, we are also often in the presence of body waves, going off into the exterior and coming back to $\partial \Omega$. The traveltime for these waves needs to be computed either by a Lagrangian method (solving the ODE for the rays), or by an Eulerian method (solving the Eikonal PDE shown earlier). Here we used the fast marching method of Sethian \cite{sethart} to deal with these waves in the case that we label ``slow disk'' in the next section.
\item In the case when $c(\mathbf{x})$ has singularities in the exterior domain, each additional reflection creates a traveltime that should (ideally) be predicted. Such is the case of the ``diagonal fault'' example introduced in the next section, where a straight jump discontinuity of $c(\mathbf{x})$ intersects $\partial \Omega$ at a non-normal angle: we can construct by hand the traveltime corresponding to a path leaving the boundary at $\mathbf{x}$, reflecting off of the discontinuity and coming back to the boundary at $\mathbf{y}$. More precisely, we consider again $\mathbf{x}=(x,0)$, $\mathbf{y}=(y,0)$ and $x \leq y$, with $x$ larger than or equal to the $x$ coordinate of the point where the reflector intersects the bottom side of $\partial \Omega$. We then reflect the point $\mathbf{y}$ across the discontinuity into the new point $\mathbf{y}'$, and calculate the Euclidean distance between $\mathbf{x}$ and $\mathbf{y}'$. To obtain the traveltime, we then divide this distance by the value $c(\mathbf{x})=c(\mathbf{y})$ of $c$ on the right side of the discontinuity, assuming that value is constant. This body traveltime is used in the case of the ``diagonal fault'', replacing the quantity $\tau_2$ that was described above. This increased accuracy by an order of magnitude, as mentioned in the numerical results of the next section.
\end{itemize}
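As announced in the first bullet point, the interfacial traveltime formulas reduce to line integrals of $1/c$ along the side; the following Python sketch evaluates $\tau_1$ through $\tau_5$ with the trapezoidal rule (the number of quadrature points \texttt{M} is an arbitrary choice made here).
\begin{verbatim}
import numpy as np

def interfacial_traveltimes(x, y, c_bottom, M=2000):
    """First five interfacial arrival times between (x,0) and (y,0),
    x <= y, per the formulas above; c_bottom(t) samples c(t,0)."""
    def path(a, b):          # line integral of 1/c from (a,0) to (b,0)
        t = np.linspace(a, b, M)
        f = 1.0 / c_bottom(t)
        return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(t))  # trapezoid rule
    tau1 = path(x, y)                          # direct path
    left, right = path(0.0, x), path(y, 1.0)   # detours to each corner
    full = path(0.0, 1.0)                      # once across the side
    tau2 = tau1 + 2.0 * min(left, right)
    tau3 = 2.0 * full - tau2
    tau4 = 2.0 * full - tau1
    tau5 = 2.0 * full + tau1
    return tau1, tau2, tau3, tau4, tau5

# Uniform-medium check: c = 1 gives (0.5, 0.9, 1.1, 1.5, 2.5).
print(interfacial_traveltimes(0.2, 0.7, lambda t: np.ones_like(t)))
\end{verbatim}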
\section{Numerical experiments}\label{sec:numexp}
Our benchmark media $c(\mathbf{x})$ are as follows:
\begin{enumerate}
\item a uniform wave speed of 1, $c \equiv 1$ (Figure \ref{c1}),
\item a ``Gaussian waveguide" (Figure \ref{wg}),
\item a ``Gaussian slow disk" (Figure \ref{slow}) large enough to encompass $\Omega$ -- this will cause some waves going out of $\Omega$ to come back in,
\item a ``vertical fault" (Figure \ref{fault}),
\item a ``diagonal fault" (Figure \ref{diagfault}),
\item and a discontinuous periodic medium (Figure \ref{period}). The periodic medium consists of square holes of velocity 1 in a background of velocity $1/\sqrt{12}$.
\end{enumerate}
\begin{figure}[H]
\begin{minipage}[t]{0.30\linewidth}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c1med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the uniform medium.}\label{c1}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c3med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the Gaussian waveguide.}\label{wg}
\end{minipage}
\begin{minipage}[t]{0.05\linewidth}
\end{minipage}
\begin{minipage}[t]{0.30\linewidth}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c5med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the Gaussian slow disk.}\label{slow}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c16med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the vertical fault.}\label{fault}
\end{minipage}
\begin{minipage}[t]{0.05\linewidth}
\end{minipage}
\begin{minipage}[t]{0.30\linewidth}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c18med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the diagonal fault.}\label{diagfault}
\includegraphics[scale=.33,trim = 7mm 2mm 15mm 0mm, clip]{./figs/images/c33med.pdf}
\caption{Color plot of $c(\mathbf{x})$ for the periodic medium.}\label{period}
\end{minipage}
\end{figure}
All media used are continued in the obvious way outside of the domain in which they are shown in the figures, if needed (i.e., they are \emph{not} set to a uniform constant). The outline of the $[0,1]^2$ box is shown in black.
We can use a standard Helmholtz equation solver to estimate the relative error in the Helmholtz equation solution caused by the Finite Difference discretization (the \emph{FD error}\footnote{To find this FD error, we use a large pseudo-PML, and compare the solution $u$ for different values of $N$. What we call the FD error is the relative $\ell_2$ error in $u$ inside $\Omega$.}), and also the error caused by using the specified pPML width\footnote{To obtain the error caused by the absorbing layer, we fix $N$ and compare the solution $u$ for different layer widths $w$, and calculate the relative $\ell_2$ error in $u$ inside $\Omega$.}. Those errors are presented in Table \ref{FDPMLerr}, along with the main parameters used in the remainder of this section, including the position of the point source or right-hand side $f$. We note that, whenever possible, we try to use an AL with error smaller than the precision we seek with matrix probing, that is, with a width $w$ greater than that shown in Table \ref{FDPMLerr}. This makes probing easier, i.e., $p$ and $q$ can be smaller.
\begin{table}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
Medium &$N$ &$\omega/2\pi$ &FD error &$w$ &$P$ &Source position \\ \hline
$c \equiv 1$ &1023 &51.2 &{$2.5e-01$} &{$4$} &8 &$(0.5,0.25)$ \\ \hline
waveguide &1023 &51.2 &{$2.0e-01$} &{$4$} &56 &$(0.5,0.5)$ \\ \hline
slow disk &1023 &51.2 &{$1.8e-01$} &{$4$} &43 &$(0.5,0.25)$ \\ \hline
fault, left source &1023 &51.2 &{$1.1e-01$} &{$4$} &48 &$(0.25,0.5)$ \\ \hline
fault, right source &1023 &51.2 &{$2.2e-01$} &{$4$} &48 &$(0.75,0.5)$ \\ \hline
diagonal fault &1023 &51.2 &{$2.6e-01$} &{$256$} &101 &$(0.5,0.5)$ \\ \hline
periodic medium &319 &6 &{$1.0e-01$} &{$1280$} &792 &$(0.5,0.5)$ \\ \hline
\end{tabular}
\end{center}
\caption{For each medium considered, we show the parameters $N$ and $\omega/2\pi$, along with the resulting discretization error caused by the Finite Difference (FD error) formulation. We also show the width $w$ of the pPML needed, in number of points, to obtain an error caused by the pPML of less than $1e-1$. Furthermore, we show the total number $P$ of basis matrices needed to probe the entire DtN map with an accuracy of about $1e-1$ as found in Section \protect\ref{sec:tests}. Finally, we show the position of the point source used in calculating the solution $u$.}
\label{FDPMLerr}
\end{table}
Consider now a block $M$ of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. We note that some blocks in $D$ are the same up to transpositions or flips (inverting the order of columns or rows) if the medium $c$ has symmetries.
\begin{definition}
\emph{Multiplicity of a block of $D$.} Let $M$ be a block of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. The \emph{multiplicity} $m(M)$ of $M$ is the number of copies of $M$ appearing in $D$, up to transpositions or flips.
\end{definition}
Only the distinct blocks of $D$ need to be probed. Once we have chosen a block $M$, we may calculate the \emph{true probing coefficients}.
\begin{definition}
\emph{True probing coefficients of block $M$.} Let $M$ be a block of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. Assume orthonormal probing basis matrices $\left\{B_j \right\}$. The true coefficients $c^t_j$ in the probing expansion of $M$ are the inner products $c^t_j = \< B_j, M\>$.
\end{definition}
We may now define the \emph{$p$-term approximation error} for the block $M$.%
\begin{definition}
\emph{The $p$-term approximation error of block $M$.} Let $M$ be a block of $D$, corresponding to the restriction of $D$ to two sides of $\partial \Omega$. For orthonormal probing basis matrices $\left\{B_j \right\}$, we have the true coefficients $c^t_j$ in the probing expansion of $M$. Let $M_p =\sum_{j=1}^p c^t_j B_j$ be the probing $p$-term approximation to $M$. The $p$-term approximation error for $M$ is
\begin{equation}\label{apperr}
\sqrt{m(M)} \frac{\|M-M_p\|_F}{\|D\|_F},
\end{equation}
using the matrix Frobenius norm.
\end{definition}
Because the blocks on the diagonal of $D$ have a singularity, their Frobenius norm can be a few orders of magnitude greater than that of other blocks, and so it is more important to approximate those well. This is why we consider the error relative to $D$, not to the block $M$, in the $p$-term approximation error. Also, we multiply by the square root of the multiplicity of the block to give us a better idea of how big the total error on $D$ will be. For brevity, we shall refer to (\ref{apperr}) simply as the approximation error when it is clear from the context what $M$, $p$, $\left\{B_j\right\}$ and $D$ are.
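Given orthonormal basis matrices, the true coefficients and the $p$-term approximation error \eqref{apperr} are computed directly, as in this Python sketch (here with the conjugate Frobenius inner product, appropriate for complex blocks):
\begin{verbatim}
import numpy as np

def p_term_error(M, basis, D_fro, multiplicity):
    """True coefficients c_j = <B_j, M> and the error
    sqrt(m(M)) * ||M - M_p||_F / ||D||_F for orthonormal B_j."""
    c_true = np.array([np.vdot(B, M) for B in basis])  # vdot conjugates B
    M_p = sum(c * B for c, B in zip(c_true, basis))
    err = np.sqrt(multiplicity) * np.linalg.norm(M - M_p) / D_fro
    return c_true, err
\end{verbatim}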
Then, using matrix probing, we will recover a coefficient vector $\mathbf{c}$ close to $\mathbf{c}^t$, which gives an approximation $\tilde{M}=\sum_{j=1}^p c_j B_j$ to $M$. %
We now define the \emph{probing error} (which depends on $q$ and the random vectors used), for the block $M$.%
\begin{definition}
\emph{Probing error of block $M$.} Let $\mathbf{c}$ be the probing coefficients for $M$ obtained with $q$ random realizations $z^{(1)}$ through $z^{(q)}$. Let $\tilde{M}=\sum_{j=1}^p c_j B_j$ be the probing approximation to $M$. The probing error of $M$ is
\begin{equation}\label{acterr}
\sqrt{m(M)}\frac{\|M-\tilde{M}\|_F}{\|D\|_F}.
\end{equation}
\end{definition}
Again, for brevity, we refer to (\ref{acterr}) as the probing error when other parameters are clear from the context. Once all distinct blocks of $D$ have been probed, we can consider the \emph{total probing error}.
\begin{definition}
\emph{Total probing error.} The total probing error is defined as the total error made on $D$ by concatenating all probed blocks $\tilde{M}$ to produce an approximate $\tilde{D}$, and is equal to
\begin{equation}\label{eq:Derr}
\frac{\|D-\tilde{D}\|_F}{\|D\|_F}.
\end{equation}
\end{definition}
In order to get a point of reference for the accuracy benchmarks, for small problems only, the actual matrix $D$ is computed explicitly by solving the exterior problem $4N$ times using the standard basis as Dirichlet boundary conditions, and from this we can calculate \eqref{eq:Derr} exactly. For larger problems, we only have access to a black-box that outputs the product of $D$ with some input vector by solving the exterior problem. We can then estimate \eqref{eq:Derr} by comparing the products of $D$ and $\tilde{D}$ with a few random vectors different from those used in matrix probing.
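A sketch of this estimation: since the expected value of $\|Az\|_2^2$ over a Gaussian vector $z$ equals $\|A\|_F^2$, a few fresh random probes give a consistent estimate of \eqref{eq:Derr}. The function handles below are placeholders for one exterior solve ($Dz$) and one fast application of the probed map ($\tilde{D}z$).
\begin{verbatim}
import numpy as np

def estimate_relative_error(apply_D, apply_Dt, n, trials=5, seed=1):
    """Monte Carlo estimate of ||D - Dtilde||_F / ||D||_F from a few
    products with Gaussian vectors not used during probing."""
    rng = np.random.default_rng(seed)
    num = den = 0.0
    for _ in range(trials):
        z = rng.standard_normal(n)
        Dz = apply_D(z)              # one exterior solve per probe
        num += np.linalg.norm(Dz - apply_Dt(z)) ** 2
        den += np.linalg.norm(Dz) ** 2
    return np.sqrt(num / den)
\end{verbatim}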
We shall present results on the approximation and probing errors for various media, along with related condition numbers, and then we shall verify that using an approximate $\tilde{D}$ (constructed from approximate $\tilde{M}$'s for each block $M$ in $D$) does not affect the accuracy of the new solution to the Helmholtz equation, using the \emph{solution error from probing}.
\begin{definition}
\emph{Solution error from probing.} Once we have obtained an approximation $\tilde{D}$ to $D$ from probing the distinct blocks of $D$, we may use this $\tilde{D}$ in a Helmholtz solver to obtain an approximate solution $\tilde{u}$, and compare that to the true solution $u$ using $D$ in the solver. The solution error from probing is the $\ell_2$ error on $u$ inside $\Omega$:
\begin{equation}\label{eq:solerr}
\frac{\|u-\tilde{u}\|_2}{\|u\|_2} \text{ in } \Omega.
\end{equation}
\end{definition}
\subsection{Probing tests}\label{sec:tests}
As we saw in Section \ref{sec:probe}, randomness plays a role in the value of $\mbox{cond}({\bf \Psi})$ and of the probing error. Hence, whenever we show plots for those quantities in this section, we have done 10 trials for each value of $q$ used. The error bars show the minimum and maximum of the quantity over the 10 trials, and the line is plotted through the average value over the 10 trials. As expected, we will see in all experiments that increasing $q$ gives a better conditioning, and consequently a better accuracy and smaller failure probability. The following probing results will then be used in Section \ref{sec:insolver} to solve the Helmholtz equation.
\subsubsection{Uniform medium}
For a uniform medium, $c \equiv 1$, we have three blocks with the following multiplicities: $m((1,1))=4$ (same edge), $m((2,1))=8$ (neighboring edges), and $m((3,1))=4$ (opposite edges). Note that we do not present results for the $(3,1)$ block: this block has negligible Frobenius norm\footnote{We can use probing with $q=1$ and a single basis matrix (a constant multiplied by the correct oscillations) and have a probing error of less than $10^{-6}$ for that block.} compared to $D$. First, let us look at the conditioning for blocks $(1,1)$ and $(2,1)$. Figures \ref{cond11_1024_c1} and \ref{cond21_1024_c1} show the three relevant conditioning quantities, $\kappa$, $\lambda$ and $\mbox{cond}({\bf \Psi})$, for each block. As expected, $\kappa=1$ because we orthogonalize the basis functions. Also, we see that $\lambda$ does not grow very much as $p$ increases; it remains on the order of 10. As for $\mbox{cond}({\bf \Psi})$, it increases as $p$ increases for fixed $q$ and $N$, as expected. This will affect probing in terms of the failure probability (the odds that the matrix ${\bf \Psi}$ is far from its expected value) and accuracy (taking the pseudo-inverse will introduce larger errors in $\mathbf{c}$). We notice these two phenomena in Figure \ref{erb1023_c1}, where we show the approximation and probing errors in probing the $(1,1)$ block for various $p$, using different $q$ and making 10 tests for each $q$ value as explained previously. As expected, as $p$ increases, the variations between trials get larger. Also, the probing error, always larger than the approximation error, drifts farther and farther away from the approximation error. Comparing Figure \ref{erb1023_c1} with Table \ref{c1solve} of the next section, we see that in Table \ref{c1solve} we are able to achieve higher accuracies. This is because we use the first two traveltimes (so four different types of oscillations, as explained in Section \ref{sec:basis}) to obtain those higher accuracies. We do not use four types of oscillations for lower accuracies because this demands a larger number of basis matrices $p$ and of solves $q$ for the same error level.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/condnums-c1-b1-med.pdf}
\caption{Condition numbers for the $(1,1)$ block, $c\equiv 1$.}
\label{cond11_1024_c1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/condnums-c1-b2-med-alt.pdf}
\caption{Condition numbers for the $(2,1)$ block, $c\equiv 1$.}
\label{cond21_1024_c1}
\end{minipage}
\end{figure}
\subsubsection{The waveguide}
For a waveguide as a velocity field, we have more blocks compared to the uniform medium case, with different multiplicities: $m((1,1))=2$, $m((2,2))=2$, $m((2,1))=8$, $m((3,1))=2$, $m((4,2))=2$. Note that block $(2,2)$ will be easier to probe than block $(1,1)$ since the medium is smoother on that interface. Also, we can probe blocks $(3,1)$ and $(4,2)$ with $q=1$, $p=2$ and have a probing error less than $10^{-7}$. Hence we only show results for the probing and approximation errors of blocks $(1,1)$ and $(2,1)$, in Figure \ref{erb1023_c2}. Results for using probing in a solver can be found in Section \ref{sec:insolver}.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c1-blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the blocks of $D$, $c\equiv 1$. Circles are for $q=3$, squares for $q=5$, stars for $q=10$. }
\label{erb1023_c1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c3-blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the blocks of $D$, $c$ is the waveguide. Circles are for $q=3$, squares for $q=5$, stars for $q=10$.}
\label{erb1023_c2}
\end{minipage}
\end{figure}
\subsubsection{The slow disk}
Next, we consider the slow disk. Here, we have a choice to make for the traveltime upon which the oscillations depend. We may consider interfacial waves, traveling in straight line segments along $\partial \Omega$, with traveltime $\tau$. There is also the first arrival time of body waves, $\tau_f$, which for some points on $\partial \Omega$ involves taking a path that goes away from $\partial \Omega$, into the exterior where $c$ is higher, and back towards $\partial \Omega$. We have approximated this $\tau_f$ using the fast marching method of Sethian \cite{sethart}. For this example, it turns out that using either $\tau$ or $\tau_f$ to obtain oscillations in our basis matrices does not significantly alter the probing accuracy or conditioning, although it does seem that, for higher accuracies at least, the fast marching traveltime makes convergence slightly faster. Figures \ref{c5fastvsnorm1} and \ref{c5fastvsnorm2} demonstrate this for blocks $(1,1)$ and $(2,1)$ respectively. We omit plots of the probing and approximation errors, and refer the reader to Section \ref{sec:insolver} for final probing results and their use in a solver.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{fast_tau_b1}.pdf}
\caption{Approximation error for the $(1,1)$ block of $D$, $c$ is the slow disk, comparing the use of the normal traveltime (circles) to the fast marching traveltime (squares).}
\label{c5fastvsnorm1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{fast_tau_b2}.pdf}
\caption{Approximation error for the $(2,1)$ block of $D$, $c$ is the slow disk, comparing the use of the normal traveltime (circles) to the fast marching traveltime (squares).}
\label{c5fastvsnorm2}
\end{minipage}
\end{figure}
\subsubsection{The vertical fault}
Next, we look at the case of the medium $c$ which has a vertical fault. We note that this case is harder because some of the blocks will themselves have a 2 by 2 or 1 by 2 structure caused by the discontinuity in the medium. Ideally, as we shall see, each sub-block should be probed separately. There are 7 distinct blocks, with different multiplicities: $m((1,1))=2$, $m((2,2))=1$, $m((4,4))=1$, $m((2,1))=4$, $m((4,1))=4$, $m((3,1))=2$, $m((4,2))=2$. Blocks $(2,2)$ and $(4,4)$ are easier to probe than block $(1,1)$ because they do not exhibit a sub-structure. Also, since the velocity is smaller on the right side of the fault, the spatial frequency there is higher, which means that blocks involving side 2 are slightly harder to probe than those involving side 4. Hence we first present results for the blocks $(1,1)$, $(2,2)$ and $(2,1)$ of $D$. In Figure \ref{erb1023_c16} we see the approximation and probing errors for those blocks. Then, in Figure \ref{erb1023_c16_sub}, we present results for the errors related to probing the 3 distinct sub-blocks of the $(1,1)$ block of $D$. We can see that probing the $(1,1)$ block by sub-blocks helps achieve greater accuracy. We could have split other blocks too to improve the accuracy of their probing (for example, block $(2,1)$ has a 1 by 2 structure because side 1 has a discontinuity in $c$), but the accuracy of the overall DtN map was still limited by the accuracy of probing the $(1,1)$ block, so we do not show results for other splittings.
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c16-blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the blocks of $D$, $c$ is the fault. Circles are for $q=3$, squares for $q=5$, stars for $q=10$.}
\label{erb1023_c16}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/{c16-sub_blocks-bars-q-3-5-10-larg}.pdf}
\caption{Approximation error (line) and probing error (with markers) for the sub-blocks of the $(1,1)$ block of $D$, $c$ is the fault. Circles are for $q=3$, squares for $q=5$, stars for $q=10$.}
\label{erb1023_c16_sub}
\end{minipage}
\end{figure}
\subsubsection{The diagonal fault}
Now, we look at the case of the medium $c$ which has a diagonal fault. Again, some of the blocks will themselves have a 2 by 2 or 1 by 2 structure. There are 6 distinct blocks, with different multiplicities: $m((1,1))=2$, $m((2,2))=2$, $m((2,1))=4$, $m((4,1))=2$, $m((3,2))=2$, $m((3,1))=4$. Again, we split up block $(1,1)$ into 4 sub-blocks and probe each of those sub-blocks separately for greater accuracy, but do not split other blocks. We then use two traveltimes for the $(2,2)$ sub-block of block $(1,1)$. Using as the second arrival time the geometrical traveltime consisting of leaving the boundary and bouncing off the fault, as mentioned in Section \ref{sec:tt}, allowed us to increase accuracy by an order of magnitude compared to using only the first arrival traveltime, or compared to using as a second arrival time the usual bounce off the corner (or here, bounce off the fault where it meets $\partial \Omega$). We omit plots of the probing and approximation errors, and refer the reader to Section \ref{sec:insolver} for final probing results and their use in a solver.
\subsubsection{The periodic medium}
Finally, we look at the case of the periodic medium presented earlier. There are 3 distinct blocks, with different multiplicities: $m((1,1))=4$, $m((2,1))=8$, $m((3,1))=4$. We expect the corresponding DtN map to be harder to probe because its structure will reflect that of the medium, i.e. it will exhibit sharp transitions at points corresponding to sharp transitions in $c$ (similarly as with the faults). First, we notice that, in all the previous media we tried, plotting the norm of the anti-diagonal entries of diagonal blocks (or sub-blocks for the faults) shows a rather smooth decay away from the diagonal. However, that is not the case for the periodic medium: it looks like there is decay away from the diagonal, but variations from that decay can be of relative order 1. This prevents our usual strategy, using basis matrices containing terms that decay away from the diagonal such as $(h+|x-y|)^{-j_1/\alpha}$, from working adequately. Instead, we use polynomials along anti-diagonals, as well as polynomials along diagonals as we previously did.
It is known that solutions to the Helmholtz equation in a periodic medium are Bloch waves with a particular structure \cite{JohnsonPhot}. However, using that structure in the basis matrices is not robust. Indeed, using a Bloch wave structure did not succeed very well, probably because our discretization was not accurate enough, so that $D$ exhibited that structure only to a very rough degree. Hence we did not use Bloch waves for probing the periodic medium. Others have successfully used the known structure of the solution in this setting to approximate the DtN map. In \cite{Fliss}, the authors solve local cell problems and Riccati equations to obtain discrete DtN operators for media which are a perturbation of a periodic structure. In \cite{antoine}, the authors develop a DtN map eigenvalue formulation for wave propagation in periodic media. We did not attempt to use those formulations here.
For this reason, we tried basis matrices with no oscillations, but with polynomials in both directions as explained previously, and obtained the results of Section \ref{sec:insolver}.
Now that we have probed the DtN map and obtained compressed blocks to form an approximation $\tilde{D}$ of $D$, we may use this $\tilde{D}$ in a Helmholtz solver as an absorbing boundary condition.
\subsection{Using the probed DtN map in a Helmholtz solver}\label{sec:insolver}
In Figures \ref{solc5}, \ref{solc3}, \ref{solc16l}, \ref{solc16r}, \ref{solc18} and \ref{solc33} we can see the standard solutions to the Helmholtz equation on $[0,1]^2$ using a large PML or pPML for the various media we consider, except for the uniform medium, where the solution is well-known. We use those as our reference solutions.
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c5-pres.pdf}
\caption{Real part of the solution, $c$ is the slow disk.}
\label{solc5}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c3-pres.pdf}
\caption{Real part of the solution, $c$ is the waveguide.}
\label{solc3}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c16l-pres.pdf}
\caption{Real part of the solution, $c$ is the vertical fault with source on the left.}
\label{solc16l}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c16r-pres.pdf}
\caption{Real part of the solution, $c$ is the vertical fault with source on the right.}
\label{solc16r}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c18-pres.pdf}
\caption{Real part of the solution, $c$ is the diagonal fault.}
\label{solc18}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/sol-c33-pres.pdf}
\caption{Imaginary part of the solution, $c$ is the periodic medium.}
\label{solc33}
\end{minipage}
\end{figure}
We have successfully tested the solver with the probed $\tilde{D}$ as an absorbing boundary condition. See Tables \ref{c1solve}, \ref{c3solve}, \ref{c5solve}, \ref{c16solve}, \ref{c18solve} and \ref{c33solve} for results corresponding to each medium. For each accuracy level, we show the number $p$ of basis matrices required for some blocks, the number of solves $q$ of the exterior problem for those blocks, the total number of solves $Q$, the total probing error \eqref{eq:Derr} in $D$ and the solution error from probing \eqref{eq:solerr}. %
As we can see from the tables, the solution error from probing \eqref{eq:solerr} in the solution $u$ is no more than an order of magnitude greater than the total probing error \eqref{eq:Derr} in the DtN map $D$, for a source position as described in Table \ref{FDPMLerr}. Grazing waves, which can arise when the source is close to the boundary of the computational domain, will be discussed in the next subsection, \ref{sec:graz}. We note again that, for the uniform medium, using the second arrival traveltime as well as the first for the $(1,1)$ block allowed us to achieve accuracies of 5 and 6 digits in the DtN map, which was not possible otherwise. Using a second arrival time for the cases of the faults was also useful. Those results show that probing works best when the medium $c$ is rather smooth. For non-smooth media such as a fault, it becomes harder to probe the DtN map to a good accuracy, so that the solution to the Helmholtz equation also contains more error.
\begin{table}
\caption{$c\equiv 1$}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$p$ for $(1,1)$ & $p$ for $(2,1)$ & $q=Q$ & $\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ & $\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$6$} & {$1$} & {$1$} & {$2.0130e-01$} & {$3.3191e-01$} \\ \hline
{$12$} & {$2$} & {$1$} & {$9.9407e-03$} & {$1.9767e-02$} \\ \hline
{$20$} & {$12$} & {$3$} & {$6.6869e-04$} & {$1.5236e-03$} \\ \hline
{$72$} & {$20$} & {$5$} & {$1.0460e-04$} & {$5.3040e-04$} \\ \hline
{$224$} & {$30$} & {$10$} & {$8.2892e-06$} & {$9.6205e-06$} \\ \hline
{$360$} & {$90$} & {$10$} & {$7.1586e-07$} & {$1.3044e-06$} \\ \hline
\end{tabular}
\end{center}
\label{c1solve}
\end{table}
\begin{table}
\caption{$c$ is the waveguide}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
$p$ for $(1,1)$ &$p$ for $(2,1)$&$q$ &$p$ for $(2,2)$&$q$ &$Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
$40$ &$2$ &$1$ &$12$ &$1$ &$2$ &$9.1087e-02$ &$1.2215e-01$ \\ \hline
$40$ &$2$ &$3$ &$20$ &$1$ &$4$ &$1.8685e-02$ &$7.6840e-02$ \\ \hline
$60$ &$20$ &$5$ &$20$ &$3$ &$8$ &$2.0404e-03$ &$1.3322e-02$ \\ \hline
$112$ &$30$ &$10$ &$30$ &$3$ &$13$ &$2.3622e-04$ &$1.3980e-03$ \\ \hline
$264$ &$72$ &$20$ &$168$ &$10$ &$30$ &$1.6156e-05$ &$8.9911e-05$ \\ \hline
$1012$ &$240$ &$20$ &$360$ &$10$ &$30$ &$3.3473e-06$ &$1.7897e-05$ \\ \hline
\end{tabular}
\end{center}
\label{c3solve}
\end{table}
\begin{table}
\caption{$c$ is the slow disk}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|l|} \hline
$p$ for $(1,1)$ & $p$ for $(2,1)$ &$q=Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ & $\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$40$} & {$2$} &{$3$} & {$1.0730e-01$} & {$5.9283e-01$} \\ \hline
{$84$} & {$2$} &{$3$} & {$8.0607e-03$} & {$4.5735e-02$} \\ \hline
{$180$} & {$12$} &{$3$} & {$1.2215e-03$} & {$1.3204e-02$} \\ \hline
{$264$} & {$30$} &{$5$} & {$1.5073e-04$} & {$7.5582e-04$} \\ \hline
{$1012$} & {$132$} &{$20$} & {$2.3635e-05$} & {$1.5490e-04$} \\ \hline
\end{tabular}
\end{center}
\label{c5solve}
\end{table}
\begin{table}
\caption{$c$ is the vertical fault}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|l|} \hline
$Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$, left source &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$, right source\\ \hline
{$5$} &{$2.8376e-01$} &{$6.6053e-01$} &{$5.5522e-01$} \\ \hline
{$5$} &{$8.2377e-03$} &{$3.8294e-02$} &{$2.4558e-02$} \\ \hline
{$30$} &{$1.1793e-03$} &{$4.0372e-03$} &{$2.9632e-03$} \\ \hline
\end{tabular}
\end{center}
\label{c16solve}
\end{table}
\begin{table}
\caption{$c$ is the diagonal fault}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|} \hline
$Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$4$} &{$1.6030e-01$} &{$4.3117e-01$} \\ \hline
{$6$} &{$1.7845e-02$} &{$7.1500e-02$} \\ \hline
{$23$} &{$4.2766e-03$} &{$1.2429e-02$} \\ \hline
\end{tabular}
\end{center}
\label{c18solve}
\end{table}
\begin{table}
\caption{$c$ is the periodic medium}
\begin{center} \footnotesize
\begin{tabular}{|l|l|l|} \hline
$Q$ &$\frac{\|D-\tilde{D}\|_F}{\|D\|_F}$ &$\frac{\|u-\tilde{u}\|_2}{\|u\|_2}$ \\ \hline
{$50$} &{$1.8087e-01$} &{$1.7337e-01$} \\ \hline
{$50$} &{$3.5714e-02$} &{$7.1720e-02$} \\ \hline
{$50$} &{$9.0505e-03$} &{$2.0105e-02$} \\ \hline
\end{tabular}
\end{center}
\label{c33solve}
\end{table}
\subsection{Grazing waves}\label{sec:graz}
It is well-known that ABCs often have difficulties when a source is close to a boundary of the domain, or in general when waves incident to the boundary are almost parallel to it. We wish to verify that the solution $\tilde{u}$ using the result $\tilde{D}$ of probing $D$ does not degrade as the source becomes closer and closer to some side of $\partial \Omega$. For this, we use a right-hand side $f$ to the Helmholtz equation which is a point source, located at the point $(x_0,y_0)$, where $x_0=0.5$ is fixed and $y_0>0$ becomes smaller and smaller, until it is a distance $2h$ away from the boundary (the point source's stencil has width $h$, so a source at a distance $h$ from the boundary does not make sense). We see in Figure \ref{c1graz} that, for $c \equiv 1$, the solution remains quite good until the source is a distance $2h$ away from the boundary. In this figure, we have used the probed maps we obtained in each row of Table \ref{c1solve}. %
We obtain very similar results for the waveguide, slow disk and faults (for the vertical fault we locate the source at $(x_0,y_0)$, where $y_0=0.5$ is fixed and $x_0$ goes to $0$ or $1$). This shows that the probing process itself does not significantly affect how well grazing waves are absorbed.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.5]{./figs/images/c1-graz-out-flat.pdf}
\caption{Error in solution $u$, $c\equiv 1$, moving point source. Each line in the plot corresponds to using $\tilde{D}$ from a different row of Table \ref{c1solve}.}
\label{c1graz}
\end{center}
\end{figure}
\subsection{Variations of $p$ with $N$}\label{sec:pwN}
We now discuss how the number of basis matrices $p$ needed to achieve a desired accuracy depends on $N$ or $\omega$. To do this, we pick 4 consecutive powers of 2 as values for $N$, and find the appropriate $\omega$ such that the finite discretization error remains constant at $10^{-1}$, so that in fact $N \sim \omega^{1.5}$ as we have previously mentioned. We then probe the $(1,1)$ block of the corresponding DtN map, using the same parameters for all $N$, and observe the $p$ required to obtain a fixed probing error. The worst case we have seen in our experiments came from the slow disk. As we can see in Figure \ref{fig:c5pvsn}, $p$ seems to follow a very weak power law with $N$, close to $p \sim 15N^{0.12}$ for a probing error of $10^{-1}$ or $p \sim 15N^{0.2}$ for a probing error of $10^{-2}$. In all other cases, $p$ is approximately constant with increasing $N$, or seems to follow a logarithmic law with $N$ as for the waveguide (see Figure \ref{fig:c3pvsn}).
\begin{figure}[ht]
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/pvsn-c5-b1.pdf}
\caption{Probing error of the $(1,1)$ block of the DtN map for the slow disk, fixed FD error level of $10^{-1}$, increasing $N$. This is the worst case, where $p$ follows a weak power law with $N$.}
\label{fig:c5pvsn}
\end{minipage}
\hspace{1cm}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[scale=.45]{./figs/images/pvsn-c3-b1.pdf}
\caption{Probing error of the $(1,1)$ block of the DtN map for the waveguide, fixed FD error level of $10^{-1}$, increasing $N$. Here $p$ follows a logarithmic law.}
\label{fig:c3pvsn}
\end{minipage}
\end{figure}
\section{Convergence of probing for the half-space DtN map: theorem}\label{sec:BasisPf}
In this section, we consider the half-space DtN map kernel in uniform medium
$K(r) = \frac{ik}{2r} H_1^{(1)}(kr)$
that we found in section \ref{sec:hsG}. We wish to approximate this kernel for values of $r$ that are relevant to our numerical scheme. Because we take $\Omega$ in our numerical experiments to be the $[0,1]^2$ box, $r=|x-y|$ will be between 0 and 1, in increments of $h$, as coordinates $x$ and $y$ along edges of $\partial \Omega$ vary between 0 and 1 in increments of $h$. However, as we know, $K(r)$ is singular at $r=0$, and since discretization effects dominate near the diagonal in the matrix representation of the DtN map, we shall consider only values of $r$ in the range $r_0 \leq r \leq 1$, with $0 < r_0 \leq 1/k$ (hence $r_0$ can be on the order of $h$). Since we know to expect oscillations $e^{ikr}$ in this kernel, we can remove those from $K$ to obtain%
\begin{equation}\label{eq:Hofr}
H(r) = \frac{ik}{2r} H_1^{(1)}(kr) e^{-ikr}
\end{equation}
(not to be confused with the hypersingular kernel $H$ of section \ref{sec:basis}), a smoother function which will be easier to approximate. Equivalently, we can add those oscillations to the terms in an approximation of $H$, to obtain an approximation of $K$.
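As an aside, both kernels are easy to tabulate numerically. A minimal Python sketch (the wavenumber and grid below are illustrative, not tied to any particular experiment in this thesis):
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def K(r, k):
    # half-space DtN kernel K(r) = (ik/2r) H_1^{(1)}(kr)
    return 1j * k / (2.0 * r) * hankel1(1, k * r)

def H(r, k):
    # same kernel with the oscillations e^{ikr} removed
    return K(r, k) * np.exp(-1j * k * r)

k = 2.0 * np.pi * 8.0                 # illustrative wavenumber
r = np.linspace(1.0 / k, 1.0, 500)    # r_0 <= r <= 1 with r_0 = 1/k
\end{verbatim}
Plotting the real parts of \texttt{K(r, k)} and \texttt{H(r, k)} on this grid makes the smoothing effect of removing the oscillations apparent.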
For this section only, we denote by $\tilde{D}$ the corresponding operator with integral kernel $H$, while we use $D$ for the half-space Dirichlet-to-Neumann map, that is, the operator with kernel $K$.
\begin{theorem}\label{teo:main}
Let $\alpha > \frac{2}{3}$, $0 < r_0 < 1/k$, and let $K_p(r)$ be the best uniform approximation of $K(r)$ in
$$\mbox{span} \{ \frac{e^{ikr}}{r^{j/\alpha}} : j = 1, \ldots, p, \mbox{ and } r_0 \leq r \leq 1 \}.$$
Denote by $D_p$ the operator defined with $K_p$ in place of $K$. Then, in the operator norm,
$$\| D - D_p \| \leq C_\alpha \, p^{1 - \lfloor 3\alpha/2 \rfloor} \, \| H \|_{L^\infty[r_0,1]},$$
for some $C_\alpha > 0$ depending only on $\alpha$.
\end{theorem}
The important point of the theorem is that the quality of approximation is otherwise independent of $k$, i.e., the number $p$ of basis functions does not need to grow like $k$ for the error to be small. In other words, it is unnecessary to ``mesh at the wavelength level'' to spell out the degrees of freedom that go in the representation of the DtN map's kernel.
\begin{remark}
Growing $\alpha$ does not automatically result in a better approximation error, because a careful analysis of the proof shows that $C_\alpha$ grows factorially with $\alpha$. This behavior translates into a slower onset of convergence in $p$ when $\alpha$ is taken large, as the numerics show in the next section. This can in turn be interpreted as the result of ``overcrowding'' of the basis by very look-alike functions.
\end{remark}
\begin{remark}
It is easy to see that the operator norm of $D$ grows like $k$, for instance by applying $D$ to the function $e^{-ik\mathbf{x}}$. %
The uniform norms of $K$ and $H$ once we cut out the diagonal, however, grow like $k^{1/2}/r_0^{3/2}$, so the result above shows that we incur an additional factor $k^{-1/2}r_0^{-3/2}$ in the error (somewhat akin to numerical pollution) in addition to the factor $k$ that we would have gotten from $\| D \|$.
\end{remark}
The result in Theorem \ref{teo:main} points the way for the design of basis matrices to be used in matrix probing, for the more general case of the exterior DtN map in heterogeneous media. We prove Theorem \ref{teo:main} in the next subsections, and present a numerical verification in the next section.
\subsection{Chebyshev expansion}
We mentioned that the domain of interest for the $r$ variable is $[r_0,1]$. Again, expanding $K(r)$ in the system of Theorem \ref{teo:main} is equivalent to expanding $H(r)$ in polynomials of $r^{-1/\alpha}$ over $[r_0,1]$. It will be useful to perform the affine rescaling
\[
\xi(r) = \frac{2}{r_0^{-1/\alpha} - 1} (r^{-1/\alpha} - 1 ) - 1 \qquad \Leftrightarrow \qquad r(\xi) = \left( \frac{\xi+1}{2} (r_0^{-1/\alpha} - 1) + 1 \right)^{-\alpha}
\]
so that the bounds $r \in [r_0,1]$ turn into $\xi \in [-1,1]$. We further write $\xi = \cos \theta$ with $\theta \in [0,\pi]$. Our strategy is to expand $H$ in Chebyshev polynomials $T_n(\xi)$. By definition, the best $p$-term approximation of $H(r)$ in polynomials of $r^{-1/\alpha}$ (best in a uniform sense over $[r_0,1]$) will result in a lower uniform approximation error than that associated with the $p$-term approximation of $H(r(\xi))$ in the $T_n(\xi)$ system. Hence in the sequel we overload notation and write $H_p$ for the $p$-term approximant of $H$ in our Chebyshev system.
We write out the Chebyshev series for $H(r(\xi))$ as
$$H(r(\xi)) = \sum^{\infty}_{j=0} c_j T_j(\xi), \qquad c_j = \frac{2}{\pi} \int_{-1}^1 \frac{H(r(\xi)) T_j(\xi)}{(1-\xi^2)^{1/2}} \ d\xi, $$
with $T_j(\xi)=\cos{(j(\cos^{-1}\xi))}$, and $c_j$ alternatively written as
$$ c_j = \frac{2}{\pi} \int_0^\pi H(r(\cos{\theta})) \cos{j\theta} \ d \theta = \frac{1}{\pi} \int_0^{2\pi} H(r(\cos{\theta})) \cos{j\theta} \ d \theta. $$
The expansion will converge fast because we can integrate by parts in $\theta$ and afford to take a few derivatives of $H$, say $M$ of them, as done in \cite{tadmor}. After noting that the boundary terms cancel out because of periodicity in $\theta$, we express the coefficients $c_j$ for $j > 0$, up to a sign, as
$$ c_j = \pm \frac{1}{\pi j^M} \int_0^{2\pi} \sin{j\theta} \frac{d^M}{d\theta^M} H(r(\cos{\theta})) \ d\theta, \qquad \ M \ \text{odd,} $$
$$ c_j = \pm \frac{1}{\pi j^M} \int_0^{2\pi} \cos{j\theta} \frac{d^M}{d\theta^M} H(r(\cos{\theta})) \ d\theta, \qquad \ M \ \text{even.} $$
It follows that, for $j > 0$, and for all $M > 0$,
\[
\left| c_j \right| \leq \frac{2}{j^M} \max_\theta \left| \frac{d^M}{d\theta^M} H(r(\cos{\theta})) \right|.
\]
Let $B_M$ be a bound on this $M$-th order derivative. The uniform error we make by truncating the Chebyshev series to $H_p=\sum^{p}_{j=0} c_j T_j$ is then bounded by %
\begin{equation}\label{eq:bndderiv}
\left\| H-H_p \right\|_{L^\infty[r_0,1]} \leq \sum_{j=p+1}^\infty \left|c_j \right| \leq 2 B_M \sum_{j=p+1}^\infty \frac{1}{j^{M}} \leq \frac{2B_M}{(M-1) p^{M-1}}, \qquad \ p > 1.
\end{equation}
The final step is a simple integral comparison test: $\sum_{j=p+1}^{\infty} j^{-M} \leq \int_{p}^{\infty} t^{-M} \, dt = \frac{p^{1-M}}{M-1}$ for $M > 1$.
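In passing, the coefficients $c_j$ are straightforward to evaluate numerically, since the trapezoid rule is spectrally accurate on the periodic integrand above. A minimal Python sketch (reusing the kernel $H$ of \eqref{eq:Hofr}, and halving the $j=0$ coefficient as is customary for Chebyshev series):
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def cheb_coeffs(k, r0, alpha, p, n=4096):
    # c_j = (1/pi) * int_0^{2 pi} H(r(cos theta)) cos(j theta) d theta,
    # evaluated by the trapezoid rule on an equispaced periodic grid
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xi = np.cos(theta)
    r = ((xi + 1.0) / 2.0 * (r0 ** (-1.0 / alpha) - 1.0) + 1.0) ** (-alpha)
    H = 1j * k / (2.0 * r) * hankel1(1, k * r) * np.exp(-1j * k * r)
    j = np.arange(p)[:, None]
    c = 2.0 * (H[None, :] * np.cos(j * theta[None, :])).mean(axis=1)
    c[0] /= 2.0   # standard halving of the j = 0 Chebyshev coefficient
    return c
\end{verbatim}
The decay of $|c_j|$ computed this way can be compared directly with the bounds derived above.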
\subsection{Bound on the derivatives of the DtN map kernel with oscillations removed}
The question is now to find a favorable estimate for $B_M$, by studying successive $\theta$ derivatives of $H(r)$ in \eqref{eq:Hofr}. The bound on the derivatives of Hankel functions in Lemma 1 of \cite{flatland} gives that, for any $C > 0$, we have
\begin{equation}\label{eq:derivH}
\left| \frac{d^m}{dr^m} \left( H_1^{(1)}(kr) e^{-ikr} \right) \right| \leq C_m (kr)^{-1/2} r^{-m} \qquad \ \text{for} \ kr \geq C.
\end{equation}
The change of variables from $r$ to $\theta$ results in
\begin{eqnarray*}
\frac{dr}{d\theta}&=& \frac{d\xi}{d\theta} \frac{dr}{d\xi} = \left(-\sin \theta \right) \left( -\alpha \left( \frac{\xi+1}{2} (r_0^{-1/\alpha} - 1) + 1 \right)^{-\alpha-1} \frac{\left(r_0^{-1/\alpha}-1\right)}{2} \right) \\
&=&\left(-\sin \theta \right) \left( -\alpha \ r^{1+1/\alpha} \ \frac{r_0^{-1/\alpha}(1-r_0^{1/\alpha})}{2} \right).
\end{eqnarray*}
Hence
\begin{equation}\label{eq:drdtheta}
\frac{dr}{d\theta} = r (r/r_0)^{1/\alpha} \ \frac{\alpha \sin \theta (1-r_0^{1/\alpha})}{2}.
\end{equation}
Derivatives of higher powers of $r$ are handled by the chain rule, resulting in
\begin{equation}\label{eq:derivr}
\frac{d}{d \theta} (r^p) = p r^p (r/r_0)^{1/\alpha} \ \frac{\alpha \sin \theta (1-r_0^{1/\alpha})}{2}.
\end{equation}
We see that the action of a $\theta$ derivative is essentially equivalent to multiplication by $(r/r_0)^{1/\alpha}$. As for higher derivatives of powers of $r$, it is easy to see by induction that the product rule has them either hit a power of $r$, or a trigonometric polynomial of $\theta$, resulting in a growth of at most $(r/r_0)^{1/\alpha}$ for each derivative:
\[
| \frac{d^m}{d\theta^m} r^p | \leq C_{m,p,\alpha} \, r^p (r/r_0)^{m/\alpha}.
\]
These estimates can now be combined to bound $\frac{d^m}{d \theta^m} \left( H_1^{(1)}(kr) e^{-ikr} \right)$. One of two scenarios occur when applying the product rule:
\begin{itemize}
\item either $\frac{d}{d\theta}$ hits $\frac{d^{m_2}}{d\theta^{m_2}} \left( H_1^{(1)}(kr) e^{-ikr} \right)$ for some $m_2 < m$. In this case, one negative power of $r$ results from $\frac{d}{dr}$ as we saw in \eqref{eq:derivH}, and a factor $r (r/r_0)^{1/\alpha}$ results from $\frac{dr}{d\theta}$ as we saw in \eqref{eq:drdtheta};
\item or $\frac{d}{d\theta}$ hits some power of $r$, possibly multiplied by some trigonometric polynomial in $\theta$, resulting in a growth of an additional factor $(r/r_0)^{1/\alpha}$ as we saw in \eqref{eq:derivr}.
\end{itemize}
Thus, we get at most a $(r/r_0)^{1/\alpha}$ growth factor per derivative in every case. The situation is completely analogous when dealing with the slightly more complex expression $\frac{d^m}{d \theta^m} \left( \frac{1}{r} H_1^{(1)}(kr) e^{-ikr} \right)$. The number of terms is itself at most factorial in $m$, hence we get
\begin{equation}\label{eq:derivbnd}
| \frac{d^m}{d \theta^m} \frac{k}{r} \left( H_1^{(1)}(kr) e^{-ikr} \right) | \leq C_{m, \alpha} \ \frac{k}{r} \left( \frac{r}{r_0} \right)^{\frac{m}{\alpha} - \frac{1}{2}} \leq C_{m,\alpha} \ \frac{k}{r_0} \left( \frac{r}{r_0} \right)^{\frac{m}{\alpha} - \frac{3}{2}}.
\end{equation}
We now pick $m \leq M = \lfloor 3 \alpha /2 \rfloor$, so that the max over $\theta$ is realized when $r = r_0$, and $B_M$ is on the order of $k/r_0$. It follows from \eqref{eq:bndderiv} and \eqref{eq:derivbnd} that
$$ \left\| H-H_p \right\|_{L^\infty[r_0,1]} \leq C_\alpha \ \frac{k}{r_0} \ \frac{1}{p^{\lfloor 3\alpha/2 \rfloor - 1}}, \qquad \ p > 1, \ \alpha > 2/3.$$
The kernel of interest, $K(r) = H(r) e^{ikr}$ obeys the same estimate if we let $K_p$ be the $p$-term approximation of $K$ in the Chebyshev system modulated by $e^{ikr}$.
\subsection{Bound on the error of approximation}
For ease of writing, we now let $D^0$, $D^0_p$ be the operators with respective kernels $K^0(r) = K(r) \chi_{[r_0,1]}(r)$ and $K^0_p(r) = K_p(r) \chi_{[r_0,1]}(r)$. We now turn to the operator norm of $D^0 - D^0_p$ with kernel $K^0 -K^0_p$:
$$(D^0-D^0_p)g(x)=\int_0^1 (K^0-K^0_p)(|x-y|)g(y) \ dy, \qquad x \in [0,1]. $$
We use the Cauchy-Schwarz inequality to bound
\begin{eqnarray*}
\|(D^0-D^0_p)g\|_2 &=& \left(\int_{0\leq x \leq 1} \left|\int_{0\leq y \leq 1, \ |x-y|\geq r_0} (K^0-K^0_p)(|x-y|)g(y) \ dy \right|^2 dx \right)^{1/2} \\
& \leq & \left(\int_{0\leq x \leq 1} \int_{0\leq y \leq 1, \ |x-y|\geq r_0}\left|(K^0-K^0_p)(|x-y|)\right|^2 \ dy dx \right)^{1/2} \|g\|_2 \\
& \leq & \left( \int_{0\leq x \leq 1} \int_{0\leq y \leq 1, \ |x-y|\geq r_0} 1 \ dy \ dx \right)^{1/2} \|g\|_2 \ \max_{0\leq x,y \leq 1, \ |x-y| \geq r_0} |(K^0-K^0_p)(|x-y|)| \\
& \leq & \|g\|_2 \ \| K^0-K^0_p \|_{L^{\infty}[r_0,1]}.
\end{eqnarray*}
Assembling the bounds, we have
\[
\| D^0-D^0_p \|_2 \leq \| K^0-K^0_p \|_{L^{\infty}[r_0,1]} \leq C_{\alpha} \, p^{1 - \lfloor 3 \alpha / 2 \rfloor} \, \frac{k}{r_0}.
\]
It suffices therefore to show that $\| K^0 \|_\infty = \| K \|_{L^\infty[r_0,1]}$ is bigger than $k/r_0$ to complete the proof. Letting $z = kr$, we see that
$$\max_{r_0 \leq r \leq 1} |K(r)|=\frac{k}{2r_0} \max_{kr_0 \leq z \leq k} \left|H_1^{(1)}(z) \right| \geq C \frac{k^{1/2}}{r_0^{3/2}}.$$
The last inequality follows from the fact that there exists a positive constant $c_1$ such that $c_1 z^{-1/2} \leq \left|H_1^{(1)}(z) \right| $, from Lemma 3 of \cite{flatland}. But $k^{1/2}/r_0^{3/2} \geq k/r_0$ precisely when $r_0 \leq 1/k$. Hence we have proved the statement of Theorem \ref{teo:main}.
In the next section, we proceed to a numerical confirmation of Theorem \ref{teo:main}.
\section{Convergence of probing for the half-space DtN map: numerical confirmation}\label{sec:NumPf}
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/half-c1-errs.pdf}
\caption{Probing error of the half-space DtN map ($q=1$, 10 trials, circle markers and error bars) compared to the approximation error (line), $c\equiv 1$, $L=1/4$, $\alpha=2$, $n=1024$, $\omega=51.2$.}
\label{q1}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=.5]{./figs/images/half-c1-conds.pdf}
\caption{Condition numbers for probing the half-space DtN map, $c\equiv 1$, $L=1/4$, $\alpha=2$, $n=1024$, $\omega=51.2$, $q=1$, 10 trials.}
\label{p2}
\end{minipage}
\end{figure}
In order to use Theorem \ref{teo:main} to obtain convergent basis matrices, we start with the set $\left\{r^{-j/\alpha} \right\}_{j=0}^{p-1}$. We have proved the theorem for the interval $r_0 \leq r \leq 1$, but here we consider $h \leq r \leq 1$, which is a larger interval than in the theorem. We then put in oscillations, orthonormalize, and use this new set as a basis for probing the DtN map. Thus we have pre-basis matrices ($0 \leq j \leq p-1$)
$$(\beta_j)_{\ell m}=\frac{e^{ikh|\ell-m|}}{|\ell-m|^{j/\alpha}} \ \text{for} \ \ell \neq m,$$
with $(\beta_j)_{\ell \ell}=0$. We add to this set the identity matrix in order to capture the diagonal of $D$, and orthonormalize the resulting collection to get the $B_j$. Alternatively, we have noticed that orthonormalizing the set of $\beta_j$'s with
\begin{equation}\label{halfbasis}
(\beta_j)_{\ell m}=\frac{e^{ikh|\ell-m|}}{(h+h|\ell-m|)^{j/\alpha}}
\end{equation}
works just as well, and is simpler because there is no need to treat the diagonal separately. We use this same technique for the probing basis matrices of the exterior problem.
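For concreteness, here is a minimal sketch of this construction, orthonormalizing in the Frobenius inner product through a QR factorization of the vectorized $\beta_j$'s (the parameter values below are illustrative):
\begin{verbatim}
import numpy as np

def probing_basis(N, k, h, p, alpha):
    # pre-basis (beta_j)_{lm} = e^{ikh|l-m|} / (h + h|l-m|)^{j/alpha}
    d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    osc = np.exp(1j * k * h * d)
    betas = [osc / (h + h * d) ** (j / alpha) for j in range(p)]
    # orthonormalize in the Frobenius inner product:
    # QR factorization of the vectorized matrices
    A = np.stack([b.ravel() for b in betas], axis=1)   # N^2 x p
    Q, _ = np.linalg.qr(A)
    return [Q[:, j].reshape(N, N) for j in range(p)]

B = probing_basis(N=64, k=2.0 * np.pi * 8.0, h=1.0 / 64, p=10, alpha=2.0)
\end{verbatim}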
The convergent basis matrices in \eqref{halfbasis} have been used to obtain a numerical confirmation of Theorem \ref{teo:main}, again for the half-space DtN map. To obtain the DtN map in this setup, instead of solving the exterior problem with a PML or pPML on all sides, we solve a problem on a thin strip, with a random Dirichlet boundary condition (for probing) on one of the long edges, and a PML or pPML on the other three sides. This method for a numerical approximation of the solution to the half-space problem was introduced in section \ref{sec:half} and Figure \ref{fig:half}.
\subsection{Uniform medium}
In Figure \ref{q1}, we show the approximation error, which we expect to behave as in Theorem \ref{teo:main}. We also plot error bars for the probing error, corresponding to ten trials of probing with $q=1$. The probing results are about as good as the approximation error because the relevant condition numbers are all well-behaved, as we see in Figure \ref{p2} for our chosen value $\alpha=2$. Returning to the approximation error, we notice in Figure \ref{q1} that increasing $\alpha$ delays the onset of convergence, as expected from the factor in the error bound of Theorem \ref{teo:main} which is factorial in $\alpha$. For small $\alpha$, on the other hand, we take very high inverse powers of $r$, an ill-conditioned operation: the convergence plateau observed for smaller $\alpha$ is explained by ill-conditioning of the basis matrices, and the absence of data points by computational overflow.
Finally, increasing $\alpha$ from $1/8$ to $2$ gives a higher rate of convergence, as it should: the error contains the factor $p^{1 - \lfloor 3\alpha/2 \rfloor}$, which corresponds to a rate of convergence of roughly $3\alpha/2$. This is close to what we obtain numerically. As discussed, further increasing $\alpha$ is not necessarily advantageous since the constant $C_\alpha$ in Theorem \ref{teo:main} grows quickly with $\alpha$.
\chapter{Summary of steps}\label{ch:steps}
In this appendix, we summarize the various operations needed in each step of the numerical scheme presented in this thesis.
\subsubsection{Preliminary remarks}
We need two solvers for the numerical scheme, one which does exterior problem solves with boundary condition $g$ on $\partial \Omega$, and one which solves the reformulated problem inside $\Omega$ using $D$ or $\overline{D}$ as a boundary condition. Both solvers should be built with the other in mind, so their discretization points agree on $\partial \Omega$.
For the exterior solves, we shall impose a boundary condition $g$ on $\partial \Omega$ (called $u_0$ in our discussion of layer-stripping), and find the solution on the layer of points just outside of $\Omega$ (called $u_1$ in our discussion of layer-stripping). Note that $u_1$ has eight more points than $u_0$ does. However, the four corner points of $u_1$ are not needed in the normal derivative of $u$ on $\partial \Omega$ (again, because we use the five-point stencil). Also, the four corner points of $u_0$ need to be used twice each. For example, the solution $u_0$ at point $(0,0)$ is needed for the normal derivative in the negative $x_1$ direction (going left) and the normal derivative in the negative $x_2$ direction (going down). Hence we obtain a DtN operator which takes the $4N$ solution points $u_0$ (with corners counted twice) to the $4N$ normal derivatives $(u_1-u_0)/h$ (with corners omitted in $u_1$). In this way, one can impose a (random) boundary condition on the $N$ points of any one side of $\partial \Omega$ and obtain the resulting Neumann data on the $N$ points of any side of $\partial \Omega$.
Once we have probed and compressed the DtN map $D$ into $\overline{D}$, we will need to use this $\overline{D}$ in a Helmholtz solver. We do this using ghost points $u_1$ just outside of $\partial \Omega$ (but not at the corners), which we can eliminate using $\overline{D}$. For best results, it is important to use $\overline{D}$ with the same solution points as those from which it was obtained. In other words, here we defined $\overline{D}$ as
$$\overline{D}u_0=\frac{u_1-u_0}{h}.$$
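Equivalently, the ghost values are reconstructed from the boundary values as
$$u_1 = u_0 + h\,\overline{D}u_0,$$
and substituting this expression into the five-point stencil at each boundary node eliminates the ghost unknowns: every boundary row of the discrete Helmholtz system then couples to all boundary values of $u_0$ through $\overline{D}$.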
If instead we use the DtN map as
$$\overline{D}u_1=\frac{u_1-u_0}{h} \qquad \text{ or } \qquad \overline{D}u_0=\frac{u_0-u_{-1}}{h},$$
we lose accuracy. In a nutshell, one needs to be careful about designing the exterior solver and the Helmholtz solver so they agree with each other. We are now ready to consider the steps of the numerical scheme.
\section{Matrix probing: $D \rightarrow \tilde{D}$}
\begin{enumerate}
\item \label{steporg} Organize information about the various submatrices of $D$, their multiplicity and the location and ``orientation'' of each of their copies in $D$. For example, %
in the waveguide medium case, the $(1,1)$ submatrix has multiplicity 2, and appears as itself (because of symmetries in $c$) also in position $(3,3)$. However, the $(2,1)$ submatrix has multiplicity 8 but appears as itself elsewhere in $D$ only in position $(4,3)$. It appears as its transpose (\emph{not} conjugate transpose) in positions $(1,2)$ and $(3,4)$. It appears with column order flipped in positions $(4,1)$ and $(2,3)$, and finally as its transpose with column order flipped in positions $(1,4)$ and $(3,2)$.
\item \label{steprep}Pick a representative for each distinct submatrix of $D$. To do this, think of which block columns of $D$ will be used for probing. Minimizing the distinct block columns used will minimize the cost of matrix probing by minimizing $Q$, the sum of all solves needed. See step \ref{stepq} as well.
\item If the medium $c$ has discontinuities, it might be necessary to split up submatrices further, and to keep track of the ``sub-submatrices'' and their positions, multiplicities and orientations inside the representative submatrix.
\item \label{stepq} Pick a $q$ for each block column, keeping in mind that diagonal submatrices are usually the hardest to probe (hence need a higher $p$ and $q$), and that submatrices the farthest from the diagonal are typically very easy to probe. It might be wise to pick representatives, in step \ref{steprep}, knowing that some will require a higher $q$ than others.
\item \label{stepext} Solve the exterior problem $q$ times on each block column, saving the restriction of the result to the required block rows depending on the representative submatrices you chose. Also, save the random vectors used.
\item \label{steperror} For error checking in step \ref{steperrcheck}, also solve the exterior problem a fixed number of times, say 15, with different random vectors. Again, save both the random vectors and the exterior solves. Use those results to approximate the norm of $D$ as well.
\item For each representative submatrix $M$ (and representative sub-submatrix, if needed), do the following:
\begin{enumerate}
\item Pick appropriate basis matrices $B_j$.
\item Orthonormalize the basis matrices if necessary. If this step is needed, it is useful to use symmetries in the basis matrices to both reduce the complexity of the orthonormalization and enforce those symmetries in the orthonormalized basis matrices.
\item Multiply each basis matrix by the random vectors used in step \ref{stepext} in solving the exterior problem corresponding to the correct block column of $M$. Organize results in the matrix ${\bf \Psi}$.
\item Take the pseudo-inverse of ${\bf \Psi}$ on the results of the exterior solves from step \ref{stepext}, corresponding to the correct block row of $M$, to obtain the probing coefficients $\mathbf{c}$ and $\tilde{M}=\sum c_j B_j$. (A minimal sketch of this recovery step is given after the list.)
\item \label{steperrcheck} To check the probing error, multiply $\tilde{M}$ with the random vectors used in the exterior solves for error checking purposes, in step \ref{steperror}. Compare to the results of the corresponding exterior solves. Multiply that error by the square root of the multiplicity, and divide by the estimated norm of $D$.
\end{enumerate}
\item If satisfied with the probing errors of each submatrix, move to the next step: PLR compression.
\end{enumerate}
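For reference, the recovery of the coefficients $\mathbf{c}$ from the exterior solves amounts to a small least-squares problem. A minimal sketch of the two steps that form ${\bf \Psi}$ and apply its pseudo-inverse, assuming dense basis matrices (function and variable names are ours):
\begin{verbatim}
import numpy as np

def probe_block(B, Z, W):
    # B: list of p (orthonormalized) basis matrices for this submatrix
    # Z: random probing vectors, one column per exterior solve
    # W: restriction of the exterior solves to the block row of M,
    #    so that W = M @ Z up to discretization error
    Psi = np.column_stack([(Bj @ Z).ravel(order="F") for Bj in B])
    c, *_ = np.linalg.lstsq(Psi, W.ravel(order="F"), rcond=None)
    M_tilde = sum(cj * Bj for cj, Bj in zip(c, B))
    return c, M_tilde
\end{verbatim}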
\section{PLR compression: $\tilde{D} \rightarrow \overline{D}$}
\begin{enumerate}
\item For each probed representative submatrix $\tilde{M}$ (and representative sub-submatrix, if needed), do the following:
\begin{enumerate}
\item Pick a tolerance $\varepsilon$ which is smaller than the probing error for that $\tilde{M}$; a factor of 25 smaller usually works well. Also, pick a maximal desired rank $R_\text{max}$. Usually, $R_\text{max} \leq 8$ works well for a diagonal submatrix, $R_\text{max} \leq 4$ for a submatrix just off the diagonal, and $R_\text{max} = 2$ for a submatrix farthest from the diagonal.
\item Compress $\tilde{M}$ using the PLR compression algorithm (a minimal sketch of the recursion is given after this list). Keep track of the dimensions and ranks of each block, to compare the matrix-vector complexity with that of a dense product.
\item Check the error made by the PLR compression by comparing $\tilde{M}$ and $\overline{M}$, again multiply that error by the square root of the multiplicity, and divide by the estimated norm of $D$.
\end{enumerate}
\item If satisfied with the PLR errors of each submatrix, move to the next step: using the PLR-compressed probed submatrices in a Helmholtz solver.
\end{enumerate}
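The recursion at the heart of the compression step can be sketched as follows (a minimal sketch only, using a dense SVD as the rank-revealing factorization, with \texttt{eps} and \texttt{rmax} playing the roles of $\varepsilon$ and $R_\text{max}$):
\begin{verbatim}
import numpy as np

def plr_compress(M, eps, rmax, nmin=16):
    # try a low-rank representation of this block at tolerance eps
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > eps * s[0])) if s.size and s[0] > 0 else 0
    if r <= rmax or min(M.shape) <= nmin:
        # small blocks are kept as truncated SVDs even if r > rmax
        return ("leaf", U[:, :r] * s[:r], Vh[:r, :])
    i, j = M.shape[0] // 2, M.shape[1] // 2
    # otherwise split the block in four and recurse
    return ("node", [plr_compress(M[:i, :j], eps, rmax, nmin),
                     plr_compress(M[:i, j:], eps, rmax, nmin),
                     plr_compress(M[i:, :j], eps, rmax, nmin),
                     plr_compress(M[i:, j:], eps, rmax, nmin)])
\end{verbatim}
Each leaf is then applied to a vector as two thin matrix-vector products, which is where the gain over a dense product comes from.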
\section{Using $\overline{D}$ in a Helmholtz solver}
Using the Helmholtz solver described in the preliminary remarks of this appendix, obtain the approximate solution $\overline{u}$ from solving with the appropriate boundary condition using $\overline{D}$. Every time a product of a vector $v$ with $\overline{D}$ is needed, build the result from all submatrices of $\overline{D}$. For each submatrix, multiply the correct restriction of that vector $v$ by the correct probed and compressed representative submatrix $\overline{M}$, taking into account the orientation of the submatrix as discussed in step \ref{steporg} of the matrix probing part of the numerical scheme.
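A minimal sketch of the orientation bookkeeping for one such product, with a dense stand-in for the compressed representative and tag names of our own choosing:
\begin{verbatim}
import numpy as np

def apply_oriented_block(M_bar, v, orientation):
    # M_bar: probed, compressed representative submatrix (dense stand-in)
    # orientation: how this copy appears in D, as catalogued in step 1
    if orientation == "identity":
        return M_bar @ v
    if orientation == "transpose":      # transpose, not conjugate transpose
        return M_bar.T @ v
    if orientation == "flip":           # column order flipped
        return M_bar[:, ::-1] @ v
    if orientation == "transpose+flip":
        return M_bar.T[:, ::-1] @ v
    raise ValueError(orientation)
\end{verbatim}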
\section{Introduction}
\label{PlanckIPF1.5:Sect1}
{\it Planck\/}\footnote{{\it Planck\/}\ (http://www.esa.int/{\it Planck\/}) is a project
of the European Space Agency (ESA) with instruments provided by two
scientific consortia funded by ESA member states (in particular the
lead countries France and Italy), with contributions from NASA (USA)
and telescope reflectors provided by a collaboration between ESA and a
scientific consortium led and funded by Denmark.} \citep{tauber2010a,
planck2011-1.1}
is the third-generation space mission to measure the
anisotropy of the cosmic microwave background (CMB). It observes the
sky in nine frequency bands covering 30--857\,GHz with high
sensitivity and angular resolution from 31\ifmmode {^{\scriptstyle\prime}\ to 5\ifmmode {^{\scriptstyle\prime}. The Low
Frequency Instrument (LFI; \citealt{mandolesi2010, bersanelli2010,
planck2011-1.4}) covers the 30, 44, and 70\,GHz bands with amplifiers
cooled to 20\,\hbox{K}. The High Frequency Instrument (HFI;
\citealt{lamarre2010, planck2011-1.5}) covers the 100, 143, 217, 353,
545, and 857\,GHz bands with bolometers cooled to 0.1\,\hbox{K}.
Polarization is measured in all but the highest two bands
\citep{leahy2010, rosset2010}. A combination of radiative cooling and
three mechanical coolers provides the temperatures needed for the
detectors and optics \citep{planck2011-1.3}. Two data processing
centres (DPCs) check and calibrate the data and make maps of the sky
\citep{planck2011-1.7, planck2011-1.6}. {\it Planck\/}'s sensitivity,
angular resolution, and frequency coverage make it a powerful
instrument for galactic and extragalactic astrophysics as well as
cosmology. Early astrophysics results are given in Planck
Collaboration (2011h--x).
The goal of this paper is to describe the in-flight performance of the
HFI in space and after the challenging launch conditions. It does
not attempt to duplicate the content of the {\it Planck\/}\ pre-launch status
papers \citep{lamarre2010,pajot2010}, but rather presents the
operational status from an instrumental viewpoint. These results
propagate to scientific products through the data processing reported
in the companion paper \citep{planck2011-1.7} which describes the
instrumental properties as they appear in the maps used by the
``{\it Planck\/}\ early results'' companion papers. This paper focuses on
the ability of the HFI to measure intensity without any description of
its performance in measuring polarization, which will be reported
later.
Section \ref{PlanckIPF1.5:Sect2} summarizes the instrument design.
Section \ref{PlanckIPF1.5:Sect3} focuses on early in-flight
operations, the verification phase and the setting of the parameters
that have to be tuned in flight. Section \ref{PlanckIPF1.5:Sect4}
addresses the measurement of the beams on planets and the
disentangling of time response effects from the beam shape. It also
presents the best current knowledge of the physical beams resulting
from this work. The effective beams obtained after data processing are
to be found in \cite{planck2011-1.7}. Sections
\ref{PlanckIPF1.5:Sect5}, \ref{PlanckIPF1.5:Sect6} and
\ref{PlanckIPF1.5:Sect7} are dedicated to noise, systematic effects
and instrument stability respectively. A summary of the HFI in-flight
performance and a comparison with pre-launch expectations are
presented in section \ref{PlanckIPF1.5:Sect8}.
\section{The HFI instrument}
\label{PlanckIPF1.5:Sect2}
\begin{table*}[!tb]
\caption{The HFI receivers. P stands for polarisation sensitive bolometers.}
\label{PlanckIPF1.5:tab:Channels}
\centering
\begin{tabular}{l*{10}{c}}
\hline \hline
Channel & & 100P & 143P &143 &217P &217 & 353P & 353 & 545 & 857 \\
\hline
Central frequency & (GHz) & 100 & 143 & 143 & 217 & 217 & 353 & 353 & 545 & 857 \\
Bandwidth & (\%) & 33 & 32 & 30 & 29 & 33 & 29 & 28 & 31 & 30 \\
Number of bolometers & &8 &8 &4 &8 &4 &8 &4 &4 &4 \\
\hline
\end{tabular}
\end{table*}
\subsection{Design}
The High Frequency Instrument (HFI) was proposed to ESA in response to
the announcement of opportunity for instruments for the Planck mission
in 1995. It is designed to measure the sky in six bands
(Table~\ref{PlanckIPF1.5:tab:Channels}) with bolometer sensitivity close to the
fundamental limit set by photon noise. The lower four frequency bands
include the measurement of the polarization. This sensitivity is
obtained through a combination of technological breakthroughs in each
of the critical components needed for bolometric detection:
\begin{itemize}
\item Spider web bolometers \citep{Bock1995,Holmes2008} and
polarization sensitive bolometers \citep{Jones2003} which can reach
the photon noise limit with sufficient bandwidth to enable scanning
great circles on the sky at roughly 1\,rpm. They offer a very low
cross-section to cosmic rays that proves to be essential in this
environment and with this sensitivity.
\item A space qualified 100\,mK dilution cooler \citep{Benoit1997}
associated with a
high precision temperature
control system.
\item An active cooler for 4\,K \citep{Bradshaw1997} using vibration
controlled mechanical compressors to prevent excessive warming of the
100\,mK stage and minimize parasitic effects on bolometers.
\item AC biased readout electronics that
extend high sensitivity to very slow signals \citep{Gaertner1997}.
\item A thermo-optical design consisting, for each optical channel, of
three corrugated horns and a set of compact reflective filters and
lenses at cryogenic temperatures \citep{Church1996}. These include
high throughput (multimoded) corrugated horns for the 545 and
857\,GHz channels \citep{Murphy2002}.
\end{itemize}
The angular resolution was chosen to extend the measurement of the
small scale features in the CMB, while keeping the level of stray
light to extremely low levels. At the same time, at this sensitivity,
the measurement and removal of foregrounds requires a large number of
bands extending on both sides of the foreground minimum. This is
achieved with the six bands of the HFI (Table~\ref{PlanckIPF1.5:tab:Channels})
and the three bands of the Low Frequency Instrument (LFI;
\citealt{planck2011-1.4}).
The instrument uses a $\sim20$\,K sorption cooler common to the HFI
and the LFI \citep{planck2011-1.3,Bhandari2000,Bhandari2004}. The HFI
focal plane unit (FPU) is integrated inside the mechanical structure
of the LFI, on axis of the focal plane of a common telescope
\citep{tauber2010a}.
The ability to achieve background limited sensitivity was demonstrated
by the ARCHEOPS balloon-borne experiment \citep{Benoit2003a,
Benoit2003b}, an adaptation of the HFI designed for operation in the
environment of a stratospheric balloon. Similarly, the method of
polarimetry employed by the HFI was demonstrated by the Boomerang
experiment \citep{Montroy2006,Piacentini2006, Jones2006}. The HFI
itself was extensively tested on the ground during the calibration
campaigns \citep{pajot2010} at IAS in Orsay and CSL at Liège. However,
the fully integrated instrument was never characterized in an
operational environment like that of the second Earth-Sun Lagrange
point (L2). In addition to thermal and gravitational environmental
conditions, the spectrum and flux of cosmic rays at L2 is vastly
different from that during the pre-flight testing. Finally, due to the
operational constraints of the cryogenic receiver, the end to end
optical assembly could not be tested on the ground with the focal
plane instruments.
The instrument design and development are described in
\cite{lamarre2010}. The calibration of the instrument is described in
\cite{pajot2010}. The overall thermal and cryogenic design and the
Planck payload performance are critical aspects of the mission.
Detailed system-level aspects are described in \citet{planck2011-1.1}
and \citet{planck2011-1.3}.
\begin{figure}
\includegraphics[width=\columnwidth,keepaspectratio]{HFI_IFP_fig1.pdf}
\caption{HFI spectral transmission}
\label{PlanckIPF1.5:fig:spectral}
\end{figure}
\subsection{Spectral transmission}
The spectral calibration is described in \citet{pajot2010} and consists
of pre-launch data, in and around the passband, combined with
component level data to determine the out of band rejection over an
extended frequency range (radio--UV). Analysis of the in-flight data
shows that the contribution of CO rotational transitions to the HFI
measurements is important. An evaluation of this contribution for the
$J=1\rightarrow0$ (100 and 143 GHz), $J=2\rightarrow1$ (217 GHz) and
$J=3\rightarrow2$ (353 GHz) transitions of CO is presented in
\citet{planck2011-1.7}.
\section{Early HFI operation}
\label{PlanckIPF1.5:Sect3}
\subsection{ HFI Cool down and cryogenic operating point}
\label{PlanckIPF1.5:SS:cooldown}
The {\it Planck\/}\ satellite cooldown is described in \cite{planck2011-1.3}.
The first two weeks after launch were used for passive outgassing,
which ended on 2 June 2009. During this period, gas was circulated
through the $^4$He-JT\ cooler and the dilution cooler to prevent clogging
by condensable gases. The sorption cooler thermal interface with HFI
reached a temperature of 17.2\,K on 13~June. The $^4$He-JT cooler was
brought to its nominal stroke amplitude of 3.5\,mm only on 24~June, to
leave time for the LFI to carry out a specific calibration with their
reference loads around 20\,\hbox{K}. The operating temperature was
reached on 27~June, with the thermal interface with the focal plane
unit at 4.37\,K.
The dilution cooler cold head reached 93\,mK on 3 July 2009. Taking
into account the specific LFI calibration requirement that slowed down
the cooldown, the system behaved as expected within a few days,
according to the thermal models adjusted to the full system cryogenic
tests in the summer of 2008 at CSL (Li\`ege).
The regulated operating temperature point of the 4\,K stage was set at
4.8\,K for the 4\,K feed horns on the \hbox{FPU}. The other stages
were set to 1.395\,K for the so called 1.4\,K stage, 100.4\,mK for the
regulated dilution plate, and 103\,mK for the bolometer regulated
plate.
These numbers were very close to the planned operating point. As the
whole system worked nominally, margins on the cooling chain for
interface temperatures and heat lift are large. The {\it Planck\/}\ active
cooling chain was one of the great technological challenges of this
mission and is fully successful. A full description of the
performance of the cryogenic chain and its system aspects can be found
in \cite{planck2011-1.3}. The parameters of the operating points of
the 4\,K, 1.4\,K and 100\,mK stages are summarised in
Table~\ref{PlanckIPF1.5:tab:cryopoint}.
The temperature stability of the regulated stages has a direct impact
on the scientific performance of the \hbox{HFI}. These stabilities
are discussed in detail in \cite{planck2011-1.3}. Their impact on the
power received by the detectors is given in
Sect.~\ref{PlanckIPF1.5:SSS:Stability}.
\begin{table*}
\caption{Main operation and interface parameters of the cooling chain}
\label{PlanckIPF1.5:tab:cryopoint}
\begin{center}
\vspace{-0.5cm}
\begin{tabular}{lc}
\hline \hline
Interface Sorption cooler-$^4$He-JT cooler (4\,K gas pre-cooling temperature)&17.2~K\\
Interface $^4$He-JT cooler-dilution cooler (dilution gas pre-cooling temperature)&4.37~K\\
Interface 1.4\,K cooler-dilution gas precooling&1.34~K\\
Temperature of dilution plate (after regulation)&100.4~mK\\
Temperature of bolometer plate (after regulation)&103~mK\\
Temperature of 1.4\,K plate (after regulation)&1.395~K\\
Temperature of 4\,K plate (after regulation)&4.80~K\\
Dilution plate PID power&24.3--30.7~nW\\
Bolometer plate PID power&5.1--7.4~nW\\
1.4\,K PID power&270~$\mu$W\\
4\,K PID power&1.7~mW\\
$^4$He-JT cooler stroke amplitude&3450~$\mu$m\\
Dilution cooler $^4$He flow rate&16.19--16.65~$\mu$mole/s\\
Dilution cooler $^3$He flow rate&5.92--6.00~$\mu$mole/s\\
Present survey lifetime (started 6 August 2009) &29.4~months\\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Calibration and performance verification phase}
\subsubsection{Overview}
The calibration and performance verification (CPV) phase of the HFI
consisted of activities during the initial cooldown to 100\,mK and
during a period of about six weeks before the start of the survey.
The cooldown phase is summarized in Sect.~\ref{PlanckIPF1.5:SS:cooldown}.
The pre-launch value of the $^4$He-JT cooler operating frequency was
used (see Sect.~\ref{PlanckIPF1.5:SSS:freq4K}). Activities related to the
optimization of the detection chain settings were performed first
during the cooldown of the JFET amplifiers, and again when the
bolometers were at their operating temperature. Most of the operating
conditions were pre-determined during the ground calibration. The main
unknown was the in-flight background on the detectors. The detection
chain settings are presented in Sect.~\ref{PlanckIPF1.5:SSS:detchain}.
Other CPV activities performed were:
\begin{itemize}
\item determination of the detection chain time response under the
flight background
\item determination of the detection chain channel-to-channel
crosstalk under the flight background
\item characterization of the bolometer response to the 4\,K and
1.4\,K optical stages, and to the bolometer plate temperature
variations
\item checking the immunity of the instrument to the satellite
transponder
\item optimization of the numerical compression parameters for the
actual sky signal and high energy particle glitch rate
\item various ring-to-ring slew angles (1\parcm7, 2\parcm0 [nominal],
2\parcm5)
\item checking the effect of the scan angle with respect to the Sun
\item checking the effect of the satellite spin rate around its
nominal value of 1\,rpm.
\end{itemize}
On 5 August 2009, an unexpected shutdown of the $^4$He-JT cooler was
triggered by its current regulator unit (CRU). Despite investigations
into this event, its origin is still unexplained. A procedure for a
quick restart was developed and implemented in case the problem
recurred, but it has not. Six days were required to re-cool the
instrument to its operating point. The two-week first light survey
(FLS) followed this recovery, starting on 15 August 2009. The FLS
allowed assessment of the quality of the instrument settings,
readiness of the data processing chain, and satellite scanning before
the start of science operations. The complete instrument and satellite
settings were validated and kept, and science operations began. All
activities performed during the CPV phase confirmed the pre-launch
estimates of the instrument settings and operating mode. The most
significant of these are detailed in the following paragraphs.
\subsubsection{$^4$He-JT cooler operating frequency setting}
\label{PlanckIPF1.5:SSS:freq4K}
The $^4$He-JT cooler operating frequency was set to the nominal value
of 40.08\,Hz determined during ground tests. Once the cryochain
stabilized, the in-flight behaviour of the cooler was found to be very
similar to that observed during ground tests. The lines observed in
the signal due to known electromagnetic interference (EMI) from the
$^4$He-JT cooler drive electronics have the same very narrow
width. The long term evolution of the $^4$He-JT cooler parasitic lines
is discussed in Sect.~\ref{PlanckIPF1.5:Sect6}.
\subsubsection{Detection chain parameters setting}
\label{PlanckIPF1.5:SSS:detchain}
The JFET preamplifiers are operated at the temperature which minimizes
their noise. This setting was checked when the bolometers were still
warm (above 100\,K) during the cooldown, since the bolometer
Johnson noise was then much lower than the JFET noise. Optimum noise
performance of the JFETs was found close to 130\,K, in agreement
with the ground calibration.
After ground calibration, the only parameters of the readout electronic unit (REU) remaining to
optimize in-flight were the bolometer bias current and the phase of
the lock-in detection, which slightly depends on the bolometer
impedance. Fig.~\ref{PlanckIPF1.5:fig:Resp_all} shows the bolometer
responses for a set of bias current values measured while {\it Planck\/}\ was
scanning the sky. For this sequence, the satellite rotation axis was
fixed. For each bias value, the total detection chain noise was
computed after subtraction of the sky signal. Ground measurements
have shown that the minimum NEP and the maximum responsivity bias
currents differ by less than 1\%. Because of its higher
signal-to-noise ratio, we use the responsivity to optimize the bias
currents \citep{CatalanoACDC2010}. The optimum in-flight bias current
values agree with the pre-launch estimates to within 5\%. Therefore
the pre-launch settings, for which extensive ground characterizations
were performed, were kept (Fig.~\ref{PlanckIPF1.5:fig:Resp_all}). In a
similar way, the lock-in phase was explored and optimized, and again
the pre-launch settings were kept.
The optical background power on the bolometers is on the low end of
our rather conservative range of predictions, even lower than expected
from the ground measurements. This is attributed to a low telescope
temperature and no detectable contamination of the telescope surface
by dust during launch. This should result in a level of photon noise
lower than initially expected and an improved sensitivity.
\begin{figure*}
\includegraphics[width=\textwidth, keepaspectratio]{HFI_IFP_fig2.pdf}
\caption{Optimization of the bolometer bias currents. Vertical lines
indicate the final bias value setting. These values are shifted
with respect to the maximum because a dynamic response correction
has been taken into account. A bias value of 100\,digits
corresponds approximately to 0.1\,nA.}
\label{PlanckIPF1.5:fig:Resp_all}
\end{figure*}
\subsubsection{Numerical data-compression tuning}
The output of the REU consists of one number
for each of the 72~science channels for each half-period of modulation
\citep{lamarre2010}. This number, $S_{\rm REU}$, is the exact sum of
the 40 16-bit ADC signal values obtained within the given
half-period. The data processor unit (DPU) performs a lossy
quantization of $S_{\rm REU}$.
First, 254 $S_{\rm REU}$ values corresponding to about 1.4\,s of
observation for each detector, covering a strip of sky about 8\ifmmode^\circ\else$^\circ$\fi\
long, are processed. These 254 values are called a {\it compression
slice}. The mean $<S_{\rm REU}>$ of the data within each compression
slice is computed, and data are demodulated using this mean:
\begin{equation}
S_{{\rm demod},i}= (S_{{\rm REU},i}-<S_{\rm REU}>)\times(-1)^i
\end{equation}
where $1\leq i\leq 254$ is the running index within the compression slice.
Then the mean $<S_{\rm demod}>$ of the demodulated data $S_{{\rm
demod},i}$ is computed and subtracted. The resulting data slice is
quantized according to a step $Q$ fixed per detector:
\begin{equation}
S_{{\rm DPU},i}= \hbox{round}((S_{{\rm demod},i}-<S_{\rm demod}>)/Q)
\end{equation}
This is the lossy part of the algorithm: the required compression
factor, obtained through the tuning of the quantization step $Q$, adds
some extra noise to the data. For $\sigma/Q = 2$, where $\sigma$ is
the standard deviation of Gaussian white noise, the quantization adds
1\% to the noise \citep{pajot2010,pratt1978}. In flight, the value of
$\sigma$ was determined at the end of the CPV phase after subtraction
of the signal from the timeline.
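A minimal sketch of this on-board processing (in Python, with zero-based indexing; the subsequent lossless variable-length encoding and its on-ground inverse are omitted):
\begin{verbatim}
import numpy as np

def dpu_compress(S_REU, Q):
    # one compression slice: the 254 half-period sums of one channel
    i = np.arange(S_REU.size)
    mean_REU = S_REU.mean()
    S_demod = (S_REU - mean_REU) * (-1.0) ** i  # demodulate about the mean
    mean_demod = S_demod.mean()
    S_DPU = np.rint((S_demod - mean_demod) / Q).astype(np.int64)
    # the two means are transmitted as 32-bit words
    return mean_REU, mean_demod, S_DPU

def dpu_reconstruct(mean_REU, mean_demod, S_DPU, Q):
    # ground-segment inverse, exact up to the quantization error
    i = np.arange(S_DPU.size)
    return (S_DPU * Q + mean_demod) * (-1.0) ** i + mean_REU
\end{verbatim}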
The two means $<S_{\rm REU}>$ and $<S_{\rm demod}>$ computed as 32-bit
words are sent through the telemetry, together with the $S_{{\rm
DPU},i}$ values. A variable length encoding of the $S_{{\rm DPU},i}$
values is performed on board, and the inverse decoding is applied on
ground. This provides a lossless transmission of the quantized
values. A load limitation mechanism inhibits the data transmission,
first at the compression slice level (compression errors), and second
at the ring level \citep{lamarre2010}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,totalheight=.38\textwidth]{HFI_IFP_fig3.pdf}
\caption{Load measured for each HFI channel on 16 July 2010. Simulated
data for the same patch of the sky are shown for
bolometers. Channels \#54 and higher correspond to the fine
thermometers on the optical stages of the instruments, plus a fixed
resistor (\#60) and a capacitor (\#61) on the bolometer plate.}
\label{PlanckIPF1.5:fig:compload}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{HFI_IFP_fig4.pdf}
\caption{Example of loss in one compression slice of data on bolometer
857-1. Note the large signal-to-noise ratio while scanning through
the galactic center.}
\label{PlanckIPF1.5:fig:comperror}
\end{figure}
For a given $Q$ value, the load on each channel depends on the dynamic
range of the signal above the level of the noise. This dynamic range
is largest for the high frequency bolometers because of the galactic
signal. The large rate of glitches due to high energy particle
interactions also contributes to the load of each channel. Optimal
use of the bandpass available for the downlink (75\,kb\,s\ifmmode^{-1}\else$^{-1}$\fi\ average
for HFI science) was obtained by initially using a value of $Q =
\sigma/2.5$ for all bolometer signals, therefore including a margin
with respect to the requirement of $\sigma/Q = 2$. The load on each
HFI channel is shown and compared to simulated data in
Fig.~\ref{PlanckIPF1.5:fig:compload}. The increase of signal gradients while
scanning through the galactic center in September 2009 triggered the
load limitation mechanism (compression error) and up to 80000 samples
were lost for each of the 857\,GHz band bolometers. Therefore a new
value of $Q = \sigma/2$ was set for those bolometers from 21 December
2009 onward, reducing the number of samples lost to less than 200
during the following scan through the galactic center in March 2010.
An illustration of a compression error loss is shown in
Fig.~\ref{PlanckIPF1.5:fig:comperror}. Thanks to the redundancy of the {\it Planck\/}\
scan strategy and the irregular distribution of the few remaining
compression errors, no pixels are missing in the maps of the high
signal-to-noise ratio galactic center regions. Periodic checks of the
noise value $\sigma$ are done for each channel, but no deviation
requiring a change in the quantization step $Q$ has been encountered
so far.
\subsubsection{Instrument readiness at the end of the CPV phase}
The overall readiness of the instrument was assessed during the
\hbox{FLS}. This end-to-end test was completely successful, from both
the instrument setting and the satellite scanning points of view. The
part of the sky covered during the FLS was included in the first all
sky survey.
\subsection{Response}
\subsubsection{Variation of the signal with background and with the bolometer plate temperature}
\label{PlanckIPF1.5:SSS:Stability}
The optical background on the bolometers originates from the sky, the
telescope, and from the HFI itself. The operating point of the
bolometers is constrained by this total optical background, and the
fluctuations of this background have a direct impact on the stability
of the HFI measurements.
The power spectral density of each contribution to the background is
compared to 30\% of the total noise measured in-flight (NEP$_1$ column
of Table~\ref{PlanckIPF1.5:tab:noiseJML}). This specification corresponds to a
quadratic contribution of less than 5\% to the total noise.
The in-flight temperature stability of the HFI cryogenic stages is
discussed in \cite{planck2011-1.3}. The optical coupling of the HFI
bolometers to each cryogenic stage is shown in the left panels of
Figs. \ref{PlanckIPF1.5:fig:4k} and \ref{PlanckIPF1.5:fig:16k} and in
Fig. \ref{PlanckIPF1.5:fig:100mk}. (The fact that the 100\,mK
couplings all agree with pre-launch measurements shows that no
bolometers were damaged during launch.) These couplings are used to
calculate the effect of the fluctuations of each cryogenic stage on
the bolometer signals. The right panels of
Figs.~\ref{PlanckIPF1.5:fig:4k} and \ref{PlanckIPF1.5:fig:16k} show
the power spectral density (PSD) of the respective thermometers scaled
by the optical coupling factors for the most extreme bolometers. The
scaled PSDs of the thermal fluctuations of the 4\,K and 1.4\,K stages
are below the line corresponding to 30\% of the total noise of the
corresponding bolometer for all frequencies above the spacecraft spin
frequency.
The bolometer plate thermometers have a large cosmic particle hit rate
\citep{planck2011-1.3} because of the large size of their sensors
compared to that of the bolometers. Cosmic ray hits detection and
removal do not allow us to reach the thermometer nominal sensitivity,
therefore they cannot be used to remove the effect of bolometer plate
temperature fluctuations on the bolometer signal. Instead, the data
processing pipeline \citep{planck2011-1.7} uses blind bolometers
located on the bolometer plate. The bolometer noise components are
discussed in Sect.~\ref{PlanckIPF1.5:Sect6}.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{HFI_IFP_fig5a}
\includegraphics[width=\columnwidth,keepaspectratio]{HFI_IFP_fig5b}
\caption{Left: coupling coefficients of the 4\,K stage. Right: scaled
power spectral density (PSD) of the 4\,K stage thermal fluctuations
for the 100-1a and 353-5a-7a bolometers.}
\label{PlanckIPF1.5:fig:4k}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{HFI_IFP_fig6a}
\includegraphics[width=\columnwidth,keepaspectratio]{HFI_IFP_fig6b}
\caption{Left: coupling coefficients of the 1.4\,K stage. The thermal
emission in high frequency bands becomes too small to be
measured. Right: scaled PSD of the 1.4\,K stage thermal
fluctuations for the 100-1a and 353-5a-7a bolometers.}
\label{PlanckIPF1.5:fig:16k}
\end{figure*}
\subsubsection{Linearity}
\label{PlanckIPF1.5:SSS:Linearity}
The way a bolometer transforms absorbed optical power into a voltage
is not a linear process because both the conductance between the
bolometer and the heat sink, and the bolometer impedance have a
non-linear dependence on the temperature (see
e.g. \cite{Catalano2008Thesis,Sudiwala2000}).
\begin{table}
\caption{Relative response deviation (in \%) from linearity for the
CMB dipole, the galactic center (GC) and planets. Saturation (Sat.)
occurs for the Jupiter measurements at high frequency.}
\label{PlanckIPF1.5:tab:nonlinearity}
\vspace{-0.25cm}
\begin{tabular}{lccccc}
\hline\hline
 & Dipole & GC & Mars & Saturn & Jupiter\\
\hline
100 GHz & $3.8\times10^{-4}$ & 0.001 & 0.01 & 0.13 & 0.8\\
143 GHz & $10^{-3}$ & 0.0017 & 0.02 & 0.18 & 1.0\\
217 GHz & $8\times10^{-4}$ & 0.003 & 0.05 & 0.53 & 3.2\\
353 GHz & $6.4\times10^{-4}$ & 0.007 & 0.06 & 0.8 & 4.5\\
545 GHz & $<10^{-4}$ & 0.01 & 0.08 & 0.8 & Sat.\\
857 GHz & $<10^{-4}$ & 0.1 & 0.06 & 0.8 & Sat.\\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{HFI_IFP_fig7}
\caption{Bolometer signal coupling coefficients to the 100\,mK bolometer plate.}
\label{PlanckIPF1.5:fig:100mk}
\end{figure}
The characterization of the linearity of the HFI detectors has a
direct impact on the calibration of the instrument: strong
non-linearity takes place during Galaxy crossings for the high
frequency bolometers and during planet crossings. An accurate absolute
calibration is also necessary for the CMB dipole. Finally, the energy
scale of large glitches can be corrected. The static response has been
characterized during ground calibration, showing a small deviation
from linearity: around a tenth of one percent for fainter sources (a
few hundred attowatts) and around a few percent for brighter sources
like planets \citep{pajot2010}. The static response measured during
the CPV phase agrees with the ground estimate to better than
1\%. Nevertheless, the static non-linearity determination does not
represent the true bolometric non-linear behaviour when scanning
through bright point sources like planets. Linearizing the response by
multiplying the signal by an amplitude-dependent gain and convolving
it with the temporal transfer function (normalized to 1 at the lowest
frequency) is valid for small signals. However, this is not the case
for bright point sources, for which the estimate of non-linearity
using the static response may be incorrect by up to 40\% in the
extreme case of Jupiter. For these sources we use a model to correct
the static results. The use of fainter planets like Mars to
characterize the beams minimizes this effect.
Table~\ref{PlanckIPF1.5:tab:nonlinearity} gives the deviation from linearity for
various sources at the center of the beam for the bolometers at each
frequency.
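To make the small-signal linearization concrete, the following toy
Python sketch inverts a weakly quadratic static response by
fixed-point iteration. The response shape and its coefficients are
purely illustrative assumptions, not the HFI bolometer model; the
point is that an amplitude-dependent gain can be inverted accurately
for faint sources, while the scheme degrades for planet-like signals.
\begin{verbatim}
import numpy as np

g0, beta = 1.0, 4.0e-3      # toy gain and non-linearity coefficient

def forward(p):
    """Static response: optical power -> voltage (arbitrary units)."""
    return g0 * p * (1.0 - beta * p)

def linearize(v, n_iter=3):
    """Invert the static response by fixed-point iteration."""
    p = v / g0
    for _ in range(n_iter):
        p = v / (g0 * (1.0 - beta * p))
    return p

p_true = np.array([0.01, 1.0, 10.0])   # faint source ... planet
p_rec = linearize(forward(p_true))
print(np.abs(p_rec / p_true - 1.0))    # residual deviation
\end{verbatim}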
\subsection{Electrical crosstalk on HFI detectors}
The electrical coupling of the signal of one bolometer into the
readout chain of another, or \emph{electrical crosstalk}, was measured
to be less than $-60$\,dB for all pairs of channels during ground-based
tests \citep{pajot2010}. We performed two tests in flight to verify
this result, described below.
\subsubsection{CPV crosstalk measurements}
During the CPV phase, we switched off each readout channel one at a
time for ten minutes, and observed the impact on all other channels.
For each bolometer we collected about 660\,minutes of data.
The crosstalk coefficient between channels $i$ and $j$ is expressed as:
\begin{equation}
C_{ij} = \Delta{\tilde{V}_j} / \Delta{\tilde{V}_i},
\end{equation}
where {$\tilde{V}_i$} and {$\tilde{V}_j$} are the channel $i$ and $j$
voltages, corrected for thermal drift. The crosstalk matrix and a
histogram of crosstalk levels are shown in
Fig.~\ref{PlanckIPF1.5:fig:CPV_EXT_matrix}. The crosstalk is mostly
confined to nearest neighbours in the belt, channels whose wiring is
physically close. The measured crosstalk level is in good agreement
with ground measurements, typically {$< -70$}\,dB, and thus meets the
requirement. A few of the polarization sensitive bolometer pairs show
a crosstalk around $-60$\,dB.
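A minimal Python sketch of this estimator is given below; the array
convention (row $i$: channel $i$ toggled) and the toy coupling level
are illustrative assumptions.
\begin{verbatim}
import numpy as np

def crosstalk_db(delta_V):
    """delta_V[i, j]: drift-corrected voltage change of channel j
    when channel i is switched off. Returns C_ij in dB."""
    diag = np.diag(delta_V)
    C = delta_V / diag[:, None]        # normalize by toggled channel
    C_db = 20.0 * np.log10(np.clip(np.abs(C), 1e-30, None))
    np.fill_diagonal(C_db, 0.0)
    return C_db

# toy example: 4 channels with -70 dB nearest-neighbour coupling
n = 4
dV = np.eye(n)
for i in range(n - 1):
    dV[i, i + 1] = dV[i + 1, i] = 10.0 ** (-70.0 / 20.0)
print(crosstalk_db(dV))
\end{verbatim}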
\begin{figure}
\includegraphics[width=\columnwidth]{HFI_IFP_fig8a}
\includegraphics[width=\columnwidth]{HFI_IFP_fig8b}
\caption{Top: $54\times54$ electrical crosstalk matrix {$C_{ij}$} for all
  bolometers; coefficients are in \hbox{dB}. Bottom: distribution of
  electrical crosstalk coefficients in dB.}
\label{PlanckIPF1.5:fig:CPV_EXT_matrix}
\end{figure}
In the next section we will see that the crosstalk measured from
glitches is much lower. This suggests that the CPV test measured
electrical crosstalk in current, which is unrelated to the scientific
signal.
\subsubsection{Measurements using glitches}
We used high energy glitches in one channel to study the impact on the
signal of surrounding channels. Thousands of glitch events are
collected for one channel, and the signals of all other channels for
the same time period are stacked. The crosstalk in volts for
individual glitches is defined as:
\begin{equation}
c_{ij}^V = \Delta{V_j} / \Delta{V_i},
\end{equation}
where {$V_i$} is the glitch amplitude in volts in the channel hit by a
cosmic ray, and {$V_j$} the response amplitude of another channel $j$.
Then, for a pair of channels $i$ and $j$, the global voltage crosstalk
coefficient is
\begin{equation}
C_{ij}^V = \hbox{median}(c_{ij}^V).
\end{equation}
For SWB channels, in contrast with the previous CPV results, no
evidence of crosstalk is seen, with an upper limit of $-100$\,\hbox{dB}.
There are outliers in the Galactic channels because of incorrect glitch
flagging. A second analysis, using planet crossing data instead of
glitches, gave the same results.
Concerning the coupling between PSB pairs, we see crosstalk around
$-60$\,dB, in agreement with the CPV tests; however, this is likely an
upper limit because it includes the effects of coincident cosmic ray
glitches which produce a similar effect but are not crosstalk.
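The median estimator can be sketched as follows in Python; the event
amplitudes and the $-100$\,dB coupling are simulated placeholders.
\begin{verbatim}
import numpy as np

def glitch_crosstalk_db(Vi_events, Vj_events):
    """Median crosstalk from stacked glitch events: Vi_events are
    glitch amplitudes (volts) in the hit channel i, Vj_events the
    simultaneous responses of channel j."""
    c = np.asarray(Vj_events) / np.asarray(Vi_events)
    C = np.median(c)
    return 20.0 * np.log10(max(abs(C), 1e-30))

rng = np.random.default_rng(1)
Vi = rng.uniform(1.0, 10.0, 5000)
Vj = 1e-5 * Vi + 1e-4 * rng.standard_normal(5000)  # noisy -100 dB
print(glitch_crosstalk_db(Vi, Vj))                 # close to -100
\end{verbatim}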
\section{Beams and time response}
\label{PlanckIPF1.5:Sect4}
\subsection{Measurement of Time Response}
\subsubsection{Introduction}
The \emph{time response} of HFI describes the shift, in amplitude and
phase, between the optical signal incident to each detector and the
output of the readout electronics. The response can be approximated
by a linear complex transfer function in the frequency domain. The
signal band of HFI extends from the spin frequency of the spacecraft
($f_{\rm spin} \simeq 16.7$\,mHz) to a cutoff defined by the angular
size of the beam (14--70\,Hz; see Table~4 from
\citet{lamarre2010}). For the channels at 100, 143, 217, and 353\,GHz,
the dipole calibration normalizes the time response at the spin
frequency. To properly measure the sky signal at small scales, the
time response must be characterized to high precision across the
entire signal band, spanning four decades from 16.7\,mHz to $\sim
100$\,Hz.
The time response of bolometers is typically nearly flat over a signal
band from zero frequency to a frequency defined by the bolometer's
thermal time constant, and then drops sharply at higher frequencies.
For the HFI bolometers, the thermal frequency is 20--50\,Hz
\citep{lamarre2010,Holmes2008}. As noted in \citet{lamarre2010} and
\citet{pajot2010}, however, the time response of HFI is not flat at
very low frequencies, but exhibits a low frequency excess response
(LFER).
We define the {\em optical beam} as the instantaneous directional
response to a point source. Any sky signal is convolved with this
function, which is completely determined by the optical systems of HFI
and {\it Planck\/}.
Since {\it Planck\/}\ is rotating at a nearly constant rate and around the
same direction, the data are the convolution of the signal with both
the beam and the time response of \hbox{HFI}. We separate the
two effects and deconvolve the time response from the time ordered
data. This deconvolution results in a flat signal response, but
necessarily amplifies any components of the system noise that are not
rolled off by the bolometric response. This amplified noise is
suppressed by a low-pass filter \citep{planck2011-1.7}.
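Schematically, the deconvolution and the subsequent filtering can be
written as in the Python sketch below; the transfer function is passed
in as a callable, and the Gaussian low-pass with its cutoff is an
illustrative choice, not the filter actually used in the pipeline.
\begin{verbatim}
import numpy as np

def deconvolve_toi(toi, H, f_samp, f_cut=60.0):
    """Divide the TOI by the time response H(f) in Fourier space,
    then low-pass to tame the amplified high-frequency noise."""
    n = toi.size
    f = np.fft.rfftfreq(n, d=1.0 / f_samp)
    spec = np.fft.rfft(toi) / H(f)           # flatten the response
    spec *= np.exp(-0.5 * (f / f_cut) ** 2)  # suppress noise
    return np.fft.irfft(spec, n=n)

# usage with a toy single-pole response (tau = 10 ms)
H = lambda f: 1.0 / (1.0 + 2j * np.pi * f * 0.01)
toi = np.random.default_rng(0).standard_normal(2**16)
clean = deconvolve_toi(toi, H, f_samp=180.37518)
\end{verbatim}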
\subsubsection{TF10 model}
\label{PlanckIPF1.5:SSS:TF10_model}
The main ingredients of the time response are: (i)~heat propagation
within the bolometer; (ii)~signal modulation at a frequency of $f_{\rm
mod}= 90.188$\,Hz performed by reversing the bolometer bias current;
(iii)~the effect of parasitic capacitance along the high impedance
wiring between the bolometer and the first electronics stage (JFETs);
(iv)~band-pass filtering, to reject the low frequency and high
frequency white noise in the electronics; (v)~signal averaging and
sampling; and (vi)~demodulation.
Because of the complexity of this sequence, a phenomenological
approach was chosen to build the time response model. The time
response is written as the product of three factors:
\begin{equation}
H_{\rm 10} (f) = H_{\rm bolo} \times H_{\rm res} \times H_{\rm filter}.
\end{equation}
Schematically, the first factor takes into account step (i), the
second factor describes a resonance effect that results from the
combination of steps (ii) and (iii), while the purpose of
$H_{\rm filter}$ is to account for step (iv).
Detailed analysis and measurements of heat propagation within the
bolometer have shown that $H_{\rm bolo}$ is given by the algebraic
sum of three single-pole low-pass filters. Explicitly:
\begin{equation}
H_{\rm bolo} = \sum_{i = 1,3} \frac{a_i}{1 + j 2 \pi f \tau_i},
\end{equation}
with 6 parameters ($a_1$,\,$a_2$,\,$a_3$,\,$\tau_1$,\,$\tau_2$,\,$\tau_3$).
The resonance factor is
\begin{equation}
H_{\rm res} = {{1 + p_7 (2 \pi f)^2}\over{1 -p_8 (2 \pi f)^2 + j p_9
(2 \pi f)}},
\end{equation}
with 3 free parameters ($p_7$,\,$p_8$,\,$p_9$), and the filter factor is
\begin{equation}
H_{\rm filter} = { {1 - (f / F_{\rm mod})^2} \over
{1 - p_{10}(2 \pi f)^2 + j (f/F_{\rm filter})^2} }
\end{equation}
with one free parameter ($p_{10}$). A total of 10 free parameters
describe this model, as indicated by its name. See
Fig.~\ref{PlanckIPF1.5:fig:TF10_example} for an illustration of the three
components of the time response model TF10 for a typical 217\,GHz
channel.
The parameter $F_{\rm filter}$ characterizes the rejection filter width
and is kept fixed to 6\,Hz in the fitting process.
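For concreteness, the TF10 model can be evaluated as in the Python
sketch below; the parameter values in the usage lines are invented for
illustration and are not fitted HFI values.
\begin{verbatim}
import numpy as np

F_MOD = 90.188     # modulation frequency [Hz]
F_FILTER = 6.0     # rejection-filter width, fixed in the fits [Hz]

def tf10(f, a, tau, p7, p8, p9, p10):
    """H_10(f) = H_bolo * H_res * H_filter, with a = (a1, a2, a3)
    summing to 1 and tau = (tau1, tau2, tau3) in seconds."""
    f = np.asarray(f, dtype=float)
    w = 2.0 * np.pi * f
    H_bolo = sum(ai / (1.0 + 1j * w * ti) for ai, ti in zip(a, tau))
    H_res = (1.0 + p7 * w**2) / (1.0 - p8 * w**2 + 1j * p9 * w)
    H_filt = ((1.0 - (f / F_MOD)**2)
              / (1.0 - p10 * w**2 + 1j * (f / F_FILTER)**2))
    return H_bolo * H_res * H_filt

# illustrative evaluation over the HFI signal band
f = np.logspace(np.log10(16.7e-3), 2, 400)
H = tf10(f, a=(0.7, 0.25, 0.05), tau=(5e-3, 3e-2, 0.5),
         p7=1e-6, p8=1e-7, p9=1e-4, p10=1e-7)
\end{verbatim}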
Besides the fact that this phenomenological model
is physically motivated, this parameterization:
\begin{itemize}
\item ensures causality;
\item satisfies $H(-f) = H^\ast{}(f)$;
\item goes to 1 when $f$ goes to zero (because we define $a_1+a_2+a_3
  = 1$), while it goes to 0 when $f$ goes to infinity;
\item includes enough parameters to provide the necessary
  flexibility to fit the time response data of all 52 bolometers.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth]{HFI_IFP_fig9}
\caption{The amplitude and phase (in radians) of the three components
of the TF10 model of the time response. The solid blue line is
$H_{\rm bolo} (f)$, the dotted green line shows $H_{\rm filter}
(f)$ and the dashed red line shows $H_{\rm res}$. The solid black
line is $H_{\rm 10} (f)$, the product of the three components. The
vertical dotted black line shows the signal frequency where the
beam of a 217\,GHz channel cuts the signal power by half.}
\label{PlanckIPF1.5:fig:TF10_example}
\end{center}
\end{figure}
\subsubsection{Fitting the TF10 Model to Ground Data}
\label{PlanckIPF1.5:SSS:TF10_GroundData}
To obtain the 10 $\times$ 52 parameter values, we used three sets of
pre-launch measurements. (i) The bolometer response was measured at 10
different frequencies by illuminating all 52 bolometers with a chopped
light source. (ii) Other measurements were done using carbon fibers as
light sources; the latter were alternately turned on and off at a
variable frequency. (iii) The bolometer bias currents were
periodically stepped up and lowered by a small amount. By adding a
square wave to the DC current, temperature steps are induced,
simulating turning on and off a light source (the analysis of these
data requires bolometer modelling).
None of these measurements was absolutely normalized; all
compared the relative response to inputs of
various frequencies. While
measurement (i) only provided the amplitude of the time response,
measurements (ii) and (iii) provided both the amplitude and the
phase. Note that for the phase analysis, because of the lack of
precise knowledge of the time origin $t_0$ of the light/current
pulses, a fourth factor $\exp(j 2 \pi \Delta t_0 f)$ is
introduced in the expression of $H_{10} (f)$, where the additional
parameter $\Delta t_0$ represents
the uncertainty in time.
Among the three sets of measurements, the carbon fiber set covered the
largest frequency range. Thus it was the best for building the transfer
function model described above and for investigating its main
features. However, it involves uncertainties that could be resolved
only with a very detailed simulation of the set-up; it was therefore
not used to calculate the final set of parameter values. The final
values were calculated from data sets (i)
and (iii), whose frequency ranges are complementary, 2--140\,Hz and
0.0167--10\,Hz, respectively.
Since no absolute normalization was available, the two datasets were matched
in the overlap frequency range.
The fitting of the analytic expression given above to the merged data
was done in the range between 16.7\,mHz and 120\,Hz. The 52 fits have
a $\chi^2$/DoF distribution whose mean value is 1.13, indicating that
the model is adequate to describe the data. The numerical values of
$p_{10}$ displayed a small spread, $\sigma_{\rm mean} <
6\times10^{-4}$. This parameter was set at its mean value to
calculate the 52 covariance matrices of the nine remaining parameters,
which were useful in propagating the statistical errors.
As described below, the time response thus obtained was further tuned
and checked using in-flight observations, in particular signals
produced by planets and by cosmic rays (glitches).
An alternative model has also been defined, based on the analytical
expression of the steps (ii) to (vi) of
Sect.~\ref{PlanckIPF1.5:SSS:TF10_model}. Based on a closer analysis
of the electronics stages, this model is more physically motivated
than the 10-parameter model. It requires only 8 parameters and
provides better results near the modulation frequency. Nevertheless
the model has not been used in the current data release. It is only
used as a benchmark, to check possible systematic effects in the
current release. Most of the effects of the difference between the
models disappear when the data are low-pass filtered.
\subsubsection{Fitting TF10 to Flight Data}
\label{PlanckIPF1.5:SSS:TF10_FlightData}
The planets Mars, Jupiter, and Saturn are bright, compact sources that
are suitable for measuring the beam and provide a near-delta-function
stimulus to the system that can be used to constrain the time
response. During the first sky survey, {\it Planck\/}\ observed Mars twice
and Jupiter and Saturn once \citep{planck2011-1.7}. During a planet
observation, the spacecraft scans in its usual observing mode
\citep{planck2011-1.1}, shifting the spin axis in 2\ensuremath{^{\scriptstyle\prime}}\ steps along
a cycloidal path on the sky. Since planets are close to the ecliptic
plane, the coverage in the cross-scan direction is not as fine as in
the scan direction. In the case of Jupiter and Saturn, each channel
observes the planet once per rotation for a period of approximately 6
hours (9 periods of stationary pointing, or ``rings''). Because Mars has
a large proper motion, the first observation lasted 12 hours (or 18
rings).
We use the forward-sense time domain approach \citep{Huffenberger2010}
to simultaneously fit for Gaussian beam parameters and TF10 time
response parameters. A custom processing pipeline avoids filtering the
data. We extract the raw bolometer signal and demodulate it using the
parity bit. We use the flags created by the
time ordered information (TOI) processing pipeline to
exclude data samples contaminated by cosmic rays, and we additionally
flag all data samples where the nonlinear gain correction is more than
0.1\%. We use Horizons\footnote{\url{ssd.jpl.nasa.gov/?horizons}}
ephemerides to compute the pointing of each horn relative to the
planet center.
The time domain signal from the planet is modeled as an elliptical
Gaussian convolved with the TF10 time response as follows:
\begin{equation}
d(t) = H_{10} \star A (t) G\left[\vec{x}(t); \vec{x}_0,\epsilon,
\theta_{\rm FWHM}, \psi \right],
\end{equation}
where the Gaussian optical beam model $G$ is parameterized as in
Eqs.~9--11 of \cite{Huffenberger2010}, except the planet amplitude is
parameterized with a disk temperature rather than a single amplitude:
\begin{equation}
A (t) = T_{\rm disk} \frac{\Omega_{\rm p} (t)}{\Omega_{\rm b}},
\end{equation}
where $T_{\rm disk}$ is the whole-disk temperature of the planet,
$\Omega_{\rm p}$ is the solid angle of the planet, which can vary
significantly during the observation, and $\Omega_{\rm b}$ is the
solid angle of the beam. $\Omega_{\rm p}$
is computed using Horizons, which is programmed with {\it Planck\/}'s orbit.
The free parameters of the fit are the six parameters of the time
response corresponding to $H_{\rm bolo}$, the two components of the
centroid of the beam $\vec{x}_0$, the mean FWHM $\theta_{\rm FWHM}$, the
ellipticity $\epsilon$, the ellipse orientation angle $\psi$, and the
planetary disk temperature $T_{\rm disk}$. The four parameters describing
the electronics are somewhat degenerate with the bolometer part of the
time response, and we fix them at the ground-based values.
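A schematic Python version of this forward model is given below; it
reuses the \texttt{tf10} sketch above as the time response, and the
simple Fourier-space convolution is an illustration, not the pipeline
implementation.
\begin{verbatim}
import numpy as np

def planet_model(t, x, y, H, f_samp, T_disk, omega_p, omega_b,
                 x0, y0, fwhm, eps, psi):
    """d(t): elliptical Gaussian beam crossing a planet, convolved
    with the time response H(f) (a callable, e.g. a wrapped tf10)."""
    sig = fwhm / np.sqrt(8.0 * np.log(2.0))
    dx = (x - x0) * np.cos(psi) + (y - y0) * np.sin(psi)
    dy = -(x - x0) * np.sin(psi) + (y - y0) * np.cos(psi)
    G = np.exp(-0.5 * ((dx / sig)**2 + (dy / (eps * sig))**2))
    A = T_disk * omega_p / omega_b     # whole-disk amplitude
    f = np.fft.rfftfreq(t.size, d=1.0 / f_samp)
    return np.fft.irfft(np.fft.rfft(A * G) * H(f), n=t.size)
\end{verbatim}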
Because of the large nonlinear response and highly non-Gaussian beams
at 545 and 857\,GHz, we do not perform fits to the planet data at
these frequencies. Instead we rely on pre-launch fits for the time
response.
By taking the Fourier transform of the time response function derived
on planets, one obtains the system response to a Dirac impulse. This
response can be compared to the glitches generated by cosmic rays that
deposit energy in the sensor grids.
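In Python this amounts to an inverse FFT of the fitted transfer
function, as in the minimal sketch below (the number of samples is
arbitrary); the resulting $h(t)$ can be peak-normalized and overlaid
on a stacked glitch template.
\begin{verbatim}
import numpy as np

def impulse_response(H, f_samp, n=4096):
    """Response to a Dirac impulse: inverse Fourier transform of the
    transfer function H(f), sampled at f_samp [Hz]."""
    f = np.fft.rfftfreq(n, d=1.0 / f_samp)
    h = np.fft.irfft(H(f), n=n) * f_samp   # unit-area input impulse
    t = np.arange(n) / f_samp
    return t, h
\end{verbatim}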
The glitches detected by HFI are sampled with time steps $1/(2 F_{\rm
mod})$. However, the glitches can be superresolved in time by
normalizing, phasing, and stacking single glitch events
\citep{Crill2003}. This gives glitch templates for each channel
(Alexandre Sauv\'e, private communication) that are effectively
sampled at a much higher frequency.
Figure \ref{PlanckIPF1.5:fig:glitch_vs_tf} shows the comparison between a
superresolved glitch template and the corresponding calculated
response. There is good agreement in general, but there are
discrepancies at high frequency ($f > 100$\,Hz). The physical model
for the electronics transfer functions, briefly described at the end of
Sect.~\ref{PlanckIPF1.5:SSS:TF10_GroundData}, suppresses this discrepancy at
high frequency.
\begin{figure}
\includegraphics[width=1\columnwidth]{HFI_IFP_fig10}
\caption{Comparison of the impulse response of channel 143-2a (red
curve) and a template made from stacking glitch events (black
curve). Noise begins to dominate further in the timeline. The
ringing observed at the modulation frequency is generated by the
electronics rejection filter.}
\label{PlanckIPF1.5:fig:glitch_vs_tf}
\end{figure}
Each planet observation suffers unique systematic effects, so a
comparison of the time response recovered on each gives a good assessment
of the effects of these different systematics. Mars has a large
proper motion, giving excellent sampling in the cross-scan
direction; however, there is a known diurnal variability in its
brightness temperature \citep{Swinyard2010}. Jupiter has a large
angular diameter (48\ensuremath{^{\scriptstyle\prime\prime}}) relative to the HFI beam size, and
Saturn and Jupiter are so bright that the HFI detectors are
driven significantly nonlinear (see Table~\ref{PlanckIPF1.5:tab:nonlinearity}).
Nonetheless, we find that the time response is consistent to 0.1--0.5\%
when recovered from each of the planets individually as well as from
all planets simultaneously.
A further cross-check is done by stacking planet scans to build
a superresolved planet timeline. Time response parameters are fit to
the superresolved planet using the assumption of a near-Gaussian beam
profile, and are consistent with the first approach.
The in-flight time response differs from the ground-based
time response by at worst 1.5\% between 1\,Hz and 40\,Hz. We do not
include this difference in the final error budget, because it is
likely that the time response has changed due to differences in
background conditions.
\subsubsection{Low Frequency Excess Response}
The HFI bolometers show low frequency excess response (LFER)
\citep{lamarre2010}. Though the planets are bright, the short impulse
they provide is close to a delta function, and its energy is spread
evenly across nearly all harmonic components. In combination with
low frequency noise, the measurements are not sensitive to frequencies below
$\sim0.5$\,Hz; so with planet observations alone we cannot constrain
an excess response at very low frequency. We maintain the ground
measurements as our best estimate of the \hbox{LFER}. In the
ground-based measurements, the bias-step and carbon-fiber
determinations differ by at most 1.5\% at low frequency, so we assign
a systematic error of 1.5\% for frequencies below 0.5\,Hz.
For future data releases, we will use the difference of sky signal
between surveys to constrain the \hbox{LFER}.
\subsubsection{Summary of Errors in the Time Response}
As noted above, the data represent the combined effect of the time
response and the optical beam. The time response, however, is not
degenerate with a Gaussian parameterization of the beam; the true
beams deviate from a Gaussian shape at the several percent level near
the main lobe, while time response effects tend to give the beam an
extended tail following the planet in the scan direction. The
Gaussian assumption could slightly bias the recovered time response;
however, any residual bias is captured in the measurement of the
post-deconvolution scanning beam \citep{planck2011-1.7}.
Because of the high signal-to-noise ratio of the planet data,
statistical errors in the fit are small, so we assess the systematic
errors in the resulting time response by checking the consistency of
various methods of recovering the time response. We fit to different
combinations of planet data: Mars, Jupiter, and Saturn data separately
and all of the data simultaneously to check for systematics resulting
from various planets. Additionally we compare the planet-fitted time
response with ground-based data and with the impulse response from
cosmic ray glitches.
Our final error budget is as follows:
\begin{itemize}
\item Low frequency ($f<0.5$\,Hz): the errors are dominated by the
possibility of a low frequency excess response below 0.5\,Hz at a
level of 1.5\%.
\item Middle frequency ($0.5\,{\rm Hz}<f<50$\,Hz): We set an error bar
between 0.1\% and 0.5\% depending on the channel. This error bar is
set by the consistency in results from different sets of planet
data.
\item High frequency ($f>50$\,Hz): Our empirical model of the
  electronics in the TF10 model does not describe the system very well
  at these frequencies, as shown by some disagreement between the
  glitches and the TF10 impulse response. However, for this data
  release, the low-pass filter applied to the data and the beam cutoff
  reduce the importance of this frequency band.
\end{itemize}
The {\it Planck\/}\ scan strategy is such that the same region of the sky
is observed scanning in nearly opposite directions six months apart. An
error in the time response is highlighted in the difference of maps
obtained from the first six months and the second six months of the
survey. This difference map shows some level of contamination, in
particular near the Galactic plane, where the signal is higher. The
same level of contamination is observed in simulations in which the
data are generated with a transfer function, and analysed with a
different one, in order to mimic the uncertainties described
above. With this technique, we validate the error budget.
\subsection{Optical Beams}
The {\em optical beam} is defined (Sect.~4.1.1) as the instantaneous
directional response to a point source. For HFI, the optical beams
for each channel are determined by the telescope, the horn antennas in
the focal plane and, for the polarized channels, by the orientations
of their respective polarization sensitive bolometers (PSBs)
\citep{Maffei2010}. Model calculations of the beams are essential,
since it was possible to measure only a limited number of beams in the
telescope far-field before launch. The 545 and 857\,GHz channels,
which employ multimoded corrugated horns and waveguides, were not
included in this campaign. (The optical beam is related to, but is
not the same as the \emph{scanning beam} defined in
\cite{planck2011-1.7} and used for data analysis purposes.)
\cite{tauber2010b} reported the best pre-launch expectations for the
optical beams, obtained using physical optics calculations with
CORRUG\footnote{SMT Consultancies
Ltd. \url{www.smtconsultancies.co.uk}} and GRASP\footnote{TICRA,
\url{www.ticra.com}}. Table~\ref{PlanckIPF1.5:tab:optical_beam_parameters}
compares the calculated and measured (Sect.~\ref{PlanckIPF1.5:SSS:TF10_FlightData})
beams for the single-moded channels (up to 353\,GHz).
\begin{table*}
\caption{Comparison of pre-launch calculations and measured parameters
for the HFI optical beams (band averages). Standard deviations
$\sigma$ are computed as the dispersion between the Saturn, Jupiter,
and Mars data for each given channel.}
\label{PlanckIPF1.5:tab:optical_beam_parameters}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
Band & Expected & Mars & Mars & Expected & Mars & Mars \\
& FWHM & FWHM & $\sigma_{\rm FWHM}$ & ellipticity & ellipticity
& $\sigma_{\rm ellip}$ \\
& [\ensuremath{^{\scriptstyle\prime}}] & [\ensuremath{^{\scriptstyle\prime}}] & [\ensuremath{^{\scriptstyle\prime}}] & & & \\
[0.5ex]
\hline
100P & 9.58 & 9.37 & 0.06 & 1.17 & 1.18 & 0.006\\
143P & 6.93 & 6.97 & 0.10 & 1.06 & 1.02 & 0.004\\
143 & 7.11 & 7.24 & 0.10 & 1.03 & 1.04 & 0.005\\
217P & 4.63 & 4.70 & 0.06 & 1.12 & 1.13 & 0.006\\
217 & 4.62 & 4.63 & 0.06 & 1.10 & 1.15 & 0.010\\
353P & 4.52 & 4.41 & 0.06 & 1.08 & 1.07 & 0.009\\
353 & 4.59 & 4.48 & 0.04 & 1.23 & 1.14 & 0.007\\
545 & 4.09 & 3.80 & -- & 1.03 & 1.25 & -- \\
857 & 3.93 & 3.67 & -- & 1.04 & 1.03 & -- \\
[1ex]
\hline
\end{tabular}
\end{table*}
\begin{figure}
\rotatebox{180}{%
\includegraphics[width=.95\columnwidth]{HFI_IFP_fig11}}
\caption{The distribution of the HFI beams on the sky relative to the
  telescope boresight, as viewed from infinity. Shown are contours of
  the Gauss-Hermite decomposition of the Mars data at the 1\%, 10\%, and
  50\% power levels from the peak. For the photometers containing a pair
  of PSBs, the average beam of the two PSBs is shown.}
\end{figure}
For these channels the pre-launch calculations of FWHM and ellipticity
and measured Mars values agree to within a few percent. These
differences are contained within $2.7\sigma$ of the data errors. The
main source of discrepancy could be a slight misalignment of the
pre-launch telescope model with respect to the actual in-flight
telescope geometry, which is currently being investigated
\citep{Jensen2010}.
\begin{table}[!t]
\caption{Geometric mean of the asymmetric Gaussian FWHM.}
\label{PlanckIPF1.5:tab:SummaryBeams}
\centering
\begin{tabular}{l c c c }
\hline\hline
Bolometer & Beam & Spectral band & Spectral band \\
 & \ \ FWHM\ \ & cut-on & cut-off \\
Name & [\ensuremath{^{\scriptstyle\prime}}] & [GHz] & [GHz] \\
\hline
100-1a & 9.46 & 84.9 & 113.87 \\
100-1b & 9.60 & 87.0 & 115.27 \\
100-2a & 9.41 & 86.5 & 116.28 \\
100-2b & 9.43 & 84.4 & 115.42 \\
100-3a & 9.42 & 84.4 & 116.77 \\
100-3b & 9.47 & 84.4 & 116.77 \\
100-4a & 9.43 & 84.9 & 117.79 \\
100-4b & 9.45 & 84.9 & 117.79 \\
143-1a & 6.91 & 120.8 & 161.77 \\
143-1b & 6.99 & 120.3 & 162.78 \\
143-2a & 6.78 & 119.8 & 162.26 \\
143-2b & 6.80 & 119.3 & 163.28 \\
143-3a & 6.91 & 120.3 & 158.73 \\
143-3b & 6.86 & 120.3 & 160.75 \\
143-4a & 7.01 & 118.8 & 167.83 \\
143-4b & 7.01 & 119.3 & 161.26 \\
143-5 & 7.45 & 120.3 & 166.31 \\
143-6 & 7.08 & 120.3 & 165.81 \\
143-7 & 7.18 & 120.8 & 167.83 \\
143-8 & 7.20 & 120.8 & 165.3 \\
217-5a & 4.73 & 184.0 & 249.72 \\
217-5b & 4.75 & 183.9 & 249.12 \\
217-6a & 4.66 & 182.5 & 253.26 \\
217-6b & 4.64 & 189.6 & 252.76 \\
217-7a & 4.63 & 188.6 & 253.77 \\
217-7b & 4.68 & 189.6 & 250.74 \\
217-8a & 4.69 & 182.5 & 253.26 \\
217-8b & 4.73 & 182.0 & 252.76 \\
217-1 & 4.68 & 189.6 & 249.72 \\
217-2 & 4.61 & 189.1 & 253.26 \\
217-3 & 4.59 & 191.1 & 252.76 \\
217-4 & 4.61 & 193.1 & 252.76 \\
353-3a & 4.47 & 310.9 & 403.91 \\
353-3b & 4.46 & 310.4 & 405.93 \\
353-4a & 4.40 & 323.5 & 400.88 \\
353-4b & 4.39 & 313.9 & 406.94 \\
353-5a & 4.41 & 302.3 & 405.43 \\
353-5b & 4.42 & 299.8 & 405.93 \\
353-6a & 4.47 & 300.3 & 406.94 \\
353-6b & 4.45 & 314.4 & 397.84 \\
353-1 & 4.57 & 310.4 & 401.38 \\
353-2 & 4.46 & 312.9 & 407.45 \\
353-7 & 4.44 & 326.1 & 404.4 \\
353-8 & 4.53 & 318.5 & 405.92 \\
545-1 & 3.94 & 466.1 & 638.93 \\
545-2 & 3.63 & 464.5 & 633.87 \\
545-3 & 3.79 & 467.6 & 633.87 \\
545-4 & 4.17 & 479.2 & 635.89 \\
857-1 & 3.73 & 748.1 & 986.59 \\
857-2 & 3.66 & 736.5 & 982.65 \\
857-3 & 3.76 & 747.1 & 984.21 \\
857-4 & 3.67 & 744.1 & 970.02 \\
\hline
\end{tabular}
\end{table}
Table~\ref{PlanckIPF1.5:tab:SummaryBeams} reports our best knowledge of the FWHM
of the optical beams for each channel. We stress that this table does
not provide parameters of the scanning beam of the processed data,
which accounts for the additional effects of the instrument time
response and of the time domain filtering in the data processing
\citep{planck2011-1.7}.
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth,keepaspectratio,trim=0 0 0 70,clip]%
{HFI_IFP_fig12}
\caption{The ``dimpling effect'' as seen at 545\,GHz (left panel) and
  857\,GHz (right panel). The grid spacing is 10\ensuremath{^{\scriptstyle\prime}}. The color
  scale is in dB, normalized to the peak signal of Jupiter.}
\label{PlanckIPF1.5:fig:dimpling}
\end{center}
\end{figure}
As reported in \citet{Maffei2010}, the 545\,GHz and 857\,GHz channels
are multimoded (more than one electromagnetic mode propagating through
the horn antennas) and their optical beams are markedly
non-Gaussian. The understanding of these channels through simulations
has progressed since {\it Planck\/}\ was launched, especially in the
characterization of their modal content \citep{Murphy2010}.
In Table \ref{PlanckIPF1.5:tab:optical_beam_parameters} we compare pre-launch
calculations of the beams with the beams measured with Mars.
Differences in FWHM are less than 7\%. We stress that this
discrepancy does not impact the scientific products of the {\it Planck\/}\
mission since the scanning beams are the ones to be used for data
analysis purposes. From an instrumental point of view, the in-flight
measurements must obviously be considered as the reference for the
performance of these channels.
The development of the HFI multimoded channels necessitated the novel
extension of previously existing modelling techniques for the analysis
of the corrugated horn antennas and waveguides, as well as for the
propagation of partially coherent fields (modes) through the telescope
onto the sky \citep{Murphy2001}. Extensive pre-launch measurement
campaigns were conducted for all the HFI horn antenna/filter
assemblies \citep{Ade2010}. The HFI multimoded channels are suitable
for the scientific goals of {\it Planck\/}. Nevertheless, for future
instruments, more research can be envisaged in this field. The
characterization of the modal filtering in the horn-waveguide assembly
and the understanding of the coupling of the waveguide modes to the
detector need further theoretical and experimental study.
The similarity of the pre-launch expectations to our current knowledge
of the HFI focal plane (beams and their positions on the sky) tells us
that the overall structural integrity of the focal plane has been
preserved after launch. Furthermore, the optical beams as measured on
Mars are shown in Fig.~\ref{PlanckIPF1.5:fig:focal_plane_layout} and can be
compared with the equivalent representations of the focal plane layout
based on calculations in earlier papers
\citep{Maffei2010,tauber2010b}. A detailed account of the full focal
plane reconstruction can be found in \cite{planck2011-1.7}.
There is a ``dimpling'' of the reflector surfaces from the irregular
print-through of the honeycomb support structures on the reflector
surfaces themselves \citep{tauber2010b}. GRASP calculations predict
that this will generate a series of rings of narrow bright grating
lobes around the main lobe. Since the small-scale details of the
dimpling structure of the {\it Planck\/}\ reflectors are irregular, these
grating lobes tend to merge with the overall power scattered by the
reflector surfaces (Ruze scattering; \citealt{Ruze66}).
Fig.~\ref{PlanckIPF1.5:fig:dimpling} shows a HEALPIX
\citep{gorski2005} map of the first survey observation of Jupiter
minus the second survey observation of the same sky region to remove
the sky background. We see the first ring of grating lobes as
expected in the map from all 545 and 857\,GHz channels, where the
signal-to-noise ratio on the planets is highest. The inner
15\ensuremath{^{\scriptstyle\prime}}\ of the beam is saturated and does not appear in the map. At
857\,GHz, the discrete grating lobes appear at levels below $-35$\,dB
with respect to the peak ($\sim 30$\,dB), and represent a negligible
fraction of the total beam throughput. The shoulder of the beam,
extending radially to $\sim 15^{\scriptstyle\prime}$, represents a larger contribution
to the throughput, ranging from 0.5\% to a few percent for the CMB and
sub-mm channels, respectively.
\section{Noise properties}
\label{PlanckIPF1.5:Sect5}
The {\it Planck\/}\ HFI is the first example of space-based bolometers,
continuously cooled to 100\,mK for several years. Although the
detectors were thoroughly tested on the ground
\citep{lamarre2010,pajot2010}, it remained to be seen how they would
behave in the L2 space environment. We describe here the noise
properties of the HFI in the first year of operation, focusing on the
differences between space and ground performance.
This section deals with the Gaussian part of the noise.
Sect.~\ref{PlanckIPF1.5:Sect6} describes the systematic effects that
have been analyzed in the data so far.
An example of raw time ordered information (TOI) is shown in
Fig.~\ref{PlanckIPF1.5:fig:TOIexample}. The TOI is dominated by the
signal from the CMB dipole, Galactic emission, point sources, and
glitches. Therefore, the noise properties cannot be directly deduced
from the \hbox{TOI}. We first describe the general method used to
evaluate the noise, then we give general statements on the noise
properties.
\begin{figure*}
\includegraphics[angle=180,width=1\textwidth]{HFI_IFP_fig13}
\caption{Examples of raw (unprocessed) TOI for one bolometer at each
  of six HFI frequencies and one dark bolometer. Slightly more than
  two scan circles are shown. The TOI is dominated by the CMB dipole,
  the Galactic dust emission, point sources, and glitches. The
  relative contribution of glitches is over-represented in these plots
  because the plotted line thickness exceeds the real glitch duration.}
\label{PlanckIPF1.5:fig:TOIexample}
\end{figure*}
\subsection{Noise estimation}
The Level-2 \textit{detnoise} pipeline \citep{planck2011-1.7} is used
to determine a noise power spectrum, from which one extracts the noise
equivalent power (NEP) of the detectors (see \cite{planck2011-1.7} for
a full description). The pipeline uses redundancies in the
observations to determine an estimate of the sky signal, which is then
subtracted from the full TOI to produce a pure noise timeline. The
signal estimates are the integration of typically 40 circles of data
at a constant spin axis pointing. The average signal, binned in spin
phase, provides an accurate estimate of the signal. This signal as a
function of spin phase is then subtracted from the \hbox{TOI}. The
residual is an estimate of the instantaneous noise. Power spectra of
this residual timeline are then obtained for each pointing period (see
Fig.~\ref{PlanckIPF1.5:fig:NEP}) and fit for the white noise level,
i.e., the NEP, in the spectral region between 0.6 and 2.5\,Hz. The
lower limit of 0.6\,Hz is high enough that the low frequency excess
noise can be neglected, and the upper limit is small enough to keep the
time response close to its value at the low frequency (16\,mHz) at which
the instrument is calibrated.
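The core of this estimate can be sketched in a few lines of Python;
the uniform phase binning, the bin count, and the Welch periodogram
are simplifications of the actual \textit{detnoise} pipeline.
\begin{verbatim}
import numpy as np
from scipy.signal import welch

def ring_nep(toi, phase, f_samp, nbins=10000):
    """NEP of one pointing period: bin the TOI in spin phase to
    estimate the sky, subtract it, and average the residual PSD
    in the 0.6--2.5 Hz band. toi in W, phase in [0, 1)."""
    bins = (phase * nbins).astype(int) % nbins
    sky = np.bincount(bins, weights=toi, minlength=nbins)
    hits = np.bincount(bins, minlength=nbins)
    sky /= np.maximum(hits, 1)
    resid = toi - sky[bins]               # noise estimate
    f, psd = welch(resid, fs=f_samp, nperseg=2**14)
    band = (f > 0.6) & (f < 2.5)
    return np.sqrt(psd[band].mean())      # NEP in W/sqrt(Hz)
\end{verbatim}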
The noise is stable at a level better than 10\% in the majority of
detectors. Exceptions are: (1)~a few rings with unusual events that
contaminate the measurement, e.g., poorly corrected/flagged glitches
or passage over very strong sources such as the Galactic centre,
especially at 857\,GHz; (2)~a weak trend, smaller than 1\% in
amplitude, that correlates with the duration of the pointing period
(an expected bias due to the ring average signal removal);
(3)~bolometers affected by random telegraph signals (RTS; see
Sect.~\ref{PlanckIPF1.5:Sect6}); and (4)~some uncorrelated jumps in the noise levels for
about ten bolometers at the 30\% level for isolated periods of a few
days. The overall result is that a very clear \textit{baseline} value
can be identified and can be used to determine the NEP of each
bolometer. This is then converted to NE$\Delta$T with the help of the
flux calibration. The NE$\Delta$Ts thus measured are given in
Table~\ref{PlanckIPF1.5:tab:noiseJML}. The quoted uncertainties are
derived from the rms of the NEPs in a band around the baseline.
\begin{figure*}
\includegraphics[width=1\textwidth]{HFI_IFP_fig14}
\caption{Typical power spectrum amplitude of bolometers 143-5 and
  545-2. For the upper panels, this is the power spectral density of
  valid samples, after an average ring (the sky signal) has been
  subtracted from the \hbox{TOI}. Stacking of the result for 200
  rings is shown in the lower panel. Here, the instrument time
  response is not deconvolved from the data.}
\label{PlanckIPF1.5:fig:NEP}
\end{figure*}
\subsection{The noise components}
The detector noise is described by the combination of several
components:
\begin{itemize}
\item Photon and bolometer noise, which appear as white noise filtered
by the time response of the bolometer, the readout electronics, and
the TOI processing.
\item Electronics and Johnson noise, which produce noise that is
nearly white across the frequency band, but with a sharp decrease at
the high frequency end due to the on-board data handling and the TOI
filtering.
\item The 4\,K lines (Sect.~\ref{PlanckIPF1.5:Sect6}), appearing as
  residuals in the spectra.
\item The energy deposited by cosmic rays on the bolometers, which
  appears as ``glitches'', i.e., positive peaks in the signal, which are
  removed by the TOI processing (Sect.~\ref{PlanckIPF1.5:Sect6}, and
  \cite{planck2011-1.7}). Residuals from glitches appear in the noise
  spectrum as a bump between 0.1 and 1\,Hz.
\item Low frequency excess (LFE) noise, which is present below about
100\,mHz.
\end{itemize}
The last three sources of noise are detailed in
Section~\ref{PlanckIPF1.5:Sect6}.
There is additional noise (of the order of 0.5\% or less) due to the
on-board quantization of the data before transmission. In general,
the noise level, as measured by the NEP, is between 10 and
20\,aW\,Hz$^{-1/2}$ for the 100 to 353\,GHz channels, and between 20
and 40\,aW\,Hz$^{-1/2}$ for the 545 and 857\,GHz channels. This is in
line with the ground-based expectations for the lower estimate of the
background load, with a detector-to-detector variability of less than
$20\,\%$ (see Sect.~\ref{PlanckIPF1.5:Sect8}).
Due to the AC bias modulation scheme, the $1/f$ noise from the
electronics is aliased to near the modulation frequency, where it is
heavily filtered out. The benefit of this scheme is visible in the
noise power spectrum of the 10\,M$\Omega$ resistor, which shows a flat
spectrum at the Johnson value down to 1\,mHz, a tribute to the
stability of the electronics chain.
At the present time, we assume that the LFE noise, not observed in
ground-based measurements, is mostly due to the 100\,mK bolometer
plate fluctuations. While drifts in the 100\,mK stage that are
correlated between bolometers are removed, there are likely local
temperature fluctuations due to particle energy deposited close to
each detector.
\section{First assessment of systematic effects}
\label{PlanckIPF1.5:Sect6}
\subsection{4\,K lines}
The fundamental frequency of the $^4$He-JT\ cooler (40.083373\,Hz) is
phase-locked to the frequency of the data acquisition (180.37518\,Hz)
in a ratio of 2 to 9. EMI/EMC impacts the TOI only as very narrow
lines. Unfortunately, in flight, unlike in ground-based measurements,
these lines are not stable. The 4\,K line variations are illustrated
in Fig.~\ref{PlanckIPF1.5:fig:A0_4K_all}. The variability of the lines is in part
due to temperature fluctuations in the service module of the {\it Planck\/}\
spacecraft. Indeed, some of the variability was related to the power
cycling of the data transponder, which, for stability reasons, has been
kept on continuously since 25 January 2010 (OD 258,
\citealt{planck2011-1.3}; see Fig.~\ref{PlanckIPF1.5:fig:A0_4K_OnOff}).
\begin{figure}
\includegraphics[width=1\columnwidth]{HFI_IFP_fig15}
\caption{Typical trend of the cosine and sine coefficient variation of
  the four main 4\,K lines measured in the TOI processing on the test
  resistance at 10, 30, 50, and 70\,Hz.}
\label{PlanckIPF1.5:fig:A0_4K_all}
\end{figure}
\begin{figure*}
\includegraphics[angle=0,width=0.5\textwidth]{HFI_IFP_fig16a}
\includegraphics[angle=0,width=0.5\textwidth]{HFI_IFP_fig16b}
\caption{Zoom on the amplitude of the four main 4\,K line systematics
  measured in the TOI processing on the test resistance. (\emph{Left})
  Period when the transponder was switched on once per day for the
  3-hour Daily Tele-Communication Period (DTCP). (\emph{Right})
  Period when the transponder was kept on at all times, later in the
  mission.}
\label{PlanckIPF1.5:fig:A0_4K_OnOff}
\end{figure*}
\subsection{Abnormal noise in the electronics}
Of the 54 bolometers on HFI, three show a significant RTS, also known
as ``popcorn noise.'' These are 143-8, 545-3, and
857-4. Fig.~\ref{PlanckIPF1.5:fig:RTS} illustrates their behaviour. The noise
timeline clearly exhibits a two-level system. The three RTS bolometers
in flight are the ones in which RTS occurred most frequently in ground
measurements. However, in flight: 1)~the level difference is well
above the noise (at least ten times the rms); 2)~the two-level system
can become a three-level system or even larger; and 3)~the RTS is
intermittent, and for long stretches of time it can be unnoticeable,
especially for the 857-4 bolometer.
\begin{figure}
\includegraphics[angle=180,width=0.5\textwidth]{HFI_IFP_fig17}
\caption{Random telegraphic signal in the noise timeline (fW) of
  bolometer 545-3, plotted vs.~time. The RTS is here a two-level signal
  with random flips between the levels. The full sampling is shown as
  black dots. A smoothed version (over 41 samples) is plotted as a red
  line.}
\label{PlanckIPF1.5:fig:RTS}
\end{figure}
In an unrelated fashion, we see uncorrelated jumps in the noise
TOI of many bolometers, at a rate of just a few per year.
\subsection{Cosmic rays and their effects}
Energy is deposited by cosmic rays in various parts of the HFI
instrument. We observe these events in the TOIs of all detectors as a
signal peak characterised by a very short rise time (less than
1.5\,ms) and an exponential decay. These events are called glitches.
The other effect of the cosmic rays is a thermal input to the
bolometer plate, which induces low frequency noise on the
bolometers. Thermal effects are described in \cite{planck2011-1.7}
and their very long term consequences detailed in
Sect.~\ref{PlanckIPF1.5:Sect7}.
\subsection{Cosmic ray-induced glitch spectrum seen by Planck}
Cosmic rays consist of two main components at the L2 location of
{\it Planck\/}: the Solar component and the Galactic component. The Solar
component is at low energy (a few keV) except during flares, when
energies can reach \hbox{GeV}. HFI is immune to the low energy
component and no major flares have yet been recorded. The Galactic
component \citep{Bess2007} with a maximum between roughly 300\,MeV and
1\,GeV, is modulated by Solar activity. The {\it Planck\/}\ mission began
during the weakest solar activity for a century
\citep{McDonald2010}. Hence, the Galactic cosmic ray component is
expected to be at its highest level.
The glitch rate evolution (Fig.~\ref{PlanckIPF1.5:fig:SremGlRate})
closely follows the proton monitoring by the space radiation
environment monitor (SREM), a piggy-back experiment mounted on the
{\it Planck\/}\ spacecraft. This figure shows that cosmic rays are the main
source of HFI glitches. The glitch rate has tended to decrease since
January 2010, owing to the slow increase of Solar activity.
\begin{figure}
\includegraphics[angle=90,width=.5\textwidth,totalheight=.15\textheight]{HFI_IFP_fig18}
\caption{SREM hit count and HFI bolometer average glitch rate
evolution. SREM TC1 hit counts measure the protons with
a deposited energy larger than 0.085\,MeV.}
\label{PlanckIPF1.5:fig:SremGlRate}
\end{figure}
This glitch rate can be understood as the sum of two interaction
modes, depending on whether the cosmic ray has a direct or indirect
interaction with the bolometer. High energy cosmic rays can also
interact with the bolometer plate and induce thermal effects and
correlated glitches on the bolometers. These
are dealt with in Sect.~\ref{PlanckIPF1.5:Sect7}. The glitch
characteristics also depend on the location of the energy deposit
within the bolometer: the thermistor, the absorbing grid, or the
bolometer housing.
\subsubsection{Direct interaction}
Cosmic ray particles can deposit energy directly on the thermistor or
the absorbing grid. This is observed at well-defined deposited
energies corresponding to the thickness of the element. Typically
thermistor hits are about 20\,keV, whereas grid hits are about
2\,keV. These events occur at a rate of a few per minute.
\subsubsection{Indirect interaction}
Cosmic ray particles can also deposit energy indirectly. All particles
crossing some matter produce a shower of secondary electrons, through
ionization, that are mostly absorbed in the matter nearby. However,
interactions occurring within microns of the internal surface of
the bolometer box produce a shower of free secondary particles. A
fraction of these particles is absorbed by the thermistor and the grid
of the bolometer. This explains the large coincidence rate of glitches
between PSB bolometers \emph{a} and \emph{b} sharing the same mounting
structure. The energy of those glitches follows a power law
distribution spanning the whole range, from the detection threshold to
the saturation level. This spectrum is expected for the delta and
secondary electrons produced via the ionization process. The total
rate of these events is typically one per second, and thus dominates
the total counts shown in Fig.~\ref{PlanckIPF1.5:fig:glrate}.
\begin{figure*}
\includegraphics[angle=180,width=1\textwidth,totalheight=.4\textheight]{HFI_IFP_fig19}
\caption{Glitch rate of all HFI bolometers, averaged over the first
  sky survey. The asymmetry between PSB bolometers sharing the same
  horn is an effect of the detection threshold and of asymmetric time
  constant properties between PSB \emph{a} and \emph{b}.}
\label{PlanckIPF1.5:fig:glrate}
\end{figure*}
A more detailed description of the effect of cosmic rays on HFI
detectors is postponed to a dedicated paper, and glitch handling in
the data processing is described by \cite{planck2011-1.7}.
\section{Instrument stability}
\label{PlanckIPF1.5:Sect7}
The radiative power reaching each bolometer is the co-addition of the
flux from the sky and of the thermal emission of all optical elements
"seen" from the detector: filters, horns, telescope reflectors,
shields, and mechanical parts visible in the side-lobes. In addition,
fluctuations of the heat sink temperature (the bolometer plate) appear
like an optical signal. Any change in any of the parameters
(temperature, emissivity, geometrical coefficient) driving these
sources may be visible in the bolometer signal as a "DC level",
i.e. a stable or very slowly varying (days) component in the signal.
Monitoring the ``DC level'' supposes that one is able to separate the
varying sky signal from the stable sources. This is done in the
map-making process \citep{planck2011-1.7} by using the redundancy of
the scanning strategy. Fig.~\ref{PlanckIPF1.5:fig:bolo_dc_levels}
shows, for the 217\,GHz bolometers, the history of the DC level during
nearly one year. All follow a pattern similar to that of the cosmic
ray activity measured by the SREM (see Sect.~\ref{PlanckIPF1.5:Sect6}
and Fig.~\ref{PlanckIPF1.5:fig:SremGlRate}), which indicates that
cosmic rays are at the origin of the measured signal. One can check,
on families of bolometers with non-uniform heat leaks $G$, that this
signal is directly related to temperature variations of the bolometer
plate and not to external optical sources. In fact, we see here
residual fluctuations that the PID regulation of the bolometer plate
fails to compensate, because its efficiency is far from one. The
similarity of Fig.~\ref{PlanckIPF1.5:fig:bolo_dc_levels} and
Fig.~\ref{PlanckIPF1.5:fig:SremGlRate} also shows that the effect of
gain variations and of DC level drifts of the readout electronics is
small with respect to other sources of signal drifts.
It should be noted that the ``DC level'' variation of the 217\,GHz
bolometers is equivalent to an optical power of a couple of
femtowatts, while the total background power on these bolometers is
about 1\,p\hbox{W}. This fluctuation is mainly due to the
energy deposited by cosmic rays on the bolometer plate, which means
that the ``equivalent power'' of the other sources of temperature
fluctuation and of optical background fluctuations is no more than a
fraction of a femtowatt, i.e., less than one part per thousand of the
background.
The change of gain induced by the DC level variations can be estimated
from the non-linearity measurements (see
Sect.~\ref{PlanckIPF1.5:SSS:Linearity}). In the case considered in
Fig.~\ref{PlanckIPF1.5:fig:bolo_dc_levels}, the relative gain change
is of the order of a few $10^{-4}$.
During the CPV phase, the readout electronics was ``balanced'', i.e.,
the offset parameter was tuned to bring the signal near zero. During
the first year of operation, and for all bolometers, deviations from
this point remained small with respect to the total range of acceptable
values. Consequently, no re-tuning of the readout electronics was
needed during this period, and this is expected to remain the case up
to the end of the mission.
\begin{figure}
\includegraphics[width=1\columnwidth]{HFI_IFP_fig20}
\caption{The drift in the DC level of the 217\,GHz SWB bolometers, in
  femtowatts of equivalent power in the detector, for the first year
  of operation.}
\label{PlanckIPF1.5:fig:bolo_dc_levels}
\end{figure}
\section{Main performance parameters}
\label{PlanckIPF1.5:Sect8}
The primary difference between the in-flight and pre-launch
performance of the HFI derives from the relatively high rate of cosmic
rays in the L2 environment. At the energies of interest, the low
level of solar activity results in an elevated cosmic ray flux. The
glitches that result from cosmic ray events must be identified and
removed from the time ordered information prior to processing the data
into maps. The TOI processing also removes a significant fraction of
the common mode component that appears in the bolometer TOIs at low
frequencies. A residual low-frequency component is removed during the
map-making process \citep{planck2011-1.7}.
Table \ref{PlanckIPF1.5:tab:noiseJML} summarizes the noise properties of the
processed TOI \citep{planck2011-1.7}, by the following parameters:
\begin{itemize}
\item A white noise model. NEP$_1$ is the average of the Noise
Equivalent Power spectrum in the 0.6--2.5\,Hz range.
\item A model with a white noise level NEP$_2$ plus a low frequency
  component: ${\rm NEP} = {\rm NEP}_2\,[1 + (f_{\rm knee}/f)^\alpha]$
  (a fitting sketch follows this list).
\item The sensitivity NE$\Delta$T$_{\rm CMB}$ to temperature differences
  of the CMB. Note that this quantity is not particularly relevant for
  the channels at 545 and 857\,GHz, for which it takes large values that
  are highly dependent on the details of the spectral transmission
  for each detector.
\item The sensitivity NE$\Delta$T$_{\rm RJ}$ to temperature differences
  for sources observed in the Rayleigh-Jeans regime.
\end{itemize}
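As referenced in the list above, the two-component model can be fit
as in this minimal Python sketch; the synthetic spectrum and starting
values are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def nep_model(f, nep2, f_knee, alpha):
    """NEP(f) = NEP_2 [1 + (f_knee / f)^alpha]."""
    return nep2 * (1.0 + (f_knee / f) ** alpha)

# toy spectrum resembling a CMB channel (values illustrative)
f = np.logspace(-2, 1, 200)
rng = np.random.default_rng(2)
nep = nep_model(f, 1.2e-17, 0.16, 1.0) \
      * (1.0 + 0.05 * rng.standard_normal(f.size))

popt, _ = curve_fit(nep_model, f, nep, p0=(1e-17, 0.1, 1.0))
print(popt)   # recovered [NEP_2, f_knee, alpha]
\end{verbatim}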
Figure \ref{PlanckIPF1.5:fig:NET} compares the goal, pre-launch and in-flight
NE$\Delta$Ts. The average in-flight NE$\Delta$Ts are 27\% higher
than the pre-launch NE$\Delta$Ts. While the pre-launch and in-flight
values are not directly
comparable due to differences in the processing, these differences
can account for less than half of the observed variation. The
remaining part is attributed to residual contamination from cosmic
rays that are not completely removed in the current TOI
processing.
The sensitivity goals are taken from Table~1 of
\citet{lamarre2010}, and are consistent with Table~1.3 of
\citet{planck2005-bluebook} corrected for the use of PSBs at 100\,GHz.
Note that Fig.~\ref{PlanckIPF1.5:fig:NET} supersedes Figure~11 of
\citet{lamarre2010} in which requirements and goals were improperly
plotted. The in-flight sensitivities estimated from NEP$_1$ exceed the
goals, which are defined by a total noise level equal to twice the
expected contribution of photon noise. The average measured NEP$_1$
is typically 70\% of the initial goal. The improvement in the NEP
over the design goals is primarily the result of having reduced the
photon background through careful design of the optical system.
\begin{figure}
\includegraphics[width=\columnwidth, keepaspectratio]{HFI_IFP_fig21}
\caption{Noise Equivalent Delta Temperature measured on the ground and
  in-flight, with slightly different tools.}
\label{PlanckIPF1.5:fig:NET}
\end{figure}
\begin{table*}[tmb]
\caption{NEP$_1$ is the average of the NEP of the
processed signal in the band 0.6--2.5\,Hz. NEP$_2$ is the white noise
component of the NEP (see text). The NE$\Delta$Ts are derived from
NEP$_2$. A $\dag$ marks the bolometers suffering from RTS.}
\label{PlanckIPF1.5:tab:noiseJML}
\centering
\begin{tabular}{l r c c c c c c}
\hline\hline
\multicolumn{2}{l}{Bolometer} & Noise & \multicolumn{3}{c}{Two component fit} & CMB & RJ \\
& & NEP$_1$ & NEP$_2$ & f$_{\rm knee}$ & \ \ \ \ $\alpha$\ \ \ \ & NE$\Delta$T$_{\rm CMB}$ & NE$\Delta$T$_{\rm RJ}$ \\
Name & ID\# & $10^{-17}\,$W$/\sqrt{\rm Hz}$ & $10^{-17}\,$W$/\sqrt{\rm Hz}$ & \ mHz\ & & $\mu$K$_{\rm CMB}\,\sqrt{\rm s}$ & $\mu$K$_{\rm RJ}\,\sqrt{\rm s}$ \\
\hline
100-1a & 1 & 1.13 & 1.04 & 218 & 0.93 & 78 & 60.7 \\
100-1b & 2 & 1.21 & 1.14 & 166 & 1.02 & 69 & 53.0 \\
100-2a & 3 & 1.22 & 1.16 & 126 & 0.96 & 56 & 42.9 \\
100-2b & 4 & 1.31 & 1.22 & 182 & 0.95 & 58 & 44.7 \\
100-3a & 5 & 1.22 & 1.16 & 117 & 1.01 & 61 & 47.4 \\
100-3b & 6 & 1.09 & 1.01 & 173 & 1.02 & 66 & 51.4 \\
100-4a & 7 & 1.23 & 1.18 & 109 & 0.97 & 59 & 45.2 \\
100-4b & 8 & 1.18 & 1.08 & 212 & 0.95 & 70 & 53.6 \\
143-1a & 9 & 1.35 & 1.31 & 91 & 1.11 & 50 & 30.4 \\
143-1b & 10 & 1.18 & 1.09 & 197 & 1.01 & 51 & 30.6 \\
143-2a & 11 & 1.28 & 1.20 & 161 & 0.97 & 50 & 30.3 \\
143-2b & 12 & 1.30 & 1.27 & 106 & 1.18 & 50 & 30.1 \\
143-3a & 13 & 1.35 & 1.26 & 202 & 1.01 & 53 & 32.2 \\
143-3b & 14 & 1.18 & 1.09 & 190 & 1.02 & 51 & 30.9 \\
143-4a & 15 & 1.27 & 1.18 & 185 & 0.99 & 53 & 31.7 \\
143-4b & 16 & 1.32 & 1.24 & 161 & 1.07 & 59 & 35.5 \\
143-5 & 17 & 1.53 & 1.46 & 138 & 1.10 & 40 & 23.9 \\
143-6 & 18 & 1.37 & 1.25 & 230 & 1.03 & 40 & 24.1 \\
143-7 & 19 & 1.49 & 1.40 & 154 & 1.09 & 40 & 23.8 \\
$^\dag$143-8 & 20 & 2.2 & 1.60 & 1244 & 0.90 & 55 & 33.1 \\
217-5a & 21 & 1.35 & 1.30 & 117 & 1.10 & 82 & 26.4 \\
217-5b & 22 & 1.33 & 1.22 & 219 & 1.06 & 81 & 25.9 \\
217-6a & 23 & 1.30 & 1.25 & 107 & 1.07 & 78 & 25.1 \\
217-6b & 24 & 1.31 & 1.26 & 118 & 1.08 & 79 & 25.2 \\
217-7a & 25 & 1.41 & 1.36 & 98 & 1.07 & 80 & 25.4 \\
217-7b & 26 & 1.25 & 1.17 & 157 & 1.05 & 73 & 23.4 \\
217-8a & 27 & 1.37 & 1.31 & 148 & 1.05 & 80 & 25.5 \\
217-8b & 28 & 1.27 & 1.17 & 206 & 1.03 & 78 & 24.9 \\
217-1 & 29 & 1.59 & 1.49 & 187 & 1.14 & 66 & 20.7 \\
217-2 & 30 & 1.61 & 1.48 & 229 & 1.10 & 69 & 21.7 \\
217-3 & 31 & 1.63 & 1.54 & 165 & 1.12 & 66 & 20.8 \\
217-4 & 32 & 1.62 & 1.53 & 173 & 1.14 & 68 & 21.3 \\
353-3a & 33 & 1.53 & 1.43 & 174 & 0.98 & 305 & 21.9 \\
353-3b & 34 & 1.39 & 1.31 & 162 & 1.06 & 282 & 20.3 \\
353-4a & 35 & 1.34 & 1.28 & 124 & 1.04 & 324 & 22.6 \\
353-4b & 36 & 1.30 & 1.25 & 127 & 1.12 & 313 & 21.8 \\
353-5a & 37 & 1.26 & 1.21 & 121 & 1.05 & 268 & 19.4 \\
353-5b & 38 & 1.33 & 1.27 & 125 & 1.09 & 281 & 20.3 \\
353-6a & 39 & 1.47 & 1.38 & 208 & 1.08 & 429 & 30.7 \\
353-6b & 40 & 1.33 & 1.26 & 179 & 1.20 & 432 & 32.4 \\
353-1 & 41 & 1.59 & 1.52 & 100 & 1.04 & 192 & 13.7 \\
353-2 & 42 & 1.72 & 1.66 & 98 & 1.07 & 189 & 13.4 \\
353-7 & 43 & 1.62 & 1.54 & 155 & 1.18 & 237 & 16.4 \\
353-8 & 44 & 1.67 & 1.59 & 159 & 1.15 & 260 & 17.6 \\
545-1 & 45 & 3.50 & 3.19 & 295 & 1.20 & 1490 & 8.7 \\
545-2 & 46 & 2.93 & 2.66 & 322 & 1.20 & 1293 & 7.9 \\
$^\dag$545-3 & 47 & 4.48 & 3.70 & 431 & 1.23 & 2116 & 12.7 \\
545-4 & 48 & 2.76 & 2.51 & 297 & 1.19 & 1446 & 8.7 \\
857-1 & 49 & 3.59 & 3.31 & 222 & 1.20 & 36566 & 3.4 \\
857-2 & 50 & 4.10 & 3.75 & 265 & 1.15 & 36923 & 3.8 \\
857-3 & 51 & 3.47 & 3.21 & 236 & 1.20 & 37037 & 3.5 \\
$^\dag$857-4 & 52 & 3.64 & 3.00 & 622 & 1.09 & 50180 & 5.4 \\
Dark1 & 53 & 1.17 & 1.14 & 136 & 1.42 & 16496 & -- \\
Dark2 & 54 & 1.39 & 1.35 & 148 & 1.40 & 19462 & -- \\
\hline
\end{tabular}
\end{table*}
\section{Conclusions}
\label{PlanckIPF1.5:Sect9}
We report on the in-flight performance of the High Frequency
Instrument on board the {\it Planck\/}\ satellite. These results are derived
from the data obtained during a dedicated period of diagnostic testing
prior to the initiation of the scientific survey, as well as an
analysis of the survey data that form the basis of the early release
scientific products.
With the exception of a single anomaly in the operation of the $^4$He-JT\
cooler, the HFI has operated nominally since launch. The settings of
the readout electronics determined during pre-launch testing were
found to be very near the optimal value in flight and were applied
without any modification. A random telegraph signal (RTS) is observed in
the same three channels that exhibited this behaviour during the final
pre-launch testing. These channels are currently excluded from the scientific
analysis. The instrument operation has been extremely
stable during the first year of operation, requiring no adjustment of
the readout electronics.
The optical design, and the alignment of the optical assembly, relied
on both theoretical analysis and testing at the subsystem level. The
beams of the 545 and 857\,GHz channels, which employ multimoded
corrugated horns and waveguide, could not be measured on the ground.
The actual beam widths of these channels measured on planets are in
general smaller than the design goals and estimated values. The
optical properties of the single mode channels are in excellent
agreement with the design expectations.
A higher than expected cosmic ray flux, related to the level of Solar
activity, results in a manageable loss of signal and degradation of
thermal stability. Discrete cosmic ray events result in glitches in
the scientific signal that are flagged and removed by an algorithm
making use of the signal redundancy in the timeline. In addition to
these single events, the cosmic ray flux contributes a significant
thermal load on the sub-kelvin stage. Variations in this flux produce
low-frequency temperature fluctuations of the bolometer plate that induce a common
mode component to the scientific signal and the focal plane
thermometry. Although a component correlated with the dark bolometer
outputs is removed during the TOI processing, a residual low frequency
contribution is removed at the map-making stage. With the exception of
the three detectors affected by telegraph noise, the sensitivity
measured above 0.6\,Hz exceeds the design goals of all HFI
channels. After the removal of the residual low frequency noise, the
final sensitivity of the frequency maps exceeds the mission
requirements, and approaches the goals, as described in the companion
paper \cite{planck2011-1.7}.
\begin{acknowledgement}
The Planck HFI instrument (\url{http://hfi.planck.fr/}) was designed and
built by an international consortium of laboratories, universities and
institutes, with important contributions from the industry, under the
leadership of the PI institute, IAS at Orsay, France. It was funded in
particular by CNES, CNRS, NASA, STFC and ASI. The authors extend their
gratitude to the numerous engineers and scientists, who have
contributed to the design, development, construction or evaluation of
the HFI instrument. A description of the Planck Collaboration and a list
of its members, indicating which technical or scientific activities they have
been involved in, can be found at
\url{www.rssd.esa.int/index.php?project=PLANCK\&page=Planck\_Collaboration}.
\end{acknowledgement}
\input{HFIperf4arxiv_tmp.bbl}
\end{document}
\section{Introduction and Related Work}
In this work, we present improved distributed ($\mathsf{LOCAL}$ model) algorithms for the \emph{degree splitting problem}, and also use them to provide simpler and faster deterministic distributed algorithms for the classic and well-studied problem of \emph{edge coloring}.
\paragraph{\boldmath $\mathsf{LOCAL}$ Model.}
In the standard $\mathsf{LOCAL}$ model of distributed computing\cite{linial1987LOCAL, Peleg:2000}, the network is abstracted as an $n$-node undirected graph $G=(V, E)$, and each node is labeled with a unique $O(\log n)$-bit identifier. Communication happens in synchronous rounds of \emph{message passing}, where in each round each node can send a message to each of its neighbors. At the end of the algorithm each node should output its own part of the solution, e.g., the colors of its incident edges in the edge coloring problem. The time complexity of an algorithm is the number of synchronous rounds.
\paragraph{Degree Splitting Problems.}
The \emph{undirected degree splitting} problem seeks a partitioning of the graph edges $E$ into two parts so that the partition looks almost balanced around each node. Concretely, we should color each edge red or blue such that for each node, the difference between its number of red and blue edges is at most some small \emph{discrepancy} value $\kappa$. In other words, we want an assignment $q\colon E\rightarrow \{+1, -1\}$ such that for each node $v \in V$, we have
\[\textstyle\bigl|\sum_{e\in E(v)} q(e)\bigr|\leq \kappa,\]
where $E(v)$ denotes the edges incident on $v$. We want $\kappa$ to be as small as possible.
In the \emph{directed} variant of the {degree splitting} problem, we should orient all the edges such that for each node, the difference between its number of incoming and outgoing edges is at most a small discrepancy value $\kappa$.
\paragraph{Why Should One Care About Distributed Degree Splittings?}
On the one hand, degree splittings are natural tools for solving other problems with a \emph{divide-and-conquer} approach. For instance, consider the well-studied problem of edge coloring, and suppose that we are able to solve degree splitting efficiently with discrepancy $\kappa=O(1)$. We can then compute an edge coloring with ${(2+\varepsilon)\Delta}$ colors, for any constant $\varepsilon>0$; as usual, $\Delta$ is the maximum degree of the input graph $G=(V,E)$. For that, we recursively apply the degree splittings on $G$, each time reapplying it on each of the new colors, for a recursion of height $h=O(\log \varepsilon\Delta)$. This way we partition $G$ in $2^{h}$ edge-disjoint graphs, each with maximum degree at most
\[\Delta'=\frac{\Delta}{2^{h}} + \sum_{i=1}^{h}\frac{\kappa}{2^{i}} \leq \frac{\Delta}{2^{h}} + \kappa = O(1/\varepsilon).\]
We can then edge color each of these graphs with $2\Delta'-1$ colors, using standard algorithms (simultaneously in parallel for all graphs and with a separate color palette for each graph), hence obtaining an overall coloring for $G$ with $2^{h} \cdot (2\Delta'-1) \leq 2\Delta + 2^{h+1}\kappa = (2+\varepsilon)\Delta$ colors, where the last equality holds for the choice $2^{h+1}=\varepsilon\Delta/\kappa$. We explain the details of this relation, and the particular edge coloring algorithm that we obtain using our degree splitting algorithm, later in \Cref{crl:edgeColoring}.
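To make this arithmetic concrete, consider a purely illustrative instantiation: for $\Delta=1024$, $\kappa=4$, and $\varepsilon=1/2$, the choice $2^{h+1}=\varepsilon\Delta/\kappa=128$ gives $h=6$ and $\Delta'\leq 1024/64+4=20$, hence at most $2^{6}\cdot(2\cdot 20-1)=2496\leq 2\cdot 1024+2^{7}\cdot 4=2560=(2+\varepsilon)\Delta$ colors.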
On the other hand, degree splitting problems are interesting also on their own: they seem to be an elementary locally checkable labeling (LCL) problem\cite{naor1993can}, and yet, even on bounded degree graphs, their distributed complexity is highly non-trivial. In fact, they exhibit characteristics that are intrinsically different from those of the classic problems of the area, including maximal independent set, maximal matching, $\Delta+1$ vertex coloring, and $2\Delta-1$ edge coloring. All of these classic problems admit trivial sequential greedy algorithms, and they can also be solved very fast distributedly on bounded degree graphs, in $\Theta(\log^* n)$ rounds\cite{linial1987LOCAL}. In contrast, degree splittings constitute a middle ground in the complexity: even on bounded degree graphs, deterministic degree splitting requires $\Omega(\log n)$ rounds, as shown by Chang et al.~\cite{chang2016exponential}, and randomized degree splitting requires $\Omega(\log\log n)$ rounds, as shown by Brandt et al.~\cite{brandt2016lower}. These two lower bounds were presented for the \emph{sinkless orientation} problem, introduced by Brandt et al.~\cite{brandt2016lower}, which can be viewed as a very special case of directed degree splitting: In sinkless orientation, we should orient the edges so that each node of degree at least $d$, for some large enough constant $d$, has at least one outgoing edge. For this special case, both lower bounds are tight\cite{ghaffari17}.
\paragraph{What is Known?}
First, we discuss the existence of low-discrepancy degree splittings. Any graph admits an undirected degree splitting with discrepancy at most $2$. This is the best possible, as can be seen on a triangle. This low-discrepancy degree splitting can be viewed as a special case of a beautiful area called \emph{discrepancy theory} (see e.g. \cite{chazelle2000discrepancy} for a textbook coverage), which studies coloring the elements of a ground set red/blue so that each of a collection of given subsets has almost the same number of red and blue elements, up to a small additive discrepancy. For instance, by a seminal result of Beck and Fiala from 1981\cite{beck1981integer}, any hypergraph of rank $t$ (each hyperedge has at most $t$ vertices) admits a red/blue edge coloring with per-node discrepancy at most $2t-2$. See \cite{bukh2016improvement, bednarchak1997note} for some slightly stronger bounds, for large $t$. In the case of standard graphs, where $t=2$, the existence proof is straightforward: Add a dummy vertex and connect it to all odd-degree vertices. Then, take an Eulerian tour, and color its edges red and blue in an alternating manner. In directed splitting, a discrepancy of $\kappa=1$ suffices, using the same Eulerian tour approach and orienting the edges along a traversal of this tour.
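For concreteness, the following centralized sketch implements this Eulerian-tour argument; it is meant only as an illustration of the existence proof (it is not the distributed algorithm of this paper) and assumes a connected simple \texttt{networkx} graph.
\begin{verbatim}
import networkx as nx

def red_blue_split(G):
    # Centralized sketch of the Eulerian-tour existence argument:
    # per-node discrepancy at most 2. Assumes G is a connected,
    # simple nx.Graph.
    H = G.copy()
    odd = [v for v in H.nodes if H.degree(v) % 2 == 1]
    dummy = object()                  # fresh auxiliary vertex
    for v in odd:
        H.add_edge(dummy, v)          # now every degree is even
    start = dummy if odd else next(iter(H.nodes))
    coloring = {}
    for i, (u, w) in enumerate(nx.eulerian_circuit(H, source=start)):
        if dummy not in (u, w):       # drop the auxiliary edges
            coloring[frozenset((u, w))] = "red" if i % 2 == 0 else "blue"
    return coloring
\end{verbatim}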
In the algorithmic world, Israeli and Shiloach~\cite{israeli1986improved} were the first to consider degree splittings. They used it to provide an efficient parallel ($\mathsf{PRAM}$ model) algorithm for maximal matching. This, and many other works in the $\mathsf{PRAM}$ model which later used degree splittings (e.g., \cite{karloff1987efficient}) relied on computing Eulerian tours, following the above scheme. Unfortunately, this idea cannot be used efficiently in the distributed setting, as an Eulerian tour is a non-local structure: finding and alternately coloring it needs $\Omega(n)$ rounds on a simple cycle.
Inspired by Israeli and Shiloach's method~\cite{israeli1986improved}, Hanckowiak et al.~\cite{hanckowiak01} were the first to study degree splittings in distributed algorithms. They used it to present the breakthrough result of a $\polylog(n)$-round deterministic distributed maximal matching, which was the first efficient deterministic algorithm for one of the classic problems. However, for that, they ended up having to relax the degree splitting problem in one crucial manner: they allowed a $\delta=1/\polylog n$ fraction of nodes to have arbitrary splits, with no guarantee on their balance. As explained by Czygrinow et al.~\cite{czygrinow2001coloring}, this relaxation ends up being quite harmful for edge coloring; without fixing that issue, it seems that one can get at best an $O(\Delta\log n)$-edge coloring.
Very recently, Ghaffari and Su\cite{ghaffari17} presented solutions for degree splitting without sacrificing any nodes, and used this to obtain the first $\polylog n$ round algorithm for ${(2+o(1))\Delta}$-edge coloring, improving on prior $\polylog(n)$-round algorithms that used more colors: the algorithm of Barenboim and Elkin~\cite{Barenboim:edge-coloring} for ${\Delta\cdot \exp(O(\frac{\log \Delta}{\log\log \Delta}))}$ colors, and the algorithm of Czygrinow et al.~\cite{czygrinow2001coloring} for $O(\Delta\log n)$ colors. The degree splitting algorithm of Ghaffari and Su\cite{ghaffari17} obtains a discrepancy $\kappa=\varepsilon\Delta$ in $O((\Delta^2 \log^5 n)/\varepsilon)$ rounds. Their method is based on iterations of flipping augmenting paths (somewhat similar in style to blocking flows in classic algorithms for the maximum flow problem\cite{dinitz2006dinitz}) but the process of deterministically and distributedly finding enough disjoint augmenting paths is quite complex. Furthermore, that part imposes a crucial limitation on the method: it cannot obtain a discrepancy better than $\Theta(\log n)$. As such, this algorithm does not provide any meaningful solution in graphs with degree $o(\log n)$.
\paragraph{Our Contributions.}
Our main result is a deterministic distributed algorithm for degree splitting that improves on the corresponding result of \cite{ghaffari17}. The new algorithm is (1) simpler, (2) faster, and (3) it gives a splitting with a much lower discrepancy.
\pagebreak
\begin{restatable}{theorem}{thmMainSplitting}\label{thm:mainSplitting}
For every $\varepsilon>0$, there are deterministic $O\big(\varepsilon^{-1}\cdot\log\varepsilon^{-1}\cdot\big(\log\log\varepsilon^{-1}\big)^{1.71}\cdot \log n\big)$-round distributed algorithms for computing directed and undirected degree splittings with the following properties:
\begin{enumerate}[label=(\alph*)]
\item For directed degree splitting, the discrepancy at each node $v$ of degree $d(v)$ is at most $\varepsilon \cdot d(v) + 1$ if $d(v)$ is odd and at most $\varepsilon\cdot d(v) + 2$ if $d(v)$ is even.
\item For undirected degree splitting, the discrepancy at each node $v$ of degree $d(v)$ is at most $\varepsilon\cdot d(v) + 4$.
\end{enumerate}
\end{restatable}
An important corollary of this splitting result is a faster and simpler algorithm for $(2+o(1))\Delta$-edge coloring, which improves on the corresponding result from \cite{ghaffari17}. The related proof is deferred to \Cref{sec:edgeColoring}.
\begin{restatable}{corollary}{crlEdgeColoring}\label{crl:edgeColoring}
For every $\varepsilon>1/\log \Delta$, there is a deterministic distributed algorithm that computes a $(2+\varepsilon)\Delta$-edge coloring in $O\big(\log^2\Delta \cdot \varepsilon^{-1} \cdot \log\log \Delta \cdot (\log\log\log\Delta)^{1.71} \cdot \log n\big)$ rounds.
\end{restatable}
This is significantly faster than the $O(\log^{11} n/\varepsilon^3)$-round algorithm of \cite{ghaffari17}. Subsequent to, and partly in parallel with, the work on the conference version of this paper, there has been further significant progress in the development of deterministic distributed edge coloring algorithms. This in particular includes the
first polylogarithmic-time deterministic $(2\Delta-1)$-edge coloring algorithm in \cite{FOCS17} by Fischer, Ghaffari, and Kuhn, which requires $O(\log^7 n)$ rounds. This was subsequently improved to $O(\log^6 n)$ rounds by Ghaffari, Harris and Kuhn in \cite{FOCS18} and to $O(\log^4 n)$ rounds by Harris in \cite{HarrisDerandomization}. In \cite{GKMU17}, Ghaffari, Kuhn, Maus and Uitto even go below the threshold of $2\Delta-1$ colors and provide deterministic polylogarithmic-time algorithms for $(1+\varepsilon)\Delta$-edge coloring. The splitting result of the current paper plays an important role in the latter result: the splitting brings the degrees down to a small value, with a negligible $(1+o(1))$ factor loss, and those small-degree graphs are then colored efficiently.
\Cref{thm:mainSplitting} has another fascinating consequence. Assume that we have a graph in which all nodes have an odd degree. If $\varepsilon<1/\Delta$, we get a directed degree splitting in which each node $v$ has outdegree either $\lfloor d(v)/2 \rfloor$ or $\lceil d(v)/2 \rceil$. Note that the number of nodes for which the outdegree is $\lfloor d(v)/2 \rfloor$ has to be exactly $n/2$. We therefore get an efficient distributed algorithm to divide the nodes of any odd-degree graph into two parts of exactly equal size. For bounded-degree graphs, the algorithm even runs in time $O(\log n)$.
\paragraph{Our Method in a Nutshell.}
The main technical contribution is a distributed algorithm that partitions the edge set of a given graph into \emph{edge-disjoint short paths} such that each node is the start or end of at most $\delta$ paths. We call such a partition a \emph{path decomposition} and $\delta$ its \emph{degree} (cf. \Cref{fig:pathDecomp} for an illustration of a path decomposition). Now if we orient each path of a path decomposition with degree $\delta$ consistently, we obtain an orientation of discrepancy at most $\delta$. Moreover, such an orientation can be computed in time linear in the maximum path length.
To study path decompositions in graph $G$, it is helpful to consider an auxiliary graph $H$ in which each edge $\{u,v\}$ represents a path from $u$ to $v$ in $G$; now $\delta$ is the maximum degree of graph~$H$. To construct a low-degree path decomposition where $\delta$ is small, we can start with a trivial decomposition $H = G$, and then repeatedly join pairs of paths: we can replace the edges $\{u,v_1\}$ and $\{u,v_2\}$ in graph $H$ with an edge $\{v_1,v_2\}$, and hence make the degree of $u$ lower, at a cost of increasing the path lengths---this operation is called a \emph{contraction} here.
If each node $u$ simply picked arbitrarily some edges $\{u,v_1\}$ and $\{u,v_2\}$ to contract, this might result in long paths or cycles. The key idea is that we can use a \emph{high-outdegree orientation} to select a good set of edges to contract: Assume that we have an orientation in $H$ such that all nodes have outdegree at least $2k$. Then each node could select $k$ pairs of outgoing edges to contract; this would reduce the maximum degree of $H$ from $\delta$ to $\delta - 2k$ and only double the maximum length of a path. Also see the illustrations of this contracting process in \Cref{fig:contract,fig:contractOne}.
In essence, this idea makes it possible to \emph{amplify} the quality of an orientation algorithm: Given an algorithm $A$ that finds an orientation with a large (but not optimal) outdegree, we can apply $A$ repeatedly to reduce the maximum degree of $H$. This will result in a low-degree path decomposition of $G$, and hence also provide us with a well-balanced orientation in $G$.
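The following centralized sketch (in Python, with a data layout of our own choosing) captures one round of this pairing-and-contraction step; the distributed implementation and the treatment of self-loops follow in \Cref{sec:shortPathsDecompositions}.
\begin{verbatim}
def contract_round(paths, tail):
    # One parallel contraction round on the auxiliary graph H
    # (centralized sketch; assumes H is simple, for brevity).
    # paths: dict path-id -> (u, v), the endpoints of a path in G.
    # tail[p]: the endpoint of p that owns it, i.e. the tail of the
    # oriented edge {u, v} of H. Each node joins its owned paths in
    # disjoint pairs: its degree in H drops by 2 per pair, while
    # path lengths at most double.
    owned = {}
    for p, v in tail.items():
        owned.setdefault(v, []).append(p)
    for v, ps in owned.items():
        for p, q in zip(ps[0::2], ps[1::2]):
            (a, b), (c, d) = paths.pop(p), paths.pop(q)
            x = a if b == v else b    # far endpoint of p
            y = c if d == v else d    # far endpoint of q
            paths[(p, q)] = (x, y)    # new path  x -...- v -...- y
    return paths                      # re-orient before the next round
\end{verbatim}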
\paragraph{Structure.}
The roadmap for this paper is as follows:
\begin{itemize}[noitemsep]
\item \Cref{sec:shortPathsDecompositions}: Partitioning graphs in edge-disjoint short paths (main technical contribution).
\item \Cref{sec:basecase}: Finding orientations in $3$-regular graphs (used in \Cref{sec:outdegtwo} and in \Cref{sec:mainsplitting}).
\item \Cref{sec:outdegtwo}: Finding orientations in $5$-regular graphs (used in \Cref{sec:shortPathsDecompositions}).
\item \Cref{sec:mainsplitting}: Proof of \Cref{thm:mainSplitting}.
\item \Cref{sec:edgeColoring}: Proof of \Cref{crl:edgeColoring}.
\item \Cref{sec:weak2orientation-lb}: A lower bound for orientations in even-degree graphs.
\end{itemize}
Here \Cref{sec:shortPathsDecompositions} is the most interesting part; \Cref{sec:basecase} and \Cref{sec:outdegtwo} deal with some corner cases that are needed in order to have tight constants for odd-degree graphs.
\section{Short Path Decompositions}\label[section]{sec:shortPathsDecompositions}
The basic building block of our approach is to find consistently oriented and short (length $O(\Delta)$) paths in an oriented graph.
The first crucial observation is that an oriented path going through a node $v$ is ``good'' from the perspective of $v$ in the sense that it provides exactly one incoming and one outgoing edge to $v$.
Another important feature is that flipping a consistently oriented path does not increase the discrepancy between incoming and outgoing edges for any non-endpoint node along the path.
Following these observations, we recursively decompose a graph into a set of short paths, and merge the paths to ensure that every node is at the end of only a few paths. If a node $v$ is at the end of $\delta(v)$ paths, an arbitrary orientation of these paths will provide a split with discrepancy at most $\delta(v)$ for $v$.
The recursive graph operations may turn graphs into multigraphs with self-loops. Thus, throughout this section, a multigraph is allowed to have self-loops, and the nodes of a path $v_1, \ldots, v_k$ do not need to be distinct; however, a path can contain each edge at most once. A self-loop at a node $v$ contributes two to the degree of $v$.
\subsection{Orientations and Edge Contractions}
The core concept to merge many paths in parallel in one step of the aforementioned recursion is given by the concept of weak $k(v)$-orientations. We begin by extending and adapting prior work \cite{ghaffari17} on weak orientations to our needs.
\begin{definition}\label{def:sinkless}
A \emph{weak $k(v)$-orientation} of a multigraph $G=(V,E)$ is an orientation of the edges $E$ such that each node $v\in V$ has outdegree at least $k(v)$.
\end{definition}
Note that a weak $1$-orientation is a \emph{sinkless orientation}. By earlier work, it is known that a weak $1$-orientation can be found in time $O(\log n)$ in simple graphs of minimum degree at least three.
\begin{lemma}[Sinkless Orientation, \cite{ghaffari17}]
\label[lemma]{lemma:weak1}
A weak $1$-orientation can be computed by a deterministic algorithm in $O(\log n)$ rounds in simple graphs with minimum degree $3$ (and by a randomized algorithm in $O(\log \log n)$ rounds in the same setting).
\end{lemma}
In our proofs, we may face multigraphs with multiple self-loops and with nodes of degree less than three and thus, we need a slightly modified version of this result.
\begin{corollary}[Sinkless Orientation, \cite{ghaffari17}]
\label[corollary]{lemma:weakmulti}
Let $G = (V, E)$ be a multigraph and $W \subseteq V$ a subset of nodes with degree at least three. Then, there is a deterministic algorithm that finds an orientation of the edges such that every node in $W$ has outdegree of at least one and runs in $O(\log n)$ rounds (and a randomized algorithm that runs in $O(\log \log n)$ rounds).
\end{corollary}
\begin{proof}
For every multi-edge, both endpoints pick one edge and orient it outwards, ties broken arbitrarily. For every self-loop, the node will orient it arbitrarily. This way, every node with an incident multi-edge or self-loop will have an outgoing edge.
From here on, let us ignore the multi-edges and self-loops and focus on the simple graph $H$ remaining after removing the multi-edges.
For every node $v$ with degree at most two in $H$, we connect $v$ to $3 - d(v)$ copies of the following gadget $U$.
The set of nodes of $U = \{ u_1, u_2, u_3, u_4, u_5 \}$ is connected as a cycle.
Furthermore, we add edges $\{u_2, u_4\}$ and $\{u_3, u_5\}$ to the gadget and connect $u_1$ to $v$.
This way, the gadget is $3$-regular.
In the simple graph constructed by adding these gadgets, we run the algorithm of \Cref{lemma:weak1}.
Thus, any node of degree at least three in the original graph that was not initially adjacent to a multi-edge or self-loop gets an outgoing edge.
Since we know that every node incident to a multi-edge or self-loop in $G$ also has an outgoing edge, the claim follows.
\end{proof}
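The gadget construction is easy to make explicit; the following sketch (function and naming conventions are ours) builds one copy of $U$ and attaches it to $v$:
\begin{verbatim}
import networkx as nx

def attach_gadget(G, v, tag):
    # Attach one 3-regular gadget U to node v (sketch). U is the
    # 5-cycle u1..u5 plus the chords {u2,u4} and {u3,u5}; wiring u1
    # to v raises d(v) by one, and every u_i ends up with degree 3.
    # `tag` only makes the gadget's node names unique.
    u = ["%s_u%d" % (tag, i) for i in range(1, 6)]
    G.add_edges_from(zip(u, u[1:] + u[:1]))         # the 5-cycle
    G.add_edges_from([(u[1], u[3]), (u[2], u[4])])  # the two chords
    G.add_edge(u[0], v)                             # hook u1 to v
\end{verbatim}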
The sinkless orientation algorithm from \Cref{lemma:weakmulti} immediately leads to an algorithm which finds a weak $\lfloor d(v)/3\rfloor$-orientation in multigraphs in time $O(\log n)$.
\begin{lemma}[Weak $\lfloor d(v) /3\rfloor$-Orientation]
\label[lemma]{lemma:weakDelta3}
There is a deterministic algorithm that finds a weak $\lfloor d(v) /3\rfloor$-orientation in time $O(\log n)$ in multigraphs.
\end{lemma}
\begin{proof}
Partition node $v$ into $\lceil d(v)/3\rceil$ nodes and split its adjacent edges among them such that $\lfloor d(v)/3\rfloor$ nodes have exactly three adjacent edges each and the remaining node, if any, has $d(v) \bmod 3$ adjacent edges. Note that the partitioning of a node into several nodes may cause self-loops to go between two different copies of the same node. Then, use the algorithm from \Cref{lemma:weakmulti} to compute a weak $1$-orientation of the resulting multigraph where degree two or degree one nodes do not have any outdegree requirements. If we undo the partition but keep the orientation of the edges we have a weak $\lfloor d(v)/3 \rfloor$-orientation of the original multigraph.
\end{proof}
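The node-splitting reduction in this proof can be summarized by the following centralized sketch (the data layout is our own assumption): each endpoint of an edge consumes one of three ``ports'' of the current virtual copy of its node.
\begin{verbatim}
import itertools
import networkx as nx

def split_into_degree_3(G):
    # Sketch of the reduction: node v of degree d(v) becomes
    # ceil(d(v)/3) virtual copies, each receiving at most three of
    # v's edge endpoints. A weak 1-orientation of the result projects
    # back to a weak floor(d(v)/3)-orientation of G.
    counters = {v: itertools.count() for v in G.nodes}
    def port(v):                      # next copy of v with a free slot
        return (v, next(counters[v]) // 3)
    H = nx.MultiGraph()
    H.add_edges_from((port(u), port(v)) for u, v in G.edges)
    return H                          # copy (v, j) maps back to v
\end{verbatim}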
The concept of weak orientations can be extended to both indegrees and outdegrees.
\begin{definition}
A \emph{strong $k(v)$-orientation} of a multigraph $G=(V,E)$ is an orientation of the edges $E$ such that each node $v\in V$ has both indegree and outdegree at least $k(v)$.
\end{definition}
The techniques in this section need orientations in which nodes have at least two outgoing edges. \Cref{lemma:weakDelta3} provides such orientations for nodes of degree at least six; but for nodes of smaller degree it guarantees only one outgoing edge. It is impossible to improve this for nodes with degree smaller than five in time $o(n)$ (cf. \Cref{thm:weak2orientation-lb}). But we obtain the following result for the nodes with degree five. Its proof relies on different techniques than the techniques in this section, and therefore it is deferred to \Cref{sec:outdegtwo}.
\begin{restatable}[Outdegree 2]{lemma}{lemmaFirstoutdegtwo}\label{lemma:firstoutdeg-2}
The following problem can be solved in time $O(\log n)$ with deterministic algorithms and $O(\log \log n)$ with randomized algorithms:
given any multigraph, find an orientation such that all nodes of degree at least $5$ have outdegree at least $2$.
\end{restatable}
\subsection{Path Decompositions}
We now introduce the concept of a path decomposition.
The decomposition proves to be a strong tool due to the fact that it can be turned into a strong orientation (cf.~\Cref{lemma:fromPathDecompToStrongOrient}).
\begin{definition}[Path Decomposition]
Given a multigraph $G=(V,E)$, a positive integer~$\lambda$, and a function $\delta\colon V\rightarrow \mathbb{R}_{\geq 0}$, we call a partition $\mathcal{P}$ of the edges $E$ into edge-disjoint paths $P_1,\ldots, P_{\rho}$ a \emph{$(\delta,\lambda)$-path decomposition} if
\begin{itemize}[noitemsep]
\item for every $v\in V$ there are at most $\delta(v)$ paths that start or end in $v$,
\item each path $P_i$ is of length at most $\lambda$.
\end{itemize}
For each path decomposition $\mathcal{P}$, we define the multigraph $G(\mathcal{P})$ as follows: the vertex set of $G(\mathcal{P})$ is $V$, and there is an edge between two nodes $u,v\in V$ if $\mathcal{P}$ has a path which starts at $u$ and ends at $v$ or vice versa.
The \emph{degree of $v$ in $\mathcal{P}$} is defined to be its degree in $G(\mathcal{P})$ and the \emph{maximum degree of the path decomposition $\mathcal{P}$} is the maximum degree of $G(\mathcal{P})$.
\end{definition}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figures/pathDecomposition.pdf}
\caption{The longest path in the given path decomposition has length five and there are three paths that start/end at the red node.}
\label{fig:pathDecomp}
\end{figure}
Notice that $\delta(v)$ is an upper bound on the degree of $v$ in $\mathcal{P}$ and $\max_{v\in V}\delta(v)$ is an upper bound on the maximum degree of the path decomposition. Note that $d_G(v)-d_{G(\mathcal{P})}(v)$ is always even. If the function $\delta$ of some $(\delta,\lambda)$-path decomposition satisfies $\delta(v)=a$ for all $v\in V$ and some value $a$, we also speak of an $(a,\lambda)$-path decomposition. See \Cref{fig:pathDecomp} for an illustration of a path decomposition and its parameters.
To make proofs more to the point instead of getting lost in notation, we often identify $G(\mathcal{P})$ with $\mathcal{P}$ and vice versa.
A distributed algorithm has computed a path decomposition $\mathcal{P}$ if every node knows the paths of $\mathcal{P}$ it belongs to.
Note that it is trivial to compute a $(d(v),1)$-path decomposition in 1 round, because every edge can form a separate path.
Let $\left\lfloor \cdot \right\rfloor_*$ denote the function which rounds down to the previous even integer, that is, $\left\lfloor x \right\rfloor_* = 2 \lfloor x/2 \rfloor$.
The following virtual graph transformation, which we call \emph{edge contraction}, is the core technical construction in this section.
\paragraph{Disjoint Edge Contraction.}
The basic idea behind edge contraction is to turn two incident edges $\{v, u\}$ and $\{v, w\}$ into a single edge $\{u, w\}$ by removing the edges $\{v, u\}$ and $\{v, w\}$ and adding a new edge $\{u, w\}$.
We say that node $v$ contracts when an edge contraction is performed on some pair of edges $\{v, u\}$ and $\{v, w\}$.
When node $v$ performs a contraction of edges $\{v, u\}$ and $\{v, w\}$, its degree $d(v)$ is reduced by two while maintaining the degrees of $u$ and $w$.
Notice that adjacent nodes can only contract edge-disjoint pairs of edges in parallel, and a contraction may also produce isolated nodes, multi-edges and self-loops. If a self-loop $\{v,v\}$ is selected to be contracted with any other edge $\{v,w\}$ it simply results in a new edge $\{v,w\}$, as if the self-loop was any other edge. Such a contraction still reduces the degree of $v$ by two, as the self-loop counted as both an incoming and an outgoing edge of $v$.
See \Cref{fig:contract} for an illustration.
Edge contractions can be used to compute path decompositions, e.g., an edge which is created through a contraction of two edges can be seen as a path of length two. If an edge $\{u,v\}$ represents a path from $u$ to $v$ in $G$, e.g., when recursively applying edge contractions on the graph $G(\mathcal{P})$ for some given path decomposition $\mathcal{P}$, each contraction merges two paths of $\mathcal{P}$. If each node simply picked arbitrarily some edges to contract, this
might result in long paths or cycles.
The key idea is to use orientations of the edges to find large sets of edges which can be contracted in parallel. If every node only contracts outgoing edges of a given orientation all contractions of all nodes can be performed in parallel.
If we start with a trivial decomposition, i.e., each edge is its own path, and perform $k$ iterations of parallel contraction, where, in each iteration, each node contracts two edges, we obtain a $(d(v) - 2k, 2^k)$-path decomposition.
If we want the degrees $d(v)-2k$ to be constant, we have to choose $k$, i.e., the number of iterations, on the order of $\Delta$, which implies exponentially long paths and an exponential runtime, as the path lengths may double with each contraction.
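For example, for $\Delta=64$, reducing all degrees to $4$ in this naive way requires $k=30$ contraction rounds, and hence paths of length up to $2^{30}$.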
The technical challenge to avoid exponential runtime is to achieve a lot of parallelism while at the same time reducing the degrees quickly. We achieve this with the help of weak orientation algorithms: An outdegree of $f(v)$ at node $v$ allows the node to contract $\left\lfloor f(v)\right\rfloor_*$ edges at the same time and in parallel with all other nodes. If $f(v)$ is a constant fraction of $d(v)$ this implies that $O(\log \Delta)$ iterations are sufficient to reach a small degree.
As the runtime is exponential in the number of iterations and the constant in the $O$-notation might be large, this is still not enough to ensure a runtime which is linear in $\Delta$, up to polylogarithmic terms.
Instead, we begin with the weak orientation algorithm from the previous section and iterate it until a path decomposition with a small (but not optimal!) degree is obtained. We then use this decomposition to construct a better orientation algorithm, use the better orientation to compute an even better one, and so on. Recursing with the correct choice of parameters leads to a runtime which is linear in $\Delta$, up to polylogarithmic terms.
We take the liberty to use the terms recursion and iteration interchangeably depending on which term is more suitable in the respective context.
Refer to \Cref{fig:contractOne} for an illustration of the edge contraction technique with a given orientation.
\begin{figure}
\centering
\begin{tabular}{@{}c@{\hspace{10mm}}|@{\hspace{10mm}}c@{}}
\includegraphics[scale=0.8]{figures/contractNew0.pdf} &
\includegraphics[scale=0.8]{figures/contractNew25.pdf} \\[10mm]
\includegraphics[scale=0.8]{figures/contractNew1a.pdf} &
\includegraphics[scale=0.8]{figures/contractNew3a.pdf} \\[10mm]
\includegraphics[scale=0.8]{figures/contractNew2a.pdf} &
\includegraphics[scale=0.8]{figures/contractNew4a.pdf}
\end{tabular}
\caption{In two sequences of three illustrations this figure depicts two sets of contractions. In each column the first illustration is the situation before the contraction, the second one depicts the orientation and the selected outgoing edges which will be contracted in parallel and the third illustration shows the situation after the contraction where new edges are highlighted.\\
\hspace*{2.5ex}A contraction may produce isolated nodes, multi-edges and self-loops. If a self-loop $\{v,v\}$ is selected to be contracted with any other edge $\{v,w\}$ it simply results in a new edge $\{v,w\}$ as if the self-loop was any other edge. Such a contraction still reduces the degree of $v$ by two.\\
\hspace*{2.5ex}Note that we used a graph with small node degrees for illustration purposes. We cannot quickly compute an orientation with large outdegree for nodes with degree less than five.}
\label[figure]{fig:contract}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{figures/contractOne1.pdf}\label[figure]{fig:contractOne1}
\hspace{\stretch{1}}
\includegraphics[scale=0.4]{figures/contractOne2a.pdf}\label[figure]{fig:contractOne2}
\hspace{\stretch{1}}
\includegraphics[scale=0.4]{figures/contractOne25.pdf}\label[figure]{fig:contractOne35}
\hspace{\stretch{1}}
\includegraphics[scale=0.4]{figures/contractOne3.pdf}\label[figure]{fig:contractOne3}
\hspace{\stretch{1}}
\includegraphics[scale=0.4]{figures/contractOne4.pdf}\label[figure]{fig:contractOne4}
\caption{The first two illustrations show that selecting the outgoing edges for a contraction can be seen as dividing the node into a set of virtual nodes, each incident to two outgoing edges. Then, in the third illustration, the contraction is obtained by removing the virtual nodes but keeping the connection alive. The last two illustrations show how an orientation on contracted edges is used to orient the edges of the original graph such that virtual nodes obtain an equal split (and such that the original node obtains a good split).}
\label[figure]{fig:contractOne}
\end{figure}
We will now apply a simple version of our contraction technique to obtain a fast and precise path decomposition algorithm in $\Delta$-regular graphs for $\Delta = O(1)$. The result can also be formulated for non-regular graphs, but here we choose regular graphs to focus on the proof idea which is the key theme throughout most proofs of this section.
\begin{theorem}[$(\Delta-2k,2^k)$-Path Decomposition]
\label[theorem]{thm:decompExpDeltaRuntime}
Let $G=(V,E)$ be a $\Delta$-regular multigraph. For any positive integer $k \leq \Delta/2-2$ there is a deterministic distributed algorithm that computes a $(\Delta-2k,2^k)$-path decomposition in time $O(2^k \log n)$.
\end{theorem}
\begin{proof}
We recursively compute $k$ multigraphs $H_1, \ldots, H_k$ where $H_k$ corresponds to the resulting path decomposition.
To obtain $H_1$, we begin by computing a weak $2$-orientation $\pi$ of $G$ with the algorithm from \Cref{lemma:weakDelta3} (note that by assumption we have $k \ge 1$ and therefore $\Delta \ge 6$).
Then, every node contracts a pair of outgoing incident edges.
Notice that contractions of adjacent nodes are always disjoint.
The degree of every node is reduced to $\Delta - 2$ and each edge in the resulting multigraph $H_1$ corresponds to a path in $G$ of length at most two.
Applying this method recursively with recursion depth $k$ yields multigraphs $H_1,\ldots,H_k$ where the maximum degree of $H_i$ is $\Delta-2i$ and each edge in $H_i$ corresponds to a path in $G$ of length at most $2^i$. Thus, $H_k$ corresponds to a $(\Delta-2k,2^k)$-path decomposition. Note that there is one execution of \Cref{lemma:weakDelta3} in each recursion level and it provides a weak $2$-orientation of the respective graph because the degree of each node is at least six due to $i\leq k\leq \Delta/2-2$.
One communication round in recursion level $i$ can be simulated in $2^i$ rounds in the original graph.
Thus, the runtime is dominated by the application of \Cref{lemma:weakDelta3} in recursion level $k$ which yields a time complexity of $O(2^k \log n)$.
\end{proof}
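As an illustrative instantiation, for a $10$-regular multigraph and $k=3=\Delta/2-2$, \Cref{thm:decompExpDeltaRuntime} yields a $(4,8)$-path decomposition in $O(\log n)$ rounds.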
Next, we show how to turn a $(\delta,\lambda)$-path decomposition efficiently into a strong orientation.
The strong orientation obtained this way has $\delta(v)$ as an upper bound on the discrepancy between in- and outdegree of node $v$.
\begin{lemma}
\label[lemma]{lemma:fromPathDecompToStrongOrient}
Let $G=(V,E)$ be a multigraph with a given $(\delta,\lambda)$-path decomposition $\mathcal{P}$. There is a deterministic algorithm that computes a strong $\frac{1}{2}(d(v)-\delta(v))$-orientation of $G$ in $O(\lambda)$ rounds.
\end{lemma}
\begin{proof}
Let $H=G(\mathcal{P})$ be the virtual graph that corresponds to $\mathcal{P}$ and let $\pi_H$ be an arbitrary orientation of the edges of $H$.
Let $(u, v)$ be an edge of $H$ oriented according to $\pi_H$ and let $P = u_1, \ldots, u_k$, where $u_1 = u$ and $u_k = v$, be the path in the original graph $G$ that corresponds to edge $(u, v)$ in $H$.
Now, we orient the path $P$ in a consistent way according to the orientation of $(u, v)$, i.e., edge $\{u_i, u_{i + 1}\}$ is directed from $u_i$ to $u_{i + 1}$ for all $1 \leq i \leq k - 1$.
Since every edge in $G$ belongs to exactly one path in the decomposition, performing this operation for every edge in $H$ provides a unique orientation for every edge in $G$.
Let us denote the orientation obtained this way by $\pi_G$.
Consider some node $v$ and observe that orienting any path that contains $v$ but where $v$ is not either the start or the endpoint adds exactly one incoming edge and one outgoing edge for~$v$.
Therefore, the discrepancy of the indegrees and outdegrees of $v$ in $\pi_G$ is bounded from above by the discrepancy in $\pi_H$, which is at most $\delta(v)$ by the definition of a $(\delta,\lambda)$-path decomposition. It follows that $\pi_G$ is a strong $\frac{1}{2}(d(v)-\delta(v))$-orientation.
Finally, since the length of any path in $\mathcal{P}$ is bounded above by $\lambda$, consistently orienting the paths takes $O(\lambda)$ communication rounds, finishing the proof.
\end{proof}
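The orientation step in this proof is simple enough to spell out; the following sketch (the representation of paths is our own choice) orients every path from its first to its last node:
\begin{verbatim}
def orient_from_paths(paths):
    # Sketch: turn a path decomposition into an edge orientation.
    # paths: list of paths, each a list of nodes [u_1, ..., u_k];
    # consecutive nodes are joined by an edge of G. Orienting each
    # path from u_1 to u_k gives every interior visit of a node one
    # incoming and one outgoing edge, so the discrepancy of v is
    # bounded by the number of paths that start or end at v.
    return [(u, w) for P in paths for u, w in zip(P, P[1:])]
\end{verbatim}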
In the following, we formally use weak orientations to compute a path decomposition. This lemma will later be iterated in \Cref{cor:iterative}.
\begin{lemma}
\label[lemma]{lemma:newDecomp}
Assume that there exists a deterministic distributed algorithm that finds a weak $\bigl(\bigl(\frac{1}{2}-\varepsilon\bigr)d(v)- 2\bigr)$-orientation in time $T(n,\Delta)$.
Then, there is a deterministic distributed algorithm that finds a $\bigl(\bigl(\frac{1}{2}+\varepsilon\bigr)d(v)+4,\, 2\bigr)$-path decomposition in time $O(T(n,\Delta))$.
\end{lemma}
\begin{proof}
Let $G$ be a multigraph with a weak $\bigl(\bigl(\frac{1}{2}-\varepsilon\bigr)d(v)-2\bigr)$-orientation given by the algorithm promised in the lemma statement.
Now every node $v$ arbitrarily divides its outgoing edges into pairs and contracts these pairs, yielding a multigraph with degree at most
\[\textstyle d(v) - \left\lfloor\bigl(\frac{1}{2}-\varepsilon\bigr)d(v)\right\rfloor_* + 2 \leq \bigl(\frac{1}{2} + \varepsilon\bigr)d(v) + 4,\]
where we used $\left\lfloor x-2\right\rfloor_*=\left\lfloor x\right\rfloor_*-2$ and $\left\lfloor x\right\rfloor_*\geq x-2$.
Observing that all of the chosen edge pairs are disjoint yields that the constructed multigraph is a $\bigl(\bigl(\frac{1}{2}+\varepsilon\bigr)d(v)+4,\, 2\bigr)$-path decomposition.
The contraction operation requires one round of communication.
\end{proof}
In the following lemma we iterate \Cref{lemma:newDecomp} to obtain an even better path decomposition. Furthermore, more care is required in the details to avoid rounding errors and to obtain the correct result when the degrees get small. \Cref{cor:iterative} will be applied many times in proceeding subsections.
\begin{lemma}
\label[lemma]{cor:iterative}
Let $0<\varepsilon\leq 1/6$.
Assume that $T(n,\Delta)\geq \log n$ is the running time of an algorithm $\mathcal{A}$ that finds a weak $\bigl((1/2-\varepsilon)d(v)-2\bigr)$-orientation.
Then for any positive integer~$i$, there is a deterministic distributed algorithm $\mathcal{B}$ that finds a $\bigl((1/2+\varepsilon)^id(v)+4,\, 2^{i+5} \bigr)$-path decomposition $\mathcal{P}$ in time $O(2^i\cdot T(n,\Delta))$.
\end{lemma}
\begin{proof}
Let $i$ be a positive integer. We define algorithm $\mathcal{B}$ such that it uses algorithm $\mathcal{A}$ to recursively compute graphs $H_0,H_1,\dotsc,H_i, H_{i+1}, \dotsc, H_{i+5}$ and path decompositions $\mathcal{P}_1,\mathcal{P}_2,\dotsc,\allowbreak\mathcal{P}_{i},\allowbreak \mathcal{P}_{i+1},\allowbreak\dotsc,$ $ \mathcal{P}_{i+5}$. Let $G=(V,E)$ be a multigraph. For $j=0,\ldots, i-1$ we set $H_0=G$ and $H_{j+1}=H_j(\mathcal{P}_{j+1})$, where $\mathcal{P}_{j+1}$ is the path decomposition which is returned by applying \Cref{lemma:newDecomp} with algorithm $\mathcal{A}$ on $H_j$. This guarantees that the path decomposition $\mathcal{P}_i$ has degree at most $(\frac{1}{2}+\varepsilon)^id(v)+12$ at each node $v$. The remaining five path decompositions are computed afterwards (see the end of this proof) and reduce the additive $12$ to an additive $4$.
\subparagraph{Properties of \boldmath$\mathcal{P}_{1}, \ldots, \mathcal{P}_{i}$.}
We first show that for $j=1,\ldots,i$ the path decomposition $\mathcal{P}_j$ is a $(z_j(v), 2^j)$-path decomposition with
\[z_j(v)=\bigl(\tfrac{1}{2}+\varepsilon\bigr)^jd(v)+4\sum_{k=0}^{j-1}\bigl(\tfrac{1}{2}+\varepsilon\bigr)^k.\]
With every application of \Cref{lemma:newDecomp} the length of the paths at most doubles, which implies that the path length of $\mathcal{P}_j$ is upper bounded by $2^j$.
We now prove by induction that the variables $z_j(v)$, $j=1,\ldots i$ behave as claimed:
\begin{itemize}
\item \emph{Base case:} $z_1(v)=\big(\frac{1}{2}+\varepsilon\big)d(v)+4$ follows from the invocation of \Cref{lemma:newDecomp} with $\mathcal{A}$ on $H_0=G$.
\item \emph{Inductive step:} Using the properties of \Cref{lemma:newDecomp} we obtain
\begin{align*}
z_{j+1}(v) & =\bigl(\tfrac{1}{2}+\varepsilon\bigr)z_j(v)+4
\leq \bigl(\tfrac{1}{2}+\varepsilon\bigr)\biggl(\bigl(\tfrac{1}{2}+\varepsilon\bigr)^jd(v)+4\sum_{k=0}^{j-1}\bigl(\tfrac{1}{2}+\varepsilon\bigr)^k\biggr)+4\\
& =\bigl(\tfrac{1}{2}+\varepsilon\bigr)^{j+1}d(v)+4\sum_{k=0}^{j}\bigl(\tfrac{1}{2}+\varepsilon\bigr)^k\text{.}
\end{align*}
\end{itemize}
Using the geometric series to bound the last sum and then $\varepsilon\leq 1/6$ we obtain that
\[z_i(v)\leq \bigl(\tfrac{1}{2}+\varepsilon\bigr)^id(v)+12.\]
\subparagraph{Reducing the Additive Term.}
Now, we compute the five further path decompositions $\mathcal{P}_{i+1}, \ldots, \mathcal{P}_{i+5}$ to reduce the additive term in the degrees of the path decomposition from $12$ to $4$; in each path decomposition this additive term is reduced by two for certain nodes. In each of the first four path decompositions nodes with degree at least six in the current path decomposition reduce the additive term by at least two: we compute a weak $\lfloor d(v)/3\rfloor$-orientation (using \Cref{lemma:weakDelta3}) and then every node with degree at least six contracts two outgoing edges.
In the last path decomposition we compute an orientation in which every node with degree at least five in the current path decomposition has two outgoing edges (using \Cref{lemma:firstoutdeg-2}) and then each of them contracts two incident edges. Thus in the last path decomposition the additive term of nodes with degree five is reduced by two.
To formally prove that we obtain the desired path decomposition let $x_{i+j}(v)$ be the actual degree of node $v$ in $G(\mathcal{P}_{i+j})$ for $j=0,\ldots, 5$. First note that the degree of a node never increases due to an edge contraction, not even due to an edge contraction which is performed by another node.
\subparagraph{Constructing \boldmath$\mathcal{P}_{i+1}, \ldots, \mathcal{P}_{i+4}$.}
To determine path decomposition $\mathcal{P}_{i+j+1}$ for $j=0,\ldots,3$, we compute an orientation of $G(\mathcal{P}_{i+j})$ in which every node $v$ with $x_{i+j}(v) \geq 6$ has outdegree at least two (one can use the algorithm described in \Cref{lemma:weakDelta3}). Then $\mathcal{P}_{i+j+1}$ is obtained if every node with $x_{i+j}(v)\geq 6$ contracts two of its incident outgoing edges. So, whenever $x_{i+j}(v)\geq 6$ we obtain that $x_{i+j+1}(v)= x_{i+j}(v)-2$, that is, $x_{i+j+1}(v)\leq z_i(v)-2(j+1)$. If $x_{i+j}(v)\geq 6$ for all $j=0,\ldots,3$ we have
\[x_{i+5}(v)\leq x_{i+4}(v)\leq (1/2+\varepsilon)^id(v)+4.\]
Otherwise, for some $j=0,\ldots,3$, we have $x_{i+j}(v)\leq 5$, that is, $x_{i+4}(v)\leq 4$ or $x_{i+4}(v)=5$. If $x_{i+4}(v)\leq 4$ we have
\[x_{i+5}(v)\leq x_{i+4}(v)\leq 4\leq (1/2+\varepsilon)^id(v)+4.\]
\subparagraph{Constructing \boldmath$\mathcal{P}_{i+5}$.}
For nodes with $x_{i+4}(v)=5$ we compute one more path decomposition.
We use \Cref{lemma:firstoutdeg-2} to compute an orientation of $G(\mathcal{P}_{i+4})$ in which each node with degree at least five has two outgoing edges; then each node with at least two outgoing edges contracts one pair of its incident outgoing edges.
Thus the degree of nodes with degree five reduces by two and we obtain that the path decomposition $\mathcal{P}_{i+5}$ is a $\bigl((\frac{1}{2}+\varepsilon)^id(v)+4,\, 2^{i+5}\bigr)$-path decomposition.
\subparagraph{Running Time.}
The time complexity to invoke algorithm $\mathcal{A}$ or the algorithms from \Cref{lemma:weakDelta3} or \Cref{lemma:firstoutdeg-2} on graph $H_j$ is $O(2^jT(n,\Delta))$ because the longest path in $H_j$ has length $2^j$ and $T(n,\Delta)\geq \log n$. Thus, the total runtime is
\[O\biggl(\,\sum_{j=0}^{i+5}2^jT(n,\Delta)\biggr)=O\bigl(2^{i}T(n,\Delta)\bigr).\qedhere\]
\end{proof}
The reduction of the additive term in the proof of \Cref{cor:iterative} is most likely not helpful for edge coloring applications, as constant degree graphs can be colored quickly anyway. However, for theoretical reasons it is interesting to see how close we can get to optimal splits with regard to the discrepancy. The splits that we obtain for directed splitting are optimal; the undirected splitting result leaves some room for improvement.
\subsection{Amplifying Weak Orientation Algorithms}
Now, we use \Cref{cor:iterative} to iterate a given weak orientation algorithm $\mathcal{A}$ to obtain a new weak orientation algorithm $\mathcal{B}$. The goal is that $\mathcal{B}$ has an outdegree guarantee which is much closer to $(1/2) d(v)$ than the guarantee provided by algorithm $\mathcal{A}$.
\begin{lemma}\label[lemma]{lemma:weakTransformation}
Let $0<\varepsilon_2<\varepsilon_1\leq \frac{1}{6}$.
Assume that there is a deterministic algorithm $\mathcal{A}$ which computes a weak $\left(\left( \frac{1}{2}-\varepsilon_1 \right) d(v)-2\right)$-orientation and runs in time $T(n,\Delta)$.
Then there is a deterministic weak $\left(\left( \frac{1}{2}-\varepsilon_2 \right)d(v)-2 \right)$-orientation algorithm $\mathcal{B}$ with running time
\begin{align}
\label[equation]{eqn:runtimeweakTransformation}
O\Bigl(\varepsilon_2^{\log_2^{-1}( \frac{1}{2}+\varepsilon_1 )}\cdot T(n,\Delta)\Bigr)=O\Bigl(\varepsilon_2^{-(1+24\varepsilon_1)}\cdot T(n,\Delta)\Bigr).
\end{align}
\end{lemma}
Let $\alpha = \frac{1}{2}-\varepsilon_1$ and $\beta = \frac{1}{2} + \varepsilon_1$. The roadmap for the proof of Lemma \ref{lemma:weakTransformation} is as follows:
\begin{enumerate}[label=(\arabic*)]
\item Execute $i$ iterations of a weak $(\alpha d(v) - 2)$-orientation algorithm, for an $i$ that will be chosen later, and after each iteration, perform disjoint edge contractions. Thus, we obtain a $\bigl(\beta^{i} d(v) + 4,\, 2^{i + 5} \bigr)$-path decomposition using \Cref{cor:iterative}.
\item Apply \Cref{lemma:fromPathDecompToStrongOrient} to obtain a strong (and thus also a weak) $\bigl( \frac{1}{2} ( 1 - \beta^i ) d(v) - 2\bigr)$-orientation.
\item By setting $i = \log(\varepsilon_2) / \log(\beta)$ we get that $\beta^{i} = \varepsilon_2$ and the running time of steps 1--2 is
\[
O(2^i T(n,\Delta)) = O\bigl(\varepsilon_2^{\log_2^{-1}{\beta }}\cdot T(n,\Delta)\bigr) = O\bigl(\varepsilon_2^{-(1 + 24\varepsilon_1)}\cdot T(n,\Delta)\bigr),
\]
where $T(n, \Delta)$ is the runtime of the weak $(\alpha d(v) - 2)$-orientation algorithm. The last equality holds because with \Cref{lemma:taylor}, we obtain that $-\log_2^{-1} \beta \leq 1+24\varepsilon_1$ when $\varepsilon_1 \leq 1/6$.
\end{enumerate}
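As a numerical illustration (numbers chosen only for this example): for $\varepsilon_1=1/6$ and $\varepsilon_2=1/16$ we have $\beta=2/3$, hence $i=\log_2(\varepsilon_2)/\log_2\beta\approx 6.84$ and $2^{i}=\varepsilon_2^{\log_2^{-1}\beta}\approx 16^{1.71}\approx 114$; the slowdown relative to $T(n,\Delta)$ is polynomial in $1/\varepsilon_2$, with exponent just below $1.71$.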
With \Cref{lemma:weakTransformation} at hand we can amplify the quality of splitting algorithms and obtain the following theorem.
\begin{theorem}
\label[theorem]{thm:orientationAlgorithms}
Let $\delta$ be a positive integer. There exist the following deterministic weak orientation algorithms.
\begin{enumerate}[label=(\alph*)]
\item $\mathcal{A}$: weak $\left( \left( \frac{1}{2}-1/\log\log \frac{\Delta}{\delta} \right)d(v)- 2 \right)$-orientation in time $O\bigl( \bigl(\log\log \frac{\Delta}{\delta} \bigr)^{1.71} \cdot \log n \bigr)$.
\item $\mathcal{B}$: weak $\left( \left( \frac{1}{2}-1/\log \frac{\Delta}{\delta} \right)d(v)- 2 \right)$-orientation in time $O \bigl( \log \frac{\Delta}{\delta} \cdot \bigl(\log\log{\frac{\Delta}{\delta}}\bigr)^{1.71} \cdot \log n \bigr)$.
\item $\mathcal{C}$: weak $\left( \left( \frac{1}{2}-\frac{\delta}{\Delta} \right)d(v)- 2 \right)$-orientation in time $O \bigl(\frac{\Delta}{\delta} \cdot \log \frac{\Delta}{\delta} \cdot \bigl( \log\log{\frac{\Delta}{\delta}} \bigr)^{1.71} \cdot \log n \bigr)$.
\end{enumerate}
\end{theorem}
In the proof of \Cref{thm:orientationAlgorithms} we perform the following steps:
\begin{enumerate}[resume*]
\item Use \Cref{lemma:weakTransformation} with $\varepsilon_1 = 1/6$ and $\varepsilon_2 = 1/\log \log \Delta$ to obtain an algorithm which computes a weak $\bigl( \bigl( \frac{1}{2} - 1/\log \log \Delta \bigr) d(v) - 2 \bigr)$-orientation and runs in time $O((\log \log \Delta)^{1.71} \cdot \log n)$. In this step, we plug in $\varepsilon_1 = 1/6$ to obtain the exponent
\[-\log_2^{-1} \beta = -\log_2^{-1} \bigl(\tfrac{1}{2} + \tfrac{1}{6}\bigr) < 1.71.\]
\item Using the construction twice more, once with $\varepsilon_1=1/\log\log\Delta$ and $\varepsilon_2=1/\log\Delta$ and once with $\varepsilon_1 = 1/\log \Delta$ and $\varepsilon_2 = 1 / \Delta$, yields a
\[\text{weak }\left( \left( \frac{1}{2} - \frac{1}{\Delta} \right) d(v) - 2 \right)\text{-orientation algorithm}\] that runs in time $O(\Delta \cdot \log \Delta \cdot (\log \log \Delta)^{1.71} \cdot \log n)$.
\end{enumerate}
Before we continue with the formal proofs of \Cref{lemma:weakTransformation,thm:orientationAlgorithms} we prove the following technical result that we use to simplify running times; it is proved with a Taylor expansion.
\begin{lemma}
\label[lemma]{lemma:taylor}
Let $0<\varepsilon \leq 1/6$. Then, $-\log^{-1}_2{(\frac{1}{2} + \varepsilon)} \leq 1 + 24\varepsilon$.
\end{lemma}
\begin{proof}
Let $z = 2\varepsilon/\bigl(\frac{1}{2} + \varepsilon\bigr) \leq 4\varepsilon$.
Notice that $2 - z = \bigl(\frac{1}{2} + \varepsilon\bigr)^{-1}$.
By writing $\log_2^{-1}(2 - z)$ using Taylor series at $2$, we get that
\begin{align*}
-\log_2^{-1}\bigl(\tfrac{1}{2} + \varepsilon \bigr) & = \log_2^{-1}(2-z) = \ln(2) \ln^{-1}(2 - z) \\
& = \ln(2)\biggl(\ln(2)-\sum_{k=1}^{\infty}\frac{1}{k\cdot2^k}z^k\biggr)^{-1}\\
& \leq \biggl(1-\ln^{-1} 2 \sum_{k=1}^{\infty}\frac{1}{2^k }z^k\biggr)^{-1}\\
& \stackrel{|z|<1}{\leq} \biggl(1-z\ln^{-1} 2\sum_{k=1}^{\infty}\frac{1}{2^k }\biggr)^{-1}\\
& = (1-z\ln^{-1}2)^{-1} \leq 1+z\cdot\frac{\ln^{-1}2}{1-z\ln^{-1}2}\\
& \stackrel{\varepsilon \leq 1/6}{\leq} 1+6z\stackrel{z \leq 4\varepsilon} \leq 1+24\varepsilon.
\qedhere
\end{align*}
\end{proof}
In the following proof we perform steps 1--3 of the aforementioned agenda.
\begin{proof}[Proof of \Cref{lemma:weakTransformation}]
Let $i = \log_2(\varepsilon_2) / \log_2{(1/2+\varepsilon_1)}$ which is bounded by $(1 + 24\varepsilon_1) \log_2(1/\varepsilon_2)$ due to \Cref{lemma:taylor}; thus it is sufficient to show the left hand side of $(\ref{eqn:runtimeweakTransformation})$.
By applying \Cref{cor:iterative} with parameter $i$ and algorithm $\mathcal{A}$, we get a distributed algorithm that finds a
\[\left(\left(1/2 + \varepsilon_1\right)^i d(v) + 4,\, 2^{i+5} \right)\text{-path decomposition}\] in time
\[
O\bigl(2^i \cdot T(n, \Delta)\bigr) = O\Bigl(\varepsilon_2^{\log_2^{-1}( \frac{1}{2}+\varepsilon_1 )}\cdot T(n,\Delta)\Bigr).
\]
The degree of node $v$ in the path decomposition is upper bounded by
\[\bigl(\tfrac{1}{2} + \varepsilon_1\bigr)^i d(v)+4=\varepsilon_2d(v)+4.\]
Now \Cref{lemma:fromPathDecompToStrongOrient} yields a weak $\bigl( \frac{1}{2}\bigl(1 - \varepsilon_2\bigr) d(v) - 2\bigr)$-orientation algorithm with the same running time; in particular, this is a weak $\bigl( \bigl(\frac{1}{2} - \varepsilon_2\bigr) d(v) - 2\bigr)$-orientation algorithm.
\end{proof}
We close the section by performing steps 4--5 of the agenda. Note that the theorem is more general than what was outlined in the agenda as it contains an additional parameter $\delta$ which can be used to tune the running time at the cost of the quality of the weak orientation algorithm.
\begin{proof}[Proof of \Cref{thm:orientationAlgorithms}]Each statement is proven by applying \Cref{lemma:weakTransformation} with different values for $\varepsilon_1$ and $\varepsilon_2$.
\begin{enumerate}[label=(\alph*)]
\item
We obtain the algorithm $\mathcal{A}$ by applying \Cref{lemma:weakTransformation} with the weak $\lfloor \Delta/3\rfloor$-orientation algorithm from \Cref{lemma:weakDelta3}, that is with $\varepsilon_1=1/6$, and with $\varepsilon_2 = 1/\log\log (\Delta / \delta)$.
\item Algorithm $\mathcal{B}$ is obtained by applying \Cref{lemma:weakTransformation} with the algorithm $\mathcal{A}$ from part (a) as input (i.e., $\varepsilon_1 = 1/\log\log (\Delta / \delta)$) and with $\varepsilon_2 = 1/\log (\Delta / \delta)$.
\item Algorithm $\mathcal{C}$ is obtained by applying \Cref{lemma:weakTransformation} with the algorithm $\mathcal{B}$ from part (b) as input (i.e., $\varepsilon_1=1/\log (\Delta / \delta)$) and with $\varepsilon_2=1/(\Delta / \delta) = \delta / \Delta$.
\qedhere
\end{enumerate}
\end{proof}
\subsection{Short and Low Degree Path Decompositions Fast}
Our higher level goal is to compute a path decomposition where the degree is as small as possible to obtain a directed split with the discrepancy as small as possible (with methods similar to \Cref{lemma:fromPathDecompToStrongOrient}, also see the proof of \Cref{thm:mainSplitting}).
As we will show in the next theorem, with the methods introduced in this section and the appropriate choice of parameters, we can push the maximum degree of the path decomposition down to $\varepsilon d(v) + 4$ for any $\varepsilon>0$. This is the true limit of this approach because we cannot compute weak $2$-orientations of $4$-regular graphs in sublinear time (see \Cref{thm:weak2orientation-lb}).
\begin{theorem}
\label[theorem]{lemma:mainPathDecomposition}
Let $G=(V,E)$ be a multigraph with maximum degree $\Delta$. For any $\varepsilon>0$ there is a deterministic distributed algorithm which computes a $(\delta(v),O(1/\varepsilon))$-path decomposition in time
$O\bigl(\alpha \cdot \log \alpha \cdot (\log\log\alpha)^{1.71} \cdot \log n\bigr)$, where $\alpha=2/\varepsilon$ and $\delta(v)=\varepsilon d(v)+3$ if $\varepsilon d(v)\geq1$ and $\delta(v)=4$ otherwise.
\end{theorem}
\begin{proof} Apply \Cref{cor:iterative} with algorithm $\mathcal{B}$ from \Cref{thm:orientationAlgorithms}, $\delta=\Delta/\alpha$, and
\[i=\frac{\log{\alpha^{-1}}}{\log(1/2+1/\log (\alpha))}.\]
This implies a path decomposition with degrees $\lfloor\alpha^{-1} d(v)+4\rfloor=\lfloor \varepsilon d(v)/2+4\rfloor$. If $\varepsilon d(v)\geq 1$ this is at most $\varepsilon d(v)+3$; if $\varepsilon d(v)<1$ it is at most $4$.
The length of the longest path is upper bounded by $O(2^i)=O\bigl(\alpha^{1+24/\log{\alpha}}\bigr)=O(\alpha)$ where we used \Cref{lemma:taylor}.
The runtime is bounded by
\[O\bigl(2^i \cdot T_{\mathcal{B}}(n,\Delta)\bigr) = O\bigl(\alpha \cdot \log \alpha \cdot \left(\log\log \alpha\right)^{1.71} \cdot \log n \bigr),\]
where $T_{\mathcal{B}}(n,\Delta)$ is the running time of algorithm $\mathcal{B}$.
\end{proof}
Choosing $\varepsilon=1/(2\Delta)$ in \Cref{lemma:mainPathDecomposition} yields the following corollary.
\begin{corollary}[Constant Degree Path Decomposition]
There is a deterministic algorithm which computes a
$(4,O(\Delta))$-path decomposition in time $O\bigl(\Delta\cdot \log\Delta\cdot (\log\log\Delta)^{1.71}\cdot \log n\bigr)$.
\end{corollary}
\begin{remark}
For any positive integer $k$ smaller than $\logStar(\alpha)\pm O(1)$ one can improve the runtime of \Cref{lemma:mainPathDecomposition} to
$O\bigl(\alpha\cdot (\log^{(k)}\alpha)^{0.71}\cdot\log n\cdot \prod_{j=1}^k\log^{(j)} \alpha\bigr)$, where $\log^{(j)}(\cdot)$ denotes the $j$ times iterated logarithm, $\alpha=2/\varepsilon$ and the constant in the $O$-notation grows exponentially in $k$. This essentially follows from a version of \Cref{thm:orientationAlgorithms} that turns a weak $\bigl((1/2-1/\log^{(k)}\alpha)d(v)- 2\bigr)$-orientation algorithm into a weak $\bigl((1/2-1/\log \alpha)d(v)- 2\bigr)$-orientation algorithm in $k-1$ iterations.
\end{remark}
\section[\texorpdfstring{Degree $3$: Sinkless and Sourceless Orientations}{Degree 3: Sinkless and Sourceless Orientations}]{\texorpdfstring{Degree \boldmath$3$: Sinkless and Sourceless Orientations}{Degree 3: Sinkless and Sourceless Orientations}}\label[section]{sec:basecase}
The results of this section are used in the proof of \Cref{thm:mainSplitting} and in \Cref{sec:outdegtwo}.
First note that an arbitrary consistent orientation of the paths in the best path decomposition of \Cref{sec:shortPathsDecompositions} would result in a splitting in which each node $v$ has discrepancy at most $\varepsilon\cdot d(v) + 4$. In the case of directed splitting we slightly tune this in the proof of \Cref{thm:mainSplitting} by consistently orienting the paths in such a way that each node has at least one outgoing and one incoming path. As the graph corresponding to the path decomposition is a low-degree graph this is the same as finding sinkless and sourceless orientations in low-degree graphs; in this section we show how to compute these. The most challenging case is to make sure that nodes of degree three also end up with one outgoing and one incoming edge.
The main results of this section are \Cref{lemma:sinkless-sourceless} and the immediate \Cref{corollary:sinkless-sourceless}.
To prove the lemma we will first concentrate on high-girth graphs; then, in \Cref{sec:shortCycles}, we show how to handle short cycles and complete the proof of the following lemma.
\begin{lemma}[Sinkless and Sourceless Orientation]\label[lemma]{lemma:sinkless-sourceless}
The following problem can be solved in time $O(\log n)$ with deterministic algorithms and $O(\log \log n)$ with randomized algorithms:
given a $3$-regular multigraph, find a sinkless and sourceless orientation.
\end{lemma}
With a simple reduction (similar to \Cref{lemma:weakmulti}), we can generalize these results to non-regular graphs as well:
\begin{corollary}[Sinkless and Sourceless Orientation]\label[corollary]{corollary:sinkless-sourceless}
The following problem can be solved in time $O(\log n)$ with deterministic algorithms and $O(\log \log n)$ with randomized algorithms:
given any multigraph, find an orientation such that all nodes of degree at least $3$ have outdegree and indegree at least $1$.
\end{corollary}
\begin{proof}
Let $G$ be any multigraph. First, we split any node of degree $k+3$ into $k$ nodes of degree $1$ and one node of degree $3$. We also split each node of degree $k < 3$ into $k$ nodes of degree $1$. Now we are left with a graph $G'$ in which each node has degree $1$ (these are \emph{leaf nodes}) or $3$ (these are \emph{internal nodes}). Finally, we augment each leaf node with a gadget in order to obtain a $3$-regular graph $G''$ (\Cref{fig:basecase-regularity}).
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/gadget-1-black.pdf}
\caption{Given a general graph $G$, we first split the nodes to obtain a graph $G'$ with degrees $1$ and $3$, and then add gadgets to obtain a $3$-regular graph $G''$.}\label[figure]{fig:basecase-regularity}
\end{figure}
We then invoke \Cref{lemma:sinkless-sourceless} to find a sinkless and sourceless orientation in $G''$. We delete the gadgets to get back to graph $G'$; now each internal node has outdegree and indegree at least $1$. Finally, we revert the splitting to get back to graph $G$; now for each node of degree at least $3$, there is one internal node that contributes at least one outgoing and at least one incoming edge. Furthermore, computation in $G''$ can be simulated in $G$ with only constant-factor overhead.
\end{proof}
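For concreteness, the splitting part of this reduction can be sketched in Python as follows (a sequential sketch of ours with hypothetical names; the gadget augmentation of \Cref{fig:basecase-regularity} is omitted). Each node is replaced by one degree-$3$ copy plus leaves, or by leaves only if its degree is below $3$, and the returned map allows an orientation of the split graph to be projected back to the original multigraph.
\begin{verbatim}
def split_nodes(edges):
    """Split every node of a multigraph into copies of degree 3 or 1.
    edges: list of (u, v) pairs.  Returns the new edge list and a map
    from each copy back to its original node."""
    ports = {}                       # node -> list of (edge index, side)
    for idx, (u, v) in enumerate(edges):
        ports.setdefault(u, []).append((idx, 0))
        ports.setdefault(v, []).append((idx, 1))
    owner, new_end = {}, {}
    for v, plist in ports.items():
        if len(plist) >= 3:          # one degree-3 copy, the rest leaves
            groups = [plist[:3]] + [[p] for p in plist[3:]]
        else:                        # degree < 3: only leaves
            groups = [[p] for p in plist]
        for j, grp in enumerate(groups):
            copy = (v, j)
            owner[copy] = v
            for p in grp:
                new_end[p] = copy
    new_edges = [(new_end[(i, 0)], new_end[(i, 1)])
                 for i in range(len(edges))]
    return new_edges, owner
\end{verbatim}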
In the proof of \Cref{lemma:sinkless-sourceless}, we will use the following observations:
\begin{itemize}
\item It is easier to find a sinkless and sourceless orientation if we only care about nodes of degree at least $6$.
\item In high-girth graphs, we can make the degrees larger with the help of contractions, and then it is sufficient to find orientations that make nodes of degree at least $6$ happy. (These contractions are different from the contractions in \Cref{sec:shortPathsDecompositions}; they resemble the edge contractions known from the construction of graph minors.)
\item In low-girth graphs, we can exploit short cycles to find orientations, eliminate them, and then it is sufficient to find orientations in high-girth graphs.
\end{itemize}
\subsection{Degree 6: Sinkless and Sourceless Orientation}
Let us now start with simple observations related to the case of nodes of degree at least $6$. For brevity, let us write $T_{\mathsf{so}}$ for the time complexity of sinkless orientations in the model that we study: $T_{\mathsf{so}} = O(\log n)$ for deterministic algorithms and $T_{\mathsf{so}} = O(\log \log n)$ for randomized algorithms. We start with the following simple lemma (cf.\ \Cref{lemma:weakDelta3}).
\begin{lemma}[Degree 6, Outdegree 2]\label[lemma]{lemma:deg6-outdeg2}
The following problem can be solved in time $O(T_{\mathsf{so}})$:
given a graph, find an orientation such that all nodes of degree at least $6$ have outdegree at least $2$.
\end{lemma}
\begin{proof}
Split all nodes of degree at least $6$ into two nodes of degree at least $3$. Apply \Cref{lemma:weakmulti} to find an orientation in which all nodes of degree at least $3$ have outdegree at least $1$. Merge the nodes back.
\end{proof}
A useful interpretation of the above lemma is that each node with degree at least 6 \emph{owns} at least two of its incident edges, i.e., the outgoing edges. Now each node can freely re-orient the edges that it owns whichever way it wants. In particular, each node of degree at least $6$ can make sure that there is at least one outgoing edge and at least one incoming edge:
\begin{corollary}[Degree 6, Sinkless and Sourceless]\label[corollary]{corollary:deg6-sinkless-sourceless}
The following problem can be solved in time $O(T_{\mathsf{so}})$:
given a graph, find an orientation such that all nodes of degree at least $6$ have outdegree and indegree at least $1$.
\end{corollary}
\subsection{High-Girth: Sinkless and Sourceless Orientation}
Now we amplify the result of \Cref{corollary:deg6-sinkless-sourceless} so that we can find sinkless and sourceless orientations also in low-degree graphs---at least if we have a high-girth graph. We will now prove the following result in this section:
\begin{lemma}[High Girth, Sinkless and Sourceless]\label[lemma]{lemma:high-girth-sinkless-sourceless}
There is a constant $g$ such that the following problem can be solved in time $O(T_{\mathsf{so}})$:
given a graph of girth at least $g$, find an orientation such that all nodes of degree at least $3$ have outdegree and indegree at least $1$.
\end{lemma}
\paragraph{Proof Overview.}
Our overall plan is as follows. Given any graph $G$ of girth at least $g$, we perform a sequence of modifications (both types of modifications are explained in detail below this proof overview) that change the degree distribution:
\begin{itemize}[noitemsep]
\item Splitting for $d=3$: all nodes have degree 1 or exactly 3.
\item Contraction from $d=3$ to $d'=4$: all nodes have degree 1 or at least 4.
\item Splitting for $d=4$: all nodes have degree 1 or exactly 4.
\item Contraction from $d=4$ to $d'=6$: all nodes have degree 1 or at least 6.
\item Splitting for $d=6$: all nodes have degree 1 or exactly 6.
\end{itemize}
Then we apply \Cref{corollary:deg6-sinkless-sourceless} to find an orientation such that degree-$6$ nodes have outdegree and indegree at least $1$. Finally, we revert all splitting and contraction steps to recover an orientation of the original graph with the desired properties.
We will assume that $g$ is sufficiently large so that each contraction is applied to a tree-like neighborhood (in particular, contractions lead neither to multiple parallel edges nor to self-loops). The splitting step does not create any short cycles.
\paragraph{Splitting Step.}
Given any graph and any value $d > 1$, we can apply the splitting idea from \Cref{corollary:sinkless-sourceless} to obtain a graph in which we have \emph{leaf nodes} of degree $1$ and \emph{internal nodes} of degree $d$.
The edges that join a pair of internal nodes are called \emph{internal edges}; all other edges are \emph{leaf edges}. If at any point we obtain a connected component that does not contain any internal edges, such a component is a star and we can find a valid orientation trivially in constant time. Hence let us focus on the components that contain some internal edges.
\paragraph{Contraction Step.}
Let $d' = 2d-2$. We assume that we have a graph in which all nodes are either leaf nodes or internal nodes of degree $d$, and we will show how to modify the graph so that the internal nodes have degree at least $d'$.
First, find a maximal matching $M$ of the internal edges (this is possible in time $O(\log^* n) = o(T_{\mathsf{so}})$ with a deterministic algorithm, as we have a constant maximum degree). Then each internal node $u$ that is not matched picks arbitrarily one of its matched neighbors $v$, and adds the edge to $v$ to a set $X$. Now, $Y = M \cup X$ is a collection of internal edges that covers all internal nodes. Furthermore, each connected component in the graph induced by $Y$ has a constant diameter; it consists of an edge $e \in M$ and possibly some edges adjacent to $e$.
Now each edge $e \in M$ labels the edges of $X$ adjacent to $e$ arbitrarily with distinct labels $1, 2, \dotsc$. This way we obtain a partitioning of $Y$ into subsets $Y_0, Y_1, \dotsc, Y_k$ for some $k = O(1)$, where $Y_0 = M$ and $Y_i$, $i > 0$, consists of the edges of $X$ with label $i$.
The key observation is that each $Y_i$ is a matching. Now we do a sequence of $k+1$ edge contractions: we contract first all edges of $Y_k$, then all edges of $Y_{k-1}$, etc. For each edge that we contract, we delete the edge and identify its endpoints.
Note that all internal nodes take part in at least one edge contraction that merges a pair of internal nodes. Hence all internal nodes will have degree at least $d' = 2d-2$ after contractions. Furthermore, just before we contract the edges of $Y_i$ the edges of $Y_i$ still form a matching despite the contractions for $Y_k,\ldots,Y_{i+1}$ that we have already performed (for this property to hold it is crucial that we begin contracting edges in $Y_k$ and not the edges in $Y_0$). Thus we only shorten distances by a constant factor; the new graph $G'$ that we obtain can be still simulated efficiently with a distributed algorithm that runs in the original graph $G$.
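The following sequential Python sketch (ours, purely illustrative) mirrors the contraction step: a greedy maximal matching stands in for the $O(\log^* n)$ distributed routine, contractions are simulated with a union-find structure, and we assume a component that is not a star, so that every unmatched internal node indeed has a matched neighbor.
\begin{verbatim}
def contract_step(internal_edges, internal_nodes):
    """Return a function mapping every node to its contracted super-node."""
    # 1. Greedy maximal matching M on the internal edges.
    matched, M = set(), []
    for (u, v) in internal_edges:
        if u != v and u not in matched and v not in matched:
            M.append((u, v)); matched.update((u, v))
    # 2. Every unmatched internal node picks an edge to a matched neighbor.
    X = []
    for w in internal_nodes:
        if w not in matched:
            X.append(next(e for e in internal_edges
                          if w in e and (set(e) - {w}) & matched))
    # 3. Each matching edge labels its adjacent X-edges 1, 2, ...; Y_0 = M.
    Y = {0: list(M)}
    for (u, v) in M:
        for i, e in enumerate((f for f in X if u in f or v in f), start=1):
            Y.setdefault(i, []).append(e)
    # 4. Contract Y_k, ..., Y_1, Y_0 in this order; each Y_i is still a
    #    matching at the moment it is contracted.
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]; a = parent[a]
        return a
    for i in sorted(Y, reverse=True):
        for (u, v) in Y[i]:
            parent[find(u)] = find(v)
    return find
\end{verbatim}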
\paragraph{Orientation.}
After a sequence of split and contract operations, we have a graph $H$ in which each node has degree $1$ or at least $6$. Then we apply \Cref{corollary:deg6-sinkless-sourceless} on $H$ and obtain an orientation of $H$ in which every node with degree at least $6$ has outdegree and indegree at least $1$.
\paragraph{Reverting Splits \& Contractions.}
Now we need to revert the splitting and contraction operations to turn the orientation of $H$ into an orientation of $G$. Reverting a split is trivial, but reverting a contraction needs more care to make sure that we maintain the property that all internal nodes have at least one outgoing and one incoming edge.
Consider an edge $e = \{u,v\}$ that was contracted to a single node $x$. Node $x$ is incident to at least one outgoing edge and at least one incoming edge. Revert the contraction of edge $e$ (preserving the orientations, but leaving the new edge $e$ unoriented; note that all other edges incident to $u$ or $v$ are oriented). Now if both $u$ and $v$ are happy we can orient $e$ arbitrarily. Otherwise at least one of them is unhappy; assume that $u$ is unhappy. We have the following cases:
\begin{itemize}
\item Node $u$ is incident to only outgoing edges. Then node $v$ is incident to at least one incoming edge. Orient $e$ from $v$ to $u$. Now both $u$ and $v$ have both incoming and outgoing edges, and hence both of them are happy.
\item Node $u$ is incident to only incoming edges. Orient $e$ from $u$ to $v$; again, both of them will be happy.
\end{itemize}
Hence we only need to invoke \Cref{corollary:deg6-sinkless-sourceless} once, in a virtual graph that can be simulated efficiently in the original network, and then do a constant number of additional operations. This completes the proof of \Cref{lemma:high-girth-sinkless-sourceless}.
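The above case analysis for a single reverted contraction can be written out explicitly; in the following sketch of ours, the four counts refer to the already oriented edges at $u$ and $v$ once the contraction of $e$ has been reverted but $e$ itself is still unoriented.
\begin{verbatim}
def orient_uncontracted_edge(out_u, in_u, out_v, in_v):
    """Orientation of the reverted edge e = {u, v}: 'u->v' or 'v->u'.
    Since the contracted node x had indegree and outdegree at least 1,
    the missing direction at one endpoint is present at the other."""
    u_happy = out_u >= 1 and in_u >= 1
    v_happy = out_v >= 1 and in_v >= 1
    if u_happy and v_happy:
        return 'u->v'            # arbitrary orientation is fine
    if not u_happy:
        if in_u == 0:            # u has only outgoing edges ...
            return 'v->u'        # ... so v already has an incoming edge
        return 'u->v'            # u has only incoming edges
    if in_v == 0:                # symmetric case: v is the unhappy endpoint
        return 'u->v'
    return 'v->u'
\end{verbatim}
Note that the case in which both endpoints are unhappy is also handled by the first branch: if $u$ has only outgoing and $v$ only incoming edges, orienting $e$ from $v$ to $u$ makes both happy.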
\subsection{Short Cycles: Sinkless and Sourceless Orientation}\label[section]{sec:shortCycles}
The only concern that remains to prove \Cref{lemma:sinkless-sourceless} is the existence of short cycles (a special case of which is a $2$-cycle formed by a pair of parallel edges in a multigraph, and a $1$-cycle formed by a self-loop). As we will see, the existence of short cycles actually makes the problem easier to solve; only nodes that are not part of any short cycle need nontrivial computational effort.
\paragraph{Identification of Short Cycles.}
Let $g = O(1)$ be the constant from \Cref{lemma:high-girth-sinkless-sourceless}. Given a $3$-regular multigraph $G$, we first identify all cycles of length at most $g$. This is possible in time $O(1)$. Then for each cycle $C$, we assign a unique numerical identifier $i(C)$. Each cycle can for example be uniquely labelled by the sequence of node identifiers that result when starting at the highest ID node of the cycle and traversing the cycle in one of the two directions. We also pick arbitrarily an orientation $d(C)$ for the cycle.
Now let $S \subseteq E$ be the set of the edges that are involved in at least one cycle of length at most $g$, and let $X \subseteq V$ be the set of nodes involved in at least one cycle of length at most $g$. We will first orient the edges of $S$ so that all nodes in $X$ become happy, i.e., they have at least one outgoing edge and at least one incoming edge in $S$. To achieve this, we will first design a simple centralized, sequential algorithm $A$ that solves this, and then observe that we can develop an efficient distributed algorithm $A'$ that calculates in constant time the same result as what $A$ would output.
\paragraph{Centralized Algorithm.}
Algorithm $A$ proceeds as follows. We take the list of all short cycles, order them by the unique identifiers $i(C)$, and process the cycles in this order. Whenever we process some cycle $C$, we orient all edges of $C \subseteq S$ in a consistent manner, using orientation $d(C)$. While doing this, we may re-orient some edges that we had already previously oriented. Nevertheless, we make progress:
\begin{itemize}
\item After processing cycle $C$, all nodes along $C$ are happy (regardless of whether they were already happy previously).
\item All nodes not along $C$ that were happy before this step are also happy after this step (we did not touch any of their incident edges).
\end{itemize}
Hence after going through the list of all cycles, all edges of $S$ are oriented and all nodes of $X$ are happy.
\paragraph{Distributed Algorithm.}
The centralized algorithm is clearly inefficient for our purposes, but for each edge $e \in S$, we can directly compute what is its final orientation in the output of algorithm $A$: simply consider all cycles $C$ with $e \in C$, pick the cycle $C^*_e$ that has the largest identifier among all cycles that pass through $e$, and orient $e$ according to $d(C^*_e)$. This is easy to implement in constant time, as all cycles of interest are of constant length.
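For simple graphs, this rule can be sketched as follows (a sketch of ours; parallel edges and self-loops would require additional bookkeeping of edge multiplicities). Node identifiers are assumed to be comparable; the canonical label computed below plays the role of $i(C)$ and its traversal direction the role of $d(C)$.
\begin{verbatim}
def orient_short_cycle_edges(adj, g):
    """adj: dict node -> list of neighbors; g: the girth constant.
    Returns an orientation for every edge on a cycle of length <= g."""
    def short_cycles_through(u, v):
        cycles = []              # simple cycles of length <= g via {u, v}
        def dfs(path):
            for y in adj[path[-1]]:
                if y == u and len(path) >= 3:
                    cycles.append(tuple(path))
                elif y not in path and len(path) < g:
                    dfs(path + [y])
        dfs([u, v])
        return cycles
    def cycle_id(c):             # canonical identifier i(C)
        k = c.index(max(c))      # rotate so the largest node comes first,
        fwd = c[k:] + c[:k]      # then fix one of the two directions
        return max(fwd, (fwd[0],) + tuple(reversed(fwd[1:])))
    orientation = {}
    for u in adj:
        for v in adj[u]:
            if u < v:
                cs = short_cycles_through(u, v)
                if not cs:
                    continue     # edge not in S; handled in the next phase
                cid = cycle_id(max(cs, key=cycle_id))
                i = cid.index(u)            # orient {u, v} along d(C*)
                fwd = cid[(i + 1) % len(cid)] == v
                orientation[(u, v)] = (u, v) if fwd else (v, u)
    return orientation
\end{verbatim}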
\paragraph{Remaining Nodes.}
Now all nodes of $X$ are happy. We delete all edges of $S$ and also delete all isolated nodes; this way we obtain a graph $G'$ in which all nodes have degree $1$ or $3$ and all edges are unoriented. Then we can apply \Cref{lemma:high-girth-sinkless-sourceless} to make nodes of degree $3$ happy. Finally, we put back the edges of $S$ to make all other nodes happy. This completes the proof of \Cref{lemma:sinkless-sourceless}.
\section{Degree 5: Outdegree Two}\label[section]{sec:outdegtwo}
One final piece is still missing: in \Cref{sec:shortPathsDecompositions} we used the following result but postponed its proof:
\lemmaFirstoutdegtwo*
We will simplify the problem slightly by first focusing on regular graphs. In this section we will prove the following statement:
\begin{lemma}\label[lemma]{lemma:outdeg-2}
The following problem can be solved in time $O(\log n)$ with deterministic algorithms and $O(\log \log n)$ with randomized algorithms:
given a $5$-regular multigraph, find an orientation such that all nodes have outdegree at least $2$.
\end{lemma}
The same reduction as in the proof of \Cref{corollary:sinkless-sourceless} then generalizes this result to non-regular graphs, and \Cref{lemma:firstoutdeg-2} follows directly.
\paragraph{Half-Path Decompositions.}
To prove \Cref{lemma:outdeg-2}, we start by introducing the concept of \emph{half-path decompositions}. In such a decomposition, each edge is divided into two \emph{half-edges} and we require that, for each node, exactly two incident half-edges are labeled with the color \emph{red}; all other half-edges are \emph{black}. We say that we have a \emph{decomposition with half-paths of length $k$} if the red half-edges form paths (never cycles), and each such path consists of at most $k$ half-edges; there is no requirement on the black half-edges.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/orientations_rb_labellings.pdf}
\caption{Orientations and half-path decompositions in a $5$-regular graph. (a)~A weak $2$-orientation. Each node has selected exactly $2$ incident edges; these are indicated with a blue color. (b)~A decomposition with half-paths of length $2$. (c)~A decomposition with half-paths of length at most $8$.}\label[figure]{fig:basecase-halfpath}
\end{figure}
Half-path decompositions are closely related to weak $2$-orientations; see \Cref{fig:basecase-halfpath}. If we could find a weak $2$-orientation, each node could simply pick two outgoing edges, label their sides of these edges red, and we would have a decomposition with half-paths of length $2$. Conversely, given a decomposition with half-paths of length $2$, we could easily find a weak $2$-orientation: an edge that is half-red is oriented from red half to black half, and all other edges (which are fully black) are oriented arbitrarily.
\paragraph{Proof Idea and Intuition.}
Half-paths of length $k > 2$ can be interpreted as a relaxation of weak $2$-orientations. To find a weak $2$-orientation, we will proceed in two steps:
\begin{itemize}
\item Find a decomposition with half-paths of length at most $8$.
\item Use such a decomposition to find a weak $2$-orientation (in the proof of \Cref{lemma:outdeg-2}).
\end{itemize}
To get some intuition on the basic idea for computing half-path decompositions, let us first consider a simplified setting. Assume that we have a simple $5$-regular graph $G$, and assume that we are given a \emph{perfect matching} $M$. Now we could simply remove $M$, and we would be left with a $4$-regular graph $G'$. Then we could apply \Cref{lemma:weak1} to find a sinkless orientation in $G'$. Finally, we could color the half-edges of $G$ as follows:
\begin{itemize}
\item For each edge $e \in M$, label both of its half-edges red. This contributes one red half-edge per node.
\item For each node $v$, pick one outgoing edge in the orientation of $G'$, and color its half at $v$ red and the other half black. This also contributes one red half-edge per node.
\end{itemize}
We would now have a decomposition with half-paths of length $4$. Unfortunately, we cannot find a perfect matching efficiently. However, in the following lemma we will show that it is sufficient to find a \emph{maximal matching} $M$. This may result in some unmatched nodes, but the key insight is that such nodes form an independent set, and we can apply a split-and-contract trick to label those nodes; this will result in half-paths of length at most $8$.
\begin{lemma}[Half-Path Decomposition]\label[lemma]{lemma:halfpath}
The following problem can be solved in time $O(T_{\mathsf{so}})$:
given a $5$-regular multigraph, find a decomposition with half-paths of length at most $8$.
\end{lemma}
\begin{proof}
Let $G = (V,E)$ be a $5$-regular multigraph. Let $V_L$ be the set of nodes that have at least one self-loop; for each such node, we pick one loop and add it to $L\subseteq E$.
Then find a maximal matching $M$ in the graph induced by the nodes $V \setminus V_L$; this is possible in time $O(\log^* n) = o(T_{\mathsf{so}})$ with a deterministic algorithm, as we have a constant maximum degree. Let $V_M$ be the set of matched nodes, and let $V_U$ be the set of unmatched nodes. Note that $V_U$ is an independent set of nodes and none of these have any self-loops.
We split each node of $V_U$ arbitrarily into two parts: a node of degree $2$ and a node of degree $3$. Let $V_2$ be the set of degree-$2$ nodes, and let $V_3$ be the set of degree-$3$ nodes formed this way, and write $V_5 := V_L \cup V_M$ for all other nodes (which have degree $5$).
Note that for each $v \in V_2$, both of its neighbors are in $V_5$. Now we eliminate the nodes of $V_2$ by contracting each path of the form $V_5$--$V_2$--$V_5$; let $C$ be the set of edges that result from such contractions. We have the following setting:
\begin{itemize}[noitemsep]
\item $C$, $L$, and $M$ are disjoint sets of edges.
\item The endpoints of $C$, $L$, and $M$ are in $V_5$.
\end{itemize}
Now we remove the edges of $L$ and $M$. We have a multigraph $G'$ with the following sets of nodes:
\begin{itemize}[noitemsep]
\item $V_L$: nodes of degree $3$ (they lost two endpoints when we eliminated self-loops).
\item $V_M$: nodes of degree $4$ (they lost one endpoint when we eliminated the matching).
\item $V_3$: nodes of degree $3$.
\end{itemize}
We find a sinkless orientation in $G'$, using e.g. \Cref{corollary:sinkless-sourceless}. Then all nodes of $V_5=V_L\cup V_M$ pick one outgoing edge and label this half-edge red. We have:
\begin{itemize}[noitemsep]
\item Nodes of $V_L$ and $V_M$ are incident to exactly one red half-edge.
\item Nodes of $V_3$ are not incident to any red half-edges.
\item Each edge has at most one red half.
\item The longest red path has length $1$.
\end{itemize}
Then we put back $M$ and label both halves of these edges red. We also put back $L$ and label exactly one half of each of these loops red. We have:
\begin{itemize}[noitemsep]
\item Nodes of $V_L$ and $V_M$ are incident to exactly two red half-edges.
\item Nodes of $V_3$ are not incident to any red half-edges.
\item The longest red path has length $4$ (an edge from $M$ plus two half-edges).
\end{itemize}
Then we revert the contractions and put back the nodes of set $V_2$. Note that each edge of $C$ had at most one red half-edge. We apply the following rules to color the new half-edges:
\begin{itemize}[noitemsep]
\item black--black becomes black--red--red--black,
\item red--black becomes red--red--red--black.
\end{itemize}
We obtain:
\begin{itemize}[noitemsep]
\item Nodes of $V_L$ and $V_M$ are incident to exactly two red half-edges.
\item Nodes of $V_3$ are not incident to any red half-edges.
\item Nodes of $V_2$ are incident to exactly two red half-edges.
\item The longest red path has length $8$.
\end{itemize}
Finally, we combine each pair of $u \in V_2$ and $v \in V_3$ to restore the original multigraph $G$. Here $u$ contributes two red half-edges and $v$ does not contribute any red half-edges. Overall, all nodes of $G$ are incident to exactly two red half-edges.
\end{proof}
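The recoloring rule for a reverted edge of $C$ can be made explicit as follows (a small sketch of ours; recall that every edge of $C$ carries at most one red half, so the red--red case cannot occur).
\begin{verbatim}
def recolor_reverted_C_edge(half_at_u, half_at_v):
    """Expand a contracted edge {u, v} of C back into the path u - w - v
    through the restored V_2 node w; returns the four half-edge colors
    (at u, at w towards u, at w towards v, at v)."""
    if half_at_u == 'red':                       # red--black
        return ('red', 'red', 'red', 'black')
    if half_at_v == 'red':                       # black--red (mirrored)
        return ('black', 'red', 'red', 'red')
    return ('black', 'red', 'red', 'black')      # black--black
\end{verbatim}
In all three cases the restored middle node receives exactly its two red half-edges, as required.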
Now we are ready to prove \Cref{lemma:outdeg-2}. Thanks to a half-path decomposition, this is straightforward. Incidentally, we get a strong $2$-orientation for free here, even though we only need a weak $2$-orientation.
\begin{proof}[Proof of \Cref{lemma:outdeg-2}]
Given a $5$-regular multigraph $G$, we first find a decomposition with half-paths of length at most $8$. Split each node into red and black parts: a degree-$2$ node incident to two red half-edges, and a degree-$3$ node incident to three black half-edges. Now each path of degree-$2$ nodes consists of at most $4$ nodes. We contract such paths to single edges to obtain a $3$-regular multigraph. We apply \Cref{corollary:sinkless-sourceless} to orient it (this also orients the edges that represent paths), and then undo the contractions where edges in a path are oriented according to the orientation of the edge representing the path. Now we have an oriented multigraph $G'$ in which degree-$3$ nodes have outdegree and indegree at least $1$, and degree-$2$ nodes have outdegree and indegree equal to $1$. Undo the split to get back to multigraph $G$; now each original node has outdegree and indegree at least $2$.
\end{proof}
\section{Directed and Undirected Splits}\label{sec:mainsplitting}
We are now ready to prove our main result:
\thmMainSplitting*
\begin{proof}
For both parts apply
\Cref{lemma:mainPathDecomposition}, which provides a $\big(\delta(v),O(1/\varepsilon)\big)$-path decomposition $\mathcal{P}$ with $\delta(v)=\varepsilon d(v)+3$ if $\varepsilon d(v)\geq1$ and $\delta(v)=4$ otherwise.
\subparagraph{Proof of (b).}
Nodes color the edges of each path of $\mathcal{P}$ alternately red and blue. Because the length of a path in $\mathcal{P}$ is bounded by $O(1/\varepsilon)$ this can be done in $O(1/\varepsilon)$ rounds.
Consider some node $v$ and observe that $v$ has one red and one blue edge for any path where $v$ is not a startpoint or endpoint. Thus the discrepancy of node $v$ is bounded above by $\delta(v)\leq \varepsilon d(v)+4$.
\subparagraph{Proof of (a).}
Use \Cref{corollary:sinkless-sourceless} to compute an orientation $\pi_{\mathcal{P}}$ of $G(\mathcal{P})$ in which all nodes which have degree at least three in $G(\mathcal{P})$ have at least one incoming and one outgoing edge. Then orient paths in the original graph according to $\pi_{\mathcal{P}}$ as in the proof of \Cref{lemma:fromPathDecompToStrongOrient} and denote the resulting orientation of the edges of $G$ with $\pi_G$.
Consider some node $v$ and observe that orienting any path that contains $v$ but where $v$ is not a startpoint or endpoint adds exactly one incoming edge and one outgoing edge for $v$.
Therefore, the discrepancy of the indegrees and outdegrees of $v$ in $\pi_{\mathcal{P}}$ bounds from above the discrepancy of the indegrees and outdegrees in $\pi_G$. The goal is to upper bound this discrepancy as desired.
To this end, let $d_{\mathcal{P}}(v)$ denote the degree of $v$ in $G(\mathcal{P})$.
If $d_{\mathcal{P}}(v)$ is at least three then its discrepancy in $\pi_{\mathcal{P}}$ is bounded by $d_{\mathcal{P}}(v)-2$ as the algorithm from \Cref{corollary:sinkless-sourceless} provided one incoming and one outgoing edge for $v$ in $G(\mathcal{P})$.
Furthermore we obtain that $d_{\mathcal{P}}(v)$ and $d(v)$ have the same parity because $d(v)=d_{\mathcal{P}}(v)+2x$ holds where $x$ is the number of paths that contain $v$ but where $v$ is neither a startpoint nor an endpoint.
Thus we have the following cases.
\begin{itemize}
\item $d_{\mathcal{P}}(v)\geq 3$:
\begin{itemize}
\item$\varepsilon d(v)\geq 1$: $v$'s discrepancy in $\pi_G$ is bounded by $d_{\mathcal{P}}(v)-2\leq \varepsilon d(v)+1$.
\item$\varepsilon d(v)< 1$, $d(v)$ even: $v$'s discrepancy in $\pi_G$ is bounded by $d_{\mathcal{P}}(v)-2\leq 2$.
\item$\varepsilon d(v)< 1$, $d(v)$ odd: As $d_{\mathcal{P}}(v)$ has to be odd and $3\leq d_{\mathcal{P}}(v)\leq \delta(v)=4$ holds we have $d_{\mathcal{P}}(v)= 3$. Thus $v$'s discrepancy in $\pi_G$ is bounded by $d_{\mathcal{P}}(v)-2\leq 1$.
\end{itemize}
\item $d_{\mathcal{P}}(v)< 3$:
\begin{itemize}
\item$d(v)$ even: We have $d_{\mathcal{P}}(v)\in \{0,2\}$ and $v$'s discrepancy in $\pi_G$ is also $0$ or $2$.
\item$d(v)$ odd: We have $d_{\mathcal{P}}(v) = 1$ and $v$'s discrepancy in $\pi_G$ is also $1$.
\end{itemize}
\end{itemize}
In all cases we have that the discrepancy of node $v$ is upper bounded by $\varepsilon d(v)+2$ if $d(v)$ is even and by $\varepsilon d(v)+1$ if $d(v)$ is odd, which proves the result.
\qedhere
\end{proof}
\section[\texorpdfstring{$((2+o(1))\Delta)$-Edge Coloring via Degree Splitting}{((2+o(1))Delta)-Edge Coloring via Degree Splitting}]{\texorpdfstring{\boldmath$((2+o(1))\Delta)$-Edge Coloring via Degree Splitting}{((2+o(1))Delta)-Edge Coloring via Degree Splitting}}\label{sec:edgeColoring}
In this section we will show how to use the undirected edge splitting algorithm to find an edge coloring:
\crlEdgeColoring*
\begin{proof}
The coloring is achieved by iterated application of the undirected splitting result of \Cref{thm:mainSplitting}. Set $\gamma = \frac{\varepsilon}{20\log \Delta}$. In each of $h=\log \frac{\varepsilon\Delta}{18}$ recursive iterations we apply the splitting of \Cref{thm:mainSplitting} with parameter $\gamma$ to each of the parts in parallel, until we reach parts with degree $O(1/\varepsilon)$. If the maximum degree of each part before iteration $i$ is upper bounded by $\Delta_{i-1}$ the maximum degree of the parts is upper bounded by
$\Delta_i\leq \frac{1}{2}(\Delta_{i-1}+\gamma \Delta_{i-1}+4)$ after iteration $i$. An induction on the number of iterations shows that the maximum degree of each part after iteration $h$ is upper bounded by
\begin{align*}\left(\frac{1+\gamma}{2}\right)^h\Delta+2\sum_{i=0}^{h-1}\left(\frac{1+\gamma}{2}\right)^i\leq \left(\frac{1+\gamma}{2}\right)^h\Delta +5=:\Delta_h,\end{align*} where the last inequality follows with the geometric sum formula and with $\gamma\leq 1/10$.
In the end, we have partitioned the edges into $2^h$ classes of maximum degree at most $\Delta_h=O(1/\varepsilon)$. We can easily compute a $(2\Delta_h-1)$-edge coloring of each of these classes, all in parallel and with different colors, in $O(\Delta_h + \log^* n)=O(1/\varepsilon + \log^* n)$ rounds, using the classic edge coloring algorithm of Panconesi and Rizzi~\cite{panconesi-rizzi}. Hence, we get an edge coloring of the whole graph with
\begin{align*}2^h\cdot (2\Delta_h-1) & \leq \big((1+\gamma)^{\log \Delta}\big)2\Delta +9\cdot 2^h\leq 2e^{\varepsilon/20}\Delta+\frac{\varepsilon}{2}\Delta \\
& \leq \left(2+\frac{\varepsilon}{2}\right)\Delta +\frac{\varepsilon}{2}\Delta\leq (2+\varepsilon)\Delta
\end{align*} colors.
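As a purely illustrative sanity check (ours, not replacing the induction above), the following Python snippet iterates the recursion $\Delta_i\leq\frac{1}{2}(\Delta_{i-1}+\gamma \Delta_{i-1}+4)$ and compares the resulting number of colors $2^h(2\Delta_h-1)$ with the claimed bound $(2+\varepsilon)\Delta$; rounding $h$ to an integer is a simplification of the sketch.
\begin{verbatim}
import math

def color_count(Delta, eps):
    gamma = eps / (20 * math.log2(Delta))
    h = int(math.log2(eps * Delta / 18))     # number of splitting iterations
    D = float(Delta)
    for _ in range(h):
        D = (D * (1 + gamma) + 4) / 2        # degree bound after one split
    return (2 ** h) * (2 * D - 1)

for Delta in (2 ** 10, 2 ** 16, 2 ** 20):
    used = color_count(Delta, 0.5)
    print("Delta = 2^%d: ~%d colors, bound %d"
          % (round(math.log2(Delta)), used, (2 + 0.5) * Delta))
\end{verbatim}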
Each iteration has round complexity
\begin{align*}
O\biggl(\frac{1}{\gamma}\cdot\log\frac{1}{\gamma}\cdot\log^{1.71}\log\frac{1}{\gamma}\cdot \log n\biggr) & = O\biggl(\frac{\log \Delta}{\varepsilon}\cdot\log\frac{\log \Delta}{\varepsilon}\cdot\log^{1.71}\log\frac{\log \Delta}{\varepsilon}\cdot \log n \biggr) \\
& = O\biggl(\frac{\log \Delta}{\varepsilon}\cdot\log\log \Delta\cdot \log^{1.71}\log\log \Delta \cdot \log n \biggr).
\end{align*}
The total round complexity, over all the $\log \Delta$ iterations and the last coloring step, is
\[
\begin{split}
\log \Delta \cdot O\biggl(\frac{\log \Delta}{\varepsilon} \cdot\log\log\Delta\cdot (\log\log\log \Delta)^{1.71} \cdot \log n \biggr) + O\biggl(1/\varepsilon + \log^* n\biggr) = \\
O\biggl(\frac{\log^2\Delta}{\varepsilon} \cdot \log\log \Delta \cdot \big(\log\log\log\Delta\big)^{1.71} \cdot \log n\biggr).
\qedhere
\end{split}
\]
\end{proof}
\section[Lower Bound for Weak 2-Orientation in 4-Regular Graphs]{Lower Bound for Weak 2-Orientation \newline in 4-Regular Graphs}\label{sec:weak2orientation-lb}
We have seen that we can efficiently find, e.g., weak and strong 1-orientations in 3-regular graphs and weak and strong 2-orientations in 5-regular graphs. We will now prove that it is not possible to find weak or strong 2-orientations in 4-regular graphs efficiently.
\begin{figure}
\centering
\includegraphics[scale=0.4]{figures/weak2orientation-gadget-black}
\caption{Gadget $U$ consists of eight nodes. If $u_1$ has an incoming edge from outside the gadget, then $u_8$ must have an edge going out of the gadget. The reduction from sinkless orientation on cycles works by constructing a cycle of gadgets. The edges between the gadgets must be oriented in a consistent manner.} \label{fig:weak2orientation-gadget}
\end{figure}
\begin{theorem} \label[theorem]{thm:weak2orientation-lb}
Weak 2-orientation in 4-regular graphs requires $\Omega(n)$ time.
\end{theorem}
\begin{proof}
The proof is by reduction from sinkless orientation on cycles. We construct a graph consisting of constant-sized gadgets connected into a cycle such that the edges between the gadgets must be oriented consistently. This is a global problem that requires $\Omega(n)$ time.
The gadget $U$ consists of eight nodes $V(U) = \{u_1, u_2, \dots, u_8 \}$, with $U_L = \{u_2, u_3, u_4 \}$ and $U_R = \{u_5, u_6, u_7 \}$ forming the two sides of a complete bipartite graph $K_{3,3}$. In addition, $u_1$ is connected to all nodes in $U_L$ and $u_8$ to all nodes in $U_R$.
Now for any $n$, we construct a graph $G$ on $8n$ nodes as follows. Take $n$ copies $U_1, U_2, \dots, U_n$ of $U$, and for every $i = 1, \dots, n$, connect the $i$th copy of $u_8$ (denoted by $u_{i,8}$) to $u_{i+1,1}$ modulo $n$.
See \Cref{fig:weak2orientation-gadget} for an illustration.
Now consider an edge $\{u_{i,8}, u_{i+1,1} \}$ and assume that it is oriented from $u_{i,8}$ to $u_{i+1,1}$. We will show that the gadget $U$ propagates orientations, that is, then we must have that $\{u_{i+1,8}, u_{i+2,1} \}$ is also oriented from $u_{i+1,8}$ to $u_{i+2,1}$. In any weak 2-orientation, $u_{i+1,1}$ must have two outgoing edges. Assume w.l.o.g. that these are to $u_{i+1,2}$ and $u_{i+1,3}$. Since $u_{i+1,2}$ and $u_{i+1,3}$ then each have an incoming edge from $u_{i+1,1}$, both of their outgoing edges must point towards $U_{i+1, R}$. In addition, $u_{i+1,4}$ must have at least one outgoing edge towards $U_{i+1, R}$, giving a total of at least five outgoing edges from $U_{i+1, L}$ towards $U_{i+1, R}$. Hence at most four of the nine edges of the $K_{3,3}$ point towards $U_{i+1, L}$, while the nodes of $U_{i+1, R}$ need six outgoing edges in total. Therefore there must be at least two nodes in $U_{i+1, R}$ that have an outgoing edge towards $u_{i+1, 8}$, and $u_{i+1,8}$ must then have an outgoing edge toward $u_{i+2,1}$.
Sinkless orientation requires time $\Omega(n)$ in cycles, since all edges must be oriented consistently. If weak 2-orientation could be solved in time $o(n)$ on 4-regular graphs, then nodes could virtually add gadgets $U$ between each edge and there would be an $o(n)$ time algorithm for sinkless orientation on cycles.
\end{proof}
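The propagation property of the gadget can also be verified mechanically. The following brute-force Python check (ours, exhaustive over all $2^{17}$ orientations of one gadget with two external stub edges, so it takes a few seconds) confirms that in every weak $2$-orientation in which the external edge at $u_1$ points into the gadget, the external edge at $u_8$ points out of it.
\begin{verbatim}
from itertools import product

U_L, U_R = (2, 3, 4), (5, 6, 7)
edges = [(0, 1), (8, 9)]                        # external stub edges
edges += [(1, x) for x in U_L] + [(x, 8) for x in U_R]
edges += [(a, b) for a in U_L for b in U_R]     # the K_{3,3} part

def outdeg(orient, v):          # orient[i] True: edges[i][0] -> edges[i][1]
    return sum(1 for e, d in zip(edges, orient)
               if v in e and (e[0] == v) == d)

ok = True
for orient in product((False, True), repeat=len(edges)):
    if all(outdeg(orient, v) >= 2 for v in range(1, 9)):  # weak 2-orient.
        if orient[0] and not orient[1]:   # 0 -> 1 in, but 8 -> 9 missing
            ok = False
print("gadget propagates orientations:", ok)    # expected: True
\end{verbatim}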
\section{Introduction}
Complex networks, and scale-free networks in particular, have been identified as potent promoters of cooperation in all major types of social dilemmas \cite{santos-prl}. Especially the prisoner's dilemma, being one of the most widely applicable games \cite{axelrod}, as well as the snowdrift and ultimatum games, have thus far been studied on diluted \cite{nowak-ijbc, vainstein-pre} and hierarchical networks \cite{vukov-pre}, random graphs \cite{duran-pd}, small-world \cite{abramson-pre, wu-cpl, kuperman-epjb} and real empirical networks \cite{holme-prex}, as well as games on graphs in general \cite{szabo-pr}. Arguably the most important feature of complex networks responsible for the promotion of cooperation is the heterogeneous linkage of participating players, which is due to large differences in the degrees of constitutive nodes. Indeed, studies have shown that masking the heterogeneity via the introduction of participation costs or the usage of normalized or effective payoffs \cite{santos-jeb, tomassini-ijmpc, masuda-prsb, szolnoki-pa} eliminates excessive benefits for cooperators and yields results similar to those reported on regular square lattices \cite{nowak-nat, lindgren-pd, szabo-pre, hauert-nat}. Since heterogeneities due to complex interactions networks have proven wildly successful in promoting cooperation, similar characteristics have been introduced also via differences in the influence and teaching activities of players \cite{kim-pre, wu-pre, szolnoki-epl} and social diversity \cite{perc-pre, santos-nat}.
Besides introducing the relevant heterogeneities artificially, it has recently been shown that the latter can also emerge spontaneously as a part of a coevolutionary process accompanying the evolution of strategies. Particularly, in Ref.~\cite{szolnoki-njp} the teaching activity was considered as an evolving property of players, and it has been shown that simple coevolutionary rules may lead to highly heterogeneous distributions of teaching activity from an initially non-preferential setup, which in turn promotes cooperation in social dilemmas such as the prisoner's dilemma or the snowdrift game. Moreover, similar results were obtained by Poncela \textit{et al.} \cite{poncela-plos}, who considered a coevolutionary preferential attachment and growth scheme to generate complex networks on which cooperation can thrive. Important precursors to these studies were works considering the coevolution of strategy and structure \cite{pacheco-prl} as well as random or intentional rewiring procedures \cite{ebel-pre, zimmermann-pre, eguiluz-ajs, perc-njp}, showing that they as well may help to maintain cooperative behavior. Interestingly, similar effects can also be observed if the players are allowed to move on the lattice during the strategy evolution \cite{vainstein-jtb}.
In this letter, we propose a new model based on a simple coevolutionary rule that, alongside the evolution of the cooperative and the defective strategy within the prisoner's dilemma game, entails increasing the neighborhood of players by allowing them to make new permanent connections with not yet linked neighbors. The only condition necessary to exercise this is a successful pass of the player's strategy to one of its current opponents. Since each reproduction is considered as a statement of success of the donor player at that time, the latter is rewarded by the expansion of its neighborhood. Thus, the basic premise of the proposed coevolutionary rule is that in real social systems the more successful individuals will typically have more associates than the less successful ones. Related to this, we study the impact of different limits with respect to the maximal degree an individual is allowed to obtain during the coevolutionary process, as well as the degree distribution of the resulting networks. Starting from a fully homogeneous and non-preferential setup where each player is linked only to its four nearest neighbors on a square lattice, we show that the model can sustain cooperation even at high temptations to defect, and moreover, that the resulting networks are highly heterogeneous with a roughly exponential degree distribution. In addition, we shed light on the observed phenomena by studying the interconnectedness of influential players and the information exchange between them. Our results suggest that `making new friends' is an essential part of the evolutionary process, playing a crucial role in sustaining cooperation in environments prone to defection.
The remainder of this letter is organized as follows. First, we describe the prisoner's dilemma game and the protocol for the coevolution of neighborhoods. Next we present the results, whereas lastly we summarize and discuss their implications.
\section{Game definitions and setup}
We consider an evolutionary prisoner's dilemma game with cooperation and defection as the two competing strategies. The game is characterized by the temptation to defect $T = b$, reward for mutual cooperation $R = 1$, and both the punishment for mutual defection $P$ as well as the sucker's payoff $S$ equaling $0$, whereby $1 < b \leq 2$ ensures a proper payoff ranking. Initially, each player $x$ is designated either as a cooperator $(C)$ or defector $(D)$ with equal probability and linked to its four nearest neighbors on a regular $L \times L$ square lattice with periodic boundary conditions, thus having degree $k=4$. This setup warrants that initially all players have equal chances of success, which is crucial for evaluating the impact of the proposed coevolutionary rule. We note, however, that the results below are robust against variations in the initial conditions, as well as variations in the parametrization of the prisoner's dilemma game. The evolution of the two strategies is performed in accordance with the Monte Carlo simulation procedure comprising the following elementary steps. First, a randomly selected player $x$ acquires its payoff $p_x$ by playing the game with all its $k_x$ neighbors. Next, one randomly chosen neighbor of $x$, denoted by $y$, also acquires its payoff $p_y$ by playing the game with all its $k_y$ neighbors. Last, if $p_x > p_y$ player $x$ tries to enforce its strategy $s_x$ on player $y$ in accordance with the probability
\begin{equation}
W(s_x \rightarrow s_y)=(p_x-p_y)/b k_q,
\label{eq1}
\end{equation}
where $k_q$ is the largest of $k_x$ and $k_y$. The introduction of $k_q$ is necessary since the degree $k_x$ is presently subject to evolution as well. In particular, each time player $x$ succeeds in passing its strategy to player $y$ the degree $k_x$ is increased by an integer $\Delta k$ according to $k_x \rightarrow k_x + \Delta k$, whereby for simplicity we here use $\Delta k = 1$. Practically, the increase of degree $k_x$ is realized so that player $x$ establishes a permanent new connection with a not yet connected player which is selected randomly amongst the direct neighbors of the current neighborhood of $x$. Thus, successful players are allowed to grow large compact neighborhoods that are centered around their initial four nearest neighbors. Notably, similar results as will be reported below can be obtained if players extend their neighborhoods via long-range connections, but since we primarily wanted to eschew effects of resulting small-world topologies \cite{abramson-pre} and focus solely on the impact of coevolutionarily extending neighborhoods, we here present the results obtained with the former model. The described coevolutionary rule would eventually result in a fully connected graph, which in turn would prevent the survival of cooperation due to the applicability of the well-mixed limit. Accordingly, to curb the latter effect we introduce $k_{max}$ as the maximal degree a player is allowed to obtain. In fact, the coevolutionary process of making new connections is stopped as soon as the degree $k$ of a single player within the whole population reaches $k_{max}$, whereby this limit prevents the formation of a homogeneous system and will be one of the main parameters to be varied below. Despite being strikingly simple, the proposed protocol for the coevolution of neighborhoods is remarkably robust, delivering conclusive results with respect to the final distribution of $k$ as well as the two competing strategies.
In accordance with the random sequential update, each individual is selected once on average during a full Monte Carlo step (MCS), which consists of repeating the above elementary steps $L^2$ times corresponding to all participating players. Monte Carlo results were obtained on populations comprising $100 \times 100$ to $400 \times 400$ individuals, whereby the stationary fraction of cooperators $\rho_C$ was determined within $10^5$ to $10^6$ MCS after sufficiently long transients were discarded. Moreover, as the coevolutionary process yields highly heterogeneous interaction networks, and hence heavily fluctuating outputs, final results were typically averaged over $200$ independent runs for each set of parameter values in order to take into account the stochastic feature of the host graph topology resulting from the coevolutionary process.
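A minimal sequential Python sketch of one elementary Monte Carlo step is given below; it is ours and purely illustrative, the function and variable names are hypothetical, the graph is stored as a dictionary of neighbor sets, and the global stopping rule for $k_{max}$ is signalled through the returned flag.
\begin{verbatim}
import random

def elementary_step(adj, strategy, b, k_max, growth_on):
    """adj: dict node -> set of neighbors; strategy: dict node -> 'C'/'D'.
    Returns the (possibly updated) growth flag."""
    def payoff(x):               # accumulated payoff; R = 1, T = b, S = P = 0
        return sum(b if strategy[x] == 'D' else 1.0
                   for y in adj[x] if strategy[y] == 'C')
    x = random.choice(list(adj))
    y = random.choice(list(adj[x]))
    px, py = payoff(x), payoff(y)
    kq = max(len(adj[x]), len(adj[y]))
    if px > py and random.random() < (px - py) / (b * kq):   # Eq. (1)
        strategy[y] = strategy[x]                            # strategy passed
        if growth_on:
            # reward: permanent link to a random not yet connected player
            # among the direct neighbors of x's current neighborhood
            pool = set().union(*(adj[z] for z in adj[x])) - adj[x] - {x}
            if pool:
                w = random.choice(list(pool))
                adj[x].add(w); adj[w].add(x)
                if max(len(adj[x]), len(adj[w])) >= k_max:
                    growth_on = False        # global stop at k_max
    return growth_on
\end{verbatim}
A full Monte Carlo step then repeats this elementary step $L^2$ times.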
\section{Results}
\begin{figure}
\scalebox{0.45}[0.45]{\includegraphics{fig1.eps}}
\caption{Time evolutions of $\rho_C$ obtained for $b = 1.24$. Dashed line shows results obtained in the absence of the coevolutionary neighborhood growth, while the solid line depicts the outcome of the prisoner's dilemma game if $k_{max} = 50$. Note that the $x$-axis has a logarithmic scale on which fractions of the first full MCS are shown as well for clarity.}
\label{fig1}
\end{figure}
The remarkable impact of the above defined coevolutionary process is demonstrated in Fig.~\ref{fig1}, where the time evolution of $\rho_C$ obtained with and without the inclusion of coevolution is presented. The difference between the two outcomes is evident as the basic version of the game fails to sustain cooperative behavior (dashed line) while the inclusion of the coevolutionary process is able to recover it and maintain a respectable $\rho_C = 0.66$ (solid line). Clearly thus, the newly proposed model is able to sustain cooperation in regions of $b$ where the regular square lattice interaction topology fails. However, it is interesting to note that this is true for the final outcome of the game, whereas during the first $100$ MCS it seems that the cooperative behavior will actually fare better without the inclusion of the coevolutionary process. Note that the solid line drops slightly faster than the dashed line during the initial phase of the game. Yet rather surprisingly, the tide then shifts in favor of the cooperative strategy as depicted by the solid line in Fig.~\ref{fig1}; a feature that cannot be observed in case the coevolution of neighborhoods is absent. Figure~\ref{fig1} also suggests that, while the results are essentially robust against variations of initial conditions, the initial density of cooperators should not be too low, as otherwise the promotive impact of coevolution could be missed.
\begin{figure}
\scalebox{0.457}[0.457]{\includegraphics{fig2.eps}}
\caption{Promotion of cooperation in dependence on $k_{max}$ for $b=1.15$ (open squares), $b=1.2$ (closed squares), and $b=1.28$ (open circles). There exists an optimal value of $k_{max}$ at which $\rho_C$ is maximal. Lines are solely guides to the eye.}
\label{fig2}
\end{figure}
To sharpen the facilitative effect of the coevolutionary rule on the cooperative behavior, we present stationary values of $\rho_C$ in dependence on $k_{max}$ for three different values of $b$ in Fig.~\ref{fig2}. It can be inferred that there exists an optimal maximal degree $k_{max}$ a player is allowed to obtain during the coevolutionary process at which cooperation thrives best. This holds irrespective of $b$, although the optimal values of $k_{max}$ fluctuate between $50$ and $70$. The non-monotonic dependence on $k_{max}$, illustrated in Fig.~\ref{fig2}, is a consequence of the limited support for cooperation offered by the square lattice (results obtained at values of $k_{max}$ equal or close to $4$), and the well-mixed limit that is reached for high $k_{max}$. Note that values of $k_{max}$ comparable with the system size inevitably lead to a high degree of interconnectedness amongst the players, which is characteristic for a well-mixed population. In between the two extremes, the coevolutionary rule obviously yields a favorable host graph topology for the evolution of cooperation.
\begin{figure}
\scalebox{0.45}[0.45]{\includegraphics{fig3.eps}}
\caption{Promotion of cooperation in dependence on $b$ for $k_{max} = 4$ (open squares), $k_{max} = 50$ (closed squares), and $k_{max} = 200$ (open circles). Cooperators are most successful if $k_{max} = 50$, which roughly corresponds to the peak values of $\rho_C$ depicted in Fig.~\ref{fig2}. Lines are solely guides to the eye.}
\label{fig3}
\end{figure}
Figure~\ref{fig3} shows $\rho_C$ in dependence on $b$ for three different values of $k_{max}$, whereby it can be observed that the optimal value of $k_{max} = 50$ is able to sustain some fraction of cooperators almost halfway through the whole span of $b$. By comparison, in the absence of the coevolutionary process (note that $k_{max} = 4$ leaves the initial topology unaltered) the cooperative trait goes extinct at $b = 1.115$. Moreover, large values of $k_{max}$ still yield some advantages for the cooperators, as can be inferred from the $k_{max} = 200$ curve depicted in Fig.~\ref{fig3}, yet increasing $k_{max}$ even further introduces well-mixed-like conditions where the sustainability of cooperation is practically absent even for low values of $b$.
\begin{figure}
\scalebox{0.444}[0.444]{\includegraphics{fig4.eps}}
\caption{Cooperation level $\rho_C$ in dependence on the time separation between strategy and structure updating $q$ for $b = 1.2$ (open squares), $b = 1.25$ (closed squares), and $b = 1.3$ (open circles). The maximal degree was limited to $k_{max} = 50$ for all three values of $b$. Lines are solely guides to the eye.}
\label{figad}
\end{figure}
Before turning our attention to networks emerging due to the proposed coevolutionary rule, we test the above results against the separation of time scales \cite{sanchez-prl}, presently characterizing the evolution of strategies and structure. Thus far, the two time scales were treated as identical since every successful reproduction was followed by an increase in the player's degree. The model can be generalized via a parameter $q$ that determines the probability of degree extension after a successful strategy pass. Evidently, $q=1$ recovers the originally proposed model while decreasing $q$ results in increasingly separated time scales. At $q=0$ the model becomes equivalent to the spatial model without coevolution, hence yielding $\rho_C = 0$ at high $b$, as demonstrated in Fig.~\ref{figad}. An increase in $q$, resulting in a moderately fast yet effective coevolution, is beneficial for cooperation since influential cooperators can then extend their neighborhoods and thus become stronger by collecting higher payoffs already during the coevolutionary process. Conversely, influential defectors become weaker as their defecting neighborhoods grow, which ultimately results in the highest cooperation levels at intermediate $q$. However, further increasing $q$ can generate a slight downward trend of $\rho_C$ because the influential cooperators cannot take full advantage of their newly acquired neighbors within the short time between consecutive building steps, and thus defectors can gain a slight yet permanent advantage. The moderate decrease in $\rho_C$ due to network evolution that is too fast compared to the strategy evolution is, however, virtually absent at very high $b$, since then the dominating feature is the final heterogeneous network topology rather than initial fights for dominance. For simplicity, and to preserve comparability with the results in the first three figures, we will continue to use $q=1$ in what follows.
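In terms of the sketch given in the previous section, this generalization amounts to gating the growth step with an additional Bernoulli trial; a hypothetical helper of ours could look as follows.
\begin{verbatim}
import random

def attempt_growth(adj, x, k_max, q):
    """Growth step gated by the time-scale parameter q; q = 1 recovers the
    original model and q = 0 the static lattice (a sketch, names are ours)."""
    if random.random() >= q:
        return
    pool = set().union(*(adj[z] for z in adj[x])) - adj[x] - {x}
    if pool and len(adj[x]) < k_max:
        w = random.choice(list(pool))
        adj[x].add(w)
        adj[w].add(x)
\end{verbatim}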
Next, we examine properties of networks resulting from the coevolutionary process. As heterogeneity is the most important property favoring cooperation, we first focus on the degree distribution $P(k)$. Given the fact that substantial promotion of cooperation was in the past often associated with strongly heterogeneous states, either in form of the host network \cite{santos-prl} or social diversity \cite{perc-pre, santos-nat}, it is reasonable to expect that $P(k)$ will exhibit similar features as well. Results presented in Fig.~\ref{fig4} clearly attest to this expectation as the semi-log plot of the distribution reveals a highly heterogeneous outlay of $P(k)$ that can be most accurately described by an exponential fit. The latter feature is crucial for the fortified facilitative effect on cooperation outlined in Figs.~\ref{fig2} and \ref{fig3}, in particular since it incubates cooperative clusters around individuals with high $k$, as described previously in \cite{santos-prl} and reviewed in \cite{szabo-pr}. In contrast, since the positive feedback of the imitating environment is not associated with influential defectors, the latter fail to survive even if temptations to defect are large. As already noted, a similar behavior underlies the cooperation-facilitating mechanism reported for the scale-free network where players with the largest connectivity (presently equivalent to those having $k$ close to $k_{max}$) also act as robust sources of cooperation in the prisoner's dilemma game. Notably, and in relation to the time courses presented in Fig.~\ref{fig1}, before the heterogeneous network topology fully evolves defectors temporarily thrive since they gain a larger base of neighbors to exploit. Once, however, the prime spots of the evolved network are overtaken by cooperators the defectors start losing ground fast, which explains the initial drop and the subsequent recovery of cooperative behavior depicted by the solid line in Fig.~\ref{fig1}. The presently reported spontaneous emergence of the heterogeneous distribution of degree from an initially non-preferential state within the framework of evolutionary game theory suggests that even very simple coevolutionary rules might lead to a strong segregation amongst participating players, which is arguably advantageous for flourishing cooperative states. We argue that the core mechanism responsible for the emergence of heterogeneity in the degree distribution presented in Fig.~\ref{fig4} can be related to the growth and preferential attachment mechanism proposed by Barab\'{a}si and Albert \cite{barabasi-sci}. In particular, our model incorporates preferential attachment in that the probability of increasing the degree is larger for players that have had successful reproductions in the past since they are more likely to reproduce in the future. Obviously, however, our model does not incorporate growth since players are not added in time. Nevertheless, since the evolution is halted by a given $k_{max}$, preferential attachment alone can still lead to highly heterogeneous but not scale-free distributions \cite{barabasi-sci}.
\begin{figure}
\scalebox{0.45}[0.45]{\includegraphics{fig5.eps}}
\caption{Final distribution of degree $P(k)$ in the studied prisoner's dilemma game obtained for $b=1.26$ via $k_{max} = 50$. Note that the $y$ axis has a logarithmic scale to clearly reveal the heterogeneous outlay of $P(k)$.}
\label{fig4}
\end{figure}
\begin{figure}
\scalebox{0.395}[0.395]{\includegraphics{fig6.eps}}
\caption{Snapshots of typical distributions of players on a $100 \times 100$ grid, obtained for $k_{max} = 14$ (left panel), $k_{max} = 50$ (middle panel) and $k_{max} = 200$ (right panel) at $b=1.2$. Red and green are influential players (see text for details) in defector and cooperator states, respectively, while yellow are all the direct neighbors of the depicted influential players. If a player is neither influential nor belonging to a neighborhood of an influential player it is marked white.}
\label{fig5}
\end{figure}
One may argue, however, that similar highly heterogeneous degree distributions can be obtained at higher $k_{max}$ as well, yet the promotion of cooperation is then still moderate, as demonstrated in Fig.~\ref{fig2}. This observation highlights that the heterogeneous distribution itself is not a sufficient condition for ample levels of cooperation at high temptations to defect. To uncover the additional decisive feature of resulting networks at different $k_{max}$, we study the overlap of neighborhoods of the so-called influential players, whereby a player is designated as influential if it has the highest degree among all players that can adopt the strategy from the influential player via an elementary process. Figure~\ref{fig5} shows typical distributions of influential players, which are denoted either green (cooperators) or red (defectors) depending on their strategy. In addition, their neighborhoods, formed by those directly linked to the influential players, are depicted yellow. The distributions are plotted for different values of $k_{max}$ but for an identical temptation to defect equaling $b=1.2$. At small $k_{max}$ there exist many influential players with small neighborhoods surrounding them, yet the outlay is virtually homogeneous, and thus the promotion of cooperation is not notably enhanced if compared to the square lattice alone. Around the optimal $k_{max}$, however, influential players become fewer, and their neighborhoods larger. Importantly though, the overlap between their neighborhoods is still remarkable, which is crucial as it enables influential cooperators to overtake influential defectors as soon as the latter weaken their neighborhoods. As we will show next, this effective information transfer between the influential players is crucial for the feedback mechanism to work. At higher $k_{max}$ influential players become rarer still, and their neighborhoods grow further, yet crucially, the overlap between them vanishes, thus hindering influential cooperators from overtaking influential defectors. In sum, defectors are virtually undisturbed in exploiting their large neighborhoods, thus leading to a population in which defection is widespread.
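Operationally, influential players can be extracted from the adjacency structure with a one-line criterion, since the players that can adopt a strategy from a given player via an elementary process are exactly its neighbors (a sketch of ours).
\begin{verbatim}
def influential_players(adj):
    """Nodes whose degree is at least that of every neighbor."""
    return {x for x in adj
            if all(len(adj[x]) >= len(adj[y]) for y in adj[x])}
\end{verbatim}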
\begin{figure}
\scalebox{0.45}[0.45]{\includegraphics{fig7.eps}}
\caption{Frequency of strategy adoptions between influential players $a_S$ (closed circles) and the cooperation level $\rho_C$ (open circles) in dependence on $k_{max}$ for $b=1.28$. Both quantities are normalized by their maximal values for better comparisons, and are therefore decorated by square brackets on the corresponding axis label. Lines are solely guides to the eye.}
\label{fig6}
\end{figure}
The impact of the above discussed topological feature can be studied directly by measuring the information transfer between influential players, which we realize via $a_S$, quantifying the frequency of strategy adoptions between influential players in the stationary state of the prisoner's dilemma game for each value of $k_{max}$. Figure~\ref{fig6} features the results and, in addition, shows the cooperation level for the sake of comparison. Note that both $a_S$ and $\rho_C$ are depicted normalized by their maximal values (denoted as $[a_S]$ and $[\rho_C]$ on the vertical axis), yet the outlay of the curves thereby remains unaltered. At values of $k_{max}$ that are comparable to the initial degree of all participating players, the neighborhood size of influential individuals is small. Thus, they cannot communicate efficiently with one another, which ultimately results in low values of $a_S$. As the maximally attainable degree limit increases, the average neighborhood size of influential players grows as well. Consequently, direct strategy adoptions between them become more frequent, and most crucially, strong influential cooperative players can overtake weakened influential defectors, in turn allowing the feedback mechanism to blossom in the intermediate region of $k_{max}$. On the contrary, when $k_{max}$ exceeds the optimal value, some players become too influential and grow well-separated neighborhoods (see the right panel of Fig.~\ref{fig5}), which hampers the information exchange between them, so that influential defectors can prevail for prolonged periods of time despite their weak posture inflicted by the defecting neighborhoods. Indeed, results presented in Fig.~\ref{fig6} show that the level of information exchange, quantified via $a_S$, and the global cooperation level in a heterogeneous environment are strongly bound to one another, following very similar patterns in dependence on $k_{max}$, thus validating our reasoning. Moreover, it is worth noting that our explanation is in agreement with a previous observation of Rong \textit{et al.} \cite{rong-pre07}, who detected a fall of overall cooperation levels in the prisoner's dilemma game on scale-free networks following disassortative mixing, which also results in an enhanced isolation of hubs.
\section{Summary}
We have demonstrated that the introduction of a very simple coevolutionary process to the spatial prisoner's dilemma game markedly improves the survival chances of cooperators in highly defection-prone environments and, moreover, enhances their overall dominance at moderate temptations to defect. Most notably, the underlying mechanism behind the reported promotion of cooperation is rooted in the resulting highly heterogeneous network structure, which emerges spontaneously from a non-preferential setup following a simple coevolutionary rule that indirectly promotes players that are able to pass their strategy by allowing them to extend their neighborhoods via new connections to not yet linked players. Moreover, we have shown that the newly introduced coevolutionary rule yields optimal results if limited by a maximal degree the most influential player is allowed to attain. If this limit is surpassed, a detrimental impact on the evolution of cooperation sets in due to the decreasing overlap of the neighborhoods of the influential players, which enables defectors to reign in isolation from potentially stronger influential cooperators. The success-driven increase of degree indirectly introduces a preferential attachment mechanism into the model, which, combined with the limiting $k_{max}$, results in heterogeneous degree distributions.
In sum, the presented results confirm that the presence of influential leaders is advantageous for cooperation and, more importantly, that a simple `making new friends' coevolutionary rule may bring about just the appropriate diversity between participating players if appropriately timed. The coevolutionary model demonstrates how influential leaders can evolve from an initially non-preferential state, and that it is optimal for cooperation if their overall density remains bounded to intermediate levels.
\acknowledgments
Discussions with Gy{\"o}rgy Szab{\'o} are gratefully acknowledged.
\section{Introduction}
The seminal work of Shapley~\cite{shapley1953stochastic}
defines Stochastic Games (SGs) to study dynamic non-cooperative multi-player games, where each player simultaneously and independently chooses an action at each round, and the next state is determined by a probability distribution depending on the current state and the chosen joint action.
In two-player zero-sum SGs, Shapley \cite{shapley1953stochastic} proved the existence of a stationary strategy profile in which no agent has an incentive to deviate; similarly, the existence of equilibrium in stationary strategies also holds in multi-player nonzero-sum SGs \cite{fink1964equilibrium}.
Such a solution concept (now also known as \textsl{Markov perfect equilibrium} (MPE) \cite{maskin2001markov}) models the dynamic nature of multi-player games.
As a refinement of Nash equilibrium \cite{nash1951} on SGs, MPE prevents non-payoff-relevant variables from affecting strategic behaviors, which allows researchers to identify the impact of state variables on outcomes.
Due to its generality,
the framework of SGs has inspired a sequence of studies \cite{neyman2003stochastic,solan2015stochastic} on a wide range of real-world applications, including advertising and pricing \cite{albright1979birth}, fisheries modelling \cite{sobel1982stochastic}, football player selection \cite{winston1984stochastic}, travelling inspection \cite{filar1985player}, and the design of modern gaming AIs \cite{peng2017multiagent}.
As a result, developing algorithms to compute MPE in SGs has become one of the key subjects in an extremely rich research domain, including but not limited to applied mathematics, economics, operations research, computer science and artificial intelligence \cite{filar2012competitive,raghavan1991algorithms,solan2015stochastic}.
SGs underpin many AI/machine learning studies. For example, they are the key framework for studying adversarial training \cite{franci2020game, goodfellow2014generative} and modelling robustness \cite{pinto2017robust,abdullah2019wasserstein} in the zero-sum setting.
In reinforcement learning (RL),
SG extends the Markov decision process (MDP) formulation to incorporate strategic interactions.
Similar to the role of MDP in RL \cite{sutton2018reinforcement},
SGs lay the foundation for multi-agent reinforcement learning (MARL) techniques to study optimal decision making in multi-player games \cite{littman1994markov}.
In recent decades, a wide variety of MARL algorithms have been developed to solve SGs \cite{yang2020overview}.
Computing a MPE in (general-sum) SGs requires a perfect knowledge of the transition dynamics and the payoffs of the game \cite{filar2012competitive}, which is often infeasible in practice.
To overcome this difficulty, MARL methods are often applied to learn the MPE of a SG based on the interactions between agents and the environment.
MARL algorithms are generally considered under two settings: \textsl{online} and \textsl{offline}.
In the offline setting (also known as the batch setting \cite{perolat2017learning}), the learning algorithm controls all players in a centralised way, with the hope that the learning dynamics can eventually lead to an MPE using a limited number of interaction samples.
In the online setting, the learner controls only one of the players, playing against arbitrary opponents in the game; assuming unlimited access to the game environment, the central focus is often the \textsl{regret}: the difference between a benchmark measure (often in hindsight) and the learner's total reward during learning.
In the offline setting, two-player zero-sum (discounted) SGs have been extensively studied.
Since the opponent is purely adversarial in zero-sum SGs, the process of seeking the worst-case optimality for each player can be thought of as solving MDPs.
As a result, (approximate) dynamic programming methods \cite{bertsekas2008approximate,szepesvari1996generalized} such as LSPI \cite{lagoudakis2003least} and FQI \cite{munos2008finite} / NFQI \cite{riedmiller2005neural} can be adopted to solve SGs \cite{perolat2016softened,lagoudakis2002value,perolat2015approximate,sidford2020solving,jia2019feature}.
Under this setting, policy-based methods ~\cite{daskalakis2020independent, hansen2013strategy} can also be applied.
However, directly applying existing MDP solvers to general-sum SGs is challenging.
Since solving for a two-player NE in general-sum normal-form games (i.e., one-shot SGs) is well-known to be PPAD-complete \cite{daskalakis2009complexity,chen2006settling}, the complexity of MPE in general-sum SGs is expected to be at least PPAD.
Although early attempts such as Nash-Q learning \cite{hu2003nash}, Correlated-Q learning \cite{greenwald2003correlated}, Friend-or-Foe Q-Learning \cite{littman2001friend} have been made to solve general-sum SGs under strong assumptions,
Zinkevich et al. \cite{zinkevich2006cyclic} demonstrated that the entire class of value iteration methods cannot find stationary NE policies in general-sum SGs.
The difficulties on both the complexity side and the algorithmic side have led to very few existing MARL solutions to general-sum SGs; successful approaches either assume complete information about the SG, so that solving for an MPE can be turned into an optimisation problem \cite{prasad2015two}, or prove the convergence of batch RL methods to a weaker notion of NE \cite{perolat2017learning}.
In the online setting, one of the most well-known algorithms is R-MAX \cite{brafman2002r}, which studied (average-reward) zero-sum SGs and provided a regret bound polynomial in the game size and the error parameter when competing with an arbitrary opponent.
Under the same regret definition, recently, UCSG \cite{wei2017online} improved R-MAX and achieved a sublinear regret, but still in two-player zero-sum SGs.
When it comes to MARL solutions, Littman \cite{littman1994markov} proposed a practical solution named Minimax-Q that replaces the max operator with the minimax value. Asymptotic convergence results of Minimax-Q in both tabular cases \cite{littman1996generalized} and value function approximations \cite{fan2020theoretical} have been shown.
Yet, playing the minimax value could be overly pessimistic. If the adversary plays sub-optimally, the learner could achieve a higher reward.
To account for this, WoLF \cite{bowling2001rational} was proposed; unlike Minimax-Q, WoLF is \emph{rational} in the sense that it can exploit the opponent's policy.
AWESOME \cite{conitzer2007awesome}
further generalised WoLF and achieved NE convergence in multi-player general-sum repeated games.
However,
outside the scope of zero-sum SGs,
the question \cite{brafman2002r} of whether a polynomial-time no-regret (near-optimal) RL/MARL algorithm exists for general-sum SGs remains unanswered.
Although SGs were proposed more than 60 years ago, and despite their importance, the complexity of finding an MPE in SGs has, surprisingly, never been settled.
In fact, unlike the fruitful results on zero-sum SGs, we still know very little about the complexity of solving general-sum SGs.
Two relevant results we know are
that determining whether a pure-strategy NE exists in an SG is \textbf{PSPACE}-hard \cite{conitzer2008new}, and that it is \textbf{NP}-hard to determine if there exists a memoryless $\epsilon$-NE in \emph{reachability} SGs \cite{chatterjee2004nash}.
It has long been projected that solving for an MPE in (infinite-horizon) SGs is at least \textbf{PPAD}-hard, since solving for a two-player NE in one-shot SGs is already \textbf{PPAD}-hard \cite{daskalakis2009complexity,chen2006settling}.
This suggests that, under standard computational hardness assumptions, polynomial-time algorithms are unlikely to exist even for two-player stochastic games.
Yet, the unresolved question is that
\begin{tcolorbox}[fonttitle=\normalsize,fontupper=\normalsize,fontlower=\normalsize,top=1pt,bottom=1pt,left=1pt,right=1pt,title=The key question that we try to address in this paper:]
\centering
\emph{Can solving for an MPE in general-sum SGs be any harder in the complexity hierarchy?}
\end{tcolorbox}
In this paper, we answer the above question in the negative by proving that computing an MPE in a finite-state discounted SG is \textbf{PPAD}-complete.
Based on our result, we can affirm that finding an MPE in SGs is highly unlikely to be \textbf{NP}-hard, since \textbf{NP}-hardness of a \textbf{PPAD} problem would imply \textbf{NP} $=$ \textbf{co-NP}.
We hope this result will encourage MARL researchers to work more on general-sum SGs, leading to MARL solutions as fruitful as those currently available for zero-sum SGs.
\subsection{Intuitions and a Sketch of Our Main Ideas}
Like the classic complexity class \textbf{NP}, \textbf{PPAD} is a collection of computational problems. As with \textbf{NP}-completeness, a problem is said to be \textbf{PPAD}-complete if it is in \textbf{PPAD} and is at least as hard as every problem in \textbf{PPAD}. When a Stochastic Game has only one state and the discount factor is $\gamma=0$, finding a Markov perfect equilibrium (MPE) is equivalent to finding a Nash equilibrium in the corresponding normal-form game, which is known to be \textbf{PPAD}-complete \cite{daskalakis2009complexity,chen2006settling}. So the \textbf{PPAD}-hardness of finding an MPE is relatively direct (\Cref{PPAD-hard}).
To obtain the \textbf{PPAD}-complete result (\Cref{PPAD-complete}), it is sufficient for us to prove the \textbf{PPAD} membership of MPE (\Cref{PPAD membership}).
\textbf{i)} The first key observation is that we can construct a function $f$ on the strategy profile space such that a strategy profile is a fixed point of $f$ if and only if it is an MPE (\Cref{theorem: MPE exists}). Further, we prove that the function $f$ is continuous (in fact $\lambda$-Lipschitz by \Cref{lemma: f Lipschitz}), so that fixed points are guaranteed to exist by the Brouwer fixed point theorem.
\textbf{ii)} We then prove the function $f$ has some ``good'' approximation properties. Let $|\mathcal{SG}|$ be the input size of a stochastic game. If we can find a $\texttt{poly}(|\mathcal{SG}|)\epsilon^2$-approximate fixed point $\pi$ of $f$, i.e., $\|f(\pi)-\pi\|_{\infty}\leq \texttt{poly}(|\mathcal{SG}|)\epsilon^2$, where $\pi$ is a strategy profile, then $\pi$ is an $\epsilon$-approximate MPE for the Stochastic Game (combining \Cref{lemma: single state} and \Cref{lemma: all states}). So our goal converts to finding an approximate fixed point.
\textbf{iii)} To prove the \textbf{PPAD} membership of finding an MPE, we reduce it to the problem {\sc End of the Line} (formally defined in \Cref{sec: PPAD and MPE problem}), which is the first \textbf{PPAD}-complete problem introduced by Papadimitriou~\cite{papadimitriou1994complexity}. We will show that the reduction can be constructed in polynomial time and that every solution of the problem {\sc End of the Line} corresponds to a good approximate fixed point (\Cref{lemma: stopping appro}), thus yielding an $\epsilon$-approximate MPE.
\section{Stochastic Games}
\begin{definition}[Stochastic Game]\label{definition: Stochastic Game}
A Stochastic Game is defined by a tuple of key elements $\left\langle n,\mathbb{S},\mathbb{A},P,r,\gamma\right\rangle$, where
\begin{itemize}
\item $n$ is the number of agents.
\item $\mathbb{S}$ is the set of finite environmental states. Suppose that $|\mathbb{S}|=S$.
\item $\mathbb{A}=\mathbb{A}^1\times\cdots\times \mathbb{A}^n$ is the set of agents' joint actions. Suppose that $|\mathbb{A}^i|=A^i$ and $A_{\max}=\max_{i\in[n]} A^i$.
\item $P: \mathbb{S} \times \mathbb{A} \rightarrow \Delta(\mathbb{S})$ is the transition probability, that is, at each time step, given the agents' joint action $a\in \mathbb{A}$, the probability of transitioning from state $s$ to state $s'$ at the next time step is $P(s'|s,a)$.
\item $r=r^1\times\cdots\times r^n: \mathbb{S}\times \mathbb{A} \rightarrow \mathcal{R}_+^n$ is the reward function, that is, when the agents are in state $s$ and play a joint action $a$, agent $i$ receives reward $r^i(s,a)$. We assume that the rewards are uniformly bounded by $R_{\max}$.
\item $\gamma\in \left[0,1\right)$ is the discount factor that specifies the degree to which the agent’s rewards are discounted over time.
\end{itemize}
\end{definition}
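For concreteness, the tuple above can be encoded as in the following Python sketch; flattening the joint action space into a single index and all identifiers are our illustrative choices, not part of the formal definition.
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class StochasticGame:
    # Finite SG <n, S, A, P, r, gamma> with joint actions enumerated:
    # P[s, a] is a distribution over next states, and r[s, a] is the
    # length-n vector of rewards for joint action index a in state s.
    n: int          # number of agents
    S: int          # number of states
    A: int          # number of joint actions (product of the A^i)
    P: np.ndarray   # shape (S, A, S), each P[s, a] sums to one
    r: np.ndarray   # shape (S, A, n), entries in [0, R_max]
    gamma: float    # discount factor in [0, 1)
\end{verbatim}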
Each agent aims to find a behavioral strategy with the Markov property, meaning that each agent's strategy is conditioned only on the current state of the game.
Note that behavioral strategy is different from mixed strategy. To be more clear, we give both definitions of mixed strategy and behavioral strategy.
The pure strategy space of an agent $i$ is $\prod_{s\in\mathbb{S}}\mathbb{A}^i$, meaning that the agent $i$ needs to select an action at each state. Note that the size of pure strategy space of each agent is $|\mathbb{A}^i|^S$, which is exponential in the number of states.
\begin{definition}[Mixed Strategy]
The mixed strategy space is $\Delta\left(\prod_{s\in\mathbb{S}}\mathbb{A}^i\right)$, i.e., the probability distribution on pure strategy space $\prod_{s\in\mathbb{S}}\mathbb{A}^i$.
\end{definition}
\begin{definition}[Behavioral Strategy]
A behavioral strategy of an agent $i$ is $\pi^i: \mathbb{S}\rightarrow \Delta(\mathbb{A}^i)$, i.e., $\forall s\in\mathbb{S}, \pi^i(s)$ is a probability distribution on $\mathbb{A}^i$.
\end{definition}
In the rest of the paper, we will refer to a behavioral strategy simply as a strategy for convenience. A strategy profile $\pi$ is the Cartesian product of all agents' strategy, i.e., $\pi=\pi^1\times\cdots\times\pi^n$.
We denote the probability of the agents using the joint action $a$ in state $s$ by $\pi(s,a)$, and the probability of agent $i$ using the action $a^i$ in state $s$ by $\pi^i(s,a^i)$. The strategy profile of all agents other than agent $i$ is denoted by $\pi^{-i}$. Given $\pi$, the transition probability and the reward function only depend on the current state $s\in \mathbb{S}$. So let $r^{i,\pi}(s)$ denote $\mathbb{E}_{a\sim \pi(s)}[r^i(s,a)]$ and let $P^{\pi}(s'|s)$ denote $\mathbb{E}_{a\sim \pi(s)}[P(s'|s,a)]$. Given $\pi^{-i}$, the transition probability and the reward function only depend on the current state $s\in \mathbb{S}$ and player $i$'s action $a^i$. So let $r^{i,\pi^{-i}}(s,a^i)$ denote $\mathbb{E}_{a^{-i}\sim \pi^{-i}(s)}[r^i(s,(a^i,a^{-i}))]$ and let $P^{\pi^{-i}}(s'|s,a^i)$ denote $\mathbb{E}_{a^{-i}\sim \pi^{-i}(s)}[P(s'|s,(a^i,a^{-i}))]$.
For any positive integer $m$, let $\Delta_m:=\{x\in \mathcal{R}_+^m|\sum_{i=1}^m x_i=1\}$. Define $\Delta_{A^i}^k:=\times_{p=1}^k\Delta_{A^i}$. Then $\forall s\in\mathbb{S},\pi^i(s)\in \Delta_{A^i}$, $\pi^i\in \Delta_{A^i}^S$ and $\pi\in \prod_{i=1}^n\Delta_{A^i}^S$.
\begin{definition}[Value Function]\label{definition: value function}
A value function for a strategy profile $\pi$ of an agent $i$, written $V^{\pi^i,\pi^{-i}}:\mathbb{S}\rightarrow R$ gives the expected sum of discounted rewards of the agent $i$ when the starting state is $s$:
$$V^{\pi^i,\pi^{-i}}(s)=\mathbb{E} \left[\sum_{t=0}^{\infty} \gamma^t r^i(s_t,a_t)\Big|s_0=s,\,a_t\sim\pi(s_t),\, s_{t+1}\sim P^{\pi}(\cdot|s_t) \right].$$
Alternatively, the value function can also be defined recursively via the Bellman equation:
$$V^{\pi^i,\pi^{-i}}(s)=\mathop{\mathbb{E}}_{a\sim\pi(s)}\left[ r^{i}(s,a)\right]+\gamma \sum_{s'\in \mathbb{S}} P^{\pi}(s'|s)V^{\pi^i,\pi^{-i}}(s').$$
\end{definition}
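Since the Bellman equation is linear in the value vector once $\pi$ is fixed, $V^{\pi^i,\pi^{-i}}$ can be computed by solving a linear system, as in the following minimal sketch (the identifiers are ours):
\begin{verbatim}
import numpy as np

def evaluate(P_pi, r_pi, gamma):
    # Solve V = r_pi + gamma * P_pi @ V, i.e.,
    # V = (I - gamma * P_pi)^{-1} r_pi, where
    # P_pi is the (S, S) transition matrix induced by pi and
    # r_pi is the (S,) expected one-step reward of agent i under pi.
    S = P_pi.shape[0]
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
\end{verbatim}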
\begin{definition}[Markov Perfect Equilibrium (MPE)]\label{definition: MPE}
A behavioral strategy profile $\pi$ is called a Markov Perfect Equilibrium if $$\forall s\in \mathbb{S}, i\in[n], \forall \tilde{\pi}^i\in \Delta_{A^i}^S, V^{\pi^i,\pi^{-i}}(s)\geq V^{\tilde{\pi}^i,\pi^{-i}}(s).$$
\end{definition}
\begin{definition}[$\epsilon$-approximate MPE]
Given $\epsilon>0$, a behavioral strategy profile $\pi$ is called an $\epsilon$-approximate MPE if $$\forall s\in \mathbb{S}, i\in[n], \forall \tilde{\pi}^i\in \Delta_{A^i}^S, V^{\pi^i,\pi^{-i}}(s)\geq V^{\tilde{\pi}^i,\pi^{-i}}(s)-\epsilon.$$
\end{definition}
The Markov perfect equilibrium is a solution concept for SGs in which the players' strategies depend only on the current state and not on the game history. The state thus encodes all information relevant to the players' strategies.
\section{The Class \textbf{PPAD} and {\sc Markov-Perfect Equilibrium} Problem}
\label{sec: PPAD and MPE problem}
The complexity class \textbf{PPAD} was introduced~\cite{papadimitriou1994complexity} to characterize the
mathematical proof structure of a class of problems in which a solution is guaranteed to exist by a parity argument, as in the following problem of {\sc End of the Line}.
It includes Nash equilibrium computation~\cite{daskalakis2009complexity,chen2006settling}, as well as many other problems.
The problem is defined on a class of directed graphs consisting of an exponential number of vertices (identified with the $2^n$ binary strings of length $n$). The edges of such a graph are defined by two polynomial-size circuits $S$ and $P$, each with $n$ input bits and $n$ output bits. There is an edge from vertex $u$ to vertex $v$ if and only if $S(u)=v$ and $P(v)=u$. Note that each vertex has indegree at most 1 and outdegree at most 1, which means that the graph consists only of paths, cycles, and isolated vertices.
\begin{definition}[$(S,P)$-Graph~\cite{goldberg2013complexity}]
An $(S,P)$-graph with parameter $n$ is a graph on $\{0,1\}^n$ specified by circuits $S$ and $P$, as described above, subject to the constraint that vertex $0^n$ has no incoming edge but does have an outgoing edge.
\end{definition}
Based on $(S,P)$-graphs, the problem {\sc End of the Line} is to find a vertex other than $0^n$ such that
the sum of its indegree and outdegree is one, while {\sc Other End of this Line} is to find the end of the particular path that starts at $0^n$~\cite{goldberg2013complexity}.
It turns out that the two problems are dramatically different in terms of their computational complexity. The former is \textbf{PPAD}-complete~\cite{papadimitriou1994complexity} but the latter is \textbf{PSPACE}-complete~\cite{goldberg2013complexity}.
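To make the conventions explicit, the following Python fragment treats the circuits $S$ and $P$ as black-box callables and checks the adjacency rule and the solution condition of {\sc End of the Line}; the encoding is our illustrative assumption.
\begin{verbatim}
def has_edge(S, P, u, v):
    # There is an edge u -> v iff S(u) = v and P(v) = u.
    return S(u) == v and P(v) == u

def is_solution(S, P, u, zero):
    # A solution is a vertex other than 0^n whose indegree plus
    # outdegree equals one. Note that *checking* a candidate is easy;
    # *finding* one is the hard part, and naively walking the path
    # from 0^n amounts to Other End of this Line.
    if u == zero:
        return False
    has_out = P(S(u)) == u   # u's successor points back to u
    has_in = S(P(u)) == u    # u's predecessor points forward to u
    return has_out != has_in
\end{verbatim}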
Here we give the definition of the computational problem of finding a Markov Perfect Equilibrium in Stochastic Games.
\begin{definition}[{\sc Markov-Perfect Equilibrium}]
The input instance of the problem {\sc Markov-Perfect Equilibrium} is a pair $(\mathcal{SG},L)$, where $\mathcal{SG}$ is a Stochastic Game and $L$ is an integer encoded in binary. The output of the problem {\sc Markov-Perfect Equilibrium} is a strategy profile $\pi\in\prod_{i=1}^n\Delta_{A^i}^S$ such that $\pi$ is a $1/L$-approximate MPE.
\end{definition}
\begin{theorem}[Main Theorem]\label{PPAD-complete}
{\sc Markov-Perfect Equilibrium} is \textbf{PPAD}-complete.
\end{theorem}
We note that when $S=1$ and $\gamma=0$, a Stochastic Game degenerates to an $n$-player matrix game. In this case, any Markov Perfect Equilibrium of the Stochastic Game is a Nash Equilibrium of the corresponding matrix game. So we have the following hardness result immediately:
\begin{lemma}\label{PPAD-hard}
{\sc Markov-Perfect Equilibrium} is \textbf{PPAD}-hard.
\end{lemma}
In the rest of the paper, we will mainly focus on the proof of \textbf{PPAD} membership of MPE.
\begin{lemma}\label{PPAD membership}
{\sc Markov-Perfect Equilibrium} is in \textbf{PPAD}.
\end{lemma}
\section{On the Existence of MPE}\label{Existence of MPE}
The original proof of the existence of MPE is from \cite{fink1964equilibrium}, mainly based on the Kakutani fixed point theorem. Here we give an alternative proof based on the Brouwer fixed point theorem, which also leads to our proof of \textbf{PPAD} membership of {\sc Markov-Perfect Equilibrium}.
Inspired by the continuous transformation defined by Nash to prove the existence of an equilibrium point \cite{nash1951}, we define a new function $f: \prod_{i=1}^n\Delta_{A^i}^S\rightarrow \prod_{i=1}^n\Delta_{A^i}^S$ for a Stochastic Game to establish the existence of an MPE. Let $V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)$ denote the value function of agent $i$ if agent $i$ uses the pure action $a^i$ at state $s$ and the mixed actions $\pi^i(s')$ at every state $s'\neq s$, while every other agent $j\neq i$ uses the strategy $\pi^j$.
Let $\pi\in \prod_{i=1}^n\Delta_{A^i}^S$ be a strategy profile. Then for each player $i\in [n]$, each state $s\in\mathbb{S}$ and each action $a^i\in\mathbb{A}^i$, the modification of $\pi^i(s,a^i)$ is defined as follows: $$\left(f(\pi)\right)^i(s,a^i)=\frac{\pi^i(s,a^i)+\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)}{1+\sum_{b^i\in\mathbb{A}^i}\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)}.$$
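One coordinate block of $f$ is a direct transcription of this formula; a minimal sketch follows (computing the required gains amounts to policy evaluation, e.g., via the linear-solve sketch given after \Cref{definition: value function}; the identifiers are ours):
\begin{verbatim}
import numpy as np

def nash_map_block(pi_is, gains):
    # pi_is : current mixed action pi^i(s, .), shape (A_i,)
    # gains : V_{pi^i(s,a)=1}(s) - V(s) for each action a, shape (A_i,)
    # Returns the updated distribution (f(pi))^i(s, .).
    plus = np.maximum(0.0, gains)
    return (pi_is + plus) / (1.0 + plus.sum())
\end{verbatim}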
We define the distance of two strategy profiles $\pi_1$ and $\pi_2$, denoted by $\|\pi_1-\pi_2\|_{\infty}$, as follows. $\|\pi_1-\pi_2\|_{\infty}=\max_{i\in[n], s\in\mathbb{S},a^i\in\mathbb{A}^i}|\pi_1^i(s,a^i)-\pi_2^i(s,a^i)|$.
We first prove that the function $f$ satisfies a continuity property, namely that it is \textit{$\lambda$-Lipschitz}, where $\lambda$ is defined as $\frac{9nS^2A_{\max}^2R_{\max}}{(1-\gamma)^2}$. The proof of \Cref{lemma: f Lipschitz} is challenging because the value function $V^{\pi^i,\pi^{-i}}$ is defined recursively via the Bellman equation. It can be written informally as $V^{\pi^i,\pi^{-i}}=(I-\gamma P^{\pi})^{-1}r^{i,\pi}$, which is not linear in $\pi^i$ even for each fixed $\pi^{-i}$. We refer the interested reader to \Cref{proof of sec 4} for a complete proof, whose techniques might be of independent interest.
\begin{restatable}{lemma}{fLipschitz}
\label{lemma: f Lipschitz}
The function $f$ is $\lambda$-Lipschitz, i.e., for every $\pi_1,\pi_2\in \prod_{i=1}^n\Delta_{A^i}^S$ such that $\left\|\pi_1-\pi_2\right\|_{\infty}\leq \delta$, we have $$\Big\|f(\pi_1)-f(\pi_2)\Big\|_{\infty}\leq \frac{9nS^2A_{\max}^2R_{\max}}{(1-\gamma)^2} \delta.$$
\end{restatable}
Now we could establish the existence of MPE by the Brouwer fixed point theorem.
\begin{theorem}\label{theorem: MPE exists}
For any Stochastic Game $\left\langle n,\mathbb{S},\mathbb{A},P,r,\gamma\right\rangle$, a strategy profile $\pi$ is an MPE if and only if it is a fixed point of the function $f$, i.e., $f(\pi)=\pi$. Furthermore, the function $f$ has at least one fixed point.
\end{theorem}
\begin{proof}
We first show that the function $f$ has at least one fixed point. The Brouwer fixed point theorem states that any continuous function mapping a compact convex set to itself has a fixed point. Notice that $f$ is a function mapping a compact convex set to itself. Also, $f$ is continuous by \Cref{lemma: f Lipschitz}. So the function $f$ has at least one fixed point.
We then prove a strategy profile $\pi$ is MPE if and only if it is a fixed point.
The proof of the necessity part is immediate from the definition of MPE (\Cref{definition: MPE}). If $\pi$ is an MPE, then for each player $i\in [n]$, each state $s\in\mathbb{S}$ and each action $a^i\in\mathbb{A}^i$, we have $V^{\pi^i,\pi^{-i}}(s)\geq V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)$, which means $\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=0$. Then for each player $i\in [n]$, each state $s\in\mathbb{S}$ and each action $a^i\in\mathbb{A}^i$, $\left(f(\pi)\right)^i(s,a^i)=\pi^i(s,a^i)$, which means $\pi$ is a fixed point of $f$.
For the proof of the sufficiency part, suppose that $\pi$ is a fixed point of $f$. Then we have for each player $i\in [n]$, each state $s\in\mathbb{S}$ and each action $a^i\in\mathbb{A}^i$
\begin{eqnarray*}
&&\pi^i(s,a^i)=\frac{\pi^i(s,a^i)+\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)}{1+\sum_{b^i\in\mathbb{A}^i}\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)}\\
&\Longrightarrow& \pi^i(s,a^i)\sum_{b^i\in\mathbb{A}^i}\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right).
\end{eqnarray*}
Pick arbitrarily $$a^{i,*}\in\mathop{\arg\min}_{b^i\in\mathbb{A}^i, \pi^i(s,b^i)>0}V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s).$$ It is not hard to prove $\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^{i,*})=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=0$, which means
\begin{eqnarray*}
&& \pi^i(s,a^{i,*})\sum_{b^i\in\mathbb{A}^i\setminus\{a^{i,*}\}}\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=0\\
&\Longrightarrow& \sum_{b^i\in\mathbb{A}^i\setminus\{a^{i,*}\}}\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=0\\
&\Longrightarrow& \forall b^i\in\mathbb{A}^i\setminus\{a^{i,*}\}, \max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=0.
\end{eqnarray*}
So we have $\forall b^i\in \mathbb{A}^i, \max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,b^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=0$, i.e., for any state $s\in \mathbb{S}$,
\begin{equation}\label{equation: pi^i in argmax}
\pi^i\in\mathop{\arg\max}_{\substack{\pi^{i,*}\in \Delta_{A^i}^S\\\forall s'\neq s,\pi^{i,*}(s')=\pi^i(s')}}V^{\pi^{i,*},\pi^{-i}}(s).
\end{equation}
Note that if we fix the strategy profile of all agents other than agent $i$, then for agent $i$ the game is essentially a Markov decision process. By \Cref{equation: pi^i in argmax}, we know that $\pi^i$ is an optimal policy of agent $i$, which means $$\forall s\in \mathbb{S}, i\in[n], \forall \tilde{\pi}^i\in \Delta_{A^i}^S, V^{\pi^i,\pi^{-i}}(s)\geq V^{\tilde{\pi}^i,\pi^{-i}}(s),$$ i.e., $\pi$ is an MPE of the Stochastic Game.
\end{proof}
\section{\textbf{PPAD} Membership of {\sc Markov-Perfect Equilibrium}}\label{section: membership}
In this section, we will prove the \textbf{PPAD} membership of {\sc Markov-Perfect Equilibrium} by reducing it to {\sc End of the Line}. We highlight our approximation guarantee proof (\Cref{appoximation gua}), which includes several innovative insights into Markov Decision Processes and Stochastic Games. The construction of the graph of {\sc End of the Line} is relatively standard and follows the simplicial approximation algorithm of Laan and Talman \cite{LaanT82computation}; it is provided in \Cref{construct graph}.
\subsection{The Approximation Guarantee}\label{appoximation gua}
In \Cref{Existence of MPE}, \Cref{theorem: MPE exists} states that $f$ has a fixed point $\pi$ if and only if $\pi$ is an MPE for the Stochastic Game. Now we will prove $f$ has some good approximation properties beyond that: if we find an $\epsilon$-approximate fixed point $\pi$ of $f$, then it is also a $\texttt{poly}(|\mathcal{SG}|)\sqrt{\epsilon}$-approximate MPE for the Stochastic Game (combining \Cref{lemma: single state} and \Cref{lemma: all states}).
Moreover, we also obtain \Cref{coro: approxi MDP}, which leads to a better understanding of Markov Decision Processes and might be of independent interest. The statement of \Cref{coro: approxi MDP} is as follows. Let $\epsilon>0$ and $\pi$ be a (not necessarily deterministic) policy. If, for every starting state $s_0\in\mathbb{S}$, the agent can gain at most $\epsilon$ more value by changing only its action at $s_0$, then the agent can gain at most $\epsilon/(1-\gamma)$ more value even by switching to an optimal policy, i.e., $\pi$ is a good approximation of the MDP optimum.
The formal statements of the lemmas and their proofs are as follows. The proof of \Cref{lemma: single state} is in \Cref{proof of single state}.
\begin{restatable}{lemma}{apprsinglestate}\label{lemma: single state}
Let $\epsilon>0$ and $\pi$ be a strategy profile. If $\|f(\pi)-\pi\|_{\infty}\leq \epsilon$, then for each player $i\in [n]$, each state $s\in \mathbb{S}$ and each action $a^i\in \mathbb{A}^i$, we have $$\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)\leq A_{\max}\left(\frac{\sqrt{\epsilon'}}{1-\gamma}+R_{\max}\sqrt{\epsilon'}+\epsilon'\right),$$ where $\epsilon'=\epsilon\left(1+\dfrac{A_{\max}R_{\max}}{1-\gamma}\right).$
\end{restatable}
\begin{lemma}\label{lemma: all states}
Let $\epsilon>0$ and $\pi$ be a strategy profile. If for each player $i\in [n]$, each state $s\in \mathbb{S}$ and each action $a^i\in \mathbb{A}^i$, $\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)\leq \epsilon$, then $\pi$ is an $\epsilon/(1-\gamma)$-approximate MPE.
\end{lemma}
\begin{proof}
Pick any player $i\in [n]$; it is sufficient to prove that $\forall s\in \mathbb{S}, \forall \tilde{\pi}^i\in \Delta_{A^i}^S, V^{\pi^i,\pi^{-i}}(s)\geq V^{\tilde{\pi}^i,\pi^{-i}}(s)-\epsilon/(1-\gamma).$ Suppose that $\max_{a^i\in\mathbb{A}^i}\max\left(0,V^{\pi^i,\pi^{-i}}_{\pi^i(s,a^i)=1}(s)-V^{\pi^i,\pi^{-i}}(s)\right)=\epsilon(s).$ Consider the following linear program:
\begin{equation}\label{equation: optimal lp}
\begin{aligned}
\min \quad& \sum_{s\in\mathbb{S}} V(s)& \\
\text{s.t.,} \quad & V(s)\geq r^{i,\pi^{-i}}(s,a^i)+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)V(s')&\forall s\in \mathbb{S}, a^i\in \mathbb{A}^i.
\end{aligned}
\end{equation}
Let $V^*$ be the solution of the linear program (\ref{equation: optimal lp}). It satisfies $$V^*(s)=\max_{a^i\in\mathbb{A}^i}\left(r^{i,\pi^{-i}}(s,a^i)+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)V^*(s')\right),$$ which is also the value function of player $i$ when she uses the optimal policy given others' strategy profile $\pi^{-i}$. (Note that when we are given $\pi^{-i}$, it is essentially a Markov Decision Process for player $i$. So we are using linear programming to solve this MDP.)
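As a concrete illustration, the linear program (\ref{equation: optimal lp}) can be solved with an off-the-shelf LP solver; a minimal sketch using SciPy follows (the flattened encoding and all identifiers are our assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_mdp_lp(P, r, gamma):
    # min sum_s V(s)  s.t.  V(s) >= r[s,a] + gamma * P[s,a] @ V
    # for the MDP faced by player i once pi^{-i} is fixed.
    # P has shape (S, A, S); r has shape (S, A).
    S, A, _ = P.shape
    A_ub = np.zeros((S * A, S))
    b_ub = np.zeros(S * A)
    for s in range(S):
        for a in range(A):
            row = gamma * P[s, a]
            row[s] -= 1.0              # encodes -(V(s) - gamma*P@V)
            A_ub[s * A + a] = row
            b_ub[s * A + a] = -r[s, a]
    res = linprog(c=np.ones(S), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * S)
    return res.x                       # the optimal values V*(s)
\end{verbatim}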
Now look at the other linear program:
\begin{equation}\label{equ: approxmate lp}
\begin{aligned}
\min \quad& \sum_{s\in\mathbb{S}} V(s)& \\
\text{s.t.,} \quad & V(s)\geq r^{i,\pi^{-i}}(s,a^i)-\epsilon(s)+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)V(s')&\forall s\in \mathbb{S}, a^i\in \mathbb{A}^i.
\end{aligned}
\end{equation}
Let $V'$ be the solution of the linear program (\ref{equ: approxmate lp}). It satisfies $$V'(s)=\max_{a^i\in\mathbb{A}^i}\left(r^{i,\pi^{-i}}(s,a^i)+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)V'(s')\right)-\epsilon(s),$$ which is also the value function of player $i$ under the strategy profile $\pi$.
Now it is sufficient for us to bound $V^*(s)-V'(s), \forall s\in\mathbb{S}$. Let $\epsilon_{\max}=\max_{s\in\mathbb{S}}\epsilon(s)$. Construct a new value vector for the player $i$: $\tilde{V}(s)=V'(s)+\epsilon_{\max}/(1-\gamma)$. Then we have
\begin{eqnarray*}
&&V'(s)\geq r^{i,\pi^{-i}}(s,a^i)-\epsilon(s)+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)V'(s')\\
&\Longleftrightarrow& V'(s)+\frac{\epsilon_{\max}}{1-\gamma}\geq r^{i,\pi^{-i}}(s,a^i)-\epsilon(s)+\frac{\epsilon_{\max}}{1-\gamma}+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)V'(s')\\
&\Longleftrightarrow& V'(s)+\frac{\epsilon_{\max}}{1-\gamma}\geq r^{i,\pi^{-i}}(s,a^i)-\epsilon(s)+\epsilon_{\max}+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)\left(V'(s')+\frac{\epsilon_{\max}}{1-\gamma}\right)\\
&\Longleftrightarrow& \tilde{V}(s)\geq r^{i,\pi^{-i}}(s,a^i)-\epsilon(s)+\epsilon_{\max}+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)\tilde{V}(s')\\
&\Longrightarrow& \tilde{V}(s)\geq r^{i,\pi^{-i}}(s,a^i)+\gamma\sum_{s'\in\mathbb{S}}P^{\pi^{-i}}(s'|s,a^i)\tilde{V}(s').
\end{eqnarray*}
So $\tilde{V}$ is a feasible solution of the linear program (\ref{equation: optimal lp}), which means $V^*(s)\leq \tilde{V}(s)$ for any $s\in\mathbb{S}$. Then we have $$V^*(s)-V'(s)\leq \tilde{V}(s)-V'(s)=\epsilon_{\max}/(1-\gamma),$$ i.e., the difference between the optimal value $V^*(s)$ and $V^{\pi^i,\pi^{-i}}(s)$ is upper bounded by $\epsilon/(1-\gamma)$. The argument above applies to any player. So by the definition of $\epsilon$-approximate MPE, we know that $\pi$ is an $\epsilon/(1-\gamma)$-approximate MPE.
\end{proof}
\begin{corollary}\label{coro: approxi MDP}
Let $\epsilon>0$ and $\pi$ be a (not necessarily deterministic) policy of the agent. If for each state $s\in \mathbb{S}$ and each action $a\in \mathbb{A}$ (where $\mathbb{A}$ is the action space of the agent), $\max\left(0,V^{\pi}_{\pi(s,a)=1}(s)-V^{\pi}(s)\right)\leq \epsilon$, then $\pi$ is an $\epsilon/(1-\gamma)$-approximate optimal policy for the MDP.
\end{corollary}
\subsection{Constructing the {\sc End of the Line} Graph}\label{construct graph}
In this section, we give an outline of our reduction from {\sc Markov-Perfect Equilibrium} to {\sc End of the Line}, with the help of the simplicial approximation algorithm of Laan and Talman~\cite{LaanT82computation}. We focus on the correctness of the reduction, leaving the details of how to construct the vertices to the appendix.
Recall that the input instance of {\sc Markov-Perfect Equilibrium} is a pair $(\mathcal{SG},L)$. Let $d$ be an integer, to be defined later so as to ensure that we can find a $1/L$-approximate MPE.
For each $i\in[n]$, define $\Delta_{A^i}(d)$ as the set of points of $\Delta_{A^i}$ induced by the regular grid of size $d$, i.e., $$\Delta_{A^i}(d)=\left\{x\in\Delta_{A^i}|x_j=y_j/d,y_j\in\mathbb{Z}_{\geq 0},\sum_{j=1}^{A^i}y_j=d \right\}.$$ Similarly, define $\Delta_{A^i}^k(d):=\times_{p=1}^k\Delta_{A^i}(d)$.
\textbf{The Vertices of the {\sc End of the Line} Graph.} The set of vertices $\Sigma$ is a set of simplices defined on $\prod_{i=1}^n \Delta_{A^i}^S(d)$, each of which can be encoded as a string in $\{0,1\}^N$, where $N$ is polynomial in $|\mathcal{SG}|$ and $\log d$. The formal definition of $\Sigma$ is in \Cref{vertices of end of the line}.
\textbf{Labelling the Grid Points.} We will give each point in $\prod_{i=1}^n \Delta_{A^i}^S(d)$ a label, which will be an element of the set $\mathcal{L}:=\left\{(i,s,a^i)\,\middle|\,i\in[n],\,s\in\mathbb{S},\,a^i\in\mathbb{A}^i\right\}$.
Without loss of generality, we assign a number to the state set $\mathbb{S}$ and action set $\mathbb{A}^i$ for each $i\in[n]$ arbitrarily for the purpose of labelling. Suppose that $\mathbb{S}=\{s_1,\cdots,s_S\}$ and $\mathbb{A}^i=\{a^i_1,\cdots,a^i_{A^i}\}$.
For each strategy profile $\pi\in \prod_{i=1}^n \Delta_{A^i}^S(d)$, $\pi$ receives the label $(i,s_j,a^i_k)$ if and only if $(i,s_j,a^i_k)$ is the lexicographically least index such that $\pi^i(s_j,a^i_k)>0$ and $$(f(\pi))^i(s_j,a^i_k)-\pi^i(s_j,a^i_k)\leq (f(\pi))^{i'}(s_{j'},a^{i'}_{k'})-\pi^{i'}(s_{j'},a^{i'}_{k'})$$ for all $i'\in[n],s_{j'}\in\mathbb{S}$ and $a^{i'}_{k'}\in\mathbb{A}^{i'}$.
Note that each strategy profile $\pi\in \prod_{i=1}^n \Delta_{A^i}^S(d)$ has exactly one label, which we denote by $l(\pi)$. Since the function $f$ can be computed in time polynomial in $N$ and $|\mathcal{SG}|$, the label can also be computed in time polynomial in $|\mathcal{SG}|$ and $\log d$. The labelling rule is also proper in the sense that $l(\pi)\neq (i,s_j,a^i_k)$ if $\pi^i(s_j,a^i_k)=0$.
A simplex $\sigma\in\Sigma$ is called completely labelled if all its vertices\footnote{Please distinguish the vertices of a simplex from the vertices of the {\sc End of the Line} graph.} have different labels. A completely labelled simplex $\sigma$ is called $(i,s_j)$-stopping if for each $a^i_k\in\mathbb{A}^i$, there exists $\pi\in\sigma$ such that $l(\pi)=(i,s_j,a^i_k)$. Further, a completely labelled simplex $\sigma$ is called stopping if there exist $i\in[n]$ and $s_j\in\mathbb{S}$ such that $\sigma$ is $(i,s_j)$-stopping.
The following lemma asserts that if we can find a stopping simplex, then we can find a $\texttt{poly}(|\mathcal{SG}|)/d$-approximate fixed point. The proof is in \Cref{proof of stopping appro}.
\begin{restatable}{lemma}{approstopping}{(\rm \cite{LaanT82computation})}\label{lemma: stopping appro}
Suppose that a simplex $\sigma\in\Sigma$ is $(i,s)$-stopping for $i\in[n]$ and $s\in\mathbb{S}$. Then for any strategy profile $\pi\in\sigma$, we have $$\|f(\pi)-\pi\|_{\infty}\leq A_{\max}^2(\lambda+1)\frac{1}{d}.$$
\end{restatable}
\textbf{The Choice of $d$.} Let $$d=\dfrac{32A_{\max}^5R_{\max}^3(\lambda+1)}{(1-\gamma)^5}L^2.$$ It is easy to see $d$ is $\texttt{poly}(|\mathcal{SG}|,L)$. The correctness of our choice is in \Cref{correct choice of d}.
\textbf{The Edges of {\sc End of the Line} Graph.} In the algorithm of Laan and Talman~\cite{LaanT82computation}, they develop a partial one-to-one function $g: \Sigma'\rightarrow\Sigma'$ for $\Sigma'\subseteq\Sigma$ as well as a starting simplex $\sigma_0\in\Sigma$, which have the following properties:
\begin{itemize}
\item $\sigma_0\in\Sigma'$ and there is no $\sigma'\in\Sigma'$ such that $g(\sigma')=\sigma_0$;
\item For any $\sigma\in\Sigma'$, if $\sigma$ has no image, then $\sigma$ is a stopping simplex. For any $\sigma\in\Sigma'\setminus\{\sigma_0\}$, if $\sigma$ has no pre-image, then $\sigma$ is a stopping simplex.
\item the functions $g$ and $g^{-1}$ can be computed in time polynomial in $|\mathcal{SG}|$ and $\log d$.
\end{itemize}
For the purpose of constructing the {\sc End of the Line} graph, we complete the function $g$ by letting $g(\sigma)=\sigma$ for any $\sigma\in\Sigma\setminus\Sigma'$. It is easy to verify that this operation does not affect the properties of the function $g$. Thus, for any input instance $(\mathcal{SG},L)$, we can reduce it to an instance of {\sc End of the Line}, where the two circuits $S$ and $P$ correspond to $g$ and $g^{-1}$. If we find a solution of the {\sc End of the Line} instance, by \Cref{lemma: stopping appro} we know that there is an $A_{\max}^2(\lambda+1)\frac{1}{d}$-approximate fixed point in the solution simplex, and thus a $1/L$-approximate MPE by \Cref{lemma: single state}, \Cref{lemma: all states}, and our choice of $d$.
\section{Conclusion}
Solving for a Markov Perfect Equilibrium (MPE) in general-sum stochastic games (SGs) has long been expected to be at least $\textbf{PPAD}$-hard.
In this paper,
we prove that computing an MPE in finite-state infinite-horizon discounted SGs is $\textbf{PPAD}$-complete.
Our proof is novel in that we adopt a function with a polynomially-bounded description over the behavioral strategy space, which effectively converts the MPE computation problem into a fixed-point problem; a mixed-strategy representation would otherwise require a number of pure strategies exponential in the number of states and the number of agents.
Our completeness result indicates that computing MPE in SGs is highly unlikely to be $\textbf{NP}$-hard.
We hope our results can encourage MARL researchers to study solving for MPE in general-sum SGs, leading to algorithmic developments as prosperous as those currently available for zero-sum SGs.
\clearpage
\bibliographystyle{plain}
\section{Conclusion}
We present TANDEM, a new architecture that learns an active and efficient exploration policy together with task-related decision making. Our approach consists of distinct modules for exploration, discrimination, and world encoding. Even though our approach separates exploration and discrimination, the two are co-trained in an interleaved manner. The explorer learns to efficiently reveal information useful to the discriminator, and the discriminator adapts to the partial observability of the labeled data collected by the explorer. We show that they co-evolve and converge at the end of the training process. We demonstrate our method on tactile object recognition and compare our approach against multiple baselines for exploration (such as edge-following and info-gain) and recognition (such as ICP). Our experiments show that TANDEM recognizes objects with a higher success rate and a lower number of movements. Our real-robot experiments demonstrate that our approach, despite being trained purely in simulation, transfers well to the real hardware and is robust to sensor noise. \addedtext{1-7}{Future directions include generalizing to high-dimensional tactile data and extending our framework to also estimate object orientations and locations along with object identities.}
\section{Problem Definition}
We want to train an efficient tactile exploration policy, and we demonstrate our framework on the task of tactile object recognition. The task is to identify the object from a set of known objects using only tactile sensory feedback. We want the robot to be able to accurately identify the object with as little movement as possible. Thus, performance is measured by both the accuracy and the number of actions executed by the robot.
Our tactile finger moves on a 2D plane and is always perpendicular to the plane (Fig.~\ref{fig:teaser}). We assume that the target object is placed roughly at the center of the workspace in an arbitrary orientation. The object is fixed and does not move when interacting with the finger. At each time step $t$, the robot can execute an action $a_t \in \mathcal{A} = \{\text{up, right, down, left}\}$ which corresponds to a 5mm translation in one of the 4 directions on the plane. After each action, the robot receives a binary tactile signal $s_t \in \{0, 1\}$, where 0 indicates collision and 1 indicates collision-free. The robot at each time step needs to (1) encode the sequence of binary collision signals, (2) predict the label of the object, and (3) decide whether to terminate the episode based on its confidence level.
\section{Experiments}
\begin{table*}
\caption{Comparative performance of various methods in simulation under 0.1\% and 0.5\% sensor failure rate. For each method, we present the number of actions taken (\#Actions) and the number of pixels explored (\#Explored Pixels) before making a prediction, as well as the success rate in identifying the correct object (Success Rate). Mean and standard deviation over 1,000 trials are shown. A detailed description of each method can be found in Sec.~\ref{sec:methods}.}
\label{tab:results}
\centering
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}{*}{\textbf{Methods}} & \multicolumn{3}{c|}{\textit{0.1\% Sensor Failure}} & \multicolumn{3}{c}{\textit{0.5\% Sensor Failure}} \\
& \#\textbf{Actions} & \textbf{\#Explored Pixels} & \textbf{Success Rate} & \#\textbf{Actions} & \textbf{\#Explored Pixels} & \textbf{Success Rate} \\
\midrule
Random-walk & 1427 $\pm$ 654.8 & 354.8 $\pm$ 148.9 & 0.31 & 1350 $\pm$ 667.4 & 338.3 $\pm$ 148.5 & 0.27 \\
Not-go-back & 684.5 $\pm$ 565.9 & 466.6 $\pm$ 320.4 & 0.49 & 621.4 $\pm$ 524.7 & 427.9 $\pm$ 293.8 & 0.43 \\
Info-gain & 435.1 $\pm$ 397.5 & 341.7 $\pm$ 250.3 & 0.45 & 365.1 $\pm$ 360.6 & 291.2 $\pm$ 232.2 & 0.42 \\
Edge-follower & 60.05 $\pm$ 218.6 & 33.01 $\pm$ 15.95 & 0.91 & 95.24 $\pm$ 282.5 & 32.48 $\pm$ 32.81 & 0.75 \\
Edge-ICP & 136.1 $\pm$ 339.1 & 72.29 $\pm$ 16.78 & 0.94 & 400.6 $\pm$ 719.4 & 75.63 $\pm$ 41.35 & 0.81 \\
PPO-ICP& 921.2 $\pm$ 679.1 & 286.2 $\pm$ 189.6 & 0.35 & 860.4 $\pm$ 698.3 & 231.7 $\pm$ 172.4 & 0.31 \\
All-in-one & 28.63 $\pm$ 207.8 & 3.827 $\pm$ 6.735 & 0.23 & 66.05 $\pm$ 328.0 & 6.229 $\pm$ 15.15 & 0.22 \\
TANDEM (ours) & 54.97 $\pm$ 106.5 & 44.74 $\pm$ 37.32 & 0.96 & 64.76 $\pm$ 109.3 & 49.71 $\pm$ 36.27 & 0.95 \\
\bottomrule
\end{tabular}
\vspace{-0.1in}
\end{table*}
In this section, we describe our experimental setup, in both simulation and the real world\footnote{For real-world video demonstrations or more information, please visit our project website at \url{https://jxu.ai/tandem}.}. Our method is trained entirely in simulation; it can then be tested either in simulation or on a real robot. We present an extensive set of comparisons against a number of baselines in simulation, then validate the performance of our method on real hardware.
\subsection{Setup}
Our experiments assume a tactile finger that moves on a 30cm by 30cm plane and is always perpendicular to the plane (Fig.~\ref{fig:teaser}). The target object is placed roughly at the center of the workspace in any random orientation. The object is fixed and does not move after interaction with the finger. At each time step $t$, the robot can execute an action $a_t \in \mathcal{A} = \{\text{up, right, down, left}\}$ which corresponds to a 5mm translation in the 4 directions on the plane. \remindtext{1-1b}{After each action, the robot receives a binary collision signal $s_t \in \{0, 1\}$, where 0 indicates collision and 1 indicates collision-free. As described above, this information is encoded in an occupancy grid with a 5mm cell size.}
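The grid bookkeeping can be summarized by the following Python sketch; the cell codes and the convention that the finger stays in place when a collision is sensed are our assumptions for illustration.
\begin{verbatim}
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2   # 60 x 60 grid of 5 mm cells

def update_grid(grid, pos, action, signal):
    # Apply one 5 mm move and record the binary tactile signal:
    # signal == 1 means collision-free, signal == 0 means collision.
    moves = {"up": (-1, 0), "right": (0, 1),
             "down": (1, 0), "left": (0, -1)}
    dr, dc = moves[action]
    target = (pos[0] + dr, pos[1] + dc)
    grid[target] = FREE if signal == 1 else OCCUPIED
    # The finger only advances when the move was collision-free.
    return target if signal == 1 else pos
\end{verbatim}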
In real-world experiments, we use the DISCO finger~\cite{piacenza2020sensorized} as our tactile sensor (Fig.~\ref{fig:teaser}), but discard additional tactile information (such as contact force magnitude) and only rely on touch/no-touch data. We mount the finger on a UR5 robot arm. For simulation, we use the PyBullet engine and assume a floating finger with similar tactile capabilities.
Sensor noise is an important consideration since most real-world tactile sensors exhibit some level of noise in their readings, and ours is no exception. It is important for any tactile-based methods to be able to handle erroneous readings without compromising efficiency or accuracy. In particular, we found through empirical observations of our sensors that the chance of an incorrect touch signal being reported is around 0.3\% - 0.5\%. We thus compared all the methods presented below for relevant levels of tactile sensor noise. For learning-based methods, we also have the option of simulating noise during the training process in order to increase robustness; in our case, we simulate a 0.5\% sensor failure rate in the co-training process for our method.
We generate 10 polygons with random shapes as our test objects, as shown in Fig.~\ref{fig:teaser}. These polygons are generated by walking around a circle, taking a random angular step each time, and at each step placing a point at a random radius. The maximum number of edges is 8 and the maximum radius for each sampled point is 10cm. We 3D-print these polygons for real-world experiments or use their triangular meshes for the simulated versions. For simulation, we decompose each polygon into a set of convex parts for collision checking.
Each episode is terminated when the confidence of the discriminator is greater than the preset threshold of 0.98 or the number of actions has exceeded 2,000. At termination, the prediction of the discriminator is compared to the ground truth identification of that object to check success.
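A schematic of the resulting episode loop is sketched below; the explorer, discriminator, and environment interfaces are simplified placeholders rather than our actual training code.
\begin{verbatim}
def run_episode(explorer, discriminator, env,
                conf_thresh=0.98, max_steps=2000):
    grid = env.reset()
    for _ in range(max_steps):
        probs = discriminator(grid)   # distribution over 10 objects
        if probs.max() >= conf_thresh:
            break                     # confident enough to stop
        grid = env.step(explorer(grid))
    return int(probs.argmax()), grid  # predicted label, final grid
\end{verbatim}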
\subsection{Training}
We train our proposed method entirely in simulation. In each co-training iteration, the discriminator is trained for $N_d=15$ epochs on the data buffer of size $|\mathcal{D}|=1e^{6}$, and the explorer is trained for $N_e=2e^5$ steps. A 0.5\% sensor failure noise is applied during training.
Fig.~\ref{fig:relationship} shows the training plots during our co-training process. Our method's ability to correctly recognize the object (success rate) improves consistently during the process; however, the number of actions the explorer needs to make the discriminator confident starts low, first increases, and then drops after peaking at around 5M steps. Our discriminator is initialized randomly, and when training starts, it makes bold decisions and terminates the exploration quickly. This is why the number of actions starts low and the success rate is also poor at the beginning. However, as more and more labeled counter-examples of such wrong terminations are gathered by the explorer and added to the discriminator's data buffer, the discriminator becomes cautious, and thus the number of actions required to make it confident grows. At around 5M steps, a discriminator good enough for the explorer and discriminator to start co-evolving is obtained, and the two continue to co-evolve until convergence.
\subsection{Baselines}
\label{sec:methods}
In order to evaluate the effectiveness of our learned exploration policy on the tactile object recognition task, we choose to compare our approach to learned all-in-one (without separating exploration and discrimination) and non-learned (heuristic-based) baselines. The metrics that we are most interested in are the number of actions and the success rate in accurately identifying the objects. The methods we evaluate are as follows:
\subsubsection{Random-walk} This method generates a random move at each step. A discriminator is trained with this exploration policy for object identification and for terminating the exploration. We apply a 0.5\% sensor failure rate during training.
\subsubsection{Not-go-back} Similar to \textit{Random-walk}, except that the random move generated at each time step is always to an unexplored neighboring pixel.
\subsubsection{Info-gain} This method uses the info-gain heuristic: it also picks an action that leads to an unexplored pixel, but, unlike \textit{Not-go-back}, which picks it randomly, it picks the action that provides the most salient information. At time step $t$, let $\mathbf{p}$ denote the probability distribution over the 10 objects predicted by the discriminator on the current grid. Let $\mathbf{p_w}$ and $\mathbf{p_b}$ denote the new probability distributions if the newly explored pixel turns out to be white or black, respectively, after applying a particular action. Then the action $a_t$ is chosen by:
\begin{equation*}
a_t = \argmax_{a \in \mathcal{A}} \: \bigg\{ \mathcal{H} (\mathbf{p}) - \left(\frac{1}{2} \mathcal{H} (\mathbf{p_w}) + \frac{1}{2} \mathcal{H} (\mathbf{p_b})\right) \bigg\}
\end{equation*}
where $\mathcal{H}$ denotes the entropy of a probability distribution. This method uses entropy as a measure of uncertainty and picks the action that provides the largest information gain (i.e., reduces uncertainty the most). A discriminator is trained, and we apply a 0.5\% sensor failure rate during training.
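A minimal sketch of this action-selection rule follows, reusing the cell codes from the grid sketch above; the helper names are ours.
\begin{verbatim}
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def with_cell(grid, cell, value):
    g = grid.copy()
    g[cell] = value
    return g

def info_gain_action(grid, discriminator, candidates):
    # candidates: (action, unexplored_cell) pairs; each move is scored
    # by H(p) - (H(p_w) + H(p_b)) / 2 and the best action is returned.
    h = entropy(discriminator(grid))
    def gain(item):
        _, cell = item
        h_w = entropy(discriminator(with_cell(grid, cell, FREE)))
        h_b = entropy(discriminator(with_cell(grid, cell, OCCUPIED)))
        return h - 0.5 * (h_w + h_b)
    return max(candidates, key=gain)[0]
\end{verbatim}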
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{imgs/noise_demo.pdf}
\caption{(a) Performance of \textit{TANDEM} and \textit{Edge-follower} as the sensor failure rate increases from 0.6\% to 2.5\%. For \#Actions, $\pm$ 0.1 standard deviation is shaded. For Effective Action Rate, $\pm$ 0.2 standard deviation is shaded. With higher sensor noise, both methods need more actions. However, \textit{TANDEM} retains a high success rate and action efficiency while those of \textit{Edge-follower} deteriorate continuously. (b) Exploration behavior of \textit{TANDEM} and \textit{Edge-follower} when sensor failure happens. The location of the sensor failure is circled in red (in the simulation we can ensure it occurs at the same location for both methods). (i)(iii) show a sensor failure after contacting object 1, and (ii)(iv) show a sensor failure before contacting object 5. For these two examples, \textit{Edge-follower} makes the wrong prediction with 39 and 6 actions while \textit{TANDEM} correctly identifies the objects with 38 and 79 actions respectively.}
\label{fig:noise_demo}
\vspace{-0.15in}
\end{figure}
\subsubsection{Edge-follower} This method uses the popular contour-following heuristic as the exploration policy. A discriminator is trained in this method, but we do not apply sensor noise during training. We notice that when applying sensor noise during training, the performance of the \textit{Edge-follower} drops significantly. This is because the \textit{Edge-follower} can sometimes get trapped at locations where a collision-free pixel is identified as a collision and start circling that pixel. In such a case, unlike other methods such as \textit{Random-walk} and \textit{Not-go-back}, the \textit{Edge-follower} cannot keep exploring with random actions. Thus, the discriminator trained with the \textit{Edge-follower} becomes unnecessarily cautious, while its exploration policy is unable to increase the discriminator's confidence.
\subsubsection{Edge-ICP} This method uses the same exploration policy as \textit{Edge-follower}. However, instead of training a learning-based discriminator, it uses the Iterative Closest Point (ICP) algorithm. The occupancy grid is converted to a point cloud using the center location of each pixel. The discriminator runs ICP to match the point cloud to each object using 36 different initial orientations evenly spaced in [0\degree, 360\degree). For each object, the minimum error among all orientations represents the matching quality. If the error is smaller than 0.0025cm, the object is marked as a match. The output probability distribution assigns equal probabilities to the matched objects and zero to the unmatched ones. There is no training required for this method.
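For illustration, the orientation sweep at the core of this baseline can be sketched as below; for brevity we replace the iterative ICP refinement by a pure rotation sweep scored by the mean nearest-neighbor distance, so this is a simplified stand-in rather than the exact baseline.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def sweep_match_error(observed_pts, model_pts, n_orient=36):
    # Rotate the model through n_orient evenly spaced orientations and
    # return the smallest mean nearest-neighbor distance from the
    # observed contact points to the rotated model point cloud.
    errors = []
    for k in range(n_orient):
        t = 2.0 * np.pi * k / n_orient
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        d, _ = cKDTree(model_pts @ R.T).query(observed_pts)
        errors.append(d.mean())
    return min(errors)
\end{verbatim}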
\subsubsection{PPO-ICP} This method trains a PPO explorer using the ICP discriminator as in \textit{Edge-ICP}. A 0.5\% sensor failure rate is applied during training.
\subsubsection{All-in-one} This method does not separate explorer and discriminator. It has the same structure as the PPO explorer proposed in our approach except that the action space has been expanded to 14 actions. The first 4 actions correspond to a move and the remaining 10 actions correspond to a prediction. If a prediction is made, the episode is terminated. A reward of 1 is given only when the episode is terminated and the prediction is correct. A 0.5\% sensor failure rate is applied during training.
\subsubsection{TANDEM} This is our proposed method.
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{imgs/real_demo.pdf}
\caption{10 examples of our method in real robot experiments. The top row shows the object poses, the middle row shows the occupancy grids at termination, and the last row shows the results for each trial. The first 9 examples are successful and the last one is a failure case. While sensor noise can happen anywhere in a trial, it is easier to identify when it occurs before the first contact. We highlight in red circles such sensor noise for objects 3 and 8. Our method is able to bypass the noisy pixel, continue exploring, and make the correct prediction.}
\label{fig:real}
\vspace{-0.15in}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{imgs/thresholds.pdf}
\caption{Success Rate and \#Actions when using different threshold values (annotated in text box) in co-training. Each number is computed with 1,000 trials. Note that a threshold of 1 is too strict and the models converge at around 1,500 actions and a 0.4 success rate.}
\label{fig:thresholds}
\vspace{-0.15in}
\end{figure}
\subsection{Comparative Performance Analysis} We compare all methods described above over a large set of simulated experiments, as shown in Table~\ref{tab:results}.
For both sensor noise levels we consider, \textit{TANDEM} outperforms the baselines in terms of both success rate and the number of actions required. Only \textit{All-in-one} uses fewer actions at 0.1\% sensor noise but at the price of an extremely low success rate.
We attribute this gap in performance to multiple factors. For example, while \textit{Random-walk} and \textit{Not-go-back} are clearly inefficient exploration strategies, \textit{Info-gain} is a popular heuristic-based method that has been shown to be efficient in other contexts in previous work. However, we found that it does not work well in conjunction with a CNN discriminator. Compared to other methods, the \textit{Info-gain} explorer is more dependent on the discriminator, because the discriminator affects not only the termination of each episode but also the action selection at each time step. For the \textit{Info-gain} explorer to be effective, it likely requires a highly accurate discriminator to begin with, which our method does not assume. The \textit{All-in-one} method, which is not equipped with a dedicated discriminator, cannot train decision-making directly using the labeled samples collected by the explorer, leading to inefficient training and much worse performance when given the same amount of training time as \textit{TANDEM}.
Edge-following, unsurprisingly, is an efficient exploration heuristic for our task, given its 2D nature. \textit{Edge-follower} and \textit{Edge-ICP} have the best performance among all baselines. However, they are very sensitive to sensor noise, in terms of both accuracy and efficiency. To further investigate this aspect, we compared \textit{TANDEM} and \textit{Edge-follower} with the sensor failure chance further increased up to 2.5\%. As shown in Fig.~\ref{fig:noise_demo}, despite being trained with a fixed 0.5\% sensor noise, \textit{TANDEM} maintains a high success rate even in the presence of more noise. We also report the Effective Action Rate (EAR) in this experiment, computed as \#Explored Pixels / \#Actions per episode, a metric reflecting the effectiveness of the moves in exploring new locations. The actions generated by our method maintain high exploration efficiency, as shown by the EAR plot. In comparison, both EAR and success rate drop for \textit{Edge-follower} as the sensor failure rate increases. Both methods need longer episodes to handle larger sensor noise. Two examples of exploration behavior under noise are shown in Fig.~\ref{fig:noise_demo}. \textit{Edge-follower} makes the wrong prediction in both examples, while \textit{TANDEM} successfully handles both. This is because the discriminator trained with \textit{Edge-follower} overfits to the edge-following behavior and cannot explore further after being trapped at an incorrect collision signal.
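For reference, the EAR metric used above amounts to the following per-episode computation:
\begin{verbatim}
def effective_action_rate(n_explored_pixels, n_actions):
    # EAR = #Explored Pixels / #Actions; higher means each move tends
    # to reach a previously unexplored location.
    return n_explored_pixels / max(n_actions, 1)
\end{verbatim}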
Unlike \textit{Edge-ICP}, \textit{PPO-ICP} struggles to achieve similar performance. ICP needs a sufficient number of points to achieve decent recognition accuracy and terminate the exploration, because it is not able to utilize non-collision pixels. While the edge-following policy is good at collecting points by constantly touching the object, the PPO explorer struggles to learn similar behavior because of the extremely sparse termination reward provided by ICP.
\subsection{Confidence Threshold}
\addedtext{1-5}{The confidence threshold used by the discriminator to determine termination has a large effect on the performance of the co-training framework. Our threshold value of 0.98 is chosen empirically. Fig.~\ref{fig:thresholds} shows the number of actions and success rate with different thresholds used in co-training. Smaller confidence thresholds make the discriminator terminate the exploration earlier. Thus, when the co-training converges, fewer actions are needed but at the same time, the success rate of correctly identifying the objects is worse. We choose 0.98 because it achieves a good trade-off between the success rate ($\geq 0.95$) and the number of actions ($\leq 65$).}
\subsection{Real-World Performance}
We validate the performance of \textit{TANDEM} on a real robot. We run 3 trials for each of 10 objects with random orientations (30 trials total), with results shown in Table~\ref{tab:real}.
Our method still achieves a high identification accuracy, albeit slightly lower than the simulation results at a 0.5\% sensor failure rate. Exploration efficiency, as measured by the number of actions, is at a similar level. We attribute the sim-to-real gap to imperfections in our noise models, shape printing, and robot control.
Fig.~\ref{fig:real} shows ten examples of \textit{TANDEM} in operation, one for each object in a random orientation, along with the occupancy grid at the moment a decision is made. This decision is correct 90\% of the time, despite the limited information collected by that point. We also note that our method is robust enough to handle sensor noise, even before the first contact is made (objects 3 and 8). We also show a failure case where our method incorrectly recognizes object 9 as object 7: both polygons have a large opening triangle, which makes them hard to distinguish when this area is under contact. Our learned exploration policy is often similar to edge-following, but it has the added ability to handle sensor noise; it also learns to take shortcuts when appropriate and to take advantage of non-collision pixels for discrimination: for object 0, the discriminator terminates the episode at a non-collision location.
\begin{table}
\setlength\tabcolsep{4.5pt}
\caption{Real robot experiment results (mean and standard deviation over 30 trials).}
\label{tab:real}
\centering
\begin{tabular}{c|ccc}
\toprule
\textbf{Method} & \#\textbf{Actions} & \textbf{\#Explored Pixels} & \textbf{Success Rate} \\
\midrule
TANDEM & 67.33 $\pm$ 23.47 & 53.95 $\pm$ 18.16 & 0.90 (27/30) \\
\bottomrule
\end{tabular}
\vspace{-0.15in}
\end{table}
\section{Introduction}
\IEEEPARstart{T}{actile} sensing plays an important role for robots aiming to perform complicated manipulation tasks when vision is unavailable due to factors like occlusion, lighting, restricted workspace, etc.
The ability of touch to provide useful information in the absence of vision is immediately clear in the case of human manipulation: we are able to search and manipulate efficiently inside a bag or pocket without visual data. In particular, we have little difficulty distinguishing between similar objects from tactile cues alone.
However, a number of challenges remain before tactile sensing can be used with similar effectiveness by robotic manipulators. Fundamentally, touch is an active sensing modality, and individual tactile signals are very local and sparse. Guidance becomes critical: tactile sensors need to be physically moved by a robotic manipulator to obtain new signals, introducing additional cost for every sensor measurement. Without smart guidance, we can only blindly scan or grope on a surface~\cite{okamura2001feature, allen1985object} or continuously make large numbers of repetitive contacts at tightly controlled positions~\cite{meier2011probabilistic, allen1988integrating, bierbaum2009grasp, gaston1984tactile, skiena1989problems}. These strategies are extremely inefficient and often incur prohibitively high costs. Furthermore, it is also important to have an intelligent way to rearrange or encode such local and sparse signals into a global representation.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{imgs/teaser_v3_low_res.pdf}
\caption{Object recognition based on tactile feedback alone. (a) Real robot setup. Our tactile finger is mounted on a robot arm, and the target object (unknown identity and orientation) is placed roughly around the workspace center. (b) Known object set of 10 randomly-generated polygons. (c) Active exploration. Using our framework, our robot collects data and quickly converges on the correct object identity (object 4 from the set).}
\label{fig:teaser}
\vspace{-0.15in}
\end{figure}
In this work, we focus on the process of guiding tactile exploration, and its interplay with task-related decision making. Our goal is to provide a method that can train effective guidance (exploration) strategies. The task we chose to highlight this interplay and to develop our method is tactile object recognition, in which one object must be identified out of a set of known models based only on touch feedback (Fig.~\ref{fig:teaser}). The goal of our method is to correctly recognize the object with as few actions as possible.
In order to learn efficient guidance for such tasks, we propose an architecture combining an exploration strategy (i.e. \textit{explorer}) and a discrimination strategy (i.e. \textit{discriminator}). The \textit{explorer} guides the tactile exploration process by providing actions to take; the \textit{discriminator} attempts to identify the target object and determines when to terminate the exploration after enough information has been collected. To convert local and sparse tactile signals into a global representation, we also use an encoding strategy (i.e. \textit{encoder}). In our current version, the \textit{encoder} simply rearranges sparse tactile signals into an occupancy grid, but more complex implementations could be used for future tasks.
In our proposed architecture, both the explorer and the discriminator are learned using data-driven methods; in particular, the explorer is trained via reinforcement learning (RL) and the discriminator is trained via supervised learning. \addedtext{1-1}{In our current implementation, both of these components are trained in simulation. The use of binary touch data, which is easier to simulate accurately compared to other tactile features, facilitates zero-shot sim-to-real transfer, which we demonstrate in the real robot experiments.}
Critically, even though our architecture separates the exploration and decision making, we interleave their training process: we propose a co-training framework that allows batch and repeated training of the discriminator on a set of samples collected by the explorer. We call our method \textbf{TANDEM}, for \textbf{TA}ctile exploration a\textbf{N}d \textbf{DE}cision \textbf{M}aking. In summary, the main contributions of this paper include:
\begin{itemize}
\item We propose a new architecture to learn an efficient and active tactile exploration policy, comprising distinct modules for exploration, discrimination, and world encoding. We also propose a novel framework to co-train the exploration policy along with the task-related decision-making module, and show that they co-evolve and converge at the end of the training process.
\item We demonstrate our method on a tactile object recognition task. In this context, we compare our approach against multiple baselines, including all-in-one learning-based approaches that do not distinguish between our proposed components, and other methods traditionally used for exploration (such as random-walk, info-gain, etc.) or tactile recognition (such as ICP). Our experiments, performed in simulation and validated on real robots, show that our proposed method outperforms these alternatives, achieving a higher success rate in identifying the correct object while also using fewer actions, and is robust to sensor noise.
\end{itemize}
\section{Architecture}
Our work aims to develop a framework that combines effective exploration and decision-making when using an active and local sensing modality, such as touch. Our key insight is that exploration and decision-making are distinct, yet deeply intertwined components of such a framework. An ideal exploration strategy will strive to reveal information that the decision-making component can make the best use of. Similarly, a decision-making component will adapt to the constraints of a real-world robot collecting touch data, which can only be obtained sequentially and incrementally.
The concrete task we develop and test our method on is touch-only object recognition using a robot arm equipped with a tactile finger. We assume a set of known two-dimensional object shapes (randomly-generated polygons). One object is placed in the robot's workspace, in an unknown orientation. The robot must determine the object's identity using only tactile data, and with as little movement as possible. Performance is measured by both identification accuracy and the number of robot movements.
Our proposed architecture is illustrated in Fig.~\ref{fig:overview}. The key components are the following: (1) The \textbf{explorer}, which generates an action for the robot to take in order to collect more data. In our implementation, the explorer consists of a policy trained via deep RL. (2) The \textbf{discriminator}, which predicts the identity of the object, along with a confidence value. This is a supervised learning problem, implemented in this case as a Convolutional Neural Network (CNN). Finally, in addition to the explorer and discriminator, we distinguish one additional component, namely (3) the \textbf{encoder} which converts the sequence of local and sparse tactile signals into a global representation. \remindtext{1-1a}{For our object recognition problem, the encoder simply aggregates binary touch signals into an occupancy grid.}
An equally important aspect of the proposed architecture is the training process. While we formulate distinct explorer and discriminator modules, trained via different formalisms (RL vs. supervised learning), we choose to interweave their training processes. This allows us to train the discriminator with data batches gathered by the explorer, which significantly improves data efficiency compared to an all-in-one approach that combines exploration and decision-making into a single component. In the co-training process, the explorer learns to increase the discriminator's confidence as fast as possible, and the discriminator learns to predict object identity based on the type of data generated by the explorer.
\subsection{Encoder}
The job of the encoder is to maintain a history buffer of the sequence of contact data, convert that history into a global representation, and provide this representation as input to both the explorer and the discriminator. In our current implementation, we use binary signals indicating touch / no-touch. The encoder simply integrates these into an occupancy grid representation of the world, as shown in Fig.~\ref{fig:overview}.
All pixels of the occupancy grid are initially grey (unexplored). After each action, if contact is detected, the corresponding pixel is colored white; otherwise, it is colored black. We also use a special value (light grey) to mark the current position of the finger on the grid. Knowing the current location of the finger is useful for the explorer to compute the next action; however, this special color is eliminated when the grid is provided as input to the discriminator because such information is not necessary for predicting the object identity.
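A minimal version of this encoder could be implemented as below; the numeric pixel codes are illustrative choices, not the exact values used in our implementation.
\begin{verbatim}
import numpy as np

# Pixel codes (illustrative): grey, black, white, light grey.
UNEXPLORED, FREE, CONTACT, FINGER = 0.5, 0.0, 1.0, 0.75

class GridEncoder:
    # Aggregates binary touch signals into an occupancy grid.
    def __init__(self, size):
        self.grid = np.full((size, size), UNEXPLORED)

    def update(self, pos, touched):
        self.grid[pos] = CONTACT if touched else FREE

    def explorer_view(self, finger_pos):
        view = self.grid.copy()
        view[finger_pos] = FINGER   # explorer sees the finger location
        return view

    def discriminator_view(self):
        return self.grid.copy()     # finger marker removed
\end{verbatim}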
For the task addressed here, we believe an occupancy grid works well due to its simple nature, ability to represent geometrical information, and small size in memory. However, when aggregating more complex information (e.g. from tactile sensors providing more than binary touch signals) or for more complex tasks, we expect that different encoding methods will be needed, even while the role in the architecture will be the same. We hope to explore more complex, learning-based encoders for our architecture in future studies.
\subsection{Discriminator}
The discriminator is the component of our pipeline in charge of interpreting sensor data for task-related purposes. Thus, for our problem, its job is to provide a prediction regarding the object identity, along with an associated confidence value. Making a confident prediction also implicitly terminates the exploration.
In our implementation, underlying the discriminator is a CNN, as shown in Fig.~\ref{fig:overview}, taking as input the occupancy grid produced by the encoder. The network consists of two convolutional layers followed by a max-pool layer. After the dropout layer, the feature map is flattened and passed through two fully-connected layers. A softmax function is applied to the raw 10-dimensional output from the final fully-connected layer to generate a probability distribution. The object with the highest probability is chosen as the predicted identity, and its corresponding probability is the confidence estimate. If the prediction confidence is greater than a preset threshold, the exploration is terminated and a final prediction is made. Otherwise, the occupancy grid is passed to the explorer to generate the next move.
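A PyTorch sketch of such a network is given below; the filter counts, kernel sizes, and hidden width are illustrative, as the text does not fix them.
\begin{verbatim}
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, grid_size=50, n_objects=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(p=0.5),
            nn.Flatten())
        feat = self.features(torch.zeros(1, 1, grid_size, grid_size)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat, 128), nn.ReLU(),
            nn.Linear(128, n_objects))

    def forward(self, grid):
        # Returns the full distribution, the predicted identity,
        # and the confidence (probability of the argmax).
        probs = torch.softmax(self.head(self.features(grid)), dim=-1)
        conf, pred = probs.max(dim=-1)
        return probs, pred, conf
\end{verbatim}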
\addedtext{1-6a}{As part of the co-training process, the discriminator is trained on partially complete occupancy grids, which can be ambiguous over objects, especially when very few pixels have been explored. This ambiguity is in fact the supervision needed to learn a confidence estimate. For instance, if the discriminator data buffer contains multiple duplicates of a highly incomplete grid, each with a different object label, then, in order to minimize the loss, the discriminator network will assign equal probabilities to all candidate objects, thus decreasing the confidence in each individual prediction.}
\subsection{Explorer}
The job of the explorer is to generate the next action for the robot, actively collecting additional information. For our task, this means selecting the next move (up, down, left, or right). Tactile data is collected automatically during the move and passed to the encoder as described above.
We implement the explorer as a Proximal Policy Optimization (PPO)~\cite{schulman2017proximal} agent taking the occupancy grid provided by the encoder as input. It has a similar architecture to the discriminator, but the last fully-connected layer is replaced by separate fully-connected layers for the actor and the critic, as shown in Fig.~\ref{fig:overview}. Even though the discriminator and explorer share part of the same architecture, we found through experiments that keeping the weights separate performs much better. This is likely because the discriminator and explorer focus on different aspects of the grid and should learn separate intermediate embeddings. As mentioned earlier, the grid input to the explorer carries an extra bit of information providing the current location of the agent.
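A corresponding actor-critic sketch, again with illustrative layer sizes and separate (non-shared) weights, could look as follows.
\begin{verbatim}
import torch
import torch.nn as nn

class ExplorerPolicy(nn.Module):
    def __init__(self, grid_size=50, n_actions=4):
        super().__init__()
        # Conv trunk mirroring the discriminator, but with its own weights.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3), nn.ReLU(),
            nn.Conv2d(16, 32, 3), nn.ReLU(),
            nn.MaxPool2d(2), nn.Flatten())
        feat = self.trunk(torch.zeros(1, 1, grid_size, grid_size)).shape[1]
        self.shared = nn.Sequential(nn.Linear(feat, 128), nn.ReLU())
        self.actor = nn.Linear(128, n_actions)  # logits over the 4 moves
        self.critic = nn.Linear(128, 1)         # state-value estimate

    def forward(self, grid):
        h = self.shared(self.trunk(grid))
        dist = torch.distributions.Categorical(logits=self.actor(h))
        return dist, self.critic(h)
\end{verbatim}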
The reward structure warrants additional discussion. The explorer receives a reward if the discriminator reaches a confidence level that exceeds a preset threshold and thus terminates the exploration. However, the reward for the explorer is not conditioned on the correctness of the prediction. This is in keeping with our tenet of separating the exploration from decision making: the explorer is not aware of prediction correctness and it is rewarded as long as the discriminator is confident enough to make a prediction.
\subsection{Co-training}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/relationship_v3.pdf}
\caption{Training plots of the discriminator and explorer, and illustration of how they affect each other in the co-training process. The left and right plots show the success rate and the number of actions over the last 100 episodes. Results are averaged over three random seeds and one standard deviation is shaded.}
\label{fig:relationship}
\vspace{-0.15in}
\end{figure}
\remindtext{1-6}{While our architecture is constructed around separate discriminator and explorer modules, we find that the interplay and inter-dependencies between the two components make independent training infeasible and suggest a co-training framework. On one hand, training the discriminator requires a labeled dataset with partial observations of object geometry, but the distribution of partial observability highly depends on the exploration policy. On the other hand, training the explorer needs termination signals provided by the discriminator. This termination signal can highly affect the explorer's learning efficiency.} \addedtext{1-6b}{Co-training is also important because any pre-trained discriminator will not generalize well as the explorer evolves and implicitly changes the distribution of the data presented to the discriminator. To handle this shift, the discriminator needs to co-evolve with the explorer.}
Our co-training process is shown in Alg.~\ref{algo:ours}. Both the discriminator and the explorer start from random initialization. We collect an initial data buffer of labeled samples for the discriminator using the randomly initialized explorer. In the co-training loop, we first train the discriminator using the data buffer. Then we fix the discriminator, train the explorer, and, at the same time, push the partially observed occupancy grids collected by the explorer, along with their ground-truth identities, into the data buffer. The updated data buffer is used for discriminator training in the next iteration.
\begin{algorithm}
\SetAlgoLined
Initialize discriminator randomly\;
Initialize explorer randomly\;
Collect an initial data buffer $\mathcal{D}$ using the explorer\;
\While{steps $<$ maximum step}{
Train the discriminator for $N_d$ epochs\;
Fix the discriminator, train the explorer for $N_e$ steps, and push all occupancy grids (with object identity labels) collected by the explorer into data buffer $\mathcal{D}$\;
}
\caption{Co-training Discriminator and Explorer}
\label{algo:ours}
\end{algorithm}
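In Python-like pseudocode, Alg.~\ref{algo:ours} can be sketched as follows; the methods \texttt{fill\_with\_random\_policy}, \texttt{fit}, \texttt{train\_ppo}, and \texttt{extend} are assumed interfaces of the respective components, not names from our codebase.
\begin{verbatim}
def co_train(explorer, discriminator, env, buffer,
             max_steps, n_d_epochs, n_e_steps, conf_thresh=0.98):
    buffer.fill_with_random_policy(env)   # initial labeled grids
    steps = 0
    while steps < max_steps:
        # Supervised phase: fit the discriminator on all data so far.
        discriminator.fit(buffer, epochs=n_d_epochs)
        # RL phase: discriminator frozen, episodes terminate once its
        # confidence exceeds conf_thresh; the explorer is rewarded at
        # termination regardless of prediction correctness.
        grids, labels = explorer.train_ppo(
            env, discriminator, steps=n_e_steps, conf_thresh=conf_thresh)
        buffer.extend(grids, labels)      # grow the labeled data set
        steps += n_e_steps
\end{verbatim}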
In this process, the discriminator affects episode termination and the explorer affects the partial observability of the labeled training data (Fig.~\ref{fig:relationship}). The explorer is rewarded when the discriminator becomes certain and terminates the episode; thus, it learns to make the discriminator confident as quickly as possible. Batch training of the discriminator with samples collected by the explorer also facilitates data reuse and efficiency. Every time one component improves, the other adapts to the distributional shift. Because updates happen at each iteration, this shift remains manageable. As a result, the discriminator and the explorer co-evolve, each gradually pushing the other to improve until they eventually converge.
\section{Related Work}
\subsection{Tactile Object Recognition}
Object recognition is a key problem in robotics and is a fundamental step to gaining information about the environment. Conventionally, visual perception has been the primary sensing modality for object recognition.
However, due to the limitations of vision, such as sensitivity to illumination and occlusion, and with developments in tactile perception technology such as DISCO~\cite{piacenza2020sensorized},
object recognition with only tactile information is receiving increasingly wider attention in robotics research.
Tactile object recognition can be roughly divided into three major categories depending on the characteristics of the object~\cite{liu2017recent}: (1) rigid object recognition (the problem in this paper), (2) material recognition, and (3) deformable object recognition.
However, many existing works either use predefined action sequences or heuristic-based exploration policies such as contour following, while we focus on developing a learning-based active exploration policy.
\subsection{Tactile Exploration Policy}
The Exploration Policy (EP) is the sequence of exploratory actions the agent executes to gather tactile information. Since tactile information can only be obtained by interacting with the target object, the EP plays a critical role. We divide tactile sensing EPs into three major categories.
\subsubsection{Passive mode} The robotic manipulator is fixed, and the human operator hands the object to the manipulator, often in random orientations and/or translations, to collect tactile data~\cite{schmitz2014tactile, strub2014using, pastor2019using}.
\subsubsection{Semi-active mode} The manipulator interacts with the object according to a prescribed trajectory and does not need to react based on sensor data, except perhaps to remain compliant to avoid damage.
Examples include poking the object from uniformly sampled directions or grasping it multiple times with a predefined set of grasps~\cite{allen1988haptic, watkins2019multi, meier2011probabilistic}.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{imgs/overview.pdf}
\caption{An overview of the proposed architecture, and its application to tactile object recognition. The tactile finger interacts with the target object and generates local and sparse sensor data (in this task, binary collision signals). The encoder keeps a history buffer of such sequential signals and converts them into a global representation. Our encoder in this task rearranges them into an occupancy grid image. The discriminator takes in the global representation and attempts to identify the object along with a confidence estimate. If the confidence is higher than a predefined threshold, the exploration is terminated and the final prediction is produced. Otherwise, the explorer reads the representation and generates the next move. The neural networks used by the discriminator and explorer are shown inside their respective blocks. The parameters of the \texttt{conv2D} layer are the number of filters, kernel size, and stride. The parameter of the \texttt{max pool} layer is the stride. The parameters of the \texttt{fc} layer are the input and output dimensions. The parameter of the \texttt{dropout} layer is the probability of an element being zeroed out.}
\label{fig:overview}
\vspace{-0.15in}
\end{figure*}
\subsubsection{Active mode} The manipulator finds the object and explores it reactively in a closed-loop fashion. The exploratory action is a function of current and/or past sensor data. EPs can be heuristic- or learning-based.
\modifiedtext{1-3}{Some of the most popular heuristic-based exploration policies range from contour following~\cite{martinez2013active, yu2015shape, suresh2020tactile, pezzementi2011tactile} to information gain (uncertainty reduction)~\cite{hebert2013next, xu2013tactile, schneider2009object,
driess2017active}.
Other heuristics to decide the regions of interest to explore include attention cubes~\cite{rajeswar2021touch}, Monte Carlo tree search~\cite{zhang2017active} and dynamic potential fields~\cite{bierbaum2009grasp}. However, while heuristic-based EPs require no training and can reduce the number of actions effectively, they are also sensitive to sensor noise and the performance of a particular heuristic can be task-dependent. In contrast, our learning-based EP is trained with sensor noise, and thus outperforms heuristic-based baselines when such noise is present in the evaluation.}
\addedtext{1-3}{Similar to ours, other works combine exploration and decision making, whereby a classifier is pre-trained from pre-collected data and used to estimate action quality with Bayesian methods to reduce uncertainty~\cite{fishel2012bayesian, lepora2013active, martinez2017active, kaboli2017tactile, kaboli2019tactile}. Most of these make effective use of high-dimensional or multimodal tactile data. Our use of relatively simple contact signals allows training an exploration policy through trial and error in simulation, with zero-shot transfer to real robots, eliminating the need for training on physical objects. Nevertheless, we achieve high recognition accuracy with relatively few actions, which we attribute in part to the fact that, unlike in previous methods, our discriminator is constantly updated as the exploration policy improves.}
\subsection{Sim-to-real Adaptation}
We co-train a discriminator and explorer in the simulator and then evaluate the performance both in simulation and on real robots. Since we do not fine-tune our model on real robots, there is a sim-to-real gap when transferring the model trained in simulation to the real robot. We summarize the four categories of sim-to-real gaps and our methods for addressing them.
\subsubsection{Control} This comes from moving the robot arm in the real-world setup. In simulation, we have precise control of the floating finger, but on the real robot, we execute each 5mm action as five 1mm steps, so that the robot can subscribe to the tactile feedback after each small step and achieve closed-loop control. The challenge comes from precisely moving the finger by a 1mm distance, and also from the inverse kinematics solver having to keep the finger perpendicular to the workspace plane. The accumulated control error per episode on the real robot is around 5mm.
\subsubsection{Sensor} This noise comes from the tactile finger, where a collision happens but is not detected, and vice versa. Our tactile finger contains a neural network that is trained to detect collisions from the raw optical signals. Based on our analysis, the chance of such a sensor error is around 0.3\%--0.5\%. In order to handle this sensor noise, we apply a 0.5\% sensor failure rate in our co-training process.
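The noise injection itself is a simple stochastic bit flip of the binary contact signal:
\begin{verbatim}
import random

def noisy_touch(true_touch, failure_rate=0.005):
    # With probability failure_rate, flip the contact signal, emulating
    # the 0.3%-0.5% error rate measured on the real finger.
    return (not true_touch) if random.random() < failure_rate else true_touch
\end{verbatim}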
\subsubsection{Object Position} We assume that the target object is always located at the center of the workspace; however, it is impossible to perfectly align these random polygon shapes with the workspace center. To overcome this, we apply a random translation between -1cm and 1cm to both the $x$ and $y$ coordinates of the objects during training.
\subsubsection{Geometry} Even though the object is located at the same location in simulation and in the real world, the target object might have a different ground truth occupancy grid. This can happen for two reasons. (1) The real robot finger height does not perfectly match that of the simulator, so the location on the hemispherical tip that touches the object is different. This makes the object's shape in the real world appear shrunk or grown compared to the simulator. (2) In order to do collision checking in the simulator, we run convex decomposition on the random polygons so that they are a union of convex sub-parts. This approximation can introduce additional geometric discrepancies.
The phase space of loop gravity on a fixed graph is given by holonomies of the gravitational connection and fluxes of the triad field. In \cite{noi}, we introduced a parametrization of this phase space in terms of quantities describing the intrinsic and extrinsic discrete geometry of a cellular decomposition dual to the graph. The description provides a natural extension of Regge geometries allowing for discontinuous metrics \cite{noi} (see also \cite{IoCarlo,BiancaJimmy}).
The name \emph{twisted} was meant to stress this discontinuous nature, but also to imply the existence of a relation to twistors. In fact, as we show explicitly in this brief note, the parametrization can be derived from a geometric interpretation of twistors.
\medskip
\section{Twisted geometries from phase space reduction}\label{SecRed2}
Our starting point is the twistor space
\begin{equation}
\mathbb{T} \equiv {\mathbbm C}^{2} \times {\mathbbm C}^{2},
\end{equation}
with coordinates $(z_A,\tilde{z}_A)$, $A=0,1$. We equip $ \mathbb{T}$ with the standard Poisson algebra,
\begin{equation}\label{Pz}
\{z_{A},\bar{z}_{B}\} = - i \delta_{AB},\qquad \{\tilde{z}_{A},\bar{\tilde{z}}_{B}\} = - i \delta_{AB}.
\end{equation}
In each ${\mathbbm C}^2$ space we introduce the 2-dimensional spinors $|{\bf{z}}\rangle\equiv (z_{0},z_{1})$ and $|{\bf{z}}]= (-\bar{z}_{1},\bar{z}_{0})$.
Both spinors can be used to construct a 4-dimensional future-pointing null vector $X^{\mu}=(X^0,X^i)$. Choosing the first one, we have
\begin{equation}\label{3}
|{\bf z} \rangle\langle {\bf z}| = X^{0}\mathbbm{1} + X^{i} \sigma_{i},
\end{equation}
where $\sigma_i$ are the Pauli matrices.
In components,
\begin{equation}\label{Vz}
X^{0} = \frac12(|z_{0}|^2 + |z_{1}|^2)\equiv \frac12 \langle {\bf z}|{\bf z} \rangle,\quad
X^{+} = \bar{z}_{0} z_{1},\quad
X^{-} = z_{0} \bar{z}_{1},\quad
X^{3} = \frac12(|z_{0}|^2 - |z_{1}|^2),
\end{equation}
with
$X^i\equiv {\mathrm{Tr}}(X\sigma^i)$, and\footnote{Our conventions imply that
$\sigma_{3}=\sigma^{3}$, $\sigma_{-}=\sigma^{+}/2$, $\sigma_{+}=\sigma^{-}/2$,
so that the scalar product in these components reads $X^3 Y^3 + X^+ Y^-/2 + X^- Y^+/2 $.
}
$\sigma^{\pm} =\sigma_{1}\pm i\sigma_{2}$. Notice that
\Ref{Vz} is nothing but the classical version of the well-known Schwinger representation of the angular momentum in terms of two harmonic oscillators.
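As a quick numerical sanity check of \Ref{3} and \Ref{Vz}, the following Python snippet verifies that the vector $X^\mu$ built from a random spinor is null, i.e. $(X^0)^2=(X^3)^2+X^+X^-$, and cross-checks the components against $\ket{\bf z}\bra{\bf z}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)   # spinor |z>

# Components X^0, X^+, X^-, X^3 of the null vector.
X0 = 0.5 * (abs(z[0])**2 + abs(z[1])**2)
Xp = np.conj(z[0]) * z[1]
Xm = z[0] * np.conj(z[1])
X3 = 0.5 * (abs(z[0])**2 - abs(z[1])**2)

# (X^3)^2 + X^+ X^- equals (X^0)^2, so X^mu is future-pointing null.
assert np.isclose(X3**2 + (Xp * Xm).real, X0**2)

# Cross-check against |z><z| = X^0 1 + X^i sigma_i.
M = np.outer(z, np.conj(z))
sigma3 = np.diag([1.0, -1.0])
assert np.isclose(np.trace(M).real, 2 * X0)
assert np.isclose(0.5 * np.trace(M @ sigma3).real, X3)
\end{verbatim}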
We can then parametrize ${\mathbbm C}^2_*={\mathbbm C}^2\backslash \{\langle {\bf z}| {\bf z}\rangle = 0\}$ in terms of the null vector $X^\mu$ and a phase,
$\varphi~\equiv~\arg(z_0) + \arg (z_1)$
(which is well defined provided $\langle {\bf z}| {\bf z}\rangle \neq 0$),
\begin{equation}
{\mathbbm C}_{*}^{2}=\{(X^i, \varphi) \}.
\end{equation}
The induced algebra reads
\begin{subequations}\label{PPT1}\begin{equation}\label{PP1b}
\{X^i,X^j \} = \eps^{ij}{}_k X^{k},
\end{equation}
\begin{equation}
\{X^{0},\varphi\}=1, \qquad \{ X^3,\varphi \} = 0, \qquad \{ X^\pm,\varphi \} = \frac{X^0}{X^\mp}.
\end{equation}\end{subequations}
Similarly, we denote $\tilde{X}^\mu$ the null vector built from $\tilde{z}_A$ as $|{\bf\tilde{z}}\rangle\langle {\bf\tilde{z}}| = \tilde{X}^{0}\mathbbm{1} +\tilde{X}^{i}\sigma_{i}$, and $\tilde\varphi$ the left over phase.
This leads to parametrize $\mathbb{T}_{*} = {\mathbbm C}^2_*\times{\mathbbm C}^2_*$ as
\begin{equation}\label{T1}
\mathbb{T}_{*} = \{ (X^i, \tilde{X}^i, \varphi, \tilde \varphi) \},
\end{equation}
where both $(X^{i},\varphi)$ and $(\tilde{X}^{i},\tilde{\varphi})$ satisfy the same algebra \Ref{PPT1}, while they commute with each other.
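The induced algebra \Ref{PPT1} can also be verified symbolically from the canonical brackets \Ref{Pz}. The snippet below realizes $\{z_A,\bar{z}_B\}=-i\delta_{AB}$ through real canonical pairs and checks the brackets \Ref{PP1b} for the Cartesian components $X^1,X^2,X^3$.
\begin{verbatim}
import sympy as sp

q0, p0, q1, p1 = sp.symbols('q0 p0 q1 p1', real=True)
z  = [(q0 + sp.I*p0)/sp.sqrt(2), (q1 + sp.I*p1)/sp.sqrt(2)]
zb = [sp.conjugate(w) for w in z]

def pb(f, g):
    # Canonical bracket in (q0,p0), (q1,p1); gives {z_A, zbar_B} = -i d_AB.
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in [(q0, p0), (q1, p1)])

# Cartesian components: X^1 = (X^+ + X^-)/2, X^2 = (X^+ - X^-)/(2i).
Xp = zb[0]*z[1]
Xm = z[0]*zb[1]
X = [sp.expand((Xp + Xm)/2),
     sp.expand((Xp - Xm)/(2*sp.I)),
     sp.expand((zb[0]*z[0] - zb[1]*z[1])/2)]

# su(2) algebra {X^i, X^j} = eps^ij_k X^k, checked cyclically.
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert sp.simplify(pb(X[i], X[j]) - X[k]) == 0
\end{verbatim}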
Consider now the constraint
\begin{equation}\label{H}
H\equiv X_{0}-\tilde{X}_{0}=0,
\end{equation}
imposing the two spatial vectors to have the same norm. This constraint generates the following U(1) action on $\mathbb{T}$,
\begin{equation}\label{U(1)}
\{H, {z_A} \} = \frac{i}2z_A, \qquad \{H, {\tilde{z}_A} \} = -\frac{i}2 \tilde{z}_A, \qquad
(\ket{\bf z}, \ket{\bf {\tilde z}}) \mapsto (e^{i\frac\th2}\ket{\bf z}, e^{-i\frac\th2}\ket{\bf {\tilde z}}),
\end{equation}
which leaves $X^{i}$ and $\tilde{X}^{i}$ invariant, while it translates the angles,
\begin{equation}
\varphi \rightarrow \varphi+\theta,\qquad \tilde{\varphi} \rightarrow \tilde{\varphi}- {\theta}.\end{equation}
We claim that the symplectic reduction of the eight-dimensional twistor space $\mathbb{T}_*$ by the constraint \Ref{H} gives the six-dimensional phase space of twisted geometries
\begin{equation}\label{defP}
P_* \equiv S^2_j\mathop{\simboloB{\times}} S^2_j \mathop{\simboloB{\times}} T^*S^1 \, = \, \{(N, \tilde{N}, j,\xi)\}\backslash \{j=0\},
\end{equation}
where $N$ and $\tilde{N}$ are unit vectors parametrizing the two spheres of radius $j\in{\mathbbm R}\backslash \{0\}$, and $\xi$ is an angle.
Let us make this statement more precise. Recall \cite{noi} that \Ref{defP} is a symplectic space locally isomorphic to the cotangent bundle of SU(2),\footnote{We parametrize $T^*\mathrm{SU}(2)\cong \mathfrak{su}(2)\times \mathrm{SU}(2)$ with a pair $(X,g)$. The isomorphism can be made global, i.e. including the configurations $j=0$ and $|X|=0$, taking an appropriate closure of $P_*$, see \cite{noi} for details.}
\begin{equation}\label{TSU2P}
P_*/{\mathbbm Z}_2 \cong T^*\mathrm{SU}(2)\backslash \{|X|=0\},
\end{equation}
where the quotient by ${\mathbbm Z}_2$ corresponds to the identification
\begin{equation}\label{Z2}
( N,\tilde{N}, j, \xi)\leftrightarrow (-N,-\tilde{N},-j, -\xi).
\end{equation}
We can now make the following
{\bf Proposition 1:}
\begin{equation}\label{TP}
\mathbb{T}_{*}/\!/{\rm U}(1) \cong P_*.
\end{equation}
{\bf Proof.} To prove it, it suffices to consider one of the two branches $j\gtrless 0$ identified by \Ref{Z2}. We consider $j>0$, but the proof is analogous for $j<0$.
Let us denote by $j>0$ the common norm of the vectors $X^{i}$ and $\tilde{X}^{i}$,
\begin{subequations}\label{isom}\begin{equation}\label{defj}
j\equiv\frac12(X^{0}+\tilde{X}^{0}),
\end{equation}
and introduce the unit vectors
\begin{equation}\label{defNN}
N^i = \frac{X^i}{j}, \qquad \tilde{N}^i = \frac{\tilde{X}^i}{j}.
\end{equation}
In order to make contact between the original variables $z_A$ and \Ref{defNN}, we need to
parametrize the vectors on the sphere as $N(z)$ in terms of the stereographic complex coordinate $z$. For instance using the conventions of \cite{noi},
$$N^i(z)= \frac1{(1+|z|^{2})}\Big( (1-|z|^{2}) , -2z, -2\bar{z}\Big), \qquad i=(3,-,+)$$
and the same for $\tilde{N}(\tilde{z})$. Then taking \Ref{Vz}, we see that \Ref{defNN} is achieved through the Hopf maps $z\equiv -\bar{z}_1/\bar{z}_0$, $\tilde{z}\equiv-\bar{\tilde{z}}_1/\bar{\tilde{z}}_0$.
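Numerically, one can check that this Hopf section indeed reproduces \Ref{defNN} componentwise:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
z0, z1 = rng.normal(size=2) + 1j * rng.normal(size=2)

# Flux components in the (3, -, +) basis and the norm j = X^0.
j  = 0.5 * (abs(z0)**2 + abs(z1)**2)
X3 = 0.5 * (abs(z0)**2 - abs(z1)**2)
Xm = z0 * np.conj(z1)
Xp = np.conj(z0) * z1

# Hopf section z = -conj(z1)/conj(z0) and the stereographic vector N(z).
z  = -np.conj(z1) / np.conj(z0)
N3 = (1 - abs(z)**2) / (1 + abs(z)**2)
Nm = -2 * z / (1 + abs(z)**2)
Np = -2 * np.conj(z) / (1 + abs(z)**2)

for N, Xc in [(N3, X3), (Nm, Xm), (Np, Xp)]:
    assert np.isclose(N, Xc / j)   # N^i = X^i / j
\end{verbatim}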
The variables $j$, $N^i$ and $\tilde{N}^i$ span a 5-dimensional subspace commuting with the constraint \Ref{H}.
Hence, it only remains to identify the sixth and last variable spanning the reduced phase space. To do so, we evaluate
$$
\{i \ln\frac{z_A}{\bar z_A}, H \} = 1, \qquad \{i\ln\frac{\tilde{z}_A}{\bar{\tilde{z}}_A}, H \} = -1, \qquad
\{i\ln\frac{z_A}{\bar z_A}, j \} = \f12, \qquad \{i\ln\frac{\tilde{z}_A}{\bar{\tilde{z}}_A},j \} = \f12.
$$
From these brackets it follows that if we define
\begin{equation}\label{isom3}
\xi_A \equiv i \left(\ln\frac{z_A}{\bar z_A} + \ln\frac{\tilde{z}_A}{\bar{\tilde{z}}_A}\right),
\end{equation}\end{subequations}
we have
\begin{equation}\label{PP1a}
\{\xi_A, H\} = 0, \qquad \{\xi_A, j\} =1.
\end{equation}
That is, both $\xi_0$ and $\xi_1$ commute with the constraint, and furthermore are conjugated to $j$.
They are thus equally valid choices for the reduced space,
related by the canonical transformation \linebreak $\xi_{1}=\xi_0 + 2\arg(z)+2\arg(\tilde{z})$.
We conclude that the reduced phase space is spanned by $(N(z), \tilde{N}(\tilde{z}), j, \xi_A)$.
Concerning its Poisson algebra, we have the right bracket of \Ref{PP1a}, as well as the brackets \Ref{PP1b} written in terms of \Ref{defNN}. It is also immediate to see that $j$ commutes with both $N$ and $\tilde{N}$. The only remaining brackets to evaluate are
\begin{equation}\label{PPL}
\{ \xi_A, jN^i\} \equiv L^i_A(N),
\end{equation}
which give, in cylindrical components $(i=3,-,+)$,
\begin{equation}
L^i_0\big(N(z)\big)=(1,-z, -\bar{z}),
\qquad L^i_1\big(N(z) \big) = (1,1/\bar{z},1/{z}).
\end{equation}
Here $L(N) \equiv L_0( N(z)) $ is precisely the Lagrangian introduced in \cite{noi},
and $L_{1}(N)=L(N(-1/\bar{z}))=L(-N(z))$.
From now on, we take $\xi\equiv\xi_0$ as the reduced variable. As explained in \cite{noi}, the existence of canonical transformations which shift the $\xi$ variable and the Lagrangian are related to changes of section in the Hopf map.
Collecting the brackets \Ref{PP1b}, \Ref{PP1a} and \Ref{PPL}, we find
\begin{subequations}\label{PP}\begin{eqnarray}\label{PS2}
&& \{jN^i, jN^j \} = \eps^{ij}{}_k\,j N^k,
\hspace{.8cm} \{j \tilde N^i, j \tilde N^j \} = \eps^{ij}{}_k\, j \tilde N^k,
\hspace{.7cm} \{N^i, \tilde N^j \} = 0, \\ \nonumber\\\label{Pzero}
&& \{\xi, j\} = 1, \hspace{2.95cm} \{N^i, j \} = 0, \hspace{2.6cm} \{\tilde N^i, j \} = 0, \\ \nonumber\\\label{PL}
&& \{\xi, j N^i \} \equiv L^i(N), \hspace{1.65cm} \{\xi, j \tilde N^i \} \equiv L^i(\tilde N),\label{PTh}
\end{eqnarray}\end{subequations}
which can be recognized as the algebra of twisted geometries, with both spheres positively oriented.\footnote{With respect to the opposite orientation taken in \cite{noi}, this different choice affects the isomorphism with $T^*\mathrm{SU}(2)$ in a minor way, see \Ref{mapg} below.}
$\square$
\medskip
The proof shows how the algebra \Ref{PP} descends in a simple way from the canonical Poisson brackets on twistor space.
We remark also that in the spirit of the Guillemin-Sternberg theorem \cite{GS}, the symplectic quotient $P_{*} $ can be written as a complex quotient \emph{without} imposing the constraints:
\begin{equation}
\mathbb{T}_{*}/\!/{\rm U}(1) \cong P_{*}\cong \mathbb{T}_{*}/{\mathbbm C},
\end{equation}
where the ${\mathbbm C}$ action is given by:
$$(\ket{\bf z}, \ket{\bf {\tilde z}}) \mapsto (\lambda \ket{\bf z}, \lambda^{-1}\ket{\bf {\tilde z}}).$$
It is indeed trivial to show that we can always reach the constraint surface by choosing \linebreak
$\lambda = \sqrt{\langle {\bf \tilde{z}}|{\bf \tilde{z}} \rangle}/\sqrt{\langle {\bf z}|{\bf z}\rangle}$.
Let us go back to the symplectomorphism \Ref{TSU2P}, and notice that together with \Ref{TP}, it implies the symplectic reduction from twistor space to the cotangent bundle of SU(2). For completeness, we now give explicitly this alternative reduction.
{\bf Proposition 2:}
\begin{equation}\label{TTSU2}
\mathbb{T}_{*}/\!/{\rm U}(1) \cong T^*\mathrm{SU}(2)\backslash\{|X|=0\}.
\end{equation}
{\bf Proof.} Recall \cite{noi} that if we trivialize $T^*\mathrm{SU}(2)\cong \mathfrak{su}(2)\times \mathrm{SU}(2)$ as $(X,g)$ with right-invariant vector fields $X$, we have that
\begin{equation}\label{deftX}
\tilde{X} \equiv - g^{-1} X g
\end{equation}
is a left-invariant vector field and that the Poisson algebra on linear functions reads
\begin{align}\label{PT*G1}
& \{X^i, X^j \} = \eps^{ij}{}_k X^k, & \{\tilde{X}^i, \tilde{X}^j \} = \eps^{ij}{}_k \tilde{X}^k,
&& \{X^i, g \} = -\tau^i g, && \{\tilde X^i, g \} = g \tau^i.
\end{align}
The first two brackets hold automatically in the reduction of $\mathbbm{T}_*$, since $X^i$ and $\tilde{X}^i$ commute with $H$ and satisfy \Ref{PP1b}.
It thus suffices to find $g(z_A,\tilde{z}_A)$ in $\mathbbm{T}_*$ such that $(i)$ it is an SU(2) group element, $(ii)$ it commutes with $H$, and ($iii$) it satisfies \Ref{deftX} and \Ref{PT*G1}. It is not hard to see that
\begin{equation}\label{gzz}
g(z_A,\tilde{z}_A) \equiv \frac{\ket{\bf z}[{\bf \tilde{z}}| - |{\bf z}]\bra{\bf \tilde{z}}}{ \sqrt{\langle {\bf z}|{\bf z} \rangle\langle {\bf \tilde{z}}|{\bf \tilde{z}} \rangle}},
\end{equation}
fulfills $(i)$--$(iii)$. Indeed, thanks to $\langle {\bf z}|{\bf z} ]=0$, one can check that this map satisfies
\begin{equation}
g|{\bf{\tilde{z}}}] = |{\bf{z}} \rangle,\qquad g |{\bf{\tilde{z}}}\rangle = - |{\bf{z}} ],\qquad gg^{\dag }= g^{\dag} g ={\mathbbm 1}.
\end{equation}
The commutation with $H$ is straightforward. A less trivial calculation shows also that the matrix elements commute among themselves when $H=0$ is satisfied.
Finally, \Ref{deftX} follows from
$g{|\bf{\tilde{z}} ][\bf{\tilde{z}}} |g^{\dag} = {\bf |z\rangle\langle z|}$, and
\Ref{PT*G1} from the brackets \Ref{Pz} and the parametrization \Ref{Vz}. $\square$
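Properties $(i)$ and $(iii)$ are easily confirmed numerically. In the snippet below we first rescale $\ket{\bf \tilde{z}}$ so that the area-matching constraint $H=0$ of \Ref{H} holds, on which surface the identities take the exact form above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
z  = rng.normal(size=2) + 1j * rng.normal(size=2)    # |z>
tz = rng.normal(size=2) + 1j * rng.normal(size=2)    # |z~>

def sq(w):   # the 'square' spinor |w] = (-conj(w_1), conj(w_0))
    return np.array([-np.conj(w[1]), np.conj(w[0])])

tz = tz * np.sqrt(np.vdot(z, z).real / np.vdot(tz, tz).real)  # impose H = 0
norm = np.sqrt(np.vdot(z, z).real * np.vdot(tz, tz).real)
g = (np.outer(z, np.conj(sq(tz))) - np.outer(sq(z), np.conj(tz))) / norm

assert np.allclose(g @ g.conj().T, np.eye(2))   # g is unitary
assert np.isclose(np.linalg.det(g), 1.0)        # and has unit determinant
assert np.allclose(g @ sq(tz), z)               # g |z~] =  |z>
assert np.allclose(g @ tz, -sq(z))              # g |z~> = -|z]
\end{verbatim}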
\section{Null twistors}
Thus far, we have connected the twisted geometries to pairs of spinors in ${\mathbbm C}^4$. We now show that our construction is in fact related to \emph{twistors}, in particular to null twistors. To that end, let us briefly review some basic facts about twistors, referring the reader to the literature \cite{PR} for details.
A twistor $Z^{\alpha} \in {\mathbbm C}^{4}$ can be viewed as a pair of spinors $Z^{\alpha}=(|\omega \rangle, |\pi\rangle)$, where
$|\pi \rangle $ defines a null direction $ p_{\pi}= |\pi ][\pi|$ in Minkowski space, while $ |\omega \rangle $ defines a
point $x$ in complexified Minkowski space
via $|\omega \rangle = ix |\pi\rangle$.
On twistor space there is a natural hermitian pairing given by
$$\bar{Z}_{\alpha} Z^{\alpha}= \langle \omega |\pi\rangle + \langle \pi |\omega \rangle,$$
and the quantity $s=\bar{Z}_{\alpha} Z^{\alpha}/2$ is called the helicity of the twistor.
When a twistor is null, i.e. $s=0$, the matrix $x$ is Hermitian and thus identifies a point in real Minkowski space.
However, $x$ is defined only up to the addition of a null momentum $p_{\pi}$, since $p_{\pi}|\pi\rangle =0$.
The resulting null ray $x+\lambda p_\pi$ can be explicitly reconstructed as
\begin{equation}
x(\lambda)= \frac{|\omega\rangle \langle\omega |}{i \langle \omega| \pi \rangle} + \lambda |\pi][\pi|, \qquad \lambda \in {\mathbbm R}.
\end{equation}
Hence, a null twistor defines a null generator $p_\pi$ and a null ray in Minkowski space.
We call these data a ``ruled'' null ray, since the ray has a specific generator.
The relation between twistors and twisted geometries is established through the map
\begin{equation}
|\omega \rangle \equiv |{\bf z}\rangle + |{\bf \tilde{z}} ], \quad |\pi \rangle \equiv |{\bf z}\rangle - |{\bf \tilde{z}} ].
\end{equation}
Under this map the twistor Hermitian pairing becomes
\begin{equation}\label{s}
s = \frac12 \Big(\langle \omega | \pi \rangle + \langle \pi |\omega\rangle \Big) = \langle {\bf z} | {\bf z} \rangle - [{\bf \tilde{z}} | {\bf \tilde{z}} ].
\end{equation}
Then, the constraint $H=0$ in \Ref{H} is equivalent to the statement that $Z^{\alpha}({\bf z},{\bf \tilde{z}})$ is a null twistor, and
the U(1) action \Ref{U(1)} translates into a global rescaling of $Z^{\alpha}$:
\begin{equation}\label{Twiphase}
Z^{\alpha}=(|\omega \rangle, |\pi\rangle) \rightarrow (e^{i\frac{\theta}2}|\omega \rangle, e^{i\frac{\theta}2} |\pi\rangle)= e^{i\frac{\theta}2}Z^{\alpha}.
\end{equation}
Therefore $P_*$, which is the symplectic reduction of the space $\{(\ket{\bf z},\ket{\bf \tilde{z}})\}$ by $H=0$, can be interpreted as a
phase space of null twistors $\mathbb{T} \mathbb{N}$ up to a global phase,
\begin{equation}\label{PTN}
P_* = \mathbb{T} \mathbb{N}/ \mathrm{U}(1).
\end{equation}
This is the connection between (null) twistors and twisted geometries. Notice that the U(1) rescaling \Ref{Twiphase} leaves invariant the ruled null ray $x+\lambda p_\pi$ defined by $Z^\alpha$, thus \Ref{PTN} means that an element of $P_*$ defines a ruled null ray.
The reverse is also true: Given a null ray in Minkowski space with a specific null generator, we can reconstruct
uniquely a null twistor up to a global phase, and hence an element of the phase space $P_*$.
This mathematical correspondence shows that we can think of an element of $P_*$, the edge phase space of loop quantum gravity, as a ruled null ray in Minkowski space.
Whether this is just a mathematical correspondence, or it has a deeper geometrical origin, is still a mystery for us, and a fascinating one.
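As a final numerical check: under the map above, the helicity \Ref{s} equals twice the constraint $H$ of \Ref{H}, so $Z^{\alpha}$ is null precisely on the constraint surface.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
z  = rng.normal(size=2) + 1j * rng.normal(size=2)
tz = rng.normal(size=2) + 1j * rng.normal(size=2)

def sq(w):
    return np.array([-np.conj(w[1]), np.conj(w[0])])

omega, pi = z + sq(tz), z - sq(tz)                        # (|omega>, |pi>)
s = 0.5 * (np.vdot(omega, pi) + np.vdot(pi, omega)).real  # helicity
H = 0.5 * (np.vdot(z, z) - np.vdot(tz, tz)).real          # area matching

assert np.isclose(s, 2 * H)
\end{verbatim}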
\section{Geometrical meaning of the constraints}
To understand the geometrical meaning of the constraints $H_e$, consider a cellular decomposition dual to the graph. A twisted geometry assigns to each face (dual to the edge $e$)
its oriented area $j_e$, the two unit normals $N_e$ and $\tilde{N}_e$ as seen from the two vertex frames sharing it, and an additional angle $\xi_e$ related to the extrinsic curvature between the frames.
Working with ${\mathbbm C}^4{}_e=\{(z_A,\tilde{z}_A) \}_e=\{(N,\tilde{N},X^0,\tilde{X}^0,\varphi,\tilde\varphi)\}_e$ corresponds to relaxing the uniqueness of the area, and assigning to each face \emph{two areas} $X^0_e$ and $\tilde{X}^0_e$ (and their conjugate variables $\varphi_e$ and $\tilde\varphi_e$), one for each polyhedral frame. The constraints $H_e$ impose the matching of these areas (as well as reducing $\varphi_e$ and $\tilde\varphi_e$ to a single $\xi_e$).
This is the geometric meaning of the constraints $H_e=0$. What we have shown is that the phase space of loop quantum gravity on a fixed graph can be obtained starting from a geometric interpretation of twistors and imposing an \emph{area matching} condition, equivalent to saying that the twistors are null.
\section{Conclusions}
Let us summarize.
We unraveled a relation between the space $P_*$ of twisted geometries, isomorphic to $T^*\mathrm{SU}(2)$, and null twistors in ${\mathbbm C}^4$. Since the phase space of loop quantum gravity on a fixed graph is just the Cartesian product $\mathop{\simboloB{\times}}_e T^*\mathrm{SU}(2)$, our results imply that it can be derived starting from the larger space
$\mathop{\simboloB{\times}}_e {\mathbbm C}^4$, and then imposing the area matching constraint \Ref{H} at each edge.
The derivation can be done in both the usual holonomy-flux parametrization $(g_e,X_e)$ (Proposition 2), or in the twisted geometries parametrization $(N_e, \tilde{N}_e, j_e,\xi_e)$ (Proposition 1).
An interesting aspect of the twistor description is that it admits a complete factorization over the \emph{vertices}, as opposed to the edges:
\begin{equation}\label{vertexfact}
\mathop{\simboloG{\times}}_e {\mathbbm C}^4 = \mathop{\simboloG{\times}}_v {\mathbbm C}^{2E(v)},
\end{equation}
where $E(v)$ is the valency of the vertex $v$. This result follows straightforwardly once we use the orientation of the edges to uniquely assign $\ket{\bf z}$ to, say, the source vertex, and $\ket{\bf \tilde{z}}$ to the target one. The factorization over the vertices is an interesting spin-off of the twistor description, and can lead to useful applications (e.g. \cite{UN}).
Twistors and twisted geometries form natural spaces that can be associated to a graph. They admit simple geometric interpretations, and are related to loop gravity. Specifically, to the kinematical (i.e. prior to imposing the Gauss law implementing gauge-invariance) phase space of loop gravity on a fixed graph. For completeness, let us also recall \cite{noi} that gauge-invariance is implemented reducing the space of twisted geometries by the closure conditions
\begin{equation}\label{C}
C_v \equiv \sum_{e\in v}j_e N_e = 0
\end{equation}
at each vertex. The resulting space of \emph{closed} twisted geometries is isomorphic to the gauge-invariant phase space of loop gravity, $\mathop{\simboloB{\times}}_e T^*\mathrm{SU}(2) /\!/ \mathrm{SU}(2)^V$.
The variables parametrize it as $\mathop{\simboloB{\times}}_e T^*S^1_e \mathop{\simboloB{\times}}_v S_{\vec{\jmath} _{v}} $, where $T^*S^1$ is the cotangent bundle of a circle, and $S_{\vec{\jmath} _{v}}$ is the space of shapes of a polyhedron, introduced in \cite{Kapovich} and studied in relation to loop gravity in \cite{CF3,FKL}.
Closed twisted geometries define a local flat metric on each polyhedron. However, this metric is discontinuous: although each face has a unique area, it acquires a different shape when determined from the variables associated to the two polyhedra sharing it, since
there is nothing enforcing a consistent matching of the faces. This discontinuity can be traced back to the fact that the normals carry both intrinsic and extrinsic geometry.
Finally, for graphs dual to triangulations, the space of closed twisted geometries can be related to the phase space of Regge calculus when one further imposes the gluing or shape matching conditions \cite{BS}. For more on the relation between loop gravity/twisted geometries and discrete gravity, see \cite{noi,IoCarlo,BiancaJimmy}.
The various phase spaces that can be associated to a graph, and their relations, are summarized by the following scheme:
\begin{center}{
\begin{tabular}{lll}
Twistor space & & \\
& & \\
\multicolumn{3}{l}{\hspace{.7cm} $\downarrow$ \emph{area matching reduction}} \\
& & \\
Twisted geometries & $\Longleftrightarrow$ & loop gravity \\
& & \\
\hspace{.7cm} $\downarrow$ \emph{closure reduction} & \\
& & \\
Closed twisted geometries &
$\Longleftrightarrow$ & gauge-invariant loop gravity \\
& & \\
\multicolumn{3}{l}{\hspace{.7cm} $\downarrow$ \emph{shape matching reduction}} \\
& & \\
Regge phase space & $\Longleftrightarrow$ & Regge calculus
\end{tabular}
}\end{center}
\noindent This scheme shows how twisted geometries fit into a larger hierarchy. From top to bottom, we move from larger and simpler spaces, with less intuitive geometrical meaning, to smaller and more constrained spaces, with clearer geometrical meaning.
The results establish a path between twistors and Regge geometries, via loop gravity.\footnote{For a different relation between twistors and (two-dimensional) Regge calculus, see \cite{Carfora}.} Furthermore, notice also that each phase space but the twistor one is related to a well-known representation of general relativity on a given graph, be it loop gravity or Regge calculus. This raises the intriguing question of whether such a representation can be given directly in terms of twistors. The possibility of defining a ``twistor gravity'' is a fascinating new direction opened by this new way of looking at loop quantum gravity.
Suppose $X$ is a connected scheme, $\bar{y}$ is a geometric point of $X$, and $\pi_G:X_G \to X$ is a finite \'etale Galois $G$-covering. Such a covering corresponds to a morphism $\pi_1^\et(X, \bar{y}) \twoheadrightarrow G$. For every closed point $x \in |X|$ and a geometric point $\bar{x}$ above $x$, choose a path from $\bar{x}$ to $\bar{y}$ and consider the composition $\phi_x: \pi_1(\Spec k(x), \bar{x}) \to \pi_1(X, \bar{x}) \xrightarrow{\sim} \pi_1(X, \bar{y}) \twoheadrightarrow G$. The image $H_x = \im \phi_x$ of this map describes the Galois action on the fiber above $x$; changing the path from $\bar{x}$ to $\bar{y}$ changes $H_x$ by a conjugation, and so every closed point $x \in |X|$ defines a conjugacy class of a subgroup $H_x \subset G$. Standard results in number theory, such as Chebotarev's density and Hilbert's irreducibility theorems, concern the distribution of the conjugacy classes $H_x$ as $x \in |X|$ varies. In this paper we consider the case when $X$ is an arbitrary smooth variety over a number field $k$ (a variety is a separated scheme of finite type over a field). In this setting the set of closed points $|X|$ is too complicated to talk about distribution questions in the analytic sense; the main result of this note is that every conjugacy class appears above some point $x \in |X|$, and that, moreover, we can require $k(x)/k$ to be a degree $n$, $S_n$-extension for all sufficiently large and divisible $n$.
Recall that the \emph{index} $i(X)$ of a variety $X/k$ is the greatest common divisor of the degrees $\deg P$ of closed points $P \in |X|$. A degree $n$ separable field extension $K/k$ is called an $S_n$-extension if the Galois group of the Galois closure of $K/k$ is the symmetric group $S_n$; we call a degree $n$ closed point $P$ on a variety $X/k$ an \emph{$S_n$-point} if $k(P)/k$ is an $S_n$-extension.
\begin{theorem}\label{main theorem}
Suppose $k$ is a number field, and $X/k$ is a smooth quasi-projective variety of dimension at least $1$. Suppose $X_G \to X$ is a finite \'etale Galois covering with Galois group $G$ such that $X_G$ is geometrically irreducible. Fix a subgroup $H \subset G$. Then there exists a constant $N$ such that for any finite extension $L/k$ and for any $n>N$ which is divisible by $i(X)[G:H]$, there exist infinitely many degree $n$ $S_n$-points $x \in |X_L|$ such that $H_x$ is conjugate to $H$.
\end{theorem}
\begin{remark}
In this theorem we may replace $X$ with an open subset that still contains a degree $d$ point. In particular, one can replace the quasi-projective assumption with the notion of ``FA-scheme'' in the sense of \cite[Section 2.2]{gabber2013index}.
\end{remark}
\begin{remark}
Theorem \ref{main theorem} is related to a result of Poonen \cite[Theorem 1]{poonen2001points}, which implies that, in notation of our theorem, there is a closed point $x \in |X|$ with $H_x = \{e\}$ (but does not give control over the field extension $k(x)/k$.)
\end{remark}
This result can be applied to a covering of moduli spaces to build objects with prescribed Galois structure. Taking $X$ to be the quotient $\PP^n_k/\!/G$ under a faithful action $G\curvearrowright \PP^n$ we obtain the following result (see Section \ref{applications}).
\begin{theorem}[Inverse Galois Problem up to $S_n$]\label{Inverse-Galois-application}
Suppose $k$ is a number field, and $G$ is a finite group. Then there is a constant $N=N(G)$ such that for every $n>N$ there are infinitely many degree $n$ extensions $L/k$ with Galois group $S_n$, for which there exists an extension $F/L$ with Galois group $G$.
\end{theorem}
Note that it is easy to realize $G$ over some extension of $k$, say, by embedding $G$ into a symmetric group and taking the fixed field of $G$, but such constructions never realize $G$ over an $S_n$-extension. A weak version of Theorem \ref{Inverse-Galois-application} can be deduced from the main results of \cite{fried1992embedding}: from \cite[Corollary~1]{fried1992embedding}, one can realize any group $G$ over a $\prod_{n\geqslant 1} S_n$-extension of $k$.
The same idea can be used to construct abelian varieties with specified level $N$ structures.
\begin{theorem}\label{level-structures}
Suppose $k$ is a number field that contains $N$-th roots of unity, $g\geqslant 1$ is an integer, and $H \subset \Sp(2g, \Z/N\Z)$ is a subgroup of index $d$. Then there exists a constant $M=M(H)$ such that if $n>M$ is an integer divisible by $d$, then there exists a degree $n$, $S_n$-extension $K/k$ and a $g$-dimensional abelian variety $A/K$ such that the image of the Galois action on the $N$-torsion points $\Gal_K \to \Sp(A[N])$ is conjugate to $H$.
\end{theorem}
\section{Main Theorem}
We first reduce the proof to the case of curves by combining Bertini and Lefschetz theorems.
\begin{lemma}\label{reduce-to-curves}
In the setting of Theorem \ref{main theorem}, let $D \subset X$ be a closed subscheme which is a union of closed points $D=\bigcup_{i=1}^s P_i$ such that $\gcd(\deg P_i)=i(X)$. Then there exists a smooth geometrically integral curve $Z \subset X$ with $D \subset Z$ such that for a point $z \in (Z \setminus D)(\C)$ the natural map $\pi_1((Z\setminus D)(\C), z) \to \pi_1((X\setminus D)(\C), z)$ is surjective.
\end{lemma}
\begin{proof}
We prove the statement by induction on $\dim X$. The base case $\dim X = 1$ is immediate. If $\dim X>1$, we embed $X$ into a projective space $\PP^r$ and consider the intersection of $X$ with a general degree $n$ hypersurface $H \subset \PP^r$ passing through $D$. For $n$ large enough, this intersection $X'=X \cap H$ is smooth and geometrically irreducible by a suitable version of Bertini's theorem (see, for example, \cite[Theorems 1 and 7]{kleiman1979bertini}). The surjectivity on fundamental groups (for sufficiently large $n$) follows from a version of Lefschetz's theorem, as we now explain. After replacing $X \to \PP^r$ with a high degree Pl\"ucker embedding, we can assume that the span $\Span D$ of the points of $D$ intersects $X$ only at $D$, and that the projection $\pi_D$ from $\Span D$ has $\dim X$-dimensional image. In this language, the variety $X'\setminus D$ is a preimage of a hyperplane under $\pi_D$, and for $x \in X'\setminus D$ the surjectivity of $\pi_1((X'\setminus D)(\C),x) \to \pi_1((X\setminus D)(\C),x)$ follows from the Lefschetz theorem in the form of \cite[Lemma 1.4]{deligne1981groupe}.
\end{proof}
\begin{remark}
Since the varieties are assumed to be smooth, if $\dim X > 1$ we have a natural isomorphism $\pi_1((X\setminus D)(\C),z) \xrightarrow{\sim} \pi_1(X(\C),z).$
\end{remark}
\begin{proof}[Proof of Theorem \ref{main theorem}]
Fix a collection of distinct closed points $P_1, \dots, P_s \in |X|$ such that $\gcd(\deg P_i)=i(X)$, and denote by $D$ the union $D=\bigcup_i P_i$. Let $Z$ be the smooth curve from Lemma \ref{reduce-to-curves}. Since the map $\pi_1((Z\setminus D)(\C), z) \to \pi_1((X\setminus D)(\C), z)$ is surjective, the covering $X_G \to X$ remains a geometrically connected Galois $G$-covering when pulled back to $Z$. Therefore it suffices to consider the case $\dim X =1$. In this case, after compactifying, we can assume that $X_G$ and $X$ are both smooth proper curves, $\pi_G: X_G \to X$ is a geometrically irreducible $G$-covering, and $D \subset X$ is a divisor which does not intersect the branch locus of $\pi_G$.
Let $\pi_H:X_H \to X$ be the intermediate covering of $\pi_G$ corresponding to $H$, so that $\pi_{G/H}:X_G \to X_H$ is an $H$-covering. Another way of phrasing the theorem is that there are infinitely many degree $n$, $S_n$-points $x \in |X_L|$ and $L(x)$-rational points $x_H \in \pi_H^{-1}(x)$ such that $\pi_{G/H}^{-1}(x_H)$ is an irreducible scheme. Consider the collection $\calS$ of divisors of the form $E=m_1 \pi_H^{-1}(P_1) + \dots + m_s \pi_H^{-1}(P_s)$ for nonnegative integers $m_1, \dots, m_s$. Let $m$ be a constant satisfying the following conditions:
\begin{enumerate}
\item $m > [G:H]=\deg \pi_H$;
\item $m>2 g(X_H)$, so that any divisor on $X_H$ of degree larger than $m$ is very ample;
\item any integer larger than $m$ and divisible by $i(X)[G:H]$ is the degree of a divisor from $\calS$ ($m$ is larger than the Frobenius number of the semigroup of degrees of divisors from $\calS$; see the computation after this list).
\end{enumerate}
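To see that the third condition can be met, note that $D$ is disjoint from the branch locus of $\pi_G$, so $\deg \pi_H^{-1}(P_i)=[G:H]\deg P_i$ and hence
\[
\deg E = [G:H]\sum_{i=1}^{s} m_i \deg P_i.
\]
The degrees of divisors from $\calS$ therefore form the numerical semigroup generated by $[G:H]\deg P_1, \dots, [G:H]\deg P_s$, whose greatest common divisor is $i(X)[G:H]$ because $\gcd(\deg P_i)=i(X)$; finiteness of the Frobenius number then guarantees that every sufficiently large integer divisible by $i(X)[G:H]$ is the degree of some divisor from $\calS$.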
Note that $m$ can be chosen independently of $L$. We claim that any $n>m$ divisible by $i(X)[G:H]$ satisfies the conditions of the theorem. Fix such an $n$ and choose a divisor $E \in \calS$ of degree $n$. Consider the embedding $X_H \subset \PP^r$ given by the complete linear system $|E|$.
To simplify notation, for the remainder of the proof all varieties are considered after a base change to $L$.
Consider the correspondences $I_G$, $I_H$ that parameterize incidences between points $x$ on $X_G$ and $X_H$ and hyperplanes $h$ in $\PP^{|E|}$: $I_G \subset X_G \times \left(\PP^{|E|}\right)^\vee$ and $I_H \subset X_H \times \left(\PP^{|E|}\right)^\vee$, given by
\[I_G=\{(x, h)\colon \pi_{G/H}(x) \in h\}\] and \[I_H=\{(x, h)\colon x \in h\}.\]
The variety $I_G$ is irreducible, since the projection $I_G \to X_G$ is a proper map with irreducible equidimensional fibers (the fibers are projective spaces of dimension $\dim |E|-1$). The covering $I_H \to (\PP^{|E|})^\vee$ has degree $n$ and monodromy group $S_n$ (see, for example, \cite[Lemma, Chapter III, page 111]{ACGH}). Therefore, by the Hilbert irreducibility theorem applied to the (factored) covering $I_G \to I_H \to \left(\PP^{|E|}\right)^\vee$, a general hyperplane section $x_H = h \cap X_H$ is a degree $n$, $S_n$-point on $X_H$, and moreover, since $I_G$ is irreducible, the fiber $\pi_{G/H}^{-1}(x_H)$ is irreducible as well (see \cite[Chapter 9]{serre1989lectures} for Hilbert's theorem in a geometric form). Finally, consider the image $x \colonequals \pi_H(x_H)$. The field extension $L(x_H)/L$ has no intermediate subextensions, since the point stabilizer $S_{n-1} \subset S_n$ is a maximal subgroup, and so either $L(x)=L$ or $L(x)=L(x_H)$. If $x$ were a rational point, then the degree of $x_H \subset \pi_H^{-1}(x)$ would be at most $[G:H]$, contradicting the assumption $n > m > [G:H]$. Therefore $x$ is the degree $n$, $S_n$-point on $X$ we seek.
\end{proof}
\section{Applications}\label{applications}
\begin{proof}[Proof of Theorem \ref{Inverse-Galois-application}]
Consider an action of the group $G$ on a vector space $V$ over $k$, such that the induced action on $\PP(V)$ is faithful, and let $\pi$ be the associated quotient map $\pi: \PP(V) \to \PP(V)/\!/G$. Consider a smooth affine open subset $U \subset \PP(V)/\!/G$ over which $\pi$ is an \'etale Galois $G$-covering. Since the rational points are dense in $\PP(V)/\!/G$, the variety $U$ has rational points and so $i(U)=1$. Applying Theorem \ref{main theorem} to $X=U$, $X_G=\pi^{-1}(U)$, and $H=G$ gives the conclusion.
\end{proof}
\begin{proof}[Proof of Theorem \ref{level-structures}]
Consider any abelian scheme $\calA \to X$ over a smooth base $X$ equipped with a geometric point $\bar{x}$ above a rational point $x \in X(k)$ such that the action $\bar{\rho}_N\colon \pi_1^{\et}(X_{\kbar}, \bar{x}) \to \Sp(2g, \Z/N\Z)$ on the $N$-torsion is surjective. There are many such families for which the monodromy is known to be surjective; for instance, this holds for the Jacobian of the universal curve (see \cite[Section 5.12]{deligne1969irreducibility}). Since $k$ is assumed to contain the $N$-th roots of unity, the arithmetic monodromy, i.e., the image of $\rho_N:\pi_1^{\et}(X, x) \to \GL(A[N])$, is contained in $\Sp(A[N])$. Since $\bar{\rho}_N$ is surjective, so is the arithmetic monodromy $\rho_N$. Therefore the covering $X_N \to X$ corresponding to the kernel of $\rho_N$ is a geometrically irreducible Galois \'etale $\Sp(2g, \Z/N\Z)$-covering of algebraic varieties. Applying Theorem \ref{main theorem} to this covering gives the result.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Many medical and experimental procedures benefit from the
availability of radiation shielding that can be moulded
to shape without using metal cast at high temperature.
Formulations for shielding putties based on clays
and thermoset resins \cite{Fa92a,Ei94a,Sa04a,Ok05a} have been
published but these have the disadvantage that the material is not reusable.
A commercial product known as ``EnviroClay'' based on bismuth
held in some binding clay is available from Radiation Products Design
Inc.\ \cite{Ra}
but this remains soft and pliable at room temperature.
A thermoplastic shielding material was introduced by
Maruyama et~al.\ \cite{Ma69a}
which consisted of lead shot embedded in a matrix of a tooling wax
(M. Arg\"{u}eso \& Co., ``Rigidax''). A similar composition,
described by Haskell~\cite{Ha96a},
was a thermoplastic dental wax (Moyco Industries ``impression compound'')
loaded with bismuth powder, chosen for its low toxicity.
This last work suggests that ``a synthetic thermoplastic could be
substituted for the preferred hydrocarbon wax blend'' but,
presumably, at that time, no suitable material was available.
The current work describes the use of polycaprolactone as a binder
to produce novel thermoplastic shielding compounds for gamma, X-ray and
neutron applications.
The impetus was to improve shielding in low background
particle-astrophysics experiments. Such experiments are typically
operated deep
underground within shielding castles constructed from lead blocks
and with an inner layer of ultra-pure copper blocks. Inevitably, there
are ports and voids in the shields where pipework and cables pass
through the castles. Further, the castles are customarily fabricated
from available or stock-sized blocks or ingots which rarely match the
experimental apparatus accurately.
There is a clear
requirement for a shielding material that will fill the spaces and
voids in the castle as precisely as possible.
Either cutting or casting metal to shape are possible but both are
problematic, especially in an underground laboratory.
Another approach is to use lead wool or shot held in bags or sacks,
but these have handling problems and the potential release of
finely divided material. Incorporating the shielding metal in a
polymer to provide a mouldable thermoplastic offers a simple
system of containment combined with flexibility of
use and re-usability.
In low background experiments, a key requirement is the absence of
radioactive impurities in the shielding,
consequently the formulations based on waxes
\cite{Ma69a,Ha96a} were not usable. These waxes contain high
proportions of mineral
fillers which would be expected to contain unacceptable levels of
uranium and thorium. EnviroClay \cite{Ra} was similarly rejected.
It is considered that the composites described here will have more
general applicability than the application for which they were
originally produced.
\section{Polycaprolactone}
Polycaprolactone is a rigid crystalline polymer, having the unit
formula (C$_{6}$H$_{10}$O$_{2}$)$_{n}$, which is remarkable
for its low melting point of 58--60$^\circ$C. When molten it easily
wets uneven or greasy surfaces and is consequently much used as
a hot-melt adhesive. It is soluble in a range of common solvents, is
intrinsically non-toxic and is bio-degradable.
A range of thermoplastic polycaprolactones are manufactured by
Ingevity \cite{In} as powders or pellets under the CAPA tradename.
These were previously sold under the Perstorp and Solvay brand names.
Various grades are available
with molecular weights from 10,000 to 80,000.
Small quantities of polycaprolactone (CAPA6800) in pellet form are
marketed for experimental and hobbyist
purposes under the tradename ``Polymorph'' in the
UK and this was the inspiration for the present study. Similar
material is marketed
under the names ``Shapelock'', "Polydoh", "Friendly Plastic" and others.
Initial investigations of samples of Perstorp polycaprolactone
using a low-background Germanium detector indicate that its
radiopurity is high, comparable with other pure polymer materials.
Concentrations of uranium, thorium and potassium were all measured
as less than 30ppb by weight.
\section{Shielding for X-rays and gamma-rays}
To provide maximum shielding for X-rays and gamma-rays the
thermoplastic matrix should be loaded as highly as possible with
dense metal. For the present project fine lead shot was chosen for
cheapness, but copper could be used to produce a low background
composite, while bismuth would produce a low toxicity
composite, but at higher cost. Tungsten granules could also be
considered and this would offer a method of producing very
dense shields without the difficulty of machining tungsten metal.
The ratio of lead to binder was chosen to ensure that the
composite would have the maximum strength consistent with the
maximum density, which is obtained when all the voids between
the lead shot are filled with polymer.
Assuming that the lead shot consisted of perfect identical spheres,
which were randomly stacked,
the highest packing fraction obtainable
is 0.64 \cite{Ja92a}.
The proportional mass of polymer required to bind the lead is given by:
\begin{equation*}
\frac{m_{poly}}{m_{Pb}} =
\frac{(1-0.64)\rho_{poly}}{0.64\rho_{Pb}}
= 0.56\cdot\frac{1.1}{11.3} = 0.054
\end{equation*}
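The same relation applies to other filler and binder combinations. As a minimal illustration, the following Python sketch reproduces the calculation above; the densities are the nominal values used in the text (in g\,cm$^{-3}$) and the function name is purely illustrative.
\begin{verbatim}
# Sketch of the binder calculation above.  f is the packing
# fraction of the metal filler; densities are in g/cm^3.
def binder_mass_ratio(f, rho_poly, rho_metal):
    """Polymer-to-metal mass ratio when all voids between
    the packed filler are filled with polymer."""
    return (1.0 - f) * rho_poly / (f * rho_metal)

# Lead shot in polycaprolactone, as above:
print(binder_mass_ratio(0.64, 1.1, 11.3))
# ~0.055; the text rounds (1-f)/f to 0.56, giving 0.054
\end{verbatim}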
Maruyama~et~al.\ \cite{Ma72a} point out that by using a mixture
of shot sizes, even higher packing fractions may be attained.
Specifically, with a mixture of two sizes having a ratio of diameters
of 0.22 and a mixing ratio of between 1:2--1.4 larger to smaller
pellets, a packing fraction of 0.72 was obtained. More recent work
on packing fractions of particles with different distributions of
sizes was reviewed by Brouwers~\cite{Br06a}.
Such distributions were not
further investigated in the present study.
Lead based thermoplastic shielding composite was prepared using
Perstorp CAPA6506, a polycaprolactone with an approximate molecular
weight of 50000 produced as powder with a 600$\mu$m grain size.
It would be difficult to ensure a homogeneous sample if pelleted polycaprolactone were used.
The lead shot was obtained from Calder Industrial Materials
Ltd.~\cite{Ca} and had an average size of
1.3mm, though it was clear that there was a distribution both in size
and shape.
The packing fraction of this material was measured as 0.63, only
marginally less than the theoretical prediction for uniform spheres~\cite{Ja92a}.
Initial 100g samples were prepared by weighing out the lead shot
and polycaprolactone and intimately mixing them by simply stirring.
They were then
heated in a shallow stainless steel vessel (known in the UK as a
``balti dish'') on a hot plate.
Production quantities of the composite, up to 1kg, were subsequently
made in larger stainless steel pans.
It was found advantageous to
include 50ml or so of water per 100g of lead to prevent the temperature
exceeding
100$^\circ$C, as the polycaprolactone decomposes at about 200$^\circ$C\
and smokes badly if overheated. The water also helped conduct the heat
uniformly to the composite mass and, to an extent, prevented the
composite from adhering to the pan.
A certain amount of stirring or kneading the material
was required to ensure that the
lead and polymer were adequately mixed.
At this stage the composite was a viscous toffee-like mass.
It could be removed from the
dish with a spoon or gloved hand and any water remaining on it would
run off. The warm composite
was somewhat sticky and when
moulding the material, it was found that the mould should
either be greased or lined with polythene or mylar sheet. On cooling, it was
found almost impossible to remove the composite from the mould if
this precaution was not taken.
When the composite is at the boiling point of water,
extreme care should be taken as getting the hot material
on bare skin would be likely to cause a painful scald.
However, when it had cooled to 60$^\circ$C\ it was found
that it could be easily worked to shape by hand and remained workable
for a few minutes.
The cooled material was a tough resinous solid with
a measured density of $\rho_{comp} = 6.7$\,g\,cm$^{-3}$, which was slightly
less than the predicted density of 7.4\,g\,cm$^{-3}$. Presumably a
small fraction of voids remains.
A small (66g) sample moulded into a round cornered
cylinder can be seen in figure~\ref{sample}.
At the surface, all
the lead particles are covered with a film of polymer which should assist in minimizing
contact with toxic metal.
\begin{figure}
\vspace{1mm}
\center
\includegraphics[width=\columnwidth]{scan5.pdf}
\caption{Test sample of lead composite. }
\label{sample}
\vspace{-5mm}
\end{figure}
Surprisingly, the composite was found to be electrically
non-conducting, each particle of lead being insulated from its
neighbours by a film of polymer.
The composite could be remelted by simply returning it to the pan
and reheating. Some experiments were made on moulding the composite
using a hot air gun. This was possible but care had to be taken
since the composite had rather poor thermal conductivity
and it was found easy to overheat the material at the surface before the
core had melted.
Simple tests of the gamma shielding capability of the composite using
weak sources and a Geiger counter
indicated that its properties were exactly those expected
from reduced-density lead.
\section{Shielding for neutrons}
Shielding of particle-astrophysics experiments against environmental neutrons
typically requires hydrocarbon material of at least
10cm thickness in order to ensure that the external neutrons are
thermalised and captured. In the case of detector systems larger than
a cubic metre, the required quantity of neutron shielding material
is of order some tonnes; there is therefore an overriding
need for low-cost material \cite{Mc05a}. As before, there is also
the requirement for high radiopurity of the materials in low background applications.
Similar considerations are also
applicable to other experimental disciplines where bulk neutron shielding
is required.
Sakurai \cite{Sa04a} and Okuno \cite{Ok05a} both describe the use of
thermoset resins loaded with either isotopically enriched lithium or
boron compounds to capture thermal neutrons.
For particle-astrophysics
applications, the difficulty of re-use of such material makes the use
of thermosets unattractive while the cost of large quantities of
these materials remains prohibitive.
Considerable use has been made of thick slabs of polypropylene or
high density polyethylene, which are widely available commercially but
remain relatively expensive. A far cheaper option is to use raw
polyethylene or polypropylene pellets held in fabric bags or sacks or
contained within wooden shuttering \cite{Mc05a}. The thickness of
material has to be increased to allow for the packing fraction
but this is rarely problematic.
The availability of a mouldable composite based on polymer pellets
would mean that the containment could be dispensed with.
A composite material can be produced using
polypropylene pellets embedded in water extended polyester (WEP) at
reasonable cost \cite{Mc05a}.
Unfortunately, this has the disadvantages mentioned
above with respect to thermosets combined with problems of
manufacture associated with the presence of styrene monomer in the
resin.
Pure polycaprolactone, being (C$_{6}$H$_{10}$O$_{2}$)$_{n}$,
is an intrinsic neutron shielding material;
however, it is not a cost-effective choice, being approximately
seven times as expensive as polypropylene
pellets. To reduce the cost, a composite was produced using
polypropylene pellets bound together using polycaprolactone.
A composite in which all the voids were filled would have optimal
strength, while a composite with minimum binder would result in the
lowest cost.
The materials used were CAPA6506 polycaprolactone powder and Targor
Procom polypropylene pellets. The latter were roughly cylindrical
with a diameter of about 4.5mm and an average thickness of 3.6mm, though
there was considerable variation between pellets.
Experiments were performed to determine the optimum ratio of pellets
to binder.
The production method was similar to that used for the lead version.
To produce large panels of this composite,
it should be possible to make a mould in which the ingredients are
heated by blowing steam through the material.
It was found that with low proportions of CAPA to
polypropylene beads, the resulting composite was not
self-supporting and easily disintegrated.
Composite produced with 25g of CAPA or more per 100g of
polypropylene beads was structurally sound; however, it
is of comparable price to commercially available thick
slabs of polypropylene. While it is only marginally cost-effective,
it may have advantages in terms of easy
fabrication in complex shapes and in re-usability.
While this was not attempted, it is clear that quantities of boron or lithium
compounds, or minerals such as colemanite,
could be added to the composite without greatly affecting its mechanical
properties. Other neutron absorbers such as compounds of gadolinium,
cadmium or europium could similarly be considered. Polycaprolactone
could also be employed as an alternative to the epoxy resin binder
used in ``crispy mix'' boron-carbide based neutron shielding
composites \cite{Pu85a}, again giving a remouldable thermoplastic
material.
\section{Long term stability}
While no long term assessments of these composites have been
performed, a few general observations can be made.
Polycaprolactones are biodegradable, but this appears to be by
fungal attack in the presence of moisture. If the material is
kept dry, it can be expected to last as long as any other plastic
material.
If polycaprolactone is exposed to very high doses (500kGy) of gamma
radiation the polymer will cross-link \cite{Da98a},
which would be expected to render the material stiffer and even
more viscous in the molten state and would presumably lead to
embrittlement in the solid phase.
\section{Acknowledgements}
The author would like to acknowledge the assistance of Solvay in
providing samples of polycaprolactone.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
Designing deep neural network architectures often requires various task-specific domain knowledge, and it is challenging to achieve the state-of-the-art performance by manual tuning without such information.
Consequently, the automatic search for network architectures becomes an active research problem~\cite{stanley2002evolving,andrychowicz2016learning,li2016learning}.
Several neural architecture search (NAS) techniques achieve state-of-the-art performance on the standard benchmark datasets~\cite{zoph2018learning,chen2018searching}.
However, NAS methods inherently suffer from high computational cost due to their huge search spaces of architectural variations and the need for frequent performance evaluations during training.
To overcome these limitations, the weight sharing concept has recently been proposed and has demonstrated its advantage in terms of accuracy and efficiency~\cite{pham2018efficient}.
Despite such efforts, the computational cost of NAS approaches is still too high for search problems involving large-scale models and/or datasets.
Another critical drawback of most existing methods is that it is extremely difficult to understand their training progress, since decision making typically relies on the hidden state representations of the architecture search models.
\iffalse
\begin{figure}[t!]
\begin{center}
\centerline{\includegraphics[width=3in]{figures/Comp_graph.pdf}}
\caption{Performance comparison between various NAS methods with weight sharing in terms of search cost and test accuracy on CIFAR-10. The proposed algorithm, referred to as EDNAS, shows competitive test accuracy and the lowest search cost.}
\vspace{-0.5cm}
\label{performance_comparison}
\end{center}
\end{figure}
\fi
We propose an efficient decoupled neural architecture search (EDNAS) algorithm based on reinforcement learning (RL).
Contrary to the conventional RL-based NAS methods, which employ an RNN controller to sample candidate architectures from the search space, we use the policy vectors for decoupled sampling from the structure and operation search spaces.
The decoupled sampling strategy enables us to reduce search cost significantly and analyze the architecture search procedure in a straightforward way.
The resulting architecture achieves competitive performance compared to the output models from the state-of-the-art NAS techniques.
We claim the following contributions in this paper:
\begin{itemize}
\item We propose an RL-based neural architecture search technique, which learns policy vectors to sample candidate models by decoupling the structure and operation search spaces of a network.
\item Our sampling strategy is based on the fully observable policy vectors over the two orthogonal search spaces, which makes it straightforward to analyze the architecture search progress and understand the learned models.
\item Our algorithm achieves competitive performance on various benchmark datasets including CIFAR-10, ImageNet, and Penn Treebank at a fraction of the computational cost.
\end{itemize}
The rest of this paper is organized as follows.
In Section~\ref{sec:related}, we discuss the related work.
We describe the proposed algorithm in Section~\ref{sec:methodology}, and then illustrate experimental results in Section~\ref{sec:experiments}.
Section~\ref{sec:conclusion} concludes our paper.
\section{Related Work}
\label{sec:related}
Existing NAS methods are categorized into three groups based on their search methodologies: RL-based ~\cite{zoph2016neural,zoph2018learning,pham2018efficient,zhong2018practical,liu2018progressive, perez2018efficient}, evolutionary algorithm (EA)-based ~\cite{real2017large,real2018regularized,elsken2018efficient,so2019theevolved}, and gradient-based ~\cite{liu2018darts,luo2018neural,zhang2018you,xie2018snas,cai2018proxylessnas}.
We summarize the technical details of each approach.
\subsection{RL-based Methods}
RL-based architecture search techniques were originally proposed in~\cite{zoph2016neural}, where an RNN controller is used to search for whole models and the feedback from validation is provided via reinforcement learning.
NASNet~\cite{zoph2018learning} follows the optimization framework of ~\cite{zoph2016neural}, but the construction of the network is based on the cells discovered by the RNN controller.
The search space reduction to a cell is proven to improve not only efficiency but also accuracy compared to the unrestricted exploration ~\cite{zoph2018learning,bender2018understanding,pham2018efficient}.
However, NASNet still demands a large search cost, which makes the algorithm impractical.
ENAS~\cite{pham2018efficient} aims to further improve efficiency by weight sharing.
It achieves a remarkable reduction in search cost compared to NASNet with competitive accuracy.
\subsection{EA-based Methods}
The most representative work of the EA-based approach is~\cite{real2017large}, where a CNN is evolved from a trivial initial architecture, resulting in image classification performance comparable to the state-of-the-art models.
This technique is extended to evolving the convolutional cells rather than the whole architecture, which is referred to as AmoebaNet~\cite{real2018regularized}.
The best discovered architecture of AmoebaNet achieves outstanding performance on ImageNet.
Recently, evolution from a known architecture has been investigated to achieve improved accuracy~\cite{so2019theevolved}.
\subsection{Gradient-based Methods}
The gradient-based approach is an emerging direction in the NAS field, and many recent works can be classified into this category.
These methods relax a discrete architecture space into a continuous one and apply a gradient-based optimization technique to find the best model.
The relaxation is achieved by constructing a network structure based on various operations with differentiable weights~\cite{liu2018darts}, encoding network architectures using feature vectors in a continuous space~\cite{luo2018neural}, and adopting the concrete distribution over network architectures~\cite{xie2018snas}.
\subsection{Others}
There are other types of methods, which include hypernetworks~\cite{brock2017smash, zhang2018graph} and efficient random search techniques~\cite{bender2018understanding}.
\section{Methodology}
\label{sec:methodology}
EDNAS is an RL-based NAS approach with reduced complexity and an observable search mechanism, while maintaining accuracy on target tasks.
We first define our search space in Section~\ref{sect2.1}, and then discuss the details of the policy vectors and our search algorithm in Section~\ref{sect2.2}.
The training procedure is described in Section~\ref{sect2.3}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/Overall_architecture_and_DAG.pdf}
\vspace{0.1cm}
\caption{The overall architecture of the convolutional network.}
\label{overall_and_DAG}
\end{center}
\end{figure}
\subsection{Overall Architecture Design}
According to recent studies, it is inefficient to search for the entire architecture of a neural network~\cite{zoph2018learning, pham2018efficient, real2018regularized}.
This search strategy is called a macro search method~\cite{pham2018efficient}.
In contrast, micro search explores only cell architectures and stacks the discovered cells to construct the entire architecture.
Several previous works~\cite{zoph2018learning, pham2018efficient} show that micro search achieves better final performance than macro search even though its search cost is lower.
Therefore, we adopt the micro search strategy in EDNAS.
Our overall architecture design is presented in Fig.~\ref{overall_and_DAG}.
As shown in Fig.~\ref{overall_and_DAG}, the entire convolutional network consists of stacked cells.
There are two kinds of cells: normal cells and reduction cells.
A normal cell has the same input and output size, while a reduction cell produces an output whose width and height are half those of the input.
By changing the number of repeated normal cells, we can control the parameter size of the overall network.
In the case of the recurrent cell, we use a single cell for the entire recurrent network.
\subsection{Search Space}
\label{sect2.1}
The search space of EDNAS is given by a cell structure as in the recent studies~\cite{zoph2018learning, pham2018efficient, liu2018darts}.
The cell architecture is defined by a directed acyclic graph (DAG), where the nodes and the edges represent the local computation results and the flow of activations, respectively.
The search process starts with a manually designed overall architecture, which is a stack of cells in case of convolutional neural networks, and a sequence of cells in recurrent networks.
A convolutional cell has two input nodes, which have no incoming edges in the DAG, and they correspond to the outputs of two previous cells.
A node takes the outputs of two previous nodes as its inputs, and applies operations to individual inputs.
Then, the two operation outputs are summed to generate the output of the node.
The input-output relations are defined by the edges in the DAG.
We consider seven operations as in the existing methods~\cite{liu2018darts, pham2018efficient, luo2018neural}, which include $3\times3$ and $5\times5$ separable convolutions, $3\times3$ and $5\times5$ dilated separable convolutions, a $3\times3$ max pooling, a $3\times3$ average pooling, and an identity connection.
The output of the convolutional cell is defined as the depth-wise concatenation of all hidden nodes.
A recurrent cell also has two input nodes, which correspond to the input of the current cell and the hidden state of the previous cell.
Each node takes an output of the preceding one, performs an operation given by an edge, and then produces its own output after applying the activation function.
We employ four activation functions commonly used in previous studies~\cite{liu2018darts, pham2018efficient, luo2018neural}: sigmoid, tanh, ReLU, and identity.
To construct a recurrent cell, we use eleven nodes with nine hidden nodes.
The output of the recurrent cell is defined as the average of all hidden nodes.
\begin{figure*}[t!]
\begin{center}
\centerline{\includegraphics[width=0.83\linewidth]{figures/EDNAS_CNN.pdf}}
\caption{An example of the convolutional cell sampling by EDNAS when $N=5$. Input nodes are represented with dashed rectangles while non-input nodes are represented with solid rectangles. From each softmax-normalized policy vector, the actions are drawn by multinomial sampling. The selected edges and operations are highlighted with colors in the policy vectors.}
\vspace{-0.5cm}
\label{cnn_sampling}
\end{center}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\centerline{\includegraphics[width=0.83\linewidth]{figures/EDNAS_RNN.pdf}}
\caption{An example of the recurrent cell sampling by EDNAS when $N=4$. The first hidden node is represented with a dashed rectangle, and the other hidden nodes are represented with solid rectangles. From each softmax-normalized policy vector, the actions are drawn by multinomial sampling. The selected edges and operations are highlighted with colors in the policy vectors.}
\vspace{-0.5cm}
\label{rnn_sampling}
\end{center}
\end{figure*}
\subsection{Architecture Search Process in EDNAS}
\label{sect2.2}
\subsubsection{Overall Process}
The objective of our algorithm, EDNAS, is to maximize the expected reward of the sampled architectures~\cite{zoph2016neural}, which is given by
\begin{equation}
\label{EDNAS_objective}
\max_{\theta}~ \mathbb{E}_{P(\mathbf{m};\theta)}[R],
\end{equation}
where $\theta$ is the policy parameter and $\mathbf{m}$ is the model sampled from the current policy.
While the RNN controller manages policies in the conventional RL-based methods ($\theta = \theta_c$), EDNAS employs the policy vectors instead ($\theta = \theta_v$) to search for the optimal architecture.
\iffalse
\textcolor{blue}{So EDNAS applies decoupled search method which is independently trained by the structures and operations to reduce the search cost.}
\fi
EDNAS decouples the structure search and operation search, which are performed based on the separate policy vectors.
There are two kinds of policy vectors in EDNAS; one is for non-input nodes and the other is for edges.
Each non-input node in the DAG is associated with a $c_i$-dimensional policy vector, where $c_i$ is the number of incoming edge combinations of the $i$-th node.
The policy vector of node $n_i$, $\mathbf{p}_{n_i}$, is given by
\begin{equation}
\label{node_policies}
\mathbf{p}_{n_i}=[v_1, ..., v_{c_i}], \;\; c_i={e_i\choose r},
\end{equation}
where $e_i$ is the number of incoming edges to $n_i$, and $r$ is the number of the selected edges.
The policy vector of edge $e$ in the DAG is a $k$-dimensional vector, where $k$ is the number of operations:
\begin{equation}
\label{edge_policies}
\mathbf{p}_{e}=[w_1, ..., w_{k}].
\end{equation}
Based on these policy vectors, we perform architecture sampling as follows.
First, we search for the overall structure of the network by sampling edges from the entire DAG.
To this end, the softmax function is applied to $\mathbf{p}_{n_i}$ for its normalization.
Then, an input edge combination of each node is sampled from the multinomial distribution given by $\text{softmax}(\mathbf{p}_{n_i})$.
After that, we determine the operation corresponding to each selected edge.
The operation of each selected edge is determined by drawing a sample from the multinomial distribution defined by $\text{softmax}(\mathbf{p}_{e})$, which is similar to the structure searching step.
The policy vectors for the structure and operation search are observable in our framework.
Therefore, we can analyze the training progress of EDNAS based on the statistics of architecture samples and the visualized policy vectors during training.
For example, it is possible to see which combinations of edges are selected and which operations are preferred at each iteration or over many epochs.
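As a minimal, self-contained illustration of this two-step sampling, the following PyTorch sketch draws one cell with the convolutional defaults used later ($N=6$, seven operations, $r=2$). The policy vectors are initialized at random here purely for brevity, whereas in EDNAS they are the learned parameters $\theta_v$; all function and variable names are illustrative rather than taken from our released code.
\begin{verbatim}
import itertools
import torch

def sample_conv_cell(num_nodes=6, num_ops=7, r=2):
    # Structure search: for each non-input node, draw one
    # combination of r incoming edges from softmax(p_n).
    # Operation search: draw one operation per chosen edge
    # from softmax(p_e).
    chosen = []
    for i in range(2, num_nodes):          # non-input nodes
        combos = list(itertools.combinations(range(i), r))
        p_n = torch.randn(len(combos))     # policy vector p_{n_i}
        idx = torch.multinomial(torch.softmax(p_n, 0), 1).item()
        for src in combos[idx]:
            p_e = torch.randn(num_ops)     # policy vector p_e
            op = torch.multinomial(torch.softmax(p_e, 0), 1).item()
            chosen.append((src, i, op))    # edge src -> i with op
    return chosen
\end{verbatim}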
\subsubsection{Searching in Convolutional Cells}
Figure~\ref{cnn_sampling} illustrates an example of convolutional cell architecture sampling.
The number of nodes $N$ is 5, and the number of operations is 3.
There are two input nodes in the DAG as described in Section~\ref{sect2.1}, and non-input nodes in our convolutional cell receive two inputs from preceding nodes, {\it i.e.,} $r=2$.
In the structure search step, one input edge combination is selected for each non-input node by multinomial sampling.
In the example, the edges heading to $n_3$ from $n_0$ and $n_2$ are selected as the input edge combination of $n_3$.
For the node $n_4$, edges from $n_2$ and $n_3$ are chosen.
In the operation search step, the specific operation of each selected edge is determined.
Note that an operation is sampled independently for each selected edge.
We obtain a sampled architecture after both steps are completed, as shown in the rightmost graph in Figure~\ref{cnn_sampling}.
Since recent NAS methods typically adopt both normal and reduction cell structures to construct convolutional networks, we also search for both cell types based on separately defined policy vectors.
The computational complexity of the architecture search in convolutional cells can be estimated by the number of candidate architectures.
In the case of EDNAS, the total number of possible edge combinations is $\prod_{i=1}^{N-2} {i+1\choose 2}$ and the number of possible operation assignments for the selected edges is $7^8$.
Therefore, the number of possible convolutional cell architectures in EDNAS is $\prod_{i=1}^{N-2} {i+1\choose 2} \cdot 7^8 = 1.04 \times 10^9$ since we use $N=6$.
\subsubsection{Searching in Recurrent Cells}
Sampling procedure in the recurrent cell architectures is presented in Figure~\ref{rnn_sampling}.
In the example, the number of nodes $N$ is 4 and the number of activation functions is 3.
Each non-input node takes an incoming edge from one of the preceding nodes ($r=1$).
The first hidden node, denoted by $n_0$ in Fig.~\ref{rnn_sampling}, adds up two input nodes and applies the $\text{tanh}$ activation to compute its output.
Similarly to the convolutional cell architecture sampling, the edges and activation functions are selected by multinomial distribution based on the policy vectors after applying the softmax function.
In the example in Figure~\ref{rnn_sampling}, the edge entering $n_2$ from $n_0$ and the edge heading to $n_3$ from $n_1$ are selected in the structure searching step.
After the operation searching step, a sampled architecture is determined as shown in the graph on the right of Figure~\ref{rnn_sampling}.
The computational complexity is likewise given by the number of possible recurrent cell architectures, which is $\prod_{i=1}^{N-1} {i\choose 1} \cdot 4^8 = 2.64 \times 10^9$ since we use $N=9$.
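Both counts are straightforward to verify numerically; the short Python check below follows the two formulas above.
\begin{verbatim}
import math

conv = math.prod(math.comb(i + 1, 2)
                 for i in range(1, 5)) * 7 ** 8
rec = math.prod(range(1, 9)) * 4 ** 8
print(f"{conv:.2e}")  # 1.04e+09 convolutional cells (N = 6)
print(f"{rec:.2e}")   # 2.64e+09 recurrent cells (N = 9)
\end{verbatim}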
\subsection{Training Process and Deriving Architectures}
\label{sect2.3}
In EDNAS, the entire training process consists of the child model training step and the policy vector training step.
We perform the optimization by alternating the two training steps, where the child model sampled from the DAG is learned using the training data while the policy vectors are trained with the validation set.
For training the child model, we use stochastic gradient descent (SGD) on the shared parameters of the DAG to minimize the expected loss.
During the child model training step, the parameters of the policy vectors are fixed.
As mentioned in ENAS~\cite{pham2018efficient}, the gradient given by the Monte Carlo estimate based on a single sample works well for training the child model.
Therefore, we sample one architecture in every iteration, compute a gradient based on the sampled model, and train the model using SGD.
In the policy vector training step, the model parameters are fixed and the policy vectors are updated to maximize the expected reward.
We use the Adam optimizer~\cite{kingma2014adam}, and the gradient is computed by REINFORCE~\cite{williams1992simple}, as shown below:
\begin{equation}
\label{reinforce}
\nabla_{\theta_v}\log P(\mathbf{m};\theta_v)(R-b),
\end{equation}
where $P(\mathbf{m};\theta_v)$ is the probability of the model $\mathbf{m}$ sampled based on the policy vectors $\theta_v$, and $b$ is a moving average baseline of rewards.
We calculate the reward $R$ on the validation set, and encourage policy vectors to learn architecture with high generalization performance.
In the case of the image classification experiment, we use validation accuracy on a single minibatch as a reward.
In language modeling, we employ $c \cdot \left( \text{ppl}_\text{valid} \right)^{-1}$ as the reward, where $c$ is a pre-defined constant and $\text{ppl}_\text{valid}$ is the perplexity on a single validation minibatch.
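A minimal sketch of one policy-vector update is given below; the moving-average decay and all names are illustrative assumptions rather than the exact settings of our implementation. Here \texttt{log\_prob} denotes the summed log-probability of the sampled edges and operations under the softmax-normalized policy vectors.
\begin{verbatim}
import torch

def reinforce_step(log_prob, reward, baseline,
                   optimizer, decay=0.95):
    # Update the moving-average baseline b, then take one
    # step along grad log P(m; theta_v) * (R - b).
    baseline = decay * baseline + (1.0 - decay) * reward
    loss = -log_prob * (reward - baseline)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline

# Illustrative usage with a single edge policy vector:
p_e = torch.zeros(7, requires_grad=True)
opt = torch.optim.Adam([p_e], lr=3.5e-4)
a = torch.multinomial(torch.softmax(p_e, 0), 1).item()
log_prob = torch.log_softmax(p_e, 0)[a]
b = reinforce_step(log_prob, reward=0.9, baseline=0.0,
                   optimizer=opt)
\end{verbatim}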
The training process is repeated until the training epoch reaches the pre-defined maximum number of epochs.
When deriving the final architecture, we sample a predefined number of models and compute the reward of each model on a single minibatch.
Then, the model that achieves the best reward is selected as the final architecture discovered by EDNAS.
We employ 100 models to obtain the final model in both the convolutional and recurrent cell searches.
\begin{table*}[t]
\caption{Comparison between EDNAS and the state-of-the-art neural architecture search methods for image classification on the CIFAR-10 dataset with respect to computational cost, parameters and test errors. Entries marked with $\dagger$ are results that we reproduced in our environment using the code released by the authors on GitHub.}
\label{cifar_result}
\vskip 0.15in
\begin{center}
\scalebox{0.85}{
\begin{tabular}{@{}clcccc@{}}
\toprule
& \multicolumn{1}{c}{}& \multicolumn{1}{c}{Search Cost} & \multicolumn{1}{c}{Params} & \multicolumn{1}{c}{Test Error} & \multicolumn{1}{c}{}\\
Category & \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{(GPU days)} & \multicolumn{1}{c}{(M)}& \multicolumn{1}{c}{(\%)} & Search Method\\
\midrule
\multirow{2}{*}{Manual} & DenseNet~\cite{huang2017densely} & - & 25.6 & 3.46 & manual \\
&DenseNet + cutout~\cite{huang2017densely} & - & 26.2 & 2.56 & manual \\
\midrule
\multirow{5}{*}{\shortstack{NAS without \\ weight sharing}} & NASNet-A + cutout~\cite{zoph2018learning} & 1800 & 3.3 & 2.65 & RL\\
&AmoebaNet-A + cutout~\cite{real2018regularized} & 3150 & 3.2 & 3.34 & EA\\
&AmoebaNet-B + cutout~\cite{real2018regularized} & 3150 & 3.1 & 2.55 & EA\\
&PNAS~\cite{liu2018progressive} & 225 & 3.1 & 3.41 & RL\\
&NAONet + cutout~\cite{luo2018neural} & 200 & 128 & 2.11 & gradient\\
\midrule
\midrule
\multirow{10}{*}{\shortstack{NAS with \\weight sharing}} & ENAS + cutout~\cite{pham2018efficient} & 0.5 & 4.6 & 2.89 & RL\\
&ENAS + cutout$^\dagger$~\cite{pham2018efficient} & 0.6 & 3.2 & 3.32 & RL\\
&DARTS (first order) + cutout~\cite{liu2018darts} & 0.38 & 2.9 & 2.94 & gradient \\
&DARTS (first order) + cutout$^\dagger$~\cite{liu2018darts} & 0.32 & 2.8 & 3.05 & gradient \\
&DARTS (second order) + cutout~\cite{liu2018darts} & 1 & 3.4 & 2.83 & gradient\\
&NAONet-WS~\cite{luo2018neural} & 0.3 & 2.5 & 3.53 & gradient\\
&GHN + cutout~\cite{zhang2018graph} & 0.84 & 5.7 & 2.84 & hypernet \\
&DSO-NAS + cutout~\cite{zhang2018you} & 1 & 3.0 & 2.84 & gradient \\
\cmidrule{2-6}
&Random & 0.27 & 3.4 & 3.91 & -\\
&EDNAS + cutout & 0.28 & 3.7 & 2.84 & RL\\
\bottomrule
\end{tabular}
}
\end{center}
\vskip -0.1in
\end{table*}
\subsection{Characteristics of EDNAS}
EDNAS has a more interpretable architecture search procedure compared to other RL-based methods searching for architectures using RNN controllers~\cite{zoph2018learning,pham2018efficient}.
This is because our algorithm maintains two decoupled policy vectors---one for structure search and the other for operation search---to sample architectures; these policy vectors are human interpretable and the search procedures are fully observable.
Another benefit of decoupling the policy vectors is the reduction of the architecture search cost, since the joint search space is factored into two much smaller spaces.
Note that the methods relying on RNN controllers~\cite{zoph2018learning,pham2018efficient} need to consider all the generated architecture sequences, while a gradient-based method~\cite{liu2018darts} considers all possible combinations of architectures during training and has to construct a huge model for neural architecture search.
In practice, EDNAS runs faster than the other methods, and the gap grows in large-scale network architecture search.
\begin{figure*}[t!]
\begin{center}
\centerline{\includegraphics[width=\linewidth]{figures/heatmap_cifar.pdf}}
\vspace{-0.1cm}
\caption{An example of policy vectors for sampling operations over a sequence of epochs on the CIFAR-10 dataset. Columns correspond to operations and rows to edges.}
\vspace{-0.5cm}
\label{cifar_heatmap}
\end{center}
\end{figure*}
\iffalse
Due to the auto-regressive characteristic, previous selection of nodes and operations affect the next selection of nodes and operations in case of the RNN controller.
Therefore, various cases should be considered to search the structures and operations, and the search cost tends to increase.
In case of~\cite{liu2018darts}, parameters about all operations of all edges are trained by computing gradients.
The requirement of massive computation makes the method hard to be trained for sufficiently large iterations to search the architecture.
However,
\fi
\section{Experiments}
\label{sec:experiments}
We conducted experiments on the CIFAR-10 and ImageNet datasets to identify the optimal convolutional models, and on the Penn Treebank (PTB) dataset to search for recurrent networks.
Refer to our project page\footnote{https://github.com/logue311/EDNAS} for the source code, which facilitates the reproduction of our results.
\subsection{Convolutional Cell Search with CIFAR-10}
\subsubsection{Data and Experiment Setting}
The CIFAR-10 dataset has 50,000 training images and 10,000 testing examples.
Among the training examples, EDNAS uses 40,000 images for training models and 10,000 images for training policy vectors in the architecture search.
In the architecture search, EDNAS uses 2 input nodes and 4 operation nodes to design the architecture within a cell.
We construct the whole architecture using 6 normal cells and 2 reduction cells, with each reduction cell located after 2 normal cells.
Our approach utilizes the following hyper-parameters.
For training child models, we employ SGD with the Nesterov momentum~\cite{nesterov1983}, a learning rate of 0.05, and a batch size of 128.
For learning the policy vectors, we use Adam~\cite{kingma2014adam} with a learning rate of 0.00035.
Both the child models and the policy vectors are trained for 300 epochs.
\iffalse
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=\linewidth]{figures/statistic_cifar.pdf}}
\caption{An example of selected operations and edges during accumulated epochs on CIFAR-10 dataset. This table defines columns as operations and rows as connecting to each edge.}
\label{cifar_statistic}
\end{center}
\end{figure*}
\fi
\begin{figure*}[t]
\centering
\subfigure[The edge sampling statistics of the normal cells.]{
\includegraphics[width=0.42\linewidth]{figures/normal_edges_histogram.pdf}
\label{edges_stat_normal}}
\hspace{0.4in}
\subfigure[The edge sampling statistics of the reduction cells.]{
\includegraphics[width=0.42\linewidth]{figures/reduction_edges_histogram.pdf}
\label{edges_stat_reduction}}
\subfigure[The operation sampling statistics of the normal cells.]{
\includegraphics[width=0.42\linewidth]{figures/normal_operations_histogram.pdf}\label{operations_stat_normal}}
\hspace{0.4in}
\subfigure[The operation sampling statistics of the reduction cells.]{
\includegraphics[width=0.42\linewidth]{figures/reduction_operations_histogram.pdf}
\vspace{-0.1cm}
\label{operations_stat_reduction}}
\caption{The cumulative histograms of the sampled edges and operations for every 50 epochs. The sampling results in the normal cells show a clear tendency while the sampling results in the reduction cells are mostly stable over time.}
\label{histograms}
\end{figure*}
We adopt a different architecture for performance evaluation, which is composed of 20 cells (18 normal cells and 2 reduction cells); 6 normal cells are followed by a reduction cell twice, and the remaining 6 normal cells, together with the auxiliary classifier, are placed at the end of the network to mitigate the vanishing gradient problem.
The learning rate is 0.025, the batch size is 128, and the network is trained for 600 epochs.
We optimize the network using SGD without the Nesterov momentum and incorporate the cutout~\cite{devries2017improved} method for better generalization.
\iffalse
In Table~\ref{cifar_result}, the first block means the manually designed architecture for image classification, which has the state-of-the-art performance~\cite{huang2017densely}.
They have a lot of parameters because there are many short connections in the architecture.
The second block is the NAS methods without weight sharing.
They have state-of-the-art performance in the field without weight sharing in search progress.
If searching the neural architecture without weight sharing, the searching methods need a lot of GPUs or take a long time.
We define this period to search architectures as Search Cost means days to run the architecture on a single GPU.
NAONet with cutout has top performance in the field without weight sharing, but the method has a massive size of parameters.
The third block means the novel NAS methods with weight sharing.
$\ast$ marks mean the performance indicated in their papers and $\dagger$ marks are the performance we conducted experiments again in our environment by using codes made by the author from their GitHub.
ENAS and DARTS (first order) worked in our environment.
Also, we wanted to experiment with the methods of the latest paper such as GHN or DSO-NAS in our environment, but there were not their shared codes to the public.
Our approach achieved the test error of 2.84, which is comparable to the state-of-the-art test error of 2.83 in case of weight shared methods.
Moreover, search cost of EDNAS is lowest in all comparison methods.
Among the weights shared methods, the search cost of DARTS first order method is similar to EDNAS.
However, the test error of the derived model is worse than the EDNAS.
\fi
\subsubsection{Results and Discussion}
Table~\ref{cifar_result} summarizes the results.
Although the manually designed architectures achieve state-of-the-art accuracy, their model sizes are much larger than those found by automatic architecture search techniques.
The NAS methods without weight sharing suffer from huge search costs, while the approaches with weight sharing strike a reasonable balance between cost and accuracy.
Note that EDNAS is competitive with the weight-sharing techniques in terms of accuracy and model size, but it is substantially faster for architecture search.
Our architecture search procedures are fully observable, and Figure~\ref{cifar_heatmap} visualizes the policy vector for sampling operations during training on the CIFAR-10 dataset.
The policy vectors are represented as a matrix, where each column denotes an operation and each row means an edge in the DAG.
Each edge is identified by its source and destination, which are shown as the numbers in the row label.
The number in each cell is the value after applying the softmax function.
The background color and transparency of each cell are normalized within the cells sharing the same destination node.
Dark red indicates a large value while dark blue indicates a small one.
Note that the distribution of values in the policy vectors is almost uniform at epoch 50, while the policy vectors at epoch 300 prefer convolution operations over non-trainable operations.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\linewidth]{figures/normal_cifar.pdf}
\vspace{0.2cm}
\centerline{\small (a) Normal cell}
\includegraphics[width=0.85\linewidth]{figures/reduction_cifar.pdf}
\centerline{\small (b) Reduction cell}
\vspace{-0.2cm}
\caption{The CNN model discovered by EDNAS on CIFAR-10.}
\vspace{-0.5cm}
\label{arch_cifar}
\end{center}
\end{figure}
\iffalse
\begin{figure}[t!]
\begin{center}
\centerline{\includegraphics[width=\linewidth]{figures/Comparson_search_cost.pdf}}
\vspace{-0.1cm}
\caption{A graph showing the search cost of an epoch as the number of nodes increases.}
\vspace{-0.3cm}
\label{comparson_search_cost}
\end{center}
\end{figure}
\fi
\iffalse
In Figure, ~\ref{edges_stat_normal} and ~\ref{edges_stat_reduction} present accumulated ratio of sampled edges in a time sequence for every 50 epochs.
Through these results, the searched architecture was trained to be driven by a small number of inputs. So the architecture was designed to parallel architecture such as Figure~\ref{normal_cifar}.
These results of ~\ref{operations_stat_normal} and ~\ref{operations_stat_reduction} are able to interpret that they tend to train more filter layers like convolutional layers than pooling layers or the identity layer. Normal cells have larger this tendency than reduction cells. It can be verified through summed operations count.
\fi
\begin{table*}[t]
\caption{Comparison between EDNAS and the state-of-the-art neural architecture search methods for image classification on the ImageNet dataset with respect to computational cost, parameters and test errors.}
\label{imagenet_result}
\vskip 0.05in
\begin{center}
\scalebox{0.85}{
\begin{tabular}{clccccc}
\toprule
& \multicolumn{1}{c}{}& \multicolumn{1}{c}{Search Cost} & \multicolumn{1}{c}{Params} & \multicolumn{2}{c}{Test Error (\%)} & \multicolumn{1}{c}{}\\ \cmidrule{5-6}
Category & \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{(GPU days)} & \multicolumn{1}{c}{(M)}& top-1 & top-5 & Search Method\\
\midrule
\multirow{4}{*}{Manual} & Inception-v1~\cite{szegedy2015going} & - & 6.6 & 30.2 & 10.1 & manual\\
&MobileNet~\cite{howard2017mobilenets} & - & 4.2 & 29.4 & 10.5 & manual\\
&ShuffleNet 2$\times$ (v1)~\cite{zhang2017shuffle} & - & 5 & 29.1 & 10.2 & manual\\
&ShuffleNet 2$\times$ (v2)~\cite{zhang2017shuffle} & - & 5 & 26.3 & - & manual\\
\midrule
\multirow{7}{*}{Transfer learning} & NASNet-A~\cite{zoph2018learning} & - & 5.3 & 26.0 & 8.4 & RL\\
&AmoebaNet-C~\cite{real2018regularized} & - & 6.4 & 24.3 & 7.6 & EA\\
&PNAS~\cite{liu2018progressive} & - & 5.1 & 25.8 & 8.1 & RL\\
&DARTS~\cite{liu2018darts} & - & 4.9 & 26.9 & 9.0 & gradient\\
&GHN~\cite{zhang2018graph} & - & 6.1 & 27.0 & - & hypernet\\
&DSO-NAS~\cite{zhang2018you} & - & 4.7 & 26.2 & 8.6 & gradient \\
\cmidrule{2-7}
&EDNAS & - & 5.2 & 26.8 & 8.9 & RL\\
\midrule
\midrule
\multirow{2}{*}{Direct search} & DSO-NAS~\cite{zhang2018you} & 6 & 4.8 & 25.4 & 8.4 & gradient \\
\cmidrule{2-7}
&EDNAS & 3.67 & 4.7 & 26.9 & 8.9 & RL\\
\bottomrule
\end{tabular}
}
\end{center}
\vskip -0.1in
\end{table*}
To further analyze the search procedure, we present the statistics of the sampled architectures during our training procedure in Figure~\ref{histograms}.
Specifically, we illustrate the cumulative distributions of the sampled edges of the DAG and the operations in the normal and reduction cells over every 50 epochs.
Figure~\ref{edges_stat_normal} shows that $e_{0,3}$ and $e_{1,3}$ are selected more frequently at the later stage of training while the sampling ratio of $e_{2,3}$ drops consistently over time.
In general, the edges from input nodes are preferred to the ones from the hidden nodes.
On the other hand, when we observe the operation sampling patterns in the normal cell, we can see the clear tendency of individual operations; the frequency of pooling (max, avg) and identity (id) operations decreases gradually while the separable convolutions and dilated convolutions with a relatively large kernel size ($5 \times 5$) become more popular at the later stage of the optimization process.
It implies that the searched models attempt to extract the high-level information from the inputs to improve accuracy.
The statistics in the reduction cells do not change much over time for both edge and operation samples.
The derived architectures of the normal and reduction cells are demonstrated in Figure~\ref{arch_cifar}.
The characteristics of the two models are different in the sense that the normal cell has many parallel operations which coincides with the tendency illustrated in Figure~\ref{edges_stat_normal} while the operations in the reduction cell tend to be serial.
\subsection{Convolutional Cell Search with ImageNet}
\subsubsection{Data and Experiment Setting}
The ImageNet dataset contains approximately 1.2M images in 1,000 classes for training~\cite{deng2009large}. For our architecture search, we use 1M images to train the model and 0.2M images to train the policy vectors.
Our algorithm for ImageNet uses exactly the same search space as for CIFAR-10, except for an additional stem module that converts input images from $224 \times 224$ to $28 \times 28$, which is similar to the input size on CIFAR-10.
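To make the stem concrete, the following is a PyTorch-style sketch; it is only an illustration, since the paper fixes the input and output resolutions but not the full composition: the channel widths are placeholders, and a $224 \to 28$ reduction requires a total stride of 8, i.e. three stride-2 stages.
\begin{verbatim}
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1."""
    def __init__(self, c_in, c_out, stride):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1,
                      groups=c_in, bias=False),
            nn.Conv2d(c_in, c_out, 1, bias=False),
        )

    def forward(self, x):
        return self.op(x)

stem = nn.Sequential(                      # 224 -> 112 -> 56 -> 28
    SeparableConv(3, 32, stride=2), nn.ReLU(),
    SeparableConv(32, 64, stride=2), nn.ReLU(),
    SeparableConv(64, 128, stride=2),
)
\end{verbatim}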
\iffalse
For the architecture search we used the same cell and network architecture for computing the validation accuracy as in the CIFAR-10 case.
The first stem module was composed of two 3 $\times$ 3 separable convolutional layers with stride 2 and a ReLU layer between them. Through the stem module, the input images were converted from $224 \times 224$ to $28 \times 28$, almost the same as the input size on the CIFAR-10 dataset.
\fi
We employ the SGD optimizer without Nesterov momentum for the architecture search, with an initial learning rate of 0.05 that is reduced by a factor of 10 every 10 epochs.
Adam is used for policy vector search with learning rate 0.00035.
The batch sizes for training model and policy vector search are 200 and 50, respectively.
The training is carried out for 50 epochs.
The architecture for performance evaluation is composed of 14 cells (12 normal cells and 2 reduction cells), where each reduction cell follows a series of 4 normal cells.
Also, we integrate an auxiliary classifier and train the network using SGD with a learning rate of 0.1 and a batch size of 200 for 250 epochs.
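The resulting cell layout can be spelled out explicitly; a minimal sketch (the function and the letter encoding are ours):
\begin{verbatim}
def cell_sequence(n_blocks=3, normals_per_block=4):
    """Cell layout as a string: reduction cells ('R') separate
    blocks of normal cells ('N'), so 3 blocks of 4 normal cells
    give 12 normal + 2 reduction = 14 cells in total."""
    return "R".join(["N" * normals_per_block] * n_blocks)

assert cell_sequence() == "NNNNRNNNNRNNNN"
assert len(cell_sequence()) == 14
\end{verbatim}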
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\linewidth]{figures/normal_imagenet.pdf}
\vspace{0.2cm}
\centerline{\small (a) Normal cell}
\includegraphics[width=0.65\linewidth]{figures/reduction_imagenet.pdf}
\centerline{\small (b) Reduction cell}
\vspace{-0.3cm}
\caption{The CNN model discovered by EDNAS on ImageNet.}
\vspace{-0.8cm}
\label{arch_imagenet}
\end{center}
\end{figure}
\iffalse
The first block lists traditional methods whose architectures were designed manually; ShuffleNet 2 $\times$ (v2) has the state-of-the-art performance among manual methods on the ImageNet dataset.
The second block denotes transferred architectures, i.e. architectures searched on the CIFAR-10 dataset; since the ImageNet dataset is far larger than CIFAR-10, searching on it directly requires huge search costs and computing resources.
The third block presents DSO-NAS, which searched the neural architecture directly on ImageNet, with a search cost of 6 GPU days and a top-1 test error of 25.4\%.
Our method was evaluated in both settings, transferring the architecture from the CIFAR-10 dataset (26.8\% top-1 test error) and searching directly on ImageNet; the direct search takes 3.67 GPU days, the lowest search cost among direct architecture searches on ImageNet.
\fi
\iffalse
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=\linewidth]{figures/heatmap_imagenet.pdf}}
\caption{An example of policy vectors for sampling operations over a sequence of training epochs on the ImageNet dataset. Columns correspond to operations and rows to edges.}
\label{imagenet_heatmap}
\end{center}
\end{figure*}
\fi
\begin{table*}[t!]
\caption{Comparison between EDNAS and the state-of-the-art neural architecture search methods for language modeling on the Penn Treebank dataset with respect to computational cost, params and test perplexity.}
\label{ptb_result}
\vskip 0.1in
\begin{center}
\scalebox{0.85}{
\begin{tabular}{clcccc}
\toprule
& \multicolumn{1}{c}{}& \multicolumn{1}{c}{Search Cost} & \multicolumn{1}{c}{Params} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}\\
Category & \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{(GPU days)} & \multicolumn{1}{c}{(M)}& \multicolumn{1}{c}{Test PPL} & Search Method\\
\midrule
\multirow{2}{*}{Manual} & LSTM~\cite{merity2017regularizing} & - & 24 & 58.8 & manual\\
& LSTM + 15 softmax experts~\cite{yang2017breaking} & - & 22 & 56.0 & manual\\
\midrule
\multirow{10}{*}{\shortstack{NAS \\ methods}} & NAS~\cite{zoph2016neural} & $10^4$ CPU days & 25 & 64.0 & RL\\
& ENAS~\cite{pham2018efficient} & 0.5 & 24 & 55.8 & RL \\
& ENAS (reproduction)~\cite{liu2018darts} & 0.5 & 24 & 63.1 & RL \\
& DARTS (first order)~\cite{liu2018darts} & 0.13 & 23 & 60.5 & gradient \\
& DARTS (first order)$^\dagger$~\cite{liu2018darts} & 0.09 & 23 & 64.2 & gradient\\
& DARTS (second order)~\cite{liu2018darts} & 1 & 23 & 56.6 & gradient\\
& NAONet~\cite{luo2018neural} & 300 & 27 & 56.0 & gradient\\
& NAONet-WS~\cite{luo2018neural} & 0.4 & 27 & 56.6 & gradient\\
\cmidrule{2-6}
& Random & 0.10 & 23 & 61.24 & -\\
& EDNAS & 0.11 & 23 & 59.45 & RL\\
\bottomrule
\end{tabular}
}
\end{center}
\vskip -0.1in
\end{table*}
\subsubsection{Results and Discussion}
Table~\ref{imagenet_result} presents the overall comparison with other methods on the ImageNet dataset.
Most of the existing approaches identify the best architecture on the CIFAR-10 dataset and use the same model to evaluate performance on ImageNet after fine-tuning.
This is mainly because their search costs are prohibitively high, and it is almost impossible to apply their algorithms to the large-scale dataset directly.
In contrast to these methods, DSO-NAS~\cite{zhang2018you} and our algorithm, denoted by EDNAS, are sufficiently fast to explore the search space directly even on the ImageNet dataset.
The performance of EDNAS is as competitive as DSO-NAS in terms of model size and accuracy, but the search cost of EDNAS is substantially smaller than DSO-NAS.
Figure~\ref{arch_imagenet} illustrates the identified normal and reduction cells, which have the same graph topology while operations are somewhat different.
\subsection{Recurrent Cell Search with PTB}
\subsubsection{Data and Experiment Settings}
The Penn Treebank dataset is a widely-used benchmark dataset for the language modeling task.
Our experiment is conducted on the standard preprocessed version~\cite{zaremba2014recurrent}.
For architecture search, we set the embedding size and the hidden state size to 300, and train the child network using the SGD optimizer without momentum for 150 epochs.
We set the learning rate to 20 and the batch size to 256.
Also, dropout is applied to the output layer with a rate of 0.75.
To train the policy vectors, we use the Adam optimizer with learning rate $3\times10^{-3}$.
For evaluation, the network is trained using the averaged SGD (ASGD) with the batch size 64 and the learning rate 20.
The network is trained for 1600 epochs.
The embedding size is set to 850, and the rest of the hyper-parameters are identical to the architecture search step.
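For reference, the hyper-parameters above can be collected in one place; a sketch in plain Python (the dictionary layout is ours; following the text, every setting not explicitly overridden is inherited from the search step):
\begin{verbatim}
search_cfg = dict(
    embedding_size=300, hidden_size=300,
    optimizer="SGD", momentum=0.0, lr=20.0,
    batch_size=256, epochs=150, output_dropout=0.75,
    policy_optimizer="Adam", policy_lr=3e-3,
)

# Evaluation inherits the search settings and overrides a few of them.
eval_cfg = dict(
    search_cfg,
    embedding_size=850,
    optimizer="ASGD",   # averaged SGD, as described in the text
    batch_size=64, epochs=1600,
)
\end{verbatim}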
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/RNN_cell.pdf}
\vspace{0.1cm}
\caption{The RNN model obtained by EDNAS on Penn Treebank.}
\label{rnn_discovered_cell}
\end{center}
\end{figure}
\vspace{-0.3cm}
\subsubsection{Results and Discussion}
Table~\ref{ptb_result} compares various NAS methods for RNNs.
The best architecture discovered by EDNAS achieves 59.45 in terms of test perplexity, and its search cost is 0.11 GPU days only.
The performance of EDNAS is competitive compared to the other NAS methods.
EDNAS shows better accuracy than NAS and DARTS (first order) while it is several times faster than most of the other approaches.
Figure~\ref{rnn_discovered_cell} presents the best architecture identified by EDNAS in RNN.
\section{Conclusion}
\label{sec:conclusion}
We presented a novel neural architecture search algorithm, referred to as EDNAS, which decouples the structure search and the operation search by applying the separated policy vectors.
Since the policy vectors in EDNAS are fully observable, we can analyze the architecture search process in EDNAS.
The experimental results demonstrate that the proposed algorithm is competitive in terms of accuracy and model size on various benchmark datasets.
In particular, the architecture search procedure in EDNAS is significantly faster than most of the existing techniques.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Semantic memory~\cite{martin2001semantic}, which can be
considered as a part of explicit memory, is responsible for
the brain's ability to memorize the meaning of words and
concepts and also their mental representation, including their
properties and functions and the relation to each
other~\cite{tulving1990priming}. One possible tool to
study semantic memory is the task of free association, where
a subject is asked to express the first word to come to mind
related to some given cue. This task has a long history in
psychology, dating back to the late 19th
century~\cite{galtonFreeAssoc1880}. It is an instance of
verbal fluency tasks which are commonly used for the study
of the structure of concept to concept associations within
the network organization of semantic memory \cite{goni2011semantic}.
A range of distinct semantic memory models have been
suggested over the years, beginning in the sixties and
seventies with models recording dictionary information~\cite{quillian1967}.
It has been observed that priming effects, namely when a
semantically related cue has been presented to the test person
before, play a substantial role in memory retrieval and task
performances~\cite{mcnamara_book,ratcliff-priming,tulving1990priming},
with prime and target possibly forming a compound
object~\cite{ratcliff88}.
A range of lexicographical and associative semantic databases
have been collected over the years, like WordNet~\cite{wordnet},
the South Florida collection of free association, rhyme,
and word fragment norms~\cite{NelsonNet} and
ConceptNet~\cite{conceptnet}. These word association networks
typically exhibit small-world structures, with short
average distances between words, together with strong local
clustering~\cite{Steyvers-SemanticNetsModel,gros2011complex},
a property shared with lexicographical spaces obtained
from word co-occurrences~\cite{lund96}.
A comprehensive database of free associations, obtained
from the participation of a large number of individuals
(on the order of 6000), was made public by Nelson
et al.~\cite{NelsonNet}. This database, which we will
denote as {\em South Florida Free Associations}, SFFA,
in the following, can be considered an example
of a semantic space~\cite{steyvers2002}. The data essentially
constitutes a weighted
directed network, since both the forward and the backwards
connectivity strength between all associatively related
pairs of around 5000 words, the vertices of the network,
are provided. These association strengths are averaged over
all subjects taking part in compiling the database. Therefore,
individual associative preference may differ from that of
the SFFA database. In addition, external effects like the
environment, the last happenings before the experiment,
etc. are ignored by the database. Also, native and non-native
English speakers may have different associative preferences,
depending on their respective countries of origin.
In this work, we use the SFFA database as a basis
for a guided association task. In this task, the subjects
(either human or simulated models) navigate the network of
words obtained from the SFFA database by connecting words
in a free association task. By comparing the statistical
properties of word repetitions, as obtained from the
association chains created by human subjects, with those
of the models for semantic memory retrieval, we expect to
deepen our understanding of which properties may be important
for modelling semantic spreading~\cite{collins1975spreading}
on associative nets. Our work may be embedded in the context
of related studies employing the SFFA database, for which
the Google page rank has been computed and compared to the
experimental results of a lexical association task
\cite{griffiths2007google}. It is also possible to
simulate stochastic cognitive navigation on the SFFA
database in order to study possible mechanisms for
information retrieval \cite{borge2010categorizing}.
Our work on the exploration of free association networks can
also be considered in the general context of semantic language
networks \cite{sole2010language}, with the structure and
the dynamics of the respective network properties being
studied intensively \cite{borge2010semantic}. From the
perspective of neurobiology an interesting question regards
the relation to possible underlying neural network
correlate for the association network studied here and
its relation to functional brain networks in
general \cite{bressler2010large}. We also remark in this
context that the association network used for our study
corresponds to that of adults, with the development
of the human semantic network during childhood being
an interesting but separate topic \cite{beckage2011small}.
\section{Methods}
We set up an online experiment for a guided association task
\footnote{\url{http://itp.uni-frankfurt.de/\~mehran}}, attracting
at total of 450 voluntary participants, mostly from the University
of Frankfurt/Germany, the United Kingdom and the United States.
The goal of the experiment was to study associative exploration on
the SFFA network.
For the online experiment a randomly selected word from the SFFA,
the cue, is presented to the subjects on the screen, along with
a list of a varying number of related words. The words in the
list are all linked to the cue with a strength higher than
5\% in the SFFA. The subjects are instructed to select the word
from the list that seems to them most related to the cue. Then the
selected word is taken as the next cue and presented to the subject
along with a new list of related words, extracted again from the
SFFA. The subject can select one word, as in the previous step.
The task repeats itself until the subject voluntarily decides to
quit.
The sequence of words chosen by the participants is called a
\emph{chain}, and the set of the 1688 chains collected constitutes
the data from which statistical properties of the free association
task were derived.
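For concreteness, the candidate lists shown to the subjects can be expressed as a one-line filter. The sketch below assumes a hypothetical dictionary encoding of the SFFA norms, mapping each word to a dictionary of associated words and their forward strengths:
\begin{verbatim}
def candidate_list(sffa, cue, threshold=0.05):
    """All words linked to the cue with an association strength
    above the 5% threshold used in the online experiment."""
    return [word for word, strength in sffa.get(cue, {}).items()
            if strength > threshold]
\end{verbatim}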
\subsection{Models}
In order to evaluate comparatively the data collected from the online
experiment we consider two models of memory retrieval. We use these
models to generate exploration chains in the SFFA network and
to compare the obtained simulated associative latching
with the actual data obtained from the online experiment.
\subsubsection{Mem model}
The \emph{Mem} model (for ``Memory'') consist in a random
exploration through the SFFA network. The exploration starts
at a random node (a word) from the network and moves to the
next one, which is selected randomly with a probability
proportional to the association strength to the present node,
as given by the SFFA database. The process is followed until
a node is reached for which no outgoing link is provided. In
addition to this simple exploration, there is a limitation for
repeated words. When the exploration would jump to a word which
was already visited during the exploration (the word is already in
the chain), it will be visited again only with a probability $c$.
The word will therefore be ignored (at this step) with a probability
$1-c$, which means that if such word is the only outgoing link of
the present node, the exploration will end with said probability $1-c$.
The parameter $c$ is chosen in this work to the value $c=0.08$, for
which it reproduces the experimental results for the distance between
repetitions as closely as possible.
Notice that this is a memoryless probabilistic model for the
exploration of the word association network, in contrast to the
\emph{ACT-R} model with memory presented in the next section.
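A minimal sketch of the \emph{Mem} exploration, reusing the hypothetical dictionary encoding of the SFFA introduced above; dropping a rejected repetition from the candidate list and redrawing is one possible reading of the rule stated in the text:
\begin{verbatim}
import random

def mem_chain(sffa, start, c=0.08, rng=random):
    """Generate one exploration chain of the Mem model: the next
    word is drawn with probability proportional to its association
    strength; a word already in the chain is accepted only with
    probability c.  The chain ends when no candidate is left."""
    chain = [start]
    while True:
        candidates = dict(sffa.get(chain[-1], {}))
        nxt = None
        while candidates:
            words = list(candidates)
            weights = [candidates[w] for w in words]
            w = rng.choices(words, weights=weights)[0]
            if w not in chain or rng.random() < c:
                nxt = w
                break
            del candidates[w]  # rejected repetition, redraw
        if nxt is None:
            return chain
        chain.append(nxt)
\end{verbatim}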
\subsubsection{ACT-R model}
Within the \emph{ACT-R} model (Adaptive Control of Thought-Rational)
one tries to model both the activity and the retrieval dynamics of
previously acquired memories~\cite{Anderson04anintegrated}.
In the \emph{ACT-R} model, a memory element $i$ has an activity
$A_i(t)$, which is calculated as the sum of the base-level
activity $B_i$ and an attentional weight $S_i$,
\begin{equation}
A_i(t)=B_i(t) + S_i(t) \; .
\label{eq:activity}
\end{equation}
The task attention term $S_i(t)$ is calculated as
\begin{equation}
S_i(t)=\sum_j \omega_j(t) W_{ji} \; ,
\end{equation}
where $\omega_j(t)$ is the attentional weight of the elements
that are part of the current task, and $W_{ji}$ are the
strengths of the connections between element $j$ and $i$.
For our purpose we have taken the $W_{ji}$ to be the
association strengths of the SFFA database.
In our work, we have chosen to set $\omega_{j}(t)=1$
if $j$ is the presently active memory (the node visited
at the previous moment), and $\omega_{j}(t)=0$ otherwise.
Thus, in our version of the model, a word has a
higher task attention $S_i(t)$ if it is strongly related to
the last observed word.
The base level activation $B_i(t)$ in Eq.~\ref{eq:activity}
of node $i$ is given by
\begin{equation}
B_i(t) = \log\left( \sum_{t_k<t} \left(\frac{1}{t-t_k}\right)^{d}\right) \; ,
\end{equation}
where $t_k$ is the time of the $k$th last recall of the
element $i$, and the exponent $d$ is a constant. Thus a
given word has a high base activity level if it has been
evoked many times lately.
Having defined the activity $A_i(t)$, the probability that an element $i$ is
remembered at time $t$, viz the retrieval probability, is given by
\begin{equation}
P_i(t) = \frac{1}{1+\exp\left[\frac{-(A_i(t)-\tau)}{s}\right]} \;
\end{equation}
where $\tau$ is the activity threshold and $s$ is a parameter
introduced to account for the effect of noise onto the
activation levels~\cite{Anderson04anintegrated}.
A word $i$ is recalled with probability $P_i(t)$ and the
averseness of a subject to repeat a word is given by $1-P_i(t)$.
Finally, the exploration of the network follows the same
procedure as in the \emph{Mem} model. Being at site $j$
of the SFFA network a word $m$ is selected with probability
$W_{mj}$ and accepted with probability $1-P_m(t)$. If this
word is accepted, then all $A_i(t)$, $B_i(t)$ and $S_i(t)$ are
updated. If not, the procedure repeats until one word is selected
out of the list of candidates linked to the current site $j$.
The chain is terminated if all candidate sites are rejected.
For our simulations of this model, we have taken
$d=0.5$, $s=0.4$, $\tau = 0.35\,s$, which is a fairly standard
set of values~\cite{Anderson04anintegrated}.
A different set of values might yield a better fit to
the experimental results. However, it is our intention to maintain a
range of values comparable with other studies in the literature.
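A minimal sketch of the activation and retrieval formulas above; the data structures (a dictionary of past recall times per word and a nested dictionary \texttt{W} of association strengths) are our choices:
\begin{verbatim}
import math

def base_activation(recall_times, t, d=0.5):
    """B_i(t) = log( sum over recalls t_k < t of (t - t_k)^(-d) )."""
    past = [tk for tk in recall_times if tk < t]
    if not past:
        return -math.inf   # never recalled: no base activation
    return math.log(sum((t - tk) ** (-d) for tk in past))

def retrieval_probability(i, last_word, recall_times, W, t,
                          d=0.5, s=0.4, tau=0.35 * 0.4):
    """P_i(t) = 1 / (1 + exp(-(A_i(t) - tau)/s)), with
    A_i = B_i + S_i and the default tau equal to 0.35 * s."""
    S_i = W[last_word][i]  # attention from the currently active word
    A_i = base_activation(recall_times.get(i, []), t, d) + S_i
    return 1.0 / (1.0 + math.exp(-(A_i - tau) / s))
\end{verbatim}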
\section{Results}
In Fig.~\ref{fig:chainlengthfreq}, the probability distribution
of chain lengths is shown in a normal-log representation, as well as
the corresponding complementary cumulative distribution function (CCDF) in the inset.
We observe an approximately exponential
decay in the frequency of chain lengths for the experimental
data as well as of both models. Also included in
Fig.~\ref{fig:chainlengthfreq} are exponential fits, given by
respective solid lines, evaluated using a maximum likelihood
estimation (MLE)~\cite{newman-powerlawdistro,S}, evaluated with
the corresponding code from the \emph{GNU R} software package~\cite{R-book}.
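As an independent cross-check of the quoted exponents, the MLE for a continuous exponential decay has a simple closed form; the sketch below treats the discrete chain lengths as continuous, which is our simplification:
\begin{verbatim}
import numpy as np

def exponential_mle(lengths, l_min=1):
    """MLE of lambda for p(l) ~ exp(lambda * l) on l >= l_min
    (lambda < 0 for a decaying distribution)."""
    lengths = np.asarray(lengths, dtype=float)
    lengths = lengths[lengths >= l_min]
    return -1.0 / (lengths.mean() - l_min)
\end{verbatim}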
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{combi_chainlength-comparison}
\caption{Probability to observe word association
chains of length $l$; the vertical axis is in log scale while the horizontal axis is linear. The data is obtained from the 1688 chains of the experimental data (red) and from $10^6$ chains generated by the
\emph{Mem} model (blue) as well as by the \emph{ACT-R} model (yellow).
The solid lines are respective exponential
fits (see text for exponents). The inset shows the complementary cumulative distribution function of the same data, using the same representation. }
\label{fig:chainlengthfreq}
\end{figure}
The experimental data can be fitted well with a single exponential
having an exponent $\lambda=-0.068(1)$. Beyond chain lengths of $\sim 50$,
the number of data points is too low for reliable data analysis.
The \emph{Mem} model allows for larger chain lengths, having
an exponent $\lambda=-0.03593(3)$.
For chain lengths smaller than $\sim 20$ elements,
the \emph{ACT-R} model follows closely the behavior
of the experimental data, with an exponent $\lambda=-0.0400(1)$.
There is a kink at chain lengths $\sim 20$, with larger
chain lengths becoming progressively more unlikely for the
\emph{ACT-R} model. This decay for larger chain lengths can
be fitted well by an exponential with an exponent $\lambda=-0.335(2)$.
The theoretical models' data has been obtained, for both the \emph{Mem}
and for the \emph{ACT-R} model, using $10^6$ chains generated from
random starting points on the SFFA network. It is hence interesting,
that the \emph{ACT-R} data show a substantial amount of scattering
for small chain lengths.
The experimental data is scarce and noisy for chain lengths $\sim50$
and longer, as only very few subjects enjoyed engaging in the task
for that long. One may hence disregard, for further data analysis, all
long chains. This would, however, involve setting a somewhat arbitrary
cutoff. We have tested this procedure and found that the properties
of the experimental data remain essentially unaffected when keeping
or removing long chains. We therefore opted, for simplicity, to present
the results corresponding to the whole sample, including long
chains.
In Fig.~\ref{fig:repperchain} we present the probability $p$ that
a word is repeated one or more times, averaged over all chain
lengths. Only the data involving five or fewer repetitions is significant
for the results of the online experiment. The subjects would prefer to
stop a chain altogether and try a new cue, rather than go on once a
large number of repetitions has occurred.
In this respect, we found that 19\% of all chains in our experimental results
end in a cycle. We observe that the behavior of the chain-length distribution
remains unperturbed if these chains are not included.
The experimental results could, as a matter of principle, be
approximated by a power law, but the small number of data points
does not allow for any definite judgement. This behavior seems
to be shared with the \emph{ACT-R} model for the initial repetitions.
However, when the complete trend for larger number of repetitions is
analyzed, a seemingly concave curve in the log-log plot can be
discerned both for the \emph{Mem} and for the \emph{ACT-R} model. This
behavior cannot be cross-checked with the experimental data, due
to the lack of data for larger numbers of word repetitions. We also
tried to fit the data for the \emph{Mem} model both with a Gaussian and
with a simple exponential decay, but neither approximation is convincing.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{all_probRepeatPerChain}
\caption{The probability of observing $r$ word repetitions, averaged
over all chains lengths. The data is obtained from the 1688 chains of
the online experiment (red) and from the $10^6$ chains generated by
the \emph{ACT-R} model (yellow) as well as by the \emph{Mem} model (blue).}
\label{fig:repperchain}
\end{figure}
In Fig.~\ref{fig:distrepdistro} the distribution of distances
between consecutive repetitions of the same word is presented.
All three datasets presented, for the two models and for
the experimental data, agree quite well up to repetition
distances of $\approx 10$. However, for larger repetition distances,
marked discrepancies are observed for both models, which
exhibit concave behaviors. The experimental data can, suggestively,
be approximated by a power-law with an exponent $\gamma=-1.9(1)$.
For the distribution of distances between repetitions,
the \emph{Mem} model reproduces the experimental results
somewhat better. This is not a coincidence, as the free
parameter of the \emph{Mem} model, the repetition probability
$c=0.08$, has been selected to reproduce the experimental
results for this property as closely as possible.
Although the decay of both models seems to fit the experimental
data relatively well for small distances, they do
not follow a similar law over the complete range. Due to
the lack of enough data in the tail of the experimental
distribution, we do not consider this as strong evidence
to disregard either of the models.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{combi_probDistanceBetweenReps}
\caption{Log-log plot of the distribution of distances $d_r$ between repetitions
of a given word. The data is obtained from the $10^6$ chains generated by the
\emph{ACT-R} model (yellow) and by the \emph{Mem} model (blue), as well
as from the 1688 chains of the online experiment (red).
The solid line represents, for comparison, a power law decay with
exponent $-1.9$. The inset shows the complementary cumulative distribution
functions of the same data, the solid line has in this case an exponent of $-0.9$. }
\label{fig:distrepdistro}
\end{figure}
An interesting result can be observed in Fig.~\ref{fig:qrepchain}, where
we present the probability density $\rho$ to find a given ratio $r/l$ of
word repetitions ($r$) per chain length ($l$). A word that occurs three
times in a chain of length ten, to give an example, would
contribute to the frequency $\rho(r/l)$ of chains having a
ratio of $r/l=3/10=0.3$. One observes a highly non-monotonic
distribution of ratios $r/l$. Experimentally,
the maximal density occurs at $r/l=0.5$, which corresponds to a binary
loop like \emph{warm-cold-warm-cold-\dots}. There are additional
peaks at $r/l=1/3$ and $r/l=1/4$, corresponding to word repetition
loops of length three and four respectively.
It is evident from Fig.~\ref{fig:qrepchain} that the \emph{ACT-R} model
exhibits the same peaks as found in the online experiment with human subjects,
with approximately similar amplitudes for the respective word repetition
frequencies. This seems to be an indication that the \emph{ACT-R} model
is suited for predicting human behavior in this guided association task.
It may also be a hint that this distribution is strongly influenced by the
inclusion of a memory, which the \emph{Mem} model lacks.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{all_densRepeatPerChainlength}
\caption{The probability density $\rho$ to find a given ratio $r/l$ of
$r$ repetitions of a word per chain length $l$, obtained from the 1688
chains of the experimental data (red) and from $10^6$ chains generated
by the \emph{Mem} model (blue) as well as by the \emph{ACT-R} model (yellow).
The peaks at $l/r=2,3,4,\dots$ correspond to associative loops of length
$2,3,4,\dots$.}
\label{fig:qrepchain}
\end{figure}
Finally, we present in Fig.~\ref{fig:probreplength} the distribution
(as a histogram) of chain lengths, just as in
Fig.~\ref{fig:chainlengthfreq}, but retaining only word association
chains with at least one repetition, which are mostly long chains.
The human subjects tend to repeat words, on average, substantially
earlier than both the \emph{Mem} and the \emph{ACT-R} model, which have
their distribution maxima at larger chain lengths. This result can
be regarded as robust, despite the observation that the results
from the online experiment are quite noisy. Note, however, the
substantial scattering of the \emph{ACT-R} data, which had been generated
using $10^6$ chain realizations, as for the other results.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{binned_histoChainlengthWRep}
\caption{The same data as in Fig.~\ref{fig:chainlengthfreq}, but
only for word association chains with at least one word occurring twice,
with histogram bin size 10,
and plotted in a normal-log representation.}
\label{fig:probreplength}
\end{figure}
\section{Conclusion}
Here we suggest that online experiments for guided and related
associative tasks may provide interesting databases for human
association dynamics. The drawback of online experiments is,
to date, that there is no real control of how seriously the individual
subjects take the task; some participants may just play around
randomly. There may hence be a certain fraction of non-characteristic
subjects which may, as a matter of principle, be taken into account
by considering models with two populations of participants.
Our experimental database is however not large enough for this type
of analysis, for which a substantially larger number of participants
would be necessary. We however believe that this first online
experiment indicates that interesting data can be acquired. In particular,
we analyzed the distribution of the lengths of guided associative word
chains and various features of word repetitions. We attempted to
model the experimental results with cognitive models for human
memory retrieval dynamics, finding, in general, good qualitative
agreement.
\bibliographystyle{spmpsci}
\section{Introduction}
Recently, Picard-Vessiot theory for differential equations in
characteristic zero and for iterative differential equations in
positive characteristic has been extended to the case of non
algebraically closed fields of constants (cf. \cite{dyckerhoff}
resp. \cite{maurischat}). In the classical setting the Galois group of
a PV-extension is given by the points of a linear algebraic group
over the constants. In characteristic zero, one then has a Galois
correspondence between all intermediate differential fields and the
Zariski closed subgroups of the Galois group. In positive
characteristic this correspondence was restricted to intermediate
iterative differential fields over which the PV-field is
separable. This restriction in positive characteristic and similar
problems in the case of a non algebraically closed field of constants
have been removed in \cite{dyckerhoff} resp. \cite{maurischat} by
regarding the Galois group as a group scheme and not as the group of
rational points. Every intermediate (iterative) differential field is
then obtained as the field of invariants of some closed subgroup
scheme. For example an intermediate ID-field over which the PV-field
is inseparable is the field of invariants of a nonreduced subgroup
scheme. In general, a PV-extension $E/F$ can be inseparable itself and
in this case the fixed field of $E$ under
the full group of iterative differential automorphisms of $E$ over $F$
is strictly bigger than $F$. Since classically one assumes equality,
the more general extensions are called pseudo Picard-Vessiot
extensions (PPV-extensions) here.
In this article, we treat questions concerning purely inseparable
PPV-exten\-sions. This is done in the setting of
fields with a {\em multivariate} iterative derivation and a
perfect field of constants. (Although some of the minor results hold without
the assumption of perfectness.)
We first show
that a PPV-extension is purely
inseparable if and only if its Galois group scheme is an infinitesimal
group scheme and that the exponent of the extension and the height of
the group scheme are equal (cf. Cor. \ref{infinitesimal_group}). The
main result is a necessary and sufficient condition to decide whether
an infinitesimal group scheme occurs as Galois group of a
PPV-extension over a given ID-field or not
(cf. Thm. \ref{general-realisation} and
Cor. \ref{special-realisation}).
In Section \ref{basics}, we introduce the reader to the basic notation of
multivariate iterative differential rings and PPV-extensions. Some
properties, general results on PPV-extensions and the Galois
correspondence are given in Section \ref{galois-theory} and can also
be found in \cite{maurischat}
(see also \cite{heiderich}). Section \ref{purely-insep} is
dedicated to purely inseparable PPV-extensions and the corresponding
infinitesimal group schemes. In the last section, we give some
examples to illustrate the previous results.
\medskip
{\bf Acknowledgements:} I would like to thank J.~Hartmann,
B.~H.~Matzat and M.~Wibmer for helpful comments and suggestions on the
paper.
\section{Basic notation}\label{basics}
All rings are assumed to be commutative with unit.
We use the usual notation for
multiindices, namely
$\binom{\vect{i}+\vect{j}}{\vect{i}}=\prod_{\mu=1}^m
\binom{i_\mu+j_\mu}{i_\mu}$ and $\vect{T}^{\vect{i}}=T_1^{i_1}
T_2^{i_2}\cdots T_m^{i_m}$ for $\vect{i}=(i_1,\dots,
i_m),\vect{j}=(j_1,\dots, j_m)\in{\mathbb N}^m$ and $\vect{T}=(T_1,\dots, T_m)$.
An {\bf $m$-variate iterative
derivation} on a ring $R$ is a homomorphism of rings $\theta:R\to
R[[T_1,\dots, T_m]]$, such that $\theta^{(\vect{0})}={\rm id}_R$ and for
all $\vect{i},\vect{j}\in {\mathbb N}^m$, $\theta^{(\vect{i})}\circ \theta^{(\vect{j})}=\binom{\vect{i}+\vect{j}}{\vect{i}}\theta^{(\vect{i}+\vect{j})}$,
where the maps $\theta^{(\vect{i})}:R\to R$ are defined by
$\theta(r)=:\sum_{\vect{i}\in{\mathbb N}^m}
\theta^{(\vect{i})}(r)\vect{T}^{\vect{i}}$
(cf. \cite{heiderich}, Ch. 4).
In the case $m=1$ this is equivalent to
the usual definition of an iterative derivation given for example in
\cite{mat_hart}.
The pair $(R,\theta)$ is then called an ID-ring and
$C_R:=\{ r \in R\mid \theta(r)=r\}$ is called the {\bf ring of
constants} of $(R,\theta)$.\footnote{The name {\em constants} is due to the
fact that all $\theta^{(\vect{i})}$ ($\vect{i}\ne \vect{0}$) vanish at
these elements analogous to the vanishing of derivations in
characteristic zero.}
An ideal $I\unlhd R$ is called an
{\bf ID-ideal} if $\theta(I)\subseteq I[[\vect{T}]]$ and $R$ is
{\bf ID-simple} if $R$ has no nontrivial
ID-ideals. Iterative derivations are extended to localisations by
$\theta(\frac{r}{s}):=\theta(r)\theta(s)^{-1}$ and to tensor products
by
$$\theta^{(\vect{k})}(r\otimes s)=\sum_{\vect{i}+\vect{j}=\vect{k}}
\theta^{(\vect{i})}(r)\otimes \theta^{(\vect{j})}(s)$$
for all $\vect{k}\in {\mathbb N}^m$.
The $m$-variate iterative derivation $\theta$ is called
{\bf non-dege\-ne\-ra\-te} if the $m$ additive maps
$\theta^{(1,0,\dots,0)}, \theta^{(0,1,0,\dots,0)}, \dots ,
\theta^{(0,\dots,0,1)}$ (which actually are derivations on $R$) are
$R$-linearly independent.
Given an ID-ring $(R,\theta_R)$ over an ID-field $(F,\theta)$, we call
an element
$x\in R$ {\bf differentially finite over $F$} if the $F$-vector
space spanned by all $\theta^{(\vect{k})}(x)$ ($\vect{k}\in{\mathbb N}^m$)
is finite dimensional. It is easy to see that the set of elements
which are differentially finite over $F$ form an ID-subring of $R$
that contains~$F$.
\begin{rem}\label{rem-on-IDs} (see also \cite{heiderich}, Ch. 4)
Given an $m$-variate iterative derivation $\theta$ on a ring $R$, one
obtains a set of $m$ ($1$-variate) iterative derivations
$\theta_1,\dots, \theta_m$ by defining
$$\theta_1^{(k)}:=\theta^{(k,0,\dots,0)},\quad
\theta_2^{(k)}:=\theta^{(0,k,0,\dots,0)}, \quad \dots ,\quad \theta_m^{(k)}:=
\theta^{(0,\dots,0,k)}$$
for all $k\in{\mathbb N}$. By the iteration rule
for $\theta$ these iterative derivations commute, i.\,e. satisfy the
condition $\theta_i^{(k)}\circ
\theta_j^{(l)}=\theta_j^{(l)}\circ\theta_i^{(k)}$ for all $i,j\in
\{1,\dots, m\}, k,l\in{\mathbb N}$. On the other hand, given $m$
commuting $1$-variate iterative derivations $\theta_1,\dots, \theta_m$ one obtains
an $m$-variate iterative derivation $\theta$ by defining
$$\theta^{(\vect{k})}:=\theta_1^{(k_1)}\circ \dots \circ
\theta_m^{(k_m)}$$ for all $\vect{k}=(k_1,\dots, k_m)\in{\mathbb N}^m$.
Using the iteration rule one sees that the $m$-variate iterative
derivation $\theta$ is determined by the derivations $\theta_1^{(1)},\dots,
\theta_m^{(1)}$ if the characteristic of $R$ is zero, and by the set
of maps $\{\theta_1^{(p^\ell)},\dots, \theta_m^{(p^\ell)}\mid
\ell\in{\mathbb N}\}$ if the characteristic of $R$ is $p>0$. Furthermore,
$\theta$ is non-degenerate if and only if for all $j=1,\dots,
m$ the derivation $\theta_j^{(1)}$ is nontrivial on $\bigcap_{i=1}^{j-1}
\Ker(\theta_i^{(1)})$.
\smallskip
Next we consider the case that $R=:F$ is a field of positive characteristic
$p$ and that $\theta$ is non-degenerate. Then the derivations
$\theta_1^{(1)},\dots, \theta_m^{(1)}$ are nilpotent $C_F$-endomorphisms of
$F$. Since they commute and $\theta$ is non-degenerate, there exist
$x_1,\dots, x_m\in F$ such that $\theta_i^{(1)}(x_j)=\delta_{ij}$ for
all $i,j$, where $\delta_{ij}$ denotes the Kronecker delta.
Therefore $\{ x_1^{e_1}\cdots x_m^{e_m} \mid 0\leq e_j\leq p-1 \}$
is a basis of $F$ as a vector space over $F_1:= \bigcap_{i=1}^{m}
\Ker(\theta_i^{(1)})$. Hence $F/F_1$ is a field extension of degree $p^m$.
Furthermore, the maps $\theta_1^{(p)},\dots, \theta_m^{(p)}$ are
derivations on $F_1$, they also are nilpotent and commute, and
$$\theta_i^{(p)}(x_j^p)=\left(\theta_i^{(1)}(x_j)\right)^p=\delta_{ij}.$$
So by the same argument, $F_1$ is a vector space over
$F_2:=F_1\cap \bigcap_{i=1}^{m} \Ker(\theta_i^{(p)})$ and
$[F_1:F_2]=p^m$. Repeating this, one obtains a descending sequence of
subfields $F_\ell:= F_{\ell-1}\cap \bigcap_{i=1}^{m}
\Ker(\theta_i^{(p^{\ell-1})})$ satisfying $[F_{\ell-1}:F_\ell]=p^m$.
This sequence will be useful in Section \ref{purely-insep}.
\end{rem}
\begin{defn}
Let $(F,\theta)$ be an ID-field, and
let $A=\sum_{\vect{k}\in {\mathbb N}^m} A_{\vect{k}} \vect{T}^{\vect{k}}\in
\GL_n(F[[\vect{T}]])$ be a matrix satisfying the properties
$A_{\vect{0}}=\mathds{1}_n$ and
$\binom{\vect{k}+\vect{l}}{\vect{l}}A_{\vect{k}+\vect{l}}=\sum_{\vect{i}+\vect{j}=\vect{l}}
\theta^{(\vect{i})}(A_{\vect{k}}) A_{\vect{j}}$ for all $\vect{k},\vect{l}\in{\mathbb N}^m$.
Then an equation
$$\theta(\vect{y})=A\vect{y},$$
where $\vect{y}$ is a vector of indeterminants, is called an {\bf
iterative differential equation} (IDE) over $F$.\footnote{Throughout
this article, iterative derivations are applied componentwise to
vectors and matrices.}
\end{defn}
\begin{defn}
An ID-ring $(R,\theta_R)\geq (F,\theta)$ is called a {\bf pseudo
Picard-Vessiot ring} (PPV-ring) for $\theta(\vect{y})=A\vect{y}$ if the
following holds:
\begin{enumerate}
\item $R$ is an ID-simple ring.
\item There is a fundamental solution matrix $Y\in\GL_n(R)$, i.\,e.{} an invertible
matrix satisfying $\theta(Y)=AY$.
\item As an $F$-algebra, $R$ is generated by the coefficients of $Y$
and $\det(Y)^{-1}$.
\item $C_R=C_F$.
\end{enumerate}
The quotient field $E=\Quot(R)$ (which exists, since such a PPV-ring
is always an integral domain) is called a {\bf pseudo
Picard-Vessiot field} (PPV-field)
for the IDE $\theta(\vect{y})=A\vect{y}$.
\end{defn}
\begin{rem}
The condition on the $A_{\vect{k}}$ given in the definition of the IDE
is equivalent to the condition that
$\theta_R^{(\vect{k})}(\theta_R^{(\vect{l})}(Y_{ij}))=
\binom{\vect{k}+\vect{l}}{\vect{k}}\theta_R^{(\vect{k}+\vect{l})}(Y_{ij})$ holds for a fundamental
solution matrix $Y=(Y_{ij})_{1\leq i,j\leq n}\in\GL_n(R)$.
Furthermore, the condition $A_{\vect{0}}=\mathds{1}_n$ already implies
that the matrix $A$ is invertible.
\end{rem}
\begin{notation}
From now on, $(F,\theta)$ denotes an ID-field of positive
characteristic $p$, and $K=C_F$ its field of
constants. We assume that $K$ is perfect, and that the $m$-variate
iterative derivation $\theta$ is non-degenerate.
\end{notation}
\section{Galois theory}\label{galois-theory}
In this section, we deal with the Galois group scheme corresponding to a
PPV-extension. We will see various facettes of the group structure and
group action, and provide the Galois correspondence for
PPV-extensions.
We begin with a characterisation of the PPV-ring in a PPV-field.
\begin{prop}\label{diff-finite}
Let $(R,\theta_R)$ be a PPV-ring over $F$ for an IDE
$\theta(\vect{y})=A\vect{y}$ and $E=\Quot(R)$. Then $R$ is equal to
the set of elements in $E$ which are differentially finite over $F$.
\end{prop}
\begin{proof} (Compare \cite{mat_hart}, Thm. 4.9, for the case when $K$ is algebraically closed and $\theta$ is univariate.)\\
Let $Y\in\GL_n(R)$ be a fundamental solution matrix for the IDE. Then
by definition $\theta^{(\vect{k})}(Y)=A_{\vect{k}}Y$ and hence for all
$i,j$ and all $\vect{k}\in{\mathbb N}^m$ the derivatives
$\theta^{(\vect{k})}(Y_{ij})$ are in the
$F$-vector space spanned by all $Y_{ij}$, i.\,e. all $Y_{ij}$ are
differentially finite. Furthermore, one has
$\theta(\det(Y)^{-1})=\det(\theta(Y))^{-1}=\det(AY)^{-1}=\det(A)^{-1}\det(Y)^{-1}$,
i.\,e. $\det(Y)^{-1}$ is differentially finite. Therefore, $R$ is
generated by differentially finite elements, and since the
differentially finite elements form a ring, all elements of $R$ are
differentially finite.
On the other hand, let $x\in E$ be differentially finite over
$F$ and let $W_F(x)$ be the $F$-vector
space spanned by all $\theta^{(\vect{k})}(x)$ ($\vect{k}\in{\mathbb N}^m$).
Then the set $I_x:=\{ r\in R\mid r\cdot W_F(x)\subseteq R\}$ is
an ID-ideal of $R$. Since $W_F(x)$ is finite dimensional and $E$ is
the quotient field of $R$, one has $I_x\ne 0$. Since $R$ is ID-simple,
this implies $I_x=R$. Hence $1\cdot W_F(x)\subseteq R$, and in
particular $1\cdot x=x\in R$.
\end{proof}
From this characterisation of the PPV-ring as the ring of differentially finite
elements, we immediately get the following.
\begin{cor}\label{unique-PPV-ring}
Let $E$ be a PPV-field over $F$ for several IDEs. Then the PPV-ring
inside $E$ is unique and independent of the particular IDE.
\end{cor}
\subsection{The Galois group scheme}
For a PPV-ring $R/F$ we define the functor
$$\underline{\Aut}^{{\rm ID}}(R/F): (\cat{Algebras} / K) \to (\cat{Groups}),
L\mapsto \Aut^{{\rm ID}}(R\otimes_K L/F\otimes_K L)$$
where $L$ is provided with the trivial iterative derivation.
In \cite{maurischat}, Sect. 10, it is shown that the functor
${\mathcal G}:=\underline{\Aut}^{{\rm ID}}(R/F)$ is represent\-able by a $K$-algebra of
finite type and hence is an affine group scheme of finite type over
$K$, which is called the (iterative differential) {\bf Galois
group scheme} of the extension $R$ over $F$ -- denoted by
$\underline{\Gal}(R/F)$ --, or also the
Galois group scheme of the extension $E$ over $F$, $\underline{\Gal}(E/F)$, where
$E=\Quot(R)$ is the corresponding PPV-field.\footnote{This is justified by the
fact given in Corollary \ref{unique-PPV-ring} that the PPV-ring can be
recovered from the PPV-field without reference to a particular IDE.
Note also that the functor $\underline{\Aut}^{{\rm ID}}(E/F)$ is not
isomorphic to $\underline{\Aut}^{{\rm ID}}(R/F)$. Hence the Galois group
scheme of $E/F$ has to be defined using the PPV-ring.}
Furthermore $\Spec(R)$ is a $({\mathcal G} \times_{K}F)$-torsor and the
corresponding isomorphism of rings
$$\gamma:R\otimes_F R\to R\otimes_K K[{\mathcal G}]$$
is an $R$-linear ID-isomorphism.
By restricting $\gamma$ to the constants, one obtains that
$K[{\mathcal G}]$ is isomorphic
to $C_{R\otimes_F R}$. One checks by calculation (see also \cite{takeuchi})
that the comultiplication on $K[{\mathcal G}]$ is induced via
this isomorphism by the map
$$R\otimes_F R\longrightarrow (R\otimes_F R)\otimes_R (R\otimes_F R),
a\otimes b\mapsto (a\otimes 1)\otimes (1\otimes b),$$
and the counit map $ev:K[{\mathcal G}]\to K$ is induced by the
multiplication
$$R\otimes_F R\longrightarrow R, a\otimes b\mapsto ab.$$
\comment{
More generally, we have
\begin{prop}\label{central_iso}
Let $R$ be a PPV-ring for an IDE $\theta(\vect{y})=A\vect{y}$ with
fundamental solution matrix $\bar{X}\in\GL_n(R)$. Let
$T\geq F$ be an ID-simple
ID-ring with $C_T=K$ such that there exists a fundamental solution
matrix $Y\in \GL_n(T)$. Furthermore, let $U:=C_{T\otimes_F R}$ be the
$K$-algebra of constants of $T\otimes_F R$.
Then there exists a $T$-linear ID-isomorphism
$$\gamma_T: T\otimes_F R \to T\otimes_K U,$$
which is given by $\bar{X}_{ij}\mapsto \sum_{k=1}^n Y_{ik}\otimes
\bar{Z}_{kj}$ for some elements $\bar{Z}_{kj}\in U$.
\end{prop}
\begin{thm}
Let $(T,\theta_T)$ be an ID-simple ID-ring over $F$ with $C_T=C_F$ and
having a fundamental solution matrix $Y\in\GL_n(T)$ for some IDE
$\theta(\vect{y})=A\vect{y}$. Then the subalgebra of $T$ generated by
the coefficients of $Y$ and by $\det(Y)^{-1}$ is the unique PPV-ring
inside $T$ for this IDE.
\end{thm}
\begin{proof}
Since for two fundamental solution matrices $Y$ and $\tilde{Y}$ in
$T$, the coefficients of $Y^{-1}\tilde{Y}$ are constants,
i.\,e. elements in $C_T=C_F$, the subalgebra of $T$ generated by the
coefficients and the inverse of the determinant is the same for every
fundamental solution matrix. This proves the uniqueness of a PPV-ring
inside $T$.
For showing that this subalgebra is a PPV-ring, we only have to show
that it is ID-simple.
\vdots
\end{proof}
}
Let ${\mathcal H} \leq {\mathcal G}$ be a subgroup functor, i.\,e.{} for every
$K$-algebra $L$, the set ${\mathcal H}(L)$ is a group acting on $R_L:=R\otimes_K
L$ and this action is functorial in $L$.
An element $r\in R$ is then called {\bf invariant} under ${\mathcal H}$ if
for all $L$, the element $r\otimes 1\in R_L$ is invariant under
${\mathcal H}(L)$. The ring of invariants is denoted by $R^{{\mathcal H}}$. (In
\cite{jantzen}, I.2.10 the invariant elements are called ``fixed
points''.)
Let $E=\Quot(R)$ be the quotient field and for all $L$ let
$\Quot(R\otimes_K L)$ be the localisation by all nonzero divisors.
Since every automorphism of $R\otimes_K L$ extends uniquely to an
automorphism of $\Quot(R\otimes_K L)$, the functor
$\underline{\Aut}(R/F)$ is a subgroup functor of the group functor
$$(\cat{Algebras} / K) \to (\cat{Groups}),
L\mapsto \Aut(\Quot(R\otimes_K L)/\Quot(F\otimes_K L)).$$
In this sense, we call an element $e=\frac{r}{s}\in E$ {\bf
invariant} under ${\mathcal H}$, if for all
$K$-algebras $L$ and all $h\in{\mathcal H}(L)$,
$$\frac{h.(r\otimes 1)}{h.(s\otimes 1)}=\frac{r\otimes 1}{s\otimes
1}=e\otimes 1.$$
The ring of invariants of $E$ is denoted by $E^{{\mathcal H}}$.
\begin{rem}
The action of ${\mathcal G}:=\underline{\Gal}(R/F)$ on $R$ is fully described by the
ID-homomorphism $\rho:=\gamma\restr{1\otimes R}:R\to R\otimes_K K[{\mathcal G}]$.
Namely, for a $K$-algebra $L$ and $g\in {\mathcal G}(L)\cong \Hom(K[{\mathcal G}],L)$, one
has $g.(r\otimes 1)=(1\otimes g)(\rho(r))\in R\otimes_K L$ for all
$r\in R$.
\end{rem}
\begin{prop}
Let $E/F$ be a PPV-extension with PPV-ring $R$ and Galois group scheme ${\mathcal G}$.
An ID-field $\tilde{F}$ with $F\leq \tilde{F}\leq E$ is a PPV-field over $F$ if
and only if it is stable under the action of ${\mathcal G}$, i.\,e. if $\rho(R\cap
\tilde{F})\subseteq (R\cap \tilde{F})\otimes K[{\mathcal G}]$.
\end{prop}
\begin{proof}
If $\tilde{F}$ is a PPV-field, its PPV-ring $\tilde{R}$ is the set of elements in
$\tilde{F}$ which are
differentially finite over $F$ (cf. Prop \ref{diff-finite}), in
particular we have $\tilde{R}=\tilde{F}\cap R$.
Hence we obtain a commutative diagram:
\begin{center}$\xymatrix{
\tilde{R}\otimes_F \tilde{R} \ar[r]^-{\cong} \ar[d] & \tilde{R}\otimes_K K[\underline{\Gal}(\tilde{R}/F)]=\tilde{R}\otimes_K
C_{\tilde{R}\otimes_F \tilde{R}} \ar[d] \\
R\otimes_F R \ar[r]^-{\cong}& R\otimes_K K[{\mathcal G}]= R\otimes_K C_{R\otimes_F R}
}$\end{center}
But this implies $\rho(\tilde{R})\subseteq \tilde{R}\otimes_K
C_{\tilde{R}\otimes_F \tilde{R}}\subseteq \tilde{R}\otimes_K K[{\mathcal G}]$, i.\,e. $\tilde{F}$ is
stable under the action of ${\mathcal G}$.
The converse is given in Theorem \ref{galois_correspondence},iii).
\end{proof}
\begin{thm}{\bf (Galois correspondence)}\label{galois_correspondence}
Let $E/F$ be a PPV-extension with PPV-ring $R$ and Galois group scheme
${\mathcal G}$.
\begin{enumerate}
\item There is an antiisomorphism of the lattices
$${\mathfrak H}:=\{ {\mathcal H} \mid {\mathcal H}\leq{\mathcal G} \text{ closed subgroup scheme of }{\mathcal G}
\}$$
and
$${\mathfrak M}:=\{ M \mid F\leq M\leq E \text{ intermediate ID-field} \}$$
given by
$\Psi:{\mathfrak H} \to {\mathfrak M},{\mathcal H}\mapsto E^{{\mathcal H}}$ and
$\Phi:{\mathfrak M} \to {\mathfrak H}, M\mapsto \underline{\Gal}(E/M)$.
\item\label{normal_subgroup} If ${\mathcal H}\leq {\mathcal G}$ is normal, then $E^{{\mathcal H}}=\Quot(R^{{\mathcal H}})$ and
$R^{{\mathcal H}}$ is a PPV-ring over $F$ with Galois group scheme
$\underline{\Gal}(R^{{\mathcal H}}/F)\cong {\mathcal G}/{\mathcal H}$.
\item If $M\in{\mathfrak M}$ is stable under the action of ${\mathcal G}$, then ${\mathcal H}:=\Phi(M)$
is a normal subgroup scheme of ${\mathcal G}$, $M$ is a PPV-extension of $F$ and
$\underline{\Gal}(M/F)\cong {\mathcal G}/{\mathcal H}$.
\item For ${\mathcal H}\in {\mathfrak H}$, the extension $E/E^{{\mathcal H}}$ is separable if and
only if ${\mathcal H}$ is reduced.
\end{enumerate}
\end{thm}
\begin{proof}
See \cite{maurischat}, Thm. 11.5.
\end{proof}
For a purely inseparable field extension $E/F$ one denotes by ${\rm
e}(E/F)$ the {\bf exponent} of the extension, i.\,e. the minimal number
$e\in{\mathbb N}$ such that $E^{p^e}\subseteq F$. For an infinitesimal group
scheme ${\mathcal G}$ over $K$, the {\bf height} of ${\mathcal G}$, denoted by ${\rm h}({\mathcal G})$, is the
minimal number $h\in {\mathbb N}$ such that $x^{p^h}=0$ for all $x\in
K[{\mathcal G}]^+$. (Here $K[{\mathcal G}]^+$ is the kernel of the counit map
$ev:K[{\mathcal G}]\to K$ and is a nilpotent ideal by the definition of an
infinitesimal group scheme.)
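For orientation, we recall a standard example (it is not taken from the extension under consideration): the kernel of the $p^h$-power Frobenius on the additive group,
$$\alpha_{p^h}=\Spec\left( K[x]/(x^{p^h})\right),$$
is an infinitesimal group scheme of height exactly $h$, since its augmentation ideal $K[\alpha_{p^h}]^+$ is generated by the residue class of $x$, which satisfies $x^{p^h}=0$, but $x^{p^{h-1}}\ne 0$.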
\begin{cor}\label{infinitesimal_group}
Let $E/F$ be a PPV-extension with Galois group scheme ${\mathcal G}$.
Then $E/F$ is a purely inseparable extension if and only if
${\mathcal G}$ is an infinitesimal group scheme. In this case, the exponent ${\rm
e}(E/F)$ and the height ${\rm h}({\mathcal G})$ are equal.
\end{cor}
\begin{proof}
Let ${\mathcal G}$ be infinitesimal of height $h$ and let $ev:K[{\mathcal G}]\to K$ denote the
evaluation map corresponding to the neutral element of the group. Then
for any $\frac{r}{s}\in E$, we have
$({\rm id}\otimes ev)(\gamma(r\otimes s-s\otimes r))=0$,
i.\,e. $\gamma(r\otimes s-s\otimes r)\in R\otimes_K K[{\mathcal G}]^+$.
Since ${\mathcal G}$ is of height $h$, we obtain
$(r\otimes s-s\otimes r)^{p^h}=0$. Therefore
$r^{p^h}\otimes s^{p^h}=s^{p^h}\otimes r^{p^h} \in R\otimes_F R$
which means that $\frac{r^{p^h}}{s^{p^h}}\in F$. So $E/F$ is purely
inseparable of exponent $\leq h$.
On the other hand, let $E/F$ be purely inseparable of exponent $e$.
For arbitrary $x\in K[{\mathcal G}]^+$, let $\gamma^{-1}(1\otimes x)=:\sum_j
r_j\otimes s_j$. Then
$$1\otimes x^{p^e}
= \gamma\left(\sum_j r_j^{p^e} \otimes s_j^{p^e}\right)
= \gamma\left(\sum_j r_j^{p^e}s_j^{p^e}\otimes 1\right)
= \sum_j r_j^{p^e}s_j^{p^e}\otimes 1.$$
Hence (e.g. by applying ${\rm id} \otimes ev$), one obtains $\sum_j
r_j^{p^e}s_j^{p^e}=0$ and $x^{p^e}=0$. Therefore ${\mathcal G}$ is infinitesimal
of height $\leq e$.
\end{proof}
\section{Purely inseparable extensions}\label{purely-insep}
As in the previous section, $F$ denotes a field of positive
characteristic $p$ with a non-degenerate $m$-variate iterative
derivation $\theta$ and a perfect field of constants $K=C_F$.
\begin{notation}
For all $\ell\in{\mathbb N}$,
let $J_\ell:=\left\{ (j_1,\dots,j_m)\in{\mathbb N}^m\setminus
\{\vect{0}\}\mid \forall\, i: j_i<p^\ell \right\}$ and let
$$F_\ell:=\bigcap_{\vect{j}\in J_\ell} \Ker(\theta_F^{(\vect{j})}).$$
Actually, the subfields $F_\ell$ are the same as the ones defined in
Remark \ref{rem-on-IDs}.
Since $\theta_F(F_\ell)\subseteq F_\ell[[T_1^{p^\ell},\dots, T_m^{p^\ell}]]$, one obtains an
iterative derivation on $F_{[\ell]}:=(F_\ell)^{p^{-\ell}}$ by
$\theta_{F_{[\ell]}}(x):=\left(\theta_F(x^{p^\ell})\right)^{p^{-\ell}}$.
Obviously, it is the unique iterative derivation which turns
$F_{[\ell]}$ into an ID-extension of $F$.
\end{notation}
\begin{prop}\label{max-id-extension}
\begin{enumerate}
\item\label{leq-ell} For all $\ell\in{\mathbb N}$, $F_{[\ell]}$ is the unique maximal purely
inseparable ID-extension of $F$ of exponent $\leq \ell$.
\item\label{formula} For all $\ell_1,\ell_2\in{\mathbb N}$,
$(F_{[\ell_1]})_{[\ell_2]}=F_{[\ell_1+\ell_2]}$.
\item\label{trivial} If $F_{[1]}=F$ then $F_{[\ell]}=F$ for all $\ell\in{\mathbb N}$.
\item\label{eq-ell} If $F_{[1]}\ne F$ and $\theta$ is non-degenerate, then for all
$\ell\in{\mathbb N}$, the exponent of $F_{[\ell]}/F$ is exactly $\ell$.
\end{enumerate}
\end{prop}
\begin{proof}
For the proof of part \ref{leq-ell}, we have already seen that
$F_{[\ell]}/F$ is an ID-extension, and by definition it is purely
inseparable of exponent $\leq \ell$. If $E$ is
a purely inseparable ID-extension of $F$ of exponent $\leq \ell$, then
$E^{p^\ell}\subseteq F \cap E_{\ell}\subseteq F_{\ell}$ and therefore
$E\subseteq F_{[\ell]}$. Hence $F_{[\ell]}$ is the unique maximal
ID-extension of this kind.
By definition $(F_{[\ell_1]})_{[\ell_2]}$ is an ID-extension of $F$ of
exponent $\leq \ell_1+\ell_2$. Hence by part \ref{leq-ell}, we have
$(F_{[\ell_1]})_{[\ell_2]}\subseteq F_{[\ell_1+\ell_2]}$. On the other
hand $\left(F_{[\ell_1+\ell_2]}\right)^{p^{\ell_1+\ell_2}}\subseteq F$
and so $\left(F_{[\ell_1+\ell_2]}\right)^{p^{\ell_2}}\subseteq
F_{[\ell_1]}$. Hence $F_{[\ell_1+\ell_2]}$ is an ID-extension of
$F_{[\ell_1]}$ of exponent $\leq \ell_2$ and therefore contained in
$(F_{[\ell_1]})_{[\ell_2]}$. This proves part \ref{formula}.
Part \ref{trivial} is a direct consequence of part \ref{formula}. So
it remains to prove \ref{eq-ell}. For this it suffices to show that
$F_{[\ell+1]}\ne F_{[\ell]}$ for all $\ell$, because this implies that
${\rm e}(F_{[\ell]}/F)\geq {\rm e}(F_{[\ell-1]}/F)+1\geq \dots \geq
{\rm e}(F_{[1]}/F)+\ell-1=\ell$.
By Remark \ref{rem-on-IDs}, one has $\dim_{F_{\ell+1}}(F_\ell)=p^m$,
since $\theta$ is non-degenerate. Assume that
$F_{[\ell+1]}=F_{[\ell]}$. Then
$F_{\ell+1}=\left(F_{[\ell+1]}\right)^{p^{\ell+1}}
=\left(F_{[\ell]}\right)^{p^{\ell+1}}=(F_\ell)^p$
and therefore $F$ is a finite extension of $(F_\ell)^p$ of degree
$[F:(F_\ell)^p]=[F:F_{\ell+1}]=p^{(\ell+1) m}$. On the other hand,
$$[F:(F_\ell)^p]=[F:F^p]\cdot [F^p:(F_\ell)^p]=[F:F^p]\cdot
[F:F_\ell]=p^{\ell m}[F:F^p].$$
So $[F:F^p]=p^m=[F:F_1]$, and hence $F_1=F^p$, in contradiction to
$F_{[1]}\ne F$.
\end{proof}
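To illustrate the proposition in the simplest univariate case ($m=1$), consider the standard example $F=K(t)$ with the iterative derivation by $t$, i.e.
$$\theta^{(k)}(t^n)=\binom{n}{k}\, t^{n-k} \qquad \text{for all } k,n\in{\mathbb N}.$$
By Lucas' theorem, $\binom{p^\ell}{j}\equiv 0 \pmod{p}$ for all $0<j<p^\ell$, so $t^{p^\ell}\in F_\ell$, and in fact $F_\ell=K(t^{p^\ell})$, since $[F:K(t^{p^\ell})]=p^\ell=[F:F_\ell]$. As $K$ is perfect, this yields
$$F_{[\ell]}=\left(K(t^{p^\ell})\right)^{p^{-\ell}}=K^{p^{-\ell}}(t)=K(t)=F,$$
so this $F$ admits no nontrivial purely inseparable ID-extensions, in accordance with part \ref{trivial}.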
\begin{thm}
Let $E/F$ be a PPV-extension and let $\ell\in{\mathbb N}$. Then
$E_{[\ell]}/F_{[\ell]}$ is a PPV-extension, and its Galois group
scheme is related to $\underline{\Gal}(E/F)$ by $({\bf
Frob}^\ell)^{*}\left(\underline{\Gal}(E_{[\ell]}/F_{[\ell]})\right)\cong
\underline{\Gal}(E/F)$, where ${\bf Frob}$ denotes the Frobenius
morphism on $\Spec(K)$.
\end{thm}
\begin{proof}
Let $R\subseteq E$ be the corresponding PPV-ring and $Y\in\GL_n(R)$ a
fundamental solution matrix for a corresponding IDE
$\theta(\vect{y})=A\vect{y}$. Since the $m$-variate iterative
derivation is non-degenerate on $F$, on has
$[F:F_\ell]=p^{m\ell}=[E:E_\ell]$.
Hence, there is a matrix $D \in \GL_n(F)$ such that
$\tilde{Y}:=D^{-1}Y\in \GL_n(R_\ell)$. The matrix $\tilde{Y}$
satisfies
$$\theta(\tilde{Y})=\theta(D^{-1}Y)=\theta(D)^{-1}AD\tilde{Y},$$
i.\,e. it is a fundamental solution matrix for the IDE
$\theta(\vect{y})=\tilde{A}\vect{y}$, where
$\tilde{A}=\theta(D)^{-1}AD\in\GL_n(F[[\vect{T}]])$.
We first show that $\tilde{A}\in\GL_n(F_\ell[[T_1^{p^\ell},\dots, T_m^{p^\ell}]])$:
Clearly $\tilde{A}\in\GL_n(F[[\vect{T}^{p^\ell}]])$, since
$\theta^{(\vect{k})}(\tilde{Y})=0$ for all
$\vect{k}\in J_\ell$ and since $\theta$ is
iterative.
Then for all $\vect{j}\in{\mathbb N}^m$ and all $\vect{k}\in J_\ell$ we have
$$\theta^{(\vect{k})}\left(\theta^{(\vect{j})}(\tilde{Y})\right)=
\theta^{(\vect{j})}\left(\theta^{(\vect{k})}(\tilde{Y})\right)=0,$$
and
$$\theta^{(\vect{k})}\left(\theta^{(\vect{j})}(\tilde{Y})\right)=\theta^{(\vect{k})}\left(\tilde{A}_{\vect{j}}\cdot
\tilde{Y}\right)=
\theta^{(\vect{k})}(\tilde{A}_{\vect{j}})\tilde{Y}.$$
Hence, $\theta^{(\vect{k})}(\tilde{A}_{\vect{j}})=0$. Therefore
$\tilde{A}_{\vect{j}}$ has coefficients in $F_\ell$.
Since $\tilde{A}\in\GL_n(F_\ell[[\vect{T}^{p^\ell}]])$,
$R_\ell$ is actually a PPV-ring over $F_\ell$ with fundamental solution matrix $\tilde{Y}$.
By taking $p^{\ell}$-th roots, we obtain that $R_{[\ell]}$ is a
PPV-ring over $F_{[\ell]}$ with fundamental solution matrix $\left((\tilde{Y}_{i,j})^{p^{-\ell}}\right)_{i,j}$.
For obtaining the relation between the Galois groups, we first observe that
$F$ and $R_\ell$ are linearly disjoint over $F_\ell$ and hence
$F\otimes_{F_\ell} R_\ell\cong R$,
which induces a natural isomorphism of the Galois groups
$\underline{\Gal}(R/F)\cong \underline{\Gal}(R_\ell/F_\ell)$.
Furthermore the $p^\ell$-th power Frobenius endomorphism leads to an
isomorphism
$$R_{[\ell]}\otimes_{F_{[\ell]}} R_{[\ell]} \xrightarrow{()^{p^\ell}}
R_\ell\otimes_{F_\ell} R_\ell.$$
Since $\underline{\Gal}(R_\ell/F_\ell)$ (resp.\ $\underline{\Gal}(R_{[\ell]}/F_{[\ell]})$) is isomorphic as $K$-group scheme to
$\Spec(C_{R_\ell\otimes_{F_\ell} R_\ell})$ (resp.
$\Spec(C_{R_{[\ell]}\otimes_{F_{[\ell]}} R_{[\ell]}})$), this gives
the desired property
$$({\bf Frob}^\ell)^{*}\left(\underline{\Gal}(E_{[\ell]}/F_{[\ell]})\right)\cong \underline{\Gal}(E_\ell/F_\ell)
\cong\underline{\Gal}(E/F).$$
\end{proof}
From this theorem we obtain a criterion for $E_{[\ell]}/E$ being a
PPV-extension.
\begin{cor}\label{E_ell-is-ppv}
Let $E/F$ be a PPV-extension and suppose that $F_1=F^p$. Then the
extension $E_{[\ell]}/E$ is a PPV-extension, for all $\ell\in{\mathbb N}$.
\end{cor}
\begin{proof}
From $F_1=F^p$, it follows that $F_{[\ell]}=F$ for all $\ell$. Hence
by the previous theorem, $E_{[\ell]}/F$ is a PPV-extension and
therefore $E_{[\ell]}/E$ is a PPV-extension.
\end{proof}
\begin{prop}\label{finite-id-ext}
Let $E$ be a finite ID-extension of some ID-field $F$ with $C_E=K$.
Then there is a finite field extension $L$ over $K$ such that $E$ is
contained in a PPV-extension of $FL=F\otimes_K L$.
\end{prop}
\begin{proof}
Let $e_1,\dots, e_n\in E$ be an $F$-basis of $E$. Then there are unique
$A_{\vect{k}}\in F^{n\times n}$, such that $\theta_E^{(\vect{k})}(e_i)=\sum_{j=1}^n
(A_{\vect{k}})_{ij}e_j$ for all $\vect{k}\in{\mathbb N}^m$ and $i=1,\dots, n$.
Since the $A_{\vect{k}}$ are unique, the property of $\theta_E$ being an
iterative derivation implies that $\theta(\vect{y})=A\vect{y}$ is an
iterative differential equation, where $A=\sum_{\vect{k}\in{\mathbb N}^m}
A_{\vect{k}} \vect{T}^{\vect{k}}\in \GL_n(F[[\vect{T}]])$.
Let $U:=E[X_{ij},\det(X)^{-1}]$ be the universal solution ring for
this IDE over $E$ (i.\,e. $\theta_U(X)=A X$). Then the ideal
$(X_{11}-e_1,X_{21}-e_2,\dots, X_{n1}-e_n)\unlhd U$ is an ID-ideal and
there is a maximal ID-ideal $P$ containing $(X_{11}-e_1,\dots,
X_{n1}-e_n)$. Then the field of constants $L:=C_{U/P}$ of $U/P$ is a
finite field extension of $K$ and by construction $U/P$ is a
PPV-extension of $FL$ which contains $E$.
\end{proof}
\begin{thm}\label{general-realisation}
Let $F$ be an ID-field with $C_F=K$ perfect.\\
Let $\tilde{C}_{\ell}$ denote the
maximal subalgebra of $C_{F_{[\ell]}\otimes_F F_{[\ell]}}$ which is a
Hopf algebra with respect to the comultiplication induced by
$$F_{[\ell]}\otimes_F F_{[\ell]}\longrightarrow
\left(F_{[\ell]}\otimes_F F_{[\ell]}\right)\otimes_{F_{[\ell]}}\left(
F_{[\ell]}\otimes_F F_{[\ell]}\right), a\otimes b\mapsto (a\otimes
1)\otimes (1\otimes b).$$
Then an infinitesimal group scheme of height $\leq \ell$ is realisable as
ID-Galois group scheme over $F$, if and only if it is a factor group
of $\Spec(\tilde{C}_{\ell})$.
\end{thm}
\begin{proof}
Let $\tilde{{\mathcal G}}$ be an infinitesimal group scheme of height $\leq \ell$
which is realisable as Galois group scheme over $F$ and let $F'/F$ be
an extension with Galois group scheme $\tilde{{\mathcal G}}$. By Cor.
\ref{infinitesimal_group} and Prop. \ref{max-id-extension}, $F'$ is an
ID-subfield of $F_{[\ell]}$. Therefore,
$K[\tilde{{\mathcal G}}]\cong C_{F'\otimes_F F'}$ is a subalgebra of $C_{F_{[\ell]}\otimes_F
F_{[\ell]}}$ and is a Hopf algebra with comultiplication as
given in the statement. Hence it is a sub-Hopf algebra of
$\tilde{C}_{\ell}$ and so $\tilde{{\mathcal G}}$ is a factor group of $\Spec(\tilde{C}_{\ell})$.
For the converse,
we first assume that there is a PPV-extension $E/F$ such that
$E\supseteq F_{[\ell]}$. Let $R$ denote the corresponding PPV-ring and
${\mathcal G}:=\underline{\Gal}(E/F)$ the Galois group scheme. Since $F_{[\ell]}$ is an
intermediate ID-field, there is a subgroup ${\mathcal H}\leq {\mathcal G}$ such that
$F_{[\ell]}=E^{{\mathcal H}}$. Since all elements in $F_{[\ell]}$ are
differentially finite over $F$ we even have $F_{[\ell]}=R^{{\mathcal H}}$.
Then $\tilde{C}_{\ell}\subseteq C_{F_{[\ell]}\otimes_F
F_{[\ell]}}\subseteq C_{R\otimes_F R}\cong K[{\mathcal G}]$ is a sub-Hopf
algebra, i.\,e. $\Spec(\tilde{C}_{\ell})$ is a factor group of ${\mathcal G}$.
If $\tilde{{\mathcal G}}$ is a factor group of $\Spec(\tilde{C}_{\ell})$ then it
is a factor group of ${\mathcal G}$ and therefore there is a normal subgroup
${\mathcal G}'\unlhd {\mathcal G}$ such that $\tilde{{\mathcal G}}\cong {\mathcal G}/{\mathcal G}'$. Then by the
Galois correspondence, $\tilde{F}:=E^{{\mathcal G}'}$ is a PPV-extension of $F$ with
Galois group scheme $\tilde{{\mathcal G}}$.
If there is no PPV-extension $E/F$ containing $F_{[\ell]}$, then by
Prop. \ref{finite-id-ext}, there is a finite Galois extension $K'$ of
$K$ such that there
is a PPV-extension $E'/FK'$ containing $F_{[\ell]}K'$. By the
previous arguments there is a PPV-field $F'$ over $FK'$ with Galois
group $\tilde{{\mathcal G}}\times_K K'$. Since $F'$ is a purely inseparable
extension of $FK'$, it is defined over $F$, i.\,e. there is an
ID-field $\tilde{F}/F$ such that $F'=\tilde{F}\otimes_K K'$.
Since $\Gal(K'/K)$ acts on $F'=\tilde{F} K'$ by ID-automorphisms, the
constants of $\tilde{F}\otimes_F \tilde{F}\cong (F'\otimes_F
\tilde{F})^{\Gal(K'/K)}\cong (F'\otimes_{FK'} F')^{\Gal(K'/K)}$ are equal to
the $\Gal(K'/K)$-invariants of $C_{F'\otimes_{FK'} F'}\cong
K'[\tilde{{\mathcal G}}]$ inside $C_{F_{[\ell]}\otimes_F
F_{[\ell]}}K'$, i.\,e.{} are equal to $K[\tilde{{\mathcal G}}]$. By comparing
dimensions, one obtains that the $\tilde{F}$-linear mapping $\tilde{F}\otimes_K
K[\tilde{{\mathcal G}}]\to \tilde{F}\otimes_F \tilde{F}$ is in fact an isomorphism, and
hence by \cite{maurischat}, Prop. 10.12, $\tilde{F}/F$ is a PPV-extension with
Galois group scheme $\tilde{{\mathcal G}}$.
\end{proof}
\begin{cor}\label{special-realisation}
Let $E$ be an ID-field and suppose that $E$ is a PPV-extension of
some ID-field $F$ satisfying $F_1=F^p$.
An infinitesimal group scheme of height $\leq \ell$ is realisable as
ID-Galois group scheme over $E$, if and only if it is a factor group
of $\underline{\Gal}(E_{[\ell]}/E)$.
\end{cor}
\begin{proof}
This follows directly from Theorem \ref{general-realisation} and the
fact that in this case $E_{[\ell]}/E$ is a PPV-extension by Corollary
\ref{E_ell-is-ppv}.
\end{proof}
\section{Examples}
In this section we consider some examples. Throughout this section $K$
denotes a perfect field of characteristic $p>0$ and $K((t))$ is
equipped with the univariate iterative derivation $\theta$ given by
$\theta(t)=t+T$.
\begin{exam}
We start with the easiest case. If $F=K(t)$ or $F$ is a finite
ID-extension of $K(t)$ inside $K((t))$, then $F_1=F^p$,
i.\,e. $F_{[1]}=F$, and therefore by Prop. \ref{max-id-extension},
there exist no purely inseparable ID-extensions of $F$.
For $F=K(t)$, the property $F_1=F^p$ is obvious, and for $F$ being a finite
extension of $K(t)$, it is obtained by a simple dimension argument.
\end{exam}
\begin{exam}
We present an example for an ID-field $F$ with $F_{[\ell]}\gneq F$
which nevertheless has no purely inseparable PPV-extensions. More
precisely, we show that the constants of $F_{[\ell]}\otimes_F
F_{[\ell]}$ are equal to $K=C_F$ for all $\ell\in{\mathbb N}$.
\smallskip
Let $\alpha\in {\mathbb Z}_p\setminus {\mathbb Q}$ be a $p$-adic integer, and for all
$k\in {\mathbb N}$, let $\alpha_k\in \{0,\dots, p^k-1\}$ be chosen such that
$\alpha\equiv \alpha_k \mod{p^k}$. Then we define
$r:=\sum_{k=1}^\infty t^{\alpha_k} \in K[[t]]$.
The field $F:=K(t,r)$ is then an ID-subfield of $K((t))$, since for
all $j\in{\mathbb N}$,
\begin{eqnarray*}
\theta^{(p^j)}(r)&=& \sum_{k=1}^\infty
\theta^{(p^j)}\left(t^{\alpha_k}\right)
= \sum_{k=1}^\infty \binom{\alpha_k}{p^j} t^{\alpha_k-p^j}\\
&=& \binom{\alpha_{j+1}}{p^j} t^{-p^j}\sum_{k=j+1}^\infty t^{\alpha_k}
= \binom{\alpha_{j+1}}{p^j} t^{-p^j}\left( r- \sum_{k=1}^j
t^{\alpha_k}\right)\in K(t,r).
\end{eqnarray*}
Here we used that $\binom{a}{p^j}=0$ if $a<p^j$ and
$\binom{a}{p^j}\equiv \binom{b}{p^j} \mod{p}$ if $a\equiv b \mod{p^{j+1}}$.
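For instance (a small numerical illustration, not needed for the argument), take $p=3$ and $j=1$: then $5\equiv 14 \mod{3^2}$ and indeed
$$\binom{5}{3}=10\equiv 1 \mod{3}, \qquad \binom{14}{3}=364\equiv 1 \mod{3},$$
in accordance with Lucas' theorem, since $5=(12)_3$ and $14=(112)_3$ have the same two lowest base-$3$ digits, while $3=(10)_3$.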
\smallskip
We will show now that $r$ is transcendental over $K(t)$:
Let $s$ be a solution for the $1$-dimensional IDE
$\theta^{(p^j)}(y)=\binom{\alpha_{j+1}}{p^j}t^{-p^j}y$ ($j\in{\mathbb N}$) in
some extension field of $F$. Since $\alpha\not\in {\mathbb Q}$, the element
$s$ is transcendental over $K(t)$ by \cite{mat_hart}, Thm. 3.13.
One then easily verifies
$$\theta^{(p^j)}\begin{pmatrix} s& r\\ 0&1\end{pmatrix}=
\begin{pmatrix} \binom{\alpha_{j+1}}{p^j}t^{-p^j} & -\binom{\alpha_{j+1}}{p^j} \sum_{k=1}^j
t^{\alpha_k-p^j} \\ 0&0\end{pmatrix} \cdot \begin{pmatrix} s&
r\\ 0&1\end{pmatrix},$$
which shows that $K(t,r,s)$ is a PPV-field over $K(t)$ with Galois
group inside ${\mathbb G}_m\ltimes {\mathbb G}_a\cong \{\left(\begin{smallmatrix}
x&a\\ 0&1\end{smallmatrix}\right)\in \GL_2\}$.
Since $s$ is transcendental over $K(t)$, the full subgroup ${\mathbb G}_m$ is
contained in the Galois group. The only subgroups of ${\mathbb G}_a$ which are
stable under the ${\mathbb G}_m$-action are the Frobenius kernels
$\boldsymbol{\alpha}_{p^m}$. But all Galois groups over $K(t)$ are reduced
(cf. \cite{maurischat}, Cor. 11.7), and
hence we have $\underline{\Gal}(K(t,r,s)/K(t))={\mathbb G}_m\ltimes {\mathbb G}_a$ or
$={\mathbb G}_m$. In both cases $K(t,r,s)$ contains no elements that are
algebraic over $K(t)$. Since the power series of $r$ does not become
eventually periodic, $r\not\in K(t)$ and so $r$ has to be
transcendental over $K(t)$.
Next we are going to calculate the constants of $F_{[\ell]}\otimes_F
F_{[\ell]}$:
It is easily seen that $F_{[\ell]}=K(t,r_{[\ell]})$, where
$$r_{[\ell]}:= \left( t^{-\alpha_\ell}(r-\sum_{k=1}^\ell
t^{\alpha_k})\right)^{p^{-\ell}}= \sum_{k=1}^\infty
t^{(\alpha_{k+\ell}-\alpha_\ell)p^{-\ell}}\in K[[t]],$$
and the derivatives of $r_{[\ell]}$ are given by:
$$\theta^{(p^j)}(r_{[\ell]})=
\binom{(\alpha_{j+1+\ell}-\alpha_{\ell})p^{-\ell}}{p^j}
t^{-p^j}\left( r_{[\ell]} - \sum_{k=1}^j
t^{(\alpha_{k+\ell}-\alpha_\ell)p^{-\ell}} \right).$$
Hence, one obtains for all $n\in{\mathbb N}$: $$\theta^{(n)}(r_{[\ell]})\in
\binom{(\alpha-\alpha_{\ell})p^{-\ell}}{n} t^{-n}r_{[\ell]}
+ K(t).$$
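As a consistency check, for $\ell=0$ (with $F_{[0]}=F$, $r_{[0]}=r$ and the convention $\alpha_0=0$) this reads $\theta^{(n)}(r)\in \binom{\alpha}{n}t^{-n}r+K(t)$, which for $n=p^j$ agrees with the computation of $\theta^{(p^j)}(r)$ above, since $\binom{\alpha}{p^j}\equiv \binom{\alpha_{j+1}}{p^j} \mod{p}$ by the congruences recalled there.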
For calculating the constants in $F_{[\ell]}\otimes_F F_{[\ell]}$, we
remark that $\{ r_{[\ell]}^i\otimes r_{[\ell]}^j \mid 0\leq i,j\leq
p^\ell-1\}$ is a basis of $F_{[\ell]}\otimes_F F_{[\ell]}$ as an
$F$-vector space. A further calculation shows that for $n\in{\mathbb N}$ and
$k\in{\mathbb Z}$
$$\theta^{(n)}\left( t^k r_{[\ell]}^i\otimes r_{[\ell]}^j\right)
\equiv
\binom{k+ (i+j)(\alpha-\alpha_{\ell})p^{-\ell}}{n}t^{-n}\left(
t^k r_{[\ell]}^i\otimes r_{[\ell]}^j\right)$$
modulo terms in $r_{[\ell]}^{\mu}\otimes r_{[\ell]}^{\nu}$ with $\mu+\nu<i+j$.
So an element $x:=\sum_{i,j} c_{i,j}r_{[\ell]}^i\otimes r_{[\ell]}^j\in
F_{[\ell]}\otimes_F F_{[\ell]}$ can only be constant, if for the terms
of maximal degree these binomial coefficients vanish for all $n$.
Since $\alpha$ is not rational, this is only possible if $i=j=0$ is
the maximal degree and if $k=0$, i.\,e.{} $x\in K$.
So we have shown that $C_{F_{[\ell]}\otimes_F F_{[\ell]}}=K$ for all
$\ell\in{\mathbb N}$, which implies by Theorem \ref{general-realisation} that
there are no purely inseparable PPV-extensions over $F=K(t,r)$.
\end{exam}
\begin{exam}
The following example is quite contrary to the previous one. In this example all
purely inseparable ID-extensions are PPV-extensions.
Let $\alpha_1,\dots, \alpha_n\in{\mathbb Z}_p$ be $p$-adic integers such that
the set $\{1,\alpha_1,\dots, \alpha_n\}$ is ${\mathbb Z}$-linearly independent,
and let $\alpha_i=:\sum_{k=0}^\infty a_{i,k}p^k$ ($i=1,\dots, n$) be
their normal series, i.\,e.{} $a_{i,k}\in\{ 0,\dots, p-1\}$. For
$i=1,\dots, n$, we then
define
$$s_i:=\sum_{k=0}^\infty a_{i,k}t^{p^k}\in K((t))$$
and consider the field $F:=K(t,s_1,\dots, s_n)$ which obviously is an
ID-subfield of~$K((t))$.
Since $\theta^{(p^\ell)}(s_i)=a_{i,\ell}$ for all $\ell\in{\mathbb N}$ and
$i=1,\dots, n$, the extension $F/K(t)$ is a PPV-extension and its
Galois group scheme is a subgroup scheme of~${\mathbb G}_a^{\,n}$. Actually, the
condition on the $\alpha_i$ implies that the $s_i$ are algebraically
independent over $K(t)$ and hence the Galois group scheme is the full
group~${\mathbb G}_a^{\,n}$. Therefore by Corollary \ref{E_ell-is-ppv}, for all
$\ell\in{\mathbb N}$ the extension $F_{[\ell]}/F$ is a PPV-extension and
$\underline{\Gal}(F_{[\ell]}/F)\cong (\boldsymbol{\alpha}_{p^\ell})^n$, where
$\boldsymbol{\alpha}_{p^\ell}$ denotes the kernel of the $p^\ell$-th
power Frobenius map on~${\mathbb G}_a^{\,n}$.
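The identity $\theta^{(p^\ell)}(s_i)=a_{i,\ell}$ used above follows as before: $\theta^{(p^\ell)}(t^{p^k})=\binom{p^k}{p^\ell}t^{p^k-p^\ell}$ vanishes for $k<\ell$, equals $1$ for $k=\ell$, and is divisible by $p$ for $k>\ell$ by Lucas' theorem.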
Furthermore, $(\boldsymbol{\alpha}_{p^\ell})^n$ is a commutative group
scheme and so all its subgroup schemes are normal subgroup
schemes. By Theorem \ref{galois_correspondence}, this implies that
every intermediate ID-field $F\leq E\leq F_{[\ell]}$ is a
PPV-extension of $F$. So all purely inseparable ID-extensions of $F$
are PPV-extensions over~$F$.
Furthermore, by Cor. \ref{special-realisation}, an infinitesimal
group scheme is realisable over $F$ if and only if it is a closed subgroup
scheme of $(\boldsymbol{\alpha}_{p^\ell})^n$ for some $\ell$, i.\,e.{} if
and only if it is a closed infinitesimal subgroup scheme of~${\mathbb G}_a^{\,n}$.
\end{exam}
\section{Introduction}
The standard theoretical approach to neutrino oscillations is based on the flavour neutrino states postulated in the seminal work of Gribov and Pontecorvo in 1968 \cite{Grib_Pont}. We have recently presented a consistent and universal definition of oscillating neutrino states
as coherent superpositions of massive neutrino states \cite{AT_neutrino}. The work was motivated by the necessity to formulate within a rigorous quantum field theoretical frame the coherence of particle states of different masses, which are known to be always emitted incoherently. The idea is that in quantum field theory any state has to be created by the action of some quantum operator on the physical vacuum of the theory, in the spirit of the Klauder--Sudarshan--Glauber coherent states \cite{Klauder, Sudarshan,Glauber}. With this purpose in mind, we have been able to formulate for the first time a prescription for the definition of inherently coherent oscillating neutrino states, which are the closest possible to the flavour neutrino states conjectured by Gribov and Pontecorvo, and with which they formally coincide in the ultrarelativistic approximation \cite{AT_neutrino}.
In summary, the quantum field theoretical solution for describing the coherent oscillating neutrino states is to create them by the action of the creation operator for Standard Model {\it massless} neutrinos on the physical vacuum of the theory, namely the vacuum of the massive neutrino fields \cite{AT_neutrino}. In order to find such an action, we employed the method of unitarily inequivalent representations, based on Bogoliubov transformations which connect creation and annihilation operators acting in different Fock spaces (in this case, the Fock space of massless neutrinos and the Fock space of massive neutrinos).
The method is due to Bogoliubov and it was the technical innovation which led him to the elegant explanation of superfluidity in the seminal work of 1947 \cite{Bog_superf} and superconductivity in 1958 \cite{Bog_superc}, by introducing the notion of Bogoliubov quasiparticles. At the same time, these works were signaling for the first time the concept of spontaneous breaking of symmetry. Later on, the same method was employed by Nambu and Jona-Lasinio in 1961 \cite{NJL}, in the model of dynamical generation of nucleon masses, which brought the spontaneous breaking of (chiral) symmetry in the realm of particle physics. The celebrated Haag theorem \cite{Haag} is also a product of the method of unitarily inequivalent representations. This short list of some of the most influential results in theoretical condensed matter and particle physics of the previous century shows the power and versatility of the method of unitarily inequivalent representations. Our recent works \cite{AT_neutrino, AT_neutron} testify to its potential in solving standing problems in the theory of quantum coherence, applied to the quantum field theory of particle oscillations. In this framework, the oscillating neutrino states as coherent superpositions of the massive Dirac neutrino states with equal momenta $\bf p$ and helicity $\lambda$, $|\nu_{i\lambda}({\bf p})\rangle$, $i=1,2,3$ are found to be
\begin{eqnarray}\label{3-nu_state}
|\nu_l({\bf p},\lambda)\rangle=\sum_{i=1,2,3} U^*_{li}\sqrt{\frac{1}{2}\left(1+\frac{\tp}{\Omega_{i\tp}}\right)}|\nu_{i\lambda}({\bf p})\rangle, \ \ \ l=e,\mu,\tau,\ \ \ \tp=|{\bf p}|,\ \ \Omega_{i\tp}=\sqrt{{\tp}^2+m_i^2},
\end{eqnarray}
where $U_{li}$ are the elements of the unitary Pontecorvo--Maki--Nakagawa--Sakata mixing matrix.
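For orientation, in the two-flavour setting considered below, where $U$ is the $2\times 2$ rotation by the mixing angle $\theta$, \eqref{3-nu_state} specializes to
\begin{eqnarray}
|\nu_e({\bf p},\lambda)\rangle=\cos\theta\sqrt{\frac{1}{2}\left(1+\frac{\tp}{\Omega_{1\tp}}\right)}\,|\nu_{1\lambda}({\bf p})\rangle+\sin\theta\sqrt{\frac{1}{2}\left(1+\frac{\tp}{\Omega_{2\tp}}\right)}\,|\nu_{2\lambda}({\bf p})\rangle,\nonumber
\end{eqnarray}
and the Gribov--Pontecorvo superposition is recovered in the ultrarelativistic limit $\Omega_{i\tp}\to \tp$, where both square-root factors tend to $1$.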
The comment \cite{BS} claims that the quantization prescription proposed in \cite{AT_neutrino} can be reproduced in the so-called flavour Fock space scheme initiated in \cite{BV1}. In the following, we shall justify why the latter scheme has nothing to do with our proposal for the definition of oscillating particle states and we shall prove that a flavour Fock space for massive mixed neutrinos cannot exist.
\section{Quantum systems with interaction}\label{QFI}
To understand why the flavour Fock space for mixed neutrinos cannot exist, we shall make a short detour to the basic formulation of quantum systems with interaction, as it appears in the monographs of Bjorken and Drell \cite{BD} (pp. 90-91) and of Bogoliubov and Shirkov \cite{Bog-Shirk} (pp. 117-119). We consider relativistically invariant local quantum field theories, described by a Lagrangian density:
\be
{\cal L}(x)={\cal L}(u_i(x),\partial_\mu u_j(x)),
\ee
where the fields $u_i(x), i=1,2,\ldots, N$ form a system with interactions.
The system is quantized by imposing equal time commutation or anticommutation relations (ETCR), depending on the spin of the fields, just as in the free case, namely
\bea\label{etcr}
\left[u_i({\bf x},t),\frac{\partial{\cal L}}{\partial{\dot u_j}}({\bf y},t)\right]&=& i\delta_{ij}\delta({\bf x} -{\bf y}),\cr
\left[u_i({\bf x},t),u_j({\bf y},t)\right]&=& 0,\cr
\left[\frac{\partial{\cal L}}{\partial{\dot u_j}}({\bf x},t),\frac{\partial{\cal L}}{\partial{\dot u_j}}({\bf y},t)\right]&=& 0,
\eea
if we take the fields $u_i(x)$ to be of integer spin. Using Noether's theorem and the translational invariance of the Lagrangian, one obtains the Hamiltonian
\be
H=H(u, \nabla u, \dot u).
\ee
Knowing the Hamiltonian, one determines the equations of motion for the quantized system.
Usually, in the case of systems with interaction, the equations of motion are not exactly solvable, therefore we cannot know their exact dependence on the space-time coordinates.
In practice, one usually starts by writing the solutions of the equations of motion at $t=0$ (i.e. in the Schr\"odinger picture) and describes their time development by the Heisenberg equations,
\be\label{H_eq}
u_i({\bf x},t)=e^{iHt}u_i({\bf x},0)e^{-iHt}.
\ee
The solution at $t=0$ is written as
\be\label{u_0}
u_i({\bf x},0)=\int\frac{d{\bf k}}{(2\pi)^{3/2}\sqrt{2\omega_{{\bf k},i}}}\big(a_{{\bf k},i}(0)e^{i{\bf k}{\bf x}}+b^\dagger_{{\bf k},i}(0)e^{-i{\bf k}{\bf x}}\big),
\ee
with $\omega_{{\bf k},i}=\sqrt{{\bf k}^2+m_i^2}$, such that, in the limit of no interaction, it coincides with the free field solution at the time $t=0$.
Imposing the ETCR \eqref{etcr} at $t=0$ and using the expansion \eqref{u_0}, one finds the
commutation relations of $a_{{\bf k},i}(0)$ and $b_{{\bf k},i}(0)$ as for the free fields, namely:
\bea\label{cr0}
&&[a_{{\bf k},i}(0),a^\dagger_{{\bf q},j}(0)]=[b_{{\bf k},i}(0),b^\dagger_{{\bf q},j}(0)]=\delta_{ij}\delta({\bf k}-{\bf q}),\cr
&&[a_{{\bf k},i}(0),a_{{\bf q},j}(0)]=[a^\dagger_{{\bf k},i}(0),a^\dagger_{{\bf q},j}(0)]=[b_{{\bf k},i}(0),b_{{\bf q},j}(0)]=[b^\dagger_{{\bf k},i}(0),b^\dagger_{{\bf q},j}(0)]=0,
\eea
but also commutation relations between the operators and their time derivatives, for example:
\be\label{newcr0}
[a_{{\bf k},i}(0),\dot a^\dagger_{{\bf q},i}(0)]=\delta({\bf k}-{\bf q}),\ \ \ \mbox{etc}.
\ee
The exact form of the latter commutation relations depends, of course, on the coupled equations of motion satisfied by the fields $u_i(x)$.
Since the Hamiltonian is time-independent, it can be written in terms of the mode operators and their time derivatives at $t=0$,
\be\label{Ham0} H=H\Big(a_{{\bf k},i}(0), a^\dagger_{{\bf k},i}(0), b_{{\bf k},i}(0), b^\dagger_{{\bf k},i}(0),\dot a_{{\bf k},i}(0), \dot a^\dagger_{{\bf k},i}(0), \dot b_{{\bf k},i}(0), \dot b^\dagger_{{\bf k},i}(0)\Big).\ee
At this point, one can write the exact solution at an arbitrary time $t$ as
\be\label{exact_sol}
u_i(x)=u_i^+(x)+u_i^-(x)=\int\frac{d{\bf k}}{(2\pi)^{3/2}\sqrt{2\omega_{{\bf k},i}}}\big(a_{{\bf k},i}(t)e^{i{\bf k}{\bf x}}+b^\dagger_{{\bf k},i}(t)e^{-i{\bf k}{\bf x}}\big),
\ee
where
\bea\label{H_eq_ab}
a_{{\bf k},i}(t)&=&e^{iHt}a_{{\bf k},i}(0)e^{-iHt},\ \ \ \ \ b_{{\bf k},i}(t)=e^{iHt}b_{{\bf k},i}(0)e^{-iHt},\cr
a^\dagger_{{\bf k},i}(t)&=&e^{iHt}a^\dagger_{{\bf k},i}(0)e^{-iHt},\ \ \ \ \ b^\dagger_{{\bf k},i}(t)=e^{iHt}b^\dagger_{{\bf k},i}(0)e^{-iHt}.
\eea
Using \eqref{cr0}, \eqref{newcr0} and \eqref{Ham0}, the relations \eqref{H_eq_ab} lead to some complicated nonlinear operator equations for $a_{{\bf k},i}(t), a^\dagger_{{\bf k},i}(t)$ and $b_{{\bf k},i}(t), b^\dagger_{{\bf k},i}(t)$, which usually cannot be solved exactly. The solution of these equations is equivalent to solving the equations of motion.
The particular issue we are interested in is whether the operators $a_{{\bf k},i}(t)$ and $b^\dagger_{{\bf k},i}(t)$ may be regarded as annihilation and creation operators of actual particles. One has to bear in mind that the separation into positive and negative frequency parts \eqref{exact_sol} may be relativistically noninvariant, in which case the operators $a_{{\bf k},i}(t)$ and $b^\dagger_{{\bf k},i}(t)$ may not have the meaning of annihilation and creation operators \cite{Bog-Shirk, BD}. For instance, the relativistic invariance is preserved and the operators $b^\dagger_{{\bf k},i}(t)$ are creation operators only if they can be written as
\be\label{criterion1}
b^\dagger_{{\bf k},i}(t)=\sum_\alpha e^{i\omega_\alpha t} b^\dagger_{i\alpha}({\bf k}).
\ee
If, on the contrary, we have an expression with mixed positive and negative energy parts, such as
\be\label{criterion2}
b^\dagger_{{\bf k},i}(t)=\sum_\alpha e^{i\omega_\alpha t} b^\dagger_{i\alpha}({\bf k})+\sum_\beta e^{-i\omega_\beta t} b_{i\beta}({\bf k}),
\ee
the operators $b^\dagger_{{\bf k},i}(t)$ {\it cannot} be interpreted as creation operators. We emphasize that in the above formulas the operators $b^\dagger_{i\alpha}({\bf k})$ and $b_{i\beta}({\bf k})$ do {\it not} depend on time.
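For a single free field of mass $m$, for example, the Heisenberg evolution gives $b^\dagger_{{\bf k}}(t)=e^{iHt}b^\dagger_{{\bf k}}(0)e^{-iHt}=e^{i\omega_{{\bf k}}t}\,b^\dagger_{{\bf k}}(0)$, with $\omega_{{\bf k}}=\sqrt{{\bf k}^2+m^2}$: this is a single term of the form \eqref{criterion1}, with the time-independent operator $b^\dagger_{{\bf k}}(0)$.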
\section{Flavour Fock space does not exist}
We consider the flavour number violating Lagrangian density in the case of two-neutrino mixing with Dirac mass terms (see, for example, \cite{mohapatra,fukugita,giunti,bilenky,valle}):
\be \label{mixlag}
\mathcal{L}(x)\ =\ \overline{\nu}(x) \, \lf(i \ga^\mu \pa_\mu \ - \ M \ri) \, \nu(x) \, ,
\ee
with
\be
\nu(x) \ = \
\bbm
\nu_e (x)\\ \nu_\mu (x)
\ebm \, , \qquad
M \ = \
\bbm
m_e & m_{e \mu} \\ m_{e \mu} & m_\mu
\ebm \, .
\ee
The field equations are
\bea \label{neuteq}
\lf(i \ga^\mu \pa_\mu \ - \ M \ri)\nu(x) \ = \ 0 \, ,
\eea
i.e. the flavour fields $\nu_e (x)$ and $\nu_\mu (x)$ satisfy {\it coupled} equations of motion, namely
\bea \label{neuteq'}
(i\gamma^\mu\partial_\mu-m_e)\nu_e(x)-m_{e\mu}\nu_\mu(x)=0,\cr
(i\gamma^\mu\partial_\mu-m_\mu)\nu_\mu(x)-m_{e\mu}\nu_e(x)=0.
\eea
The Lagrangian \eqref{mixlag} is diagonalized by the unitary change of variables:
\bea
\label{PontecorvoMix}
\bbm
\nu_e (x)\\ \nu_\mu (x)
\ebm \, = \
\bbm
\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta
\ebm \, \bbm
\nu_1 (x)\\ \nu_2 (x)
\ebm \, ,\eea
with $\tan 2 \theta = 2 m_{e\mu}/(m_\mu-m_e)$. The massive Dirac fields $\nu_1$ and $\nu_2$ satisfy free Dirac equations:
\bea\label{eom_m}
\lf(i \ga^\mu \pa_\mu \ - \ m_j\ri)\nu_j(x) \ = \ 0 \, , \qquad j=1,2 \, ,
\eea
where the masses $m_1$ and $m_2$ are given by the relations:
\bea
m_e & =& m_1 \, \cos^2 \theta \ + \ m_2 \, \sin^2 \theta \, , \cr
m_\mu & =& m_1 \, \sin^2 \theta \ + \ m_2 \, \cos^2 \theta \, .
\eea
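For completeness, diagonalizing the symmetric mass matrix $M$ gives the inverse relations (with the labelling $m_1\leq m_2$)
\bea
m_{1,2}=\frac{1}{2}\left(m_e+m_\mu\mp\sqrt{(m_\mu-m_e)^2+4m_{e\mu}^2}\right),\qquad m_{e\mu}=\frac{1}{2}(m_2-m_1)\sin 2\theta,\nonumber
\eea
consistent with $\tan 2 \theta = 2 m_{e\mu}/(m_\mu-m_e)$ above.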
According to the flavour Fock space scheme, the quantization of the theory in the set of variables $(\nu_1,\nu_2)$ leads to a Fock space of massive states, with the vacuum $|0\ran_{1,2}$, while the quantization in the variables $(\nu_e,\nu_\mu)$ would lead to an infinity of Fock spaces of flavour states, with the respective vacua $|0(t)\ran_{e,\mu}$ which are all orthogonal to the vacuum of massive states \cite{BHV}:
\be\label{orthog1}
_{e,\mu}\langle0 (t)|0\ran_{1,2}=0,\ \ \ \ \forall\ t,
\ee
and also orthogonal among themselves:
\be\label{orthog}
_{e,\mu}\langle0 (t)|0(t')\ran_{e,\mu}=0,\ \ \ \ \forall\ t\neq t'.
\ee
Due to \eqref{orthog1}, the massive and flavour Fock spaces do not share any states, namely the flavour states cannot be written as a superposition of the massive states \`a la Pontecorvo.
The claim of the possible existence of the flavour vacua \cite{BV1,BS} is wrong, because a unitary change of variables like \eqref{PontecorvoMix} can never modify the structure of the quantized theory.
The Fock representation is selected by the Hamiltonian and only by it \cite{SW,Strocchi}. In the following, we shall prove directly that only the vacuum $|0\ran_{1,2}$ exists, while $|0 (t)\ran_{e,\mu}$ cannot be constructed, contrary to the claims of the flavour Fock space scheme.
\subsection{Canonical quantization of the free fields $\nu_1,\nu_2$ and their Fock space}
It is clear that the system described by the Lagrangian \eqref{mixlag} is exactly solvable by the unitary change of variables \eqref{PontecorvoMix}. This is a system of two free Dirac fields $(\nu_1,\nu_2)$ of masses $m_1,m_2$, which is easily quantized canonically.
The fields $\nu_1$ and $\nu_2$, upon quantization, are expanded as
\bea
\nu_j(x) & = & \int\frac{d{\bf k}}{(2\pi)^{3/2}\sqrt{2\omega_{{\bf k},j}}}\sum_{r} \, \lf[u_{\mathbf{k},j}^r \, \alpha^r_{\mathbf{k},j} \, e^{-i \, \om_{\G k,j} \, t}e^{i{\bf k}\cdot {\bf x}} + v_{\mathbf{k},j}^r \, \beta_{\mathbf{k},j}^{r\dagger} \, e^{i \om_{\G k,j} \, t} e^{-i{\bf k}\cdot {\bf x}} \ri],\
j=1,2 \, , \label{Fourierfield}
\eea
where the creation and annihilation operators satisfy the canonical anticommutation relations
\bea
\{\alpha^r_{\mathbf{k},i},\alpha^{\dagger s}_{\mathbf{q},j}\}=\delta_{ij}\delta_{rs}\delta({\bf k}-{\bf q}), \ \ \ \{\beta^r_{\mathbf{k},i},\beta^{\dagger s}_{\mathbf{q},j}\}=\delta_{ij}\delta_{rs}\delta({\bf k}-{\bf q}),
\eea
all the other anticommutators being zero.
The Fock space of the fields $\nu_1$ and $\nu_2$ is built on the vacuum $|0\ran_{1,2}$, which is annihilated by $\al^r_{\G k,j}$, $\bt^r_{\G k,j}$:
\bea
\al^r_{\G k,j}|0\ran_{1,2}=\bt^r_{\G k,j}|0\ran_{1,2}=0,\ \ j=1,2.
\eea
So far, we have described the standard manner of introducing Dirac masses and mixing of the different flavour fields, encountered in all the neutrino physics textbooks (see, for example, \cite{mohapatra,fukugita,giunti,bilenky,valle}). The state $|0\ran_{1,2}$ is the physical vacuum of the theory, according to the rules of quantum field theory which state that the physical vacuum is the vacuum of the Fock space of those operators that diagonalize the Hamiltonian (see the monographs \cite{Bog-Shirk, Umezawa-book}). The vacuum $|0\ran_{1,2}$ is relativistically invariant and the lowest-lying state of the Fock space, satisfying
\be
H|0\ran_{1,2}=0,
\ee
where
\be\label{Ham_free}
H=\int d{\bf k}\sum_{j,r} \om_{\G k,j}(\alpha^{\dagger r}_{\mathbf{k},j}\alpha^r_{\mathbf{k},j}+\beta^{\dagger r}_{\mathbf{k},j}\beta^r_{\mathbf{k},j}).
\ee
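Indeed, \eqref{Ham_free} gives $[H,\alpha^{\dagger r}_{\mathbf{k},j}]=\om_{\G k,j}\,\alpha^{\dagger r}_{\mathbf{k},j}$ and likewise for $\beta^{\dagger r}_{\mathbf{k},j}$, so every creation operator raises the energy by $\om_{\G k,j}>0$ and $|0\ran_{1,2}$ is indeed the lowest-lying state.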
\subsection{Quantization of the interacting fields $\nu_e,\nu_\mu$}
The fields $\nu_e,\nu_\mu$ which satisfy the coupled equations of motion \eqref{neuteq'} are regarded as {\it interacting fields}, where the interaction term in the Lagrangian is
\be
{\cal L}_{int}(x)= -m_{e\mu}(\bar\nu_e(x)\nu_\mu(x)+\bar\nu_\mu(x)\nu_e(x)).
\ee
The parameter $m_{e\mu}$ is the ``coupling constant''.
We can quantize the theory as an interacting one, using the method described in Sect. \ref{QFI}. We impose the canonical equal time anticommutation relations:
\begin{eqnarray}\label{acr_nu}
\{\nu_\sigma({\bf x},t), \Pi_{\nu_{\sigma'}}({\bf y},t)\} &=&\{\nu_\sigma({\bf x},t), i\nu_{\sigma'}^{\dagger}({\bf y},t)\}=i\delta_{\sigma\sigma'}\delta({\bf x}-{\bf y}),\nonumber\\
\{\nu_\sigma({\bf x},t), \nu_{\sigma'}({\bf y},t)\} &=&0,\nonumber\\
\{\nu_\sigma^{\dagger}({\bf x},t), \nu_{\sigma'}^{\dagger}({\bf y},t)\} &=&0, \ \ \ \ \ \ \ \sigma,\sigma'=e,\mu,
\end{eqnarray}
and find the Hamiltonian corresponding to the Lagrangian \eqref{mixlag}:
\be\label{Ham_nu}
H=\int d{\bf x}\left[\sum_{\sigma=e,\mu}\left(
-\bar\nu_\sigma(x)i\gamma^{k}\partial_{k}\nu_\sigma(x) + m_\sigma\bar\nu_\sigma(x)\nu_\sigma(x)
\right)+m_{e\mu}\left(\bar\nu_e(x)\nu_\mu(x)+\bar\nu_\mu(x)\nu_e(x)\right)\right].
\ee
Using \eqref{acr_nu} and \eqref{Ham_nu} in the Hamilton equations,
\be
i\partial_t\nu_\sigma({\bf x},t)= [\nu_\sigma({\bf x},t),H],
\ee
we find the equations of motion \eqref{neuteq'}.
We write their solutions at $t=0$ as
\bea\label{sol0}
\nu_\si({\bf x},0)= \int\frac{d{\bf k}}{(2\pi)^{3/2}\sqrt{2\omega_{{\bf k},\sigma}}}\sum_{r} \, \lf[u_{\mathbf{k},\si}^r \, a^r_{\mathbf{k},\si}(0) \, e^{i{\bf k}\cdot {\bf x}} \ri. + \lf. v_{\mathbf{k},\si}^r \, b_{\mathbf{k},\si}^{r\dagger}(0) \, e^{-i{\bf k}\cdot {\bf x}} \ri],\
\si=e,\mu \,
\eea
where $\omega_{{\bf k},\sigma}$ as well as the spinors $u_{\mathbf{k},\si}^r, v_{\mathbf{k},\si}^r$ correspond to the masses $m_e$ and $m_\mu$, thus fulfilling the requirement that in the absence of interaction ($m_{e\mu}=0$), the solutions \eqref{sol0} coincide with the solutions of the free Dirac equation with the masses $m_e$ and $m_\mu$.
In principle, now we can continue like in Sect. \ref{QFI} and determine $H$ in terms of $a^r_{\mathbf{k},\si}(0), a^{\dagger r}_{\mathbf{k},\si}(0)$ and $b^r_{\mathbf{k},\si}(0), b^{\dagger r}_{\mathbf{k},\si}(0)$ and subsequently determine $a^r_{\mathbf{k},\si}(t), b^r_{\mathbf{k},\si}(t)$ and their hermitian conjugates by using \eqref{H_eq_ab}.
Since we know that the equations of motion \eqref{neuteq'} are diagonalized by the change of variables \eqref{PontecorvoMix}, which is valid at each and every space-time point, and the solutions for $\nu_1,\nu_2$ are already known and given by \eqref{Fourierfield}, one can write:
\bea\label{mixt0}
\nu_e({\bf x},0)&=&\cos\,\theta\nu_1({\bf x},0)+\sin\theta\,\nu_2({\bf x},0),\cr
\nu_\mu({\bf x},0)&=&-\sin\theta\,\nu_1({\bf x},0)+\cos\theta\,\nu_2({\bf x},0).
\eea
Then from \eqref{mixt0}, using \eqref{Fourierfield} and \eqref{sol0}, we find:
\bea
u_{\mathbf{k},e}^r \, a^r_{\mathbf{k},e}(0) \, + v_{-\mathbf{k},e}^r \, b_{-\mathbf{k},e}^{r\dagger}(0) \,&=&
\sqrt{\frac{\omega_{{\bf k},e}}{\omega_{{\bf k},1}}} c_\theta \left[u_{\mathbf{k},1}^r \, \alpha^r_{\mathbf{k},1} + v_{-\mathbf{k},1}^r \, \beta_{-\mathbf{k},1}^{r\dagger}\right]\cr
&+&\sqrt{\frac{\omega_{{\bf k},e}}{\omega_{{\bf k},2}}} s_\theta \left[u_{\mathbf{k},2}^r \, \alpha^r_{\mathbf{k},2} +v_{-\mathbf{k},2}^r \, \beta_{-\mathbf{k},2}^{r\dagger}\right], \cr
u_{\mathbf{k},\mu}^r \, a^r_{\mathbf{k},\mu}(0) \, + v_{-\mathbf{k},\mu}^r \, b_{-\mathbf{k},\mu}^{r\dagger}(0) \, &=& -\sqrt{\frac{\omega_{{\bf k},\mu}}{\omega_{{\bf k},1}}} s_\theta\left[u_{\mathbf{k},1}^r \, \alpha^r_{\mathbf{k},1} + v_{-\mathbf{k},1}^r \, \beta_{-\mathbf{k},1}^{r\dagger}\right]\cr
&+&\sqrt{\frac{\omega_{{\bf k},\mu}}{\omega_{{\bf k},2}}} c_\theta \left[u_{\mathbf{k},2}^r \, \alpha^r_{\mathbf{k},2} +v_{-\mathbf{k},2}^r \, \beta_{-\mathbf{k},2}^{r\dagger}\right],
\eea
where $c_\theta\equiv \cos\theta$, $s_\theta\equiv \sin\theta$. Using for the spinors a normalization such that
\begin{eqnarray}\label{spinor_norm}
u^{r\dagger}_{{\bf k},\sigma}\,u^{r'}_{{\bf k},\sigma}&=&2\omega_{{\bf k},\sigma}\delta_{rr'},\cr
u^{r\dagger}_{{\bf k},\sigma}\,v^{r'}_{-{\bf k},\sigma }&=&0,\ \quad \quad\quad \sigma=e,\mu,
\end{eqnarray}
together with the analogous relations $v^{r\dagger}_{-{\bf k},\sigma}\,v^{r'}_{-{\bf k},\sigma}=2\omega_{{\bf k},\sigma}\delta_{rr'}$ and $v^{r\dagger}_{-{\bf k},\sigma}\,u^{r'}_{{\bf k},\sigma}=0$, we obtain
\bea \label{4x4Bog_0}
&& \left[ \begin{tabular}{c} $a^r_{\G k,e}(0)$ \\ $b_{-\G k,e}^{r \dagger}(0)$
\\$a^r_{\G k,\mu}(0)$ \\ $b_{-\G k,\mu}^{r \dagger}(0)$ \end{tabular} \right] = \ \left[\begin{array}{cccc}
c_\theta\, A^{\G k}_{e 1}& \, c_\theta \,B^{\G k}_{e 1} &
s_\theta \,A^{\G k}_{e 2} &
\, s_\theta \,B^{\G k}_{e 2} \\
\, c_\theta \,C^{\G k}_{e 1} & c_\theta\, D^{\G k}_{e 1} & \, s_\theta
\,C^{\G k}_{e 2} & s_\theta \,D^{\G k}_{e 2}
\\
- s_\theta \,A^{\G k}_{\mu 1} & -\, s_\theta \,B^{\G k}_{\mu 1}& c_\theta
\,A^{\G k}_{\mu 2}
& \, c_\theta \,B^{\G k}_{\mu 2} \\
- \, s_\theta \,C^{\G k}_{\mu 1} & -
s_\theta\,
D^{\G k}_{\mu 1} & \, c_\theta\, C^{\G k}_{\mu 2}& c_\theta\, D^{\G k}_{\mu 2}
\end{array}\right]
\left[ \begin{tabular}{c} $\al^r_{\G k,1}$ \\ $\bt_{-\G k,1}^{r\dagger}$ \\ $\al^r_{\G k,2}$ \\ $\bt_{-\G k,2}^{r\dagger} $\end{tabular} \right] \, ,
\eea
with
\begin{eqnarray}\label{notation}
A^\G k_{\sigma i} & = &\frac{1}{2\sqrt{\omega_{{\bf k},\sigma}\omega_{{\bf k},i}}}u^{r\dagger}_{{\bf k},\sigma}\,u^{r}_{{\bf k},i},
\quad\quad
B^\G k_{\sigma i} =\frac{1}{2\sqrt{\omega_{{\bf k},\sigma}\omega_{{\bf k},i}}}u^{r\dagger}_{{\bf k},\sigma}\,v^{r}_{-{\bf k},i},\\
C^\G k_{\sigma i} & = &\frac{1}{2\sqrt{\omega_{{\bf k},\sigma}\omega_{{\bf k},i}}}v^{r\dagger}_{-{\bf k},\sigma}\,u^{r}_{{\bf k},i},\quad\quad
D^\G k_{\sigma i} =\frac{1}{2\sqrt{\omega_{{\bf k},\sigma}\omega_{{\bf k},i}}}v^{r\dagger}_{-{\bf k},\sigma}\,v^{r}_{-{\bf k},i}.
\end{eqnarray}
The exact expressions for the coefficients $ A^\G k_{\sigma i}, B^\G k_{\sigma i},C^\G k_{\sigma i},D^\G k_{\sigma i}$ are not important (although they can be easily calculated), it suffices to say that they are nonvanishing.
Now, using the Hamiltonian in the form \eqref{Ham_free}, we determine the time dependence of the operators (see \eqref{H_eq_ab}):
\bea \label{4x4Bog_t}
&& \left[ \begin{tabular}{c} $a^r_{\G k,e}(t)$ \\ $b_{-\G k,e}^{r \dagger}(t)$
\\$a^r_{\G k,\mu}(t)$ \\
$b_{-\G k,\mu}^{r \dagger}(t)$ \end{tabular} \right] = \ \left[\begin{array}{cccc}
c_\theta\, A^{\G k}_{e 1}e^{-i \, \om_{\G k,1} \, t}& \, c_\theta \,B^{\G k}_{e 1} e^{i \, \om_{\G k,1} \, t}&
s_\theta \,A^{\G k}_{e 2}e^{-i \, \om_{\G k,2} \, t} &
\, s_\theta \,B^{\G k}_{e 2}e^{i \, \om_{\G k,2} \, t} \\
\, c_\theta \,C^{\G k}_{e 1}e^{-i \, \om_{\G k,1} \, t} & c_\theta\, D^{\G k}_{e 1} e^{i \, \om_{\G k,1} \, t}& s_\theta
\,C^{\G k}_{e 2}e^{-i \, \om_{\G k,2} \, t} & s_\theta \,D^{\G k}_{e 2}e^{i \, \om_{\G k,2} \, t}
\\
- s_\theta \,A^{\G k}_{\mu 1}e^{-i \, \om_{\G k,1} \, t} & -\, s_\theta \,B^{\G k}_{\mu 1}e^{i \, \om_{\G k,1} \, t}& c_\theta
\,A^{\G k}_{\mu 2}e^{-i \, \om_{\G k,2} \, t}
& \, c_\theta \,B^{\G k}_{\mu 2} e^{i \, \om_{\G k,2} \, t}\\
- \, s_\theta \,C^{\G k}_{\mu 1}e^{-i \, \om_{\G k,1} \, t} & -
s_\theta\,
D^{\G k}_{\mu 1} e^{i \, \om_{\G k,1} \, t}& \, c_\theta\, C^{\G k}_{\mu 2}e^{-i \, \om_{\G k,2} \, t}& c_\theta\, D^{\G k}_{\mu 2}e^{i \, \om_{\G k,2} \, t}
\end{array}\right]
\left[ \begin{tabular}{c} $\al^r_{\G k,1}$ \\ $\bt_{-\G k,1}^{r\dagger}$ \\ $\al^r_{\G k,2}$ \\ $\bt_{-\G k,2}^{r\dagger} $\end{tabular} \right] \, .
\eea
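As a consistency check, for $m_{e\mu}=0$ one has $\theta=0$ and the frequencies and spinors with labels $(e,1)$ and $(\mu,2)$ coincide, so that $A^{\G k}_{e 1}=D^{\G k}_{e 1}=A^{\G k}_{\mu 2}=D^{\G k}_{\mu 2}=1$ and $B^{\G k}_{e 1}=C^{\G k}_{e 1}=B^{\G k}_{\mu 2}=C^{\G k}_{\mu 2}=0$; the matrix in \eqref{4x4Bog_t} then reduces to ${\rm diag}\,(e^{-i\om_{\G k,1}t},e^{i\om_{\G k,1}t},e^{-i\om_{\G k,2}t},e^{i\om_{\G k,2}t})$, i.e. the standard free time evolution.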
Then the solutions of the equations of motion \eqref{neuteq'}, for arbitrary time, have the form:
\bea\label{sol_t}
\nu_\si({\bf x},t)= \int\frac{d{\bf k}}{(2\pi)^{3/2}\sqrt{2\omega_{{\bf k},\sigma}}}\sum_{r} \, \lf[u_{\mathbf{k},\si}^r \, a^r_{\mathbf{k},\si}(t) \, e^{i{\bf k}\cdot {\bf x}} \ri. + \lf. v_{\mathbf{k},\si}^r \, b_{\mathbf{k},\si}^{r\dagger}(t) \, e^{-i{\bf k}\cdot {\bf x}} \ri],\ \ \
\si=e,\mu \, .
\eea
This completes the quantization of the interacting fields $\nu_e,\nu_\mu$, according to the rules of quantum field theory \cite{BD,Bog-Shirk}. Let us emphasize that the time dependence of the operators $a^r_{\mathbf{k},\si}(t)$ and $b_{\mathbf{k},\si}^{r\dagger}(t)$ in \eqref{sol_t} is only through phases of the type $e^{\pm i\omega_{{\bf k},i}t}$ and not $e^{\pm i\omega_{{\bf k},\sigma}t}$ (see eq. \eqref{4x4Bog_t}).
The question is whether the operators $a^r_{\mathbf{k},\si}(t)$ and $b_{\mathbf{k},\si}^{r\dagger}(t)$ with the expressions given by \eqref{4x4Bog_t} can be interpreted as annihilation and creation operators, respectively. Recalling the criteria expressed in \eqref{criterion1} and \eqref{criterion2}, the answer is definitely no, because of the presence of $e^{i \, \om_{\G k,i} \, t}$ in the expansion of $a^r_{\mathbf{k},\si}(t)$ and the presence of $e^{-i \, \om_{\G k,i} \, t}$ in the expansion of $b_{\mathbf{k},\si}^{r\dagger}(t)$. {\it Consequently, one cannot define new flavour vacua $|0 (t)\ran_{e,\mu}$ by requiring them to be annihilated by $a^r_{\mathbf{k},\si}(t)$ and $b^r_{\mathbf{k},\si}(t)$.} The operators appearing in the expansion \eqref{sol_t} do not generate flavour Fock space(s). The only vacuum that can be defined in the theory is by the quantization of the set of free massive fields $(\nu_1,\nu_2)$, namely $|0\ran_{1,2}$.
\subsection{Inconsistencies of a flavour Fock space scheme}\label{inconsist}
Despite the facts explained above, in the flavour Fock space scheme, new creation and annihilation operators were introduced. How is this possible? The answer is: by an illegitimate manipulation of formula \eqref{sol_t}, as will be shown below.
Let us examine how the expansion \eqref{sol_t} is written in formula (10) of the comment \cite{BS}:
\bea
\nu_\si({\bf x},t) = \frac{1}{\sqrt{V}}\sum_{\mathbf{k},r} \, e^{i{\bf k}\cdot {\bf x}} \, \lf[u_{\mathbf{k},\si}^r \, \alpha^r_{\mathbf{k},\si}(t) \, e^{-i \, \om_{\G k,\si} \, t} + v_{-\mathbf{k},\si}^r \, \beta_{-\mathbf{k},\si}^{r\dagger}(t) \, e^{i \om_{\G k,\si} \, t}\ri] \, ,\qquad
\si=e,\mu \, , \label{Fourierfieldf}
\eea
where $\om_{\G k,\si} = \sqrt{|\G k|^2+\mu_\si^2}$ and $\mu_\si$ are mass parameters which are (partly) specified by the requirement that in the limit when $m_{e\mu}=0$, they become identical to $m_\si$. This leads to an infinity of possibilities. (Incidentally, it was proved long ago \cite{CG} that the hypothesis that neutrinos produced or detected in charged-current weak interaction processes are described by flavor neutrino Fock states implies that measurable quantities depend on the arbitrary unphysical flavor neutrino mass parameters $\mu_\sigma$.) We shall consider in \eqref{Fourierfieldf} $\mu_e=m_e$ and $\mu_\mu=m_\mu$, in order to match the formula \eqref{sol_t} above.
The salient fact when comparing \eqref{sol_t} and \eqref{Fourierfieldf} is that the operators $\alpha^r_{\mathbf{k},\si}(t)$ and $\beta_{-\mathbf{k},\si}^{r\dagger}(t)$ were introduced in formula (10) of \cite{BS} by hand, in order to {\it force} their interpretation as annihilation and creation operators, respectively. The only way to obtain $\alpha^r_{\mathbf{k},\si}(t)$ and $\beta_{-\mathbf{k},\si}^{r\dagger}(t)$ is by taking the operators $a^r_{\mathbf{k},\si}(t)$ and $b_{-\mathbf{k},\si}^{r\dagger}(t)$ determined by \eqref{4x4Bog_t} and multiplying them ``conveniently'' by a phase:
\bea\label{sol_t'}
\nu_\si({\bf x},t)= \frac{1}{\sqrt{V}}\sum_{\mathbf{k},r} \, e^{i{\bf k}\cdot {\bf x}}\Big[u_{\mathbf{k},\si}^r \, \left(a^r_{\mathbf{k},\si}(t) \,e^{i \, \om_{\G k,\si} \, t}\right)e^{-i \, \om_{\G k,\si} \, t} + v_{-\mathbf{k},\si}^r \, \left(b_{-\mathbf{k},\si}^{r\dagger}(t) \, e^{-i \, \om_{\G k,\si} \, t}\right)e^{i \, \om_{\G k,\si} \, t} \Big],
\eea
such that
\bea\label{fallacy}
\alpha^r_{\mathbf{k},\si}(t)=a^r_{\mathbf{k},\si}(t)e^{i \, \om_{\G k,\si} \, t} ,\quad\quad \beta_{-\mathbf{k},\si}^{r\dagger}(t)=b_{-\mathbf{k},\si}^{r\dagger}(t)e^{-i \, \om_{\G k,\si} \, t} .
\eea
In this way, in \eqref{Fourierfieldf} the operator $\alpha^r_{\mathbf{k},\si}(t)$ comes as if in the positive-frequency part, while $\beta_{-\mathbf{k},\si}^{r\dagger}(t)$ appears in the negative-frequency part, in the hope that this will be conducive to their interpretation as annihilation and creation operators. However, this is not the case: the so-called ``creation operator'' $\beta_{\mathbf{k},\si}^{r\dagger}(t)$ cannot be written as a superposition of {\it time-independent} operators, therefore, by the criterion \eqref{criterion1}, it does not have the meaning of a creation operator \cite{Bog-Shirk}. Similarly it can be argued why $\alpha^r_{\mathbf{k},\si}(t)$ cannot be an annihilation operator\footnote{As an aside, in \cite{BS} formula (11), reproduced below, the operators $\alpha^r_{\mathbf{k},\si}(t)$ and $\beta_{-\mathbf{k},\si}^{r\dagger}(t)$ are given as:
\bea \label{4x4Bog_BS}
\left[ \begin{tabular}{c} $\alpha^r_{\G k,e}$ \\ $\beta_{-\G k,e}^{r \dagger}$
\\$\alpha^r_{\G k,\mu}$ \\ $\beta_{-\G k,\mu}^{r \dagger}$ \end{tabular} \right]
= \ \left[\begin{array}{cccc}
c_\theta\, \rho^{\G k}_{e 1}& i \, c_\theta \,\lambda^{\G k}_{e 1} &
s_\theta \,\rho^{\G k}_{e 2} &
i \, s_\theta \,\lambda^{\G k}_{e 2} \\
i \, c_\theta \,\lambda^{\G k}_{e 1} & c_\theta\, \rho^{\G k}_{e 1} & i \, s_\theta
\,\lambda^{\G k}_{e 2} & s_\theta \,\rho^{\G k}_{e 2}
\\
- s_\theta \,\rho^{\G k}_{\mu 1} & -i \, s_\theta \,\lambda^{\G k}_{\mu 1}& c_\theta
\,\rho^{\G k}_{\mu 2}
& i \, c_\theta \,\lambda^{\G k}_{\mu 2} \\ - i \, s_\theta \,\lambda^{\G k}_{\mu 1} & -
s_\theta\,
\rho^{\G k}_{\mu 1} & i \, c_\theta\, \lambda^{\G k}_{\mu 2}& c_\theta\, \rho^{\G k}_{\mu 2}
\end{array}\right]
\left[ \begin{tabular}{c} $\al^r_{\G k,1}$ \\ $\bt_{-\G k,1}^{r\dagger}$ \\ $\al^r_{\G k,2}$ \\ $\bt_{-\G k,2}^{r\dagger} $\end{tabular} \right] \, ,
\eea
where $\rho^\G k_{ab}=|\rho^\G k_{ab}|e^{i(\om_{\G k,a}-\om_{\G k,b})t}$, $\la^\G k_{ab}=|\la^\G k_{ab}|e^{i(\om_{\G k,a}+\om_{\G k,b})t}$, $c_\theta\equiv \cos\theta$, $s_\theta\equiv \sin\theta$ and
\begin{eqnarray*}
|\rho^\G k_{a b}| & \equiv & \cos\frac{\chi_a - \chi_b}{2}, \quad
|\lambda^\G k_{a b}| \ \equiv \ \sin\frac{\chi_a -
\chi_b}{2} \, , \non \\[2mm] \label{rholambda}
\chi_a & \equiv & \cot^{-1}\lf[\frac{k}{m_a}\ri] \, , \qquad m_a, \, m_b \ = \ m_1, \, m_2, \, \mu_e, \, \mu_\mu\, .
\end{eqnarray*}
The mismatch in the time dependence between \eqref{fallacy} and \eqref{4x4Bog_BS} indicates that formula (11) of \cite{BS} is incorrect.}.
Let us remark that, were the manipulation in \eqref{sol_t'} allowed, one could construct at will Fock spaces for {\it any} nontrivial interacting quantum field theory, by just multiplying the equation \eqref{criterion2} by $1=e^{i\omega_{\bf k} t}e^{-i\omega_{\bf k} t}$:
\be\label{criterion2'}
b^\dagger_{{\bf k},i}(t)=e^{i\omega_{\bf k} t}\left[\sum_\alpha e^{i(\omega_\alpha-\omega_{\bf k}) t} b^\dagger_{i\alpha}({\bf k})+\sum_\beta e^{-i(\omega_\beta+\omega_{\bf k}) t} b_{i\beta}({\bf k})\right],
\ee
where $\omega_{\bf k}=\sqrt{{\bf k}^2+\mu^2}$, with $\mu$ an arbitrary mass parameter. It is well known that for nontrivial interacting quantum field theories the Fock space representation does not exist (see, for example, Refs. \cite{SW, Strocchi}).
In conclusion, the procedure employed in the flavour Fock space scheme to identify creation and annihilation operators for interacting flavour fields in \cite{BS} is deceptive and the conclusion regarding the existence of flavour vacua is wrong.
\subsubsection* {Flavour vacuum and Coleman's theorem}
Still, let us assume for a moment that the flavour vacua $|0(t)\ran_{e,\mu}$ could be defined by
\be
\al^{r}_{\G k, \si}(t) \, |0(t)\ran_{e,\mu} \ = \ \bt^{r}_{\G k, \si}(t) \, |0(t)\ran_{e,\mu} \ = \ 0 \, ,\ \ \sigma=e,\mu,
\ee
and the vacuum at $t=0$ is chosen as the physical one and denoted $|0\ran_{e,\mu}$. We shall find some contradictions that such a scheme leads to.
Further, one can define the flavour charge operators
by
\bea
Q_{\nu_{\si}} (t) & = & \intx \,
:\nu_{\si}^{\dag}(x)\nu_{\si}(x):\cr
& = & \ \intk \, \lf(\al^{r\dag}_{\G k,\si}(t) \, \al^r_{\G k,\si}(t) \, - \, \bt^{r\dag}_{\G k,\si}(t) \, \bt^r_{\G k,\si}(t) \ri) \, , \quad \si=e,\mu \, , \label{QflavLept}
\eea
where $\nu_\sigma(x)$ are given in the mode expansion \eqref{Fourierfieldf}. The vacuum is invariant under the flavour charge global transformations generated by $Q_{\nu_{\si}} (0)$:
\be\label{flavour_inv_vac}
Q_{\nu_{\si}} (0)|0\ran_{e,\mu}=0.
\ee
In the flavour Fock space scheme, the fact that the flavour states of mixed neutrinos are eigenstates of the flavour operator \eqref{QflavLept} is regarded as a merit \cite{BCJV} (see also \cite{BS}).
However, the total Hamiltonian, including the Standard Model interactions and the mixed mass terms for the neutrinos, is by construction flavour number violating \cite{mohapatra,fukugita,giunti,bilenky,valle}.
Thus, in the flavour Fock space scheme (see \cite{BS}, Sect. 4) one has:
\begin{enumerate}
\item the physical flavour vacuum of the theory is invariant under flavour charge transformations;
\item the total Hamiltonian of the theory violates flavour symmetry.
\end{enumerate}
We assume by reductio ad absurdum that the above statements are true. Let us now recall the well-known theorem of Coleman \cite{Coleman}, which is summarized as {\it ``the invariance of the vacuum is the invariance of the world''}. Coleman proved that if the vacuum is invariant under the group generated by
the space integral of the time component of a local vector current, then the Hamiltonian is invariant
also. Therefore, from the physical vacuum being flavour invariant as in \cite{BS}, it would follow that the Hamiltonian of the theory is also flavour invariant, which is in contradiction with the obvious fact that the Hamiltonian is flavour violating by construction. The logical conclusion is that claim no.~1 above cannot be true, namely {\it the physical vacuum cannot be flavour invariant}.
\subsubsection*{Energy nonconservation }
The flavour vacuum is not the lowest eigenstate of the Hamiltonian:
\be
H|0(t)\ran_{e,\mu}\neq 0, \quad \forall t,
\ee
and a time dependent vacuum is not invariant under any of the external transformations involving the time, in particular under translation in time. This means that energy is not conserved in the interactions of the flavour neutrinos. (Ironically, neutrinos were introduced in the first place to save energy conservation!)
\subsection{Oscillating neutrino states \cite{AT_neutrino} cannot be reproduced in the flavour Fock space scheme}
In the comment \cite{BS} it is claimed that the oscillating neutrino states defined in \cite{AT_neutrino} can be reproduced in the flavour Fock space scheme, by taking $\mu_e=\mu_\mu=0$ in \eqref{Fourierfieldf}. Without going into the details of the procedure employed in \cite{AT_neutrino}, we will briefly show that there is no connection with the flavour Fock space scheme.
1. As mentioned earlier in Sect. \ref{inconsist}, the only allowed values for $\mu_e,\mu_\mu$ in \eqref{Fourierfieldf} are those which become identical to $m_\si$ in the limit when $m_{e\mu}=0$. Although there is an infinity of possibilities, the choice $\mu_e=\mu_\mu=0$ is not among them, consequently, this case does not belong to the flavour Fock space scheme. As a result, eqs. (13) in the comment \cite{BS} are wrong even within that scheme.
2. In addition, the operators used as creation operators in \cite{BS}, formula (14), cannot be regarded as creation operators in any Fock space, as they do not fulfill the general criterion \eqref{criterion1}.
3. It is a pure manipulation to use a sequence of wrong formulas in order to make the flavour Fock space scheme look formally as the states defined in \cite{AT_neutrino}.
4. The procedure developed in \cite{AT_neutrino} for defining oscillating neutrino states on the vacuum of the massive neutrinos is based on a genuine application of the method of unitarily inequivalent representations, by relating the field theory of massless Standard Model neutrinos with the field theory of massive neutrinos.
In this quantization prescription, {\it at a certain moment $t=0$ and only at that moment}, we impose the identification\footnote{Nota bene: The formulas \eqref{aa} are {\it not identical to} \eqref{mixt0}, though they look formally the same. The difference is that the fields $\psi_{\nu_e},\psi_{\nu_\mu}$ are solutions of the massless Dirac equations \eqref{aaaa}, and not of the coupled equations \eqref{neuteq'}.}
\bea
\label{aa}&&\nu_1 ({\bf x},0) = \psi_{\nu_e} ({\bf x},0)\cos\theta - \psi_{\nu_\mu}({\bf x},0)\sin\theta \, , \cr
&&\nu_{2}({\bf x},0) = \psi_{\nu_e}({\bf x},0)\sin\theta + \psi_{\nu_\mu} ({\bf x},0)\cos\theta \, ,
\eea
when the fields $\nu_1,\nu_2$ and $\psi_{\nu_e},\psi_{\nu_\mu}$ are solutions of the equations of motion
\bea\label{aaa}
&&\lf(i \ga^\mu \pa_\mu \ - \ m_j\ri)\nu_j(x) \ = \ 0 \, , \qquad j=1,2 \, ,\\
&&i \ga^\mu \pa_\mu \ \psi_{\nu_\sigma}(x) \ = \ 0 \, , \qquad \sigma=e,\mu \, .\label{aaaa}
\eea
The set of equations \eqref{aa}, \eqref{aaa} and \eqref{aaaa} {\it is compatible}, as it means that the solution of a massive Dirac equation and the solution of a massless Dirac equation coincide at a given time moment, but otherwise evolve according to their respective Hamiltonians. Such a set of equations is allowed by the method of unitarily inequivalent representations and similar equations are encountered in the papers of Nambu and Jona-Lasinio \cite{NJL} (see also \cite{UTK, Umezawa-book}) or Haag \cite{Haag}. The reasons behind the equations \eqref{aa} and \eqref{aaa} are amply explained in Ref. \cite{AT_neutrino} and we feel that it is unnecessary to repeat here those detailed explanations. It suffices to say that \eqref{aa} and \eqref{aaa} are required in order to diagonalize the Hamiltonian corresponding to the flavour number violating Lagrangian \eqref{mixlag}, and to establish the Bogoliubov transformations between the creation and annihilation operators of the massive neutrino fields (corresponding to the observable quasiparticles) and the operators of the massless flavour neutrino fields (corresponding to the unobservable bare particles). In the end, the coherent oscillating particle states are defined by the application of the massless neutrino creation operators to the vacuum of the massive neutrinos, and their exact form is given in eq. \eqref{3-nu_state}.
\section{Conclusions}
We have proven explicitly that {\it flavour neutrino Fock spaces cannot be constructed}, when the flavour neutrino fields are linear combinations of massive neutrino fields with different masses as in \eqref{PontecorvoMix}.
Indeed, the equations of motion for the free fields $(\nu_1,\nu_2)$ are generated by the same Hamiltonian as the equations of motion for the interacting fields $(\nu_e,\nu_\mu)$. Thus, the two sets of fields are related by a unitary change of variable, therefore they represent the same degrees of freedom. The vacuum of the theory described by the Lagrangian \eqref{mixlag} is unique. This already invalidates the claims made in the comment \cite{BS}.
Furthermore, the flavour Fock space scheme cannot be viable by very general arguments, such as Coleman's theorem \cite{Coleman}, since it leads to the absurd conclusion that the Hamiltonian of mixed neutrino fields is, simultaneously, flavour number violating and flavour number symmetric.
Our work on the formulation of oscillating neutrino states \cite{AT_neutrino} is completely disconnected from the so-called flavour Fock space scheme described in the comment \cite{BS} and in the references therein.
The assumption of the existence of time-dependent flavour vacua is based on a fallacious definition of creation and annihilation operators in interacting theories \eqref{fallacy}, in contradiction with the principles of quantum field theory \cite{BD,Bog-Shirk, SW, Strocchi}. Additionally, such an assumption would lead to sensational consequences, such as the energy nonconservation in interactions, along with the breaking of several other symmetries. It would have been really a great surprise if the trivial system of two non-interacting massive Dirac fields were to hide inside so much drama.
\section{Introduction}
The main goal of this work is to study
the transmission problem in isotropic linear elasticity. Let $\Omega\subset\R^3$ be a smooth bounded domain.
Let $\Gamma_1,\dots, \Gamma_k$ be closed disjoint smooth surfaces (interfaces) splitting $\Omega$ into subdomains $\Omega_k$ with exterior boundary $\Gamma_{k-1}$ (with $\Gamma_0:=\bo$) and interior one $\Gamma_k$, see Figure~\ref{pic_el_waves}, left. Assume that the density $\rho$ and the Lam\'e parameters $\lambda$, $\mu$ are smooth up to those surfaces with possible jumps there. We also assume that at every point, at least one coefficient has a non-zero jump. We impose the following transmission conditions
\be{tr}
[u]= 0, \quad [Nu]=0 \quad\text{on $\Gamma_j$, $j=1,\dots,k$},
\ee
where $[v]$ stands for the jump of $v$ from the exterior to the interior across any of those surfaces, and $Nu$ is the normal component of the stress tensor (the traction), see \r{2a1}.
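For the reader's convenience, we recall the standard isotropic expression for the traction: $Nu=\lambda(\nabla\cdot u)\,n+2\mu\,\varepsilon(u)\,n$, where $\varepsilon(u)=\frac12\left(\nabla u+(\nabla u)^T\right)$ is the strain tensor and $n$ is the unit normal.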
We are motivated by the isotropic elastic model of the Earth where the density and the Lam\'e parameters jump across the boundary between the crust and the mantle, etc.
We study the time-dependent elastic system, see \r{el_eq}.
The first goal of this paper is to describe qualitatively the microlocal behavior of solutions of this problem.
At any interface $\Gamma_i$, an incoming S or P wave can generate two reflected waves, one S wave and one P wave through mode conversion and two transmitted ones. Then each branch can generate four more, etc., see Figure~\ref{pic_el_waves}. In some cases, there might be a full internal reflection for one or both of the waves, and there could be no transmitted or reflected waves of a certain kind. In fact, the missing waves would be evanescent modes.
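Recall that the tangential component of the codirection is preserved across each $\Gamma_i$. With the P and S speeds $c_P=\sqrt{(\lambda+2\mu)/\rho}$ and $c_S=\sqrt{\mu/\rho}$ (taken on the appropriate side of the interface), this yields the elastic version of Snell's law, $\sin\theta_P/c_P=\sin\theta_S/c_S$, which determines the directions of the reflected and transmitted rays in Figure~\ref{pic_el_waves} and predicts when full internal reflection occurs.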
\begin{figure}[!ht]
\includegraphics[page=8,scale=1]{DN_earth_layers_pics}
\caption{Left: the domain $\Omega$ and the layers. Right: Propagation of rays from a single source and direction. P waves are denoted with a solid line; S waves are dotted.
}\label{pic_el_waves}
\end{figure}
While works on geometric optics for the elasticity system exist (no transmission) \cite{Brytik_11, HansenUhlmann03, Rachele_2000, Rachele00, Rachele03,SUV_elastic, Bhattacharyya_18}, a comprehensive analysis of the transmission problem in linear elasticity has not been done to the authors' knowledge. In the case of a flat surface and constant coefficients, some cases have been analyzed in the geophysics literature, see, e.g., \cite{Petrashen_seismic_I, Petrashen_seismic_II, aki2002quantitative, Sheriff_Seismology, slawinski2003seismic}. In that case, if there is no full internal reflection, one looks for solutions in terms of potentials to reduce the number of variables; and the potentials of the four waves corresponding to an incoming one solve a system which decouples into a $4\times4$ and a $2\times2$ one, see also \r{u_C4} and \r{u_C4a}. Those equations were derived by Knott \cite{Knott1899} and Zoeppritz \cite{Zoeppritz19} more than a century ago, see also \cite{aki2002quantitative}. In a recent paper \cite{caday2019recovery}, the hyperbolic-hyperbolic (HH) case is analyzed for variable $\lambda(x)$, $\mu(x)$ and $\rho=1$, but the construction for a curved boundary is partial only. The (HH) case is characterized by the wavefront of the Cauchy data on $\Gamma$: it could belong to projected S and P waves on either side of it, and in particular, there are no evanescent modes, see Section~\ref{sec_el_bvp}. This is just one of the many cases since we may have full internal reflection of some or both waves on one or both sides of $\Gamma$; and mode conversion to evanescent modes, see Section~\ref{sec_Tr_summary} for a summary. The most general study we are aware of is \cite{Yamamoto_elastic_89} where the coefficients are constant but cases other than the (HH) one are considered, even though not as extensively as we do in this paper.
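At normal incidence, for example, there is no mode conversion, and for displacement amplitudes (with incidence from the first medium) the P--P reflection and transmission coefficients reduce to the familiar impedance form $R=(Z_1-Z_2)/(Z_1+Z_2)$, $T=2Z_1/(Z_1+Z_2)$, where $Z_i=\rho_i c_{P,i}$; the general Knott--Zoeppritz coefficients specialize to this in that limit.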
We analyze the general case of variable coefficients and a curved interface in all cases, away from glancing rays. We are interested in two main questions: is the problem well posed microlocally; and (control) can we create every configuration on one side with suitably chosen waves on the other. By doing that, we also compute the principal parts of the reflected and the transmitted waves. The microlocal well posedness reduces to showing the ellipticity of some \PDO\ system on $\Gamma$ with not particularly simple looking entries. Its solution serves as initial conditions for the corresponding transport equations for the hyperbolic or the evanescent modes. In the flat, constant coefficients case, this system is actually the computation giving us the whole solution. Going back to the general case, in the (HH) microlocal region, we have four outgoing waves, each one being 3D vector-valued. This gives us a $12\times12$ \PDO\ system for showing well posedness. If we allow both S and P waves coming from both sides, we would have a $12\times 24$ system which we want to solve for some group of variables. The control question is reduced to solving the same system with a rearrangement of the unknowns: we are given the waves on one side and want to solve for the waves on the other.
Doing this analysis with brute force does not seem to be a promising approach. Instead, we look for inspiration in the geophysics (and the existing math) literature using the flat constant coefficient case as a starting point. We express the P and the S waves in terms of potentials, as the divergence and as the curl of such potentials on a principal symbol level first; and we extend this to an arbitrary order. We adapt this to the boundary value problem. Having such microlocal mode separation, we also split the S waves in the SV (shear-vertical) and SH (shear-horizontal) waves. This decomposition is valid on $\Gamma$ only, and depends on the point (and the codirection). Then we reduce those systems to more manageable decoupled $4\times4$ plus $2\times2$ ones for the outgoing solutions given the incoming ones; their extended versions are $4\times 8$ plus $2\times 4$ ones, see \r{main_system} and \r{2x2}. If the boundary is flat and the coefficients are constant, those are exactly Knott's equations \cite{Knott1899}. Their ellipticity, needed to show well posedness, turns out to be a consequence of energy preservation (even though the determinant can be computed and analyzed \cite{aki2002quantitative}), another observation due to Knott. Ellipticity needed to show control can be verified easily and follows from the microlocal well posedness of the boundary value Cauchy problem.
We do this analysis in all microlocal cases with some or even all waves being evanescent; in that case we call them modes. The corresponding matrix symbols do not need to be recomputed; we just need to be careful which imaginary square roots to choose. Ellipticity based on energy preservation needs modifications though. Evanescent waves do not carry (high frequency) energy on the principal symbol level, at least.
We do such analysis for the boundary value problem for the outgoing solutions as well, with Dirichlet or Neumann, homogeneous or not, boundary conditions. We also analyze the microlocal boundary value Cauchy problem. We start with the (principally scalar) acoustic equation, for two reasons: it is a needed ingredient in the analysis of the elastic system, and SH waves behave as acoustic ones (no mode conversion).
We also study the surface waves propagating along the boundary (Rayleigh waves) or along an internal interface $\Gamma$ (Stoneley waves). Taylor \cite{Taylor-Rayleigh} characterized Rayleigh waves as a propagation of singularities phenomenon when $n=2$ and $\bo$ is flat, and he also mentions that the analysis applies to the general case as well. The existence of such waves is due to the lack of ellipticity of the Dirichlet-to-Neumann (DN) operator in the elliptic region on $\bo$ and in the elliptic-elliptic one on an internal interface. Restricted to the surface $\bo$ or $\Gamma$, they solve a system of real principal type; and the solution extends as an evanescent one in $\bar \Omega$. Yamamoto \cite{Yamamoto_elastic_89} viewed Stoneley waves in a similar fashion. A more detailed analysis of the Rayleigh and the Stoneley waves will appear in a work of Y.~Zhang.
We also present an application of this analysis to the inverse problem of recovering the coefficients from the outgoing DN map. We recover first the lens relation associated with incoming S and P waves in the first layer $\Omega_1$; then we use the recent results by the authors \cite{SUV_localrigidity} about local recovery of a sound speed (or a conformal factor) from localized travel times. By \cite{Bhattacharyya_18}, we can recover $\rho$ in $\Omega_1$ as well, therefore we can recover all three coefficients $\mu$, $\lambda$ and $\rho$ there.
In \cite{SUV_localrigidity} we prove conditional H\"older stability as well, which makes this approach for the inverse problem in this paper potentially stable as well, when it can be applied.
In the case of no internal interfaces, this was done in \cite{SUV_elastic}.
The inverse problem for transversely anisotropic media is studied in \cite{Vasy2019recovery}. The presence of interfaces however complicates the geometry considerably, see Figure~\ref{pic_el_waves} for the recovery of the coefficients in the deeper layers. The lens relation corresponding to a single S or P wave (ray) is multi-valued in general and there is no direct way to tell which branch is coming from which layer, roughly speaking. This makes the inverse problem much different. An essential difficulty in following this approach is that there could be totally internally reflected rays on the interior side of one interface which never get out, not even through mode conversion. Then they cannot be generated by rays from the exterior (by ``earthquakes''). We show that if there is no total internal reflection of S waves on the interface $\Gamma_1$ (from the interior), we can recover $c_s$ below it. This is more general than the result in \cite{caday2019recovery} where $\rho=1$, and there is the implicit assumption that there is no full reflection of S \textit{and} P waves. Since we do not recover all three coefficients below the first interface, we use arguments based only on the geometry and the directions of the polarization, which depend on the speeds alone. Next, we also show that if there is no total internal reflection of $P$ waves as well, one can recover $c_p$ in $\Omega_2$. Those arguments can be used to get even deeper into $\Omega$ with the appropriate assumptions on the speeds.
\section{Preliminaries}
\subsection{The elastic system}
The isotropic elastic system in a smooth bounded domain $\Omega\subset \R^3$ is described as follows. The elasticity tensor is defined by
\[
c_{ijkl} = \lambda \delta_{ij} \delta_{kl} +\mu(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}),
\]
where $\lambda$, $\mu>0$ are the Lam\'e parameters.
Assume for now that the coefficients $\lambda$, $\mu$ and $\rho$ are smooth in $\bar \Omega$. The elastic wave operator is given by
\[
(Eu)_i = \rho^{-1}\sum_{jkl} \partial_j c_{ijkl} \partial_l u_k,
\]
where $\rho>0$ is the density and the vector function $u$ is the displacement.
The corresponding elastic wave equation is given by
\be{el_eq}
u_{tt}-Eu=0,
\ee
see, e.g., \cite{slawinski2003seismic}.
The stress tensor $\sigma_{ij}(u)$ is defined by
\be{1s0}
\sigma_{ij}(u) = \lambda \nabla\cdot u\delta_{ij} + \mu(\partial_j u_i + \partial_i u_j).
\ee
Note that $Eu = \rho^{-1}\delta\sigma(u)$, where $\delta$ is the divergence of the 2-tensor $\sigma(u)$.
The Dirichlet boundary condition for $E$ is prescribing $u$ on the boundary; while the natural Neumann boundary condition is to prescribe the normal components of the stress tensor
\be{2a1}
Nu:= \sum_j \sigma_{ij}(u)\nu^j\big|_{\bo},
\ee
where $\nu$ is the outer unit normal on $\bo$. This is the operator appearing in the Green's formula \r{9G} for $E$ but also has the physical meaning as the infinitesimal deformation of the material in normal direction.
Let $\Gamma$ be a smooth surface where the coefficients $\rho$, $\lambda$, $\mu$ may jump. The physical transmission conditions across $\Gamma$ are the following. First, kinematic ones: the displacements $u$ on both sides of $\Gamma$ should match (no slipping of the material w.r.t.\ each other); and second, dynamical ones: the normal components $Nu$ on both sides should match (same traction). Therefore, if we declare one side of $\Gamma$ external and the other one internal, and denote by $[u]_\Gamma$ the jump of $u$ across $\Gamma$ from the exterior to the interior, we obtain the transmission conditions \r{tr} on $\Gamma$. Note that in $[Nu]$, the operator $N$ depends on $\rho$, $\mu$ and $\lambda$ and has different coefficients on each side of $\Gamma_j$.
The operator $E$ is symmetric on $L^2(\Omega;\mathbf{C}^3,\rho\,\d x)$. It has a principal symbol
\be{s0}
\sigma_p(-E)v = \frac{\lambda+\mu}{\rho} \xi (\xi\cdot v) + \frac{\mu}{\rho} |\xi|^2 v,\quad v\in\mathbf{C}^n,
\ee
which can be also written as
\be{s0'}
\sigma_p(-E)v = \frac{\lambda+2\mu}{\rho} \xi (\xi\cdot v) + \frac{\mu}{\rho} \left(|\xi|^2 -\xi\xi\cdot\right)v.
\ee
Taking $v=\xi$ and $v\perp\xi$, we recover the well known fact that $\sigma_p(-E)$ has eigenvalues $c_p^2|\xi|^2$ and $c_s^2|\xi|^2$ with
\be{speeds}
c_p= \sqrt{(\lambda+2\mu)/\rho}, \quad c_s = \sqrt{\mu/\rho}
\ee
of multiplicities $1$ and $2$ and eigenspaces $\R\xi$ and $\xi^\perp$, respectively.
We have $c_s<c_p$. Those are known as the speeds of the P waves and the S waves, respectively.
The eigenspaces correspond to the polarization of those waves. The characteristic variety $\det \sigma_p(\partial_t^2-E) =0$ is the union of $\Sigma_p := \{\tau^2=c_p^2|\xi|^2\}$ and $\Sigma_s := \{\tau^2=c_s^2|\xi|^2\}$, each one having two connected components (away from the zero section), determined by the sign of $\tau$.
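For instance, for a Poisson solid (the commonly used model case $\lambda=\mu$), \r{speeds} gives
\[
c_p^2/c_s^2=(\lambda+2\mu)/\mu=3, \quad\text{i.e.,}\quad c_p=\sqrt3\,c_s\approx 1.73\,c_s,
\]
so the P wave front always runs ahead of the S one by a fixed ratio in that case.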
Let $u$ solve the elastic wave equation
\be{1}
\begin{cases}
u_{tt} -Eu &=0\quad \text{in $\R\times\Omega$},\\
\ u|_{\R\times\bo} &=f,\\
\ \ \ \ u|_{t<0}&=0,
\end{cases}
\ee
with $f$ given so that $f=0$ for $t<0$ and all coefficients smooth in $\Omega$ (no transmission interfaces). The (outgoing) Dirichlet-to-Neumann $\Lambda$ map is defined by
\be{2a}
(\Lambda f)_i = (Nu)_i = \sum_j \sigma_{ij}(u)\nu^j\big|_{\bo},
\ee
see \r{2a1}, where $\nu$ is the outer unit normal on $\bo$, and $\sigma_{ij}(u)$ is the stress tensor \r{1s0}.
\subsection{An invariant metric based formulation} \label{sec_m}
We have
\be{E}
(Eu)_i = \rho^{-1} ( \partial_i\lambda\partial_j u_j + \partial_j\mu \partial_j u_i + \partial_j\mu \partial_i u_j ),
\ee
where we sum over repeating indices even if they are both lower or upper.
This can also be written in the following divergence form
\be{L0}
Eu = \rho^{-1}( \d \lambda \delta u+ 2 \delta \mu \d^s u ),
\ee
where $\d^su=(\partial_ju_i+ \partial_iu_j)/2$ is the symmetric differential, and $\delta= -(\d^s)^*$ is the divergence of symmetric fields with the adjoint in $L^2$ sense.
To prepare ourselves for changes of variables needed in the analysis near surfaces that we will flatten out, we will write $E$ in an invariant way in the presence of a Riemannian metric $g$. We view $u$ as a one-form (a covector field) and we define the symmetric differential $\d^s$ and the divergence $\delta$ by
\[
(\d^s u)_{ij}= \frac12\left(\nabla_i u_j+\nabla_j u_i\right), \quad (\delta v)_i = \nabla^j v_{ij},\quad \delta u = \nabla^iu_i,
\]
where $\nabla$ is the covariant differential, $\nabla^j = g^{ij}\nabla_i$, $u$ is a covector field, and $v$ is a symmetric covariant tensor field of order two. Note that $\d^s$ increases the order of the tensor by one while $\delta$ decreases it by one. Then we define $E$ by \r{L0}. We still have $\delta= -(\d^s)^*$, where the adjoint is in the $L^2(\Omega,\d\Vol)$ space of contravariant tensor fields, see, e.g., \cite{Sh-book}.
The stress tensor \r{1s0} is given by
\be{1s2}
\sigma(u) = \lambda (\delta u)g + 2\mu \d^s u,
\ee
and then $Eu=\rho^{-1}\delta\sigma(u)$.
The Neumann boundary condition $Nu$ at $\bo$ is still given by prescribing the values of $\sigma_{ij}(u)\nu^j$ on it as in \r{2a}.
The operator $E$, defined originally on
$C_0^\infty(\Omega)$ extends to a self-adjoint operator in $L^2(\Omega, \rho\, \d\!\Vol)$. This extension is the one satisfying the zero Dirichlet boundary condition on $\R\times\bo$. In particular, this shows that the mixed problem \r{1} is solvable, at least for regular enough data $f$, since one can always extend $f$ inside and reduce the problem to solving one with a zero boundary condition and a non-zero source term; and then use Duhamel's principle for the latter.
The principal symbol of $E$ in the metric setting is still given by \r{s0} with the proper interpretation of the dot product there:
\be{s0g}
(\sigma_p(-E)v)_i = \frac{\lambda+\mu}{\rho} \xi_i \xi^j v_j + \frac{\mu}{\rho} |\xi|_g^2 v_i,\quad v\in\C^n,
\ee
where $\xi^j=g^{jk}\xi_k$ as usual.
In particular, the speeds $c_p$ and $c_s$ remain as in \r{speeds}. The eigenspaces of the symbol are still $\R\xi$ and $\xi^\perp$, the latter being the covectors normal to $\xi$.
Notice that under coordinate changes, the coordinate expression for $u$ changes as well, as a covector.
We recall that the cross product on an oriented three dimensional Riemannian manifold is defined in the following way. If $\xi$ and $\eta$ are covectors at some fixed point $x$, then $\xi\times \eta$ is defined as the unique covector satisfying
\[
\langle \xi\times \eta,\zeta\rangle = \omega(g^{-1} \xi,g^{-1}\eta,g^{-1}\zeta),
\]
where $\langle\cdot, \cdot\rangle$ is the metric inner product of covectors, and $\omega$ is the volume form on the tangent bundle. To compute it in local coordinates, let $\alpha=\xi\times\eta$. Then we get
\[
g^{ij}\alpha_i\zeta_j = (\det g)^{-\frac12}\det(\xi,\eta,\zeta),
\]
where the latter is the determinant of the matrix with the indicated columns (also, the Euclidean volume form of them). Therefore, $(\det g)^{1/2}g^{-1}\alpha$ equals the Euclidean cross product
\[
(\det g)^{1/2}g^{-1}\alpha = (\xi_2\eta_3-\xi_3\eta_2, -\xi_1\eta_3+\xi_3\eta_1, \xi_1\eta_2-\xi_2\eta_1).
\]
This yields
\be{cross}
\xi\times\eta = (\det g)^{-\frac12}g (\xi_2\eta_3-\xi_3\eta_2, -\xi_1\eta_3+\xi_3\eta_1, \xi_1\eta_2-\xi_2\eta_1).
\ee
Similarly, the curl $\nabla\times u$ of a covector field $u$ is defined as the Hodge star of the exterior derivative $\d u $, and we have
\be{curl}
\nabla\times u = (\det g)^{-\frac12}g (\partial_2u_3-\partial_3 u_2, -\partial_1 u_3+\partial_3u_1, \partial_1 u_2-\partial_2u_1).
\ee
The divergence of $u$ is given by $\delta u = \nabla^i u_i$ and in particular, $\delta\nabla\times u=0$. We will use the notation $\nabla \cdot u$ for $\delta u$ as well.
One can verify that the double cross product of covectors in the metric still satisfies $\xi\times (\eta\times \zeta) = \langle \xi,\zeta\rangle \eta -\langle \xi,\eta\rangle \zeta$, as in the Euclidean case.
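As a quick check of \r{cross}: by its definition, $\langle\xi\times\eta,\zeta\rangle$ is antisymmetric in $(\xi,\eta,\zeta)$, hence
\[
\langle \xi\times\eta,\xi\rangle=\langle \xi\times\eta,\eta\rangle=0,
\]
i.e., $\xi\times\eta$ is orthogonal to both factors in the metric inner product, as in the Euclidean case; we use this later when choosing bases of $\xi^\perp$ of the form $\xi\times e_{1,2}$.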
\subsection{Existence of dynamics}
We assume now, as in the rest of the paper, that $\Omega$ can be expressed as a union of layers as explained in the Introduction and $\lambda$, $\mu$ and $\rho$ are smooth up to their boundaries with possible jumps at them. We also assume that $E$ is the metric based operator \r{L0}.
\begin{lemma}
Let $\lambda,\mu,\rho$ be as above. Then $E$, defined originally on functions smooth up to $\Gamma_1,\dots \Gamma_k$ and $\bo$, satisfying the transmission conditions \r{tr}, and zero boundary conditions on $\bo$, extends to a self-adjoint operator in $L^2(\Omega, \rho\,\d\!\Vol)$.
\end{lemma}
\begin{proof} We start with Green's formula. Let $D$ be a bounded domain with a smooth boundary so that $\lambda,\mu,\rho$ are smooth in $\bar D$. Then
\be{9G}
\int_D \langle Eu,v \rangle\rho \,\d\!\Vol- \int_D \langle u,Ev \rangle\rho\,\d\!\Vol = \int_{\p D} \left( \langle Nu , v\rangle- \langle u, Nv \rangle \right)\d A,
\ee
where $\d A$ is the area measure in $\p D$ induced by $g$.
To prove it, write
\[
\int_D \langle Eu,v \rangle\rho \,\d\!\Vol
= -\int_D \big( \lambda \langle \delta u,\delta v\rangle + 2\mu \langle\d^s u, \d^s v\rangle \big)\d\!\Vol+ \int_{\p D}\sigma_{ij}(u)\nu^j v^i \, \d A,
\]
since $Eu=\rho^{-1}\delta\sigma(u) $.
The last integral equals
\[
\int_{\p D} \langle Nu, v\rangle \, \d A.
\]
Switch $u$ and $v$ and subtract the resulting formulas to prove \r{9G}.
Assume now that $u$ and $v$ are smooth up to the interfaces, may jump there and satisfy the transmission conditions \r{tr}.
We apply \r{9G} to $\Omega\setminus \Omega_1$, $\Omega_1\setminus \Omega_2$, \dots, $\Omega_k$ and sum up the results. Note that the outer normal to $\Omega\setminus \Omega_1$ at $\Gamma_1$ is the inner one at the same $\Gamma_1$ when viewed from $\Omega_1\setminus \Omega_2$, etc. As a result, we get \r{9G} in $\Omega$ as well, despite the discontinuities because by the transmission conditions \r{tr}, all contributions from $\Gamma_1,\dots,\Gamma_k$ cancel. By the zero boundary condition on $\bo$, the r.h.s.\ of \r{9G} vanishes. Therefore, $E$ is symmetric.
To show that there is a natural self-adjoint extension, it is enough to show that the quadratic form $(-Eu,u)$ is bounded from below. For every smooth $u$ satisfying the Dirichlet boundary condition, by \r{L0} we have
\[
(-Eu,u)= \int_\Omega \left( \lambda |\delta u|^2 + 2\mu |\d^s u|^2 \right)\d\!\Vol,
\]
which is non-negative.
We can write the Cauchy problem at $t=0$ for \r{el_eq} with Dirichlet boundary conditions now as
\[
\partial_t (u_1,u_2) = \mathbf{E}(u_1,u_2) := (u_2,Eu_1), \quad (u_1,u_2)|_{t=0}=(f_1,f_2).
\]
The operator $\mathbf{E}$ is skew-adjoint on the energy space $\mathcal{H}$ with norm
\[
\|(f_1,f_2)\|^2_\mathcal{H} = \int_\Omega \left( \lambda |\delta f_1|^2 + 2\mu |\d^s f_1|^2 +\rho|f_2|^2\right)\d\!\Vol.
\]
Then by Stone's theorem, the Cauchy problem at $t=0$ for \r{el_eq} with Dirichlet boundary conditions is solved by a unitary group. Problem \r{1} can be solved for regular enough $f$ by extending $f$ inside $\Omega$ and reducing it to a problem with a source but with homogeneous Dirichlet boundary conditions; and solving it by Duhamel's formula.
\end{proof}
\subsection{The Neumann boundary operator}
Let $x=(x',x^3)$ be semigeodesic coordinates near a given surface $\Gamma$, with $x^3>0$ on one side of it, defining the orientation in the metric setup.
In those coordinates, the metric satisfies $g_{\alpha 3}=\delta_{\alpha 3}$ for $1\le\alpha\le 3$. Then, see also \cite{SUV_elastic},
\[
(Nu)_j = \lambda (\delta u) \delta_{j3} + \mu\left( \partial_3 u_j + \partial_j u_3- 2 \Gamma_{j3}^ku_k\right).
\]
Therefore,
\be{Nu}
\begin{split}
(Nu)_j &=
\mu (\partial_3u_{j}+ \partial_j u_{3})-2\mu \Gamma_{j3}^\nu u_\nu \ ,\quad j=1,2,\\
(Nu)_3 &= \lambda (\partial_1 u_{1} +\partial_2 u_{2}) + (\lambda +2\mu)\partial_3 u_{3} ,
\end{split}
\ee
where $\nu=1,2$ and we used the fact that $\Gamma_{33}^k= \Gamma_{3k}^3=0$.
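For orientation, in the model case of a Euclidean metric and a flat $\Gamma$, the Christoffel symbols vanish and \r{Nu} reduces to the classical expression for the traction on the plane $x^3=0$:
\[
(Nu)_j=\mu(\partial_3u_j+\partial_ju_3),\quad j=1,2,\qquad (Nu)_3=\lambda\,\nabla\cdot u+2\mu\,\partial_3u_3.
\]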
\section{Geometric optics for the wave equation with \PDO\ lower order terms} \label{sec_GO}
We recall the well known geometric optics construction for a hyperbolic pseudo-differential equation generalizing the acoustic wave equation, see, e.g., \cite{Taylor-book0, Treves}. We allow the equation to be a system but we still assume that the principal part is scalar, see also \cite{Dencker_polar}. In this generality, the construction is done in \cite[VIII.3]{Taylor-book0}. We are not going to formulate results about the propagation of the polarization set which can be derived from \cite{Dencker_polar}. The reason to study the acoustic equation in this generality is two-fold. First, the elastic system decomposes into such pseudo-differential equations; and second, SH waves propagate like acoustic ones as we show below.
\subsection{The Cauchy Problem with data at $t=0$} \label{sec_Ac_Cauchy}
Our interest is in the acoustic wave equation with lower order classical pseudo-differential term $A\in\Psi^1$
\be{ac}
(\partial_t^2- c^2 \Delta_{{\gzero}}+A)u=0
\ee
with Cauchy data $(u,\partial_t u)=(h_1,h_2)$ at $t=0$. Here, ${\gzero}$ is a Riemannian metric that we include in order to have the flexibility to change coordinates easily; and $\Delta_{{\gzero}}$ is the Laplace-Beltrami operator. The distribution $u$ is vector valued and $A$ is a matrix valued \PDO.
Up to lower order terms, $c^2\Delta_{{\gzero}}$ coincides with $\Delta_{c^{-2}{\gzero}}$.
The characteristic variety $\Sigma$ is given by $\tau^2=c^2|\xi|_{{\gzero}}^2$ and has two connected components $\Sigma_\pm$ corresponding to $\tau<0$ and $\tau>0$, away from the zero section (notice the convention that $\tau<0$ corresponds to $\Sigma_+$).
We are looking for solutions of the form
\be{o1}
\begin{split}
u(t,x) = (2\pi)^{-n} \sum_{\sigma=\pm}\int e^{\i\phi_\sigma(t,x,\xi)} &\Big( a_{1,\sigma}(t,x,\xi) \hat h_1(\xi)\\
&+ a_{2,\sigma}(t,x,\xi) |\xi|_{{\gzero}}^{-1}\hat h_2(\xi)\Big) \d \xi,
\end{split}
\ee
modulo terms involving smoothing operators of $h_1$ and $h_2$, defined in some neighborhood of $t=0$, $x=x_0$ with some $x_0$. This parametrix differs from the actual solution by a smoothing operator applied to $\mathbf{h}=(h_1,h_2)$, as it follows from standard hyperbolic estimates. The signs $\sigma=\pm$ correspond to solutions with wave front sets in $\Sigma_\mp$, respectively as it can be seen by applying the stationary phase lemma.
Here, $a_{j,\sigma}$ are classical amplitudes of order zero depending smoothly on $t$ of the form
\be{a}
a_{j,\sigma} \sim \sum_{k=0}^\infty a_{j,\sigma}^{(k)},\quad \sigma=\pm, \; j=1,2,
\ee
where $a_{j,\sigma}^{(k)}$ is homogeneous in $\xi$ of degree $-k$ for large $|\xi|$.
The phase functions $\phi_\pm$ are positively homogeneous of order $1$ in $\xi$ solving the eikonal equations
\be{o2}
\partial_t\phi_\pm\pm c(x)|\nabla_x\phi_\pm|_{{\gzero}}=0, \quad
\phi_\pm|_{t=0}=x\cdot\xi.
\ee
Such solutions exist locally only, in general. While the principal symbol alone determines the eikonal equations and therefore the geometry, the subprincipal symbol of \r{ac}, which depends on the principal symbol of $A$, affects the leading amplitude below.
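For instance, if $c$ is constant and ${\gzero}$ is Euclidean, the phases are global and explicit,
\[
\phi_\pm=x\cdot\xi\mp c|\xi|t,
\]
and \r{o1} can be taken with amplitudes independent of $(t,x)$; it then sums to the exact solution $u=\cos(tc|D|)h_1+(c|D|)^{-1}\sin(tc|D|)h_2$.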
Since the principal symbol of the hyperbolic operator in \r{ac} allows the decomposition $-\tau^2+c^2|\xi|^2_{{\gzero}} = (-\tau+c|\xi|_{{\gzero}})(\tau+c|\xi|_{{\gzero}}) $, in a conic neighborhood of $\Sigma_+$, one can apply a parametrix of $D_t-c|D|_{{\gzero}}$ to write \r{ac} there as
\be{4.5}
(\partial_t+\i c|D|_{{\gzero}}+A_+)u_+=0\quad \text{mod $C^\infty$}
\ee
with $A_+$ of order zero and $u_+$ being the sum of the $\sigma=+$ terms in \r{o1}. This is the case studied in \cite[VIII.3]{Taylor-book0} with a more general elliptic $-\lambda(t,x,D)$ replacing $\i c|D|_{{\gzero}}+A_+$, allowing $u_+$ to be a vector function, and $A_+$ to be matrix valued.
The main tool is the ``fundamental lemma'' allowing us to understand the action of a \PDO\ $P$ on $e^{\i\phi}a$ in terms of a homogeneous expansion in $\xi$, see \cite[VIII.7]{Taylor-book0} and \cite{Treves2}. The lemma remains true for principally scalar systems and it is used for such in \cite{Taylor-book0}.
We recall the construction of the amplitude. Let $u_+$ be as the first term in \r{o1} with the indices there dropped, corresponding to $\sigma=+$. We seek the amplitude of the form $a=a_0+a_1+\dots$ as in \r{a} but the upper index $(k)$ is a lower one now.
The order two terms in the expansion of $(\partial_t -\i \lambda(t,x,D))u$ cancel because $\phi$ solves the eikonal equation \r{o2} with the plus sign. Equating the order $1$ terms, we see that we must solve
\be{trans}
\left( \partial_t - \frac{\partial \lambda_1}{\partial\xi_j} \frac{\partial}{\partial x^j} \right)a_0 -\bigg( \i\lambda_0+ \sum_{|\alpha|=2} \frac{\partial^\alpha_\xi\lambda_1}{\alpha!}\partial_x^\alpha\phi \bigg) a_0 =0,
\ee
where $\lambda=\lambda_1+\lambda_0+\dots$ is the expansion of $\lambda$ and they are evaluated at $\xi=\nabla_x\phi$. In our case, $\lambda_1= -c(x)|\xi|_{{\gzero}}$, therefore, $\partial \lambda_1/\partial \xi = -c{\gzero}^{-1}\xi/|\xi|_{{\gzero}}$, which for $\xi=\nabla_x\phi$ yields $\partial \lambda_1/\partial \xi = -c{\gzero}^{-1}\nabla_x\phi/|\nabla_x\phi|_{{\gzero}} =c^2 {\gzero}^{-1}\nabla_x\phi/\phi_t$. Therefore, the vector field in \r{trans} is proportional to the vector field $(\phi_t, c^2 {\gzero}^{-1}\nabla_x\phi)$ which is the Hamiltonian covector field of the wave equation \r{ac} on $\Sigma_+$ identified with a vector one, since the Laplacian there is the one associated with the metric $\tilde g:= c^{-2}{\gzero}$. As it is well known, this is also the geodesic vector field of $\tilde g$ in the tangent bundle.
The potential-like term in \r{trans} involves $\lambda_0=-A_+$, see \r{4.5}. Now, the transport equation \r{trans} is a first order linear ODE along the bicharacteristics for the vector valued $a_0$ with a matrix valued zero order potential-like term. Given initial conditions at $t=0$, it is solvable as long as $\phi$ is well defined.
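As a minimal illustration, take $c$ constant, ${\gzero}$ Euclidean and $A=0$. Then $\lambda_0=0$, and the phase $\phi=x\cdot\xi-c|\xi|t$ is linear in $x$, so the zeroth order terms in \r{trans} vanish and the equation reduces to
\[
\Big(\partial_t+c\,\frac{\xi}{|\xi|}\cdot\nabla_x\Big)a_0=0,\quad\text{i.e.,}\quad a_0(t,x,\xi)=a_0(0,x-ct\xi/|\xi|,\xi):
\]
the leading amplitude is simply transported along the straight rays.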
The higher order transport equations for $a_1$, $a_2$, etc., are derived in a similar way. They are non-homogeneous, with the same left-hand side but on the right we have functions computed in the previous steps.
We return to \r{o1} now and look for $u$ as a sum of four terms as indicated here, each one of the type we described. We can use the Cauchy data to derive initial conditions for the transport equations, see e.g., \cite{St-Encyclopedia}, to complete the construction.
The integrals appearing in \r{o1} are Fourier Integral Operators (FIOs) either with $t$ considered as a parameter, or with $t$ considered as one of the variables. In the former case, singularities of $(h_1,h_2)$ propagate along the zero bicharacteristics. More precisely, for every $t$,
\be{C1}
\WF(\mathbf{u}(t,\cdot)) = C_+(t)\circ\WF(\mathbf{h}) \cup C_-(t)\circ\WF(\mathbf{h}),
\ee
where $\mathbf{u}:=(u,u_t)$, $\mathbf{h}=(h_1,h_2)$ and
\[
\begin{split}
C_+(t)(x,\xi) &= \left( \gamma_{x,\xi/|\xi|_{\tilde g}}(t),|\xi|_{\tilde g}\, {\tilde g}\dot \gamma_{x,\xi/|\xi|_{\tilde g}}(t) \right), \\
C_-(t)(x,\xi)& = \left( \gamma_{x,-\xi/|\xi|_{\tilde g}}(t), -|\xi|_{\tilde g} {\tilde g}\dot \gamma_{x,-\xi/|\xi|_{\tilde g}}(t) \right) = C_+(-t)(x,\xi),
\end{split}
\]
and for $(x,\eta)\in T^*\R^3\setminus 0$, $\gamma_{x,\eta}$ is the geodesic issued from $x$ in direction $\tilde g^{-1}\eta$.
On the other hand, considering $t$ as one of the variables,
\be{C2}
\WF(\mathbf{u}) = C_+\circ\WF(\mathbf{h}) \cup C_-\circ\WF(\mathbf{h}),
\ee
where
\[
\begin{split}
C_+(x,\xi) &= \left\{\left( t ,\gamma_{x,\xi/|\xi|_{\tilde g}}(t), -|\xi|_{\tilde g}, |\xi|_{\tilde g}{\tilde g}\dot \gamma_{x,\xi/|\xi|_{\tilde g}}(t) \right),\; t\in\R\right\}, \\
C_-(x,\xi)& = \left\{\left( t,\gamma_{x,-\xi/|\xi|_{\tilde g}}(t), |\xi|_{\tilde g}, -|\xi|_{\tilde g} {\tilde g}\dot \gamma_{x,-\xi/|\xi|_{\tilde g}}(t)\right),\; t\in\R\right\}.
\end{split}
\]
In the analysis below, we will consider $C_+$ only.
The construction above can be done in some neighborhood of a fixed point $(0,x_0)$ in general. To extend it globally, we can localize it first for $\mathbf{h}$ with $\WF(\mathbf{h})$ in a conic neighborhood of some fixed $(x_0,\xi^0)\in T^*\R^3\setminus 0$.
Then $u$ will be well defined near the geodesic issued from that point, but in general only in some neighborhood of $(0,x_0)$. We can fix some $t=t_1$ at which $u$ is still defined, take the Cauchy data there and use them to construct a new solution. The result is a composition of two local FIOs, each associated with a canonical diffeomorphism; hence the composition is such an FIO as well. Then we can use a partition of unity to conclude that while the representation \r{o1} is local, the conclusions \r{C1} and \r{C2} are global. In fact, it is well known that both $\mathbf{h}\mapsto \mathbf{u}$ and $\mathbf{h}\mapsto \mathbf{u}(t,\cdot)$ with $t$ fixed are global FIOs associated with the canonical relations in \r{C2} and \r{C1}, respectively.
In particular, if $\Gamma$ is a smooth hypersurface, and $\gamma_{x,\xi}(t)$ hits $\Gamma$ transversely at the time $t=t(x,\xi)$ of its first contact, locally, then $\mathbf{h}\mapsto u|_\Gamma$ is an FIO again with a canonical relation as $C_+$ above but with $t=t(x,\xi)$ and $\dot\gamma$ replaced by its tangential projection $\eta':= \dot\gamma'$. Notice that $\tau=-|\xi|_{\tilde g}<0$ for $C_+$ and $\tau=|\xi|_{\tilde g}>0$ for $C_-$. Also, $|\eta'|_{\tilde g}<|\tau|$ with equality for tangent rays, which we exclude; therefore, $\WF(u|_{\R\times \Gamma})$ is in the hyperbolic region, as defined below.
\subsection{The boundary value problem for the acoustic equation} \label{sec_Ac_BVP}
Let $\Gamma$ be a smooth hypersurface near a fixed point $x_0$ given locally by $x^n=0$. We take $x=(x',x^n)$ to be local semigeodesic coordinates. We define $\Omega_\pm=\{\pm x^n>0\}$ to be the ``positive'' and the ``negative'' sides of $\Gamma$. At the beginning, we work in $\Omega_+$ only and omit the superscript or the subscript $+$ from the corresponding quantities. For all possible solutions $u$ (not restricted to incoming or outgoing ones) with singularities not tangent to $\Gamma$, we want to understand how the Dirichlet data $f:= u|_{\R\times \Gamma}$ and the Neumann data $h:= \partial_\nu u|_{\R\times \Gamma}$ are related. Once we have this, we can understand microlocally the boundary value problems with either Dirichlet or Neumann boundary conditions, or with Cauchy data.
The analysis depends on where the wave front set of the Cauchy data is.
Let $(f,h) \in \mathcal{E}'(\R\times \R^{n-1})$ be supported near some $(t_0,x')$.
Then $T^*(\R\times \R^{n-1})\setminus 0$ has a natural decomposition into the \textit{hyperbolic region} $c^2|\xi'|^2_{{\gzero}}< \tau^2 $, the \textit{glancing one} $\tau^2=c^2 |\xi'|^2_{{\gzero}}$, and the \textit{elliptic one} $c^2 |\xi'|^2_{{\gzero}}> \tau^2$. Each one has two disconnected components corresponding to $ \mp\tau>0$. We will recall the analysis in the $\tau<0$ component in more detail and will point out the needed changes when $\tau>0$. Also, we will not analyze (a neighborhood of) the glancing region; for that, see, e.g., \cite{Taylor-book0} for a strictly convex boundary. We are looking for a parametrix of the \textit{outgoing} solution $u$ of \r{ac} with boundary data $f$, i.e., the solution with singularities propagating in the future only. Solutions with singularities propagating to the past only will be called \textit{incoming}.
\subsubsection{The outgoing and the incoming Neumann operators}
If $u_\textrm{out}$ is the outgoing solution with boundary data $f$ with $\WF(f)$ in the hyperbolic region, we call the operator $\Lambda_\textrm{out}f=\partial_\nu u_\textrm{out}|_{\R\times \Gamma}$ the outgoing Neumann operator. Similarly, we define the incoming Neumann operator $\Lambda_\textrm{in}$. In those definitions, it is implicit that the solutions are defined in $\bar\Omega$ and $\nu$ is the unit normal exterior to it, i.e., $\partial_\nu=-\partial_{x^n}$. If we have $\Omega_\pm$ as above, we use the notation $\Lambda^\pm_\textrm{in}$, $\Lambda^\pm_\textrm{out}$ to denote the four Neumann operators with the convention that we preserve $\nu$ for $\Omega_-$, i.e., $\nu$ is interior for it. If the coefficients of the wave equation are smooth across $\Gamma$, we have $\Lambda_\textrm{out} ^+= \Lambda_\textrm{in} ^-$, $\Lambda_\textrm{in} ^+= \Lambda_\textrm{out} ^-$ up to smoothing operators. In the transmission problem below however, this is not the case.
\subsubsection{Wave front set in the hyperbolic region $c^2|\xi'|^2_{{\gzero}}<\tau^2$} Assume that $\WF(f)$ is in the hyperbolic region with $\tau<0$. We are looking for a representation of $u$ of the form
\be{10c}
u = (2\pi)^{-n}\iint_{\R\times\R^{n-1}} e^{\i\phi(t,x,\tau,\xi')} a(t,x,\tau,\xi') \hat f(\tau,\xi') \, \d\tau\, \d\xi',
\ee
with a phase function $\phi$ and an amplitude $a$.
The phase function solves the eikonal equation in \r{o2} with the plus sign but with a boundary condition on the timelike boundary $x^n=0$ now
\be{eik0}
\partial_t \phi +c(x)|\nabla_x\phi|_g=0, \quad \phi|_{x^n=0} = t\tau +x'\cdot\xi'.
\ee
The choice of the positive square root reflects the assumption $\tau<0$. In the hyperbolic region, there are two solutions depending on the choice of the sign of $\partial_{x^n}\phi$ at $x^n=0$. It is easy to see that what corresponds to outgoing solutions is the positive choice
\be{4.11}
\partial_{x^n}\phi\big|_{x^n=0}= \sqrt{ c^{-2}\tau^2- |\xi'|_g^2}.
\ee
We solve \r{eik0} with this condition locally. To construct the amplitude, we solve the same transport equations \r{trans} as above but with initial condition $a=1$ for $x^n=0$, i.e., the principal part $a_0$ of $a$ is one there; and all others vanish.
The case $\tau>0$ is similar: we seek the solution in a similar way but the sign in \r{eik0} is negative. This does not change the construction.
Incoming solutions are constructed similarly. We choose the negative square root in \r{4.11}. In particular, we get that the outgoing and the incoming Neumann operators are \PDO s of order one with principal symbols equal to $\mp \i$ multiplied by \r{4.11}, see also Proposition~\ref{pr_N} below.
\subsubsection{Wave front set in the elliptic region $c^2|\xi'|^2> \tau^2$. Evanescent waves}\label{sec_Ac_evan}
We proceed formally in the same way but the problem here is that the eikonal equation has no real valued solution because the expression under the square root in \r{4.11} is negative. It may not even have a complex valued solution.
This is a well known case of an evanescent mode described by a complex valued phase function (and amplitude). We follow \cite{Gerard}, see also \cite[VIII.4]{Taylor-book0}. Since the construction in \cite{Gerard} is done for the Helmholtz equation with a large parameter, and the one in \cite[VIII.4]{Taylor-book0} for an elliptic boundary value problem, we need to adapt them to our hyperbolic case, even though the construction is essentially the same.
We assume that $(t,x,\tau,\xi')$ belong to a conically compact neighborhood, contained in the elliptic region, of a fixed point there.
Plugging the ansatz into the acoustic equation, we use the ``fundamental lemma'' for complex phase functions in \cite[X.4]{Treves2} to get an asymptotic expansion which formally looks the same as in the hyperbolic case.
We are looking for a solution of the eikonal equation \r{eik0} for $\phi$ up to an error $O(|x^n|^\infty)$ at $x^n=0$ as a formal infinite expansion of the form
\[
\phi = t\tau+ x'\cdot\xi'+ x^n\psi_1(t,x',\tau,\xi')+(x^n)^2 \psi_2(t,x',\tau, \xi')+\dots,
\]
where $\psi_j$ are symbols of order $1$. We denote this class by $\tilde S^1$, and by replacing the order $1$ by some $m$, we denote by $\tilde S^m$ the corresponding class.
To avoid exponentially large modes, we require $\Im \phi \ge0$. To construct the formal series, we first write the eikonal equation \r{4.11} in the form
\be{6.9a}
\partial_{x^n}\phi = \i\sqrt{|\nabla_x'\phi|_g^2 - c^{-2}(\partial_t \phi)^2}
\ee
(note that there are no incoming/outgoing choices here) and then differentiate it w.r.t.\ $x^n$ at $x^n=0$. If such a solution exists, the error term would not affect those derivatives. We have
\be{6.9aa}
\psi_1 = \i\sqrt{ |\xi'|_g^2-c^{-2} \tau^2 }.
\ee
To find the higher order derivatives, we write \r{6.9a} in the form
\[
\partial_{x^n}\phi = F(x,\partial_{t,x}\phi);
\]
with $F(x,\eta)$ homogeneous in $\eta$ of order one. Then
\[
\partial_{x^n}^{k+1}\phi = \sum_{|\beta|+k_0+k_1+\dots+k_{|\beta|}=k} \partial_{x^n}^{k_0} \partial_\eta^\beta F(x,\partial_{t,x} \phi) \partial_{x^n}^{1+k_1}\phi_{t,x} \dots \partial_{x^n}^{1+k_{|\beta|}}\phi_{t,x} .
\]
Since $\partial_{x^n} \phi$ is a symbol of order one, this proves the claim by induction. Note also that $\Im\phi\ge x^n(|\tau|+|\xi'|)/C$.
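In the flat, constant coefficients case the expansion terminates after its first term:
\[
\phi=t\tau+x'\cdot\xi'+\i x^n\sqrt{|\xi'|^2-c^{-2}\tau^2}
\]
solves \r{6.9a} exactly, with $\psi_j=0$ for $j\ge2$, and $e^{\i\phi}$ decays like $e^{-x^n\sqrt{|\xi'|^2-c^{-2}\tau^2}}$ away from the boundary, as expected for an evanescent mode.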
The next step is to solve the transport equations. Since they have complex coefficients, they may not be solvable exactly and we solve them up to an $O(|x^n|^\infty)$ error as well. The rest is as in \cite{Gerard} and \cite[VIII.4]{Taylor-book0}.
\begin{proposition}\label{pr_N}
In the hyperbolic region, $\Lambda_\text{\rm out}$ and $\Lambda_\text{\rm in}$ are \PDO s of order one with principal symbols
\be{pr_N1}
\sigma_p(\Lambda_\text{\rm out}) = -\i \sqrt{ c^{-2}\tau^2- |\xi'|_g^2}, \quad \sigma_p(\Lambda_\text{\rm in}) = \i\sqrt{ c^{-2}\tau^2- |\xi'|_g^2}.
\ee
In the elliptic one, they are \PDO s of order one again with principal symbols
\be{pr_N2}
\sigma_p(\Lambda_\text{\rm out}) = \sigma_p(\Lambda_\text{\rm in}) = \sqrt{|\xi'|_g^2- c^{-2}\tau^2}.
\ee
\end{proposition}
We recall that $\partial_\nu= - \partial_{x^n}$ in the coordinates we used to compute the principal symbols. The expressions we got are invariant however. In both cases, the DN maps are elliptic. As shown in \cite{Taylor-book0}, they are elliptic even in the glancing region but they belong to a different class of \PDO s. The principal symbols of the Neumann operators on the negative side $\Omega_-$ are similar but with opposite signs.
\subsubsection{The boundary value problem with Dirichlet data} \label{sec_Ac_BVP_D} The problem of constructing the outgoing solution $u_\text{out}$ with Dirichlet data on $\R\times \Gamma$ was solved above when $\WF(f)$ is either in the hyperbolic or the elliptic region. Similarly, we construct $u_\text{in}$. Notice that in the elliptic region, the construction is the same for both. In particular, we proved Proposition~\ref{pr_N} by taking the normal derivatives of those solutions.
Next, we can construct a reflected wave. Assume we have an incoming solution $u_\text{in}$ with singularities hitting $\Gamma$ transversely. We want to construct a solution $u$ equal to $u_\text{in}$ for $t\ll0$ satisfying $u=0$ on the boundary. Then $f:=u|_{\R\times \Gamma}$ has a wave front set in the hyperbolic region only. We construct the reflected wave $u_R$ as the outgoing solution with Dirichlet data $-f$. Then $u=u_\text{in}+u_R$ is the solution we seek.
\subsubsection{The boundary value problem with Neumann data} \label{sec_Ac_BVP_N}
Consider the outgoing solution $u_\text{out}$ with boundary data $\partial_\nu u=h$ on $\R\times \Gamma$. We reduce it to the Dirichlet problem above by inverting the DN map in $\Lambda_\text{out}f=h$. Since the latter is elliptic in the two regions we work in, this can be done microlocally. Then we solve a Dirichlet problem. We do the same for the incoming solution.
If we want to construct a reflected wave so that the solution $u$ satisfies $\partial_\nu u=0$, we need to solve $\Lambda_\text{out}f = -\partial_\nu u_\text{in}|_{\R\times \Gamma}$, which is possible since $\Lambda_\text{out}$ is elliptic. Having $f$, we then construct the outgoing solution with that Dirichlet data.
\subsubsection{The boundary value problem with Cauchy data} \label{sec_Ac_BVP_C}\
We are looking for a microlocal solution $u$ of the acoustic equation \r{ac} satisfying $u=f$ and $\partial_\nu u=h$ on $\R\times \Gamma$ with given $f$ and $h$ having wave front sets in the hyperbolic region first. The global Cauchy problem is over-determined because the singularities can hit the boundary again and therefore the Cauchy data have a structure (consisting of pairs in the graph of the lens relation); therefore prescribing them arbitrarily is not possible. On the other hand, one can construct a microlocal solution locally, when the wave front sets of $f$ and $h$ are localized in small conic sets excluding tangential directions, until the singularities hit the boundary again. We are looking for $u$ as a sum of two solutions $u=u_\text{in}+u_\text{out}$, one incoming and the other one outgoing. To determine the boundary values of the two solutions and to reduce the problem to section~\ref{sec_Ac_BVP_D}, we need to solve
\be{7.1}
u_\text{in} + u_\text{out}=f, \quad
\Lambda_\text{in} u_\text{in} + \Lambda_\text{out} u_\text{out} =h,
\ee
where $u_\text{in}$ and $u_\text{out}$ are the boundary values of those solutions.
Let $\WF(f,h)$ be in the hyperbolic region first.
Then on principal symbol level, the leading amplitudes solve
\[
a_\text{in} + a_\text{out}=\hat f, \quad \i\xi_3(a_\text{in} - a_\text{out} )=\hat h\quad \text{on $x^3=0$},
\]
where $\xi_3$ is defined by \r{4.11}. This is an elliptic system. It shows that the matrix valued operator in \r{7.1} is elliptic (if we reduce the order of the second equation to $0$ by applying an elliptic \PDO\ of order $-1$).
Therefore, the Cauchy data determine uniquely a decomposition into an incoming and an outgoing solution, locally. This reduces the problem to the one we solved in section~\ref{sec_Ac_BVP}.
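Explicitly, on the principal symbol level,
\[
a_\text{in}=\frac12\Big(\hat f-\frac{\i}{\xi_3}\,\hat h\Big),\qquad a_\text{out}=\frac12\Big(\hat f+\frac{\i}{\xi_3}\,\hat h\Big),
\]
which is well defined since $\xi_3\not=0$ away from the glancing region.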
If $\WF(f,h)$ is in the elliptic region, there is only one parametrix, no incoming or outgoing ones. The corresponding DN map $\Lambda$ is an elliptic \PDO \ of order one with principal symbol \r{pr_N2}. Then for $(f,h)$ to be Cauchy data of an actual solution (up to smooth functions), they need to belong to the range of $(\Id,\Lambda)$ (up to smooth functions). This makes this problem over-determined. If $h=\Lambda f$, a microlocal solution exists, as we showed above. It propagates no singularities away from $\Gamma$, and it does not propagate singularities along $\Gamma$ either (unlike the Rayleigh waves in elasticity which propagate along $\Gamma$).
\subsection{The transmission problem}\label{sec_Ac_RT}
We recall the setup in section~\ref{sec_Ac_BVP}.
We work locally in a small neighborhood of a point on $\Gamma$ and call one of its sides, $\Omega_-$, negative, and the other one, $\Omega_+$, positive. For the speed $c$, we have $c=c_{-}$ in $\Omega_-$, and $c = c_{+}$ in $\Omega_+$, where $c_{-}, c_{+}$ are smooth up to $\Gamma$ and $c_-\not=c_+$ pointwise. We impose the transmission conditions
\be{trA}
[u]=[\partial_\nu u]=0\quad\text{on $\Gamma$},
\ee
where $\partial_\nu$ is the normal derivative.
Let $(x',x^3)$ be semi-geodesic coordinates near $\Gamma$ so that $\pm x^3>0$ in $\Omega_\pm$.
Let $u_I$ be an incident solution of the acoustic equation \r{ac} with speed $c$ and background metric ${\gzero}$, with a wave front set localized in a small conic neighborhood of some covector (at some time) approaching $\Gamma$ from the positive side $\Omega_+$. As mentioned above, we consider singularities $(x,\xi)$ which move in the direction of $\xi$ only, i.e., associated with $\phi_+$ in \r{o1}, as we did in section~\ref{sec_GO}. Then on $\WF(u_I)$, with $t$ considered as a variable, we have $\tau<0$.
Extend the speed $c$ from the negative to the positive side in a smooth way (recall that $c$ jumps across $\Gamma$) and extend $u_I$ smoothly across $\Gamma$ as a solution with that speed. Set
\be{14h}
f:= u_I|_{\R\times \Gamma}.
\ee
Let $(x_0,\xi_0)$ with $x_0\in \Gamma$ be one of the singularities of $u_I$. We assume that $\xi_0$ is a unit covector w.r.t.\ $c_{+}^{-2}{\gzero}$. We have that $\WF(f)$ is in the hyperbolic region $c_+|\xi'|<-\tau$ in $\Omega_+$. We are looking for a parametrix $u$ near $x_0$ of the form
\be{14u}
u = u_I + u_R+u_T,
\ee
where $u_I$ is incoming and restricted to $\bar\Omega_+$; $u_R$ is the reflected outgoing solution supported in $\bar\Omega_+$, and $u_T$ is the transmitted outgoing one or an evanescent mode, supported in $\bar\Omega_-$. It is enough to find the boundary values of those functions.
\subsubsection{The hyperbolic-hyperbolic case}
Assume that $\WF(f)$ is in the hyperbolic region in $\Omega_-$ as well, i.e., $c_{-}^2|\xi'|^2<\tau^2$ on $\WF(f)$. If $c_{-}<c_{+}$ at $x_0$ (transmission from a fast to a slow region), that condition is satisfied regardless of $\xi_0'$. If $c_{-}>c_{+}$ (transmission from a slow to a fast region), the existence of a transmitted ray depends on $\xi_0'$.
Let $\theta_+$ be the angle which an incoming ray makes with the normal. Then the reflected ray makes the same angle, and the angle $\theta_-$ of the transmitted ray, see Figure~\ref{pic1ac}, is related to $\theta_+$ by Snell's law
\be{Snella}
\frac{\sin\theta_+}{\sin\theta_-} = \frac{c_+}{c_-},
\ee
which follows directly from \r{eik0} with $c=c_-$ and $c=c_+$ there, see also \cite{SU-thermo_brain}. This relation shows that a transmitted ray will exist only if $\theta_+$ does not exceed the critical angle
\be{theta_cr}
\theta_\textrm{cr}=\arcsin(c_+/c_-).
\ee
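As a numerical illustration with hypothetical speeds $c_+=1$ and $c_-=2$,
\[
\theta_\textrm{cr}=\arcsin(1/2)=\pi/6=30^\circ,
\]
so rays making an angle of less than $30^\circ$ with the normal generate transmitted singularities, while the more grazing ones undergo total internal reflection, see section~\ref{sec_ac_FIR}.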
\begin{figure}[!ht]
\includegraphics[page=7,scale=1]{DN_earth_layers_pics}
\caption{Reflected and transmitted acoustic waves with an incoming ray from the top (left) and with incoming rays from both sides (right)
}\label{pic1ac}
\end{figure}
The transmission conditions \r{trA} are equivalent to
\be{acTC1}
\begin{split}
u_I + u_R & = u_T,\\
N_\textrm{in}^+u_I + N_\textrm{out}^+u_R & = N_\textrm{out}^-u_T.
\end{split}
\ee
On the principal symbol level, the transmission conditions (continuity of $u$ and of its normal derivative across $\Gamma$) yield the following linear system for the leading terms $a_T^{(0)}$ and $a_R^{(0)}$ of the amplitudes $a_T$ and $a_R$:
\be{15}
\begin{array}{rll}\medskip
a_T^{(0)}-a_R^{(0)} &=a_I^{(0)} & \text{for $x^n=0$},\\ \displaystyle
-\xi_n^- a_T^{(0)} - \xi_n^+ a_R^{(0)} &=\displaystyle -\xi_n^+ a_I^{(0)} &\text{for $x^n=0$},
\end{array}
\ee
where
\be{15b}
\xi_n^\pm = \sqrt{c_{\pm}^{-2} \tau^2-|\xi'|_g^2}, \quad \text{for $x^n=0$}.
\ee
In particular, this shows that the determinant of \r{15} is negative, and therefore, the system is solvable, i.e., elliptic after reducing the order of the second equation to zero.
Since the system \r{acTC1} is elliptic, it can be solved up to infinite order, i.e., we can find all the terms $a_{R,T}^{(k)}$ at $x^n=0$. The solutions serve as initial conditions for the transport equations of the corresponding modes.
Multiplying the first by the conjugate of the second equation, we get
\[
\xi_n^-|a^{(0)}_T|^2+ \xi_n^+|a^{(0)}_R|^2 = \xi_n^+|a^{(0)}_I|^2,
\]
which can be considered (and justified) as preservation of the energy across $\Gamma$.
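Explicitly, solving \r{15} yields the familiar reflection and transmission coefficients
\[
a_R^{(0)}=\frac{\xi_n^+-\xi_n^-}{\xi_n^++\xi_n^-}\,a_I^{(0)},\qquad a_T^{(0)}=\frac{2\xi_n^+}{\xi_n^++\xi_n^-}\,a_I^{(0)},
\]
and a direct computation with these expressions confirms the energy identity above.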
\subsubsection{Total internal reflection}\label{sec_ac_FIR}
Assume now that $\WF(f)$ is in the elliptic region for $c_-$. This happens when $\theta_+> \theta_\textrm{cr} $. In that case, there will be no transmitted singularity. Indeed, we are looking for an evanescent mode in $\Omega_-$. Then $N_\text{out}^-$ in \r{acTC1} is in the elliptic region. The analog of \r{15} then is
\be{ac_fir1}
\begin{pmatrix} 1&-1\\ -\xi_n^-&-\xi_n^+ \end{pmatrix} \begin{pmatrix} a_T^{(0)}\\ a^{(0)}_R \end{pmatrix}
= a_I^{(0)}\begin{pmatrix} 1\\ -\xi_n^+\end{pmatrix}
\ee
where $\xi_n^-=\i |\xi_n^-|$ is pure imaginary and given by \r{pr_N2} times $\i$. Equivalently,
\be{ac_fir2}
a_I^{(0)}+ a_R^{(0)}= a_T^{(0)}, \quad \xi_n^+\left( a_I^{(0)}- a_R^{(0)}\right)= \i |\xi_n^-| a_T^{(0)}.
\ee
Take the real part of the first equation multiplied by the conjugate of the second one to get
\be{ac_fir3}
\big|a_R^{(0)}\big|^2 = \big|a_I^{(0)}\big|^2.
\ee
In other words, on principal level, the whole energy is reflected and nothing is transmitted. We could have obtained this directly by solving \r{ac_fir1}, of course.
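Indeed, solving \r{ac_fir2} explicitly gives
\[
a_R^{(0)}=\frac{\xi_n^+-\i|\xi_n^-|}{\xi_n^++\i|\xi_n^-|}\,a_I^{(0)},\qquad a_T^{(0)}=\frac{2\xi_n^+}{\xi_n^++\i|\xi_n^-|}\,a_I^{(0)},
\]
so the reflection coefficient is a unimodular factor (the reflected wave acquires a phase shift only), while the evanescent amplitude $a_T^{(0)}$ is non-trivial even though it carries no energy on the principal level.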
\subsubsection{Incoming waves from both sides of $\Gamma$. }
A more general setup is to assume incoming waves from each side, see Figure~\ref{pic1ac}, right. We do not need to assume hyperbolic ones; they could be evanescent. In fact, this is an analogue of the Cauchy data case in the boundary value problem, see section~\ref{sec_Ac_BVP_C}. The point of view we adopt, and will keep in the elastic case, is to classify the cases by the wave front set of the Cauchy data on the boundary.
We are interested in two questions: (i) well posedness of the transmission problem: given all incoming waves, is the problem well posed for the outgoing ones; and (ii) given all waves on one side of $\Gamma$, can we solve for all waves on the other one? We show that (i) is true, as can be expected (and is well known). The answer to (ii) is not always affirmative, and when it is, this means that we can control the configuration on one side from the other one; in particular, we can kill either the incoming or the outgoing wave on that side.
\textbf{The hyperbolic-hyperbolic case.} We assume now that the Cauchy data $(f,h)$ (the same on both sides by the transmission conditions) has a wave front set in the hyperbolic region on each side of $\Gamma$. Then on each side, we have two solutions: one incoming and one outgoing.
Let $u^+_{\textrm{in}}$ and $u^-_{\textrm{in}}$ be the two incoming solutions from the positive and from the negative side, respectively, and let ${u}^+_{\textrm{out}} $, ${u}^-_{\textrm{out}}$ be the two outgoing ones. As usual, we assume no tangent rays. Then the transmission conditions are given by
\be{acTC}
\begin{split}
u_\textrm{in}^++ u_\textrm{out}^+ & = u_\textrm{in}^-+ u_\textrm{out}^-,\\
N_\textrm{in}^+u_\textrm{in}^++ N_\textrm{out}^+u_\textrm{out}^+ & = N_\textrm{in}^- u_\textrm{in}^-+ N_\textrm{out}^-u_\textrm{out}^-.
\end{split}
\ee
This is a generalization of \r{acTC1} with one more wave added.
If the corresponding principal amplitudes are $ a^+_{\textrm{in}}$, $a^-_{\textrm{in}}$, $a^+_{\textrm{out}}$, $a^-_{\textrm{out}}$, we get
\be{ac20}
\begin{pmatrix} 1&1\\ -\xi_n^+&\xi_n^+ \end{pmatrix} \begin{pmatrix} a_\textrm{in}^+\\ a^+_\textrm{out} \end{pmatrix}
= \begin{pmatrix} 1&1\\ \xi_n^-&-\xi_n^- \end{pmatrix} \begin{pmatrix} a^-_\textrm{in}\\ a^-_\textrm{out} \end{pmatrix}
\ee
Clearly, each matrix is elliptic. This implies that we have control from each side: given any choice of two amplitudes on one side, say $\Omega_-$, one gets an elliptic problem for finding the amplitudes on the other one, in this case $\Omega_+$.
We also get ellipticity for solving for the outgoing/incoming waves given the incoming/outgoing ones, i.e., the transmission problem is well posed. This also follows from energy conservation. Indeed, multiplying the first equation by the conjugate of the second one and taking the real part yields
\be{ac_en1}
\xi_n^+ \left( |a^+_{\textrm{out}}|^2 - |a^+_{\textrm{in}}|^2 \right) + \xi_n^- \left( |a^-_{\textrm{out}}|^2 - |a^-_{\textrm{in}}|^2 \right)=0.
\ee
This energy preservation across the boundary implies in particular that if all incoming waves vanish, then so do the outgoing ones; i.e., that problem is elliptic.
\textbf{The hyperbolic-elliptic case.} We assume now that the Cauchy data $(f,h)$ (the same on both sides by the transmission conditions) has a wave front set in the hyperbolic region w.r.t.\ $c_+$ and in the elliptic one for $c_-$. Then in $\Omega_+$ we have two solutions: one incoming and one outgoing but in $\Omega_-$ there is only one (evanescent) solution. This case is analyzed in section~\ref{sec_ac_FIR} with $u_\text{in}^+=u_I$, $u_\text{out}^+=u_R$, and $u^-$ (no incoming or outgoing ones) corresponding to $u_T$ there. We found out there that the incoming wave (or the outgoing one) determines uniquely the outgoing (respectively, the incoming) one and the evanescent one $u_-$. On the other hand, we cannot control $u_\text{out}^+$ and $u_\text{in}^+$ by choosing the evanescent mode $u^-=u_T$ appropriately; in fact, $u_\text{in}^+$ alone determines the whole configuration already.
A slightly different point of view into this case is that we cannot have arbitrary (up to smooth functions) Cauchy data on $\Gamma$ in the hyperbolic region for $\Omega_+$, since that data falls in the elliptic region on the negative side, and then it has to be in the graph of the Neumann operator $\Lambda_-$. On the other hand, if that data satisfies that compatibility condition, the solution in $\Omega_+$ consists of an incoming and a reflected wave. This is in contrast to the hyperbolic-hyperbolic case, where we can cancel one of the waves on the top, for example.
\textbf{The elliptic-elliptic case.} We assume now that the Cauchy data $(f,h)$ has a wave front set in the elliptic region w.r.t.\ both $c_+$ and $c_-$. It is interesting to see if we can have evanescent modes on both sides but still a non-trivial wave front set on $\Gamma$. We would need $(|\xi'|_g^2-c_+^{-2} \tau^2 )^{1/2}=- (|\xi'|_g^2-c_-^{-2} \tau^2 )^{1/2}$ which cannot happen. Therefore, there are no Rayleigh or Stoneley kind of waves in the acoustic case.
\subsection{Justification of the parametrix} \label{sec_just_ac}
In each particular construction up to section~\ref{sec_Ac_BVP_C}, we constructed a parametrix satisfying the equation and the corresponding initial/boundary conditions up to a smooth error. Then the difference of the parametrix and the true solution satisfies all those conditions up to smooth errors. Standard hyperbolic estimates imply that the difference is smooth. In section~\ref{sec_Ac_BVP_C}, the Cauchy problem on a timelike boundary needs to be solved microlocally only and it is a tool to handle the transmission one. The justification of the parametrix for the latter can be done with the aid of \cite{Hansen84, Williams-transmission}, guaranteeing smooth solutions if the transmission conditions \r{tr} hold up to a smooth error only.
\section{Geometric optics for the elastic wave equation} \label{sec_GOel}
We study the Cauchy problem at $t=0$ and propagation of singularities in the elastic case.
We present the geometric optics construction for the elastic wave equation in an open set first, where the coefficients are smooth. Such a construction is well known for systems with characteristics of constant multiplicities, see, e.g., \cite{Taylor-book0, Treves} and \cite{Dencker_polar}. Our goal is to make the elastic case more explicit and to do a complete mode separation which we will use eventually near a boundary, see Proposition~\ref{pr1} below.
The elastic case has been studied from a microlocal point of view in \cite{Yamamoto_elastic_89, Rachele_2000, Rachele00, Rachele03, HansenUhlmann03, Brytik_11, SUV_elastic}.
Consider the elastic wave equation
\be{el}
\begin{split}
u_{tt}-Eu&=0,\\
(u,u_t)|_{t=0}&=(h_1,h_2)
\end{split}
\ee
with Cauchy data $\mathbf{h}:=(h_1,h_2)$ at $t=0$. We want to solve it microlocally for $t$ in some interval and $x$ in an open set. The operator $E$ is associated with a Riemannian metric $g$ as in section~\ref{sec_m}.
If $\lambda$, $\mu$ and $\rho$ are constant and $g$ Euclidean, one can use Fourier multipliers. In that case,
let $\Pi_p=\Pi_p(D)$ be the projection to the p-modes, i.e., $\Pi_p$ is the Fourier multiplier $\hat u\mapsto (\xi/|\xi|)[(\xi/|\xi|)\cdot \hat u] $ and let $\Pi_s=\Id-\Pi_p$. It is easy to see that
$\Pi_s$ is the Fourier multiplier $\hat u\mapsto -(\xi/|\xi|)\times \big((\xi/|\xi|)\times \hat u\big)$. Also, we may regard $h = \Pi_ph+\Pi_s h$ as the potential/solenoidal (or the Hodge) decomposition of the 1-form $h$, see, e.g., \cite{Sh-book}. Then $E = c_p^2\Delta\Pi_p+ c_s^2\Delta \Pi_s$. We have a complete decoupling of the system into P and S waves.
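Indeed, on the Fourier side, \r{s0'} reads
\[
\sigma_p(-E)=c_p^2|\xi|^2\,\Pi_p(\xi)+c_s^2|\xi|^2\,\Pi_s(\xi),
\]
with $\Pi_p(\xi)v=(\xi/|\xi|)[(\xi/|\xi|)\cdot v]$ and $\Pi_s=\Id-\Pi_p$, which is the symbol version of this decoupling.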
In the variable coefficient case, we will do this up to smoothing operators.
We recall the construction in \cite{Taylor-book0}, which provides another proof of the propagation of singularities in this case.
The principal symbol $\sigma_p(-E)$ of $-E$ has eigenvalues of constant multiplicities. Near every $(x_0,\xi_0)\in T^*\bar\Omega\setminus 0$, one can decouple the full symbol $\sigma(-E)$ fully up to symbols of order $-\infty$. Namely, there exists an elliptic matrix valued \PDO\ $U$ of order $0$ microlocally defined near $(x_0,\xi_0)$, so that
\be{VEU}
U^{-1}EU= \begin{pmatrix} c^2_s\Delta_g + A_s&0\\0&c_p^2\Delta_g +A_p\end{pmatrix}
\ee
modulo $S^{-\infty}$ near $(x_0,\xi_0)$, where the matrix is in block form; with an $1\times 1$ block on the lower right and a $2\times2$ one on the upper left ($c_s^2\Delta_g+A_s$ is actually $c_s^2\Delta_g\,I_2+A_s$ with $I_2$ being the identity in two dimensions). Moreover, $A_s$ and $A_p$ are \PDO s of order one.
In other words, the top non-zero block is scalar and the lower non-zero one is principally scalar. We recall this construction briefly. We seek $U$ as a classical \PDO\ with a principal symbol $U_0$ which diagonalizes $E$; there are many microlocal choices, and we fix one of them.
Then
\be{VEU2}
U_0^{-1}EU_0= \begin{pmatrix} c_s^2\Delta_g I_2&0\\0&c_p^2\Delta_g \end{pmatrix} + R_1,
\ee
where $R_1$ is of order one. Then we correct $U_0$ by replacing it with $U_0(I +K_1)$ with some \PDO\ $K_1$ of order $-1$, i.e., we apply $I+K_1$ to the right and $(I+K_1)^{-1}= I-K_1+\dots$ to the left to get
\be{VEU3}
(I-K_1)U_0^{-1}EU_0(I+K_1)= (I-K_1)\begin{pmatrix} c_s^2\Delta_g I_2&0\\0&c_p^2\Delta_g \end{pmatrix}(I+K_1) + R_1,\quad \text{mod\ $\Psi^0$},
\ee
where we used the fact that $(I-K_1)R_1(I+K_1)=R_1$ mod $\Psi^0$. Let us denote the matrix operator there by $G$. To kill the off diagonal terms on the right up to zeroth order, we need to do that for $GK_1-K_1G+R_1$. Note that $G$ and $K_1$ do not commute up to a lower order because they are matrix valued \PDO s. We look for $K_1$ in block form with zero diagonal entries and off-diagonal ones $K_{12}$ (an $1\times2$ vector) and $K_{21}$ (a $2\times1$ vector). If we represent $R_1$ in a block form as well, we reduce the problem to solving
\[
\begin{split}
K_{12}(-c_s^2\Delta_g) - (-c_p^2\Delta_g)K_{12}&=-R_{12},\\
K_{21}(-c_p^2\Delta_g) - (-c_s^2\Delta_g) K_{21}&=-R_{21}
\end{split}
\]
modulo $\Psi^0$.
The solvability of this system on a principal symbol level follows by the general lemma in \cite[IX.1]{Taylor-book0} because $c_s\not=c_p$; in this particular case, it is straightforward. Note that the principal symbols of $K_{12}$ and $K_{21}$ represent the coupling of the P and the S waves on a sub-principal symbol level, see also \cite{Brytik_11}.
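Explicitly, since the principal symbols of $-c_{s,p}^2\Delta_g$ are the scalars $c_{s,p}^2|\xi|_g^2$, on the principal symbol level the system is solved by division:
\[
\sigma_p(K_{12}) = \frac{\sigma_p(R_{12})}{(c_p^2-c_s^2)|\xi|_g^2}, \qquad
\sigma_p(K_{21}) = -\frac{\sigma_p(R_{21})}{(c_p^2-c_s^2)|\xi|_g^2},
\]
which is well defined for $\xi\not=0$ precisely because $c_s\not=c_p$; one then iterates in the lower order terms.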
We apply $I-K_2$ to the left and $I+K_2$ to the right to kill the off diagonal terms of $(I+K_1)^{-1}G(I+K_1)$, etc. In fact, $U$ can be chosen to be unitary in a microlocal sense \cite{MR1777025}. In our case, however, we prefer $U$ to be of order one.
From now on, we will do all principal symbol computations at a fixed point where $g$ is transformed to a Euclidean one (via the exponential map, for example) to simplify the notation. Then we will interpret the final result in an invariant way.
The principal symbol of $U$ at that fixed point will be chosen to be
\be{U}
\sigma_p(U) = \begin{pmatrix} 0&-\xi_3&\xi_1\\ \xi_3&0&\xi_2\\ -\xi_2 & \xi_1&\xi_3\end{pmatrix}
\ee
when $\xi_3\not=0$. The third column is the eigenvector $\xi$ associated with the eigenvalue $c_p^2|\xi|^2$, while the first and the second ones are a basis of the eigenspace of $\sigma_p(-E)$ associated with $c_s^2|\xi|^2$; and that basis is (micro)local only. In fact, a global one does not exist since those vectors are characterized as being orthogonal to $\xi$. In this particular case, we chose $\xi\times e_1$ and $\xi\times e_2$ with $e_1=(1,0,0)$, etc.
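Indeed, in the Euclidean case $\sigma_p(-E)=\rho^{-1}\big(\mu|\xi|^2\,\Id+(\lambda+\mu)\,\xi\otimes\xi\big)$ (assuming the same normalization as above), and one checks directly that
\[
\sigma_p(-E)\,\xi = c_p^2|\xi|^2\,\xi, \qquad \sigma_p(-E)\,(\xi\times e_j) = c_s^2|\xi|^2\,(\xi\times e_j),\quad j=1,2,
\]
since $\xi\cdot(\xi\times e_j)=0$.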
Recall that the principal symbol computations so far are at a single point where $g$ is Euclidean. To extend it to all points, an invariant way to choose $\sigma_p(U)$ is to replace the first and the second column there by $\xi\times e_1$ and $\xi\times e_2$ with $e_{1,2}$ considered as covectors, and the cross product as in \r{cross}. In other words, the first two columns in \r{U} are considered as vectors, then converted to covectors by the metric and multiplied by $(\det g)^{-1/2}$. Then we still get \r{5.12} but in $u^s$ we have curl in terms of the metric, see \r{curl}.
It then follows that microlocally, the elasticity system can be written as $(\partial_t^2 - U^{-1}EU) w=0$ for
\be{uv}
w= (w^s,w^p)= U^{-1}u,
\ee
where $w^s=(w_1^s,w_2^s)$ and $w^p$ is scalar. This system decouples into the wave equations
\be{SP-dec}
\begin{split}
(\partial_t^2-c_s^2\Delta_g -A_s)w^s-R_sw&=0,\\
(\partial_t^2-c_p^2\Delta_g -A_p )w^p-R_pw&=0,
\end{split}
\ee
with $A_{p,s}$ of order one, $R_{p,s}$ smoothing; the first one is a $2\times 2$ system and the second one is scalar. The first one has $\Sigma_s$ as a characteristic manifold, while the second one has $\Sigma_p$.
Even though $U$ depends on the microlocal neighborhoods of the characteristic varieties $\Sigma_{s,p}$ we work in, in those neighborhoods the wave front set of $U^{-1}f$ is well defined, and we can apply the propagation of singularities results, or directly the microlocal geometric optics construction used below. Then we conclude that singularities in those neighborhoods propagate along the zero bicharacteristics of $\tau^2-c_s^2|\xi|^2$ and $\tau^2-c_p^2|\xi|^2$, respectively (which, of course, is well known). This implies a global result, as well.
For $u= Uw$ we get
\be{5.11}
u= u^s +u^p, \quad u^s:= U( w^s_1,w^s_2,0), \quad u^p:= U(0,0,w^p),
\ee
where $u^s$ and $u^p$ have wave front sets in $\Sigma_s$ and $\Sigma_p$, respectively. We call such solutions microlocal S and P waves. We have
\be{5.12}
u^p = (D + V_p)w^p, \quad u^s = (\det g)^{-1/2} g(-D_3 w_2^s, D_3 w_1^s, -D_2 w_1^s+D_1 w_2^s) + V_sw^s,
\ee
where $V_p$ and $V_s$ are of order zero and are formed by the lower order entries of $U$.
Here $u^s$ can also be written as $u^s = D\times (w_1^s,w_2^s,0)+V_sw^s$.
Therefore, we proved the following.
\begin{proposition}[mode separation]\label{pr1}
Let $u$ be a solution of the elastic wave equation in the metric setting in some open set in $\R\times\R^3$. Let $u^p$ and $u^s$ be $u$ microlocalized near $\Sigma_p$ and $\Sigma_s$, respectively.
Then, microlocally, in any conic subset where $\xi_3\not=0$, there exist a scalar function $w^p$ and a vector valued function $w^s=(w_1^s,w_2^s)$ solving \r{SP-dec} so that $u=u^p+u^s$, where
\be{5.13}
u^p = (D + V_p)w^p, \quad u^s = D\times (w_1^s,w_2^s,0 )+V_sw^s
\ee
with $V_p$ and $V_s$ \PDO s of order zero and the curl in $D\times$ is in Riemannian sense.
\end{proposition}
The assumption $\xi_3\not=0$ does not restrict us. We can always rename the variables or rotate the coordinate system. On the other hand, the proposition does not provide a global mode separation. We are going to use it with $x^3$ being the distance to the boundary. Note also that $u$ and $w^p$, $w^s$ are related by \r{uv}.
In the geophysics literature, $w^p$ and $w^s$ such that $u^s=\nabla\times w^s$ (in our case, $w^s=(w_1^s, w_2^s,0)$) are called potentials. We have some freedom in choosing $w^s$ so that \r{5.13} holds: adding an exact form to $(w_1^s,w_2^s,0)$ would not change the principal part of $u^s$ at least. One possible gauge to get a unique $w^s$ is to take one of its components, in some coordinate system, to be zero; we have $w_3^s=0$ in \r{5.13}. The analysis, however, must be restricted microlocally to $\xi_3\not=0$.
Another choice is to require $w^s$ to be solenoidal, i.e., divergence free. In what follows, $x^3$ will be the normal coordinate to the boundary.
This proposition is a generalization of the well known representation of the solution of the isotropic constant coefficient elastic equation into potentials $u=\nabla w^p +\nabla\times w^s$ solving \r{SP-dec} with the operators $A_{p,s}$ and $R_{p,s}$ there vanishing. To guarantee uniqueness, it is often assumed that $w^p=-(-\Delta)^{-1}\nabla\cdot u$, $w^s= (-\Delta)^{-1}\nabla\times u$. We can prove a version of this in the variable coefficient case as well.
\section{The boundary value problem for the elastic system. Dirichlet boundary conditions} \label{sec_el_bvp}
Consider the elastic wave equation $u_{tt}-{E}u=0$ with boundary data $u=f$ on $\R\times\bo$. Assume that $f=0$ for $t\ll0$ and we are looking for the outgoing solution, i.e., the one which vanishes for $t\ll0$. We also introduce the notion of a microlocally outgoing solution along a single bicharacteristic requiring singularities of such a solution to propagate to the future. We define similarly incoming solutions by reversing time. Note that an outgoing solution does not need to consist of microlocally outgoing ones only since some incoming ones may be canceled at interfaces by outgoing ones.
We will construct a parametrix of those solutions using the analysis in section~\ref{sec_Ac_BVP}. Moreover, we study the Cauchy data problem as well. We will make essential use of the analysis of the acoustic case.
We work in semigeodesic coordinates $x=(x',x^3)$, with $x^3>0$ in $\Omega$. We denote the dual variables by $(\xi',\xi_3)$. The Euclidean metric then takes the form $g$ in those coordinates with $g_{\alpha 3}=\delta_{\alpha 3}$ for $1\le\alpha\le 3$.
The analysis however works if we start with an arbitrary metric $g$ in $\R^n$, not just with the Euclidean one. Norms and inner products below are always in the metric $g$ or $g^{-1}$ (for covectors).
The phase space on the cylindrical boundary $\R\times \bo $ can be naturally split into the following regions (recall that $c_s<c_p$):
\begin{description}
\item [Hyperbolic region] $c_p|\xi'|_g<|\tau|$. Then $c_s|\xi'|_g<|\tau|$ as well, so it is hyperbolic for both speeds.
\item [P-glancing region] $c_p|\xi'|_g=|\tau|$. It is glancing for $c_p$ and hyperbolic for $c_s$.
\item [Mixed region] $c_s|\xi'|_g< |\tau|<c_p|\xi'|_g$. It is elliptic for $c_p$ but hyperbolic for $c_s$.
\item [S-glancing region] $c_s|\xi'|_g=|\tau|$. It is glancing for $c_s$ and elliptic for $c_p$.
\item [Elliptic region] $|\tau|< c_s|\xi'|_g$. Then $|\tau|< c_p|\xi'|_g$, as well, so it is elliptic for both speeds.
\end{description}
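As a quick illustration (a minimal sketch in Python, not part of the construction), this classification is straightforward to code; the arguments are the speeds at the boundary point and the metric norm $|\xi'|_g$:
\begin{verbatim}
# A minimal sketch (not from the paper): classify a boundary covector
# (tau, xi') by the size of |tau| relative to c_p|xi'| and c_s|xi'|.
def boundary_region(tau, xi_norm, c_s, c_p, tol=1e-12):
    assert 0 < c_s < c_p
    a = abs(tau)
    if c_p * xi_norm < a - tol:
        return "hyperbolic"     # hyperbolic for both speeds
    if abs(c_p * xi_norm - a) <= tol:
        return "P-glancing"
    if c_s * xi_norm < a - tol:
        return "mixed"          # elliptic for c_p, hyperbolic for c_s
    if abs(c_s * xi_norm - a) <= tol:
        return "S-glancing"
    return "elliptic"           # elliptic for both speeds

print(boundary_region(-1.0, 0.3, c_s=1.0, c_p=2.0))  # hyperbolic
print(boundary_region(-1.0, 0.7, c_s=1.0, c_p=2.0))  # mixed
print(boundary_region(-1.0, 2.0, c_s=1.0, c_p=2.0))  # elliptic
\end{verbatim}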
We will not analyze wave fronts in the two glancing regions $|\tau|=c_p|\xi'|_g$ and $|\tau|=c_s|\xi'|_g$. For the purpose of the inverse problem, it is enough to analyze the propagation of singularities away from a set of measure zero. Therefore, there is no need to build a parametrix near the glancing regions (as in \cite{MR1334206} or \cite{Yamamoto_09}, for example) or work as in \cite{Hansen84}; so we can avoid the glancing regions.
By the calculus of the wave front sets, the traces of microlocal P waves on $\R\times\bo$ have wave front sets in the hyperbolic region under the assumption that all singularities hit the boundary transversely. The traces of transversal microlocal S waves belong to $c_s|\xi'|_g< |\tau|$, i.e., either to the hyperbolic region, to the mixed one, or to the P-glancing one. In particular, the trace of any solution of the elastic system with singularities hitting transversely has wave front set disjoint from the elliptic region. On the other hand, boundary values of solutions of the boundary value or the transmission problem may have wave front set in the elliptic region, as Rayleigh and Stoneley waves do.
The analysis we have done so far, see next section, allows us to decouple the P and the S modes on the boundary completely by their polarizations. Then in terms of the potentials $w^s$ and $w^p$, we can think of the system as a decoupled one. When modes hit a free boundary, or a transparent one, however, the reflected and the transmitted modes may change type. The reason for this is that the boundary trace of an incoming S or P wave does not belong to the same subspace as that of an outgoing one.
\subsection{Wave front set in the hyperbolic region} \label{sec_ED1}
Let $f(t,x')$ be supported near some $(t_0,x_0')\in \R\times \R^{2}$, where $\R^{2}$ represents $\bo$, flattened. Assume first that $\WF(f)$ is supported in the hyperbolic region. The latter has two disconnected components determined by the sign of $\tau$ there. Let us assume that $\WF(f)$ is contained in the one with $\tau<0$; the $\tau>0$ case is similar. Then the characteristic varieties reduce to $\tau+c_p|\xi|_g=0$ and $\tau+c_s|\xi|_g=0$, respectively.
We are looking for a parametrix of the outgoing solution of the form $u = Uw = u^p+u^s$
as in \r{5.11} with $w$ a potential. Note that this construction excludes $\xi_3=0$, which in our case corresponds to tangent rays, which we avoid. We will work in a conic open microlocal region which does not contain such rays, i.e., $\xi_3\not=0$ there.
We seek the potentials $w^p$ and $w^s$ as geometric optics solutions as in section~\ref{sec_Ac_BVP}, i.e., of the form \r{10c} (where the solution is called $u$, not $w$) with phases $\phi_p$ and $\phi_s$, respectively, and a scalar amplitude $a^p$ and a 2D vector valued one $a^s= (a_1^s, a_2^s)$. The phase functions solve the eikonal equations
\be{11}
\partial_t\phi_p + c_p|\nabla_x\phi_p|_g=0, \quad \phi_p|_{x^3=0}=t\tau+ x'\cdot\xi',
\ee
and similarly for $\phi_s$, where $x'=(x^1,x^2)$. The choice of the positive sign in front of the square root in the eikonal equation is determined by the choice $\tau<0$.
By \r{5.13}, the principal part of the amplitude of $u^p$ is $(D_x\phi_p) a^p$ and that of $u^s$ is $D_x\phi_s \times (a^s_1,a^s_2,0)$. Restricted to the boundary, we have $\nabla_x\phi_p= (\xi',\xi_3^p)$, $\nabla_x\phi_s= (\xi',\xi_3^s)$, where
\be{14}
\xi_3^p: = \sqrt{c_p^{-2}\tau^2-|\xi'|_g^2}, \quad \xi_3^s: = \sqrt{c_s^{-2}\tau^2- |\xi'|_g^2}, \quad \text{for $x^3=0$}.
\ee
We will use the notation
\be{14n}
\xi^p := (\xi',\xi_3^p) , \quad \xi^s :=(\xi',\xi^s_3) .
\ee
Those are the codirections of the rays emitted from the boundary, see Figure~\ref{pic_HR}. The angles $\theta^p$ and $\theta^s$ with the normal satisfy Snell's law
\be{Snell}
\frac{\sin\theta^p}{\sin\theta^s} = \frac{c_p}{c_s}>1,
\ee
as it follows directly from \r{14}, see also \cite{SU-thermo_brain}.
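Explicitly, by \r{14}, $|\xi^p|^2=|\xi'|^2+(\xi_3^p)^2=c_p^{-2}\tau^2$ and $|\xi^s|^2=c_s^{-2}\tau^2$, so
\[
\sin\theta^p = \frac{|\xi'|}{|\xi^p|} = \frac{c_p|\xi'|}{|\tau|},\qquad
\sin\theta^s = \frac{c_s|\xi'|}{|\tau|},
\]
and dividing the two identities gives \r{Snell}.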
\begin{figure}[!ht]
\includegraphics[page=4,scale=1]{DN_earth_layers_pics}
\caption{The Dirichlet problem for the outgoing solution with wave front in the hyperbolic region. There are emitted S and P waves.
}\label{pic_HR}
\end{figure}
As we stated above, we are going to do all principal symbol calculations at $(t_0,x_0')$, where $g$ can always be arranged to be Euclidean.
In the hyperbolic region we work in, the expressions under the square roots are positive.
The positive square roots guarantee that the singularities are outgoing. We determine next the boundary conditions for the transport equations.
Since $u=Uw$, the boundary values of $w$ can be obtained from those of $u$ given by $f$ by an application of a certain \PDO. By the ``fundamental lemma'', see \cite[VIII.7]{Taylor-book0} and \cite{Treves2}, $Uw$ near the boundary is given by an oscillatory integral of the type \r{10c} with the amplitude there multiplied by a classical symbol with principal part $U(x,\nabla_x\phi)$, where $\phi$ equals either $\phi_s$ or $\phi_p$ depending on which components of $w$ we take. Restricted to the boundary, we get
\be{fuU}
f= u|_{x^3=0} =U_\textrm{out}\left(w |_{x^3=0}\right)
\ee
with $U_\textrm{out}$ a classical \PDO\ on $\R_t\times \R^2_{x'}$ with principal symbol
\be{Ub}
\sigma_p(U_\textrm{out}) = \begin{pmatrix} 0&-\xi_3^s&\xi_1\\ \xi_3^s&0&\xi_2\\ -\xi_2 & \xi_1&\xi_3^p\end{pmatrix}.
\ee
The subscript ``out'' is a reminder that we used the outgoing solution to define $U_\textrm{out}$. Similarly, we define $U_\textrm{in}$ using the incoming $u$. Its principal symbol is as above but with $\xi_3^s$ and $\xi_3^p$ having opposite signs. Note that $U$ acts locally in $\R_t\times\R_x^3$ while the two new operators act on $\R_t\times \R_x^2$.
The symbol $\sigma_p(U_\textrm{out})$ is elliptic, in fact
\be{detU}
\det\sigma_p(U_\textrm{out})= \xi_3^s(|\xi'|^2+\xi_3^s\xi_3^p),
\ee
which also equals $\xi^s_3 \langle\xi^s, \xi^p\rangle$.
The inverse of $\sigma_p(U_\textrm{out})$ is easy to compute, and we do that below.
To find the boundary conditions for $w=(w^s_1,w^s_2,w^p)$, we write $w|_{x^3=0}=U_\textrm{out}^{-1}f$ (recall that all our inverses are parametrices). Then for $w^p$ and $w^s$ we get \r{5.12} with $\xi_3$ in all symbols replaced by $\xi_3^p$ for $u^p$ and $\xi_3^s$ for $u^s$. Once we have the boundary conditions for $w$, we construct $w$ near the boundary by the geometric optics construction \r{10c}. To get $u=u^p+u^s$, we apply $U$ to the result, see \r{5.11}.
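For the reader's convenience, with $\delta:=\xi_3^s\big(|\xi'|^2+\xi_3^s\xi_3^p\big)$ as in \r{detU}, a direct computation gives
\[
\sigma_p(U_\textrm{out})^{-1} = \frac{1}{\delta}\begin{pmatrix} -\xi_1\xi_2 & \xi_1^2+\xi_3^s\xi_3^p & -\xi_2\xi_3^s\\ -\big(\xi_2^2+\xi_3^s\xi_3^p\big) & \xi_1\xi_2 & \xi_1\xi_3^s\\ \xi_1\xi_3^s & \xi_2\xi_3^s & (\xi_3^s)^2\end{pmatrix},
\]
as one can verify by multiplying with \r{Ub}.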
\begin{remark}\label{rem_Rachele}
In \cite{Rachele03}, Rachele showed that when $g$ is Euclidean, the leading amplitudes (polarizations) of $u^p$ and $u^s$ are independent of $\rho$ if we think of the three parameters as being $(\rho, c_s,c_p)$ instead of $(\rho,\mu,\lambda)$. We will use this in Section~\ref{section_GC}.
\end{remark}
In what follows, we will make the calculations above more geometric.
By \r{5.13}, $u^s$ and $u^p$ have representations of the kind \r{10c} with the corresponding phase functions and matrix valued amplitudes having principal parts $f\mapsto \xi\times (A_sf,0)$ and $f\mapsto \xi\, A_p\cdot f$, where $A_p$ is a vector, and $A_s$ is a $2\times3$ matrix.
Then one can show that on the boundary, $h\mapsto \xi^s\times (A_sh,0)$ is the non-orthogonal projection to the plane $(\xi^s)^\perp$ parallel to $\xi^p$, and $h\mapsto \xi^p A_p\cdot h$ is the non-orthogonal projection to $\xi^p$ parallel to the latter plane. In other words, they are the projection operators related to the direct sum $\xi^p \oplus (\xi^s)^\perp$.
Finally, in this section, we notice that the same analysis holds for the incoming solutions with given Dirichlet boundary data. Then in the formulas above, we have to take the negative square roots for $\xi_3^p$ and $\xi_3^s$ in \r{14}.
\subsection{Wave front set in the mixed region} \label{sec_ED1m}
Let $\WF(f)$ be in the mixed region next. We show below that the outgoing solution has a microlocal S wave only. The eikonal equation for $\phi_s$ still has the same real valued solution locally, corresponding to the outgoing choice of the solution $u_s$. On the other hand, the eikonal equation \r{11} for $\phi_p$ has no real solution.
Indeed, we have $\nabla_{t,x'}\phi_p=(\tau,\xi')$ on $x^3=0$ and there is no real-valued function $\phi_p$ that could solve \r{11} and have such a gradient because in \r{14}, $\xi_3^p$ would be pure imaginary.
This is the case of an evanescent mode described in Section~\ref{sec_Ac_evan}.
We are still looking for a solution of the form $u=u^s+u^p = U(w^s,w^p)$ but this time $w^p$, and therefore $u^p$, is an evanescent mode as the one constructed in Section~\ref{sec_Ac_evan}. The eikonal equation for $\phi_p$ implies, see \r{6.9aa}, that $\xi_3^p$ in this case reduces to
\be{xi_3pi}
\xi_3^p = \i\sqrt{ |\xi'|_g^2-c_p^{-2} \tau^2 }.
\ee
Then as in \r{fuU}, \r{Ub}, applying the ``fundamental lemma'' for FIOs with a complex phase, see \cite[X.4]{Treves2}, we deduce as before that the boundary values for $w$ are given by \r{fuU} with a classical \PDO\ $U_\textrm{out}$ having principal symbol as in \r{Ub} (with the new pure imaginary $\xi^p_3$). The operator $U_\textrm{out}$ is still elliptic because the determinant \r{detU} has non-zero imaginary part. Then we can determine the boundary conditions for $w^s$ and $w^p$, construct the microlocal solutions, and apply $U$ to get $u$.
\subsection{Wave front set in the elliptic region} \label{sec_6.3}
Assume that $\WF(f)$ is in the elliptic region. Then we proceed as before, looking for both $w^s$ and $w^p$ as evanescent modes with complex phase functions. In this case, both $\xi_3^p$ and $\xi_3^s$ are pure imaginary with positive imaginary parts, see \r{xi_3pi}, and for $\xi_3^s$ we get
\be{xi_3si}
\xi_3^s = \i\sqrt{ |\xi'|_g^2-c_s^{-2} \tau^2 }.
\ee
We have
\[
\det\sigma_p(U_\textrm{out})= \xi_3^s\left(|\xi'|_g^2 -\sqrt{ |\xi'|_g^2-c_s^{-2} \tau^2 }\, \sqrt{ |\xi'|_g^2-c_p^{-2} \tau^2 }\right)\not=0,
\]
since the factor in the parentheses is positive (each square root is smaller than $|\xi'|_g$) and $\xi_3^s\not=0$.
Therefore, $U_\textrm{out}$ is elliptic and we can proceed as above and construct the solution as in Section~\ref{sec_Ac_evan}.
\subsection{Summary} We established that the Dirichlet problem is well posed microlocally and we have the following:
\begin{itemize}
\item[(i)] $\WF(f)$ in the hyperbolic region: there are outgoing P and S waves.
\item[(ii)] $\WF(f)$ in the mixed region: there is an outgoing S wave only (plus an evanescent P mode).
\item[(iii)] $\WF(f)$ in the elliptic region: there are no outgoing waves; there are two evanescent modes.
\end{itemize}
\section{The boundary value problem for the elastic system. Neumann boundary conditions and the Neumann operator} \label{sec_el_bvpN}
Assume now that we want to find the outgoing solution of the elastic wave equation with boundary data $Nu=h$. The strategy below is to find the Dirichlet boundary data $f$ from this equation and then to proceed as in section~\ref{sec_el_bvp}. In other words, we want to solve $\Lambda f=h$ for $f$ microlocally, if possible, by showing that $\Lambda$ is elliptic (or not). Lack of ellipticity of $\Lambda$ in the elliptic region leads to Rayleigh waves, see, e.g., \cite{popov_elast, Taylor-Rayleigh, MR1376435,MR1334206}.
\subsection{Wave front set in the hyperbolic region}
We are looking again for an outgoing solution of the type $u=u^s+u^p$ as in \r{5.11}. The boundary values $w_b = w |_{x^3=0}$ of $w$ are computed by solving
\be{fuUN}
h= Nu|_{x^3=0} =M_\textrm{out}w_b,\quad M_\textrm{out} := \Lambda U_\textrm{out}
\ee
for $w_b$, compare with \r{fuU}, where $\Lambda$ is the microlocalized Dirichlet-to-Neumann map \r{2a}, i.e., $\Lambda h: = Nu|_{x^3=0}$
for $u$ an outgoing microlocal solution of the elasticity equation with boundary data $u=h$ on $x^3=0$. We can use \r{fuUN} and \r{Ub} to compute $\sigma_p(\Lambda)$.
We define the incoming $M_\textrm{in}$ in a similar way as in \r{fuUN} but with $u$ being the incoming solution. More precisely, $M_\textrm{in}w_b$ is defined as $Nu|_{x^3=0}$ where $u$ is the incoming solution with boundary data $U_\textrm{in}w_b$. This also means that $M_\textrm{in}= \Lambda_\text{in} U_\textrm{in}$, where $\Lambda_\text{in}$ is defined as $Nu|_{x^3=0}$ with $u$ being the incoming solution. The operator $\Lambda$ should then be denoted by $\Lambda_\text{out}$ but we will keep the simpler notation $\Lambda$. Below, we compute the principal symbols of $M_\text{out}$ and $M_\text{in}$. Combining that with \r{Ub}, we can compute the principal symbol of $\Lambda$ as well but we will not need it.
By \r{Nu} and \r{Ub}, in semigeodesic coordinates,
\be{Nb0}\begin{split}
\sigma_p(M_\textrm{out}) &= \begin{pmatrix} \mu \xi_3^s& 0&\mu \xi_1\\ 0&\mu \xi_3^s& \mu \xi_2\\ \lambda \xi_1 & \lambda \xi_2&(\lambda+2\mu)\xi_3^s\end{pmatrix}
\begin{pmatrix} 0&-\xi_3^s&0\\ \xi_3^s&0&0\\ -\xi_2 & \xi_1&0\end{pmatrix}\\
& + \begin{pmatrix} \mu \xi_3^p& 0&\mu \xi_1\\ 0&\mu \xi_3^p& \mu \xi_2\\ \lambda \xi_1 & \lambda \xi_2&(\lambda+2\mu)\xi_3^p\end{pmatrix}
\begin{pmatrix} 0&0&\xi_1\\ 0&0&\xi_2\\ 0 & 0 &\xi_3^p\end{pmatrix}.
\end{split}
\ee
Therefore,
\be{Nb}\begin{split}
\sigma_p(M_\textrm{out})&= \begin{pmatrix} -\mu \xi_1\xi_2& \mu(2 \xi_1^2 +\xi_2^2 ) -\rho {\tau}^{2}&2 \mu \xi_1\xi_3^p
\\ -\mu (\xi_1^2+2 \xi_2^{2})+ \rho \tau^{2}&\mu\xi_1\xi_2&2 \mu \xi_2 \xi_3^p
\\ -2\mu\xi_2\xi_3^s&2\mu \xi_1\xi_3^s&-2 \mu |\xi'|^{2}+\rho\tau^{2}\end{pmatrix} .
\end{split}
\ee
Similarly, the principal symbol of the incoming operator $M_\text{in}$, related to the incoming DN map, is given by the same expression but with $\xi_3^s$ and $\xi^p_3$ replaced by the negative square roots in \r{14}.
A direct computation yields
\be{RNC_det}
\begin{split}
\det \sigma_p(M_\text{out}) &= - \left( \mu |\xi'|^2-\rho{\tau}^{2} \right) \left( 4\mu^2|\xi'|^2\left( \xi_3^p \xi_3^s+|\xi'|^2\right) -4\mu \rho{\tau}^{2}|\xi'|^2 +\rho^2{\tau}^{4}\right) \\
&= - \rho\left( c_s^2 |\xi'|^2-{\tau}^{2} \right) \left(\left( 2\mu |\xi'|^2 -\rho \tau^2\right)^2 +4 \mu^2|\xi'|^2\xi_3^p \xi_3^s\right)>0.
\end{split}
\ee
The determinant of $\sigma_p(M_\text{in})$ is the same.
Since $U_\textrm{out}$ is elliptic, we get that $\Lambda$ is elliptic in the hyperbolic region as well. Therefore, we can invert $\Lambda$ microlocally and reduce the Neumann boundary value problem to the Dirichlet one, which can be solved as in section~\ref{sec_ED1}. More directly, we invert $M_\textrm{out}=\Lambda U_\textrm{out}$ and get boundary conditions for $w$, which we use to solve the problem.
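As an independent sanity check (a sketch, not part of the argument), the factorization \r{RNC_det} can be verified symbolically at $\xi_2=0$ (see \r{u_C3a} below), e.g., with Python/SymPy:
\begin{verbatim}
# A sketch (not part of the paper): verify det sigma_p(M_out) at xi_2 = 0.
from sympy import symbols, Matrix, sqrt, simplify

x1, tau, rho, cs, cp = symbols('xi1 tau rho c_s c_p', positive=True)
mu = rho*cs**2
x3s = sqrt(tau**2/cs**2 - x1**2)   # hyperbolic region: both roots real
x3p = sqrt(tau**2/cp**2 - x1**2)
M = Matrix([[0, 2*mu*x1**2 - rho*tau**2, 2*mu*x1*x3p],
            [rho*tau**2 - mu*x1**2, 0, 0],
            [0, 2*mu*x1*x3s, rho*tau**2 - 2*mu*x1**2]])
target = rho*(tau**2 - cs**2*x1**2) * \
    ((2*mu*x1**2 - rho*tau**2)**2 + 4*mu**2*x1**2*x3p*x3s)
print(simplify(M.det() - target))   # prints 0
\end{verbatim}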
\subsection{Wave front set in the elliptic region.} \label{sec_7.2}
In this case, we seek both $w^s$ and $w^p$ as evanescent modes. The calculations are as in section~\ref{sec_el_bvp} but $\xi_3^s$ and $\xi_3^p$ are pure imaginary as in \r{xi_3pi} and \r{xi_3si}. Then
\be{7.3}
\det \sigma_p(M_\textrm{out}) = - \rho\left( c_s^2 |\xi'|^2-{\tau}^{2} \right) \left(\left( 2\mu |\xi'|^2 -\rho \tau^2\right)^2 -4\mu^2|\xi'|^2|\xi_3^p| |\xi_3^s|\right).
\ee
We have $ c_s^2 |\xi'|^2-{\tau}^{2}>0$ in the elliptic region. For the third factor above, introduce the function
\[
R(s) = \left(s-2\right)^2 - 4\left(1-s\right)^\frac12 \left(1-c_s^2c_p^{-2}s\right)^\frac12.
\]
Then, up to an elliptic factor, $\det \sigma_p(M_\textrm{out})$ equals $R(c_s^{-2}\tau^2|\xi'|^{-2})$. It is well known and can be proven easily that on the interval $s\in (0,1)$, this function has a unique simple root $s_0$. This corresponds to $s_0c_s^2 |\xi'|^2=\tau^2$. Therefore, if we set $c_R(x)=c_s\sqrt{s_0}$, known as the Rayleigh speed, we get a characteristic variety
\be{S_R}
\Sigma_R := \left\{c_R^2|\xi'|^2_g=\tau^2\right\}
\ee
on which \r{7.3} has a simple zero. Note that $0<c_R<c_s<c_p$.
Since $U_{\rm out}$ is elliptic here, see Section~\ref{sec_6.3}, we get that $\Lambda$ is elliptic in the elliptic region away from $\Sigma_R$, and the determinant of its principal symbol has a simple zero on $\Sigma_R$. This generates the Rayleigh waves, see Section~\ref{sec_Rayleigh}. For every $f$ with $\WF(f)$ in the elliptic region but away from $\Sigma_R$, we can proceed as above to solve the Neumann problem.
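As a numerical illustration (a sketch, not part of the construction), $s_0$, and hence $c_R$, is easily located by bisection; for a Poisson solid ($\lambda=\mu$, so $c_p^2=3c_s^2$) one recovers the classical value $c_R\approx 0.9194\, c_s$:
\begin{verbatim}
# A numerical sketch (not from the paper): find the simple root s_0 of
# R(s) = (s-2)^2 - 4 sqrt(1-s) sqrt(1-(c_s/c_p)^2 s) on (0,1).
# R < 0 just above s = 0 and R(1) = 1 > 0, so bisection applies.
from math import sqrt

def R(s, cs, cp):
    return (s - 2.0)**2 - 4.0*sqrt(1.0 - s)*sqrt(1.0 - (cs/cp)**2*s)

def rayleigh_speed(cs, cp, tol=1e-14):
    a, b = 1e-9, 1.0
    while b - a > tol:
        m = 0.5*(a + b)
        if R(m, cs, cp) < 0.0:
            a = m
        else:
            b = m
    return cs*sqrt(0.5*(a + b))

# Poisson solid (lambda = mu, i.e., c_p^2 = 3 c_s^2):
print(rayleigh_speed(1.0, sqrt(3.0)))   # ~0.9194, the classical value
\end{verbatim}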
\subsection{Wave front set in the mixed region} In this case, we seek $w^s$ as a hyperbolic wave and $w^p$ as an evanescent one. The calculations are as in section~\ref{sec_el_bvp} with $\xi_3^s$ real as in \r{14} and $\xi_3^p$ pure imaginary as in \r{xi_3pi}. Then $c_s^2 |\xi'|^2-{\tau}^{2}<0$, so that factor does not vanish, and for $\det \sigma_p(M_\textrm{out}) $ we have an expression similar to \r{7.3} given, up to an elliptic factor, by $R(c_s^{-2}\tau^2|\xi'|^{-2})$ with
\be{RNC_detm}
R(s) = \left(s-2\right)^2 +4\i (s-1)^\frac12 \left(1-c_s^2c_p^{-2}s\right)^\frac12.
\ee
For $1<s<c_p^2c_s^{-2}$, which corresponds to the mixed region, $R$ is elliptic. This shows that, as above, one can construct $w|_{x^3=0}$ microlocally given $\Lambda f$. Then we construct $w^s$ and $w^p$, the latter as an evanescent mode; and then $u$. In particular, only microlocal S waves propagate from $\bo$.
\subsection{Incoming solutions} The construction of incoming solutions (singularities propagating to the past only) is similar and we will skip the details. One can obtain them from the outgoing solutions by reversing the time.
\section{The boundary value problem for the elastic system. Cauchy data} \label{sec_el_bvpC}
We analyze the boundary value problem for the elastic system on one side of $\Gamma$ with Cauchy data $u=f$, $\partial_\nu u=h$ on $\R_t\times \Gamma$. Similarly to section~\ref{sec_Ac_BVP_C}, we assume wave front set away from the glancing regions. This analysis is needed for the transmission problem when we want to control the behavior of the waves on one side by the other. We show in particular that this problem is well posed microlocally even though globally it is not, in general.
\subsection{Wave front in the hyperbolic region} \label{sec_u_C}
Assume first that the wave front set of $(f,h)$ is in the hyperbolic region. We are looking for a solution
\be{u_C1}
u = u_\text{in} + u_\text{out}= (u_\text{in}^p + u_\text{in}^s)+ (u_\text{out}^p + u_\text{out}^s),
\ee
having both an incoming and an outgoing part, see Figure~\ref{pic_C}.
\begin{figure}[!ht]
\includegraphics[page=5,scale=1]{DN_earth_layers_pics}
\caption{The Cauchy problem with wave front in the hyperbolic region. The angle of incidence is the same as the angle of reflection for each type. Given any Cauchy data in the hyperbolic region, there is a unique solution (it is an elliptic problem).
}\label{pic_C}
\end{figure}
Then on $\Gamma$, we need to solve
\be{u_C2}
u_{\text{in},b} + u_{\text{out},b} = f, \quad
\Lambda_\text{in} u_{\text{in},b} + \Lambda_\text{out}u_{\text{out},b} = h,
\ee
for the boundary traces $u_{\text{in},b}$ and $u_{\text{out},b}$ of $u_{\text{in}}$ and $u_{\text{out}}$. We pass to the corresponding solutions $w$ as in \r{fuUN} to get
\be{u_C3}
U_\text{in} w_{\text{in},b} + U_\text{out}w_{\text{out},b} = f, \quad
M_\text{in} w_{\text{in},b} + M_\text{out}w_{\text{out},b} = h.
\ee
Let $(a_{1,\text{in}}^s, a_{2,\text{in}}^s, a_\text{in}^p)^T $ be the principal amplitude of $w_\text{in} $ and similarly for $w_\text{out} $. By the rotational invariance w.r.t.\ rotations in the $(\xi_1,\xi_2)$ plane (we justify this later), we can assume $\xi_2=0$. Then by \r{Nb},
\be{u_C3a}
\begin{split}
\sigma_p(M_\text{out})\big|_{\xi_2=0} &= \begin{pmatrix} 0 &2 \mu\xi_1^2-\rho {\tau}^{2}&2 \mu \xi_1\xi_3^p
\\ -\mu \xi_1^2+ \rho \tau^{2}&0&0
\\ 0&2\mu \xi_1\xi_3^s&-2 \mu \xi_1^{2}+\rho\tau^{2}\end{pmatrix}, \\
\sigma_p(U_\textrm{out})\big|_{\xi_2=0} &= \begin{pmatrix} 0&-\xi_3^s&\xi_1\\ \xi_3^s&0&0\\ 0 & \xi_1&\xi_3^p\end{pmatrix},
\end{split}
\ee
and similarly for $\sigma_p(M_\text{in})$, $\sigma_p(U_\text{in})$. Then on principal symbol level, \r{u_C3} decouples into the following two systems
\be{u_C4}
A_\text{in} ( a_\text{in}^p, a_{2,\text{in}}^s)^T + A_\text{out} (a_\text{out}^p, a_{2,\text{out}}^s)^T = (\hat f_1,\hat f_3,\hat h_1, \hat h_3)^T,
\ee
and
\be{u_C4a}
\begin{pmatrix}
\xi_{3}^s &- \xi_{3}^s \\
\mu(\xi_{3}^s)^2 & \mu(\xi_{3}^s)^2 \end{pmatrix}
\begin{pmatrix}a_{1,\text{in}}^s\\ a_{1,\text{out}}^s \end{pmatrix}
= \begin{pmatrix}\hat f_2\\ \hat h_2\end{pmatrix} ,
\ee
where
\be{u_C5}
A_\text{in}:= \begin{pmatrix}
\xi_1&-\xi_{3}^s \\
\xi_{3}^p&\xi_1 \\
2\mu\xi_{3}^p \xi_1 & \mu(2\xi_1^2-c_{s}^{-2}\tau^2) \\
- \mu( 2\xi_1^2-c_{s}^{-2}\tau^2 ) & 2\mu \xi_{3}^s\xi_1
\end{pmatrix},
\quad
A_\text{out}:= \begin{pmatrix}
\xi_1&\xi_{3}^s \\
-\xi_{3}^p&\xi_1 \\
-2\mu\xi_{3}^p \xi_1 & \mu(2\xi_1^2-c_{s}^{-2}\tau^2) \\
- \mu( 2\xi_1^2-c_{s}^{-2}\tau^2 ) & - 2\mu \xi_{3}^s\xi_1
\end{pmatrix}.
\ee
We have
\be{u_C6}
\begin{split}
\frac12(A_\text{in}+ A_\text{out}) &= \begin{pmatrix}
\xi_1&0 \\
0&\xi_1 \\
0 & \mu(2\xi_1^2-c_s^{-2}\tau^2) \\
- \mu(2\xi_1^2-c_s^{-2}\tau^2) & 0
\end{pmatrix},\\
\frac12(A_\text{out}- A_\text{in})&= \begin{pmatrix}
0&\xi_{3}^s \\
-\xi_{3}^p& 0 \\
-2\mu\xi_{3}^p \xi_1 & 0 \\
0 & -2\mu\xi_{3}^s \xi_1
\end{pmatrix}.
\end{split}
\ee
This shows that the system \r{u_C4} decouples into two $2\times 2$ systems after rewriting it as a system for the sum and the difference of the original vectors. The determinants of those two systems are $\mu c_s^{-2}\tau^2\xi_3^p$ and $\mu c_s^{-2}\tau^2\xi_3^s$, respectively; therefore, they are elliptic (after applying an elliptic operator of order $-1$ to the last two rows to equate their order with the rest; we will use this notion of ellipticity below as well). Therefore, \r{u_C4} is elliptic as well. Clearly, so is \r{u_C4a}, which behaves as the acoustic case \r{7.1}.
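For instance, writing \r{u_C4} in terms of the sum $\mathbf{s}$ and the difference $\mathbf{d}$ of $(a_\text{in}^p, a_{2,\text{in}}^s)^T$ and $(a_\text{out}^p, a_{2,\text{out}}^s)^T$, the second and the third rows of the rewritten system (see \r{u_C6}) give
\[
\begin{pmatrix} \xi_1 & \xi_3^p\\ \mu(2\xi_1^2-c_s^{-2}\tau^2) & 2\mu\xi_3^p\xi_1 \end{pmatrix}
\begin{pmatrix} \mathbf{s}_2\\ \mathbf{d}_1\end{pmatrix} = \begin{pmatrix} \hat f_3\\ \hat h_1\end{pmatrix},
\qquad
\det = 2\mu\xi_3^p\xi_1^2-\mu\xi_3^p\big(2\xi_1^2-c_s^{-2}\tau^2\big) = \mu c_s^{-2}\tau^2\xi_3^p,
\]
while the first and the fourth rows give the analogous system for $(\mathbf{s}_1,\mathbf{d}_2)$ with determinant $\mu c_s^{-2}\tau^2\xi_3^s$. Thus we proved the following.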
\begin{lemma}\label{lemma_A}
The matrix valued symbol $(A_\text{\rm in}, A_\text{\rm out})$ is elliptic.
\end{lemma}
Therefore, \r{u_C3} is elliptic as well.
Lemma~\ref{lemma_A} remains true in the mixed and in the elliptic regions as well, where $\xi_3^s$ or $\xi_3^p$ could be pure imaginary as in \r{xi_3pi}, \r{xi_3si}. Then there is no incoming/outgoing choice of the sign of $\xi_3^s$ and $\xi_3^p$ (which distinguishes $A_\text{in}$ and $A_\text{out}$) but this does not matter because later, we will multiply those expressions, when pure imaginary, with the ``wrong'' signs by zero, see \r{u_C4''}, for example.
\subsection{SV-SH decomposition of S waves} \label{sec_SV-SH}
The principal amplitude of the S wave $u^s=D\times (w_1^s,0,0) = (0,D_3,-D_2)w_1^s$ (plus smoother terms),
see Proposition~\ref{pr1} and \r{5.12}, corresponding to $w_2^s=0$, evaluated for $\xi_2=0$, has only its second component possibly non-zero. Then it is tangent to $\Gamma$ and normal to the direction of the propagation $\xi=(\xi_1,0,\xi_3)$ (as it should be because it is an S wave).
In the geophysical literature (for constant coefficients and a flat boundary), such waves
are called \textit{shear-horizontal} (SH) waves since their polarization is tangent to the plane $\Gamma$. Equation \r{u_C4a} then describes the SH waves generated by the Cauchy data when $\xi_2=0$. Note that in our case, ``horizontal'' makes sense only at the boundary.
The $a_2^s$ terms appearing in \r{u_C4} are the \textit{shear-vertical} (SV) components of the potentials $w$ of the incoming and the outgoing waves. Indeed, using the subscript $b$ to indicate a boundary value (as we did above), when $w_{1,b}=0$, then the principal term of the outgoing/incoming $u^s_b$ is $(\mp\xi_3(x',D'),0,D_1) w_{2,b}^s$, which gives us a principal amplitude perpendicular to the $\xi_2$ axis (and to the direction $\xi$ of propagation, of course). Then the oscillations happen in the $\xi_1\xi_3$ plane, vertical to $\Gamma$ (and parallel to $\xi$), hence the name. System \r{u_C4} then describes how the SV and the P waves are created from given Cauchy data.
So far, the computations were done at a fixed point $x_0$ and a fixed covector $\xi^0$ at it, where the metric is chosen to be Euclidean. Then the orthogonal projection of the principal amplitude to $\Gamma=\{x^3=0\}$ (actually, to $T_{x_0}^*\Gamma$) is the SH component of it, while the projection to the plane through it and the normal is the SV component. We will do this decomposition microlocally near $(x_0,\xi^0)$ on the principal symbol level.
Note first that at $x_0$, there is a rotational invariance in the $\xi_1\xi_2$ plane. We already have a confirmation of that since we are free to choose coordinates in which $\xi_2=0$ and then we found out that the geometry of the rays and their principal amplitudes depend on the angles with the normal but not on $\xi$ in any other way. To derive this, we conjugate both symbols in \r{u_C3a} with the rotational matrix
\be{u_C7}
V:= \begin{pmatrix} \xi_1/|\xi'|&\xi_2/|\xi'|&0\\ -\xi_2/|\xi'|&\xi_1/|\xi'|&0\\ 0 & 0&1\end{pmatrix}.
\ee
A direct computation yields
\be{u_C8}
V^{-1} \sigma_p(M_\text{out})\big((|\xi'|,0)\big) V = \sigma_p(M_\text{out})(\xi'), \quad V^{-1} \sigma_p(U_\text{out})\big((|\xi'|,0)\big) V = \sigma_p(U_\text{out})(\xi')
\ee
at $x=x_0$. So far, we assumed that the metric was Euclidean at $x_0$. To get that, one can set $\tilde \xi =g^{-1/2}(x_0)\xi$ which can be achieved by a linear change in the $x$ variables; then the Euclidean product in the $\tilde\xi$ variable corresponds to the metric one in the original $\xi$ one. Therefore, replacing $\xi$ above by $g^{1/2}(x_0)\xi$ gives us the principal symbols in the original local coordinates. Varying the point $x_0$, we get principal symbols locally.
This allows us to define an SV-SH decomposition of S waves on a principal symbol level. In Proposition~\ref{pr1}, if $u^s$ is the S wave of a solution with certain Cauchy data at $t=0$, then $u^s$ will be an SH wave on $\Gamma$ (up to lower order terms) if $\langle \nu, u^s\rangle|_{\Gamma}= 0$ up to lower order terms applied to the Cauchy data, where $\nu$ is a unit normal covector field. It would be an SV wave on $\Gamma$ if $\langle \nu, (D\times u^s) \rangle|_{\Gamma}= 0$, up to lower order.
An outgoing S wave $u_\text{out}^s$ near $\Gamma$, which is determined uniquely (up to a smooth term) by its Dirichlet data on $\Gamma$; and therefore by its potential $w_{\text{out},b}$ on $\Gamma$, is an SV wave on $\Gamma$, if $D'\times w_{\text{out},b}=0$ up to a first order \PDO\ applied to $w_{\text{out},b}$, which corresponds to the requirement that the second component of $w_{\text{out},b}$ must vanish when $\xi'=(\xi_1,0)$. Here, $D'$ is the tangential differential.
To construct such SV waves, one can take the gradients on $\Gamma$ of scalar functions with non-trivial wave front sets.
The $u^s$ wave is an SH one on $\Gamma$, if $D'\cdot w_{\text{out},b}=0$ up to a lower order (divergence free). To construct such SH waves, one can take the curl on $\Gamma$ of scalar functions with non-trivial wave front sets.
\subsection{Wave front in the mixed region} The P wave is evanescent, and there is only one such mode (rather than an incoming and an outgoing one). The number of the unknown amplitudes on the boundary is reduced by one, and the system can be seen to be over-determined. Indeed, $\xi_3^p$ is then pure imaginary and given by \r{xi_3pi}. We still define $A_\textrm{in}$ and $A_\textrm{out}$ as in \r{u_C5}.
Then \r{u_C4} becomes
\be{u_C4'}
A_\text{in} (0,a_{2,\text{in}}^s)^T + A_\text{out} ( a^p, a_{2,\text{out}}^s)^T = (\hat f_1,\hat f_3,\hat h_1, \hat h_3)^T,
\ee
and \r{u_C4a} stays the same.
By the expressions of the determinants following \r{u_C6}, the matrix $(A_\text{in}, A_\text{out})$ is still elliptic in this case, i.e., Lemma~\ref{lemma_A} still holds. System \r{u_C4'} then is over-determined and solvable (uniquely) only if the r.h.s.\ belongs to a certain 3D subspace.
\subsection{Wave front in the elliptic region} In this case, $\xi_3^s$ is pure imaginary as well, see \r{xi_3si}; both waves are evanescent and the problem is again overdetermined. Equation \r{u_C4'} reduces to
\be{u_C4''}
A_\text{out} ( a^p, a_{2}^s)^T = (\hat f_1,\hat f_3,\hat h_1, \hat h_3)^T,
\ee
and Lemma~\ref{lemma_A} still holds with both $\xi_3^p$ and $\xi_3^s$ pure imaginary as in \r{xi_3pi}, \r{xi_3si}; therefore we get an overdetermined system. In system \r{u_C4a}, both amplitudes are equal, and that system is overdetermined, too.
\section{Reflection and mode conversion of S and P waves from a free boundary with Neumann boundary conditions} \label{sec_ref}
Let $\Gamma$ be a surface which separates an elastic medium from free space (like the Earth from the air). The natural boundary condition then is
\be{RMC1}
Nu=0\quad\text{on $\Gamma$},
\ee
which means zero traction on $\Gamma$, i.e., no normal force, because the exterior has zero stiffness. We study reflection and mode conversion of S and P waves when they come from the elastic side of $\Gamma$ and hit $\Gamma$.
This is actually a particular case of the analysis of the boundary value problem with Cauchy data in Section~\ref{sec_el_bvpC} with zero Neumann and Dirichlet data.
The strategy is the following. We take the trace $Nu_I$ of the incoming wave $u_I$ on the boundary and look for a reflected wave as a sum of an S and a P wave as in \r{RMC2} below. Then $Nu_I$ determines Neumann boundary conditions for those two waves. If $Nu_I$ has a wave front set in the hyperbolic region, we can recover the Dirichlet data for the reflected wave by inverting the elliptic \PDO\ $\Lambda U_\textrm{out}$ with principal symbol \r{Nb}. Knowing the Dirichlet data, we reduce the problem to constructing an outgoing solution as in section~\ref{sec_ED1}. If $\WF(Nu_I)$ is in the mixed region, we use the construction in section~\ref{sec_ED1m}. Finally, $\WF(Nu_I)$ cannot be in the elliptic region since it corresponds to an incoming solution; therefore, Rayleigh waves cannot be generated by reflection of S and P waves. One can verify that the principal amplitudes of the reflected S and P waves can only vanish for a discrete number of incident angles (i.e., on a finite number of curves on the sphere of directions) because they depend analytically on $\xi$, and one can easily eliminate the scenario of one of the waves vanishing for all incoming directions. Those principal amplitudes can actually be computed, and in the case of constant coefficients and a flat boundary, they have been computed in the geophysics literature, see, e.g., \cite{aki2002quantitative}. They do have zeros. For our purposes, it is enough to express the solution by Cramer's rule since we will prove that the determinant does not vanish. Vanishing amplitudes at a finite number of angles are not an obstacle for the inverse problem we solve because the missing rays can be added to the data by continuity (but that may affect stability).
\subsection{$\WF(u_{I,b})$ in the hyperbolic region}
Assume that we have an incident wave $u_I= u_I^p+u^s_I$, in other words a sum of microlocal solutions near $\Gamma$ with $\WF(u^p_I)\subset \Sigma_p$ and $\WF(u^s_I)\subset \Sigma_s$. As in Section~\ref{sec_GO}, we will restrict the wave front set to $\tau<0$. We extend $u_I$ to a two sided neighborhood of $\Gamma$ as a microlocal solution by extending the coefficients $\lambda$, $\mu$ and $\rho$ in a smooth way in the exterior. Set $u_{I,b}=u_I|_{\R\times S}$. It follows from the analysis above that $\WF(u_{I,b})$ is in the hyperbolic or the mixed region. As above, we assume no wave front set in the glancing regions. In fact, $\WF(u_{I,b}^p)$ is in the hyperbolic region, while $\WF(u_{I,b}^s)$ is there only if the angle of the corresponding rays with the normal is smaller than the critical one given by $c_p|\xi'|=|\tau|$; it is in the mixed one if the incident angle is greater than the critical one.
We look for a solution of the form
\be{RMC2}
u = u_I+u_R = (u_I^p+u_I^s) + (u_R^p + u_R^s),
\ee
where $u_R^p$ and $u_R^s$ are reflected P and S waves, respectively.
Let $x=(x',x^3)$ be semigeodesic coordinates near $x_0=0$ so that $x^3>0$ on the elastic side. All equalities below are at a fixed point $x_0$ which can be chosen to be $0$ and modulo lower order terms for the amplitudes. As above, we assume without loss of generality that the metric $g$ is Euclidean at $x=0$ to simplify the notation. We can get the equations below by using \r{Nb}. Let $w_I=(w_{1,I}^s,w_{2,I}^s,w_I^p)$ and $w_R=(w_{1,R}^s,w_{2,R}^s,w_R^p)$ be the solutions $w$ as in \r{5.11} related to $u_I$ and $u_R$. Since they solve \r{SP-dec}, each singularity of the S or the P part of $w_I$ reflects by the laws of geometric optics. On the other hand, if $\theta^p$ is the angle which an incoming P singularity makes with the normal, then the corresponding angle $\theta^s$ of the reflected S singularity, see Figure~\ref{pic1}, is related to $\theta^p$ by Snell's law \r{Snell}
as it follows directly from \r{14}, see also \cite{SU-thermo_brain}. Also, the incoming and the outgoing directions and the normal belong to the same plane, which determines the reflected direction uniquely. The same law applies to an incoming S wave generating a reflected P one. In the latter case, there is a critical incoming angle $\theta_\text{cr}=\arcsin(c_s/c_p)$ of an S wave so that if $\theta^s>\theta_\text{cr}$, \r{Snell} has no solution for $\theta^p$. Then a reflected P wave does not exist and instead we have an evanescent mode, as we show below.
\begin{figure}[!ht]
\includegraphics[page=1,scale=1]{DN_earth_layers_pics}
\caption{Reflected P and S waves from an incident P wave. The covectors shown are parallel to the velocity vectors $c_p^2\xi_I^p$ of the incident P wave and the velocities $c_p^2\xi_R^p$ and $c_s^2\xi_R^s$ of the reflected P and S waves, respectively. The amplitudes depend on the type of the boundary condition.}\label{pic1}
\end{figure}
We need to solve
\be{RMC2LU}
M_\text{out} w_{R,b}= -M_\text{in}w_{I,b}
\ee
for $w_{R,b}$.
Since $M_\text{out}$ is elliptic in the hyperbolic region, \r{RMC2LU} is microlocally solvable. We only need to verify that $w_R$ has non-trivial S and P components for almost all incoming rays.
We express $w_I$ and $w_R$ in the form \r{10c} with phase functions solving \r{eik0} with either $c_p$ or $c_s$ and a choice of the square root sign corresponding to the incoming or the outgoing property of each wave. The corresponding principal amplitudes are $(a_1^s, a_2^s,a^p)$ with subindices $I$ and $R$ distinguishing between the two waves.
Without loss of generality, we may assume $\xi_2=0$ as in section~\ref{sec_el_bvpC}. We get, see \r{u_C3a},
\be{RMC_11}
\begin{split} \left(2 \mu\xi_1^2 -\rho {\tau}^{2}\right)\left( a_{2,R}^s+ a_{2,I}^s \right)
+ 2 \mu \xi_1\xi_3^p \left( a_{R}^p- a_{I}^p \right)&=0,\\
2\mu \xi_1\xi_3^s\left( a_{2,R}^s- a_{2,I}^s \right) -( 2 \mu \xi_1^{2}-\rho\tau^{2}) \left(a_{R}^p+ a_{I}^p \right)&=0,\\
a_{1,R}^s+ a_{1,I}^s& =0.
\end{split}
\ee
The system \r{RMC_11} is uniquely solvable, as we know. We determine $ a_{1,R}^s = - a_{1,I}^s$ first, which says that the SH wave $U(a_{1,R}^s,0)$ just flips a sign at reflection. The first two equations can be solved to get $a_{2,R}^s$ and $a^p_R$. If $a_R^p =a_{1,R}^s =0$, then $U(0,a_{2,R}^s)$ is the SV wave oscillating in the plane normal to the boundary.
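For a pure incident P wave ($a_{1,I}^s=a_{2,I}^s=0$), Cramer's rule applied to the first two equations of \r{RMC_11} gives, with $F:= (2\mu\xi_1^2-\rho\tau^2)^2+4\mu^2\xi_1^2\xi_3^p\xi_3^s>0$ (compare with \r{RNC_det}),
\[
a_R^p = \frac{4\mu^2\xi_1^2\xi_3^p\xi_3^s-(2\mu\xi_1^2-\rho\tau^2)^2}{F}\, a_I^p, \qquad
a_{2,R}^s = \frac{4\mu\xi_1\xi_3^p(2\mu\xi_1^2-\rho\tau^2)}{F}\, a_I^p.
\]
In particular, at normal incidence ($\xi_1=0$) we get $a_R^p=-a_I^p$ and $a_{2,R}^s=0$.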
Let $w_I$ be a pure P wave, i.e., $w_{1,I}^s =w_{2,I}^s =0$. We want to find out when there is no reflected P or S wave. By the explicit formulas above, there is no reflected P wave, i.e., $w^p_R=0$, exactly when $(2\mu\xi_1^2-\rho\tau^2)^2=4\mu^2\xi_1^2\xi_3^p\xi_3^s$, the well known total mode conversion; this defines a cone of incoming directions when it has solutions in the hyperbolic region. There is no reflected S wave, i.e., $w_{1,R}^s =w_{2,R}^s =0$, exactly when $\xi_1=0$, i.e., when the incoming P wave is normal to the boundary, or when $2\mu\xi_1^2-\rho\tau^2=0$, i.e., when $2c_s^2\xi_1^2=\tau^2$; the latter may or may not be in the hyperbolic region and defines a cone of incoming directions when it is.
Now, assume that $w_I$ is an S wave. If there is a reflected S wave only, we are in the situation above with the time reversed --- it can only happen for normal rays or on the cone $2c_s^2\xi_1^2=\tau^2$. Similarly, if there is a reflected P wave only, this can only happen for incident directions on a specific cone, or for none at all.
\subsection{Wave front set in the elliptic region, Rayleigh waves}\label{sec_Rayleigh}
We are looking for microlocal solutions satisfying $Nu=0$ with wave front set on the boundary in the elliptic region. We follow Taylor \cite{Taylor-Rayleigh}, where the coefficients are constant and $n=2$, but as noted there, the construction extends to the general case; we will sketch that extension.
As shown in Section~\ref{sec_7.2}, $\Lambda$ has a characteristic variety $\Sigma_R$, see \r{S_R} and the determinant of its principal symbol, up to an elliptic factor near $\Sigma_R$, is given by $H:=\tau^2-c_R^2|\xi'|^2$. Therefore, microlocal solutions to $Nu=0$ with boundary wave front sets on $\Sigma_R$ would solve a \PDO\ system on $\R_t\times\Gamma$ of real principal type in the sense of \cite{Dencker_polar}. Here, $|\xi'|$ is the norm of the covector $\xi'$ in the metric on $\Gamma$ induced by $g$ (the latter is Euclidean in the isotropic elastic case).
One can impose Cauchy data at $t=0$ to get unique (in microlocal sense) solution. Singularities propagate along the null bicharacteristics of $H$, i.e., along the null bicharacteristics of a wave equation on $\R_t\times\Gamma$ with speed $c_R$.
Next, one uses the solution on $\R_t\times\Gamma$ constructed above as Dirichlet data for a solution near $\Gamma$, in $\Omega$, as in Section~\ref{sec_Ac_evan}.
\subsection{Wave front set in the mixed region} This can only happen if there is a non-zero incident S wave hitting the boundary at an angle (with the normal) greater than the critical one $\theta_\textrm{cr}$, see \r{theta_cr}. We are still looking for a solution of the kind \r{RMC2}, where $u_I^p=0$ and all singularities of $u_I^s$ hit the boundary at angles greater than $\theta_\textrm{cr}$. Then $u_R^p$ would actually be an evanescent mode (not a P wave by our definition because it would be smooth away from $\Gamma$). To find the boundary values for $w_R$, we need to solve \r{RMC2LU} again with $M_\text{out}$ as in \r{Nb} but with $\xi_3^p$ given by \r{xi_3pi}. The matrix $M_\text{out}$ is elliptic, see \r{RNC_detm}. Once we have the boundary values for $w_R$, we can construct both solutions as in section~\ref{sec_el_bvp}.
We also see that the reflected S wave cannot have zero amplitude except for possibly one incident angle; the proof is like in the hyperbolic case.
\subsection{Summary}
\begin{itemize}
\item[(i)] An incident P wave produces a reflected P wave and a reflected S wave.
\item[(ii)] An incident S wave produces a reflected S wave. It produces a reflected P wave only if the incident angle is smaller than the critical one; otherwise there is an evanescent P mode.
\item[(iii)] By time reversal, given an outgoing P wave, there are incoming S and P ones which produce that P wave and no S wave. The roles of those waves can be reversed only when the incident angle of the S wave is smaller than the critical one.
\item[(iv)] An incident SH wave produces a reflected SH wave only.
\end{itemize}
\section{The transmission problem for the elastic system} \label{sec_trans}
\subsection{Transmission and reflections of incoming S and P waves. Zoeppritz' and Knott's equations} \label{sec_Tr_1}
We are interested first in how an incoming wave, either an S or a P one, is reflected and transmitted across $\Gamma$. We assume first that the wave front set of the incoming waves on the boundary is in the hyperbolic region on the other side of $\Gamma$ as well. This is a classical case with a long history.
As in section~\ref{sec_Ac_RT}, we assume that $\Gamma$ divides $\R^3$ locally into $\Omega_+$, where the waves come from, and $\Omega_-$, where they may transmit.
Let, as above, $u_I$ be a microlocal solution of the elastic system.
Similarly to \r{14u}, we are looking for a local solution of the form
\be{u's}
u = u_I+u_R+u_T= (u_I^p+u_I^s) + (u_R^p + u_R^s)+ (u_T^p+ u_T^s),
\ee
where the expressions in each pair of parentheses are decompositions into P and S waves,
$u_T$ is supported in $\bar\Omega_-$, and $u_I$, $u_R$ are supported in $\bar\Omega_+$. The terms with a superscript $s$ are microlocally S waves; those with a superscript $p$ are P waves.
Denote the restrictions of $c_p$ and $c_s$ to $\bar\Omega_+$ by $c_{p,+}$ and $c_{s,+}$, and those to $\bar\Omega_-$ by $c_{p,-}$ and $c_{s,-}$. A subscript $b$ denotes a boundary value. We know that $\WF(u_{I,b})$ is in the hyperbolic or the mixed region on $T^*S$ w.r.t.\ the speeds $c_{p,+}$ and $c_{s,+}$, assuming non-trivial incoming S and P waves. This may not be true
on the negative side, i.e., with respect to the speeds $c_{p,-}$ and $c_{s,-}$ but as we said above, in this section, we are assuming that $\WF(u_{I,b})$ is in the hyperbolic region with respect to them as well.
\begin{figure}[!ht]
\includegraphics[page=14,scale=1]{DN_earth_layers_pics}
\caption{The elastic transmission problem: Reflected and transmitted P and S waves from an incident P wave (the incoming S wave not shown).
In this diagram, each speed is faster in the lower half space, which decreases the angles the transmitted rays make with $\xi'$ compared to the reflected ones; otherwise evanescent modes would appear.
}\label{pic2}
\end{figure}
The transmission conditions $[u]_\Gamma=0$, $[Nu]_\Gamma=0$ in \r{tr} are equivalent to
\be{RMC4.1}
\begin{split}
U^+_\text{in}w_{I,b}+ U_\text{out}^+w_{R,b} &= U^-_\text{out}w_{T,b},\\
M^+_\text{in}w_{I,b}+ M^+_\text{out}w_{R,b} &= M^-_\text{out}w_{T,b},
\end{split}
\ee
where the $\pm$ superscripts indicate that the corresponding operators act in $\Omega_\pm$. We will show next that this system is elliptic for the recovery of $w_{R,b}$ and $w_{T,b}$ given $w_{I,b}$. In fact, ellipticity is a consequence of energy preservation. Take the dot product of the two equations above (recall that we work at a fixed point where the metric is transformed to a Euclidean one). We get
\be{RMC_en0}
\langle U^+_\text{in}w_{I,b},M^+_\text{in}w_{I,b}\rangle + \langle U^+_\text{out}w_{R,b},M^+_\text{out}w_{R,b}\rangle =\langle U^-_\text{out}
w_{T,b},M^-_\text{out}w_{T,b}\rangle
\ee
because it can be shown that $(U^+_\text{in})^* M^+_\text{out}+ (M^+_\text{in})^* U^+_\text{out}=0$ up to smoothing terms. The latter can be proven in the following way. The quadratic form $\langle U_\textrm{in}^+ w_{I,b} , M_\textrm{in}^+ w_{I,b}\rangle$ is proportional to the energy flux of $u_I$ through $\R\times \Gamma$ as can be shown by integration by parts: we get $2\Re\int_{\R\times \Gamma} \langle u_t,\Lambda u\rangle$, see, e.g., \cite{SU-thermo_brain}. Similarly, the other two forms are proportional to energy fluxes, and the signs, after a multiplication by the same constant, are $+,-,+$.
Then if $w_{I,b}=0$ (i.e., if \r{RMC4.1} is homogeneous), the signs of the forms imply the zero solution only.
The cancellation equality above reflects the fact that the incoming and the outgoing waves are microlocally separated. We are not going to prove it this way because below we will get a direct confirmation for the principal symbols, which is what we need.
In matrix form, that system is given by
\be{RMC4.1a}
\begin{pmatrix} U^+_\text{out} & -U^-_\text{out}\\ M^+_\text{out} & -M^-_\text{out}\end{pmatrix}
\begin{pmatrix} w_{R,b}\\ w_{T,b}\end{pmatrix}
= -\begin{pmatrix} U^+_\text{in} w_{I,b}\\ M^+_\text{in} w_{I,b}\end{pmatrix}.
\ee
We compute the principal symbol of the matrix operator applied to $(w_{R,b},w_{T,b})$. As in the previous section, we work at a fixed point where the boundary metric is chosen to be Euclidean. By the invariance under rotations in the $x^1x^2$ plane, we can perform the computations when $\xi_2=0$, as in the previous section. For the principal amplitude of $w$ on $\Gamma$, we will adopt the following notation: $(\SH,\SV,\PP)^T$, i.e., $\PP=a^p$ in the notation of the previous section, and $a^s=(\SH,\SV)$ is the decomposition of the principal amplitude of the potential (on the boundary) of the S wave $u^s=D\times w^s$ into shear-horizontal and shear-vertical terms. We use the subscripts $I,R,T$ for the same purpose as above.
The system \r{RMC4.1a} then decouples into a $4\times 4$ one and a $2\times2$ one. The $4\times 4$ system has the form
\be{RMC_1}
A_\textrm{in}^+(\PP_I , \SV_I)^T + A_\textrm{out}^+(\PP_R, \SV_R)^T=
A_\textrm{out}^-(\PP_T, \SV_T)^T.
\ee
We use the notations $A_\text{in}$ and $A_\text{out}$, see \r{u_C5}, with plus or minus superscripts depending on which side of $\Gamma$ they are related to. By Lemma~\ref{lemma_A}, $(A_\text{in}, A_\text{out})$ is elliptic.
The second system, describing the reflection and the transmission of SH waves, is
\be{RMC4.2}
\begin{pmatrix}
\xi_{3,+}^s & \xi_{3,-}^s \\
\mu_+(\xi_{3,+}^s)^2 & -\mu_-(\xi_{3,-}^s)^2 \end{pmatrix}
\begin{pmatrix}SH_R\\ SH_T \end{pmatrix}
= SH_I\begin{pmatrix}\xi_{3,+}^s \\-\mu_+(\xi_{3,+}^s)^2 \end{pmatrix} .
\ee
It has a negative determinant, therefore it is elliptic. This decoupling shows that the SH waves do not convert to other modes and reflect and transmit similarly to acoustic waves. We can write \r{RMC4.2} as
\[
\xi_{3,+}^s(\SH_R - \SH_I)=-\xi_{3,-}^s \SH_T, \quad
\mu_+(\xi_{3,+}^s)^2(\SH_R + \SH_I)=\mu_-(\xi_{3,-}^s)^2 \SH_T.
\]
Multiply those equations to get
\be{RMC101}
\rho_+c_{s,+}^2(\xi_{3,+}^s)^3\left(|\SH_R|^2 - |\SH_I|^2\right) +\rho_- c_{s,-}^{2} (\xi_{3,-}^s)^3| \SH_T|^2=0
\ee
when all $w$'s are real. If they are complex, we can justify this by the equality $\Re (z-w)(\bar z+\bar w)=|z|^2-|w|^2$. Without going into details, we mention that this is actually an energy equality of the kind \r{RMC_en0} with $c_{s,\pm}^2$ normalization factors since the column vectors of $U$ in \r{U} are not normalized according to the corresponding speed, $\rho_\pm$ are volume element factors, the $(\xi_{3,\pm}^s)^2$ factors come from the contribution of an S wave with principal term proportional to $(\xi_1,0,-\xi_3^s )\times ( 1,0,0)= \xi_3^s(0,1,0)$ to $Nu$; and the extra $\xi_{3,\pm}^s$ factor accounts for the angle of incidence of reflection/transmission.
Equations \r{RMC4.2} imply that when $\SH_I\not=0$, we have $\SH_T\not=0$; and $\SH_R=0$ when $\mu_+ \xi_{3,+}^s = \mu_- \xi_{3,-}^s$, which can happen for a fixed $|\xi'|$.
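Explicitly, setting $\xi_\pm:=\xi_{3,\pm}^s$ for brevity, the solution of \r{RMC4.2} is
\[
\SH_R = \frac{\mu_-\xi_- -\mu_+\xi_+}{\mu_+\xi_+ +\mu_-\xi_-}\,\SH_I, \qquad
\SH_T = \frac{2\mu_+\xi_+^2}{\xi_-\left(\mu_+\xi_+ +\mu_-\xi_-\right)}\,\SH_I,
\]
the elastic analogs of the acoustic reflection and transmission coefficients; the energy identity \r{RMC101} can be verified directly from these formulas.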
We are going back to the system \r{RMC_1}.
We will transform it into a form used in the geophysics literature. Let $\theta_+^p$, $\theta_+^s$, $\theta_-^p$ and $\theta_-^s$ be the angles between the normal and $\xi^p_R$, $\xi^s_R$, $\xi_T^p$ and $\xi_T^s$, respectively, see Figure~\ref{pic2}. Note that those angles are in $[0,\pi/2)$ and we exclude the zero ones below just to be able to put the equations into the desired form and to compare them with classical results. The singularity at $0$ can be resolved by multiplying the corresponding equations by the appropriate sine functions. Then
\be{RMC_cot}
\xi^p_{3,+}/\xi_1=\cot\theta_+^p, \quad ( 2\xi_1^2-c_{s,+}^{-2}\tau^2 ) /\xi_1^2=1-(\xi_{3,+}^s)^2/\xi_1^2=1-\cot^2\theta_+^s,
\ee
and similarly for the other angles.
Divide the first two equations in \r{RMC_1} by $\xi_1$ and the last two by $\xi_1^2$, for $\xi_1\not=0$, to put the system in the form $A'\mathbf{a} =B' \mathbf{b}$ with
\[
A':= \begin{pmatrix}
1&-\cot\theta_{+}^s&-1&-\cot\theta_{-}^s\\
\cot\theta_{+}^p&1& \cot\theta_{-}^p&-1\\
2\mu_+\cot\theta_{+}^p & \mu_+ (1-\cot^2\theta_+^s)
&2\mu_-\cot\theta_{-}^p & - \mu_- (1-\cot^2\theta_-^s) \\
- \mu_+(1-\cot^2\theta_+^s) & 2 \mu_+ \cot\theta_{+}^s &
\mu_-(1-\cot^2\theta_-^s) & 2\mu_- \cot\theta_{-}^s
\end{pmatrix}
\]
and similarly, $B'$ is the rightmost $4\times 2$ block of $A'$ with all minus subscripts replaced by plus ones. Here, $\mathbf{a} =(\PP_R, \SV_R, \PP_T, \SV_T)^T $, $\mathbf{b}= (\PP_I, \SV_I)^T$.
The resulting equations are \textit{Knott's equations} \cite{Knott1899}, derived by Knott in 1899 for a flat boundary and constant coefficients. The form here corresponds to \cite{Sheriff_Seismology}. We write them as
\[
\begin{split}
(\PP_R+\PP_I)-\cot\theta_{+}^s( \SV_R - \SV_I )&=\PP_T +\cot\theta_{-}^s \SV_T, \\
\cot\theta_{+}^p (\PP_R-\PP_I) +( \SV_R + \SV_I )&= -\cot\theta_{-}^p\PP_T+\SV_T,\\
2\mu_+\cot\theta_{+}^p(\PP_R-\PP_I) + \mu_+ (1-\cot^2\theta_+^s) ( \SV_R + \SV_I )&=
-2\mu_-\cot\theta_{-}^p \PP_T+ \mu_- (1-\cot^2\theta_-^s)\SV_T , \\
- \mu_+(1-\cot^2\theta_+^s) (\PP_R+\PP_I)+ 2\mu_+ \cot\theta_{+}^s ( \SV_R - \SV_I ) &=
- \mu_-(1-\cot^2\theta_-^s)\PP_T - 2\mu_- \cot\theta_{-}^s \SV_T .
\end{split}
\]
Following \cite{Knott1899}, we multiply the corresponding sides of the first and the third equations; then do the same thing with the second and the fourth one and add the results to get
\be{RMC_en}
\begin{split}
\mu_+\frac{\cot\theta_+^p}{\sin^2\theta_+^s}\left(|\PP_R|^2- |\PP_I|^2\right) &+
\mu_+\frac{\cot\theta_+^s}{\sin^2\theta_+^s}\left(|\SV_R|^2 - |\SV_I|^2\right)\\
&+
\mu_-\frac{\cot\theta_-^p}{\sin^2\theta_-^s}|\PP_T|^2+
\mu_-\frac{\cot\theta_-^s}{\sin^2\theta_-^s}|\SV_T|^2= 0,
\end{split}
\ee
therefore,
\be{RMC7}
\begin{split}
\rho_+\cot\theta_+^p \left(|\PP_R|^2- |\PP_I|^2\right) &+
\rho_+ \cot\theta_+^s (|\SV_R|^2-|\SV_I|^2 )\\
& +
\rho_- \cot\theta_-^p |\PP_T|^2+
\rho_- \cot\theta_-^s |\SV_T|^2= 0.
\end{split}
\ee
We used here that $\rho_+\sin^2\theta_+^s=(\xi_1^2/\tau^2)\mu_+$ and similarly for the other terms.
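Indeed, $\sin\theta_+^s=\xi_1/|\xi^s|$ with $|\xi^s|=|\tau|/c_{s,+}$ on the characteristic variety, so
\[
\rho_+\sin^2\theta_+^s=\rho_+\,\frac{c_{s,+}^2\,\xi_1^2}{\tau^2}=\frac{\xi_1^2}{\tau^2}\,\mu_+ ,
\]
and similarly for the remaining three terms; thus \r{RMC7} is \r{RMC_en} multiplied by $\xi_1^2/\tau^2$.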
As noted by Knott \cite{Knott1899}, this is an energy equality, stating that the sum of the energy fluxes of the four generated waves, on a principal symbol level, equals that of the incident one. It is also a version of \r{RMC101}.
Equation \r{RMC7} implies that the homogeneous system $A'\mathbf{a}=0$ has the zero solution only. Therefore, $A'$ is elliptic. Explicit formulas for the solution of this system can be found in \cite{aki2002quantitative} for the flat constant coefficient case, and those formulas generalize to our case once we make them invariant.
\subsection{The general case with incoming waves from both sides} \label{sec_9.2}
We assume waves coming from both sides, see Figure~\ref{pic2a}, some of them possibly evanescent, with Dirichlet (and therefore Cauchy) data of their traces on $\Gamma$ in a small neighborhood of some covector in $T^*\Gamma$.
We classify the cases as hyperbolic-hyperbolic (HH), hyperbolic-mixed (HM), hyperbolic-elliptic (HE), mixed-mixed (MM), mixed-elliptic (ME) and elliptic-elliptic (EE) according to the location of the wave front of the Cauchy data on the positive/negative side of $\Gamma$.
\subsubsection{The hyperbolic-hyperbolic (HH) case} \label{sec_HH}
Assume a wave front set in the hyperbolic region on both sides.
This is automatically true if on each side, we have both S and P waves. The construction in section~\ref{sec_Tr_1} then generalizes directly. We are going to denote the incoming and the outgoing solutions $w$ on each side by $w_\text{in}^+$, $w_\text{out}^+$, $w_\text{in}^-$, $w_\text{out}^-$. The transmission conditions \r{tr} then take the form
\be{RMC4.1n}
\begin{split}
U^+_\text{in}w_\text{in,b}^++ U_\text{out}^+w_\text{out,b}^+ &=U^-_\text{in}w_\text{in,b}^-+ U_\text{out}^- w_\text{out,b}^-,\\
M^+_\text{in}w_\text{in,b}^++ M_\text{out}^+w_\text{out,b}^+ &=M^-_\text{in}w_\text{in,b}^-+ M_\text{out}^- w_\text{out,b}^-,
\end{split}
\ee
compare with \r{RMC4.1} and \r{u_C3}.
We use the notation in section~\ref{sec_u_C} but we put superscripts $+$ and $-$ depending on the side of $\Gamma$ we work on.
We use the notation $ (\PP,\SV,\SH)$ as above for the principal amplitude of $w$ on $\Gamma$, with the corresponding subscripts and the superscripts.
Then \r{RMC4.1n} decouples into the following two equations
\be{main_system}
A_\textrm{in}^+(\PP_\text{in}^+ , \SV_\text{in}^+)^T + A_\textrm{out}^+(\PP_\text{out}^+, \SV_\text{out}^+)^T=
A_\textrm{in}^-(\PP_\text{in}^- , \SV_\text{in}^-)^T + A_\textrm{out}^-(\PP_\text{out}^-, \SV_\text{out}^-)^T
\ee
and
\be{2x2}
\begin{pmatrix} \Big.
-\xi_{3,+}^s & \xi_{3,+}^s \\
\mu_+(\xi_{3,+}^s)^2 & \mu_+(\xi_{3,+}^s)^2 \end{pmatrix}
\begin{pmatrix} \Big. \SH_\text{in}^+\\ \SH_\text{out}^+ \end{pmatrix}
=\begin{pmatrix}\Big.
\xi_{3,-}^s & -\xi_{3,-}^s \\
\mu_-(\xi_{3,-}^s)^2 & \mu_-(\xi_{3,-}^s)^2 \end{pmatrix}
\begin{pmatrix}\Big. \SH_\text{in}^-\\ \SH_\text{out}^- \end{pmatrix},
\ee
compare to \r{RMC_1} and \r{RMC4.2}.
\begin{figure}[!ht]
\includegraphics[page=3,scale=1]{DN_earth_layers_pics}
\caption{The transmission problem in the (HH) case: the general case of eight waves with wave front set projected to the same covector. The SH waves behave as acoustic ones.
}\label{pic2a}
\end{figure}
The properties of the SH components are similar to those of acoustic waves at an interface, see \r{ac20} there, and the discussion following it. In particular, there is no mode conversion (on the principal symbol level at least, which is what we study).
As above, we can derive the following energy equality:
\be{RMC_en2}
\begin{split}
&\rho_+ \cot\theta_+^p \left(|\PP_\text{out}^+|^2- |\PP_\text{in}^+|^2\right) +
\rho_+ \cot\theta_+^s \left( |\SV_\text{out}^+|^2 - |\SV_\text{in}^+|^2\right)\\
&\quad +\rho_- \cot\theta_-^p \left(|\PP_\text{out}^-|^2 -|\PP_\text{in}^- |^2\right) +
\rho_- \cot\theta_-^s \left( |\SV_\text{out}^-|^2- |\SV_\text{in}^-|^2\right)=0.
\end{split}
\ee
For future reference in the case of evanescent modes, we write \r{RMC_en2} as
\be{RMC_en3}
\begin{split}
&\Re\Big(\rho_+ \xi_{3,+}^p \left(|\PP_\text{out}^+|^2- |\PP_\text{in}^+|^2\right) +
\rho_+ \xi_{3,+}^s \left( |\SV_\text{out}^+|^2 - |\SV_\text{in}^+|^2\right) \\
&\quad +\rho_- \xi_{3,-}^p \left(|\PP_\text{out}^-|^2 -|\PP_\text{in}^- |^2\right) +
\rho_- \xi_{3,-}^s \left( |\SV_\text{out}^-|^2- |\SV_\text{in}^-|^2\right)\Big) =0,
\end{split}
\ee
see \r{RMC_cot}. Written this way, \r{RMC_en3} holds even if the quantities above are not necessarily real; and the proof requires multiplying the first row of \r{main_system} by the \textit{conjugate} of the third one, and doing the same for the second and the fourth ones. This is an energy identity, see the paragraph following \r{RMC_en0}. It says that the combined energy flux of all incoming waves on $\Gamma$ (on principal level) equals that of the outgoing ones.
\begin{lemma}\label{lemma_A1}
The matrices $(A_\textrm{\rm in}^+, A_\textrm{\rm out}^+)$, $(A_\textrm{\rm in}^-, A_\textrm{\rm out}^-)$, $(A_\textrm{\rm in}^+, A_\textrm{\rm in}^-)$, $(A_\textrm{\rm out}^+, A_\textrm{\rm out}^-)$ are elliptic. Also, system \r{2x2} is elliptic for $(\SH_\textrm{\rm in}^+, \SH_\textrm{\rm out}^+)$, and also for $(\SH_\textrm{\rm in}^+, \SH_\textrm{\rm in}^-)$.
\end{lemma}
\begin{proof}
The ellipticity of the first two follows from Lemma~\ref{lemma_A}. The ellipticity of the next two follows from the energy equality \r{RMC_en2}. The second statement follows from the fact that the corresponding determinants are negative, and positive, respectively.
\end{proof}
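For instance, with the conventions of \r{2x2} and $\xi_{3,\pm}^s>0$ in the hyperbolic region, the two determinants in question are
\[
\det\begin{pmatrix} -\xi_{3,+}^s & \xi_{3,+}^s \\ \mu_+(\xi_{3,+}^s)^2 & \mu_+(\xi_{3,+}^s)^2\end{pmatrix}=-2\mu_+ (\xi_{3,+}^s)^3<0, \qquad
\mu_-\xi_{3,+}^s(\xi_{3,-}^s)^2+\mu_+\xi_{3,-}^s(\xi_{3,+}^s)^2>0,
\]
the second one being the determinant of the system for $(\SH_\text{in}^+,\SH_\text{in}^-)$ obtained by moving the $\SH_\text{in}^-$ column of \r{2x2} to the left.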
Note that the ellipticity of $(A_\textrm{\rm in}^+, A_\textrm{\rm out}^+)$ and $(A_\textrm{\rm in}^-, A_\textrm{\rm out}^-)$ holds in the mixed and in the elliptic case as well by the proof of Lemma~\ref{lemma_A}.
This has the following implications (without the claim that none of the amplitudes vanishes so far); compare with the discussion following \r{ac20}. Recall that we assume that the Cauchy data on the boundary is in the hyperbolic region with respect to all four speeds.
\begin{itemize}
\item[(i)] For every choice of the four incoming waves, there is a unique solution (ellipticity) for the four outgoing ones. Indeed, \r{RMC_en3} implies a unique solution of the homogeneous problem.
\item[(ii)] An incoming P wave (without any other incoming waves on either side) creates reflected P and S waves and transmitted P and S waves.
\item[(iii)] The same is true for an incoming S wave.
\item[(iv)] [Control] For every choice of a principal amplitude of an outgoing transmitted P wave, one can choose incoming S and P waves which would give that pre-assigned transmitted P wave and no (on the principal level) transmitted S wave; see the display after this list. The same is true with the roles of the transmitted P and S waves swapped.
\end{itemize}
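To spell out the control statement (iv): with no incoming waves from the negative side and with the transmitted amplitudes prescribed as $(\PP_\text{out}^-,\SV_\text{out}^-)=(\PP_T,0)$, system \r{main_system} becomes
\[
A_\textrm{in}^+(\PP_\text{in}^+ , \SV_\text{in}^+)^T + A_\textrm{out}^+(\PP_\text{out}^+, \SV_\text{out}^+)^T= A_\textrm{out}^-(\PP_T, 0)^T,
\]
which is uniquely solvable for the four amplitudes on the positive side since $(A_\textrm{in}^+, A_\textrm{out}^+)$ is elliptic by Lemma~\ref{lemma_A1}.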
\subsubsection{The hyperbolic-mixed (HM) case} \label{sec_hm}
Assume the wave front set of the Cauchy data is in the mixed region in $\Omega_-$ but still in the hyperbolic one in $\Omega_+$. Since we work in the elliptic region for $c_{p,-}$, we will call the principal amplitude of the corresponding microlocal solution $P^-$ (no in/out), see Figure~\ref{pic_hm}.
\begin{figure}[!ht]
\includegraphics[page=10,scale=1]{DN_earth_layers_pics}
\caption{The transmission problem in the hyperbolic-mixed (HM) case: The $P^-$ wave is evanescent, no incoming/outgoing parts. SH waves do not create P waves.
}\label{pic_hm}
\end{figure}
The approach we follow is the same as above --- we want to analyze the system \r{RMC4.1n}, and for example solve it for all outgoing waves given the incoming ones by proving ellipticity. What changes is that $\xi_{3,-}^p$ becomes pure imaginary, see \r{xi_3pi}. One should also change the sign of $\xi_{3,-}^p$ in $A_\text{out}$ since there are no plus/minus square roots, but those entries will be multiplied by zero below.
Then \r{main_system} reduces to
\be{main_system2}
A_\textrm{in}^+(\PP_\text{in}^+ , \SV_\text{in}^+)^T + A_\textrm{out}^+(\PP_\text{out}^+, \SV_\text{out}^+)^T=
A_\textrm{in}^-(0 , \SV_\text{in}^-)^T + A_\textrm{out}^-(\PP^-, \SV_\text{out}^-)^T,
\ee
see \r{u_C4'}.
The energy equality \r{RMC_en3} reduces to
\be{RMC_en4}
\rho_+ \xi_{3,+}^p \left(|\PP_\text{out}^+|^2- |\PP_\text{in}^+|^2\right) +
\rho_+ \xi_{3,+}^s \left( |\SV_\text{out}^+|^2 - |\SV_\text{in}^+|^2\right) +
\rho_- \xi_{3,-}^s \left( |\SV_\text{out}^-|^2- |\SV_\text{in}^-|^2\right)=0,
\ee
see also \r{ac_fir3}.
We get that for any choice of the three incoming waves, the resulting system for the three outgoing ones plus $P^-$ is elliptic. Indeed, it is enough to show this for the homogeneous system. If all incoming waves vanish, then
\r{RMC_en4} implies $\PP_\text{out}^+ = \SV_\text{out}^+= \SV_\text{out}^-=0$. Then the only possibly non-zero amplitude in \r{main_system2} is $P^-$, but then we can see directly that \r{main_system2} implies $P^-=0$. The system \r{2x2} for the SH waves is unaffected by the P wave becoming evanescent.
Therefore, constructing the outgoing solution is a well-posed (elliptic) problem.
As far as control from each side is concerned, on the negative one, where $P^-$ lies, the Cauchy data has a constrained structure; then so does the data on the positive side. Therefore, the configuration on the positive side cannot be controlled from the negative one. On the other hand, we can create any hyperbolic configuration on the negative side with appropriate waves on the positive one. In particular, if we want $P^-=0$, $SV^-_\text{in}=0$ and $SV^-_\text{out}\not=0$, we can take the Cauchy data of it and solve \r{main_system2} for the plus amplitudes since on the positive side, we are in the hyperbolic region and the Cauchy problem is elliptic.
Control for SH waves on principal level is the same as in the acoustic case since those waves do not create reflected/transmitted P or SV waves. Since we defined SH/SV waves on principal level only, and the system for the amplitudes is decoupled only on a principal level a priori, the control question needs a further clarification when evanescent P modes are possible. Let us say that we want to create S waves on the negative side with given principal amplitudes $\SV_\text{in}^-$, $\SV_\text{out}^-$, $\SH_\text{in}^-$, $\SH_\text{out}^-$, $P^-$. The argument above says that we can choose the principal amplitudes of the waves on the top to make this happen on a principal symbol level, see Figure~\ref{pic_hm}. Then we fix the six waves on the positive side which have those amplitudes as their full ones in those coordinates. For each one, we need to solve, up to infinite order, a transmission, not a control problem, which is well posed. This would possibly create lower order waves on the negative side but it will not change the principal parts. In particular, if we want $\SV_\text{in}^- =\SV_\text{out}^- = \SH_\text{in}^-=0$ but $\SH_\text{out}^-\not=0$, this step could create lower order $\SV_\text{in}^-$, $\SV_\text{out}^-$, $\SH_\text{in}^-$ waves. This is not a problem since we will need the principal parts later only. We apply the same argument in the cases below.
\subsubsection{The mixed-mixed (MM) case} \label{sec_t_mm}
Then on both sides, the S waves are hyperbolic, and $P^-$ and $P^+$ are evanescent, see Figure~\ref{pic_mm}. In this case, $\xi_{3,\pm}^p$ are pure imaginary, see \r{xi_3pi}. Then there is only one evanescent P wave
in $\Omega_-$ and one in $\Omega_+$ and we omit the subscripts ``in/out'' for them.
As above, we show below that on a principal symbol level, the energy is carried by the S waves only. We also check directly that the homogeneous problem (no incoming waves) has the trivial solution only, including trivial evanescent modes $P^-$ and $P^+$. Therefore, we still get a well-posed problem for the outgoing solution.
In \r{main_system2}, we can formally set $P_\text{in}^+=0$, $P_\text{out}^+=P^+$ and in the energy equality \r{RMC_en4}, we remove the $P$ amplitudes to get
\be{msmm}
A_\textrm{in}^+(0, \SV_\text{in}^+)^T + A_\textrm{out}^+(\PP^+ , \SV_\text{out}^+)^T=
A_\textrm{in}^-(0 , \SV_\text{in}^-)^T + A_\textrm{out}^-(\PP^-, \SV_\text{out}^-)^T,
\ee
and
\be{RMC_en4mm}
\rho_+ \xi_{3,+}^s \left( |\SV_\text{out}^+|^2 - |\SV_\text{in}^+|^2\right) +
\rho_- \xi_{3,-}^s \left( |\SV_\text{out}^-|^2- |\SV_\text{in}^-|^2\right)=0,
\ee
with $\xi_{3,\pm}^p$ pure imaginary as in \r{xi_3pi}. We will show that \r{msmm} is elliptic for $\SV_\text{out}^-$, $\SV_\text{out}^+$, $P^-$, $P^+$, given $\SV_\text{in}^-$, $\SV_\text{in}^+$. As before, it is enough to show that the homogeneous system is uniquely solvable. This follows from Lemma~\ref{lemma_A} or Lemma~\ref{lemma_A1}, which remain true in the elliptic and the mixed regions.
\begin{figure}[!ht]
\includegraphics[page=11,scale=1]{DN_earth_layers_pics}
\caption{The transmission problem in the mixed-mixed (MM) case: The $P^-$ and the $P^+$ waves are evanescent, no incoming/outgoing parts. The SH waves behave as acoustic ones.
}\label{pic_mm}
\end{figure}
The SH waves behave as in the acoustic case, see \r{ac20} and \r{ac_en1} and as in the (HM) case.
Control is possible for the SH waves. Let us say that we want to create SH waves on the negative side with prescribed principal amplitudes $\SH_\text{in}^-$, $\SH_\text{out}^-$ and no other waves there. On principal level, we choose $\SH_\text{in}^+$, $\SH_\text{out}^+$ to achieve that. Then, as above, we choose such S waves on the positive side with those principal amplitudes. Solving the direct transmission problem with (hyperbolic only) sources on the positive side, we may get additional waves on the negative side as shown on Figure~\ref{pic_mm}, left, but they are of lower order.
One can also show that control for SV waves on either side is possible from the other one, which would create evanescent $P^+$ and $P^-$ modes as well. Indeed, to show that given any $SV^-_\text{in}$, $SV^-_\text{out}$, we can choose $SV^+_\text{in}$, $SV^+_\text{out}$ creating those waves plus the ``byproducts'' $P^-$, $P^+$, we need to show that \r{msmm} is elliptic for $SV^+_\text{in}$, $SV^+_\text{out}$, $P^-$, $P^+$. A direct but tedious computation shows that the determinant of this system equals
\[
-2\xi_{3,+}^s\mu_+\tau^2c_{s,+}^{-2} c_{s,+}^{-4} \left( 2\xi_1^2(\mu_+-\mu_-)(\xi_{3,+}^p+\xi_{3,-}^p)c_{s,-}^{2}c_{s,+}^{2} +\tau^2(\xi_{3,+}^p\mu_- c_{s,+}^{2}- c_{s,-}^{2} \mu_+\xi_{3,-}^p ) \right).
\]
The algebraic structure of this expression implies that this determinant is not identically zero for all $\xi_1$ unless none of the coefficients jump at the interface, and we assumed that this could not happen. Therefore, it could be zero for a discrete set of $\xi_1$'s only and then we have control.
\subsubsection{The hyperbolic-elliptic (HE) case} \label{sec_t_he}
Assume that both the P and the S waves on the negative side are evanescent but they are hyperbolic on the plus side, see Figure~\ref{pic_he}. Then we have full reflection on the positive side with respect to all waves. System \r{main_system2} reduces to
\be{ms32}
A_\textrm{in}^+(\PP_\text{in}^+ , \SV_\text{in}^+)^T + A_\textrm{out}^+(\PP_\text{out}^+, \SV_\text{out}^+)^T= A_\textrm{out}^-(\PP^-, \SV^-)^T,
\ee
where $P^-$ and $SV^-$ are evanescent and $\xi_{3,-}^p$ and $\xi_{3,-}^s$ are pure imaginary as in \r{xi_3pi} and \r{xi_3si}. The energy equality takes the form
\be{en_mm}
\rho_+ \xi_{3,+}^p \left(|\PP_\text{out}^+|^2- |\PP_\text{in}^+|^2\right) +
\rho_+ \xi_{3,+}^s \left( |\SV_\text{out}^+|^2 - |\SV_\text{in}^+|^2\right) =0.
\ee
\begin{figure}[!ht]
\includegraphics[page=12,scale=1]{DN_earth_layers_pics}
\caption{The transmission problem in the hyperbolic-elliptic (HE) case: The $P^-$ and the $S^-$ waves are evanescent, no incoming/outgoing parts. The SH waves behave as acoustic ones with a total reflection on the top.} \label{pic_he}
\end{figure}
This is similar to the hyperbolic case in the Cauchy boundary value problem, see \r{sec_el_bvpC}. System \r{ms32} is elliptic for solving for $\PP_\text{out}^+$, $\SV_\text{out}^+$, $P^-$, $\SV^-$ by \r{en_mm} and Lemma~\ref{lemma_A1}.
The SH waves are treated similarly. They experience a full reflection as in the acoustic case.
\subsubsection{The mixed-elliptic (ME) case} \label{sec_t_me}
Assume that only the $\SV^+$ waves are hyperbolic. Then we have full reflection of the S wave on the positive side with transmitted evanescent $P^-$ and $S^-$ waves and mode converted $P^+$ one on the positive side, see Figure~\ref{pic_me}. System \r{main_system2} reduces to
\be{me32}
A_\textrm{in}^+(0, \SV_\text{in}^+)^T + A_\textrm{out}^+(\PP^+, \SV_\text{out}^+)^T= A_\textrm{out}^-(\PP^-, \SV^-)^T,
\ee
where $P^-$, $\SV^-$ and $\PP^+$ are evanescent and $\xi_{3,\pm }^p$ and $\xi_{3,-}^s$ are pure imaginary. The energy equality takes the form
\be{en_me}
\rho_+ \xi_{3,+}^s \left( |\SV_\text{out}^+|^2 - |\SV_\text{in}^+|^2\right) =0.
\ee
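In particular, \r{en_me} forces
\[
|\SV_\text{out}^+|=|\SV_\text{in}^+|
\]
on the principal level: the SV reflection coefficient has modulus one, i.e., the reflection is total up to a phase shift, while the evanescent modes $\PP^+$, $P^-$ and $\SV^-$ carry no energy flux.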
\begin{figure}[!ht]
\includegraphics[page=13,scale=1]{DN_earth_layers_pics}
\caption{The transmission problem in the mixed-elliptic (ME) case: Only the $\SV^+$ waves are hyperbolic. The SH waves behave as in the acoustic case and as in the (HE) case.} \label{pic_me}
\end{figure}
System \r{me32} is elliptic for solving for $\PP^+$, $\SV_\text{out}^+$, $P^-$, $\SV^-$ by \r{en_me} and Lemma~\ref{lemma_A1}.
The SH waves are treated similarly. They experience a full reflection as in the acoustic case.
\subsubsection{The elliptic-elliptic (EE) case. Stoneley waves} \label{sec_t_ee}
We assume now that all waves on both sides are evanescent. Such solutions cannot be created by S or P waves hitting $\Gamma$ but they could be created by boundary sources. We will sketch the construction of such solutions, known as Stoneley waves, first described by R.~Stoneley \cite{Stoneley} in 1924 in the case of a flat boundary and constant coefficients, see also \cite{Yamamoto_elastic_89} for a curved boundary and constant coefficients.
We call the evanescent amplitudes $P^-$, $P^+$, $SV^-$, $SV^+$.
Then
\be{ms_ee}
A_\textrm{out}^+(\PP^+, \SV^+)^T= A_\textrm{out}^-(\PP^-, \SV^-)^T.
\ee
Since $\xi_{3,-}^p$, $\xi_{3,-}^s$, $\xi_{3,+}^p$ and $\xi_{3,+}^s$ are all pure imaginary, with a positive imaginary part, the matrices above do not really have outgoing properties and the subscript ``out'' could be omitted. In this region,
\be{u_C5x}
A_\text{out}^\pm := \begin{pmatrix}
\xi_1&\xi_{3,\pm}^s \\
-\xi_{3,\pm}^p&\xi_1 \\
-2\mu_\pm\xi_{3,\pm}^p \xi_1 & \mu_\pm(2\xi_1^2-c_{s,\pm}^{-2}\tau^2) \\
- \mu_\pm( 2\xi_1^2-c_{s,\pm}^{-2}\tau^2 ) & -2\mu_\pm \xi_{3,\pm}^s\xi_1
\end{pmatrix},
\ee
see \r{u_C5}. Then $F:= \det(A_\text{out}^+,- A_\text{out}^-)$ is a positively homogeneous function of $(\tau,\xi_1)$ of order $6$. Writing $F$ as $\xi_1^6$ times a function $F_0$ of $s:= |\tau|/\xi_1$ (and the base point $x'$), we get that $(A_\text{out}^+,- A_\text{out}^-)$ is elliptic (again, after adjusting the order of the last two rows from 2 to 1) where $F_0(s,x')\not=0$. Passing to an invariant formulation as in Section~\ref{sec_SV-SH}, we can replace $\xi_1$ by $|\xi'|$; then $s= |\tau|/|\xi'|$ with the norm of $\xi'$ being the covector one w.r.t.\ the metric $g$, which in the isotropic case is the boundary metric induced by the Euclidean one. Then $F$ is a homogeneous symbol. Assume that $F_0$ has a simple zero for some $s=c_\text{St}$ corresponding to the elliptic-elliptic region, i.e., in $s<\min(c_{s,-},c_{s,+})$. Then $F=\left(\tau^2-c_\text{St}^2|\xi'|^2\right)\!\tilde F$ with $\tilde F$ elliptic near $\Sigma_\text{St}:= \{\tau^2=c_\text{St}^2|\xi'|^2\}$. Then $(A_\text{out}^+,- A_\text{out}^-)$ is a \PDO\ of real principal type (again, the order can be adjusted to be one for all rows) in the sense of \cite{Dencker_polar}. Singularities on $T^*\Gamma$ propagate along the null bicharacteristics of the Hamiltonian $H:= \left(\tau^2-c_\text{St}^2|\xi'|^2\right)$. This is a wave-type Hamiltonian with a wave speed $c_\text{St}$ which is slower than the S and the P speeds on either side of $\Gamma$. A well posed problem would be, for example, one with Cauchy data on $\{t=0\}\times\Gamma$.
For every microlocal solution on $\R_t\times\Gamma$, we can use its Dirichlet data to extend it to a microlocal solution on both sides of $\Gamma$ as in Section~\ref{sec_6.3}, see also Section~\ref{sec_Rayleigh}. Rayleigh waves can be considered as a limit case of Stoneley waves.
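For orientation, we recall (without proof, and in the flat, constant-coefficient setting) that when one side degenerates to vacuum, the analogous determinant reduces, up to an elliptic factor, to the classical Rayleigh function
\[
R(s)=\Big(2-\frac{s^2}{c_{s}^{2}}\Big)^2-4\sqrt{1-\frac{s^2}{c_{p}^{2}}}\,\sqrt{1-\frac{s^2}{c_{s}^{2}}},\qquad s=|\tau|/|\xi'|,
\]
which has a unique simple zero $s=c_R\in(0,c_{s})$; that root plays the role which $c_\text{St}$ plays above.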
The function $F_0$ does have a (simple) zero in some cases, at least. Some examples can be found in Stoneley's original paper \cite{Stoneley}.
\subsection{Summary}\label{sec_Tr_summary}
We summarize some of the results above as follows.
\begin{itemize}
\item[(HH)] The \textbf{hyperbolic-hyperbolic} case: we have both P and S waves on either side; each incoming wave creates two reflected and two transmitted (refracted) ones, with mode conversion.
\item[(HM)] The \textbf{hyperbolic-mixed} case: on one side there are both P and S waves, on the other one, only S waves exist (as solutions propagating singularities); the P wave is evanescent. On the other hand, there is total internal reflection of P waves but they can still create transmitted S waves.
\item[(HE)] The \textbf{hyperbolic-elliptic} case: the S and the P waves on one side are hyperbolic; the S and the P waves on the other side are evanescent. Then there is a full reflection from the first side, and the transmitted waves are only evanescent.
\item[(MM)] The \textbf{mixed-mixed} case: the S waves on both sides are hyperbolic (propagate singularities); the P waves on both sides are evanescent. In particular, an incoming S wave reflects and refracts; and it creates two evanescent P waves, one on each side, by mode conversion.
\item[(ME)] The \textbf{mixed-elliptic} case: only the S wave on one side is hyperbolic. In particular, an incoming S wave reflects; by mode conversion, it creates transmitted evanescent P and S waves on the other side and one ``reflected'' evanescent P wave.
\item[(EE)] The \textbf{elliptic-elliptic} case: all waves are evanescent. Such waves cannot be created by a P or an S wave hitting $\Gamma$ but they could be created by a boundary source. The transmission problem may lose ellipticity and allow for solutions (Stoneley waves) concentrated near $\Gamma$.
\end{itemize}
\subsection{Justification of the parametrix}
In the construction above, we work with microlocal solutions which may have singularities but they, and their first derivatives, have traces on timelike surfaces. We assume that solutions have wave front sets disjoint from bicharacteristics tangential to some of the interfaces, which can be achieved by choosing the wave front of their Cauchy data disjoint from projections of such directions in $T^*\Gamma$. The latter set has zero measure on $S^*\bo$ for $t$ restricted to any fixed finite interval. The construction actually provides an FIO, mapping $f$ to the microlocal outgoing solution $u$ with that boundary data.
To justify the parametrix, we need to subtract it from the actual solution and show that the difference is smooth up to each interface $\Gamma_i$. Such a difference $w$ would solve a non-homogeneous problem
\be{1A}
\left\{\begin{array}{lr}
w_{tt} -Ew \in C^\infty(\R\times\bar \Omega),\\
w|_{\R\times\bo} \in C^\infty(\R\times \bo),\\
\ \! \! [w]|_{\Gamma_j} , [Nw]|_{\Gamma_j} \in C^\infty(\R\times \Gamma_j), \quad j=1,\dots,k, \\
w|_{t<0}=0.
\end{array}\right.
\ee
A slightly weaker version of this claim can be proven, which is sufficient for our purposes. We claim that $w$ is $C^\infty$ away from $\R\times \Gamma_j$ and $\R\times\bo$, and indeed is conormal at these two in the precise sense that $w\in\Hbloc^{1,\infty}$, meaning $w$ and its first derivatives are in $L^2$ locally, and the same remains true if vector fields tangent to $\R\times \Gamma_j$ and $\R\times\bo$ are applied to these iteratively. While this is standard in the scalar case, a proof for (principally) scalar wave equations, for transmission problems, based on quadratic form considerations, showing regularity relative to the quadratic form domain, is given in \cite{DUV:Diffraction}. This proof uses b-pseudodifferential operators, introduced by Melrose \cite{Melrose:Transformation}, see also \cite{Melrose:Atiyah}, and \cite{DUV:Diffraction} for a brief summary. The simple observation made in \cite[Section~4]{DUV:Diffraction} is that when one has an internal hypersurface, such as $\R\times\Gamma_j$, one can treat it as a boundary for this b-analysis by using b-pseudodifferential operators on each half-space (which are manifolds with boundary) with matching normal operators at the common boundary; this was used in \cite[Section~4]{DUV:Diffraction} to prove propagation of singularities in the principally scalar setting. The elastic problem is not principally scalar, which indeed makes the proof of propagation of singularities significantly more difficult using these tools. However, the propagation of global regularity, in the sense that regularity, as measured by $\Hbloc^{1,m}$ (i.e.\ the space with $m$ b-, or tangential, derivatives relative to $H^1_\loc$), propagates from $t<0$ to $t\geq 0$ when the right hand side has regularity in $\Hbloc^{-1,m+1}$ (i.e.\ the space with $m+1$ b-, or tangential, derivatives relative to $H^{-1}_\loc$) is straightforward as it does not require microlocalization; slightly modified energy estimates work. This has been carried out in detail by Katsnelson for the elastic wave equation on manifolds with edges in \cite[Chapter~11]{Katsnelson:PhD}. The latter are actually more complicated than our setting as the domain of the operator is more delicate, and an essentially identical method of proof works in our case. We also refer to \cite{Katsnelson:Diffraction} for a brief summary.
We refer to \cite{Yamamoto_09} as well, where boundary regularity in the case of constant parameters has been studied.
\section{The inverse problem}\label{section_GC}
Assume that there exist two smooth non-positive functions $\foliation_s$ and $\foliation_p$ in $\Omega$ with $\d\foliation\not=0$, $\foliation^{-1}(0)=\bo$, and $\foliation^{-1}(-j)=\Gamma_j$, $j=1,\dots,k$ where $\foliation$ is either $\foliation_s$ or $\foliation_p$. Assume that the level sets $\foliation_{s}^{-1}(c)$, $\foliation_{p}^{-1}(c)$ are strictly convex w.r.t.\ the speed $c_s$, $c_p$, respectively, when viewed from $\Gamma_{0}=\bo$.
Of course, we may have just one such function, i.e., $\foliation_s=\foliation_p$ is possible.
Recall that the foliation condition implies non-trapping as noted in \cite{SUV_localrigidity}, for example. In our case, this means that rays in $\Omega_j$ not hitting $\Gamma_j$ would hit $\Gamma_{j-1}$ both in the future and in the past.
\subsection{Recovery in the first layer $\Omega_1$}
We show first that we can recover $c_p$ and $c_s$, and then $\rho$, in the first layer $\Omega_1$, i.e., between $\bo$ and $\Gamma_1$. In other words, if $\tilde \rho$, $\tilde \mu$, $\tilde \nu$ is another triple of coefficients which have the same piecewise smooth structure with jumps across some $\tilde \Gamma_j$, producing the same DN map, then they coincide with the non-tilded ones. In the lemmas below, we need solutions with a single incoming singularity (more precisely, with a single radial ray due to the conic nature of the wave front sets) which we can trace until its branches hit $\bo$ again. We can do this in two ways: first, we can have $f$ in \r{1} with such a single singularity but when we need a specific polarization, we can achieve that by choosing the potential $w$ appropriately, with that singularity. Since the operators $U_\text{in}$ and $U_\text{out}$, see \r{Ub}, are elliptic in all regions, the boundary traces of the potentials have the same wave front sets as the boundary trace of the solution $u$. Or, one can have $\WF(f)$ in a small set by choosing $\WF(w)$ on the boundary small enough and then pass to a limit when $\WF(f)$ shrinks to a single point. Since the arguments based on SH/SV waves require us to trace the leading singularities, i.e., we want to have a well defined order, working with singularities in a small conic set, for example conormal ones, is more convenient. We assume in this section that $g$ is Euclidean since we will need the results of Rachele \cite{Rachele_2000, Rachele03}, and Bhattacharyya \cite{Bhattacharyya_18}, see Remark~\ref{rem_Rachele}.
\begin{lemma}\label{lemma_G1}
Under the convex foliation assumption, $\Lambda$, known for $T\gg1$, determines uniquely $\Gamma_1$, $c_s$ and $c_p$ in $\Omega_1$. If, in addition, $c_p\not=2c_s$ pointwise in $\Omega_1$, then $\rho$ is uniquely determined in $\Omega_1$ as well.
\end{lemma}
\begin{proof}
In this and in the following proof, we consider another triple $\tilde \rho$, $\tilde \mu$, $\tilde \nu$ with the same $\Lambda$, and show that the corresponding quantities, in this case $\Gamma_1$ and the three coefficients, coincide. Sometimes, we say that a certain quantity, for example $c_s$, is known or can be recovered in some region to indicate that $c_s=\tilde c_s$ there.
First, by \cite{Rachele_2000}, we can recover the full jets of $\rho$, $c_p$ and $c_s$ on $\bo$. We will recover the speeds $c_s$ and $c_p$ first.
This follows from \cite{SUV_localrigidity}, in any subdomain separated from $\Gamma_1$, i.e., for $-1+\eps\le \foliation\le0$, $\forall \eps\in(0,1)$, with $\foliation =\foliation_s$ or $\foliation =\foliation_p$,
and it is also H\"older stable there. Indeed, for every unit P or S geodesic connecting boundary points and not intersecting $\Gamma_1$, we can construct a microlocal P or S solution in a small neighborhood of that geodesic, extended a bit outside $\Omega$, where the coefficients are extended smoothly as well. Let $f$ be the Dirichlet data of that solution on $\R_+\times\bo$. Then the outgoing solution $\tilde u$ having the same Dirichlet data has the same Neumann data as well. Also, the solution will be a P or an S wave, respectively, as well, since this property is determined by the traces of $c_p$ and $c_s$ on $\bo$, which we recovered, see the end of section~\ref{sec_ED1m}.
Therefore, singularities hitting $\bo$ from inside will be the same (a singularity hitting $\bo$ must create singular Cauchy data by the analysis in Section~\ref{sec_Ac_Cauchy}). So the scattering relations related to $c_s$ and $c_p$ are the same as those of $\tilde c_s$ and $\tilde c_p$ restricted to those geodesics. Note that this argument requires us to know that the corresponding geodesics for the second system do not hit $\tilde \Gamma_1$. If they did, we would get reflected waves of both kinds (with a possible exception of specific angles, which does not change the argument), and we would not get the same Cauchy data. Another way to exclude such rays is to note that they would create singularities of the lens relation near rays tangent to $\tilde \Gamma_1$.
This proves that $\Omega_1\subset\tilde \Omega_1$, i.e., $\tilde \Gamma_1$ is below $\Gamma_1$, and that $c_p=\tilde c_p$, $c_s=\tilde c_s$ in $\Omega_1$. On the other hand, we can swap $\tilde \Gamma_1$ and $\Gamma_1$ in this argument, therefore $\tilde \Gamma_1=\Gamma_1$. Then $c_s$ and $c_p$ are uniquely determined there. By \cite{Bhattacharyya_18}, one can recover $\rho$ in $\Omega_1$ as well under the stated condition, and therefore we can recover $\lambda$, $\mu$, too.
\end{proof}
Note that here, and in what follows, we have precise control of $T$ which we do not make explicit. Also, local knowledge of $\Lambda$ up to a smoothing operator yields recovery in an appropriate domain of influence, see also \cite{SUV_elastic} for the case of smooth coefficients.
\subsection{Recovery in the second layer $\Omega_2$}
In the next lemmas, we show that we can recover the two speeds in $\Omega_2$ under some conditions. The obstruction to the application of the method (but not necessarily to the uniqueness) is the existence of totally reflected P and/or S rays on the interior side of $\Gamma_1$ for all times (or for long enough, in the case of data on a finite time interval).
Since we need rays converging to tangential ones, the microlocal conditions can be described in terms of the sign of the jumps of the speeds at $\Gamma_1$.
In what follows, $c|_{\Gamma^\pm}$ denotes the limit of $c(x)$ as $x$ approaches $\Gamma$ from the exterior/interior.
\begin{lemma}\label{lemma_G2}
Under the assumption in the first sentence of Lemma~\ref{lemma_G1}, assume additionally that
\be{G3}
c_s|_{\Gamma_1^+}< c_s|_{\Gamma_1^-}.
\ee
Then $\Gamma_2$ and $c_s$ are determined uniquely in (the uniquely determined) $\Omega_2$.
\end{lemma}
We can interpret \r{G3} as strict convexity of $\Gamma_1$ w.r.t.\ $c_s$ with a jump, since increasing the speeds with depth guarantees strict convexity of the level surfaces. It guarantees no total internal reflection of S waves from $\Omega_2$ to $\Omega_1$. On the other hand, \r{G3} implies
\be{G5}
c_s|_{\Gamma_1^+}<c_s|_{\Gamma_1^-} < c_p|_{\Gamma_1^-}
\ee
but the only thing we know about $c_p|_{\Gamma_1^+}$ is that it is greater than $c_s|_{\Gamma_1^+}$. In particular, there could be evanescent S to P or P to S transmission from $\Omega_2$ to $\Omega_1$; or they all could be hyperbolic.
\begin{figure}[!ht]
\includegraphics[page=9,scale=1]{DN_earth_layers_pics}
\caption{Solid curves are P waves. Dotted curves are S waves. We can create an SH wave connecting points on $\Gamma_1$ and no other waves from or to $x$ below $\Gamma_1$ by choosing it to be SH on $\Gamma_1$ near $x$. The reflected and/or the transmitted P waves at $x$ and $y$ could be evanescent.
}\label{pic_IP1}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{lemma_G2}]
Let $x$, $y$ be on $\Gamma_1$ connected by a unit speed S geodesic $\gamma_0$ staying between $\Gamma_1$ and $\Gamma_2$.
We take an outgoing microlocal solution $u$ concentrated near $\gamma_0$, so that $u$ is singular near $x$ when $t$ is near $t_1$; and $t=t_2$ corresponds to $y$.
We choose $u$ to be an SH wave on $\Gamma_1$ near $x\in \Gamma_1$, see Figure~\ref{pic_IP1}.
The SH waves behave as acoustic ones on both sides in the (HH), (MH), (HM) and the (MM) cases on principal symbol level, and all those cases are possible. Recall that our convention is to list the top first; in particular, the (MH) configuration is the (HM) one in Section~\ref{sec_hm} with the top and the bottom swapped. To create such a wave, we just need to take an S wave coming from $\bo$ so that its trace on $\Gamma_1$ is SH; this can be done by time reversal. On principal level, there will be no other singularities below $\Gamma_1$ until that wave hits $\Gamma_1$ again.
Then that solution will create singular Cauchy data near $y$ and $t$ near $t_2$. It is an S wave but not necessarily an SH one at $y$. At least one of the two waves transmitted back to $\Omega_1$ would have a non-zero principal amplitude if there are two hyperbolic ones; if there is an S one only, it would be non-zero by the results of the previous section. Then there will be at least one singularity hitting $\bo$ (which we allow to leave $\Omega$, as above). On the other hand, there might be other waves hitting $\bo$ at the same place and time coming from waves at $y$ below $\Gamma_1$ which can reflect or refract.
Since we allow all those waves to leave $\Omega$ freely, they would have different wave front sets or polarizations, and in particular they cannot cancel or alter the singularity of the Cauchy data generated by the waves coming directly from $y$. The simplest way to see that is to do time reversal from the exterior of $\Omega$ back to $\Omega$.
The speeds $c_s$ and $c_p$ are the same for both systems in $\Omega_1$ by Lemma~\ref{lemma_G1}.
We can assume $t_1\gg1$ so that $u$ is smooth on $\bo$ for $t<\eps$ for some $\eps>0$.
Since the solutions constructed above for both systems have the same Cauchy data on $(0,T)\times \bo$ and we can choose $T\gg1$, we conclude that the principal part of $u$ on $\Gamma_1$ near $t=t_1$ is uniquely reconstructed. Note that this argument does not require recovery of $\rho$ in $\Omega_1$ since we only need the principal amplitudes and by \cite{Rachele_2000}, they do not depend on $\rho$.
There might be other singularities on $\Gamma_1$ but we can identify $y$ as the first point a singularity comes back to $\Gamma_1$, and we can determine the travel time through $\Omega_2$ as well.
Taking $y\to x$, we can recover the full jet of $c_s$ on $\Gamma_1^-$ by \cite[Lemma~2.1]{SUV_localrigidity}.
Since we now know the S metric on $\Gamma_1^-$, this is enough to recover the lens relation related to $c_s$ on $\Gamma_1^-$, restricted to rays not hitting $\Gamma_2$. By \cite{SUV_localrigidity}, this determines $c_s$ in $\Omega_2$ uniquely, i.e., $c_s= \tilde c_s$ in $\Omega_2\cap\tilde\Omega_2$.
We remark that the magnitudes of the refracted SH waves into $\Omega_2$ at $x$ may vary for each of the two systems since we do not know $\rho_-:=\rho|_{\Gamma_1^-}$; see \r{RMC4.2} where we can write $\mu_\pm = \rho_\pm c_s^2$. Their directions however do not depend on $\rho_-$ and each one can vanish only for a specific incidence angle (a priori different for each system).
Finally, $\Gamma_2=\tilde\Gamma_2$ since the presence of the interface $\Gamma_2$ would create a singularity of the lens relation of the reflected S wave (plus a possible P wave), which would be detected on $\Gamma_1$.
\end{proof}
\begin{lemma}\label{lemma_G3}
Under the assumptions of Lemma~\ref{lemma_G2}, assume in addition
\be{G6}
c_p|_{\Gamma_1^+}< c_p|_{\Gamma_1^-} .
\ee
Then $c_p$ is uniquely determined in $\Omega_2$.
\end{lemma}
\begin{remark}
Conditions \r{G5} and \r{G6} say that there is no total internal reflection of $P\to P$ and $S\to S$ rays from $\Omega_2$ to $\Omega_1$. One can still have evanescent transmitted $S\to P$ waves from the interior. More precisely, we have the following two generic cases (excluding $c_s|_{\Gamma_1^-} = c_p|_{\Gamma_1^+}$):
\be{G8}
c_s|_{\Gamma_1^+}<c_s|_{\Gamma_1^-} < c_p|_{\Gamma_1^+} < c_p|_{\Gamma_1^-},
\ee
and
\be{G9}
c_s|_{\Gamma_1^+} < c_p|_{\Gamma_1^+} <c_s|_{\Gamma_1^-} < c_p|_{\Gamma_1^-},
\ee
see also \r{G5}. Evanescent $S\to P$ transmission from the interior happens when \r{G8} holds. This is not a problem for the proof since we recovered $c_s$ in $\Omega_2$ using $SH$ waves. On the other hand, \r{G9} implies that all rays from the interior create transmitted rays, i.e., the wave front on $\Gamma_1^+$ is in the hyperbolic region.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lemma_G3}]
We want to use $P$ rays in $\Omega_2$ not hitting $\Gamma_2$, in particular having Cauchy data on $\Gamma_1^-$ in the hyperbolic region $c_p|_{\Gamma_1^-}|\xi'|<|\tau|$. By \r{G6}, that Cauchy data will fall in the hyperbolic region on $\Gamma_1^+$; in other words, we have the (HH) case. We use the control argument in \cite{caday2019recovery} now.
In the (HH) case, near $x$ and $t=t_1$, we can create an outgoing P wave in $\Omega_2$ with no other S or P waves there; in other words, in Figure~\ref{pic2}, only $P^-_\text{out}\not=0$ among all waves on the bottom. Then we extend the four waves on the top until they leave $\Omega$. At $y$, where that wave hits $\Gamma_1$ again near $t=t_2$, we can apply the same argument to make sure that there are no reflected rays in $\Omega_2$, see Figure~\ref{pic_IP2}.
\begin{figure}[!ht]
\includegraphics[page=15,scale=1]{DN_earth_layers_pics}
\caption{Solid curves are P waves. Dotted curves are S waves. We can create a P wave connecting points on $\Gamma_1$ and no other waves from or to $x$ below $\Gamma_1$ by choosing carefully the sources on the top. At $y$, we can make sure that there are no reflected waves by choosing the sources on the top as well.
}\label{pic_IP2}
\end{figure}
By energy preservation, we cannot have zero principal amplitudes of all four rays above $y$. Then by time reversal from $\bo$, we would know that there is a singularity on $\Gamma_1$ at $y$ and $t=t_2$, and we would know its wave front set. Note that we do not require knowledge of $\rho$ in $\Omega_1$ and $\Omega_2$. In principle, the second (tilded) system may have an S wave starting from $x$ at $t=t_1$. By the paragraph following Remark~\ref{rem_Rachele}, we must have a non-trivial P wave near $x$ and $y$ (since we have recovered $c_s$ already). The P wave arriving at $y$ at $t=t_2$ might a priori be due to an S wave from $x$ in $\Omega_2$ which has reflected at $\Gamma_1$ and mode converted; this is not possible, however, because it would have created a singularity at some moment in the interval $(t_1,t_2)$, and we know that such a singularity does not exist for either system. Therefore, this recovers the P travel time from $x$ to $y$.
Then we recover $c_p$ in $\Omega_2$ as in the proof of the previous lemma.
\end{proof}
Combining those two lemmas, we get the following.
\begin{theorem}\label{thm_ip}
Assume we have two triples of coefficients $ \rho$, $ \mu$, $ \nu$ and $\tilde \rho$, $\tilde \mu$, $\tilde \nu$; and $\Lambda=\tilde\Lambda$ with $T\gg1$. Assume the foliation condition and \r{G3} and \r{G6} for each one of them.
Then $\Gamma_1=\tilde \Gamma_1$, $\Gamma_2=\tilde \Gamma_2$ and $c_s=\tilde c_s$, $c_p=\tilde c_p$ in $\Omega_1\cup \Omega_2=\tilde \Omega_1\cup\tilde\Omega_2$. Also, if $c_p\not=2c_s$ in $\Omega$, then $\rho=\tilde\rho$ in $\Omega_1$.
\end{theorem}
\subsection{Recovery of the speeds in the third, etc., layers}
This construction can be extended by induction under appropriate conditions:
\begin{theorem}\label{thm_ip2}
Assume we have two triples of coefficients $ \rho$, $ \mu$, $ \nu$ and $\tilde \rho$, $\tilde \mu$, $\tilde \nu$. Let
\be{G3'}
c_s|_{\Gamma_j^+}< c_s|_{\Gamma_j^-}, \quad c_p|_{\Gamma_j^+}< c_p|_{\Gamma_j^-} , \quad j=1,\dots,k.
\ee
Assume the foliation condition in $\Omega_1\cup\dots\cup\Omega_k$.
If $\Lambda=\tilde\Lambda$ with $T\gg1$, then $\Gamma_j=\tilde \Gamma_j$, $j=1,\dots,k$ and $c_s=\tilde c_s$, $c_p=\tilde c_p$ in $\Omega_1\cup\dots\cup\Omega_k$.
\end{theorem}
\begin{proof}
We show that one can recover the two speeds in $\Omega_3$ and then the theorem follows by induction.
\begin{figure}[!ht]
\includegraphics[page=20,scale=1]{DN_earth_layers_pics}
\caption{Solid curves are P waves. Dotted curves are S waves. We can create an S wave connecting points $x$ and $y$ on $\Gamma_2$ so that it is an SH wave at $x$.
}\label{pic_IP3}
\end{figure}
We show that we can recover $c_s$ there first following the proof of Lemma~\ref{lemma_G2}.
Fix two points $x$ and $y$ on $\Gamma_2$. We keep them close enough so that the S geodesic connecting them does not touch $\Gamma_3$ (if there is $\Gamma_3$, i.e., if $k\ge3$).
We construct a solution $u$ below $\Gamma_2$, between $x$ and $y$, of S type (at principal level, as everywhere in this section), see Figure~\ref{pic_IP3}. We choose the solution to be SH at $x$ but this is not essential. At $y$, there might be reflection, transmission and mode conversion to evanescent modes.
Then near $x$ and at $y$ (and the corresponding times $t_1$ and $t_2$), the traces of this solution on $\Gamma_2$ are in the (HH), (HM) or the (MM) region by \r{G3'}, with the possible exception of finitely many angles giving rise to tangential rays. On the other hand, on principal level, there are only incoming and reflected S waves at $\Gamma_2^+$ near $(t_1,x)$, satisfying the transmission conditions, and we can arrange no incoming waves at $x$ from $\Omega_3$ by the control argument for SH waves.
We extend those microlocal solutions to $\Omega_2$ and $\Omega_1$ first as in the proof of Lemma~\ref{lemma_G2}. We do this starting from $x$ first.
On $\Gamma_1$, each of the two S branches (meeting at $x$) is in one of the three regions mentioned above (excluding directions of measure zero). In each one of those cases, we can choose four or two waves on the top, i.e., in $\Omega_1$, which cancel a reflected wave. At $\Gamma_1^-$, we decompose all S waves into SH and SV ones. The former can be treated as acoustic waves and can be controlled from the top. The SV ones can be controlled as well, as we showed in sections~\ref{sec_HH}, \ref{sec_hm} and \ref{sec_t_mm}. In Figure~\ref{pic_IP3}, for example, point $a$ corresponds to either an (HH) or an (HM) case; and point $b$ corresponds to an (MM) case; so does point $c$. Then we extend all waves in $\Omega_1$ to the exterior of $\Omega$, i.e., we let them leave $\Omega$.
At the point $y$, we do the same for the S and the P wave propagating into $\Omega_2$.
For the P wave (hitting $\Gamma_1$ at $d$ in Figure~\ref{pic_IP3}), we are in the (HH) zone at $\Gamma_1$, and we apply the control argument we used before.
The so constructed microlocal solution vanishes (in this context, that means that it has no leading order singularities) for $t\ll0$, and by shifting $t_1$, we may assume that this happens for, say, $t<\eps$ with some $\eps>0$ (we need $\eps$ so that we can do a smooth cutoff between $t=0$ and $t=\eps$ and construct an actual solution with the same singularities). Choose $T\gg1$ so that all outgoing branches starting from $x$ or $y$ reach $\bo$ before that time.
We are in the situation of Lemma~\ref{lemma_G2} now with $\Gamma_1$ playing the role of $\bo$ there with one difference.
We have not recovered $\rho$ in $\Omega_2$. We claim however that near the microlocal solutions along the rays hitting $x$ and $y$, $\tilde u$ (corresponding to the second system) has singularities of the same order as $u$. This follows from the following: the Cauchy data on $\R\times \Gamma_1^+$ and that on $\R\times\Gamma_1^-$ are related by the transmission conditions \r{tr}. It follows by \r{Nu} that they are related by an elliptic operator depending on $\rho$ (recall that we view the three independent coefficients as being $c_s$, $c_p$ and $\rho$). That dependence makes the refracted and the reflected amplitudes $\rho$ dependent, but it does not change the property of their principal parts being non-zero (except for specific angles). The same conclusion could have been reached by examining the qualitative behavior of the solution of the microlocal systems, say \r{RMC4.1n} and \r{2x2} in the (HH) case, as a function of $\rho_-$. Therefore, $\tilde u$ has leading order singularities in $\Omega_2$ along the same rays as $u$ does. By the proof of Lemma~\ref{lemma_G2},
$y$ is uniquely recovered as the first time the S wave from $x$ hits $\Gamma_2$ again. Then the boundary distance function related to $c_s$ in $\Omega_3$ is uniquely recovered for $x$ close to $y$, which recovers the jet of $c_s$ at $\Gamma_2^-$. Then we know the $c_s$ lens relation as well, along rays not touching $\Gamma_3$. As in the proofs above, we can detect where $\Gamma_3$ is and also recover $c_s$ in $\Omega_3$.
The recovery of $c_p$ in $\Omega_3$ goes along the same lines as the proof of Lemma~\ref{lemma_G3}, using the arguments above.
We create a single P wave below $\Gamma_2$ connecting $x$ and $y$ and extend it until it reaches $\bo$ on both sides. At $\Gamma_2$, we are in the (HH) case; each ray, extended upwards, will create four new ones. On the upper surfaces, we can have any of the (HH), (HM) and the (MM) cases as above.
We can recover $\Gamma_3$ (if it exists) as in the previous lemmas.
The proof for $k>3$ follows by induction.
\end{proof}
\begin{remark}
Recovery of $\rho$ in $\Omega_j$, $j\ge2$ seems delicate. The arguments in \cite{Rachele03, Bhattacharyya_18} require the knowledge of the jet of $\rho$ at the boundary up to order three, which is true on $\bo$ by \cite{Rachele_2000} but proving this on $\Gamma_j^-$, $j=1,2,\dots$ seems to be not easy.
\end{remark}
\subsection{Exploiting mode conversion; the PREM model}
In the results above, we needed to ensure no total internal reflection of S or P waves or both, from the interior. The mode conversion was not used to obtain information, it was rather a difficulty we had to overcome. Below we show how one can use mode conversion to recover $c_p$ when the P waves are totally reflected but the refracted $S$ wave to the exterior is not.
In the Preliminary Reference Earth Model (PREM) \cite{DZIEWONSKI1981297}, in the Upper and the Lower Mantle, the S and the P speeds increase with depth, ``on average'', except on a small interval close to the surface. At the boundary of the Lower Mantle and the Outer Core however, the P speed jumps down with depth, hence it does not satisfy \r{G6} on that interface. The S speed jumps down to zero, i.e., the Outer Core is believed to be liquid. This violates \r{G3} on that interface (and there are no S waves in the Outer Core anyway). Therefore, the P waves in the Outer Core close to tangent ones to their upper boundary are totally reflected (as P waves only) and the results above do not apply for the recovery of $c_p$. In this case we can use mode conversion however because PREM shows that those P waves actually produce transmitted (hyperbolic) S waves into the exterior, i.e., condition \r{G10} below holds.
An analysis of a solid-liquid model is certainly possible with the methods we develop but it is beyond the scope of this work (see also \cite{de2015system}). We will sketch arguments based on the dynamical system only, assuming no S waves below $\Gamma_1$ (formally, $c_s=0$ there). \textit{Those arguments are not a proof} since we assume preservation of the microlocal properties in the limit $c_s|_{\Omega_2}\to0$.
Assume
\be{G10}
c_s|_{\Gamma_1^+}< c_p|_{\Gamma_1^-}.
\ee
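For a rough numerical illustration with approximate PREM values: at the core--mantle boundary the mantle side has $c_p\approx 13.7$~km/s and $c_s\approx 7.3$~km/s, while the top of the Outer Core has $c_p\approx 8.1$~km/s (and $c_s=0$). Thus \r{G6} fails there, while \r{G10} holds since
\[
c_s|_{\Gamma_1^+}\approx 7.3~\text{km/s} \;<\; 8.1~\text{km/s}\approx c_p|_{\Gamma_1^-}.
\]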
First, we can determine the two speeds in $\Omega_1$ per Lemma~\ref{lemma_G1}.
To recover $c_p$ in $\Omega_2$, take a P geodesic in $\Omega_2$ connecting $x$ and $y$ on $\Gamma_1$, so that it does not hit $\Gamma_2$; see Figure~\ref{pic_IP5}. We can construct a microlocal solution $u$ near it so that it is obtained from an S wave in $\Omega_1$ through mode conversion at $\Gamma_1$.
\begin{figure}[!ht]
\includegraphics[page=17,scale=1]{DN_earth_layers_pics}
\caption{Solid curves are P waves. Dotted curves are S waves. We can create a P wave in $\Omega_2$ connecting points on $\Gamma_1$ through mode conversion of an S wave coming from $\Omega_1$.
}\label{pic_IP5}
\end{figure}
To construct such an incoming solution, we can start with such a solution between $x$ and $y$ and time reverse it. Then we take the S branch in $\Omega_1$, which on Figure~\ref{pic_IP5} is represented by the leftmost dashed incoming ray, and let it propagate. There will be a mode conversion in a neighborhood of $x$, giving us the desired solution. It will have a non-zero principal level energy except possibly for directions of measure zero. There might be a mode converted reflected P wave at $x$ back to $\Omega_1$, not shown on Figure~\ref{pic_IP5}. If so, we let it propagate and exit $\Omega_1$, similarly to the reflected S wave.
There will be a reflected P wave, and a transmitted S wave of non-zero principal energy except possibly for angles of measure zero. There might be a P wave propagating from $y$ into $\Omega_1$, not shown on the figure. We let them propagate and exit $\Omega_1$ through $\bo$.
Since the tilded system has the same Cauchy data on $(0,T)\times\bo$ (as above, we shift the time, if needed, so that the solution is smooth for $t<0$), and $c_p=\tilde c_p$, $c_s=\tilde c_s$ in $\Omega_1$ by Lemma~\ref{lemma_G1}, we get that the principal parts of $u$ and $\tilde u$ coincide in the domain of influence. We recall that the principal parts do not depend on $\rho$. Then $u$ and $\tilde u$ have the same Cauchy data on $\Gamma_1^+$ as well. Then we can identify $y$ as the point where the first (in time) singularity hits $\Gamma_1$ again. The rest is as in the proof of the previous results.
\def\thebibliography#1{\section*{References\markboth
{References}{References}}\list
{[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}}
\def\newblock{\hskip .11em plus .33em minus .07em}
\sloppy
\sfcode`\.=1000\relax}
\let\endthebibliography=\endlist
\newcommand{\pxn}[1]{{\color{red}{#1}}}
\newcommand{\ai}[1]{{\color{blue}{#1}}}
\begin{titlepage}
\begin{center}
{\large {\bf
Electron EDM as a Sensitive Probe of PeV Scale Physics}}\\
\vskip 0.5 true cm
Tarek Ibrahim$^{a,b}$\footnote{Email:
[email protected]},
Ahmad Itani$^{c}$\footnote{Email: [email protected]},
and Pran Nath$^{d}$\footnote{Email: [email protected]}
\vskip 0.5 true cm
\end{center}
\noindent
{$^{a}$ Department of Physics, Faculty of Science,
University of Alexandria, Alexandria 21511, Egypt\footnote{Permanent address.} }\\
{$^{b}$ Center for Fundamental Physics, Zewail City of Science and Technology, Giza 12588, Egypt\footnote{Current address.}} \\
{$^{c}$
Department of Physics, Faculty of Sciences, Beirut Arab University,
Beirut 11 - 5020, Lebanon} \\
{$^{d}$ Department of Physics, Northeastern University, Boston, Massachusetts 02115-5000, USA} \\
\centerline{\bf Abstract}
We give a quantitative analysis of the electric dipole moments
as a probe of high scale physics.
We focus on the electric dipole moment of the electron since the limit on it is the most stringent.
Further, theoretical computations of it are free of QCD uncertainties.
The analysis presented here first explores the probe of high scales via electron
electric dipole moment (EDM) within MSSM
where the contributions to the EDM arise from the chargino and the neutralino exchanges
in loops. Here it is shown that the electron EDM
can probe mass scales from tens of TeV into the PeV range. The analysis is then extended to
include a vectorlike generation which can mix with the three ordinary generations. Here new
CP phases arise and it is shown that the electron EDM now has not only a supersymmetric contribution
from the exchange of charginos and neutralinos but also a non-supersymmetric contribution
from the exchange of $W$ and $Z$ bosons. It is further shown that the interference of the
supersymmetric and the non-supersymmetric contribution leads to the remarkable phenomenon
where the electron EDM as a function of the slepton mass first falls and becomes vanishingly small
and then rises again as the slepton mass increases. This phenomenon arises as a consequence
of a cancellation between the SUSY and the non-SUSY contributions at low scales, while at high
scales the SUSY contribution dies out and the EDM is controlled by the non-SUSY contribution alone.
The high mass scales that can be probed by the EDM are far in excess of what accelerators will be able to probe.
The sensitivity of the EDM to CP phases both in the SUSY and the non-SUSY sectors is also
discussed.
\noindent
{\scriptsize
Keywords:{~Electric dipole moments, supersymmetry, PeV scale physics, vector lepton multiplets}\\
PACS numbers:~13.40Em, 12.60.-i, 14.60.Fg}
\medskip
\end{titlepage}
\section{Introduction \label{sec1}}
In the standard model the electric dipole moments of elementary particles are very small\cite{sm}.
Thus for the electron it is estimated that $|d_e|\simeq 10^{-38}$ $e$cm
and for the neutron the value lies in the range $10^{-31}-10^{-33}$ $e$cm.
This is far below the current sensitivity of experiments.
However, in models of physics beyond the standard model much larger electric dipole moments,
orders of magnitude larger than those in the standard model, can be obtained (for a review see~\cite{Ibrahim:2007fb}).
Thus in the supersymmetric models the electric dipole moments of elementary particles such
as the electron and the quarks can be large enough that the current experimental upper limits
act as constraints on models. Indeed, in supersymmetric theories with light scalars, the
electric dipole moments often exceed the current upper limits for the
electron and the neutron. This phenomenon is
often referred to as the SUSY EDM problem.
One solution to the SUSY EDM problem is the possibility that the CP phases
are small~\cite{earlywork}. Other possibilities allow for large, even maximal, phases and the EDM
is suppressed via the sparticle masses being large~\cite{Nath:1991dn}
or by invoking the so called cancellation mechanism~\cite{cancellation}
where contributions
from various diagrams that generate the electric dipole moment interfere destructively
to reduce the electric dipole moment to a level below its experimental upper limit.\\
In the post Higgs boson discovery era the apparent SUSY EDM problem can be turned around to one's
advantage as a tool to investigate high scale physics. The logic of this approach is the following:
The high mass of the Higgs boson at 126 GeV requires a large loop correction to lift its value
from the tree level, which lies below the $Z$-boson mass, up to the experimental value. A large loop
correction requires that the scalar masses that enter in the Higgs boson loop be large so as to generate
the desired correction, which points to a high scale for the sfermion masses.
Large sfermion masses help with the suppression of flavor changing neutral currents. They
also help resolve the SUSY EDM problem
and help stabilize the proton against decay via baryon and lepton number violating dimension five
operators in supersymmetric grand unified theories. \\
In this work we investigate the possibility that EDMs can be used as probes of high scale physics
as suggested in~\cite{McKeen:2013dma,Moroi:2013sfa,Altmannshofer:2013lfa,Dhuria:2013ida}.
Specifically we focus here on the EDM of the electron since it is by far the most
sensitively determined of all the EDMs.
Thus the ACME Collaboration~\cite{Baron:2013eja} using the polar molecule thorium monoxide (ThO) measures the
electron EDM so that
\begin{eqnarray}
d_e =(-2.1 \pm 3.7_{\rm stat} \pm 2.5_{\rm syst}) \times 10^{-29} e{\rm cm}.
\label{1.1}
\end{eqnarray}
The above corresponds to an upper limit of
\begin{eqnarray}
|d_e| < 8.7\times 10^{-29} ~e{\rm cm}\ ,
\label{1.2}
\end{eqnarray}
at 90\% CL. The corresponding upper limits on the EDM of the muon and on the tau lepton are
~\cite{pdg}
\begin{eqnarray}
|d_\mu| < 1.9 \times 10^{-19} ~e{\rm cm}\ , \label{1.3}\\
|d_\tau| < 10^{-17} ~e{\rm cm}\ ,
\label{1.4}
\end{eqnarray}
and are not as stringent as the result of Eq. (\ref{1.2}) even after scaling in lepton mass is taken into account.
Further, the limit on $d_e$ is likely
to improve by an order of magnitude or more in the future as the projections below indicate
\begin{align}
\label{1.5}
{\rm Fr}~\cite{Sakemi:2011zz}
& ~~~~|d_e| \lesssim 1 \times 10^{-29} e{\rm cm}
\\
\label{1.6}
{\rm YbF ~molecule} \cite{Kara:2012ay}
&~~~ |d_e| \lesssim 1 \times 10^{-30} e{\rm cm}
\\
{\rm WN ~ion} \cite{Kawall:2011zz}
&~~~ |d_e| \lesssim 1 \times 10^{-30} e{\rm cm}
\label{1.7}
\end{align}
In the analysis here we will first consider the case of MSSM where the CP phases enter in the
soft parameters such as in the masses $M_i$ (i=1,2) of the electroweak gauginos,
and in the trilinear couplings $A_k$ and in the Higgs mixing parameter $\mu$.
Here we will investigate the scale of the slepton masses needed to reduce the electron EDM
below its upper limit for the case when the CP phases are naturally ${\cal O}(1)$. We will
see that this scale will be typically high lying in the range of tens of TeV to a PeV
(For a discussion of the PeV scale in the context of supersymmetry see, e.g., \cite{Wells:2004di}).
We will carry out the analysis for the case where we extend MSSM to include a vector like leptonic
multiplet and allow for mixings between the vector like multiplet and the three sequential generations.
We will study the parametric dependence of the EDM on the scalar masses, on
fermion masses of the vector like generation, on CP phases and on $\tan\beta$.\\
The outline of the rest of the paper is as follows: In \cref{sec2} we discuss
the EDM of the electron within MSSM as a probe of the slepton masses.
In \cref{sec3} we extend the analysis of \cref{sec2} to MSSM with inclusion of a vector
like leptonic multiplet which allows for mixing between the vector multiplet and the three
sequential generations. Here we give analytic results for the electron EDM arising from the
supersymmetric exchange involving the chargino and neutralinos in the loops.
We also compute the non-supersymmetric contributions involving the $W$ and the $Z$ exchange.
In \cref{sec4} we give a numerical analysis of the limits on the mass scales that can be
accessed using the results of \cref{sec3}. Conclusions are given in \cref{sec5}.
Further details of the MSSM model with a vector multiplet used in the analysis of
\cref{sec3} are given in Appendices A-C.
\begin{figure}[t]
\begin{center}
{\rotatebox{0}{\resizebox*{14cm}{!}{\includegraphics{e_EDM.png}}\hglue5mm}}
\caption{The neutralino-slepton exchange diagram (left) and the chargino-sneutrino exchange diagram (right) that contribute
to the electric dipole moment of the electron in MSSM.}
\label{fig1}
\end{center}
\end{figure}
\section{Probe of slepton masses in MSSM from the electron EDM constraint \label{sec2}}
The supersymmetric Feynman diagrams that contribute to the electric dipole moment of the electron
involve the chargino-sneutrino exchange and the neutralino-slepton exchange as shown in \cref{fig1}.
In the analysis of these diagrams the input supersymmetry parameters consist of the following
\begin{gather}
M_{\tilde e L}, M_{\tilde \nu_e}, M_{\tilde e}, \mu, \tan\beta, M_1, M_2, A_e, A_{\nu_e}
\end{gather}
where $M_{\tilde e L}$ etc are the soft scalar masses, $M_1, M_2$ are the gaugino masses in the
$U(1)$ and $SU(2)$ sectors, $A_e$ etc are the trilinear couplings, $\mu$ is the Higgs mixing parameter
which enters the superpotential as $\mu H_1 H_2$, where $H_2$ gives mass to the up quarks and $H_1$ gives
mass to the down quarks and the leptons, while $\tan\beta$ is the ratio of the Higgs VEVs so that
$\tan\beta= \langle H_2\rangle/\langle H_1\rangle$ (see Appendix A for a discussion of the soft parameters).
Further, $\mu$, $M_1$, $M_2$, and the trilinear coupling $A_k$
are complex and we define their phase so that
\begin{gather}
\mu= |\mu| e^{i\alpha_\mu}, ~~M_i= |M_i| e^{i\alpha_i}, ~i=1,2\\
A_k= |A_k| e^{i\alpha_{A_k}}, ~~ k=e, \nu_e \ .
\end{gather}
The analysis of the diagrams of \cref{fig1} involves electron-chargino-sneutrino interactions and the electron-neutralino-slepton interactions. For the chargino-sneutrino exchange diagrams one has
\begin{eqnarray}
d_e^{\chi^{-}}= \frac{\alpha_{em}}{ 4\pi \sin^2\theta_W} \frac{\kappa_e}{m_{\tilde \nu_e}^2}
\sum _{i=1}^2 m_{\tilde \chi^-_i} Im (U_{i2}^* V_{i1}^*) F \left(\frac{ m^2_{\tilde \chi^-_i}}{ m_{\tilde \nu_e}^2} \right)
\label{2.1}
\end{eqnarray}
where $F(x)$ is a form factor defined by
\begin{equation}
F(x)= \frac{1}{2(1-x)^2} \left(3- x + \frac{2 \ln x}{1-x}\right)
\label{2.2}
\end{equation}
and
\begin{gather}
\kappa_e = \frac{m_e}{ \sqrt 2 m_W \cos\beta}.
\label{2.3}
\end{gather}
Here $U,V$ diagonalize the chargino mass matrix $M_C$ so that
\begin{equation}
U^* M_C V= {\rm diag} (m_{\tilde \chi_1^-}, m_{\tilde \chi_2^-}).
\label{2.4}
\end{equation}
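As a rough numerical illustration of Eqs. (\ref{2.1})-(\ref{2.4}), the short sketch below evaluates a single term of the chargino sum, with the rotation matrix factor ${\rm Im}(U^*_{i2}V^*_{i1})$ and the masses set to assumed illustrative values rather than obtained from a diagonalization of $M_C$; the conversion from GeV$^{-1}$ to cm uses $\hbar c$.
\begin{verbatim}
# Sketch of one term of Eq. (2.1); Im(U*_{i2} V*_{i1}) and the masses
# below are assumed illustrative values, not a model benchmark.
import math

ALPHA_EM = 1.0 / 137.036      # fine-structure constant
SIN2W    = 0.231              # sin^2(theta_W)
M_W, M_E = 80.4, 0.511e-3     # W and electron masses in GeV
HBARC_CM = 1.9733e-14         # converts GeV^{-1} to cm

def F(x):
    # form factor of Eq. (2.2)
    return (3.0 - x + 2.0*math.log(x)/(1.0 - x)) / (2.0*(1.0 - x)**2)

def de_chargino(m_snu, m_cha, im_uv, tan_beta):
    # one term of the sum in Eq. (2.1), returned in units of e cm
    cos_beta = 1.0 / math.sqrt(1.0 + tan_beta**2)
    kappa_e  = M_E / (math.sqrt(2.0) * M_W * cos_beta)   # Eq. (2.3)
    pref     = ALPHA_EM / (4.0 * math.pi * SIN2W)
    x        = m_cha**2 / m_snu**2
    return pref * kappa_e / m_snu**2 * m_cha * im_uv * F(x) * HBARC_CM

# decoupling with the sneutrino mass: 50 TeV versus 1 PeV
for m_snu in (5.0e4, 1.0e6):
    print(m_snu, de_chargino(m_snu, 400.0, 0.5, 30.0))
\end{verbatim}
The $1/m^2_{\tilde \nu_e}$ decoupling, softened only by the logarithm in $F(x)$, is what converts an EDM limit into a reach in the scalar mass.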
For the neutralino-slepton exchange diagrams one finds
\begin{eqnarray}
d_e^{\tilde \chi^0}= \frac{\alpha_{em}}{ 4\pi \sin^2\theta_W}
\sum_{k=1}^2 \sum_{i=1}^4 Im(\eta_{eik}) \frac{m_{\tilde \chi_i^0}}{M_{\tilde e_k}^2} Q_{\tilde e}
G \left(\frac{ m^2_{\tilde \chi^0_i}}{ M_{\tilde e_k}^2} \right)
\label{2.5}
\end{eqnarray}
where $G(x)$ is a form factor defined by
\begin{equation}
G(x) = \frac{1}{2(1-x)^2} \left(1+ x + \frac{2 x \ln x}{1-x}\right)
\label{2.6}
\end{equation}
where
\begin{gather}
\eta_{eik} = \left[- \sqrt 2
\left\{ \tan\theta_W (Q_e- T_{3e}) X_{1i} + T_{3e} X_{2i} \right\} D_{e1k}^* + \kappa_e
X_{bi} D_{e2k}^*\right]\nonumber\\
\times \left (\sqrt 2 \tan\theta_W Q_e X_{1i} D_{e2k} - \kappa_e X_{bi} D_{e1k}\right).
\label{2.7}
\end{gather}
where $b=3$ and $T_{3e}= -1/2$. Further, $X_{ij}$ are elements of the matrix $X$ which diagonalizes the
neutralino mass matrix $M_{\chi^0}$ so that
\begin{equation}
X^T M_{\chi^0} X= {\rm diag} \left( m_{\tilde \chi_1^0}, m_{\tilde \chi_2^0}, m_{\tilde \chi_3^0}, m_{\tilde \chi_4^0}\right)\ ,
\label{2.8}
\end{equation}
and $D_e$ diagonalizes the selectron mass$^2$ matrix so that
\begin{gather}
\tilde e_L= D_{e11} \tilde e_1 + D_{e12} \tilde e_2,
~\tilde e_R= D_{e21} \tilde e_1 + D_{e22} \tilde e_2
\label{2.9}
\end{gather}
where $\tilde e_1$ and $\tilde e_2$ are the selectron mass eigenstates.
In \cref{fig2} we give a numerical analysis of the electron EDM as a function of $m_0$.
Here one finds that the current constraint on the electron EDM allows one to probe the $m_0$ region in the
tens of TeV while improvement in the sensitivity by a factor of 10 or more will allow one to extend the
range up to 100 TeV - 1 PeV.
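Since the supersymmetric contribution decouples essentially as $1/m_0^2$, the reach in $m_0$ scales as the inverse square root of the EDM limit. A minimal sketch of this scaling, assuming a hypothetical reference point that saturates the current bound at $m_0=40$ TeV:
\begin{verbatim}
# Reach estimate assuming |d_e| ~ 1/m0^2; the 40 TeV reference point
# saturating the current bound is a hypothetical normalization.
import math
d_ref, m0_ref = 8.7e-29, 4.0e4          # e cm, GeV
for d_limit in (8.7e-29, 8.7e-30):      # current and 10x improved limit
    print(d_limit, m0_ref * math.sqrt(d_ref / d_limit), "GeV")
\end{verbatim}
Thus a tenfold improvement of the limit extends the reach by a factor $\sqrt{10}\simeq 3.2$.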
\vspace{0.5cm}
\begin{figure}[t]
\begin{center}
{\rotatebox{0}{\resizebox*{7.5cm}{!}{\includegraphics{2.png}}\hglue5mm}}
{\rotatebox{0}{\resizebox*{7.5cm}{!}{\includegraphics{2l.png}}\hglue5mm}}\\
\caption{{Left panel:}
A display of the electron EDM as a function of $m_0$ (where $m_0= M_{\tilde eL}= M_{\tilde e}$)
for different $\alpha_\mu$ (the phase of the
Higgs mixing parameter $\mu$)
with the mixings of the vector like generation with the regular three generations set to zero. The curves are for
the cases $\alpha_\mu= -3$ (small-dashed, red), $\alpha_\mu=-0.5$ (solid), $\alpha_\mu=1$ (medium-dashed, orange), and $\alpha_\mu= 2.5$ (long-dashed, green). The horizontal solid line is the current upper limit on the electron EDM set at $|d_e|=8.7 \times 10^{-29}$. The other parameters are $|\mu| = 4.1\times 10^2$, $|M_1| = 2.8\times 10^2$, $|M_2| = 3.4\times 10^2$, $|A_e| = 3\times 10^6$, $m_0^{\tilde{\nu}} = 4\times 10^6$, $|A_0^{\tilde{\nu}}| = 5\times 10^6$, $\tan\beta = 30$. All masses are in GeV, phases in rad and EDM in $e$cm. The analysis shows that improvements in the electron EDM constraint can probe scalar masses in the 100 TeV - 1 PeV region and beyond.
{Right panel: The same as the left panel except that the region below the current experiment limit is blown
up. The analysis shows that an improvement by a factor of ten can allow one to probe up to and beyond 1 PeV
in mass scales.}
}
\label{fig2}
\end{center}
\end{figure}
\section{EDM Analysis by inclusion of a vector generation in MSSM\label{sec3}}
Next we discuss the case when we include a vectorlike leptonic multiplet which mixes with the three
generations of leptons. In this case the mass eigenstates will be linear combinations of the three
generations plus the vector like generation which includes mirror particles. The details of
the model and its interactions are given in {Appendices} A-C. Here we discuss the contribution of the
model to the electron EDM. These contributions arise from four sources: the chargino exchange, the neutralino exchange,
the $W$ boson exchange and the $Z$ boson exchange.
\begin{figure}[t]
\begin{center}
{\rotatebox{0}{\resizebox*{14cm}{!}{\includegraphics{fig1.png}}\hglue5mm}}\\
{\rotatebox{0}{\resizebox*{14cm}{!}{\includegraphics{fig2.png}}\hglue5mm}}
\caption{
Upper diagrams: Supersymmetric contributions to the leptonic EDMs
arising from the exchange of the charginos, sneutrinos
and mirror sneutrinos (upper left) and the exchange of neutralinos, sleptons, and mirror sleptons (upper right)
inside the loop. Lower diagrams:
Non-supersymmetric diagrams that contribute to the leptonic EDMs
via the exchange of the $W$, the sequential and vector like neutrinos (lower left) and the exchange of the
$Z$, the sequential and vector like charged leptons (lower right).}
\label{fig3}
\end{center}
\end{figure}
Using the interactions given in Appendix B the chargino contribution is given by
\begin{align}
d_{\alpha}^{\chi^{+}}&=-\frac{1}{16\pi^2}\sum_{i=1}^{2}\sum_{j=1}^{8}\frac{m_{\chi^{+}_i}}{m^2_{\tilde\nu_{j}}}\text{Im}(C^{L}_{\alpha ij}C^{R*}_{\alpha ij})
F\left(\frac{m^{2}_{{\chi^{+}}_{i}}}{m^{2}_{\tilde\nu_{j}}}\right)
\label{3.1}
\end{align}
where the functions $C^L$ and $C^R$ are given in Appendix B and
the form factor $F(x)$ is given by Eq. (\ref{2.2}).
Using the interactions given in Appendix B the neutralino contribution is given by
\begin{align}
d_{\alpha}^{\chi^{0}}&=-\frac{1}{16\pi^2}\sum_{i=1}^{4}\sum_{j=1}^{8}\frac{m_{\chi^{0}_i}}{m^2_{\tilde\tau_{j}}}\text{Im}(C'^{L}_{\alpha ij}C'^{R*}_{\alpha ij})
G\left(\frac{m^{2}_{{\chi^{0}}_{i}}}{m^{2}_{\tilde\tau_{j}}}\right)
\end{align}
where the functions $C^{'L}$ and $C^{'R}$ are defined in Appendix B and the
form factor $G(x)$ is given by Eq. (\ref{2.6}).
The contributions to the lepton electric dipole moment from the $W$ and $Z$ exchange arise from similar loops. Using the interactions given in Appendix B the contribution arising from
the $W$ exchange diagram is given by
\begin{align}
d_{\alpha}^{W}&=\frac{1}{16\pi^2}\sum_{i=1}^{4}\frac{m_{\psi^{+}_i}}{m^2_W}\text{Im}(C^{W}_{Li\alpha }C^{W*}_{R i\alpha })
I_1\left(\frac{m^{2}_{\psi^{+}_{i}}}{m^{2}_{W}}\right)
\end{align}
where the functions $C_L^W$ and $C_R^W$ are given in Appendix B and the
form factor $I_1$ is given by
\begin{align}
I_1(x)&=\frac{2}{(1-x)^{2}}\left[1-\frac{11}{4}x +\frac{1}{4}x^2-\frac{3 x^2\ln x}{2(1-x)} \right]
\end{align}
The $Z$ boson exchange diagram contribution is given by
\begin{align}
d_{\alpha}^{Z}&=-\frac{1}{16\pi^2}\sum_{\beta=1}^{4}\frac{m_{\tau_\beta}}{m^2_Z}\text{Im}(C^{Z}_{L\alpha\beta }C^{Z*}_{R \alpha\beta })
I_2\left(\frac{m^{2}_{\tau_{\beta}}}{m^{2}_{Z}}\right)
\end{align}
where the functions $C_L^Z$ and $C_R^Z$ are defined in Appendix B and
where the form factor $I_2$ is given by
\begin{align}
I_2(x)&=\frac{2}{(1-x)^{2}}\left[1+\frac{1}{4}x +\frac{1}{4}x^2+\frac{3 x\ln x}{2(1-x)} \right]
\label{23}
\end{align}
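For reference, the four loop functions that enter the contributions above can be collected in a few lines; the sample values printed at the end serve only as a numerical sanity check.
\begin{verbatim}
# Loop functions multiplying the chargino, neutralino, W and Z
# exchange terms: F of Eq. (2.2), G of Eq. (2.6), and I1, I2 above.
import math

def F(x):
    return (3.0 - x + 2.0*math.log(x)/(1.0 - x)) / (2.0*(1.0 - x)**2)

def G(x):
    return (1.0 + x + 2.0*x*math.log(x)/(1.0 - x)) / (2.0*(1.0 - x)**2)

def I1(x):
    return 2.0/(1.0 - x)**2 * (1.0 - 2.75*x + 0.25*x**2
                               - 1.5*x**2*math.log(x)/(1.0 - x))

def I2(x):
    return 2.0/(1.0 - x)**2 * (1.0 + 0.25*x + 0.25*x**2
                               + 1.5*x*math.log(x)/(1.0 - x))

for x in (1.0e-4, 0.5, 4.0):
    print(x, F(x), G(x), I1(x), I2(x))
\end{verbatim}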
\section{Numerical analysis and results\label{sec4}}
We discuss now the numerical analysis for the EDM of the electron in the model given in Section 3.
The parameter space of this model is rather large. In addition to the MSSM parameters, one has
the parameters arising from the vectorlike multiplet and its mixings with the standard model generations
of quarks and leptons.
Thus, as in MSSM, here also we look at slices of the parameter space
to show that interesting new physics exists in these regions.
Thus for simplicity in the analysis we assume $A_{\nu_{\tau}}= A_{\nu_{\mu}}= A_{\nu_{e}}=A_{N}=A_{0}^{\tilde{\nu}}$ and
$(m_{0}^{\tilde{\nu}})^{2}={M}_{\tilde N}^{2}={M}_{\tilde\nu_{\tau}}^{2}={M}_{\tilde\nu_{\mu}}^{2}={M}_{\tilde\nu_{e}}^{2}$
in the sneutrino mass squared matrix (see \cref{13}). We also assume
$m_{0}^{2}={M_{\tilde\tau L}}^{2}={M}_{\tilde E}^{2}={M}_{\tilde \tau}^{2}={M}_{\tilde \chi}^{2}={M}_{\tilde \mu L}^{2}={M}_{\tilde\mu}^{2}={M}_{\tilde e L}^{2}={M}_{\tilde e}^{2}$ and $A_{0}=A_{\tau}=A_{E}=A_{\mu}=A_{e}$ in the slepton mass squared matrix (see \cref{13}).
The assumed masses for the new leptons are consistent with the lower limits given by the Particle Data Group\cite{pdg}.
In \cref{fig2} we investigated $d_e$ in MSSM as a function of $m_0$ when there was no
mixing of the ordinary leptonic generations with the vectorlike generation. We wish now to switch
on a small mixing with the vector like generation and see what effect it has on the electron EDM.
To this end we focus on one curve in \cref{fig2} which we take to be the solid curve (the case
$\alpha_\mu=-0.5$). For this case we plot the individual contributions to $d_e$ in the left panel
of \cref{fig4}. Here one finds that the largest contribution to $d_e$ arises from the chargino
exchange while the
neutralino exchange produces a much smaller contribution
and as expected the $W$ and $Z$ exchanges do not contribute. \\
Next we turn on
a small coupling between the vector like generation and the three generations of leptons.
The analysis for this case is given in the right panel of \cref{fig4}.
The turning on of the mixings has the following effect: the supersymmetric contribution is modified only modestly
and its general feature remains as in the left panel. However, now because of mixing
with the vectorlike generation the contribution from the $W$ and $Z$ exchange is non-vanishing
and in fact is very significant. Further, unlike the chargino and the neutralino exchange
contribution the $W$ and $Z$ exchange contribution does not depend on $m_0$
as exhibited in \cref{fig4}. Thus as $m_0$ gets large the
supersymmetric contributions become much smaller than the $W$ and $Z$ exchange
contributions. For this reason, $d_e$ is dominated by the $W$ and $Z$ exchange.
This phenomenon is exhibited in further detail in \cref{table1} which is
done for the same set of parameters as the right panel of \cref{fig4} except that $m_0=1.1$ PeV.
Here column (i) gives the individual contributions for the case (i) of no mixing where
$W$ and $Z$ contributions vanish, and the non-vanishing contributions arise from chargino
and neutralino exchange.
Column (ii) exhibits the individual contributions when the mixings with the
vector like generation are turned on. Here one finds that the supersymmetric contributions
from the chargino and neutralino exchanges are essentially unchanged
from the case of no mixing but the
contributions from the $W$ and $Z$ exchanges are now non-zero and are in fact much larger
than the chargino and neutralino exchange contributions. The non-vanishing
contribution from the $W$ and $Z$ exchanges is due to
the mixings with the vector like generation whose couplings are complex and carry CP violating phases.
\begin{figure}[H]
\begin{center}
{\rotatebox{0}{\resizebox*{7.5cm}{!}{\includegraphics{4l.png}}\hglue5mm}}
{\rotatebox{0}{\resizebox*{7.5cm}{!}{\includegraphics{4r.png}}\hglue5mm}}\\
\caption{
Left Panel: Exhibition of the individual contributions to the EDM of the electron when there is no
mixing between the vectorlike generation and the three regular generations. The parameters chosen
for this case are the same as for the solid curve in \cref{fig2} where $\alpha_\mu=-0.5$.
As expected the
contributions from the $W$-exchange (the long-dashed curve in orange) and the $Z$-exchange (dot-dashed purple curve)
vanish in this case, and the entire contribution arises from the
chargino-exchange (the small-dashed curve in red) and the neutralino-exchange (the medium-dashed blue curve).
Right Panel: The parameter point chosen is the same as for the left panel except that mixing of the
vectorlike generation with the regular three generations is allowed. The additional parameters chosen
are $m_N=250$, $m_E=380$ and the $f$ couplings set to $|f_3| = 7.20\times 10^{-6}$, $|f_3'| = 1.19\times 10^{-4}$, $|f_3''| = 1.55\times 10^{-5}$, $|f_4| = 8.13\times 10^{-4}$, $|f_4'| = 3.50\times 10^{-1}$, $|f_4''| = 6.29\times 10^{-1}$, $|f_5| = 8.82\times 10^{-5}$, $|f_5'| = 5.36\times 10^{-5}$, $|f_5''| = 1.27\times 10^{-5}$. Their corresponding CP phases are set to $\chi_3 = 9.71\times 10^{-1}$, $\chi_3' = 7.86\times 10^{-1}$, $\chi_3'' = 7.89\times 10^{-1}$, $\chi_4 = 7.66\times 10^{-1}$, $\chi_4' = 8.38\times 10^{-1}$, $\chi_4'' = 8.23\times 10^{-1}$, $\chi_5 = 7.70\times 10^{-1}$, $\chi_5' = 1.47$, $\chi_5'' = 7.82\times 10^{-1}$. All masses are in GeV, phases in rad and EDM in $e$cm. }
\label{fig4}
\end{center}
\end{figure}
\begin{table}[H] \centering
\begin{tabular}{ccc}
\toprule\toprule
& (i) Case of no mixing & (ii) Case of mixing \\ \cmidrule{2-3}
$d_e^{\chi^+}$ & $2.82\times 10^{-30}$ & $2.82 \times 10^{-30}$ \\
$d_e^{\chi^0}$ & $-2.53 \times 10^{-31}$ & $-2.53\times10^{-31}$ \\
$d_e^W$ & $0$ & $9.72 \times 10^{-29}$ \\
$d_e^Z$ & $ 0$ & $-3.05 \times 10^{-29}$ \\ \hline
$d_e$ & $2.57\times 10^{-30}$ & $6.93 \times 10^{-29}$ \\
\bottomrule \bottomrule
\end{tabular}
\caption{Column (i):
An exhibition of the individual contributions to $d_e$ arising from the chargino, neutralino, W and Z boson exchanges and their sum $d_e$ for the case when there is no mixing among the generations.
The parameters chosen are the same as for the solid curve ($\alpha_\mu=-0.5$ rad) of \cref{fig2}
where $m_0$ is set to 1.1 PeV.
Column (ii):
The analysis of column (ii) has the same set of parameters as column (i) except that inter-generational couplings are allowed. Here the couplings $f_3, f_3'$, $f_3'', f_4, f_4'$, $f_4'', f_5, f_5'$, and $f_5''$ are the same
as the ones in the right panel of \cref{fig4}. The fermion masses for the vectorlike generation
are $m_N=250$ and $m_E=380$ GeV. The EDM is in $e$cm units.}
\label{table1}
\end{table}
In \cref{fig5} we give an analysis of the electron EDM as a function of $m_0$ for different pairs of
fermion masses for the vectorlike generation. The fermion masses for the vectorlike generation
lie in the range 150-300 GeV. Here we find that $d_e$ is very sensitive to the fermion masses for the
vector like generation. The dependence of $|d_e|$ on $m_0$ shows a turn around where $|d_e|$ first
decreases and then increases. This is easily understood as follows: As discussed already for the
case of \cref{fig4} the supersymmetric contribution is very sensitive to $m_0$ since the slepton masses
that enter in the supersymmetric diagrams grow as $m_0$ gets large and consequently
the SUSY contributions become negligible as $m_0$ gets large. However, also as already
discussed the $W$ and $Z$ exchange contributions are not affected by $m_0$. Thus at low
values of $m_0$, the supersymmetric contribution is large and of opposite sign to the $W$ and $Z$ exchange
contribution
in this region of the parameter space
which leads to a cancellation between the two and thus a falling behavior
of $|d_e|$. However, as $m_0$ increases the SUSY contribution dies out and the $W$ and $Z$ contribution
takes over, which explains the turn around. This turn around is exhibited for two values of $m_0$
around the minimum
in \cref{table2}. Here we consider the parameter point $m_N=m_E=200$ GeV in \cref{fig5}
for the sample points $m_0=0.4$ PeV and $m_0=0.6$ PeV. Comparison of columns (i) and (ii) in
\cref{table2}
shows that the chargino and the neutralino exchange contribution vary in a significant way
while the $W$ and $Z$ exchange contribution is unchanged. Consequently
$d_e=-5.96\times 10^{-29}$ $e$cm for column (i) and $d_e= 6.61\times 10^{-29}$ $e$cm
for column (ii). Thus we see that $d_e$ has switched sign in going from
$m_0=0.4$ PeV to $m_0=0.6$ PeV which means that $d_e$ has gone through a zero
which explains the turn around of $|d_e|$ in \cref{fig5}.\\
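The turn around can be reproduced with a two-parameter toy model read off from \cref{table2}: writing $d_e(m_0)\simeq A/m_0^2 + B$, with $A$ fixed by the SUSY entries of column (i) and $B$ by the $m_0$-independent $W$ and $Z$ entries, places the zero of $d_e$ between the two columns. The strict $1/m_0^2$ scaling is only approximate, so the column (ii) value is reproduced only roughly.
\begin{verbatim}
# Toy model of the turn around with inputs read off from Table 2;
# the strict 1/m0^2 scaling of the SUSY piece is an approximation.
import math
m0_a = 0.4e6                         # GeV
susy = -2.38e-28 - 9.18e-31          # chargino + neutralino at m0_a
wz   =  2.72e-28 - 9.31e-29          # W + Z, independent of m0
A    = susy * m0_a**2                # d_SUSY(m0) ~ A / m0^2

de = lambda m0: A / m0**2 + wz
print(math.sqrt(-A / wz) / 1.0e6, "PeV")   # zero crossing, ~0.46 PeV
print(de(0.4e6), de(0.6e6))                # sign flip as in Table 2
\end{verbatim}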
\begin{figure}[h]
\begin{center}
{\rotatebox{0}{\resizebox*{10cm}{!}{\includegraphics{5.png}}\hglue5mm}}
\caption{An exhibition of the dependence of $|d_e|$ on $m_0$ for various vectorlike masses. The curves correspond to $m_N=m_E=150$ (dotdashed), $m_N=m_E=200$ (solid), $m_N=m_E=250$ (dotted), $m_N=m_E=300$ (dashed). The parameters are $|\mu| = 4.1\times 10^2$, $|M_1| = 2.8\times 10^2$, $|M_2| = 3.4\times 10^2$, $|A_0| = 3\times 10^6$, $m_0^{\tilde{\nu}} = 4\times 10^6$, $|A_0^{\tilde{\nu}}| = 5\times 10^6$, $\tan\beta = 50$. The CP phases are $\theta_{\mu} = 1$, $\alpha_1 = 1.26$, $\alpha_2 = 0.94$, $\alpha_{A_0} = 0.94$, $\alpha_{A_0^{\tilde{\nu}}} = 1.88$. The $f$ couplings are $|f_3| = 3.01\times 10^{-5}$, $|f_3'| = 8.07\times 10^{-6}$, $|f_3''| = 2.06\times 10^{-5}$, $|f_4| = 8.13\times 10^{-4}$, $|f_4'| = 3.50\times 10^{-1}$, $|f_4''| = 6.29\times 10^{-1}$, $|f_5| = 6.38\times 10^{-5}$, $|f_5'| = 1.03\times 10^{-6}$, $|f_5''| = 2.44\times 10^{-8}$. Their corresponding CP phases are $\chi_3 = 7.91\times 10^{-1}$, $\chi_3' = 7.87\times 10^{-1}$, $\chi_3'' = 7.78\times 10^{-1}$, $\chi_4 = 7.66\times 10^{-1}$, $\chi_4' = 8.38\times 10^{-1}$, $\chi_4'' = 8.23\times 10^{-1}$, $\chi_5 = 7.57\times 10^{-1}$, $\chi_5' = 7.54\times 10^{-1}$, $\chi_5'' = 7.83\times 10^{-1}$. All masses are in GeV, phases in rad, and $d_e$ in $e$cm. }
\label{fig5}
\end{center}
\end{figure}
In \cref{fig6} we exhibit the dependence of $|d_e|$ on the phase $\alpha_\mu$ which is the phase of the
Higgs mixing parameter $\mu$. The dependence of $|d_e|$ on $\alpha_\mu$ arises from various sources. Thus the
slepton masses as well as the chargino and the neutralino masses that enter in the supersymmetric
loop contribution have a dependence on $\alpha_\mu$ which makes a simple explanation of the
dependence on this parameter less transparent. A numerical analysis exhibiting the dependence of
$|d_e|$ on $\alpha_\mu$ is given in \cref{fig6}. The analysis is done for different $\tan\beta$ ranging from
$\tan\beta=20$ to $\tan\beta=50$. A similar analysis of the dependence of $|d_e|$ on $\chi_4''$ for
various values of $f_4''$ is given in \cref{fig7}. The sharp dependence of $|d_e|$ on $\chi_4''$ is not difficult
to understand. Unlike the case of the dependence of $|d_e|$
on $\alpha_\mu$ which arises mainly from the supersymmetric sector, here the dependence of $|d_e|$ on
$\chi_4''$ arises from the non-supersymmetric sector via the exchange of $W$ and $Z$ bosons.
The dependence of the SUSY contribution is limited by the smallness of $|f_4''|$ compared to the other masses in the slepton mass$^2$ matrix.
The non-supersymmetric
contribution is directly governed by $f_3'', f_4'', f_5''$ as can be seen from Eq.(\ref{7aa}) and Eq.(\ref{7bb}).
Here setting $f_3''=f_4''=f_5''=0$ puts the mass matrices in a block diagonal form where the first generation
totally decouples from the vector like generation. This clearly indicates that the effect of variation in
$|f_3''|, |f_4''|, |f_5''|$ and their phases, $\chi_3'', \chi_4'', \chi_5''$ will be strong.
This is what the analysis of \cref{fig7} indicates. Aside from the variations of the $W$ and $Z$ contributions
on $\chi_4''$, there is also a constructive/destructive interference between the $W$ and the $Z$ contributions
as $\chi_4''$ varies which explains the rapid variations of $|d_e|$ with $\chi_4''$ in \cref{fig7}.\\
Finally, the effect of mixing of the vectorlike generation with the three lepton generations has negligible effect
on the standard model predictions in the leptonic sector at the tree level. However, it does
affect the neutrino sector. Specifically taking the mixings into account the analysis presented here
satisfies the constraint on the sum of the neutrino masses arising from the Planck Satellite experiment~\cite{Schwetz:2008er} so that
\begin{equation}
\sum_{i=1}^3m_{\nu_{i}}<0.85 \ {\rm eV} \ ,
\label{6.1a}
\end{equation}
where we assume $\nu_i$ (i=1,2,3) to be the mass eigenstates with eigenvalues $m_{\nu_i}$.
Further, the neutrino oscillation constraints on the neutrino mass squared differences~\cite{Schwetz:2008er}
are also satisfied, i.e., the constraints
\begin{gather}
\label{6.1b}
\Delta m^2_{31}\equiv m_3^2-m_1^2= 2.4^{+0.12}_{-0.11} \times 10^{-3} ~{\rm eV}^2 \ , \\
\Delta m_{21}^2\equiv m_2^2- m_1^2= 7.65^{+0.23}_{-0.20} \times 10^{-5}~{\rm eV}^2.
\label{6.1c}
\end{gather}
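As an illustration, with normal ordering and the central values quoted above, even an assumed lightest neutrino mass of $m_1=0.05$ eV keeps the sum well below the bound:
\begin{verbatim}
# Sum of neutrino masses for normal ordering, using the central
# values of Eqs. (6.1b)-(6.1c); m1 = 0.05 eV is an assumed input.
import math
dm31, dm21 = 2.4e-3, 7.65e-5         # eV^2
m1 = 0.05                            # eV, illustrative
m2 = math.sqrt(m1**2 + dm21)
m3 = math.sqrt(m1**2 + dm31)
print(m1 + m2 + m3, "eV")            # ~0.17 eV, below the 0.85 eV bound
\end{verbatim}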
The analysis given in this section respects all of the collider constraints, i.e., those from LEP and the LHC.
Specifically the lower limits on heavy lepton masses are around 100 GeV\cite{pdg} and the masses of
$m_E$ and $m_N$ used here respect these limits. However,
in addition there are flavor constraints to consider. Here $\mu\to e+\gamma$ provides the most
stringent constraint. Thus
the above framework allows the process $\mu\to e+\gamma$ for which the current upper
limit from experiment is \cite{pdg} $4.4\times 10^{-12}$. The analysis of this process requires
the mixing of the vectorlike generation with all the three generations. A similar analysis but for the
$\tau\to \mu + \gamma$ was given in \cite{Ibrahim:2012ds} and it was found that the model with a vector like
generation can produce a branching ratio for this process which lies below the current experimental
limit for that process but could be accessible in improved experiments. In that analysis the scalar masses were in the sub-TeV region. However, in the present case we are
interested in the PeV size scalar masses.
From Figure 3 of \cite{Ibrahim:2012ds}, we see that for heavy scalars, the branching ratio decreases rapidly as the masses increase and since we are interested in the
PeV size scalars we expect that the $\mu\to e + \gamma$ experimental upper limits would be easily
satisfied. A full treatment of the processes is, however, outside the scope of this work and will be
discussed elsewhere.
\begin{table} \centering
\begin{tabular}{ccc}
\toprule\toprule
& (i) $m_{0}= 0.4$ PeV & (ii) $m_{0}= 0.6$ PeV \\ \cmidrule{2-3}
$d_e^{\chi^+}$ & $-2.38\times 10^{-28}$ & $-1.13 \times 10^{-28}$ \\
$d_e^{\chi^0}$ & $-9.18\times 10^{-31}$ & $-4.08\times 10^{-31}$ \\
$d_e^W$ & $2.72\times 10^{-28}$ & $2.72 \times 10^{-28}$ \\
$d_e^Z$ & $-9.31 \times 10^{-29}$ & $-9.31 \times 10^{-29}$ \\ \hline
$d_e$ & $-5.96 \times 10^{-29}$ & $6.61\times 10^{-29}$ \\
\bottomrule \bottomrule
\end{tabular}
\caption{An exhibition of the individual contributions to the electric dipole moment of the electron arising from the chargino exchange, neutralino exchange, W boson exchange and Z boson exchange.
The last row gives the total EDM $d_e$ where
$d_e= d_e^{\chi^+} + d_e^{\chi^0} + d_e^{W} + d_e^{Z}$.
The analysis is for the solid curve of \cref{fig5} where $m_N=m_E=200$ when (i) $m_0=0.4$ PeV, (ii) $m_0=0.6$ PeV.
The EDM is in $e$cm units.}
\label{table2}
\end{table}
\begin{figure}[H]
\begin{center}
{\rotatebox{0}{\resizebox*{7.5cm}{!}{\includegraphics{6l.png}}\hglue5mm}}
\caption{An exhibition of the dependence of $|d_e|$ on $\alpha_\mu$ for various $\tan\beta$. The curves correspond to $\tan\beta=20$ (dashed), $\tan\beta=30$ (dotted), $\tan\beta=40$ (solid), and $\tan\beta=50$ (dotdashed). The parameters used are $|\mu| = 3.9\times 10^2$, $|M_1| = 3.1\times 10^2$, $|M_2| = 3.6\times 10^2$, $m_N = 340$, $m_E = 250$, $m_0 = 1.1\times 10^6$, $|A_0| = 3.2\times 10^6$, $m_0^{\tilde{\nu}} = 4.3\times 10^6$, $|A_0^{\tilde{\nu}}| = 5.1\times 10^6$, $\alpha_1 = 1.88$, $\alpha_2 = 1.26$, $\alpha_{A_0} = 0.94$, $\alpha_{A_0^{\tilde{\nu}}} = 1.88$. The mixings are $|f_3| = 2.88\times 10^{-4}$, $|f_3'| = 8.19\times 10^{-6}$, $|f_3''| = 9.19\times 10^{-5}$, $|f_4| = 8.13\times 10^{-4}$, $|f_4'| = 3.50\times 10^{-1}$, $|f_4''| = 1.29\times 10^{-1}$, $|f_5| = 5.75\times 10^{-6}$, $|f_5'| = 1.00\times 10^{-5}$, $|f_5''| = 2.49\times 10^{-7}$, $\chi_3 = 7.74\times 10^{-1}$, $\chi_3' = 7.73\times 10^{-1}$, $\chi_3'' = 7.86\times 10^{-1}$, $\chi_4 = 7.6\times 10^{-1}$, $\chi_4' = 8.40\times 10^{-1}$, $\chi_4'' = 8.20\times 10^{-1}$, $\chi_5 = 7.51\times 10^{-1}$, $\chi_5' = 8.19\times 10^{-1}$, $\chi_5'' = 8.03\times 10^{-1}$. All masses are in GeV, phases in rad, and $d_e$ in $e$cm. }
\label{fig6}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
{\rotatebox{0}{\resizebox*{7.5cm}{!}{\includegraphics{7.png}}\hglue5mm}}
\caption{An exhibition of the dependence of $|d_e|$ on $\chi_4^{''}$ for various $f_4^{''}$. The curves correspond to $f_4^{''}$ of $0.1$ (dashed), $0.2$ (dotted), $0.5$ (solid), $1$ (dotdashed). The other parameters are $|\mu| = 1.1\times 10^6$, $|M_1| = 2.8\times 10^6$, $|M_2| = 3.4\times 10^6$, $m_N = 250$, $m_E = 380$, $m_0 = 1.1\times 10^6$, $|A_0| = 3.2\times 10^6$, $m_0^{\tilde{\nu}} = 1.4\times 10^6$, $|A_0^{\tilde{\nu}}| = 5.1\times 10^6$, $\alpha_1 = 1.26$, $\alpha_2 = 0.94$, $\alpha_{A_0} = 0.94$, $\alpha_{A_0^{\tilde{\nu}}} = 1.88$, $\tan\beta = 30$. The mixings are $|f_3| = 2.93\times 10^{-4}$, $|f_3'| = 8.19\times 10^{-6}$, $|f_3''| = 9.15\times 10^{-5}$, $|f_4| = 8.13\times 10^{-1}$, $|f_4'| = 3.50\times 10^{-1}$, $|f_5| = 5.08\times 10^{-6}$, $|f_5'| = 9.98\times 10^{-6}$, $|f_5''| = 2.56\times 10^{-7}$, $\chi_3 = 7.86\times 10^{-1}$, $\chi_3' = 7.80\times 10^{-1}$, $\chi_3'' = 8.02\times 10^{-1}$, $\chi_4 = 7.6\times 10^{-1}$, $\chi_4' = 8.4\times 10^{-1}$, $\chi_5 = 7.39\times 10^{-1}$, $\chi_5' = 7.82\times 10^{-1}$, $\chi_5'' = 7.82\times 10^{-1}$. All masses are in GeV, phases in rad and $d_e$ in $e$cm.
}
\label{fig7}
\end{center}
\end{figure}
\section{Conclusion\label{sec5}}
In the future the exploration of high scale physics on the energy frontier
will be limited by the highest energy
that accelerators can achieve. Thus the upgraded LHC will reach an energy of $\sqrt s=13$
TeV. Proposals are afoot to build accelerators that could extend the range to an ambitious
goal of 100 TeV. It has been pointed out recently that there are other avenues to access high
scales and one of these is via sensitive measurement of the EDM of elementary particles, i.e.,
of leptons and of quarks. In this work we focus on the EDM of the electron as it is the most
stringently constrained of the EDMs.
In this analysis we have used the current experimental limits on the EDM of the electron to
explore in a quantitative fashion the scale of the slepton masses that the electron
EDM can probe within MSSM.
It is found that the current constraints allow one to explore a wide range
of slepton masses from a few TeV to a PeV and beyond.
Further, we have extended the analysis to include a vector like lepton generation
and allowed for small mixings between the three ordinary generations and the vector like generation.
Here in addition to the supersymmetric contribution involving the exchange of the
charginos and the neutralinos, one has a contribution arising from the exchange of the $W$ and of the $Z$ bosons.
Unlike the chargino and the neutralino contribution which is sensitive to the slepton masses,
the $W$ and $Z$ contribution is independent of them. Thus the interference between the
supersymmetric and the non-supersymmetric contribution produces a remarkable phenomenon
where the EDM first falls and then turns around and rises again as the common scalar mass $m_0$ increases.
This is easily understood by noting that the destructive interference between the supersymmetric
and the non-supersymmetric contribution leads first to a cancellation between the two but as
the supersymmetric contribution dies out with increasing $m_0$ the non-supersymmetric
contribution becomes dominant and controls the EDM. Thus in this case the EDM could be substantial
even when $m_0$ lies in the several PeV region.
In the future, the EDM of the electron will be constrained even more stringently
by a factor of ten or more. Such a more stringent constraint will allow one to explore
an even larger range in the slepton masses.
Finally we note that a large
SUSY sfermion scale in the PeV region would automatically relieve the tension from the flavor changing
neutral current problem and from too rapid proton decay in supersymmetric
grand unified theories ~\cite{Ibrahim:2000tx}.\\
\noindent
{\em Acknowledgments}:
This research was supported in part by the NSF Grant PHY-1314774,
XSEDE- TG-PHY110015, and NERSC-DE-AC02-05CH1123.\\
\noindent
{{\bf Appendix A: The MSSM Extension with a vector leptonic multiplet\label{sec6}}}
In \cref{sec3} we extended MSSM to include a vector like generation. Here we provide
further details of this extension.
A vectorlike multiplet consists of an ordinary fourth generation of
leptons, quarks and their mirrors. A vector like generation is anomaly free and thus inclusion of it respects the
good properties of a gauge theory.
Vector like multiplets arise in a variety of unified models~\cite{guts}, some of which could be low lying.
They have been used recently in a variety of analyses
\cite{Babu:2008ge,Liu:2009cc,Martin:2009bg,Aboubrahim:2013yfa,Aboubrahim:2013gfa,Ibrahim:2008gg,Ibrahim:2010va,Ibrahim:2011im,Ibrahim:2010hv,Ibrahim:2009uv}.
In the analysis below we will assume an extended MSSM with just one vector multiplet.
Before proceeding further we
define the notation and give a very brief description of the extended model; a more detailed
description can be found in the previous works mentioned above. Thus the extended MSSM
contains a vectorlike multiplet. To fix notation the three generations of leptons are denoted by
\begin{align}
\psi_{iL}\equiv
\left(\begin{matrix} \nu_{i L}\cr
~{l}_{iL} \end{matrix} \right) \sim (1,2,- \frac{1}{2}) \ ; ~~ ~l^c_{iL}\sim (1,1,1)\ ;
~~~ \nu^c_{i L}\sim (1,1,0)\ ;
~~~i=1,2,3
\label{2}
\end{align}
where the properties under $SU(3)_C\times SU(2)_L\times U(1)_Y$ are also exhibited.
The last entry in the braces such as $(1,2, -1/2)$ is
the value of the hypercharge
$Y$ defined so that $Q=T_3+ Y$. These leptons have $V-A$ interactions.
We can now add a vectorlike multiplet where we have a fourth family of leptons with $V-A$ interactions
whose transformations can be obtained from Eq.(\ref{2}) by letting $i$ run from 1 to 4.
A vectorlike lepton multiplet also has mirrors and so we consider these mirror
leptons which have $V+A$ interactions. The quantum numbers of the mirrors are given by
\begin{align}
\chi^c\equiv
\left(\begin{matrix} E_{ L}^c \cr
N_L^c\end{matrix}\right) \sim (1,2,\frac{1}{2})\ ;
~~ E_L \sim (1,1,-1)\ ; ~~ N_L \sim (1,1,0).
\label{3}
\end{align}
Interesting new physics arises when we allow mixings of the vectorlike generation with
the three ordinary generations. Here we focus on the mixing of the mirrors in the vectorlike
generation with the three generations.
Thus the superpotential of the model allowing for the mixings
among the three ordinary generations and the vectorlike generation is given by
\begin{align}
W&= -\mu \epsilon_{ij} \hat H_1^i \hat H_2^j+\epsilon_{ij} [f_{1} \hat H_1^{i} \hat \psi_L ^{j}\hat \tau^c_L
+f_{1}' \hat H_2^{j} \hat \psi_L ^{i} \hat \nu^c_{\tau L}
+f_{2} \hat H_1^{i} \hat \chi^c{^{j}}\hat N_{L}
+f_{2}' H_2^{j} \hat \chi^c{^{i}} \hat E_{ L} \nonumber \\
&+ h_{1} H_1^{i} \hat\psi_{\mu L} ^{j}\hat\mu^c_L
+h_{1}' H_2^{j} \hat\psi_{\mu L} ^{i} \hat\nu^c_{\mu L}
+ h_{2} H_1^{i} \hat\psi_{e L} ^{j}\hat e^c_L
+h_{2}' H_2^{j} \hat\psi_{e L} ^{i} \hat\nu^c_{e L}] \nonumber \\
&+ f_{3} \epsilon_{ij} \hat\chi^c{^{i}}\hat\psi_L^{j}
+ f_{3}' \epsilon_{ij} \hat\chi^c{^{i}}\hat\psi_{\mu L}^{j}
+ f_{4} \hat\tau^c_L \hat E_{ L} + f_{5} \hat\nu^c_{\tau L} \hat N_{L}
+ f_{4}' \hat\mu^c_L \hat E_{ L} + f_{5}' \hat\nu^c_{\mu L} \hat N_{L} \nonumber \\
&+ f_{3}'' \epsilon_{ij} \hat\chi^c{^{i}}\hat\psi_{e L}^{j}
+ f_{4}'' \hat e^c_L \hat E_{ L} + f_{5}'' \hat\nu^c_{e L} \hat N_{L}\ ,
\label{5}
\end{align}
where $\hat ~$ implies superfields, $\hat\psi_L$ stands for $\hat\psi_{3L}$, $\hat\psi_{\mu L}$ stands for $\hat\psi_{2L}$
and $\hat\psi_{e L}$ stands for $\hat\psi_{1L}$.
The mass terms for the neutrinos, mirror neutrinos, leptons and mirror leptons arise from the term
\begin{equation}
{\cal{L}}=-\frac{1}{2}\frac{\partial ^2 W}{\partial{A_i}\partial{A_j}}\psi_ i \psi_ j+\text{H.c.}
\label{6}
\end{equation}
where $\psi$ and $A$ stand for generic two-component fermion and scalar fields.
After spontaneous breaking of the electroweak symmetry, ($\langle H_1^1 \rangle=v_1/\sqrt{2} $ and $\langle H_2^2\rangle=v_2/\sqrt{2}$),
we have the following set of mass terms written in the 4-component spinor notation
so that
\begin{equation}
-{\cal L}_m= \bar\xi_R^T (M_f) \xi_L +\bar\eta_R^T(M_{\ell}) \eta_L +\text{H.c.},
\end{equation}
where the basis vectors in which the mass matrices are written are given by
\begin{gather}
\bar\xi_R^T= \left(\begin{matrix}\bar \nu_{\tau R} & \bar N_R & \bar \nu_{\mu R}
&\bar \nu_{e R} \end{matrix}\right),\nonumber\\
\xi_L^T= \left(\begin{matrix} \nu_{\tau L} & N_L & \nu_{\mu L}
& \nu_{e L} \end{matrix}\right) \ ,\nonumber\\
\bar\eta_R^T= \left(\begin{matrix}\bar{\tau_ R} & \bar E_R & \bar{\mu_ R}
&\bar{e_ R} \end{matrix}\right),\nonumber\\
\eta_L^T= \left(\begin{matrix} {\tau_ L} & E_L & {\mu_ L}
& {e_ L} \end{matrix}\right) \ ,
\end{gather}
and the mass matrix $M_f$ is given by
\begin{eqnarray}
M_f=
\left(\begin{matrix} f'_1 v_2/\sqrt{2} & f_5 & 0 & 0 \cr
-f_3 & f_2 v_1/\sqrt{2} & -f_3' & -f_3'' \cr
0&f_5'&h_1' v_2/\sqrt{2} & 0 \cr
0 & f_5'' & 0 & h_2' v_2/\sqrt{2}\end{matrix} \right)\ .
\label{7aa}
\end{eqnarray}
We define the matrix element $(22)$ of the mass matrix as $m_N$ so that
\begin{eqnarray}
m_N= f_2 v_1/\sqrt 2.
\end{eqnarray}
The mass matrix is not Hermitian and thus one needs bi-unitary transformations to diagonalize it.
We define the bi-unitary transformation so that
\begin{equation}
D^{\nu \dagger}_R (M_f) D^\nu_L=\text{diag}(m_{\psi_1},m_{\psi_2},m_{\psi_3}, m_{\psi_4} ).
\label{7a}
\end{equation}
Under the bi-unitary transformations the basis vectors transform so that
\begin{eqnarray}
\left(\begin{matrix} \nu_{\tau_R}\cr
N_{ R} \cr
\nu_{\mu_R} \cr
\nu_{e_R} \end{matrix}\right)=D^{\nu}_R \left(\begin{matrix} \psi_{1_R}\cr
\psi_{2_R} \cr
\psi_{3_R} \cr
\psi_{4_R}\end{matrix}\right), \ \
\left(\begin{matrix} \nu_{\tau_L}\cr
N_{ L} \cr
\nu_{\mu_L} \cr
\nu_{e_L}\end{matrix} \right)=D^{\nu}_L \left(\begin{matrix} \psi_{1_L}\cr
\psi_{2_L} \cr
\psi_{3_L} \cr
\psi_{4_L}\end{matrix}\right) \ .
\label{8}
\end{eqnarray}
In \cref{7a}
$\psi_1, \psi_2, \psi_3, \psi_4$ are the mass eigenstates for the neutrinos,
where in the limit of no mixing
we identify $\psi_1$ as the light tau neutrino, $\psi_2$ as the
heavier mass eigenstate, $\psi_3$ as the muon neutrino and $\psi_4$ as the electron neutrino.
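Numerically, the bi-unitary transformation of Eq. (\ref{7a}) is simply a singular value decomposition, $M_f = D^\nu_R\, {\rm diag}(m_{\psi_i})\, D^{\nu\dagger}_L$. A minimal sketch, with assumed placeholder couplings that are not a benchmark of the analysis:
\begin{verbatim}
# Bi-unitary diagonalization of the non-Hermitian M_f of Eq. (7aa)
# via SVD; all numerical inputs below are assumed placeholders.
import numpy as np

v1, v2 = 24.0, 240.0                       # Higgs VEVs in GeV
f1p, f2, h1p, h2p = 1e-2, 1.5, 1e-3, 1e-5  # diagonal couplings
f3, f3p, f3pp = (1e-1*np.exp(0.8j), 5e-2*np.exp(0.8j),
                 2e-2*np.exp(0.8j))        # complex mixings
f5, f5p, f5pp = 1e-4, 1e-4, 1e-5

r2 = np.sqrt(2.0)
Mf = np.array([[f1p*v2/r2, f5,        0.0,        0.0],
               [-f3,       f2*v1/r2,  -f3p,       -f3pp],
               [0.0,       f5p,       h1p*v2/r2,  0.0],
               [0.0,       f5pp,      0.0,        h2p*v2/r2]])

U, S, Vh = np.linalg.svd(Mf)        # Mf = U diag(S) Vh
DR, DL = U, Vh.conj().T             # D_R and D_L of Eq. (7a)
print(S)                            # masses m_psi_i (descending order)
print(np.round(DR.conj().T @ Mf @ DL, 12))  # diagonal check
\end{verbatim}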
A similar analysis applies to the lepton mass matrix $M_\ell$ where
\begin{eqnarray}
M_\ell=
\left(\begin{matrix} f_1 v_1/\sqrt{2} & f_4 & 0 & 0 \cr
f_3 & f'_2 v_2/\sqrt{2} & f_3' & f_3'' \cr
0&f_4'&h_1 v_1/\sqrt{2} & 0 \cr
0 & f_4'' & 0 & h_2 v_1/\sqrt{2}\end{matrix} \right)\ .
\label{7bb}
\end{eqnarray}
In general $f_3, f_4, f_5, f_3', f_4',f_5', f_3'', f_4'',f_5''$ can be complex and we define their phases
so that
\begin{eqnarray}
f_k= |f_k| e^{i\chi_k}, ~~f_k'= |f_k'| e^{i\chi_k'}, ~~~f_k''= |f_k''| e^{i\chi_k''}\ ; k=3,4,5\ .
\end{eqnarray}
We introduce now the mass parameter $m_E$ defined by the (22) element of the mass matrix above so that
\begin{eqnarray}
m_E= f_2' v_2/\sqrt 2.
\end{eqnarray}
Next we consider the mixing of the charged sleptons and the charged mirror sleptons.
The mass squared matrix of the slepton-mirror slepton system comes from three sources: the F term, the
D term of the potential, and the soft SUSY breaking terms.
Using the superpotential of \cref{5} the mass terms arising from it
after the breaking of the electroweak symmetry are given by
the Lagrangian
\begin{equation}
{\cal L}= {\cal L}_F +{\cal L}_D + {\cal L}_{\rm soft}\ ,
\end{equation}
where $ {\cal L}_F$ is deduced from \cref{5} and is given in \cite{Ibrahim:2012ds}, while the ${\cal L}_D$ is given by
\begin{align}
-{\cal L}_D&=\frac{1}{2} m^2_Z \cos^2\theta_W \cos 2\beta \{\tilde \nu_{\tau L} \tilde \nu^*_{\tau L} -\tilde \tau_L \tilde \tau^*_L
+\tilde \nu_{\mu L} \tilde \nu^*_{\mu L} -\tilde \mu_L \tilde \mu^*_L
+\tilde \nu_{e L} \tilde \nu^*_{e L} -\tilde e_L \tilde e^*_L \nonumber \\
&+\tilde E_R \tilde E^*_R -\tilde N_R \tilde N^*_R\}
+\frac{1}{2} m^2_Z \sin^2\theta_W \cos 2\beta \{\tilde \nu_{\tau L} \tilde \nu^*_{\tau L}
+\tilde \tau_L \tilde \tau^*_L
+\tilde \nu_{\mu L} \tilde \nu^*_{\mu L} +\tilde \mu_L \tilde \mu^*_L \nonumber \\
&+\tilde \nu_{e L} \tilde \nu^*_{e L} +\tilde e_L \tilde e^*_L
-\tilde E_R \tilde E^*_R -\tilde N_R \tilde N^*_R +2 \tilde E_L \tilde E^*_L -2 \tilde \tau_R \tilde \tau^*_R
-2 \tilde \mu_R \tilde \mu^*_R -2 \tilde e_R \tilde e^*_R
\}.
\label{12}
\end{align}
For ${\cal L}_{\rm soft}$ we assume the following form
\begin{align}
-{\cal L}_{\text{soft}}&= M^2_{\tilde \tau L} \tilde \psi^{i*}_{\tau L} \tilde \psi^i_{\tau L}
+ M^2_{\tilde \chi} \tilde \chi^{ci*} \tilde \chi^{ci}
+ M^2_{\tilde \mu L} \tilde \psi^{i*}_{\mu L} \tilde \psi^i_{\mu L}
+M^2_{\tilde e L} \tilde \psi^{i*}_{e L} \tilde \psi^i_{e L}
+ M^2_{\tilde \nu_\tau} \tilde \nu^{c*}_{\tau L} \tilde \nu^c_{\tau L}
+ M^2_{\tilde \nu_\mu} \tilde \nu^{c*}_{\mu L} \tilde \nu^c_{\mu L} \nonumber \\
&+ M^2_{\tilde \nu_e} \tilde \nu^{c*}_{e L} \tilde \nu^c_{e L}
+ M^2_{\tilde \tau} \tilde \tau^{c*}_L \tilde \tau^c_L
+ M^2_{\tilde \mu} \tilde \mu^{c*}_L \tilde \mu^c_L
+ M^2_{\tilde e} \tilde e^{c*}_L \tilde e^c_L
+ M^2_{\tilde E} \tilde E^*_L \tilde E_L
+ M^2_{\tilde N} \tilde N^*_L \tilde N_L \nonumber \\
&+\epsilon_{ij} \{f_1 A_{\tau} H^i_1 \tilde \psi^j_{\tau L} \tilde \tau^c_L
-f'_1 A_{\nu_\tau} H^i_2 \tilde \psi ^j_{\tau L} \tilde \nu^c_{\tau L}
+h_1 A_{\mu} H^i_1 \tilde \psi^j_{\mu L} \tilde \mu^c_L
-h'_1 A_{\nu_\mu} H^i_2 \tilde \psi ^j_{\mu L} \tilde \nu^c_{\mu L} \nonumber \\
&+h_2 A_{e} H^i_1 \tilde \psi^j_{e L} \tilde e^c_L
-h'_2 A_{\nu_e} H^i_2 \tilde \psi ^j_{e L} \tilde \nu^c_{e L}
+f_2 A_N H^i_1 \tilde \chi^{cj} \tilde N_L
-f'_2 A_E H^i_2 \tilde \chi^{cj} \tilde E_L +\text{H.c.}\}\ .
\label{13}
\end{align}
Here $M_{\tilde e L}, M_{\tilde \nu_e}$ etc are the soft masses and $A_e, A_{\nu_e}$ etc are the trilinear couplings.
The trilinear couplings are complex and we define their phases so that
\begin{gather}
A_e= |A_e| e^{i \alpha_{A_e}} \ ,
~~A_{\nu_e}= |A_{\nu_e}|
e^{i\alpha_{A_{\nu_e}}} \ ,
\cdots \ .
\end{gather}
From these terms we construct the scalar mass$^2$ matrices \cite{Ibrahim:2012ds}
which are exhibited in Appendix C.\\
As discussed in \cref{sec3} and \cref{sec4} the inclusion of the vector like generation
brings in new phenomena such as exchange contributions
from the $W$ and $Z$ bosons which are otherwise absent. Their inclusion
gives an important contribution to the EDM since the
$W$ and the $Z$ boson contribution begins to play a role and leads to constructive and
destructive interference with the chargino and neutralino exchange contribution.
A more detailed description of this phenomenon is given in \cref{sec4}.
\noindent
{{\bf Appendix B: Interactions that enter in the EDM analysis in the MSSM Extension with a Vector like Multiplet
\label{sec7}
}}\\
In this section we discuss the interactions in the mass diagonal basis involving charged leptons,
sneutrinos and charginos. Thus we have
\begin{align}
-{\cal L}_{\tau-\tilde{\nu}-\chi^{-}} &= \sum_{i=1}^{2}\sum_{j=1}^{8}\bar{\tau}_{\alpha}(C_{\alpha ij}^{L}P_{L}+C_{\alpha ij}^{R}P_{R})\tilde{\chi}^{ci}\tilde{\nu}_{j}+\text{H.c.},
\end{align}
such that
\begin{align}
\begin{split}
C_{\alpha ij}^{L}=&g(-\kappa_{\tau}U^{*}_{i2}D^{\tau*}_{R1\alpha} \tilde{D}^{\nu}_{1j} -\kappa_{\mu}U^{*}_{i2}D^{\tau*}_{R3\alpha}\tilde{D}^{\nu}_{5j}-
\kappa_{e}U^{*}_{i2}D^{\tau*}_{R4\alpha}\tilde{D}^{\nu}_{7j}+U^{*}_{i1}D^{\tau*}_{R2\alpha}\tilde{D}^{\nu}_{4j}-
\kappa_{N}U^{*}_{i2}D^{\tau*}_{R2\alpha}\tilde{D}^{\nu}_{2j})
\end{split} \\~\nonumber\\
\begin{split}
C_{\alpha ij}^{R}=&g(-\kappa_{\nu_{\tau}}V_{i2}D^{\tau*}_{L1\alpha}\tilde{D}^{\nu}_{3j}-\kappa_{\nu_{\mu}}V_{i2}D^{\tau*}_{L3\alpha}\tilde{D}^{\nu}_{6j}-
\kappa_{\nu_{e}}V_{i2}D^{\tau*}_{L4\alpha}\tilde{D}^{\nu}_{8j}+V_{i1}D^{\tau*}_{L1\alpha}\tilde{D}^{\nu}_{1j}+V_{i1}D^{\tau*}_{L3\alpha}\tilde{D}^{\nu}_{5j}\\
&+V_{i1}D^{\tau*}_{L4\alpha}\tilde{D}^{\nu}_{7j}-\kappa_{E}V_{i2}D^{\tau*}_{L2\alpha}\tilde{D}^{\nu}_{4j}),
\end{split}
\end{align}
with
\begin{align}
(\kappa_{N},\kappa_{\tau},\kappa_{\mu},\kappa_{e})&=\frac{(m_{N},m_{\tau},m_{\mu},m_{e})}{\sqrt{2}m_{W}\cos\beta} , \\~\nonumber\\
(\kappa_{E},\kappa_{\nu_{\tau}},\kappa_{\nu_{\mu}},\kappa_{\nu_{e}})&=\frac{(m_{E},m_{\nu_{\tau}},m_{\nu_{\mu}},m_{\nu_{e}})}{\sqrt{2}m_{W}\sin\beta} .
\end{align}
We now discuss the interactions in the mass diagonal basis involving charged leptons,
sleptons and neutralinos. Thus we have
\begin{align}
-{\cal L}_{\tau-\tilde{\tau}-\chi^{0}} &= \sum_{i=1}^{4}\sum_{j=1}^{8}\bar{\tau}_{\alpha}(C_{\alpha ij}^{'L}P_{L}+C_{\alpha ij}^{'R}P_{R})\tilde{\chi}^{0}_{i}\tilde{\tau}_{j}+\text{H.c.},
\end{align}
such that
\begin{align}
C_{\alpha ij}^{'L}=&\sqrt{2}(\alpha_{\tau i}D^{\tau *}_{R1\alpha}\tilde{D}^{\tau}_{1j}-\delta_{E i}D^{\tau *}_{R2\alpha}\tilde{D}^{\tau}_{2j}-
\gamma_{\tau i}D^{\tau *}_{R1\alpha}\tilde{D}^{\tau}_{3j}+\beta_{E i}D^{\tau *}_{R2\alpha}\tilde{D}^{\tau}_{4j}
+\alpha_{\mu i}D^{\tau *}_{R3\alpha}\tilde{D}^{\tau}_{5j}-\gamma_{\mu i}D^{\tau *}_{R3\alpha}\tilde{D}^{\tau}_{6j} \nonumber\\
&+\alpha_{e i}D^{\tau *}_{R4\alpha}\tilde{D}^{\tau}_{7j}-\gamma_{e i}D^{\tau *}_{R4\alpha}\tilde{D}^{\tau}_{8j})
\end{align}
\begin{align}
C_{\alpha ij}^{'R}=&\sqrt{2}(\beta_{\tau i}D^{\tau *}_{L1\alpha}\tilde{D}^{\tau}_{1j}-\gamma_{E i}D^{\tau *}_{L2\alpha}\tilde{D}^{\tau}_{2j}-
\delta_{\tau i}D^{\tau *}_{L1\alpha}\tilde{D}^{\tau}_{3j}+\alpha_{E i}D^{\tau *}_{L2\alpha}\tilde{D}^{\tau}_{4j}
+\beta_{\mu i}D^{\tau *}_{L3\alpha}\tilde{D}^{\tau}_{5j}-\delta_{\mu i}D^{\tau *}_{L3\alpha}\tilde{D}^{\tau}_{6j} \nonumber\\
&+\beta_{e i}D^{\tau *}_{L4\alpha}\tilde{D}^{\tau}_{7j}-\delta_{e i}D^{\tau *}_{L4\alpha}\tilde{D}^{\tau}_{8j}),
\end{align}
where
\begin{align}
\alpha_{E i}&=\frac{gm_{E}X^{*}_{4i}}{2m_{W}\sin\beta} \ ; && \beta_{E i}=eX'_{1i}+\frac{g}{\cos\theta_{W}}X'_{2i}\left(\frac{1}{2}-\sin^{2}\theta_{W}\right) \\
\gamma_{E i}&=eX^{'*}_{1i}-\frac{g\sin^{2}\theta_{W}}{\cos\theta_{W}}X^{'*}_{2i} \ ; && \delta_{E i}=-\frac{gm_{E}X_{4i}}{2m_{W}\sin\beta}
\end{align}
and
\begin{align}
\alpha_{\tau i}&=\frac{gm_{\tau}X_{3i}}{2m_{W}\cos\beta} \ ; && \alpha_{\mu i}=\frac{gm_{\mu}X_{3i}}{2m_{W}\cos\beta} \ ; && \alpha_{e i}=\frac{gm_{e}X_{3i}}{2m_{W}\cos\beta} \\
\delta_{\tau i}&=-\frac{gm_{\tau}X^{*}_{3i}}{2m_{W}\cos\beta} \ ; && \delta_{\mu i}=-\frac{gm_{\mu}X^{*}_{3i}}{2m_{W}\cos\beta} \ ; && \delta_{e i}=-\frac{gm_{e}X^{*}_{3i}}{2m_{W}\cos\beta}
\end{align}
and where
\begin{align}
\beta_{\tau i}=\beta_{\mu i}=\beta_{e i}&=-eX^{'*}_{1i}+\frac{g}{\cos\theta_{W}}X^{'*}_{2i}\left(-\frac{1}{2}+\sin^{2}\theta_{W}\right) \\
\gamma_{\tau i}=\gamma_{\mu i}=\gamma_{e i}&=-eX'_{1i}+\frac{g\sin^{2}\theta_{W}}{\cos\theta_{W}}X'_{2i}
\end{align}
Here $X'$ are defined by
\begin{align}
X'_{1i}&=X_{1i}\cos\theta_{W}+X_{2i}\sin\theta_{W} \\
X'_{2i}&=-X_ {1i}\sin\theta_{W}+X_{2i}\cos\theta_{W}
\end{align}
where $X$ diagonalizes the neutralino mass matrix and is defined by Eq.(\ref{2.8}).\\
In addition to the computation of the supersymmetric loop diagrams, we compute the contributions
arising from the exchange of the W and $Z$ bosons and the leptons and the mirror leptons in the
loops. The relevant interactions needed are given below. For the $W$ boson exchange the
interactions that enter are given by
\begin{align}
-{\cal L}_{\tau W\psi} &= W^{\dagger}_{\rho}\sum_{i=1}^{4}\sum_{\alpha=1}^{4}\bar{\psi}_{i}\gamma^{\rho}[C_{L_{i\alpha}}^W P_L + C_{R_{i\alpha}}^W P_R]\tau_{\alpha}+\text{H.c.}
\end{align}
where
\begin{eqnarray}
C_{L_{i\alpha}}^W= \frac{g}{\sqrt{2}} [D^{\nu*}_{L1i}D^{\tau}_{L1\alpha}+
D^{\nu*}_{L3i}D^{\tau}_{L3\alpha}+D^{\nu*}_{L4i}D^{\tau}_{L4\alpha}] \\
C_{R_{i\alpha}}^W= \frac{g}{\sqrt{2}}[D^{\nu*}_{R2i}D^{\tau}_{R2\alpha}]
\end{eqnarray}
For the $Z$ boson exchange the interactions that enter are given by
\begin{eqnarray}
-{\cal L}_{\tau\tau Z} &= Z_{\rho}\sum_{\alpha=1}^{4}\sum_{\beta=1}^{4}\bar{\tau}_{\alpha}\gamma^{\rho}[C_{L_{\alpha \beta}}^Z P_L + C_{R_{\alpha \beta}}^Z P_R]\tau_{\beta}
\end{eqnarray}
where
\begin{eqnarray}
C_{L_{\alpha \beta}}^Z=\frac{g}{\cos\theta_{W}} [x(D_{L\alpha 1}^{\tau\dag}D_{L1\beta}^{\tau}+D_{L\alpha 2}^{\tau\dag}D_{L2\beta}^{\tau}+D_{L\alpha 3}^{\tau\dag}D_{L3\beta}^{\tau}+D_{L\alpha 4}^{\tau\dag}D_{L4\beta}^{\tau})\nonumber\\
-\frac{1}{2}(D_{L\alpha 1}^{\tau\dag}D_{L1\beta}^{\tau}+D_{L\alpha 3}^{\tau\dag}D_{L3\beta}^{\tau}+D_{L\alpha 4}^{\tau\dag}D_{L4\beta}^{\tau})]
\end{eqnarray}
and
\begin{eqnarray}
C_{R_{\alpha \beta}}^Z=\frac{g}{\cos\theta_{W}} [x(D_{R\alpha 1}^{\tau\dag}D_{R1\beta}^{\tau}+D_{R\alpha 2}^{\tau\dag}D_{R2\beta}^{\tau}+D_{R\alpha 3}^{\tau\dag}D_{R3\beta}^{\tau}+D_{R\alpha 4}^{\tau\dag}D_{R4\beta}^{\tau})\nonumber\\
-\frac{1}{2}(D_{R\alpha 2}^{\tau\dag}D_{R2\beta}^{\tau})]
\end{eqnarray}
where $x=\sin^{2}\theta_{W}$.\\
\noindent
{{\bf Appendix C : The scalar mass squared matrices
\label{sec8}}}\\
For convenience we collect here all the contributions to the scalar mass$^2$ matrices
arising from the superpotential. They are given by
\begin{equation}
{\cal L}^{\rm mass}_F= {\cal L}_C^{\rm mass} +{\cal L}_N^{\rm mass}\ ,
\end{equation}
where ${\cal L}_C^{\rm mass}$ gives the mass terms for the charged sleptons while
${\cal L}_N^{\rm mass}$ gives the mass terms for the sneutrinos. For ${\cal L}_C^{\rm mass}$ we have
\begin{gather}
-{\cal L}_C^{\rm mass} =\left(\frac{v^2_2 |f'_2|^2}{2} +|f_3|^2+|f_3'|^2+|f_3''|^2\right)\tilde E_R \tilde E^*_R
+\left(\frac{v^2_2 |f'_2|^2}{2} +|f_4|^2+|f_4'|^2+|f_4''|^2\right)\tilde E_L \tilde E^*_L\nonumber\\
+\left(\frac{v^2_1 |f_1|^2}{2} +|f_4|^2\right)\tilde \tau_R \tilde \tau^*_R
+\left(\frac{v^2_1 |f_1|^2}{2} +|f_3|^2\right)\tilde \tau_L \tilde \tau^*_L
+\left(\frac{v^2_1 |h_1|^2}{2} +|f_4'|^2\right)\tilde \mu_R \tilde \mu^*_R\nonumber\\
+\left(\frac{v^2_1 |h_1|^2}{2} +|f_3'|^2\right)\tilde \mu_L \tilde \mu^*_L
+\left(\frac{v^2_1 |h_2|^2}{2} +|f_4''|^2\right)\tilde e_R \tilde e^*_R
+\left(\frac{v^2_1 |h_2|^2}{2} +|f_3''|^2\right)\tilde e_L \tilde e^*_L\nonumber\\
+\Bigg\{-\frac{f_1 \mu^* v_2}{\sqrt{2}} \tilde \tau_L \tilde \tau^*_R
-\frac{h_1 \mu^* v_2}{\sqrt{2}} \tilde \mu_L \tilde \mu^*_R
-\frac{f'_2 \mu^* v_1}{\sqrt{2}} \tilde E_L \tilde E^*_R
+\left(\frac{f'_2 v_2 f^*_3}{\sqrt{2}} +\frac{f_4 v_1 f^*_1}{\sqrt{2}}\right) \tilde E_L \tilde \tau^*_L\nonumber\\
+\left(\frac{f_4 v_2 f'^*_2}{\sqrt{2}} +\frac{f_1 v_1 f^*_3}{\sqrt{2}}\right) \tilde E_R \tilde \tau^*_R
+\left(\frac{f'_3 v_2 f'^*_2}{\sqrt{2}} +\frac{h_1 v_1 f'^*_4}{\sqrt{2}}\right) \tilde E_L \tilde \mu^*_L
+\left(\frac{f'_2 v_2 f'^*_4}{\sqrt{2}} +\frac{f'_3 v_1 h^*_1}{\sqrt{2}}\right) \tilde E_R \tilde \mu^*_R\nonumber\\
+\left(\frac{f''^*_3 v_2 f'_2}{\sqrt{2}} +\frac{f''_4 v_1 h^*_2}{\sqrt{2}}\right) \tilde E_L \tilde e^*_L
+\left(\frac{f''_4 v_2 f'^*_2}{\sqrt{2}} +\frac{f''^*_3 v_1 h^*_2}{\sqrt{2}}\right) \tilde E_R \tilde e^*_R
+f'_3 f^*_3 \tilde \mu_L \tilde \tau^*_L +f_4 f'^*_4 \tilde \mu_R \tilde \tau^*_R\nonumber\\
+f_4 f''^*_4 \tilde {e}_R \tilde{\tau}^*_R
+f''_3 f^*_3 \tilde {e}_L \tilde{\tau}^*_L
+f''_3 f'^*_3 \tilde {e}_L \tilde{\mu}^*_L
+f'_4 f''^*_4 \tilde {e}_R \tilde{\mu}^*_R
-\frac{h_2 \mu^* v_2}{\sqrt{2}} \tilde{e}_L \tilde{e}^*_R
+\text{H.c.} \Bigg\}
\end{gather}
We define the scalar mass squared matrix $M^2_{\tilde \tau}$ in the basis $(\tilde \tau_L, \tilde E_L, \tilde \tau_R,
\tilde E_R, \tilde \mu_L, \tilde \mu_R, \tilde e_L, \tilde e_R)$. We label the matrix elements of these as $(M^2_{\tilde \tau})_{ij}= M^2_{ij}$ where the elements of the matrix are given by
\begin{align}
M^2_{11}&=\tilde M^2_{\tau L} +\frac{v^2_1|f_1|^2}{2} +|f_3|^2 -m^2_Z \cos 2 \beta \left(\frac{1}{2}-\sin^2\theta_W\right), \nonumber\\
M^2_{22}&=\tilde M^2_E +\frac{v^2_2|f'_2|^2}{2}+|f_4|^2 +|f'_4|^2+|f''_4|^2 +m^2_Z \cos 2 \beta \sin^2\theta_W, \nonumber\\
M^2_{33}&=\tilde M^2_{\tau} +\frac{v^2_1|f_1|^2}{2} +|f_4|^2 -m^2_Z \cos 2 \beta \sin^2\theta_W, \nonumber\\
M^2_{44}&=\tilde M^2_{\chi} +\frac{v^2_2|f'_2|^2}{2} +|f_3|^2 +|f'_3|^2+|f''_3|^2 +m^2_Z \cos 2 \beta \left(\frac{1}{2}-\sin^2\theta_W\right), \nonumber
\end{align}
\begin{align}
M^2_{55}&=\tilde M^2_{\mu L} +\frac{v^2_1|h_1|^2}{2} +|f'_3|^2 -m^2_Z \cos 2 \beta \left(\frac{1}{2}-\sin^2\theta_W\right), \nonumber\\
M^2_{66}&=\tilde M^2_{\mu} +\frac{v^2_1|h_1|^2}{2}+|f'_4|^2 -m^2_Z \cos 2 \beta \sin^2\theta_W, \nonumber\\
M^2_{77}&=\tilde M^2_{e L} +\frac{v^2_1|h_2|^2}{2}+|f''_3|^2 -m^2_Z \cos 2 \beta \left(\frac{1}{2}-\sin^2\theta_W\right), \nonumber\\
M^2_{88}&=\tilde M^2_{e} +\frac{v^2_1|h_2|^2}{2}+|f''_4|^2 -m^2_Z \cos 2 \beta \sin^2\theta_W\ . \nonumber
\end{align}
\begin{align}
M^2_{12}&=M^{2*}_{21}=\frac{ v_2 f'_2f^*_3}{\sqrt{2}} +\frac{ v_1 f_4 f^*_1}{\sqrt{2}} ,
M^2_{13}=M^{2*}_{31}=\frac{f^*_1}{\sqrt{2}}(v_1 A^*_{\tau} -\mu v_2),
M^2_{14}=M^{2*}_{41}=0,\nonumber\\
M^2_{15} &=M^{2*}_{51}=f'_3 f^*_3,
M^2_{16}= M^{2*}_{61}=0, M^2_{17}= M^{2*}_{71}=f''_3 f^*_3, M^2_{18}= M^{2*}_{81}=0,\nonumber\\
M^2_{23}&=M^{2*}_{32}=0,
M^2_{24}=M^{2*}_{42}=\frac{f'^*_2}{\sqrt{2}}(v_2 A^*_{E} -\mu v_1), M^2_{25} = M^{2*}_{52}= \frac{ v_2 f'_3f'^*_2}{\sqrt{2}} +\frac{ v_1 h_1 f^*_4}{\sqrt{2}} ,\nonumber\\
M^2_{26} &=M^{2*}_{62}=0, M^2_{27} =M^{2*}_{72}= \frac{ v_2 f''_3f'^*_2}{\sqrt{2}} +\frac{ v_1 h_1 f'^*_4}{\sqrt{2}}, M^2_{28} =M^{2*}_{82}=0, \nonumber\\
M^2_{34}&=M^{2*}_{43}= \frac{ v_2 f_4 f'^*_2}{\sqrt{2}} +\frac{ v_1 f_1 f^*_3}{\sqrt{2}}, M^2_{35} =M^{2*}_{53} =0, M^2_{36} =M^{2*}_{63}=f_4 f'^*_4,\nonumber\\
M^2_{37} &=M^{2*}_{73} =0, M^2_{38} =M^{2*}_{83} =f_4 f''^*_4,\nonumber\\
M^2_{45}&=M^{2*}_{54}=0, M^2_{46}=M^{2*}_{64}=\frac{ v_2 f'_2 f'^*_4}{\sqrt{2}} +\frac{ v_1 f'_3 h^*_1}{\sqrt{2}}, \nonumber\\
M^2_{47} &=M^{2*}_{74}=0, M^2_{48} =M^{2*}_{84}= \frac{ v_2 f'_2f''^*_4}{\sqrt{2}} +\frac{ v_1 f''_3 h^*_2}{\sqrt{2}},\nonumber\\
M^2_{56}&=M^{2*}_{65}=\frac{h^*_1}{\sqrt{2}}(v_1 A^*_{\mu} -\mu v_2),
M^2_{57} =M^{2*}_{75}=f''_3 f'^*_3, \nonumber\\
M^2_{58} &=M^{2*}_{85}=0, M^2_{67} =M^{2*}_{76}=0,\nonumber\\
M^2_{68} &=M^{2*}_{86}=f'_4 f''^*_4, M^2_{78}=M^{2*}_{87}=\frac{h^*_2}{\sqrt{2}}(v_1 A^*_{e} -\mu v_2)\ . \nonumber
\label{14}
\end{align}
We can diagonalize this hermitian mass squared matrix by the
unitary transformation
\begin{gather}
\tilde D^{\tau \dagger} M^2_{\tilde \tau} \tilde D^{\tau} = \text{diag} (M^2_{\tilde \tau_1},
M^2_{\tilde \tau_2}, M^2_{\tilde \tau_3}, M^2_{\tilde \tau_4}, M^2_{\tilde \tau_5}, M^2_{\tilde \tau_6}, M^2_{\tilde \tau_7}, M^2_{\tilde \tau_8} )\ .
\end{gather}
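For orientation, the numerical diagonalization of such an $8\times 8$ hermitian mass$^2$ matrix is routine; the following minimal sketch (all entries are hypothetical placeholders, not values derived from the couplings above) performs it with \texttt{numpy}:
\begin{verbatim}
import numpy as np

# Hypothetical mass^2 entries in GeV^2; hermiticity fixes the lower
# triangle via M2[j, i] = conj(M2[i, j]).
M2 = np.zeros((8, 8), dtype=complex)
np.fill_diagonal(M2, [4.0e5, 5.2e5, 3.9e5, 5.5e5,
                      4.1e5, 3.8e5, 4.2e5, 3.7e5])
M2[0, 2] = 1.3e4 - 2.0e3j        # illustrative off-diagonal mixing
M2 = np.triu(M2) + np.triu(M2, 1).conj().T

# Columns of D are eigenvectors, so D^dagger M2 D = diag(m^2),
# as in the unitary transformation above:
m2, D = np.linalg.eigh(M2)
print(np.sqrt(m2))               # mass eigenvalues in GeV
\end{verbatim}
The same routine applies verbatim to the sneutrino mass$^2$ matrix defined below.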
For ${\cal L}_N^{\rm mass}$ we have
\begin{gather}
-{\cal L}_N^{\rm mass}=
\left(\frac{v^2_1 |f_2|^2}{2}
+|f_3|^2+|f_3'|^2+|f_3''|^2\right)\tilde N_R \tilde N^*_R\nonumber\\
+\left(\frac{v^2_1 |f_2|^2}{2}+|f_5|^2+|f_5'|^2+|f_5''|^2\right)\tilde N_L \tilde N^*_L
+\left(\frac{v^2_2 |f'_1|^2}{2}+|f_5|^2\right)\tilde \nu_{\tau R} \tilde \nu^*_{\tau R}\nonumber\\
+\left(\frac{v^2_2 |f'_1|^2}{2}
+|f_3|^2\right)\tilde \nu_{\tau L} \tilde \nu^*_{\tau L}
+\left(\frac{v^2_2 |h'_1|^2}{2}
+|f_3'|^2\right)\tilde \nu_{\mu L} \tilde \nu^*_{\mu L}
+\left(\frac{v^2_2 |h'_1|^2}{2}
+|f_5'|^2\right)\tilde \nu_{\mu R} \tilde \nu^*_{\mu R}\nonumber\\
+\left(\frac{v^2_2 |h'_2|^2}{2}
+|f_3''|^2\right)\tilde \nu_{e L} \tilde \nu^*_{e L}
+\left(\frac{v^2_2 |h'_2|^2}{2}
+|f_5''|^2\right)\tilde \nu_{e R} \tilde \nu^*_{e R}\nonumber\\
+\Bigg\{ -\frac{f_2 \mu^* v_2}{\sqrt{2}} \tilde N_L \tilde N^*_R
-\frac{f'_1 \mu^* v_1}{\sqrt{2}} \tilde \nu_{\tau L} \tilde \nu^*_{\tau R}
-\frac{h'_1 \mu^* v_1}{\sqrt{2}} \tilde \nu_{\mu L} \tilde \nu^*_{\mu R}
+\left(\frac{f_5 v_2 f'^*_1}{\sqrt{2}} -\frac{f_2 v_1 f^*_3}{\sqrt{2}}\right) \tilde N_L \tilde \nu^*_{\tau L}\nonumber\\
+\left(\frac{f_5 v_1 f^*_2}{\sqrt{2}} -\frac{f'_1 v_2 f^*_3}{\sqrt{2}}\right) \tilde N_R \tilde \nu^*_{\tau R}
+\left(\frac{h'_1 v_2 f'^*_5}{\sqrt{2}} -\frac{f'_3 v_1 f^*_2}{\sqrt{2}}\right) \tilde N_L \tilde \nu^*_{\mu L}
+\left(\frac{f''_5 v_1 f^*_2}{\sqrt{2}} -\frac{f''^*_3 v_2 h'_2}{\sqrt{2}}\right) \tilde N_R \tilde \nu^*_{e R}\nonumber\\
+\left(\frac{h'^*_2 v_2 f''_5}{\sqrt{2}} -\frac{f''^*_3 v_1 f_2}{\sqrt{2}}\right) \tilde N_L \tilde \nu^*_{e L}
+\left(\frac{f'_5 v_1 f^*_2}{\sqrt{2}} -\frac{h'_1 v_2 f'^*_3}{\sqrt{2}}\right) \tilde N_R \tilde \nu^*_{\mu R}\nonumber\\
+f'_3 f^*_3 \tilde \nu_{\mu L} \tilde \nu^*_{\tau L} +f_5 f'^*_5 \tilde \nu_{\mu R} \tilde \nu^*_{\tau R}
-\frac{h'_2 \mu^* v_1}{\sqrt{2}} \tilde{\nu}_{e L} \tilde{\nu}^*_{e R}\nonumber\\
+f''_3 f^*_3 \tilde{\nu}_{e L} \tilde{\nu}^*_{\tau L}
+f_5 f''^*_5 \tilde{\nu}_{e R} \tilde{\nu}^*_{\tau R}
+f''_3 f'^*_3 \tilde{\nu}_{e L} \tilde{\nu}^*_{\mu L}
+f'_5 f''^*_5 \tilde{\nu}_{e R} \tilde{\nu}^*_{\mu R}
+H.c. \Bigg\}.
\label{11b}
\end{gather}
Next we write the mass$^2$ matrix in the sneutrino sector in the basis $(\tilde \nu_{\tau L}, \tilde N_L,$
$ \tilde \nu_{\tau R}, \tilde N_R, \tilde \nu_{\mu L},\tilde \nu_{\mu R}, \tilde \nu_{e L}, \tilde \nu_{e R} )$.
Thus here we denote the sneutrino mass$^2$ matrix in the form
$(M^2_{\tilde\nu})_{ij}=m^2_{ij}$ where
\begin{align}
m^2_{11}&=\tilde M^2_{\tau L} +m^2_{\nu_\tau} +|f_3|^2 +\frac{1}{2}m^2_Z \cos 2 \beta, \nonumber\\
m^2_{22}&=\tilde M^2_N +m^2_{N} +|f_5|^2 +|f'_5|^2+|f''_5|^2, \nonumber\\
m^2_{33}&=\tilde M^2_{\nu_\tau} +m^2_{\nu_\tau} +|f_5|^2, \nonumber\\
m^2_{44}&=\tilde M^2_{\chi} +m^2_{N} +|f_3|^2 +|f'_3|^2+|f''_3|^2 -\frac{1}{2}m^2_Z \cos 2 \beta, \nonumber\\
m^2_{55}&=\tilde M^2_{\mu L} +m^2_{\nu_\mu} +|f'_3|^2 +\frac{1}{2}m^2_Z \cos 2 \beta, \nonumber\\
m^2_{66}&=\tilde M^2_{\nu_\mu} +m^2_{\nu_\mu} +|f'_5|^2, \nonumber\\
m^2_{77}&=\tilde M^2_{e L} +m^2_{\nu_e} +|f''_3|^2+\frac{1}{2}m^2_Z \cos 2 \beta, \nonumber\\
m^2_{88}&=\tilde M^2_{\nu_e} +m^2_{\nu_e} +|f''_5|^2, \nonumber
\end{align}
\begin{align}
m^2_{12}&=m^{2*}_{21}=\frac{v_2 f_5 f'^*_1}{\sqrt{2}}-\frac{ v_1 f_2 f^*_3}{\sqrt{2}},
~m^2_{13}=m^{2*}_{31}=\frac{f'^*_1}{\sqrt{2}}(v_2 A^*_{\nu_\tau} -\mu v_1),\nonumber\\
m^2_{14}&=m^{2*}_{41}=0,
~m^2_{15}=m^{2*}_{51}= f'_3 f^*_3, m^2_{16}=m^{2*}_{61}=0,\nonumber\\
m^2_{17}&=m^{2*}_{71}= f''_3 f^*_3, m^2_{18}=m^{2*}_{81}=0,\nonumber\\
m^2_{23}&=m^{2*}_{32}=0,
m^2_{24}=m^{2*}_{42}=\frac{f^*_2}{\sqrt{2}}(v_{1}A^*_N-\mu v_2), \nonumber\\
m^2_{25}&=m^{2*}_{52}=-\frac{v_{1}f^*_2 f'_3}{\sqrt{2}}+\frac{h'_1 v_2 f'^*_5}{\sqrt{2}},\nonumber\\
m^2_{26}&=m^{2*}_{62}=0, m^2_{27}=m^{2*}_{72}=-\frac{v_{1}f^*_2 f''_3}{\sqrt{2}}+\frac{h'_2 v_2 f''^*_5}{\sqrt{2}},
\nonumber\\
m^2_{28}&=m^{2*}_{82}=0, m^2_{34}=m^{2*}_{43}=\frac{v_1 f^*_2 f_5}{\sqrt{2}}-\frac{v_2 f'_1 f^*_3}{\sqrt{2}},\nonumber\\
m^2_{35}&=m^{2*}_{53}=0, m^2_{36}=m^{2*}_{63}=f_5 f'^*_5, \nonumber\\
m^2_{37}&=m^{2*}_{73}=0, m^2_{38}=m^{2*}_{83}=f_5 f''^*_5, \nonumber\\
m^2_{45}&=m^{2*}_{54}=0, m^2_{46}=m^{2*}_{64}=-\frac{h'^*_1 v_2 f'_3}{\sqrt{2}}+\frac{v_1 f_2 f'^*_5}{\sqrt{2}},
\nonumber\\
m^2_{47}&=m^{2*}_{74}=0,
m^2_{48}=m^{2*}_{84}=\frac{v_1 f_2 f''^*_5}{\sqrt{2}}-\frac{v_2 h'^*_2 f''_3}{\sqrt{2}},\nonumber\\
m^2_{56}&=m^{2*}_{65}=\frac{h'^*_1}{\sqrt{2}}(v_2 A^*_{\nu_\mu}-\mu v_1), \nonumber\\
m^2_{57}&=m^{2*}_{75}= f''_3 f'^*_3, m^2_{58}=m^{2*}_{85}=0, \nonumber\\
m^2_{67}&=m^{2*}_{76}=0, m^2_{68}=m^{2*}_{86}= f'_5 f''^*_5, \nonumber\\
m^2_{78}&=m^{2*}_{87}=\frac{h'^*_2}{\sqrt{2}}(v_2 A^*_{\nu_e}-\mu v_1).
\end{align}
We can diagonalize the sneutrino mass squared matrix by the unitary transformation
\begin{equation}
\tilde D^{\nu\dagger} M^2_{\tilde \nu} \tilde D^{\nu} = \text{diag} (M^2_{\tilde \nu_1}, M^2_{\tilde \nu_2}, M^2_{\tilde \nu_3}, M^2_{\tilde \nu_4},M^2_{\tilde \nu_5}, M^2_{\tilde \nu_6}, M^2_{\tilde \nu_7}, M^2_{\tilde \nu_8})\ .
\end{equation}
\newpage
\section{Introduction}
As a rule, the size of radiative corrections to a given process
is determined by the discrepancy between the various mass and
energy scales involved.
In $Z$-boson physics, the dominant effects arise from light
charged fermions, which induce large logarithms of the form
$\alpha^n\ln^m(M_Z^2/m_f^2)$ $(m\le n)$ in the
fine-structure constant (and also in initial-state radiative
corrections), and from the top quark, which generates power corrections
of the orders $G_Fm_t^2$, $G_F^2m_t^4$, $\alpha_sG_Fm_t^2$, etc.
On the other hand, the quantum effects due to a heavy Higgs boson
are screened, i.e., logarithmic in $M_H$ at one loop and just quadratic
at two loops.
By contrast, such corrections are proportional to $M_H^2$ and $M_H^4$,
respectively, in the Higgs sector.
\section{Gauge Sector}
\vspace*{-0.35cm}
\subsection{Universal Corrections: Electroweak Parameters (Oblique
Corrections)}
For a wide class of low-energy and $Z$-boson observables,
the dominant effects originate entirely in the
gauge-boson propagators (oblique corrections)
and may be parametrized conveniently in terms of
four electroweak parameters, $\Delta\alpha$, $\Delta\rho$, $\Delta r$,
and $\Delta\kappa$, which bear the following physical meanings:\cite{bur}
\begin{enumerate}
\item $\Delta\alpha$ determines the running fine-structure constant at the
$Z$-boson scale, $\alpha(M_Z)/\alpha=(1-\Delta\alpha)^{-1}$,
where $\alpha$ is the corresponding value at the electron scale;
\item $\Delta\rho$ measures the quantum corrections to the ratio
of the neutral- and charged-current amplitudes at low energy,\cite{ros}
$G_{NC}(0)/G_{CC}(0)=(1-\Delta\rho)^{-1}$;
\item $\Delta r$ embodies the non-photonic corrections to the muon
lifetime,\cite{sir} $G_F=\left(\pi\alpha/\sqrt2s_w^2M_W^2\right)(1-\Delta r)^{-1}$;
\item $\Delta\kappa$ controls the effective weak mixing angle,
$\bar s_w^2=s_w^2(1+\Delta\kappa)$,
that occurs in the ratio of the $f\bar fZ$ vector and axial-vector
couplings,\cite{hol} $v_f/a_f=1-4|Q_f|\bar s_w^2$.
\end{enumerate}
Unless stated otherwise, we adopt the on-shell scheme and set\cite{sir}
$c_w^2=1-s_w^2=M_W^2/M_Z^2$.
The large logarithms are collected by $\Delta\alpha$, and the leading $m_t$
dependence is carried by $\Delta\rho$.
$\Delta r$ and $\Delta\kappa$ may be decomposed as
$(1-\Delta r)=(1-\Delta\alpha)(1+c_w^2/s_w^2\Delta\rho)-\Delta r_{rem}$
and $\Delta\kappa=c_w^2/s_w^2\Delta\rho+\Delta\kappa_{rem}$, respectively,
where the remainder parts are devoid of $m_f$ logarithms and $m_t$
power terms.
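As a quick numerical illustration of this decomposition (the inputs below are placeholders of realistic magnitude, not fitted values), $\Delta r$ may be assembled as follows:
\begin{verbatim}
import math

MW, MZ = 80.4, 91.19              # GeV, illustrative on-shell masses
sw2 = 1.0 - MW**2 / MZ**2         # on-shell weak mixing angle
cw2 = 1.0 - sw2

d_alpha = 0.0595                  # Delta alpha (leptons + hadrons)
d_rho   = 0.0093                  # Delta rho, (t,b)-doublet dominated
d_rem   = 0.01                    # remainder part Delta r_rem

# (1 - Dr) = (1 - Dalpha)(1 + cw^2/sw^2 Drho) - Dr_rem
d_r = 1.0 - ((1.0 - d_alpha)*(1.0 + cw2/sw2*d_rho) - d_rem)
print(f"Delta r = {d_r:.4f}")
\end{verbatim}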
The triplet $(\Delta\rho,\Delta r_w,\Delta\kappa)$, where
$\Delta r_w$ is defined by
$(1-\Delta r)=(1-\Delta\alpha)(1-\Delta r_w)$,
is equivalent to synthetical sets like\cite{pes} $(S,T,U)$ and
$(\varepsilon_1,\varepsilon_2,\varepsilon_3)$,
which have come into vogue recently.
We note in passing that the bosonic contributions to these electroweak
parameters are, in general, gauge dependent and finite only in a restricted
class of gauges if the conventional formulation in terms of vacuum
polarizations is employed.
This problem may be cured in the framework of the pinch technique.\cite{deg}
At two loops, large contributions are expected to arise from
the exchange of heavy Higgs bosons, heavy top quarks, and gluons.
The hadronic contributions to $\Delta\alpha$ and the
$t\bar t$ threshold effects on $\Delta\rho$, $\Delta r$,
and $\Delta\kappa$ cannot be calculated reliably in
QCD to finite order.
However, they may be related via dispersion relations to
data of $e^+e^-\to hadrons$ and theoretical predictions of
$e^+e^-\to t\bar t$ based on realistic quark potentials, respectively.
\vglue 0.3cm
\leftline{\twelveit 2.1.1. Two-Loop $\O(G_F^2M_H^2M_Z^2)$ Corrections}
\vglue 1pt
Such corrections are generated by two-loop gauge-boson vacuum-polarization
diagrams that are constructed from physical and unphysical Higgs bosons.
Know\-ledge\cite{bij} of the first two terms of the Taylor expansion around
$q^2=0$ is sufficient to derive\cite{hal,bcs} the leading contributions to
$\Delta\rho$, $\Delta r$, and $\Delta\kappa$,
\begin{eqnarray}
\Delta\rho&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{G_F^2M_H^2M_W^2\over64\pi^4}\,{s_w^2\over c_w^2}
\left(-9\sqrt3\mathop{{\rm Li}_2}\nolimits\left({\pi\over3}\right)+{9\over2}\zeta(2)
+{9\over4}\pi\sqrt3-{21\over8}\right)\nonumber\\
&\hspace*{-2.5mm}\approx\hspace*{-2.5mm}&4.92\cdot10^{-5}\left({M_H\over1\,{\rm TeV}}\right)^2,\\
\Delta r&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{G_F^2M_H^2M_W^2\over64\pi^4}
\left(9\sqrt3\mathop{{\rm Li}_2}\nolimits\left({\pi\over3}\right)-{25\over18}\zeta(2)
-{11\over4}\pi\sqrt3+{49\over72}\right)\nonumber\\
&\hspace*{-2.5mm}\approx\hspace*{-2.5mm}&-1.05\cdot10^{-4}\left({M_H\over1\,{\rm TeV}}\right)^2,\\
\Delta\kappa&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{G_F^2M_H^2M_W^2\over64\pi^4}
\left(-9\sqrt3\mathop{{\rm Li}_2}\nolimits\left({\pi\over3}\right)+{53\over18}\zeta(2)
+{5\over2}\pi\sqrt3-{119\over72}\right)\nonumber\\
&\hspace*{-2.5mm}\approx\hspace*{-2.5mm}&1.37\cdot10^{-4}\left({M_H\over1\,{\rm TeV}}\right)^2.
\end{eqnarray}
Due to the smallness of the prefactors, these contributions are
insignificant for $M_H\lsim1$~TeV.
\vglue 0.3cm
\leftline{\twelveit 2.1.2. Two-Loop $\O(G_F^2m_t^4)$ Corrections for
$M_H\ne0$}
\vglue 1pt
Also at two loops, $\Delta\rho$ picks up the leading large-$m_t$ term,
and $\Delta r$ and $\Delta\kappa$ depend on $m_t$ chiefly via $\Delta\rho$.
Neglecting $m_b$ and defining $x_t=\left(G_Fm_t^2/8\pi^2\sqrt2\right)$, one
has
\begin{equation}
\label{drho}
\Delta\rho=3x_t\left[1
+x_t\rho^{(2)}\left({M_H\over m_t}\right)
-{2\over3}\left(2\zeta(2)+1\right){\alpha_s(m_t)\over\pi}\right],
\end{equation}
where, for completeness, also the well-known $\O(\alpha_sG_Fm_t^2)$
term\cite{djo,kni} is included.
Very recently, also the $\O(\alpha_s^2G_Fm_t^2)$ term has been
computed,\cite{avd} the result being
$(-21.27063+1.78621\,N_F)(\alpha_s/\pi)^2$,
where $N_F$ is the number of active quark flavours;
the details are reported elsewhere.\cite{joc}
The coefficient $\rho^{(2)}(r)$ is negative for all plausible
values of $r$, bounded from below by $\rho^{(2)}(5.72)=-11.77$,
and exhibits the following asymptotic behaviour:\cite{bar,dfg}
\begin{equation}
\rho^{(2)}(r)=
\cases{
\displaystyle
-12\zeta(2)+19-4\pi r+\O(r^2\ln r),
&if $r\ll1;$\cr
\displaystyle
6\ln^2r-27\ln r+6\zeta(2)+{49\over4}
+O\left({\ln^2r\over r^2}\right),
&if $r\gg1.$\cr}
\end{equation}
The value at\cite{hoo} $r=0$ greatly underestimates the effect.
Both $\O(G_F^2m_t^4)$ and $\O(\alpha_sG_Fm_t^2)$ corrections
screen the one-loop result and thus increase the value of $m_t$ predicted
indirectly from global analyses of low-energy, $M_W$, LEP/SLC, and other
high-precision data.
Recently, a first attempt was made to control subleading corrections to
$\Delta\rho$, of $\O\Bigl(G_F^2m_t^2M_Z^2\ln(M_Z^2/m_t^2)\Bigr)$, in an
SU(2) model of weak interactions, and significant effects were
found.\cite{dfg}
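A minimal numerical sketch of Eq.~(\ref{drho}) reads as follows; the heavy-Higgs branch of the asymptotics is used outside its strict domain of validity, purely for illustration, and all inputs are indicative only:
\begin{verbatim}
import math

GF, mt, as_mt = 1.16637e-5, 174.0, 0.109   # GeV^-2, GeV, alpha_s(mt)
xt = GF*mt**2/(8.0*math.pi**2*math.sqrt(2.0))
zeta2 = math.pi**2/6.0

r = 1000.0/mt                    # MH/mt; asymptotic branch r >> 1
rho2 = 6*math.log(r)**2 - 27*math.log(r) + 6*zeta2 + 49.0/4.0
# (the exact coefficient is bounded from below by -11.77, see text)

d_rho = 3*xt*(1 + xt*rho2 - (2.0/3.0)*(2*zeta2 + 1)*as_mt/math.pi)
print(f"Delta rho ~ {d_rho:.5f}")
\end{verbatim}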
\vglue 0.3cm
\leftline{\twelveit 2.1.3. Two-Loop $\O(\alpha_sG_FM_W^2)$ Corrections}
\vglue 1pt
For $m_t\gg M_W$, the bulk of the QCD corrections is concentrated in
$\Delta\rho$; see Eq.~(\ref{drho}).
However, for realistic values of $m_t$, the subleading terms,
of $\O(\alpha_sG_FM_W^2)$, are significant numerically,
e.g., they amount to 20\% of the full two-loop QCD correction to
$\Delta r$ at $m_t=150$~GeV.
Specifically, one has\cite{hal,kni}
\begin{eqnarray}
\Delta r_{rem}&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{G_FM_W^2\over\pi^3\sqrt2}\left\{
-\alpha_s(M_Z)\left({c_w^2\over s_w^2}-1\right)\ln c_w^2
\right.\nonumber\\ &&\qquad{}+\left.
\alpha_s(m_t)\left[\left({1\over3}-{1\over4s_w^2}\right)
\ln{m_t^2\over M_Z^2}+A+{B\over s_w^2}\right]\right\}\!,\quad\;\\
\Delta\kappa_{rem}&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{G_FM_W^2\over\pi^3\sqrt2}\left\{
\alpha_s(M_Z){c_w^2\over s_w^2}\ln c_w^2
-\alpha_s(m_t)\left[\left({1\over6}-{1\over4s_w^2}\right)
\ln{m_t^2\over M_Z^2}+{A\over2}+{B\over s_w^2}\right]\right\}\!,\quad\;
\end{eqnarray}
where terms of $\O(M_Z^2/m_t^2)$ are omitted within the square brackets and
\begin{eqnarray}
A&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{1\over3}\left(-4\zeta(3)+{4\over3}\zeta(2)+{5\over2}\right)
\approx-0.03833,\\
B&\hspace*{-2.5mm}=\hspace*{-2.5mm}&\zeta(3)-{2\over9}\zeta(2)-{1\over4}
\approx0.58652.
\end{eqnarray}
For contributions due to the $tb$ doublet, $\mu=m_t$ is the
natural scale for $\alpha_s(\mu)$.
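The constants $A$ and $B$ and the resulting size of $\Delta r_{rem}$ are easily checked numerically; a minimal sketch with illustrative inputs is
\begin{verbatim}
import math

zeta2, zeta3 = math.pi**2/6.0, 1.2020569031595943
A = (-4*zeta3 + 4.0/3.0*zeta2 + 5.0/2.0)/3.0    # ~ -0.03833
B = zeta3 - 2.0/9.0*zeta2 - 0.25                # ~  0.58652

GF, MW, MZ, mt = 1.16637e-5, 80.4, 91.19, 174.0
sw2 = 1.0 - MW**2/MZ**2
cw2 = 1.0 - sw2
as_MZ, as_mt = 0.118, 0.109
pref = GF*MW**2/(math.pi**3*math.sqrt(2.0))
d_rem = pref*(-as_MZ*(cw2/sw2 - 1.0)*math.log(cw2)
              + as_mt*((1.0/3.0 - 1.0/(4*sw2))*math.log(mt**2/MZ**2)
                       + A + B/sw2))
print(A, B, d_rem)
\end{verbatim}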
\vglue 0.3cm
\leftline{\twelveit 2.1.4. Hadronic Contributions to $\Delta\alpha$}
\vglue 1pt
Jegerlehner has updated his 1990 analysis\cite{jeg} of the
hadronic contributions to $\Delta\alpha$
by taking into account the hadronic resonance parameters
specified in the 1992 report\cite{hik} by the Particle Data Group
and recently published low-energy $e^+e^-$ data taken at Novosibirsk.
The (preliminary) result at $\sqrt s=91.175$~GeV reads\cite{fje}
\begin{equation}
\label{dalp}
\Delta\alpha_{hadrons}=0.0283\pm0.0007,
\end{equation}
i.e., the central value has increased by $1\cdot10^{-4}$, while
the error has decreased by $\pm2\cdot10^{-4}$.
The latter is particularly important, since this error has long
constituted the dominant uncertainty for theoretical predictions of
electroweak parameters.
For comparison, we list the leptonic contribution up to two loops in
QED,\cite{kni}
\begin{eqnarray}
\Delta\alpha_{leptons}&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{\alpha\over3\pi}\sum_\ell
\left[\ln{M_Z^2\over m_\ell^2}-{5\over3}
+{\alpha\over\pi}\left({3\over4}\ln{M_Z^2\over m_\ell^2}
+3\zeta(2)-{5\over8}\right)+O\left({m_\ell^2\over M_Z^2}\right)\right]
\nonumber\\
&\hspace*{-2.5mm}=\hspace*{-2.5mm}&0.031\,496\,6\pm0.000\,000\,4,
\end{eqnarray}
where the error stems from the current $m_\tau$ world average,\cite{wei}
$m_\tau=(1777.0\pm0.4)$~MeV.
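The leptonic sum is readily reproduced; a minimal sketch (lepton masses rounded) is
\begin{verbatim}
import math

alpha, MZ = 1.0/137.035989, 91.175        # QED coupling, GeV
zeta2 = math.pi**2/6.0
m_lep = (0.51099907e-3, 0.105658, 1.7770) # m_e, m_mu, m_tau in GeV

d_alpha = 0.0
for m in m_lep:
    L = math.log(MZ**2/m**2)
    d_alpha += alpha/(3*math.pi)*(L - 5.0/3.0
               + alpha/math.pi*(0.75*L + 3*zeta2 - 0.625))
print(f"Delta alpha_leptons ~ {d_alpha:.7f}")  # close to 0.0314966
\end{verbatim}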
\vglue 0.3cm
\leftline{\twelveit 2.1.5. $t\bar t$ Threshold Effects}
\vglue 1pt
Although loop amplitudes involving the top quark are mathematically well
behaved, it is evident that interesting and possibly significant features
connected with the $t\bar t$ threshold cannot be accommodated when the
perturbation series is truncated at finite order.
In fact, perturbation theory up to $\O(\alpha\alpha_s)$ predicts a
discontinuous steplike threshold behaviour for
$\sigma\left(e^+e^-\to t\bar t\,\right)$.
A more realistic description includes the formation of toponium resonances
by multi-gluon exchange.
For $m_t\gsim130$~GeV, the revolution period of a $t\bar t$ bound state
exceeds its lifetime, and the individual resonances are smeared out to a
coherent structure.
By Cutkosky's rule, $\sigma\left(e^+e^-\to t\bar t\,\right)$
corresponds to the absorptive parts of the photon and $Z$-boson
vacuum polarizations, and its enhancement at threshold induces additional
contributions in the corresponding real parts,
which can be computed via dispersive techniques.
Decomposing the vacuum-polarization tensor generated by the insertion
of a top-quark loop into a gauge-boson line as
\begin{equation}
\Pi_{\mu\nu}^{V,A}(q)=\Pi^{V,A}(q^2)g_{\mu\nu}+\lambda^{V,A}(q^2)q_\mu q_\nu,
\end{equation}
where $V$ and $A$ label the vector and axial-vector components and
$q$ is the external four-momentum, and imposing Ward identities,
one derives the following set of dispersion relations:\cite{ks}
\begin{eqnarray}
\Pi^V(q^2)&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{q^2\over\pi}\int{ds\over s}\,
{\mathop{\rm Im}\nolimits\Pi^V(s)\over q^2-s-i\epsilon},\\
\Pi^A(q^2)&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{1\over\pi}\int ds
\left({\mathop{\rm Im}\nolimits\Pi^A(s)\over q^2-s-i\epsilon}+\mathop{\rm Im}\nolimits\lambda^A(s)\right).
\end{eqnarray}
The alternative set of dispersion relations proposed in Ref.~22
does not, in general, yield correct results, as has been demonstrated\cite{hll}
by establishing a perturbative counterexample, namely the
$\O(\alpha_sG_Fm_t^2)$ corrections to $\Gamma(H\to\ell^+\ell^-)$
(see Sect.~3.1.).
It has been suggested that this argument may be extended to all orders in
$\alpha_s$ by means of the operator product expansion.\cite{tak}
In the threshold region, only $\mathop{\rm Im}\nolimits\Pi^V(q^2)$ and $\mathop{\rm Im}\nolimits\lambda^A(q^2)$ receive
significant contributions and are related by
$\mathop{\rm Im}\nolimits\lambda^A(q^2)\approx-\mathop{\rm Im}\nolimits\Pi^V(q^2)/q^2$,
while $\mathop{\rm Im}\nolimits\Pi^A(q^2)$ is strongly suppressed due to centrifugal barrier
effects.\cite{ks}
Of course,\cite{ks} $\lambda^V(q^2)=-\Pi^V(q^2)/q^2$.
These contributions in turn lead to shifts in $\Delta\rho$, $\Delta r$,
and $\Delta\kappa$.
A crude estimation may be obtained by setting
$\mathop{\rm Im}\nolimits\Pi^V(q^2)=\mathop{\rm Im}\nolimits\Pi^V(4m_t^2)=\alpha_sm_t^2$
in the interval $(2m_t-\Delta)^2\le q^2\le4m_t^2$, where $\Delta$
may be regarded as the binding energy of the 1S state.
This yields
\begin{eqnarray}
\Delta\rho&\hspace*{-2.5mm}=\hspace*{-2.5mm}&-{G_F\over2\sqrt2}\,{\alpha_s\over\pi}m_t\Delta,\\
\Delta r&\hspace*{-2.5mm}=\hspace*{-2.5mm}&-{c_w^2\over s_w^2}\Delta\rho\left[1-
\left(1-{8\over3}s_w^2\right)^2{M_Z^2\over4m_t^2-M_Z^2}
+{16\over9}s_w^4{M_Z^2\over m_t^2}\right],\\
\Delta\kappa&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{c_w^2\over s_w^2}\Delta\rho\left[1-
\left(1-{8\over3}s_w^2\right){M_Z^2\over4m_t^2-M_Z^2}\right].
\end{eqnarray}
Obviously, the threshold effects have the same sign as the
$\O(\alpha_sG_Fm_t^2)$ corrections.
For realistic quark potentials, one has approximately $\Delta\propto m_t$,
so that the threshold contributions scale like $m_t^2$.
Again, $\Delta\rho$ is most strongly affected, while the corrections to
$\Delta r_{rem}$ and $\Delta\kappa_{rem}$ are suppressed by $M_Z^2/m_t^2$.
A comprehensive numerical analysis may be found in Refs.~21,25,26.
For 150~GeV${}\le m_t\le{}$200~GeV, the threshold effects enhance the QCD
corrections by roughly 30\%.
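For orientation, the crude estimate above translates into numbers as follows (the binding energy $\Delta$ is strongly model dependent; the value below is a hypothetical placeholder):
\begin{verbatim}
import math

GF, alphas, mt = 1.16637e-5, 0.109, 174.0   # GeV^-2, alpha_s, GeV
Delta = 2.0                                 # 1S binding energy, GeV
d_rho_thr = -GF/(2*math.sqrt(2.0))*alphas/math.pi*mt*Delta
print(f"threshold shift in Delta rho ~ {d_rho_thr:.2e}")
\end{verbatim}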
We emphasize that the above QCD corrections come with both experimental and
theoretical errors.
The experimental errors are governed by the $\alpha_s$ measurement,\cite{bet}
$\alpha_s(M_Z)=0.118\pm0.006$.
Assuming $m_t=174$~GeV,
this amounts to errors of $\pm5\%$ and $\pm18\%$ on the continuum and
threshold contributions to $\Delta\rho$, respectively.
This reflects the fact that the $\alpha_s$ dependence is linear in the continuum,
while that of the 1S peak height is approximately cubic.
Theoretical errors are due to unknown higher-order corrections.
In the continuum, they are usually estimated by varying the renormalization
scale, $\mu$, of $\alpha_s(\mu)$ in the range $m_t/2\le\mu\le2m_t$,
which amounts to $\pm11\%$.
The theoretical error on the threshold contribution is mainly due to
model dependence and is estimated to be $\pm20\%$ by comparing conventional
quark potentials.
A conservative analysis of the combined error on the absolute value
of $\Delta\rho$ at $m_t=174$~GeV yields $\pm1.5\cdot10^{-4}$.
Due to the magnification factor $c_w^2/s_w^2$, the corresponding error
on $\Delta r$ and $\Delta\kappa$ is $\pm5.0\cdot10^{-4}$.
We stress that, in the case of $\Delta r$ and thus the $M_W$ prediction
from the muon lifetime, this error is almost as large as the one from
hadronic sources introduced via $\Delta\alpha$; see Eq.~(\ref{dalp}).
For higher $m_t$ values, it may even be larger.
In Eq.~(\ref{drho}), we have evaluated the $\O(\alpha_sG_Fm_t^2)$
correction at $\mu=m_t$, since this is the only scale available.
However, this is a leading-order QCD prediction, which suffers from the
usual scale ambiguity.
We may choose $\mu=\xi m_t$ in such a way that the
$\O\Bigl(\alpha_s(\mu)G_Fm_t^2\Bigr)$ calculation agrees with the
$\O\Bigl(\alpha_s(m_t)G_Fm_t^2\Bigr)$ one plus the $t\bar t$ threshold
effects.
In the case of $\Delta\rho$, this leads to $\xi=0.190{+0.097\atop-0.057}$,
where we have included the $\pm30\%$ error on the $t\bar t$ threshold
contribution.
Alternative, conceptually very different approaches of scale
setting\cite{sv,asi,alb} yield results in the same ball park.
In Ref.~28,
it is suggested that long-distance effects lower
the renormalization point for $\alpha_s(\mu)$ in Eq.~(\ref{drho}) through the
contributions of the near-mass-shell region to the evolution of the quark mass
from the mass shell to distances of order $1/m_t$.
To estimate these effects, the authors of Ref.~28
apply the
Brodsky-Lepage-Mackenzie (BLM) criterion\cite{blm} to Eq.~(\ref{drho}) and
find $\xi=0.154$.
The author of Ref.~29
expresses first the fermionic contribution to $\Delta\rho$ in terms of
$\overline m_t(m_t)$, where $\overline m_t(\mu)$ is the top-quark
$\overline{\rm MS}$ mass at renormalization scale $\mu$, and then relates
$\overline m_t(m_t)$ to $m_t$ by optimizing the expansion of
$m_t/\overline m_t(m_t)$, which is known through $\O(\alpha_s^2)$,\cite{gra}
according to the BLM criterion.\cite{blm}
In Ref.~30,
he refines this argument by using the new results of Ref.~12
and an expansion of $\mu_t/\overline m_t(m_t)$, where
$\mu_t=\overline m_t(\mu_t)$, and obtains $\xi=0.323$.
Finally, we observe that the $\O(\alpha_s^2G_Fm_t^2)$ term indeed has the
very sign predicted by the study\cite{ks,fan,nut} of the $t\bar t$
threshold effects and accounts also for the bulk of their size.
In fact, this term may be absorbed into the $\O(\alpha_sG_Fm_t^2)$ term by
choosing\cite{avd} $\xi=0.348$ for $N_F=6$.
Arguing that $N_F=5$ is more appropriate for $\mu<m_t$, this value comes down
to\cite{avd} $\xi=0.324$, which is not far outside the range
$0.133\le\xi\le0.287$ predicted from the $t\bar t$ threshold analysis.
The residual difference may be understood by observing that the ladder
diagrams of $\O(\alpha_s^nG_Fm_t^2)$, with $n\ge3$,
are not included in the
fixed-order calculation of Ref.~12.
The claim\cite{ynd} that the $t\bar t$ threshold effects are greatly
overestimated in Refs.~21,25
is based on a simplified analysis,
which demonstrably\cite{nut} suffers from a number of severe analytical and
numerical errors.
Speculations\cite{ghv} that the dispersive computation of $t\bar t$ threshold
effects is unstable are quite obviously unfounded, since they arise from
uncorrelated and unjustifiably extreme variations of the continuum and
threshold contributions.
In particular, the authors of Ref.~34
ascribe the unavoidable scale
dependence of the $\O(\alpha_sG_Fm_t^2)$ continuum result to the uncertainty
in the much smaller threshold contribution, which artificially amplifies
this uncertainty.
In fact, the sum of both contributions, which is the physically relevant
quantity, is considerably less $\mu$ dependent than the continuum
contribution alone.\cite{nut}
\subsection{Specific Corrections: $\Gamma\left(Z\to b\bar b\right)$ and
$\Gamma(Z\to {\rm hadrons})$}
The observable $\Gamma\left(Z\to b\bar b\right)$ deserves special attention,
since it receives specific $m_t$ power corrections.
These may be accommodated in the improved Born approximation\cite{hol,hal}
by replacing the parameters
$\rho=(1-\Delta\rho)^{-1}$ and $\kappa=1+\Delta\kappa$ by
$\rho_b=\rho(1+\tau)^2$ and $\kappa_b=\kappa(1+\tau)^{-1}$, respectively,
where $\tau$ is an additional electroweak parameter.
Similarly to $\Delta\rho$, $\tau$ receives contributions in the orders
$G_Fm_t^2$, $G_F^2m_t^4$, $\alpha_sG_Fm_t^2$, etc.
\vglue 0.3cm
\leftline{\twelveit 2.2.1. Two-Loop $\O(G_F^2m_t^4)$ Corrections for
$M_H\ne0$}
\vglue 1pt
In the oblique corrections considered so far, the $m_t$ dependence
might be masked by all kinds of physics beyond the standard model.
Contrariwise, in the case of $Z\to b\bar b$,
the virtual top quark is tagged directly by the external bottom flavour.
At one loop, there is a strong cancellation between the flavour-independent
oblique corrections, $\Delta\rho$ and $\Delta\kappa$, and the specific
$Z\to b\bar b$ vertex correction,\cite{akh} $\tau$.
The leading two-loop corrections to $\tau$,
of\cite{bar} $\O(G_F^2m_t^4)$ and\cite{fle} $\O(\alpha_sG_Fm_t^2)$,
have recently become available.
The master formula reads
\begin{equation}
\label{tau}
\tau=-2x_t\left(1
+x_t\tau^{(2)}\left({M_H\over m_t}\right)
-2\zeta(2){\alpha_s(m_t)\over\pi}\right),
\end{equation}
where $x_t$ is defined above Eq.~(\ref{drho}).
$\tau^{(2)}(r)$ rapidly varies with $r$,
$\tau^{(2)}(r)\ge\tau^{(2)}(1.55)=1.23$, and its
asymptotic behaviour is given by\cite{bar}
\begin{equation}
\tau^{(2)}(r)=
\cases{
\displaystyle
-2\zeta(2)+9-4\pi r+\O(r^2\ln r),
&if $r\ll1;$\cr
\displaystyle
{5\over2}\ln^2r-{47\over12}\ln r+\zeta(2)+{311\over144}
+O\left({\ln^2r\over r^2}\right),
&if $r\gg1.$\cr}
\end{equation}
The value at $r=0$ has been confirmed by a third group.\cite{den}
\vglue 0.3cm
\leftline{\twelveit 2.2.2. Two-Loop $\O(\alpha_sG_Fm_t^2)$ Corrections}
\vglue 1pt
In Eq.~(\ref{tau}), we have also included the $\O(\alpha_sG_Fm_t^2)$
term,\cite{fle}
assuming that the formula for $\Gamma\left(Z\to b\bar b\right)$ is, at the
same time, multiplied by the overall factor $(1+\alpha_s/\pi)$,
which is the common beginning of the QCD perturbation series of the quark
vector and axial-vector current correlators, $R^V$ and $R^A$.
We observe that the $\O(G_F^2m_t^4)$ and $\O(\alpha_sG_Fm_t^2)$ terms of
Eq.~(\ref{tau}) cancel partially.
\vglue 0.3cm
\leftline{\twelveit 2.2.3. Three-Loop $\O(\alpha_s^3)$ Corrections}
\vglue 1pt
Most of the results discussed in this section are valid also for the
$Z\to q\bar q$ decays with $q\ne b$.
Here, we put $m_q=0$, except for $q=t$.
Finite-$m_q$ effects will be considered in the next section.
By the optical theorem, the QCD corrections to
$\Gamma\left(Z\to q\bar q\right)$
may be viewed as the imaginary parts of the $Z$-boson self-energy diagrams
that contain a $q$-quark loop decorated with virtual gluons and possibly
other quark loops.
Diagrams where the two $Z$-boson lines are linked to the same quark loop
are usually called non-singlet, while the residual diagrams are called
singlet, which includes the so-called double-triangle diagrams.
By $\gamma_5$ reflection, the non-singlet contribution, $R_{NS}$, to $R^A$
coincides with the one to $R^V$.
Up to $\O(\alpha_s^3)$ in the $\overline{\rm MS}$ scheme with $N_F=5$, one
has\cite{sur}
\begin{equation}
\label{rns}
R_{NS}=1+{\alpha_s\over\pi}+\left({\alpha_s\over\pi}\right)^2
\left(1.40923+F\left({M_Z^2\over4m_t^2}\right)\right)
-12.76706\left({\alpha_s\over\pi}\right)^3.
\end{equation}
$F$ collects the decoupling-top-quark effects in $\O(\alpha_s^2)$ and has the
expansion\cite{che}
\begin{equation}
F(r)=r\left[-{8\over135}\ln(4r)+{176\over675}\right]+\O(r^2).
\end{equation}
$F$ has also been obtained in numerical form recently.\cite{sop}
We note that an analytic expression for $F$ had been known previously from
the study of the two-loop QED vertex correction due to virtual heavy
fermions.\cite{ver}
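A minimal numerical evaluation of Eq.~(\ref{rns}), keeping the leading term of the $F$ expansion (inputs illustrative), reads
\begin{verbatim}
import math

as_MZ, MZ, mt = 0.118, 91.19, 174.0
a = as_MZ/math.pi
r = MZ**2/(4.0*mt**2)

F = r*(-8.0/135.0*math.log(4.0*r) + 176.0/675.0)  # leading term
R_NS = 1 + a + a**2*(1.40923 + F) - 12.76706*a**3
print(f"R_NS ~ {R_NS:.5f}")
\end{verbatim}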
Recently, the $\O(\alpha_s^4)$ term of Eq.~(\ref{rns}) has been estimated
using the principle of minimal sensitivity and the effective-charges
approach.\cite{kat}
The $\O(\alpha^2)$ and $\O(\alpha\alpha_s)$ corrections to
$\Gamma\left(Z\to b\bar b\right)$ from photonic sources are well under
control.\cite{alk}
Due to Furry's theorem, singlet diagrams with $q\bar qZ$ vector couplings
occur just in $\O(\alpha_s^3)$.
They contain two quark loops at the same level of hierarchy, which,
in general, involve different flavours.
Thus, they cannot be assigned unambiguously to a specific $q\bar q$ channel.
In practice, this does not create a problem, since their combined contribution
to $\Gamma(Z\to{\rm hadrons})$ is very small anyway,\cite{sur}
\begin{equation}
\delta\Gamma_Z={G_FM_Z^3\over8\pi\sqrt2}\left(\sum_{q=u,d,s,c,b}v_q\right)^2
(-0.41318)\left({\alpha_s\over\pi}\right)^3,
\end{equation}
where $v_q=2I_q-4Q_qs_w^2$.
Axial-type singlet diagrams contribute already in $\O(\alpha_s^2)$.
The sum over triangle subgraphs involving mass-degenerate (e.g., massless)
up- and down-type quarks vanishes.
Thus, after summation, only the double-triangle diagrams involving $t$ and
$b$ quarks contribute to $\Gamma\left(Z\to b\bar b\right)$ and
$\Gamma(Z\to{\rm hadrons})$.
The present knowledge of the singlet part, $R_S^A$, of $R^A$ is summarized by
($m_t$ is the top-quark pole mass)
\begin{equation}
\label{rsa}
R_S^A=\left({\alpha_s\over\pi}\right)^2{1\over3}
I\left({M_Z^2\over4m_t^2}\right)+\left({\alpha_s\over\pi}\right)^3
\left({23\over12}\ln^2{m_t^2\over M_Z^2}-{67\over18}\ln{m_t^2\over M_Z^2}
-15.98773\right).
\end{equation}
An analytic expression for the $I$ function may be found in Ref.~44;
its high-$m_t$ expansion reads\cite{kk}
\begin{equation}
\label{iexp}
I(r)=3\ln(4r)-{37\over4}+{28\over27}r+\O(r^2).
\end{equation}
The second term on the right-hand side of Eq.~(\ref{iexp}) has been confirmed
recently.\cite{kgc}
The $\O(\alpha_s^3)$ logarithmic terms of Eq.~(\ref{rsa}) follow from
Eq.~(\ref{iexp}) by means of renormali\-zation-group techniques,\cite{kgc}
while the constant term requires a separate computa\-tion.\cite{lar}
\vglue 0.3cm
\leftline{\twelveit 2.2.4. Finite-$m_b$ Effects}
\vglue 1pt
In $\O(\alpha_s)$, the full $m_b$ dependence of $R^V$ and $R^A$ is
known,\cite{cgn,sch} while, in higher orders, only the first terms of their
$m_b^2/M_Z^2$ expansions have been calculated.\cite{sgg,chk,ckk}
In the $\overline{\rm MS}$ scheme, one has
\begin{eqnarray}
\label{drv}
\delta R^V&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{12\overline m_b^2\over M_Z^2}\,{\alpha_s\over\pi}\left[1+
{629\over72}\,{\alpha_s\over\pi}+45.14610\left({\alpha_s\over\pi}\right)^2
\right],\\
\label{dra}
\delta R^A&\hspace*{-2.5mm}=\hspace*{-2.5mm}&-{6\overline m_b^2\over M_Z^2}\left[1+{11\over3}\,
{\alpha_s\over\pi}
+\left({\alpha_s\over\pi}\right)^2\left(11.28560-\ln{m_t^2\over M_Z^2}\right)
\right],
\end{eqnarray}
where $\alpha_s$ and the $b$-quark $\overline{\rm MS}$ mass,
$\overline m_b$, are to be evaluated at $\mu=M_Z$.
The second and third terms of Eq.~(\ref{drv}) come from
Refs.~48,49,
respectively, and the third term of Eq.~(\ref{dra})
is from Ref.~50.
Due to the use of $\overline m_b(M_Z)$, Eqs.~(\ref{drv},\ref{dra}) are devoid
of terms involving $\ln(M_Z^2/m_b^2)$.
The $\O(\alpha_sm_b^2/M_Z^2)$ corrections should be detectable.
The finite-$m_b$ terms beyond $\O(\alpha_s)$ in Eqs.~(\ref{drv},\ref{dra})
each amount to approximately $5\cdot10^{-3}\%$ of
$\Gamma\left(Z\to b\bar b\right)$ but have opposite signs.
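To make the sizes explicit, a minimal evaluation of Eqs.~(\ref{drv},\ref{dra}) with an illustrative value of $\overline m_b(M_Z)$ is
\begin{verbatim}
import math

a = 0.118/math.pi                  # alpha_s(MZ)/pi
mb, MZ, mt = 2.8, 91.19, 174.0     # MS-bar b mass at MZ (illustrative)

dRV = 12*mb**2/MZ**2*a*(1 + 629.0/72.0*a + 45.14610*a**2)
dRA = -6*mb**2/MZ**2*(1 + 11.0/3.0*a
                      + a**2*(11.28560 - math.log(mt**2/MZ**2)))
print(f"delta R^V ~ {dRV:.2e}, delta R^A ~ {dRA:.2e}")
\end{verbatim}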
\section{Higgs Sector: Corrections to $\Gamma\left(H\to f\bar f\,\right)$}
Quantum corrections to Higgs-boson phenomenology have received
much attention in the literature; for a review, see Ref.~51.
The experimental relevance of radiative corrections to the $f\bar f$
branching fractions of the Higgs boson has been emphasized recently
in the context of a study\cite{gkw} dedicated to LEP~2.
Techniques for the measurement of these branching fractions at a
$\sqrt s=500$~GeV $e^+e^-$ linear collider have been elaborated in
Ref.~53.
In the Born approximation, the $f\bar f$ partial widths of the Higgs boson
are given by
\begin{equation}
\label{born}
\Gamma_0\left(H\to f\bar f\,\right)={N_fG_FM_Hm_f^2\over4\pi\sqrt2}
\left(1-{4m_f^2\over M_H^2}\right)^{3/2},
\end{equation}
where $N_f=1$~(3) for lepton (quark) flavours.
The full one-loop electroweak corrections to Eq.~(\ref{born}) are now well
estab\-lished.\cite{jfl,hff}
They consist of an electromagnetic and a weak part, which are separately
finite and gauge independent.
They may be included in Eq.~(\ref{born}) as an overall factor,
$\left[1+(\alpha/\pi)Q_f^2\Delta_{em}\right](1+\Delta_{weak})$.
For $M_H\gg2m_f$, $\Delta_{em}$ develops a large logarithm,
\begin{equation}
\label{delem}
\Delta_{em}=-{3\over2}\ln{M_H^2\over m_f^2}+{9\over4}
+\O\left({m_f^2\over M_H^2}\ln{M_H^2\over m_f^2}\right).
\end{equation}
For $M_H\ll2M_W$, the weak part is well approximated by\cite{hff}
\begin{equation}
\label{weak}
\Delta_{weak}={G_F\over8\pi^2\sqrt2}\left\{C_fm_t^2
+M_W^2\left({3\over s_w^2}\ln c_w^2-5\right)
+M_Z^2\left[{1\over2}-3\left(1-4s_w^2|Q_f|\right)^2\right]\right\},
\end{equation}
where $C_b=1$ and $C_f=7$ for all other flavours, except for top.
The $t\bar t$ mode will not be probed experimentally anytime soon
and we shall not be concerned with it in the remainder of this presentation.
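For orientation, a minimal sketch assembling Eq.~(\ref{born}) with the correction factors of Eqs.~(\ref{delem},\ref{weak}) for $f=b$ reads as follows; $M_H$ and the $b$-quark mass below are illustrative placeholders:
\begin{verbatim}
import math

GF, MH, MW, MZ, mt = 1.16637e-5, 120.0, 80.4, 91.19, 174.0  # GeV
sw2 = 1.0 - MW**2/MZ**2
alpha = 1.0/137.036
mb, Qb, Nf, Cb = 4.8, -1.0/3.0, 3, 1.0

G0 = Nf*GF*MH*mb**2/(4*math.pi*math.sqrt(2.0)) \
     *(1 - 4*mb**2/MH**2)**1.5                  # Born width

d_em = -1.5*math.log(MH**2/mb**2) + 2.25        # Eq. (delem)
d_w = GF/(8*math.pi**2*math.sqrt(2.0))*(Cb*mt**2
      + MW**2*(3.0/sw2*math.log(1.0 - sw2) - 5.0)  # ln(cw^2)
      + MZ**2*(0.5 - 3*(1 - 4*sw2*abs(Qb))**2))  # Eq. (weak)

Gamma = G0*(1 + alpha/math.pi*Qb**2*d_em)*(1 + d_w)
print(f"Gamma(H->bb) ~ {Gamma*1e3:.3f} MeV")
\end{verbatim}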
{}From Eq.~(\ref{weak}) it is evident that the dominant effect is due to
virtual
top quarks.
In the case $f\ne b$, the $m_t$ dependence is carried solely by the
renormalizations of the wave function and the vacuum expectation value
of the Higgs field and is thus flavour independent.
These corrections are of the same nature as those considered in
Ref.~56.
For $f=b$, there are additional $m_t$-dependent contributions from the
$b\bar bH$ vertex correction and the $b$-quark wave-function renormalization.
Incidentally, they cancel almost completely the universal $m_t$ dependence.
It is amusing to observe that a similar situation has been encountered in the
context of the $Z\to f\bar f$ decays.\cite{akh}
The QCD corrections to the universal and non-universal $\O(G_Fm_t^2)$ terms
will be presented in the next two sections.
\subsection{Two-Loop $\O(\alpha_sG_Fm_t^2)$ Universal Corrections}
The universal $\O(G_Fm_t^2)$ term of $\Delta_{weak}$ resides inside the
combination
\begin{equation}
\label{delta}
\Delta_u=-{\Pi_{WW}(0)\over M_W^2}-\mathop{\rm Re}\nolimits \Pi_{HH}^\prime\left(M_H^2\right),
\end{equation}
where $\Pi_{WW}$ and $\Pi_{HH}$ are the unrenormalized self-energies
of the $W$ and Higgs bosons, respectively.\cite{hff}
The same is true of its QCD correction.
For $M_H<2m_t$ and $m_b=0$, the one-loop term reads\cite{hff}
\begin{equation}
\label{delzero}
\Delta_u^0=4N_cx_t\left[\left(1+{1\over2r}\right)
\sqrt{{1\over r}-1}\arcsin\sqrt r-{1\over4}-{1\over2r}\right],
\end{equation}
where $r=(M_H^2/4m_t^2)$ and $x_t$ is defined above Eq.~(\ref{drho}).
In the same approximation, the two-loop term may be written as\cite{hll,two}
\begin{equation}
\label{delone}
\Delta_u^1=N_cC_Fx_t{\alpha_s\over\pi}
\left(6\zeta(3)+2\zeta(2)-{19\over4}-\mathop{\rm Re}\nolimits H_1^\prime(r)\right),
\end{equation}
where $C_F=\left(N_c^2-1\right)/(2N_c)=4/3$ and $H_1$ has an expression in
terms of dilogarithms and trilogarithms.\cite{two}
Equation~(\ref{delone}) has been confirmed recently.\cite{djg}
In the heavy-quark limit ($r\ll1$), one has\cite{two}
\begin{equation}
H_1^\prime(r)=6\zeta(3)+3\zeta(2)-{13\over4}+{122\over135}r+\O(r^2).
\end{equation}
Combining Eqs.~(\ref{delzero},\ref{delone}) and retaining only the leading
high-$m_t$ terms, one finds the QCD-corrected coefficients $C_f$ for $f\ne b$,
\begin{equation}
\label{kun}
C_f=7-2\left({\pi\over3}+{3\over\pi}\right)\alpha_s
\approx7-4.00425\,\alpha_s.
\end{equation}
This result has been reproduced recently.\cite{kwi}
We recover the notion that, in electroweak physics, the one-loop
$\O\left(G_Fm_t^2\right)$ terms get screened by their QCD corrections.
The QCD correction to the shift in $\Gamma\left(H\to f\bar f\,\right)$
induced by a pair of novel quarks with arbitrary masses may be found in
Ref.~57.
\subsection{Two-Loop $\O(\alpha_sG_Fm_t^2)$ Non-universal Corrections}
The QCD correction to the non-universal one-loop contribution to
$\Gamma\left(H\to b\bar b\right)$ arises in part from genuine two-loop
three-point diagrams, which are more involved technically.
However, the leading high-$m_t$ term may be extracted\cite{spi} by means of a
low-energy theorem,\cite{ell}
which relates the amplitudes of two processes that differ by the
insertion of an external Higgs-boson line carrying zero momentum.
In this way, one only needs to compute the irreducible two-loop $b$-quark
self-energy diagrams with one gluon and one longitudinal $W$ boson, which
may be taken massless.
After using the Dirac equation and factoring out one power of $m_b$,
one may put $m_b=0$ in the two-loop integrals,
which may then be solved analytically.
Applying the low-energy theorem and performing on-shell renormalization,
one eventually finds the non-universal leading high-$m_t$ term along with
its QCD correction,\cite{spi}
\begin{equation}
\Delta_{nu}=x_t\left(-6+{3\over2}C_F{\alpha_s\over\pi}\right).
\end{equation}
Combining the term contained within the parentheses with Eq.~(\ref{kun}),
one obtains the QCD-corrected coefficient $C_b$,
\begin{equation}
\label{knu}
C_b=1-2\left({\pi\over3}+{2\over\pi}\right)\alpha_s
\approx1-3.36763\,\alpha_s.
\end{equation}
Again, the $\O(G_Fm_t^2)$ term is screened by its QCD correction.
\subsection{Two-Loop $\O(\alpha_s^2)$ Corrections Including Finite-$m_q$
Effects}
In the on-shell scheme, the one-loop QCD correction\cite{bra} to
$\Gamma\left(H\to q\bar q\right)$ emerges from the one-loop QED correction by
substituting $\alpha_sC_F$ for $\alpha Q_f^2$.
{}From Eq.~(\ref{delem}) it is apparent that, for $m_q\ll M_H/2$, large
logarithmic corrections occur.
In general, they are of the form $(\alpha_s/\pi)^n\ln^m(M_H^2/m_q^2)$,
with $n\ge m$.
Owing to the renormalization-group equation, these logarithms may be
absorbed completely into the running $\overline{\rm MS}$ quark mass,
$\overline m_q(\mu)$, evaluated at $\mu=M_H$.
A similar mechanism has been exploited also in Eqs.~(\ref{drv},\ref{dra}).
In this way, these logarithms are resummed to all orders and the perturbation
expansion converges more rapidly.
This observation gives support to the notion that the $q\bar qH$ Yukawa
couplings are controlled by the running quark masses.
For $q\ne t$, the QCD corrections to $\Gamma\left(H\to q\bar q\right)$ are
known up to $\O(\alpha_s^2)$.
In the $\overline{\rm MS}$ scheme, the result is\cite{bak,lrs}
\begin{eqnarray}
\label{hqqmsb}
\Gamma\left(H\to q\bar q\right)&\hspace*{-2.5mm}=\hspace*{-2.5mm}&{3G_FM_H\overline m_q^2\over4\pi\sqrt2}
\left[\left(1-4{\overline m_q^2\over M_H^2}\right)^{3/2}
+C_F{\alpha_s\over\pi}\left({17\over4}-30{\overline m_q^2\over M_H^2}\right)
\right.\nonumber\\
&&\qquad{}+\left.
\left({\alpha_s\over\pi}\right)^2\left(K_1
+K_2{\overline m_q^2\over M_H^2}+12\sum_{i=u,d,s,c,b}
{\overline m_i^2\over M_H^2}\right)
\vphantom{\left(1-4{\overline m_q^2\over M_H^2}\right)^{3/2}}\right],
\end{eqnarray}
where $K_1=35.93996-1.35865\,N_F$,\cite{gor}
$K_2=-129.72924+6.00093\,N_F$,\cite{lrs} with $N_F$ being the number
of quark flavours active at $\mu=M_H$, and it is understood that $\alpha_s$,
$\overline m_q$, and $\overline m_i$ are to be evaluated at this scale.
The electroweak corrections may be implemented in Eq.~(\ref{hqqmsb}) by
multiplication with
$\left[1+(\alpha/\pi)Q_f^2\Delta_{em}\right](1+\Delta_{weak})$,
where $\Delta_{em}$ and $\Delta_{weak}$ are given in
Eqs.~(\ref{delem},\ref{weak}), respectively.
To include also the $\O(\alpha_sG_Fm_t^2)$ corrections, one substitutes in
Eq.~(\ref{weak}) the QCD-corrected $C_f$ terms specified in
Eqs.~(\ref{kun},\ref{knu}).
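A minimal numerical sketch of Eq.~(\ref{hqqmsb}) for the $b\bar b$ channel, dropping the small light-quark mass sum (all inputs illustrative), is
\begin{verbatim}
import math

GF, MH = 1.16637e-5, 120.0        # GeV; MH is a placeholder
a = 0.112/math.pi                 # alpha_s(MH)/pi, illustrative
mq, NF = 3.0, 5                   # MS-bar m_b(MH), active flavours
K1 = 35.93996 - 1.35865*NF
K2 = -129.72924 + 6.00093*NF
x = mq**2/MH**2

Gamma = 3*GF*MH*mq**2/(4*math.pi*math.sqrt(2.0))*(
        (1 - 4*x)**1.5 + (4.0/3.0)*a*(17.0/4.0 - 30*x)
        + a**2*(K1 + K2*x))
print(f"Gamma(H->qq) ~ {Gamma*1e3:.3f} MeV")
\end{verbatim}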
We note in passing that our result\cite{spi} disagrees with a recent
calculation\cite{kwi} of the $\O(\alpha_sG_Fm_t^2)$ correction to
$\Gamma\left(H\to b\bar b\right)$ in the on-shell scheme.
\section{Conclusions}
In conclusion, all dominant two-loop and even certain three-loop radiative
corrections to $Z$-boson physics are now available.
However, one has to bear in mind that, apart from the lack of knowledge of
the accurate values of $M_H$ and $m_t$, the reliability
of the theoretical predictions is limited by a number of error sources.
The inherent QCD errors on the hadronic contribution to $\Delta\alpha$ and
the $tb$ contribution to $\Delta\rho$ are
$\delta\Delta\alpha=\pm7\cdot10^{-4}$ and
$\delta\Delta\rho=\pm1.5\cdot10^{-4}$, respectively, which amounts to
$\delta\Delta r=\pm8.6\cdot10^{-4}$.
The unknown electroweak corrections are of the order
$(\alpha/\pi s_w^2)^2(m_t^2/M_Z^2)\ln(m_t^2/M_Z^2)\approx6\cdot10^{-4}$,
possibly multiplied by a large prefactor.\cite{dfg}
The scheme dependence of the key electroweak parameters has been estimated
in Refs.~25,65,66
by comparing the evaluations in the on-shell
scheme and certain variants of the $\overline{\rm MS}$ scheme;
the maximum variation of $\Delta r$ in the ranges
60~GeV${}<M_H<{}$1~TeV and 150~GeV${}<m_t<{}$200~GeV
is $8\cdot10^{-5}$ when the coupling-constant renormalization
is converted\cite{fan} and $4\cdot10^{-4}$ when the top-quark mass is
redefined taking into account just the QCD corrections.\cite{ber}
The effect on $\Delta\rho$ of including also the leading electroweak
corrections in the redefinition of the top-quark mass has been
investigated\cite{boc} recently in the approximation $M_H,m_t\gg M_Z$.
The theoretical predictions for Higgs-boson physics at present and
near-future colliding-beam experiments are probably far more precise than
required by the expected experimental accuracy.
\section{Acknowledgements}
I would like to thank the organizers of the Tennessee International Symposium
on Radiative Corrections, in particular Prof.\ B.F.L. Ward, for creating such
a stimulating atmosphere.
I am indebted to the Department of Physics and Astronomy of the University
of Tennessee at Knoxville for supporting my travel.
I am grateful to the KEK Theory Group for the warm hospitality extended to me
during my visit, when this talk was written up.
This work was supported by the Japan Society for the Promotion of Science
(JSPS) through Fellowship No.~S94159.
\section{References}
\vspace*{-0.4cm}
\section{Introduction}
\label{Sec:Intro}
Many living organisms, chemical and physical systems can behave as self-sustained oscillators~\cite{Book:Winfree}.
Spiking neurons in the brain, flashing fireflies, Belousov-Zhabotinsky (BZ) reaction
and swinging pendula of metronomes are just a few examples of such kind.
When two or more self-sustained oscillators interact with each other,
their rhythms tend to adjust in a certain order
resulting in their partial or complete synchronization~\cite{Book:PikovskyRK,AreD-GKMZ2008}.
Such phenomena have been observed in many real-world systems and laboratory experiments,
including Josephson junction arrays~\cite{WieCS1996}, populations of fireflies~\cite{BucB1968} and yeast cells~\cite{DeMODS2008},
chemical~\cite{TayTWHS2009} and electrochemical oscillators~\cite{KisZH2002}.
Moreover, synchronization turns out to underlie many physiological processes~\cite{Gla2001,YamIMOYKO2003}
and is, in some cases, associated with certain brain disorders,
such as schizophrenia, epilepsy, Alzheimer's and Parkinson's diseases~\cite{UhlS2006,LehBHKRSW2009}.
Mathematical modelling of synchronization processes often relies
on different versions of the Kuramoto model~\cite{Book:Kuramoto,Str2000,AceBVRS2005,RodPJK2016},
where the state of each oscillator is described by a single scalar variable, its phase.
These models are derived as normal forms
for general weakly coupled oscillator networks~\cite{Book:HoppensteadtI,AshR2016,PieD2019}.
Moreover, the resulting equations are simplified additionally to reduce their complexity.
This paper is concerned with a Kuramoto model of the form
\begin{equation}
\df{\theta_k}{t} = - \frac{2 \pi}{N} \sum\limits_{j=1}^N
G\left( \fr{2\pi (k - j)}{N} \right) \sin( \theta_k(t) - \theta_j(t) + \alpha),\qquad k=1,\dots,N.
\label{Eq:Oscillators}
\end{equation}
Here~$G(x)$ is a nonconstant continuous even function called {\it coupling kernel}
and $\alpha\in(-\pi/2,\pi/2)$ is a phase lag parameter.
System~(\ref{Eq:Oscillators}) describes the dynamics of~$N$ identical nonlocally coupled phase oscillators~$\theta_k$.
Moreover, if~$G(x)$ is $2\pi$-periodic, then the connections between oscillators have circular symmetry
and system~(\ref{Eq:Oscillators}) is in fact a ring of coupled oscillators.
Model~(\ref{Eq:Oscillators}) was first suggested by Kuramoto and Battogtokh in~\cite{KurB2002}
and since then has been intensively studied in the context of chimera states.
By chimera states one denotes specific dynamical regimes in system~(\ref{Eq:Oscillators})
where a part of oscillators get synchronized, while the others keep oscillating asynchronously.
Such states do not respect the circular symmetry of system~(\ref{Eq:Oscillators});
therefore, their emergence seems counterintuitive,
which explains the origin of their name~\cite{AbrS2004}.
Being first reported for nonlocally coupled phase oscillators,
later chimera states have been found in a variety of other coupled oscillator systems,
both in experiments~\cite{HagMRHOS2012,KapKWCM2014,MarTFH2013,RosRHSG2014,SchSKG-M2014,TinNS2012,TotRTSE2018,WicK2013},
and in numerical simulations, see~\cite{PanA2015,KemHSKK2016,Sch2016,Ome2018,MajBGP2019} and references therein.
A typical example of a chimera state in the system~(\ref{Eq:Oscillators}) with a cosine coupling kernel
\begin{equation}
G(x) = \fr{1}{2\pi} (1 + A \cos x),\qquad A>0,
\label{Coupling:Cos}
\end{equation}
is shown in Figure~\ref{Fig:P}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{DynamicsP.pdf}
\end{center}
\caption{
A stationary chimera state in the system~(\ref{Eq:Oscillators}) with cosine coupling kernel~(\ref{Coupling:Cos}).
(a) Snapshot. (b) Effective frequencies~$\Omega_{\mathrm{eff},k}$ as defined in~(\ref{Def:Omega_eff}).
(c) Space-time plot of phase velocities.
(d) Modulus of the global order parameter~$Z_N(t)$ given by~(\ref{Def:Z}).
Parameters: $N = 8192$, $A = 0.9$ and $\alpha = \pi/2 - 0.16$.
}
\label{Fig:P}
\end{figure}
Note that, for convenience, in the panels~(a)--(c) we use the oscillator positions $x_k = -\pi + 2\pi k / N$ instead of the discrete indices~$k$.
The main difference between synchronized and asynchronous oscillators can be seen in the space-time plot~(c).
The former rotate at almost the same constant speed,
while the latter exhibit more complicated dynamics.
As a result, in the snapshot~(a) there appear two regions:
a coherent region, where the phases~$\theta_k$ lie on a smooth curve,
and an incoherent region, where the phases are randomly distributed.
Moreover, if we compute the {\it effective frequencies} $\Omega_{\mathrm{eff},k}$ defined by
\begin{equation}
\Omega_{\mathrm{eff},k} = \lim\limits_{\tau\to\infty} \fr{1}{\tau} \int_0^\tau \df{\theta_k}{t} dt,
\label{Def:Omega_eff}
\end{equation}
then we obtain the arc-shaped graph~(b) with a plateau corresponding to synchronized oscillators.
Note that although the oscillator dynamics behind the chimera state is quite complicated and chaotic~\cite{WolOYM2011},
in the statistical sense, this state is stationary as can be seen from the graph~(d)
of the global order parameter
\begin{equation}
Z_N(t) = \fr{1}{N} \sum\limits_{k=1}^N e^{i \theta_k(t)}.
\label{Def:Z}
\end{equation}
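For the reader who wishes to reproduce such pictures, a minimal simulation sketch of system~(\ref{Eq:Oscillators}) with kernel~(\ref{Coupling:Cos}) is given below (forward-Euler stepping; $N$, step size and integration time are illustrative and smaller than in the figures, and chimera states may require suitably prepared rather than random initial phases):
\begin{verbatim}
import numpy as np

N, A, alpha = 1024, 0.9, np.pi/2 - 0.16
dt, steps = 0.05, 2000

x = -np.pi + 2*np.pi*np.arange(1, N + 1)/N
# coupling matrix G(x_k - x_j) for the cosine kernel
G = (1 + A*np.cos(x[:, None] - x[None, :]))/(2*np.pi)

theta = 2*np.pi*np.random.rand(N)
Z = []
for _ in range(steps):
    # sin(th_k - th_j + alpha) = Im[e^{i(th_k+alpha)} e^{-i th_j}]
    rhs = -(2*np.pi/N)*(np.exp(1j*(theta + alpha))
                        *(G @ np.exp(-1j*theta))).imag
    theta += dt*rhs
    Z.append(abs(np.mean(np.exp(1j*theta))))
print("final |Z_N| ~", Z[-1])
\end{verbatim}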
In this paper, we consider a more complicated and therefore less explored type of chimera states,
called {\it breathing chimera states}, which is characterized by nonstationary macroscopic dynamics.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{DynamicsQP.pdf}
\end{center}
\caption{A breathing chimera state in the system~(\ref{Eq:Oscillators}) with cosine coupling kernel~(\ref{Coupling:Cos}).
(a) Snapshot. (b) Effective frequencies~$\Omega_{\mathrm{eff},k}$ as defined in~(\ref{Def:Omega_eff}).
(c) Space-time plot of phase velocities.
(d) Modulus of the global order parameter~$Z_N(t)$ given by~(\ref{Def:Z}).
Parameters: $N = 8192$, $A = 1.05$ and $\alpha = \pi/2 - 0.16$.
}
\label{Fig:QP}
\end{figure}
An example of such state is shown in Figure~\ref{Fig:QP}.
Roughly speaking, the typical features of breathing chimera states are:
(i) multiple synchronized and asynchronous regions in the snapshot of~$\theta_k$,
(ii) multiple equidistant plateaus in the graph of the effective frequencies $\Omega_{\mathrm{eff},k}$,
(iii) periodically varying (breathing) oscillator dynamics,
(iv) oscillating modulus~$|Z_N(t)|$ of the global order parameter.
Apart from the above example, breathing chimera states have been found in systems~(\ref{Eq:Oscillators})
with exponential~\cite{BolSOP2018} and top-hat~\cite{SudO2018,SudO2020} coupling kernels~$G(x)$.
They were also observed in two-dimensional lattices of phase oscillators
as breathing spirals with randomized cores~\cite{XieKK2015,OmeWK2018}.
Moreover, it was shown that breathing chimera states persist
for slightly heterogeneous phase oscillators~\cite{Ome2020a,Lai2009}
and can emerge in systems of coupled limit cycle oscillators~\cite{GopCVL2014,CleCFR2018}.
Despite these numerous results, the true nature of breathing chimera states still remains unclear.
Even for the relatively simple system~(\ref{Eq:Oscillators}), no mathematical methods are known
to analyze their stability or to predict their properties for a given coupling kernel~$G(x)$ and phase lag~$\alpha$.
This is the problem we are going to address below.
Our approach is based on the consideration of an integro-differential equation
\begin{equation}
\frac{d z}{d t} = \frac{1}{2} e^{-i \alpha} \mathcal{G} z
- \frac{1}{2} e^{i \alpha} z^2 \mathcal{G} \overline{z},
\label{Eq:OA}
\end{equation}
where~$z(x,t)$ is an unknown complex-valued function $2\pi$-periodic with respect to~$x$,
$\overline{z}$~denotes the complex conjugate of~$z$
and symbol~$\mathcal{G}$ denotes an integral operator
\begin{equation}
\mathcal{G}\::\: C_\mathrm{per}([-\pi,\pi];\mathbb{C})\to C_\mathrm{per}([-\pi,\pi];\mathbb{C}),\qquad
( \mathcal{G} u)(x) = \int_{-\pi}^\pi G(x-y) u(y) dy
\label{Def:G}
\end{equation}
with the coupling kernel~$G(x)$ identical to that in system~(\ref{Eq:Oscillators}).
It is known~\cite{Ome2018,Ome2013} that in the limit of infinitely many oscillators $N\to\infty$,
Eq.~(\ref{Eq:OA}) describes the long-term coarse-grained dynamics of system~(\ref{Eq:Oscillators}).
If one assumes that $x_k = -\pi + 2\pi k / N$ is the physical position of the $k$th oscillator,
then $z(x,t)$~yields the local order parameter of the oscillators positioned around $x\in[-\pi,\pi]$.
More precisely, this means
$$
z(x,t) = \fr{1}{\# \{ k\::\: | x_k - x | < \delta \} } \sum\limits_{k\::\: |x_k - x|<\delta} e^{i \theta_k(t)},
$$
where $0 < \delta \ll 1$ and $\#\{\cdot\}$ denotes the number of indices~$k$ that satisfy the condition in curly brackets.
If $|z(x,t)| = 1$, then the oscillators with $x_k\approx x$ behave synchronously, in other words, they are coherent.
In contrast, if $|z(x,t)| < 1$, then the oscillators with $x_k\approx x$ behave asynchronously.
This interpretation implies that only functions~$z(x,t)$ satisfying the inequality $|z(x,t)|\le 1$
are relevant to the oscillator system~(\ref{Eq:Oscillators}).
Moreover, every chimera state is represented by a spatially structured solution~$z(x,t)$
composed of both coherent ($|z| = 1$) and incoherent ($|z| < 1$) regions.
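In numerical practice, $z(x,t)$ is estimated from a phase snapshot by precisely this sliding-window average; a minimal sketch (the window half-width below is an illustrative choice) reads
\begin{verbatim}
import numpy as np

def local_order_parameter(theta, delta=0.1):
    # theta: phase snapshot of length N on the ring x_k in [-pi, pi]
    N = theta.size
    x = -np.pi + 2*np.pi*np.arange(1, N + 1)/N
    # circular distance |x_k - x_j| on the ring
    d = np.abs((x[:, None] - x[None, :] + np.pi) % (2*np.pi) - np.pi)
    W = (d < delta).astype(float)
    return (W @ np.exp(1j*theta))/W.sum(axis=1)
\end{verbatim}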
In most previous studies, chimera states were identified with the rotating-wave solutions of Eq.~(\ref{Eq:OA}) given by
\begin{equation}
z = a(x) e^{i \Omega t},
\label{Ansatz:RW}
\end{equation}
where $a\in C_\mathrm{per}([-\pi,\pi];\mathbb{C})$ and $\Omega\in\mathbb{R}$.
However, for breathing chimera states ansatz~(\ref{Ansatz:RW}) does not work.
Numerical results in~\cite{SudO2020,OmeWK2018} suggest
that, in this case, one has to look for more complicated solutions
\begin{equation}
z = a(x,t) e^{i \Omega t},
\label{Ansatz:QP}
\end{equation}
where~$a(x,t)$ is a function $2\pi$-periodic with respect to~$x$
and $T$-periodic with respect to~$t$ for some~$T>0$.
To distinguish the cyclic frequencies~$\Omega$ and~$\omega = 2\pi/T$
in the following we call them {\it the primary frequency} and {\it the secondary frequency}, respectively.
In general, these frequencies are different and incommensurable;
therefore, the dynamics of the solution~(\ref{Ansatz:QP}) appears to be quasiperiodic,
and breathing chimera states can also be called {\it quasiperiodic chimera states}.
\begin{remark}
Note that the product ansatz~(\ref{Ansatz:QP}) with a $T$-periodic function $a(x,t)$, in general, is not uniquely determined.
Indeed, for every nonzero integer $m$ we can rewrite it in the equivalent form
$$
z = a_m(x,t) e^{i \Omega_m t}\quad\mbox{with}\quad
a_m(x,t) = a(x,t) e^{i m \omega t}\quad\mbox{and}\quad \Omega_m = \Omega - m \omega.
$$
To avoid this ambiguity, throughout the paper we assume that the function $a(x,t)$
and the constant $\Omega$ in~(\ref{Ansatz:QP}) are chosen so that
$$
\lim\limits_{\tau\to\infty} \frac{1}{\tau} \int_0^\tau d \arg Y(t) = 0
\qquad\mbox{where}\qquad
Y(t) = \frac{1}{2\pi} \int_{-\pi}^\pi a(x,t) dx.
$$
Roughly speaking, we request that the variation of the complex argument of $Y(t)$
remains bounded for all $t \ge 0$.
Importantly, the above calibration condition is well-defined only if $Y(t)\ne 0$ for all $t\ge 0$,
therefore we checked carefully that this requirement is satisfied
for all examples of breathing chimera states shown in the paper.
\label{Remark:Calibration}
\end{remark}
\begin{remark}
The term quasiperiodic chimera state was previously used in~\cite{PikR2008}
to denote some partially synchronized states in a two population model
of globally coupled identical phase oscillators.
In contrast to breathing chimera states considered here,
the quasiperiodic chimera states from~\cite{PikR2008} are characterized
by quasiperiodically (not periodically!) oscillating modulus $|Z_N(t)|$ of the global order parameter.
Moreover, their mathematical description goes beyond the Ott-Antonsen theory~\cite{OttA2008} used in this paper
and requires the application of a more general approach suggested by Watanabe and Strogatz~\cite{WatS1993,WatS1994}.
The chimera state in Figure~\ref{Fig:QP} demonstrates a simpler, periodic dynamics of $|Z_N(t)|$,
therefore, in the context of two population models, it should be compared
with the breathing chimera states reported in~\cite{AbrMSW2008}.
\end{remark}
\begin{remark}
It is easy to verify that Eq.~(\ref{Eq:OA}) is equivariant with respect
to the one-parameter Lie group of complex phase shifts $z \mapsto z e^{i \phi}$ with $\phi\in\mathbb{R}/2\pi\mathbb{Z}$.
Therefore, in the mathematical literature, solutions of Eq.~(\ref{Eq:OA}) of the form~(\ref{Ansatz:RW}) and~(\ref{Ansatz:QP})
are called relative equilibria and relative periodic orbits (with respect to this group action), respectively.
These names also indicate that solution~(\ref{Ansatz:RW}) becomes an equilibrium
and solution~(\ref{Ansatz:QP}) becomes a periodic orbit in the appropriate corotating frame.
The existence and stability of relative equilibria of Eq.~(\ref{Eq:OA})
have been the focus of many papers, in particular, those concerned with stationary chimera states.
In contrast, relative periodic orbits of Eq.~(\ref{Eq:OA}) have hardly been considered.
The main purpose of the present paper is to fill this gap.
\end{remark}
In this paper, we demonstrate that ansatz~(\ref{Ansatz:QP}) does describe breathing chimera states
and can be used for their continuation as well as for their stability analysis.
The paper is organized as follows.
In Section~\ref{Sec:Riccati} we consider an auxiliary complex Riccati equation
$$
\df{u}{t} = w(t) - i s u(t) - \overline{w}(t) u^2(t),
$$
where~$s$ is a real coefficient and~$w(t)$ is a continuous complex-valued function.
We show that, in general, this equation has a unique stable periodic solution satisfying the inequality~$|u(t)|\le 1$.
The corresponding solution operator is denoted by
$$
\mathcal{U}\::\: (w,s) \in C_\mathrm{per}([0,2\pi];\mathbb{C}) \times \mathbb{R} \mapsto u\in C^1_\mathrm{per}([0,2\pi];\mathbb{C}).
$$
Its properties are discussed in Sections~\ref{Sec:Operator:U} and~\ref{Sec:Operator:U:Derivatives}.
Although the operator~$\mathcal{U}$ is defined implicitly, it turns out that its value can be computed
by solving only three initial value problems for the above complex Riccati equation.
In Section~\ref{Sec:SC}, we show that if Eq.~(\ref{Eq:OA}) has a stable solution of the form~(\ref{Ansatz:QP}),
then its amplitude~$a(x,t)$ and its primary and secondary frequencies~$\Omega$ and~$\omega$
satisfy a self-consistency equation
\begin{equation}
2 \omega e^{i \alpha} w(x,t) - \mathcal{G} \mathcal{U}( w(x,t), s ) = 0,
\label{Eq:SC}
\end{equation}
where
\begin{equation}
\omega = \fr{2\pi}{T},\qquad
s = \fr{\Omega}{\omega},\qquad
w(x,t) = \fr{e^{-i \alpha}}{2 \omega} \mathcal{G} a\left( x, \fr{t}{\omega} \right).
\label{Def:omega:s:w}
\end{equation}
A modified version of Eq.~(\ref{Eq:SC}) is obtained in Section~\ref{Eq:SC:Modified}.
Then in Section~\ref{Eq:SC:Modified:Cos} we suggest a continuation algorithm
that allows us to compute the solution branches of Eq.~(\ref{Eq:SC})
and thus to predict the properties of breathing chimera states.
In particular, in Section~\ref{Sec:Formulas:Z:Omega_eff}
we explain how the continuum limit analogs of the global order parameter~$Z_N(t)$
and of the effective frequencies~$\Omega_{\mathrm{eff},k}$ can be computed.
Moreover, for the sake of completeness, in Section~\ref{Sec:Extraction}
we also explain how to extract the primary and secondary frequencies
of a breathing chimera state from the corresponding trajectory of coupled oscillator system~(\ref{Eq:Oscillators}).
In Section~\ref{Sec:Stability}, we carry out the linear stability analysis of a general solution~(\ref{Ansatz:QP}) to Eq.~(\ref{Eq:OA}).
It relies on the consideration of the monodromy operator,
which describes the evolution of small perturbations in the linearized Eq.~(\ref{Eq:OA}).
More precisely, the stability of the solution~(\ref{Ansatz:QP}) is determined by the spectrum of the monodromy operator.
The spectrum consists of two parts: essential and discrete spectra.
The former part is known explicitly and has no influence on the stability of breathing chimera states,
while the latter part is crucial for their stability but can be computed only numerically as explained in Section~\ref{Sec:DiscreteSpectrum}.
In Section~\ref{Sec:Example}, we illustrate the performance of the developed methods
considering specific examples of breathing chimera states in system~(\ref{Eq:Oscillators}).
In particular, using the continuation method, we find a branch of breathing chimera states
starting from the chimera state in Fig.~\ref{Fig:QP}.
Moreover, we also compute theoretical predictions of the graphs of~$|Z_N(t)|$ and~$\Omega_{\mathrm{eff},k}$
shown in Fig.~\ref{Fig:QP}(b),(d).
Furthermore, our analysis reveals that solutions~(\ref{Ansatz:QP}) can lose their stability
only via nonclassical bifurcations when one or two unstable eigenvalues emerge from the essential spectrum.
Similar results were known for partially synchronized states in the classical Kuramoto model~\cite{MirS2007}
and for stationary chimera states~(\ref{Ansatz:RW})~\cite{Ome2013}.
For breathing chimera states, however, they are reported here for the first time.
Finally, in Section~\ref{Sec:Conclusions} we summarize the obtained results
and point out other potential applications of our methods.
{\bf Notations.} Throughout this paper we use the following notations.
We let $C_\mathrm{per}([-\pi,\pi];\mathbb{C})$ denote
the space of all $2\pi$-periodic continuous complex-valued functions.
A similar notation $C_\mathrm{per}([-\pi,\pi]\times[0,2\pi];\mathbb{C})$
is used to denote the space of all continuous doubly periodic functions
on the square domain $[-\pi,\pi]\times[0,2\pi]$.
Moreover, the capital calligraphic letters such as~$\mathcal{G}$ or~$\mathcal{U}$
are used to denote operators on appropriate Banach spaces.
Finally, the symbol $\Theta(t)$ denotes the Heaviside step function
such that $\Theta(t) = 0$ for $t < 0$ and $\Theta(t) = 1$ for $t \ge 0$.
\section{Periodic complex Riccati equation}
\label{Sec:Riccati}
Let us consider a complex Riccati equation of the form
\begin{equation}
\df{u}{t} = w(t) - i s u(t) - \overline{w}(t) u^2(t),
\label{Eq:Riccati}
\end{equation}
where $s\in\mathbb{R}$ and~$w(t)$ is a continuous complex-valued function.
Let $\mathbb{D} = \{ v\in\mathbb{C}\::\:|v| < 1 \}$ denote the open unit disc of the complex plane
and $\overline{\mathbb{D}} = \mathbb{D}\cup\partial \mathbb{D}$ be its closure.
In this section we will show that for every $(w(t),s)\in C_\mathrm{per}([0,2\pi];\mathbb{C})\times\mathbb{R}$
such that $|s| + \max\limits_{t\in[0,2\pi]} |w(t)| \ne 0$, in general,
there exists a unique stable solution to Eq.~(\ref{Eq:Riccati})
lying entirely in the unit disc~$\overline{\mathbb{D}}$.
The nonlinear operator yielding this solution will be denoted by $\mathcal{U}(w(t),s)$.
\begin{proposition}
For every $s\in\mathbb{R}$, $w\in C(\mathbb{R};\mathbb{C})$ and~$u_0 \in \overline{\mathbb{D}}$
there exists a unique global solution to equation~(\ref{Eq:Riccati})
starting from the initial condition $u(0) = u_0$.
Moreover, if $|u_0| = 1$ or $|u_0| < 1$,
then $|u(t)| = 1$ or $|u(t)| < 1$ for all~$t\in\mathbb{R}$, respectively.
\label{Proposition:Disc}
\end{proposition}
{\bf Proof:}
Suppose that~$u(t)$ is a solution to equation~(\ref{Eq:Riccati}), then
\begin{eqnarray}
\df{|u|^2}{t} &=& u(t) \df{\overline{u}}{t} + \overline{u}(t) \df{u}{t}
= u(t) \overline{w}(t) + i s |u(t)|^2 - w(t) |u(t)|^2 \overline{u}(t) \nonumber\\[2mm]
&+& \overline{u}(t) w(t) - i s |u(t)|^2 - \overline{w}(t) |u(t)|^2 u(t)
= 2 \mathrm{Re}( u(t) \overline{w}(t) ) ( 1 - |u(t)|^2 ).
\label{Eq:Modulus:u}
\end{eqnarray}
According to Eq.~(\ref{Eq:Modulus:u}), the right-hand side of the equation for~$|u|^2$ vanishes on the set $|u| = 1$,
hence this set is invariant: if $|u(0)| = 1$ then $|u(t)| = 1$ for all~$t\in\mathbb{R}$.
In particular, such a solution~$u(t)$ cannot blow up in finite time and therefore can be extended to all $t\in\mathbb{R}$.
On the other hand, Eq.~(\ref{Eq:Modulus:u}) implies that every solution~$u(t)$
satisfying $|u(0)| < 1$ remains trapped inside the disc~$\mathbb{D}$,
therefore it also can be extended for all $t\in\mathbb{R}$.~\hfill \rule{2.3mm}{2.3mm}
\begin{remark}
Every solution~$u(t)$ to Eq.~(\ref{Eq:Riccati}) satisfying the identity $|u(t)| = 1$
can be written in the form $u(t) = e^{i \psi(t)}$
where $\psi(t)$ is a solution to the equation
$$
\df{\psi}{t} = - s + 2 \mathrm{Im}( w(t) e^{-i \psi} ).
$$
\label{Remark:u1}
\end{remark}
Now we consider Eq.~(\ref{Eq:Riccati}) with a $2\pi$-periodic coefficient~$w(t)$.
It is well-known~\cite{Cam1997,Wil2008} that the Poincar{\'e} map of such an equation
coincides with a M{\"o}bius transformation.
Because of Proposition~\ref{Proposition:Disc}, this M{\"o}bius transformation
maps the unit disc~$\overline{\mathbb{D}}$ onto itself, and therefore it can be written in the form
\begin{equation}
\mathcal{M}(u) = \fr{e^{i\theta}(u + b)}{\overline{b} u + 1}
\quad\mbox{where}\quad \theta\in\mathbb{R}\quad\mbox{and}\quad b\in\mathbb{C}.
\label{Def:M}
\end{equation}
\begin{remark}
The fact that the Poincar{\'e} map of the periodic complex Riccati equation~(\ref{Eq:Riccati})
coincides with the M{\"o}bius transformation~(\ref{Def:M}) can also be justified in a different way,
using the Lie theory, see~\cite[Sec.~III]{MarMS2009}.
\end{remark}
The next proposition shows that the parameters~$\theta$ and~$b$ in formula~(\ref{Def:M})
can be uniquely determined using two solutions to Eq.~(\ref{Eq:Riccati})
starting from the initial conditions $u = 0$ and $u = 1$.
\begin{proposition}
Suppose $s\in\mathbb{R}$ and $w\in C_\mathrm{per}([0,2\pi];\mathbb{C})$.
Let $U(t)$ and $\Psi(t)$ be solutions of the initial value problems
\begin{eqnarray}
\df{U}{t} &=& w(t) - i s U(t) - \overline{w}(t) U^2(t),\qquad U(0) = 0,
\label{IVP:U}\\[2mm]
\df{\Psi}{t} &=& - s + 2 \mathrm{Im}( w(t) e^{-i \Psi} ),\qquad\phantom{aaaaa} \Psi(0) = 0,
\label{IVP:psi}
\end{eqnarray}
and let $\zeta = U(-2\pi)$ and $\chi = \Psi(2\pi)$,
then the Poincar\'e map of Eq.~(\ref{Eq:Riccati}) is determined by the formula~(\ref{Def:M}) with
\begin{equation}
b = -\zeta
\qquad\mbox{and}\qquad
e^{i \theta} = \fr{ \overline{\zeta} - 1 }{\zeta - 1} e^{i\chi}.
\label{M:Params}
\end{equation}
Moreover $|b| < 1$.
\label{Proposition:U:psi}
\end{proposition}
{\bf Proof:} The definition of the Poincar{\'e} map and Remark~\ref{Remark:u1} imply
$$
\fr{e^{i\theta}(\zeta + b)}{\overline{b} \zeta + 1} = 0\qquad\mbox{and}\qquad \fr{e^{i\theta}(1 + b)}{\overline{b} + 1} = e^{i \chi}.
$$
The former equation yields $b = -\zeta$.
Inserting this into the latter equation we obtain a formula for $e^{i\theta}$.
Notice that because of Proposition~\ref{Proposition:Disc} we always have $|\zeta| < 1$, and hence $|b| < 1$ too.~\hfill \rule{2.3mm}{2.3mm}
Every $2\pi$-periodic solution to Eq.~(\ref{Eq:Riccati})
corresponds to a fixed point of the Poincar{\'e} map,
or equivalently to a solution of the equation
\begin{equation}
\mathcal{M}(u) = \fr{e^{i\theta}(u + b)}{\overline{b} u + 1} = u.
\label{Eq:FP}
\end{equation}
The periodic solution is stable or unstable according to whether
the corresponding fixed point~$u_*$ is stable or unstable
with respect to the map~$\mathcal{M}(u)$;
the latter condition can easily be verified by evaluating the derivative~$\mathcal{M}'(u_*)$.
Indeed, if $|\mathcal{M}'(u_*)| < 1$, then the fixed point~$u_*$ is stable.
On the other hand, if $|\mathcal{M}'(u_*)| > 1$, then~$u_*$ is unstable.
Moreover, the special properties of the map~$\mathcal{M}(u)$, see Remark~\ref{Remark:M:1}, allow us to conclude
that a fixed point~$u_*$ with $|\mathcal{M}'(u_*)| = 1$ is also stable provided it is non-degenerate.
In the next proposition, we show that every Poincar{\'e} map~(\ref{Def:M}) with $0<|b|<1$
has either a unique stable fixed point in the closed unit disc~$\overline{\mathbb{D}}$,
or a single fixed point altogether
(in this case, the fixed point is degenerate and lies on the boundary~$\partial\mathbb{D}$ of the unit disc).
\begin{proposition}
Suppose $\theta\in(-\pi,\pi]$ and $0 < |b| < 1$, then Eq.~(\ref{Eq:FP})
has a unique solution~$u_0\in\overline{\mathbb{D}}$ such that $|\mathcal{M}'(u_0)| \le 1$.
This solution is given by the formulas
\begin{eqnarray}
&&
u_0 = \fr{ i \sin(\theta/2) + \sqrt{ |b|^2 - \sin^2(\theta/2) } }{|b|^2} b e^{i \theta / 2}\qquad\mbox{for}\qquad
|b| > |\sin (\theta/2)|,\label{Formula:u0:1}\\[2mm]
&&
u_0 = \fr{ i \sin(\theta/2) - i \sqrt{ \sin^2(\theta/2) - |b|^2 } }{|b|^2} b e^{i \theta / 2}\qquad\mbox{for}\qquad
|b| \le \sin (\theta/2),\nonumber\\[2mm]
&&
u_0 = \fr{ i \sin(\theta/2) + i \sqrt{ \sin^2(\theta/2) - |b|^2 } }{|b|^2} b e^{i \theta / 2}\qquad\mbox{for}\qquad
|b| \le -\sin (\theta/2).
\nonumber
\end{eqnarray}
Moreover, $|u_0| = 1$ for $|b| \ge |\sin (\theta/2)|$,
while $|u_0| < 1$ for $|b| < |\sin (\theta/2)|$.
Furthermore, $|\mathcal{M}'(u_0)| < 1$ for $|b| > |\sin (\theta/2)|$,
$|\mathcal{M}'(u_0)| = 1$ for $|b| < |\sin (\theta/2)|$,
and $\mathcal{M}'(u_0) = 1$ for $|b| = |\sin (\theta/2)|$.
\label{Proposition:IV}
\end{proposition}
{\bf Proof:} For every $|b| < 1$ and $u\in\overline{\mathbb{D}}$ equation~(\ref{Eq:FP}) can be rewritten in the form
\begin{equation}
e^{-i \theta/2} \overline{b} u^2 - 2 i \sin(\theta/2) u - e^{i \theta/2} b = 0.
\label{Eq:FP_}
\end{equation}
Since $b\ne 0$ this is a quadratic equation which generically has two complex roots.
We are going to check which of these roots lie in the unit disc~$\overline{\mathbb{D}}$
and what are their stability properties. To address the latter question
we compute the derivative of the M{\"o}bius transformation~(\ref{Def:M})
\begin{equation}
\mathcal{M}'(u) = \fr{e^{i \theta} ( 1 - |b|^2 )}{(\overline{b} u + 1)^2}
\label{Def:M:prime}
\end{equation}
and evaluate its modulus (keeping in mind that $|b| < 1$)
\begin{equation}
|\mathcal{M}'(u)| = \fr{1 - |b|^2}{|\overline{b} u + 1|^2}.
\label{M:prime}
\end{equation}
Depending on the sign of the difference $|b|^2 - \sin^2(\theta/2)$ we distinguish two cases.
{\it Case~1.} Suppose $|b| > |\sin(\theta/2)|$, then two solutions to Eq.~(\ref{Eq:FP_}) read
$$
u_\pm = \fr{ i \sin(\theta/2) \pm \sqrt{ |b|^2 - \sin^2(\theta/2) } }{\overline{b} e^{-i \theta / 2}}.
$$
It is easy to verify that in this case $|u_+| = |u_-| = 1$. Moreover, we also obtain
\begin{eqnarray*}
1 - |b|^2 - |\overline{b} u_\pm + 1|^2 &=& 1 - |b|^2 - \left( \cos(\theta/2) \pm \sqrt{ |b|^2 - \sin^2(\theta/2) } \right)^2 \\[2mm]
&=& - 2 \sqrt{ |b|^2 - \sin^2(\theta/2) } \left( \sqrt{ |b|^2 - \sin^2(\theta/2) } \pm \cos(\theta/2) \right).
\end{eqnarray*}
Obviously, for every $\theta\in(-\pi,\pi]$ and $|\sin(\theta/2)| < |b| < 1$ we have
$$
\cos(\theta/2) = \sqrt{ 1 - \sin^2(\theta/2) } > \sqrt{ |b|^2 - \sin^2(\theta/2) },
$$
therefore
\begin{equation}
1 - |b|^2 - |\overline{b} u_+ + 1|^2 < 0\quad\mbox{and}\quad 1 - |b|^2 - |\overline{b} u_- + 1|^2 > 0,
\label{Ineq:u_pm}
\end{equation}
and hence
$$
| \mathcal{M}'(u_+) | < 1\quad\mbox{and}\quad | \mathcal{M}'(u_-) | > 1.
$$
{\it Case~2.} The other case is determined by the inequality $|b| \le |\sin(\theta/2)|$.
If $|b| < |\sin(\theta/2)|$, then Eq.~(\ref{Eq:FP_}) has two solutions
\begin{equation}
u_\pm = \fr{ i \sin(\theta/2) \pm i \sqrt{ \sin^2(\theta/2) - |b|^2 } }{\overline{b} e^{-i \theta / 2}},
\label{Def:u:pm}
\end{equation}
while for $|b| = |\sin(\theta/2)|$ the values~$u_+$ and~$u_-$ given by~(\ref{Def:u:pm}) coincide.
To estimate the moduli~$|u_+|$ and~$|u_-|$ we compute the difference
$$
\left( \sin(\theta/2) \pm \sqrt{ \sin^2(\theta/2) - |b|^2 } \right)^2 - |b|^2 = 2 \sqrt{ \sin^2(\theta/2) - |b|^2 } \left( \sqrt{ \sin^2(\theta/2) - |b|^2 } \pm \sin(\theta/2) \right).
$$
Then for $\sin(\theta/2) > 0$ we obtain
$$
\left( \sin(\theta/2) + \sqrt{ \sin^2(\theta/2) - |b|^2 } \right)^2 - |b|^2 > 0,\quad
\left( \sin(\theta/2) - \sqrt{ \sin^2(\theta/2) - |b|^2 } \right)^2 - |b|^2 < 0,
$$
and hence $|u_+| > 1$ and $|u_-| < 1$. Similarly, for $\sin(\theta/2) < 0$ we obtain $|u_+| < 1$ and $|u_-| > 1$.
Finally, we compute a difference relevant to formula~(\ref{M:prime})
$$
1 - |b|^2 - |\overline{b} u_\pm + 1|^2 = 1 - |b|^2 - \left| \cos(\theta/2) \pm i \sqrt{ \sin^2(\theta/2) - |b|^2 } \right|^2 = 0,
$$
which implies $| \mathcal{M}'(u_+) | = | \mathcal{M}'(u_-) | = 1$.
On the other hand, in the limiting case $|b| = |\sin(\theta/2)|$, formulas~(\ref{Def:M:prime}) and~(\ref{Def:u:pm})
yield $|u_+| = |u_-| = 1$ and $\mathcal{M}'(u_+) = \mathcal{M}'(u_-) = 1$.~\hfill \rule{2.3mm}{2.3mm}
\begin{remark}
Let us consider the formula~(\ref{Def:M}) with $0 < |b| < |\sin(\theta/2)|$ in more detail.
In this case, $\mathcal{M}(u)$ determines an elliptic M{\"o}bius transformation, see \cite[Ch.~3.VII]{Needham}.
This means that it has two distinct fixed points that are neither attractive nor repulsive but indifferent.
(Recall the equality $| \mathcal{M}'(u_\pm) | = 1$ from Case~2 in the proof of Proposition~\ref{Proposition:IV}.)
Moreover, the transformation moves all other points of the complex plane in circles around the two fixed points.
Therefore, according to the Lyapunov stability classification, both fixed points are stable, but not asymptotically stable.
Similarly one can verify that the other two cases $|\sin(\theta/2)| < |b| < 1$ and $|b| = |\sin(\theta/2)| \ne 0$
considered in Proposition~\ref{Proposition:IV} correspond
to the M{\"o}bius transformations~$\mathcal{M}(u)$ of hyperbolic and parabolic types, respectively.
This is in accordance with the fact that~$\mathcal{M}(u)$ has a pair of attracting and repulsive fixed points in the former case
and a degenerate fixed point in the latter case. Note that the degenerate fixed point
of a parabolic M{\"o}bius transformation is always unstable in the sense of Lyapunov~\cite[Ch.~3.VII]{Needham}.
\label{Remark:M:1}
\end{remark}
\begin{remark}
If $b = 0$, then Eq.~(\ref{Eq:FP}) degenerates into the linear equation $e^{i \theta} u = u$.
For $e^{i \theta} \ne 1$ this equation has only one solution $u = 0$,
while for $e^{i \theta} = 1$ it becomes the trivial identity $u = u$
and hence has infinitely many solutions $u\in\overline{\mathbb{D}}$.
In both cases all the solutions are stable, because $\mathcal{M}(u)$ is linear and $| \mathcal{M}'(u) | = 1$.
Moreover, the case $b = 0$ and $e^{i \theta} = 1$ corresponds
to the equation~(\ref{Eq:Riccati}) with $w(t) = 0$ and $s = 0$.
\label{Remark:b:zero}
\end{remark}
\begin{remark}
If~$u_0$ is determined by formula~(\ref{Formula:u0:1}), then~$\mathcal{M}'(u_0)$ is real and $\mathcal{M}'(u_0)\in(0,1)$.
Indeed, formula~(\ref{Formula:u0:1}) implies
$$
\overline{b} u_0 = \left( i \sin(\theta/2) + \sqrt{ |b|^2 - \sin^2(\theta/2) } \right) e^{i \theta / 2},
$$
therefore
$$
\overline{b} u_0 + 1 = \left( \cos(\theta/2) + \sqrt{ |b|^2 - \sin^2(\theta/2) } \right) e^{i \theta / 2}.
$$
Hence the assertion follows from formula~(\ref{Def:M:prime}) and from the first of two inequalities~(\ref{Ineq:u_pm}).
\label{Remark:M:real}
\end{remark}
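The case distinction of Proposition~\ref{Proposition:IV} translates directly into a short numerical routine.
Below is a Python sketch (the function name is ours) returning the solution~$u_0$ for given~$b$ and~$\theta$:
\begin{verbatim}
import numpy as np

def stable_fixed_point(b, theta):
    # case distinction from the proposition above, for 0 < |b| < 1
    # and theta in (-pi, pi]
    s2 = np.sin(theta / 2)
    disc = abs(b)**2 - s2**2
    if disc > 0:             # hyperbolic case: |u0| = 1, |M'(u0)| < 1
        num = 1j * s2 + np.sqrt(disc)
    elif s2 > 0:             # elliptic/parabolic case with sin(theta/2) > 0
        num = 1j * (s2 - np.sqrt(-disc))
    else:                    # elliptic/parabolic case with sin(theta/2) <= 0
        num = 1j * (s2 + np.sqrt(-disc))
    return num * b * np.exp(1j * theta / 2) / abs(b)**2
\end{verbatim}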
\subsection{Solution operator~$\mathcal{U}$}
\label{Sec:Operator:U}
In the previous section we showed that for every~$w\in C_\mathrm{per}([0,2\pi];\mathbb{C})$ and $s\in\mathbb{R}$
the complex Riccati equation~(\ref{Eq:Riccati}) has a uniquely determined $2\pi$-periodic solution~$u(t)\in\overline{\mathbb{D}}$
that is stable in the sense of Lyapunov (or at least linearly stable in the degenerate case).
Let us denote the corresponding solution operator
$$
\mathcal{U}\::\: C_\mathrm{per}([0,2\pi];\mathbb{C})\times\mathbb{R}\to C_\mathrm{per}^1([0,2\pi];\mathbb{C}).
$$
The definition of~$\mathcal{U}$ is constructive and relies on the following steps:
\smallskip
1) Given~$w(t)$ and~$s$ one solves two initial value problems~(\ref{IVP:U}) and~(\ref{IVP:psi})
and obtains coefficients~$b$ and~$\theta$ of the M{\"o}bius transformation~(\ref{Def:M}), see Proposition~\ref{Proposition:U:psi}.
\smallskip
2) Using Proposition~\ref{Proposition:IV} one computes the initial value~$u_0$ of the periodic solution~$u(t)$
that lies entirely in the unit disc~$\overline{\mathbb{D}}$ and, moreover, is stable provided $|b| \ne |\sin(\theta/2)|$.
In the case $b = 0$, one assumes $u_0 = 0$, see Remark~\ref{Remark:b:zero}.
\smallskip
3) One integrates Eq.~(\ref{Eq:Riccati}) with the initial condition $u(0) = u_0$ and obtains $2\pi$-periodic solution~$u(t)$.
\smallskip
\noindent
Importantly, Propositions~\ref{Proposition:U:psi} and~\ref{Proposition:IV} ensure that the steps~1--3 can always be realized.
Therefore, the mapping $\mathcal{U}\::\: (w(t),s) \mapsto u(t)$ is well-defined.
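A minimal numerical realization of steps~1--3 in Python might look as follows
(we rely on scipy's solve\_ivp and on the helper stable\_fixed\_point sketched after Remark~\ref{Remark:M:real};
the tolerances are our choices):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def riccati_rhs(t, y, w, s):
    u = y[0] + 1j * y[1]
    du = w(t) - 1j * s * u - np.conj(w(t)) * u**2
    return [du.real, du.imag]

def solve_U(w, s, t_eval, rtol=1e-10, atol=1e-12):
    # Step 1: two auxiliary initial value problems
    sol = solve_ivp(riccati_rhs, [0, -2 * np.pi], [0.0, 0.0],
                    args=(w, s), rtol=rtol, atol=atol)
    zeta = sol.y[0, -1] + 1j * sol.y[1, -1]          # zeta = U(-2*pi)
    psi_rhs = lambda t, p: [-s + 2 * np.imag(w(t) * np.exp(-1j * p[0]))]
    chi = solve_ivp(psi_rhs, [0, 2 * np.pi], [0.0],
                    rtol=rtol, atol=atol).y[0, -1]   # chi = Psi(2*pi)
    b = -zeta
    theta = np.angle((np.conj(zeta) - 1) / (zeta - 1) * np.exp(1j * chi))
    # Step 2: initial value of the stable periodic solution
    u0 = 0.0 if b == 0 else stable_fixed_point(b, theta)
    # Step 3: integrate over one period
    sol = solve_ivp(riccati_rhs, [0, 2 * np.pi], [u0.real, u0.imag],
                    args=(w, s), t_eval=t_eval, rtol=rtol, atol=atol)
    return sol.y[0] + 1j * sol.y[1]
\end{verbatim}
Here the argument w is a callable returning the value of the $2\pi$-periodic coefficient~$w(t)$.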
\begin{remark}
Note that the minimal period of the function $u(t) = \mathcal{U}(w(t),s)$ does not have to be~$2\pi$.
In general, it can assume any value $2\pi/k$ with $k\in\mathbb{N}$.
Moreover, for certain values of the arguments $(w(t),s)$ the operator~$\mathcal{U}$ can also return a constant function~$u(t)$.
\end{remark}
Due to the definition of~$\mathcal{U}$ we have $|u(t)| \le 1$ for all $t\in[0,2\pi]$, therefore~$\mathcal{U}$ is a bounded operator.
Moreover, the operator~$\mathcal{U}$ has a specific dichotomy property:
\begin{proposition}
Let $w_*\in C_\mathrm{per}([0,2\pi];\mathbb{C})$, $s_*\in\mathbb{R}$ and $u_* = \mathcal{U}( w_*, s_* )$.
Moreover, let
\begin{equation}
M_* = \exp\left( - \int_0^{2\pi} ( i s_* + 2 \overline{w}_*(t) u_*(t) ) dt \right).
\label{Def:M_star}
\end{equation}
Then either $|u_*(t)| = 1$ for all $t\in[0,2\pi]$ and~$M_*$ is a real number such that $M_*\in(0,1]$,
or $|u_*(t)| < 1$ for all $t\in[0,2\pi]$ and $|M_*| = 1$.
\label{Proposition:Dichotomy}
\end{proposition}
{\bf Proof:} We need only show that $M_* = \mathcal{M}'(u_0)$, where $u_0 = u_*(0)$.
Then the assertion follows from Propositions~\ref{Proposition:Disc} and~\ref{Proposition:IV}
and from Remarks~\ref{Remark:b:zero} and~\ref{Remark:M:real}.
Let us consider Eq.~(\ref{Eq:Riccati}) for $w(t) = w_*(t)$ and $s = s_*$.
Inserting there ansatz $u(t) = u_*(t) + v(t)$ and linearizing the resulting equation
with respect to small perturbations~$v(t)$ we obtain
\begin{equation}
\df{v}{t} = - ( i s_* + 2 \overline{w}_*(t) u_*(t) ) v.
\label{Eq:Linear:v}
\end{equation}
Obviously, formula~(\ref{Def:M_star}) determines the Floquet multiplier of the scalar linear equation~(\ref{Eq:Linear:v}).
By definition its value coincides with the derivative of the Poincar{\'e} map of the original nonlinear equation~(\ref{Eq:Riccati}),
hence $M_* = \mathcal{M}'(u_0)$ where $u_0 = u_*(0)$.~\hfill \rule{2.3mm}{2.3mm}
\begin{remark}
Proposition~\ref{Proposition:IV} and Remark~\ref{Remark:b:zero} imply
that $u_0 = u_*(0)$ is a simple fixed point of Eq.~(\ref{Eq:FP}), if and only if $M_* \ne 1$.
Therefore, the equation $M_* = 1$ can be considered as a degeneracy or bifurcation condition.
\end{remark}
\subsection{Derivatives of the operator~$\mathcal{U}$}
\label{Sec:Operator:U:Derivatives}
In this section we show how to compute partial derivatives of the operator~$\mathcal{U}$.
\begin{proposition}
Let $w_*\in C_\mathrm{per}([0,2\pi];\mathbb{C})$, $s_*\in\mathbb{R}$ and $u_* = \mathcal{U}( w_*, s_* )$.
Suppose
$$
\Phi_*(2\pi) \ne 1\qquad\mbox{where}\qquad
\Phi_*(t) = \exp\left( - \int_0^t ( i s_* + 2 \overline{w}_*(\tau) u_*(\tau) ) d\tau \right),
$$
then there exists a bounded linear operator $\mathcal{J}\::\: C_\mathrm{per}([0,2\pi];\mathbb{C}) \to C^1_\mathrm{per}([0,2\pi];\mathbb{C})$
given by
$$
(\mathcal{J} f)(t) = \int_0^{2\pi} \fr{\Phi_*(2\pi) + (1 - \Phi_*(2\pi)) \Theta(t - \tau)}{1 - \Phi_*(2\pi)} \Phi_*(t) \Phi_*^{-1}(\tau) f(\tau) d\tau
$$
such that $v(t) = (\mathcal{J} f)(t)$ is a $2\pi$-periodic solution to the equation
$$
\df{v}{t} + ( i s_* + 2 \overline{w}_*(t) u_*(t) ) v(t) = f(t).
$$
\label{Proposition:Green}
\end{proposition}
{\bf Proof:} This assertion has been proved in~\cite[Proposition~A.1]{Ome2019}.~\hfill \rule{2.3mm}{2.3mm}
\begin{proposition}
Let the assumptions of Proposition~\ref{Proposition:Green} be fulfilled.
Then for every $w\in C_\mathrm{per}([0,2\pi];\mathbb{C})$ we have
\begin{eqnarray}
\left. \partial_\varepsilon \mathcal{U}( w_* + \varepsilon w, s_* )\right|_{\varepsilon = 0} &=& \mathcal{J} ( w - u_*^2 \overline{w} ),\label{Eq:U_w}\\[2mm]
\partial_s \mathcal{U}( w_*, s_* ) &=& \mathcal{J} (- i u_*).\label{Eq:U_s}
\end{eqnarray}
\label{Proposition:Derivatives}
\end{proposition}
{\bf Proof:}
For every $\varepsilon\in\mathbb{R}$, the function $v(t,\varepsilon,s) = \mathcal{U}( w_*(t) + \varepsilon w(t), s )$ satisfies
\begin{equation}
\df{v(t,\varepsilon,s)}{t} = w_*(t) + \varepsilon w(t) - i s v(t,\varepsilon,s) - ( \overline{w}_*(t) + \varepsilon \overline{w}(t) ) v^2(t,\varepsilon,s).
\label{Eq:Identity:v}
\end{equation}
Differentiating this identity with respect to~$\varepsilon$ and inserting $\varepsilon = 0$ and $s = s_*$, we obtain
\begin{equation}
\df{v_\varepsilon(t,0,s_*)}{t} = w(t) - ( i s_* + 2 \overline{w}_*(t) u_*(t) ) v_\varepsilon(t,0,s_*) - \overline{w}(t) u_*^2(t).
\label{Eq:v_eps}
\end{equation}
Now, using Proposition~\ref{Proposition:Green} we solve Eq.~(\ref{Eq:v_eps})
with respect to~$v_\varepsilon(t,0,s_*)$ and obtain formula~(\ref{Eq:U_w}).
Formula~(\ref{Eq:U_s}) is justified similarly.
We differentiate~(\ref{Eq:Identity:v}) with respect to~$s$
and solve the resulting equation using Proposition~\ref{Proposition:Green}.~\hfill \rule{2.3mm}{2.3mm}
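For numerical purposes, both~$\Phi_*$ and the action of the operator~$\mathcal{J}$
can be discretized with elementary quadrature.
A Python sketch (the uniform time grid and the trapezoidal rule are our choices):
\begin{verbatim}
import numpy as np

def apply_J(f, w_star, u_star, s_star, t):
    # f, w_star, u_star sampled on a uniform grid t over [0, 2*pi]
    g = 1j * s_star + 2 * np.conj(w_star) * u_star
    dt = t[1] - t[0]
    # Phi_*(t) = exp(-int_0^t g dtau), cumulative trapezoidal rule
    G = np.concatenate(([0], np.cumsum((g[1:] + g[:-1]) * dt / 2)))
    Phi = np.exp(-G)
    P = Phi[-1]                      # Phi_*(2*pi), assumed != 1
    v = np.empty_like(f, dtype=complex)
    for i, ti in enumerate(t):
        kernel = (P + (1 - P) * (ti >= t)) / (1 - P)   # Theta(0) = 1
        v[i] = np.trapz(kernel * Phi[i] / Phi * f, t)
    return v
\end{verbatim}
With this helper, the derivatives~(\ref{Eq:U_w}) and~(\ref{Eq:U_s}) are obtained
by applying it to the sampled functions $w - u_*^2 \overline{w}$ and $-i u_*$, respectively.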
\section{Self-consistency equation}
\label{Sec:SC}
Suppose that Eq.~(\ref{Eq:OA}) has a solution of the form~(\ref{Ansatz:QP})
where $a(x,t+T) = a(x,t)$ for some $T > 0$. Let us define
\begin{equation}
\omega = \fr{2\pi}{T}\qquad\mbox{and}\qquad u(x,t) = a\left( x, \fr{t}{\omega} \right),
\label{Scaling:omega:u}
\end{equation}
then the new function~$u(x,t)$ is $2\pi$-periodic with respect to~$t$ and satisfies
\begin{equation}
\omega \df{u}{t} = - i \Omega u + \fr{1}{2} e^{-i \alpha} \mathcal{G} u - \fr{1}{2} e^{i \alpha} u^2 \mathcal{G}\overline{u}.
\label{Eq:OA:u}
\end{equation}
Dividing Eq.~(\ref{Eq:OA:u}) by~$\omega$ and introducing the notations
\begin{equation}
s = \fr{\Omega}{\omega}\qquad\mbox{and}\qquad w(x,t) = \fr{e^{-i\alpha}}{2 \omega} \mathcal{G} u,
\label{Scaling:s:w}
\end{equation}
we rewrite Eq.~(\ref{Eq:OA:u}) in the form
\begin{equation}
\df{u}{t} = w(x,t) - i s u - \overline{w}(x,t) u^2.
\label{Eq:OA:u_}
\end{equation}
In Section~\ref{Sec:Riccati} we showed that every stable solution to Eq.~(\ref{Eq:OA:u_})
that lies entirely in the unit disc~$\overline{\mathbb{D}}$ is given by the formula $u(x,t) = \mathcal{U}( w(x,t), s )$.
Inserting this result into the definition of~$w(x,t)$
we arrive at a self-consistency equation
$$
w(x,t) = \fr{e^{-i\alpha}}{2 \omega} \mathcal{G} \mathcal{U}( w(x,t), s ),
$$
which can be written in the equivalent form~(\ref{Eq:SC}).
Eq.~(\ref{Eq:OA:u}) has several continuous symmetries.
More precisely, the set of its solutions is invariant with respect to the following transformations:
\smallskip
1) spatial translations $u(x,t) \mapsto u(x + c,t)$ for $c\in\mathbb{R}$,
\smallskip
2) complex phase shifts $u(x,t) \mapsto u(x,t) e^{i \phi}$ for $\phi\in\mathbb{R}$,
\smallskip
3) time shifts $u(x,t) \mapsto u(x,t+\tau)$ for $\tau\in\mathbb{R}$.
\smallskip
\noindent
All these symmetries are inherited by the self-consistency equation~(\ref{Eq:SC}),
therefore to select its unique solution~$w(x,t)$ we need to provide three pinning conditions.
In practice, this number can be reduced by one if we restrict Eq.~(\ref{Eq:SC}) to the space of even functions
$$
X_\mathrm{e} = \left\{ w \in C_\mathrm{per}([-\pi,\pi]\times[0,2\pi];\mathbb{C})\::\: w(-x,t) = w(x,t)\quad\mbox{for all}\quad (x,t)\in [-\pi,\pi]\times[0,2\pi] \vphantom{\sum}\right\}.
$$
Indeed, for symmetric coupling kernels~$G(x)$
equation~(\ref{Eq:SC}) is reflection symmetric with respect to~$x$,
therefore we can look for solutions~$w(x,t)$ satisfying $w(-x,t) = w(x,t)$ only.
In this case the spatial translation symmetry is eliminated automatically.
Then two pinning conditions relevant to the complex phase shift and the time shift can be chosen in the form
\begin{eqnarray}
&&
\mathrm{Im}\:\left(\int_{-\pi}^\pi dx \int_0^{2\pi} w(x,t) dt \right) = 0,
\label{Eq:Pinning:1}\\[2mm]
&&
\mathrm{Im}\:\left(\int_{-\pi}^\pi dx \int_0^{2\pi} w(x,t) e^{i t} dt \right) = 0.
\label{Eq:Pinning:2}
\end{eqnarray}
In the next sections we will show that the augmented system
consisting of Eqs.~(\ref{Eq:SC}), (\ref{Eq:Pinning:1}) and~(\ref{Eq:Pinning:2}) is well-defined.
This means that for fixed phase lag~$\alpha$ and kernel~$G(x)$ it correctly determines
the unknown even function~$w(x,t)$ and two scalar parameters~$\omega$ and~$s$.
\subsection{Modified self-consistency equation}
\label{Eq:SC:Modified}
In this section we show that the phase shift symmetry can also be eliminated from Eq.~(\ref{Eq:SC}),
which allows us to decrease the number of equations and unknowns in the augmented system described above.
Let us define a linear operator
$$
\mathcal{P}\::\:C_\mathrm{per}([-\pi,\pi]\times[0,2\pi];\mathbb{C})\to\mathbb{C},\qquad\mathcal{P} w = \fr{1}{(2\pi)^2} \int_{-\pi}^\pi dx \int_0^{2\pi} w(x,t) dt,
$$
which gives the constant (mean) part of the function $w(x,t)$.
Using this operator and the identity operator~$\mathcal{I}$,
we rewrite Eq.~(\ref{Eq:SC}) in the equivalent form
\begin{eqnarray}
2 \omega e^{i \alpha} \mathcal{P} w &=& \mathcal{P} \mathcal{G} \mathcal{U}(w,s),\label{Eq:Pw}\\[2mm]
2 \omega e^{i \alpha} (\mathcal{I} - \mathcal{P}) w &=& (\mathcal{I} - \mathcal{P}) \mathcal{G} \mathcal{U}(w,s).
\nonumber
\end{eqnarray}
Dividing the latter equation by the former one (which is a scalar equation!) we obtain
$$
\fr{(\mathcal{I} - \mathcal{P}) w}{\mathcal{P} w} = \fr{(\mathcal{I} - \mathcal{P}) \mathcal{G} \mathcal{U}(w,s)}{\mathcal{P} \mathcal{G} \mathcal{U}(w,s)},
$$
or equivalently
\begin{equation}
( \mathcal{P} \mathcal{G} \mathcal{U}(w,s) ) (\mathcal{I} - \mathcal{P}) w = ( \mathcal{P} w ) (\mathcal{I} - \mathcal{P}) \mathcal{G} \mathcal{U}(w,s).
\label{Eq:SC:Reduced}
\end{equation}
If we assume
\begin{equation}
w(x,t) = p + v(x,t),\quad\mbox{where}\quad p\in(0,\infty)\quad\mbox{and}\quad v(x,t)\in \left\{ u\in X_\mathrm{e}\::\: \mathcal{P} u = 0 \right\},
\label{Ansatz:w}
\end{equation}
then pinning condition~(\ref{Eq:Pinning:1}) is fulfilled automatically and can be discarded.
Moreover, inserting the ansatz~(\ref{Ansatz:w}) into Eq.~(\ref{Eq:SC:Reduced}) and into the pinning condition~(\ref{Eq:Pinning:2}) we obtain
\begin{equation}
( \mathcal{P} \mathcal{G} \mathcal{U}(p + v,s) ) v = p (\mathcal{I} - \mathcal{P}) \mathcal{G} \mathcal{U}(p + v,s),
\label{Eq:SC:Reduced_}
\end{equation}
and
\begin{equation}
\mathrm{Im}\:\left(\int_{-\pi}^\pi dx \int_0^{2\pi} v(x,t) e^{i t} dt \right) = 0.
\label{Eq:Pinning:2_}
\end{equation}
Now instead of solving the system of equations~(\ref{Eq:SC}), (\ref{Eq:Pinning:1}) and~(\ref{Eq:Pinning:2}),
we can look for solutions of the system comprising Eqs.~(\ref{Eq:SC:Reduced_}) and~(\ref{Eq:Pinning:2_}).
In this case, $p > 0$ must be given; then the system of equations~(\ref{Eq:SC:Reduced_}) and~(\ref{Eq:Pinning:2_})
has to be solved with respect to two unknowns: the scalar parameter~$s$ and the even function $v(x,t)$ satisfying $\mathcal{P} v = 0$.
As soon as such solution is found, one can compute the corresponding values of~$\omega$ and~$\alpha$
from Eq.~(\ref{Eq:Pw}) written in the form
$$
2 \omega e^{i \alpha} = \fr{1}{p} \mathcal{P} \mathcal{G} \mathcal{U}(p + v,s).
$$
\subsection{Modified self-consistency equation for cosine kernel~(\ref{Coupling:Cos})}
\label{Eq:SC:Modified:Cos}
In this section we consider a specific example of integral operator~$\mathcal{G}$
and show how system~(\ref{Eq:SC:Reduced_}), (\ref{Eq:Pinning:2_})
can be solved approximately using Galerkin's method.
For this we assume that~$G(x)$ is the cosine kernel~(\ref{Coupling:Cos}).
Given a positive integer~$F$ let us define a set of $8 F + 2$ functions $\psi_k(x,t)$
\begin{eqnarray*}
&& \sqrt{2} \cos x,\quad i \sqrt{2} \cos x,\\[2mm]
&& e^{i m t}, \quad i e^{i m t},\quad e^{i m t} \sqrt{2} \cos x,\quad i e^{i m t} \sqrt{2} \cos x,\quad m=-F,\dots,-1, 1,\dots,F.
\end{eqnarray*}
Note that $m\ne 0$, therefore the constant functions~$\sqrt{2}$ and~$i \sqrt{2}$ are not included in the set.
The ordering of the functions~$\psi_k(x,t)$ is irrelevant, except for one place below where we assume $\psi_8(x,t) = i e^{-i t}$.
It is easy to verify that $\psi_k(x,t)$ satisfy the orthonormality condition $\langle \psi_k , \psi_n \rangle = \delta_{kn}$
with respect to the scalar product
$$
\langle u, v \rangle = \mathrm{Re}\:\left( \fr{1}{(2\pi)^2} \int_{-\pi}^\pi dx \int_0^{2\pi} \overline{u}(x,t) v(x,t) dt\right),
$$
where~$\delta_{kn}$ is the Kronecker delta.
Therefore, functions~$\psi_k(x,t)$ span a finite-dimensional subspace of $\left\{ u\in X_\mathrm{e}\::\: \mathcal{P} u = 0 \right\}$.
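As a quick consistency check, these functions can be assembled on uniform tensor grids
and their orthonormality verified numerically.
A Python sketch (the value of~$F$ and the grid sizes are arbitrary choices):
\begin{verbatim}
import numpy as np

F = 2
x = np.linspace(-np.pi, np.pi, 64, endpoint=False)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, T = np.meshgrid(x, t, indexing="ij")

basis = [np.sqrt(2) * np.cos(X), 1j * np.sqrt(2) * np.cos(X)]
for m in [m for m in range(-F, F + 1) if m != 0]:
    e = np.exp(1j * m * T)
    basis += [e, 1j * e,
              np.sqrt(2) * e * np.cos(X), 1j * np.sqrt(2) * e * np.cos(X)]

def scal(u, v):
    # Re (1/(2 pi)^2) int int conj(u) v dx dt on uniform periodic grids
    return np.real(np.mean(np.conj(u) * v))

gram = np.array([[scal(p, q) for q in basis] for p in basis])
assert len(basis) == 8 * F + 2
assert np.allclose(gram, np.eye(len(basis)))
\end{verbatim}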
We look for an approximate solution to Eq.~(\ref{Eq:SC:Reduced_}) in the form
\begin{equation}
v(x,t) = \sum\limits_{k=1}^{8 F + 2} c_k \psi_k(x,t)
\label{Ansatz:v}
\end{equation}
where~$c_k\in\mathbb{R}$ are unknown coefficients.
Inserting~(\ref{Ansatz:v}) into Eq.~(\ref{Eq:SC:Reduced_}) we write $8 F + 2$ orthogonality conditions
$$
\left\langle \psi_n, ( \mathcal{P} \mathcal{G} \mathcal{U}(p + v,s) ) v \right\rangle = p \left\langle \psi_n, (\mathcal{I} - \mathcal{P}) \mathcal{G} \mathcal{U}(p + v,s) \right\rangle,\qquad n=1,\dots,8 F + 2.
$$
Since $\mathcal{P} \mathcal{G} = \mathcal{P}$ holds for the cosine kernel~$G(x)$, the above system can be written as follows
\begin{equation}
\sum\limits_{k=1}^{8 F + 2} \left\langle \psi_n, \psi_k \mathcal{P} \mathcal{U}\left(p + \sum\limits_{m=1}^{8 F + 2} c_m \psi_m,s\right) \right\rangle c_k
= p \left\langle \psi_n, (\mathcal{I} - \mathcal{P}) \mathcal{G} \mathcal{U}\left(p + \sum\limits_{m=1}^{8 F + 2} c_m \psi_m,s\right) \right\rangle.
\label{System:Galerkin}
\end{equation}
To account for the pinning condition~(\ref{Eq:Pinning:2_}) we assume $\psi_8(x,t) = i e^{-i t}$, then
$$
\mathrm{Im}\:\left(\int_{-\pi}^\pi dx \int_0^{2\pi} v(x,t) e^{i t} dt \right) = \mathrm{Re}\:\left(\int_{-\pi}^\pi dx \int_0^{2\pi} v(x,t) (-i) e^{i t} dt \right) = (2\pi)^2 \langle \psi_8, v \rangle.
$$
This means $c_8 = 0$. Inserting this identity into Eq.~(\ref{System:Galerkin})
we end up with a system of $8 F + 2$ nonlinear equations with respect to $8 F + 2$ real unknowns
(these are $8 F + 1$ coefficients~$c_k$ with $k\ne 8$ and the parameter~$s$).
The system~(\ref{System:Galerkin}) can be solved by Newton's method,
using a semi-analytic Jacobian expression
involving the derivative representations obtained in Section~\ref{Sec:Operator:U:Derivatives}.
Note that breathing chimera states typically have a very fine spatial structure;
therefore, to approximate the integrals in~(\ref{System:Galerkin}) accurately,
one needs either a nonuniform grid with a moderate number of nodes in the $x$-direction,
or a uniform grid with a much larger number of nodes.
For example, all numerical results reported in Section~\ref{Sec:Example}
were obtained using a nonuniform grid with ca. $10^3$ discretization points in the $x$-direction
(the distribution of points, in this case, was $10$ to $100$ times denser
in the vicinity of the coherence-incoherence boundaries than in the other regions of the chimera state).
On a uniform grid, the same accuracy would be achieved only with at least~$10^5$ discretization points,
which would lead to extremely long computation times.
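One possible construction of such a nonuniform grid distributes the nodes
according to a density amplified near prescribed boundary positions.
A Python sketch (the boundary locations, widths and the amplification factor below are hypothetical):
\begin{verbatim}
import numpy as np

def clustered_grid(boundaries, n=1000, width=0.05, boost=50.0):
    # node density rho(x) >= 1, amplified near each prescribed boundary
    xi = np.linspace(-np.pi, np.pi, 20 * n)
    rho = np.ones_like(xi)
    for xb in boundaries:
        rho += boost * np.exp(-((xi - xb) / width) ** 2)
    cdf = np.cumsum(rho)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalize to [0, 1]
    # invert the cumulative distribution to obtain clustered nodes
    return np.interp(np.linspace(0, 1, n), cdf, xi)
\end{verbatim}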
\subsection{Formulas for global order parameter and effective frequencies}
\label{Sec:Formulas:Z:Omega_eff}
Every solution~$z(x,t)$ to Eq.~(\ref{Eq:OA}) corresponds to a probability density $f(\theta,x,t)$ representing
a specific statistical equilibrium of the large system of coupled oscillators~(\ref{Eq:Oscillators}), see~\cite{Ome2013}.
More precisely, the function $f(\theta,x,t)$ yields the probability of finding an oscillator with phase~$\theta$ at position~$x$ at time~$t$
and has the characteristic property
\begin{equation}
\int_0^{2\pi} f(\theta,x,t) e^{i \theta} d\theta = z(x,t).
\label{Integral:z}
\end{equation}
Property~(\ref{Integral:z}) can be used to derive formulas for the global order parameter~(\ref{Def:Z})
as well as for the effective frequencies of oscillators~(\ref{Def:Omega_eff}).
Indeed, formula~(\ref{Def:Z}) determines the average of~$e^{i \theta_k(t)}$ over all oscillator indices~$k$.
For large enough~$N$ this average can also be computed as follows
\begin{equation}
Z(t) = \fr{1}{2\pi} \int_{-\pi}^\pi dx \int_0^{2\pi} f(\theta,x,t) e^{i\theta} d\theta = \fr{1}{2\pi} \int_{-\pi}^\pi z(x,t) dx.
\label{Z:Continuous}
\end{equation}
To obtain a similar formula for~$\Omega_{\mathrm{eff},k}$ we write Eq.~(\ref{Eq:Oscillators}) in the form
$$
\df{\theta_k}{t} = - \mathrm{Im}\:\left( e^{i \alpha} \overline{W}_k(t) e^{i \theta_k(t)} \right)
\quad\mbox{where}\quad
W_k(t) = \frac{2 \pi}{N} \sum\limits_{j=1}^N G\left( \fr{2\pi (k - j)}{N} \right) e^{i \theta_j(t)}.
$$
Recall that $x_k = -\pi + 2\pi k / N$ is the spatial position of the $k$th oscillator,
therefore for infinitely large~$N$ points~$x_k$ densely fill the interval~$[-\pi,\pi]$.
In this case instead of the discrete set of functions~$W_k(t)$ we can consider
a function $W(x,t)$ depending on the continuous argument $x\in[-\pi,\pi]$.
Because of the property~(\ref{Integral:z})
we have $W(x,t) = (\mathcal{G} z)(x,t)$ and
\begin{eqnarray}
\Omega_\mathrm{eff}(x) &=& - \lim\limits_{\tau\to\infty} \fr{1}{\tau} \int_0^\tau \mathrm{Im}\:\left( e^{i \alpha} \overline{W}(x,t) \int_0^{2\pi} f(\theta,x,t) e^{i \theta} d\theta \right) dt \nonumber\\[2mm]
&=& - \mathrm{Im}\:\left( e^{i \alpha} \lim\limits_{\tau\to\infty} \fr{1}{\tau} \int_0^\tau z(x,t) (\mathcal{G} \overline{z})(x,t) dt \right).
\label{Omega_eff:Continuous}
\end{eqnarray}
Notice that formulas~(\ref{Z:Continuous}) and~(\ref{Omega_eff:Continuous}) are the continuum limit counterparts
of formulas~(\ref{Def:Z}) and~(\ref{Def:Omega_eff}) for any solution~$z(x,t)$ to Eq.~(\ref{Eq:OA}).
If the solution~$z(x,t)$ is taken in the form~(\ref{Ansatz:QP}), then we obtain the following proposition.
\begin{proposition}
Let the triple $( w(x,t), \omega, s)$ be a solution to the self-consistency equation~(\ref{Eq:SC})
and let $u(x,t) = \mathcal{U}( w(x,t), s )$ and $a(x,t) = u(x,\omega t)$.
Then the following formulas hold
$$
|Z(t)| = \fr{1}{2\pi} \left| \int_{-\pi}^\pi a(x,t) dx \right| = \fr{1}{2\pi} \left| \int_{-\pi}^\pi u(x,\omega t) dx \right|
$$
and
$$
\Omega_\mathrm{eff}(x) = - \mathrm{Im}\left( \fr{1}{T} \int_0^T e^{i\alpha} a(x,t) (\mathcal{G} \overline{a})(x,t) dt \right)
= - 2 \omega\: \mathrm{Im}\left( \fr{1}{2\pi} \int_0^{2\pi} u(x,t) \overline{w}(x,t) dt \right).
$$
\label{Proposition:Z:Omega_eff}
\end{proposition}
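On uniform periodic grids, both formulas of Proposition~\ref{Proposition:Z:Omega_eff}
reduce to simple array averages.
A Python sketch (the array layout is an assumption):
\begin{verbatim}
import numpy as np

def order_parameter_modulus(u):
    # u[i, j] = u(x_j, t_i) on a uniform periodic x-grid
    return np.abs(u.mean(axis=1))    # |Z| sampled at times t_i / omega

def effective_frequencies(u, w, omega):
    # -2*omega*Im( (1/(2 pi)) int_0^{2 pi} u(x,t) conj(w(x,t)) dt )
    return -2 * omega * np.imag((u * np.conj(w)).mean(axis=0))
\end{verbatim}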
\subsection{Extraction of the parameters of breathing chimera states}
\label{Sec:Extraction}
Suppose that we observe a breathing chimera state in system~(\ref{Eq:Oscillators})
and we wish to extract from the observation its primary~$\Omega$ and secondary~$\omega$ frequencies
as well as the amplitude~$a(x,t)$ of the corresponding solution~(\ref{Ansatz:QP}) to Eq.~(\ref{Eq:OA}).
This can be done in the following way.
Let~$R_\mathrm{min}$ and~$R_\mathrm{max}$ be the minimal and the maximal values of~$|Z_N(t)|$,
see~(\ref{Def:Z}), over a sufficiently long observation time interval for a breathing chimera state.
If~$t_k$ are consecutive time moments such that the graph of~$|Z_N(t)|$
crosses the mid-level $( R_\mathrm{min} + R_\mathrm{max} ) / 2$ from below,
then the period~$T$ of the breathing chimera state can be computed
as the average of all differences $t_k - t_{k-1}$.
On the other hand, if we compute the variation of the complex argument of~$Z_N(t)$
from the time moment~$t_{k-1}$ till~$t_k$, then the quotient
$$
\fr{1}{t_k - t_{k-1}} \int_{t_{k-1}}^{t_k} d \arg Z_N(t)
$$
yields an approximate value of the primary cyclic frequency~$\Omega$ in~(\ref{Ansatz:QP}).
Of course, the accuracy of this approximation improves
if the latter quotient is averaged over all possible indices~$k$.
\begin{remark}
Note that the above method for determining the breathing period $T$
relies on the assumption that the $|Z_N(t)|$-graph crosses
the mid-level $( R_\mathrm{min} + R_\mathrm{max} ) / 2$
only twice per period (once from below and once from above).
If this is not the case, then one needs to select another mid-level value
in the interval $(R_\mathrm{min},R_\mathrm{max})$
that guarantees exactly two intersections per period.
\end{remark}
As soon as the period~$T$ and the primary frequency~$\Omega$ are known
we can find approximate values of the function~$a(x,t)$ in~(\ref{Ansatz:QP}) by
\begin{equation}
a(x_k,t) = \fr{1}{2 M + 1} \sum\limits_{j=k-M}^{k+M} e^{i ( \theta_j(t) - \Omega t )},
\label{Eq:a:approx}
\end{equation}
where the indices~$j$ are taken modulo~$N$,
$x_k = -\pi + 2\pi k/N$ is the scaled position of the $k$th oscillator
and $M = [\sqrt{N}/2]$ is the largest integer that does not exceed~$\sqrt{N}/2$.
Finally, using formulas~(\ref{Def:omega:s:w}) we can compute the secondary cyclic frequency~$\omega$
as well as the ratio~$s$ and the function~$w(x,t)$ appearing in the self-consistency equation~(\ref{Eq:SC}).
\begin{remark}
Note that using the method described above, we automatically obtain
the function~$a(x,t)$ and cyclic frequency~$\Omega$
which satisfy the calibration condition from Remark~\ref{Remark:Calibration}.
\end{remark}
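The extraction procedure described in this section is straightforward to implement.
A Python sketch (the sampled order parameter Z and the time grid t are assumed inputs;
at least two upward crossings are required):
\begin{verbatim}
import numpy as np

def extract_T_Omega(Z, t):
    R = np.abs(Z)
    mid = (R.min() + R.max()) / 2
    # indices of upward crossings of the mid-level
    up = np.where((R[:-1] < mid) & (R[1:] >= mid))[0]
    tk = t[up]
    T = np.mean(np.diff(tk))         # breathing period
    phi = np.unwrap(np.angle(Z))     # continuous branch of arg Z_N(t)
    # average argument variation over all full breathing periods
    Omega = (phi[up[-1]] - phi[up[0]]) / (tk[-1] - tk[0])
    return T, Omega
\end{verbatim}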
\section{Stability analysis}
\label{Sec:Stability}
Suppose that Eq.~(\ref{Eq:OA}) has a solution of the form
\begin{equation}
z = a(x,t) e^{i \Omega t},
\label{Ansatz:0}
\end{equation}
where $a(x,t)$ is $T$-periodic with respect to its second argument.
To analyze the stability of this solution we insert the ansatz
$$
z = ( a(x,t) + v(x,t) ) e^{i \Omega t}
$$
into Eq.~(\ref{Eq:OA}) and linearize it with respect to the small perturbation $v(x,t)$.
As a result, we obtain a linear equation with $T$-periodic coefficients
\begin{equation}
\df{v}{t} = - \eta(x,t) v + \fr{1}{2} e^{-i \alpha} \mathcal{G} v - \fr{1}{2} e^{i \alpha} a^2(x,t) \mathcal{G} \overline{v},
\label{Eq:Linear}
\end{equation}
where
\begin{equation}
\eta(x,t) = i \Omega + e^{i \alpha} a(x,t) \mathcal{G} \overline{a}.
\label{Def:N}
\end{equation}
Along with Eq.~(\ref{Eq:Linear}) it is convenient to consider its complex-conjugate version
$$
\df{\overline{v}}{t} = - \overline{\eta}(x,t) \overline{v}
+ \fr{1}{2} e^{i \alpha} \mathcal{G} \overline{v} - \fr{1}{2} e^{-i \alpha} \overline{a}^2(x,t) \mathcal{G} v.
$$
These two equations can be written in the operator form
\begin{equation}
\df{V}{t} = \mathcal{A}(t) V + \mathcal{B}(t) V,
\label{Eq:Operator}
\end{equation}
where $V(t) = ( v_1(t), v_2(t) )^\mathrm{T}$ is a function with values in $C_\mathrm{per}([-\pi,\pi] ; \mathbb{C}^2)$, and
$$
\mathcal{A}(t) V = \left(
\begin{array}{ccc}
- \eta(\cdot,t) & & 0 \\[2mm]
0 & & - \overline{\eta}(\cdot,t)
\end{array}
\right)
\left(
\begin{array}{c}
v_1 \\[2mm]
v_2
\end{array}
\right),
$$
and
$$
\mathcal{B}(t) V = \fr{1}{2}
\left(
\begin{array}{ccc}
e^{-i \alpha} & & - e^{i \alpha} a^2(\cdot,t) \\[2mm]
- e^{-i \alpha} \overline{a}^2(\cdot,t) & & e^{i \alpha}
\end{array}
\right)
\left(
\begin{array}{c}
\mathcal{G} v_1 \\[2mm]
\mathcal{G} v_2
\end{array}
\right).
$$
For every fixed~$t$ the operators~$\mathcal{A}(t)$ and~$\mathcal{B}(t)$
are linear operators from $C_\mathrm{per}([-\pi,\pi] ; \mathbb{C}^2)$ into itself.
Moreover, they both depend continuously on~$t$
and thus their norms are uniformly bounded for all $t\in[0,T]$.
Our further consideration is concerned with the stability of the zero solution to Eq.~(\ref{Eq:Operator}).
Therefore, we are dealing only with the linear stability of the solution~(\ref{Ansatz:0}).
We apply the methods of qualitative analysis of differential equations in Banach spaces~\cite{DaleckiiKrein}.
Since~$\mathcal{A}(t)$ and~$\mathcal{B}(t)$ are uniformly bounded operators,
Eq.~(\ref{Eq:Operator}) generates a well-defined evolution operator
(the time-ordered operator exponential)
$$
\mathcal{E}(t) = \mathcal{T} \exp\left( \int_0^t ( \mathcal{A}(t') + \mathcal{B}(t') ) dt' \right),
$$
where~$\mathcal{T}$ denotes chronological ordering
(the plain exponential would suffice only if the operators $\mathcal{A}(t') + \mathcal{B}(t')$ commuted for different~$t'$).
Then the solution of Eq.~(\ref{Eq:Operator}) with the initial condition $V(0) = V_0$
is given by the formula $V(t) = \mathcal{E}(t) V_0$.
Recalling that~$\mathcal{A}(t)$ and~$\mathcal{B}(t)$ are $T$-periodic,
we conclude~\cite[Chapter~V]{DaleckiiKrein} that the stability of the zero solution to Eq.~(\ref{Eq:Operator})
is determined by the spectrum of the {\it monodromy operator}~$\mathcal{E}(T)$.
Roughly speaking, the zero solution to Eq.~(\ref{Eq:Operator}) can be stable only if
the spectrum of the operator~$\mathcal{E}(T)$ lies entirely in the closed unit disc of the complex plane.
Otherwise, this solution is unstable.
The main problem in the application of the above stability criterion
is concerned with the fact that the monodromy operator~$\mathcal{E}(T)$
acts in an infinite-dimensional functional space.
Therefore, its spectrum consists of infinitely many points,
which can be arbitrarily distributed in the complex plane.
Below we use the explicit form of the operators~$\mathcal{A}(t)$ and~$\mathcal{B}(t)$
and show the following properties of the monodromy operator~$\mathcal{E}(T)$:
\smallskip
(i) The spectrum of the operator~$\mathcal{E}(T)$ is bounded
and symmetric with respect to the real axis of the complex plane.
It consists of two qualitatively different parts:
essential spectrum~$\sigma_\mathrm{ess}$ and discrete spectrum~$\sigma_\mathrm{disc}$.
\smallskip
(ii) The essential spectrum~$\sigma_\mathrm{ess}$ is given by the formula
\begin{equation}
\sigma_\mathrm{ess} = \left\{ \exp\left( - \int_0^T \eta(x,t) dt \right) \::\: x\in[-\pi,\pi] \right\} \cup \{ \mathrm{c. c.} \}.
\label{Eq:Ess}
\end{equation}
\smallskip
(iii) The discrete spectrum~$\sigma_\mathrm{disc}$ comprises finitely many isolated eigenvalues~$\mu$,
which can be found using the formula $\mu = e^{\lambda T}$
where~$\lambda$ are roots of a characteristic equation specified below.
\smallskip
\begin{proposition}
The monodromy operator~$\mathcal{E}(T)$ can be written as a sum
\begin{equation}
\mathcal{E}(T) = \mathcal{E}_0(T) + \mathcal{K},
\label{Decomposition:E}
\end{equation}
where~$\mathcal{E}_0(T)$ is a multiplication operator of the form
$$
\mathcal{E}_0(T) = \exp\left( \int_0^T \mathcal{A}(t) dt \right),
$$
and~$\mathcal{K}$ is a compact operator from $C_\mathrm{per}([-\pi,\pi] ; \mathbb{C}^2)$ into itself.
\end{proposition}
{\bf Proof:}
Every function~$V(t)$ satisfying Eq.~(\ref{Eq:Operator})
and the initial condition $V(0) = V_0$ solves also integral equation
\begin{equation}
V(t) = \mathcal{E}_0(t) V_0 + \int_0^t \mathcal{E}_0(t) \mathcal{E}_0^{-1}(t') \mathcal{B}(t') V(t') dt',
\label{Eq:Volterra}
\end{equation}
where
$$
\mathcal{E}_0(t) = \exp\left( \int_0^t \mathcal{A}(t') dt' \right).
$$
On the other hand, every solution to Eq.~(\ref{Eq:Volterra}) can be decomposed into a sum
\begin{equation}
V(t) = \mathcal{E}_0(t) V_0 + W(t),
\label{Decomposition:V}
\end{equation}
where~$W(t)$ is a solution to the integral equation
\begin{equation}
W(t) = \int_0^t \mathcal{E}_0(t) \mathcal{E}_0^{-1}(t') \mathcal{B}(t') \mathcal{E}_0(t') V_0 dt' + \int_0^t \mathcal{E}_0(t) \mathcal{E}_0^{-1}(t') \mathcal{B}(t') W(t') dt'.
\label{Eq:Volterra_}
\end{equation}
The Volterra integral equation~(\ref{Eq:Volterra_}) has a unique solution~$W(t)$ that depends continuously on the initial value~$V_0$.
Moreover, the mapping $V_0 \mapsto W(T)$ is a compact operator
from $C_\mathrm{per}([-\pi,\pi] ; \mathbb{C}^2)$ into itself
(recall the compactness of the operator~$\mathcal{G}$
involved in the definition of the operator~$\mathcal{B}(t)$).
This fact, along with formula~(\ref{Decomposition:V}), implies
that the monodromy operator~$\mathcal{E}(T)$
is the sum of the multiplication operator~$\mathcal{E}_0(T)$
and the compact operator $\mathcal{K}\::\: V_0 \mapsto W(T)$.~\hfill \rule{2.3mm}{2.3mm}
The spectrum of the monodromy operator~$\mathcal{E}(T)$ consists of all numbers $\mu\in\mathbb{C}$
such that the difference operator $\mathcal{E}(T) - \mu \mathcal{I}$ is not invertible.
Because of the definition of~$\mathcal{A}(t)$ and~$\mathcal{B}(t)$
this spectrum is symmetric with respect to the real axis.
Moreover, since~$\mathcal{A}(t)$ and~$\mathcal{B}(t)$ are uniformly bounded for $t\in[0,T]$,
the monodromy operator~$\mathcal{E}(T)$ is bounded too,
and hence its spectrum lies in a circle of finite radius of the complex plane.
Other spectral properties of~$\mathcal{E}(T)$ follow from the decomposition formula~(\ref{Decomposition:E}).
Indeed, formula~(\ref{Decomposition:E}) implies~\cite{Kato} that
the essential spectrum of the monodromy operator~$\mathcal{E}(T)$
coincides with the spectrum of the multiplication operator~$\mathcal{E}_0(T)$.
Using the definition of~$\mathcal{A}(t)$ we obtain
$$
\mathcal{E}_0(t) = \left(
\begin{array}{ccc}
\Phi(x,t) & & 0 \\[2mm]
0 & & \overline{\Phi}(x,t)
\end{array}
\right),
$$
where
$$
\Phi(x,t) = \exp\left( - \int_0^t \eta(x,t') dt' \right).
$$
This allows us to calculate the spectrum of~$\mathcal{E}_0(T)$ explicitly
and obtain formula~(\ref{Eq:Ess}) for~$\sigma_\mathrm{ess}$.
\begin{remark}
Suppose that we consider a solution~(\ref{Ansatz:0}) to Eq.~(\ref{Eq:OA})
with the amplitude~$a(x,t)$, primary frequency~$\Omega$ and secondary frequency~$\omega$,
which satisfy the self-consistency equation~(\ref{Eq:SC}) with~$w(x,t)$ and~$s$ defined by~(\ref{Def:omega:s:w}).
Then inserting~(\ref{Scaling:omega:u}) and~(\ref{Scaling:s:w}) into~(\ref{Def:N}) we obtain
$$
\exp\left( - \int_0^T \eta(x,t) dt \right) = \exp\left( - \int_0^{2\pi} ( i s + 2 \overline{w}(x,t) u(x,t) ) dt \right)
$$
and therefore formula~(\ref{Eq:Ess}) and Proposition~\ref{Proposition:Dichotomy} imply
that every~$\mu\in\sigma_\mathrm{ess}$ lies either on the boundary of the unit circle~$|\mu| = 1$
or on the interval~$(0,1]$ of the real axis.
Hence the essential spectrum~$\sigma_\mathrm{ess}$ cannot be relevant to any instability
of the solution~(\ref{Ansatz:0}) obtained from the self-consistency equation~(\ref{Eq:SC}).
Note that, in general, Eq.~(\ref{Eq:OA}) may have solutions of the form~(\ref{Ansatz:0})
with unstable essential spectrum, but such solutions do not satisfy the self-consistency equation~(\ref{Eq:SC}).
\label{Remark:EssentialSpectrum}
\end{remark}
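Given a solution of the self-consistency equation sampled on discrete grids,
the above representation combined with formula~(\ref{Eq:Ess}) yields~$\sigma_\mathrm{ess}$
by one quadrature per grid point.
A Python sketch (the array layout is an assumption):
\begin{verbatim}
import numpy as np

def essential_spectrum(u, w, s):
    # u[i, j] = u(x_j, t_i), w likewise, uniform t-grid over [0, 2*pi]
    integrand = 1j * s + 2 * np.conj(w) * u
    mu = np.exp(-2 * np.pi * integrand.mean(axis=0))  # one value per x_j
    return np.concatenate([mu, np.conj(mu)])          # append c.c. branch
\end{verbatim}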
Formula~(\ref{Decomposition:E}) also implies
$$
\mathcal{E}(T) - \mu \mathcal{I} = ( \mathcal{E}_0(T) - \mu \mathcal{I} ) + \mathcal{K}.
$$
For every $\mu\notin\sigma_\mathrm{ess}$ the right-hand side of this formula
is a sum of the invertible operator $\mathcal{E}_0(T) - \mu \mathcal{I}$
and the compact operator~$\mathcal{K}$, hence it defines a Fredholm operator of index zero.
This means that apart from the essential spectrum~$\sigma_\mathrm{ess}$
the monodromy operator~$\mathcal{E}(T)$ can have a discrete spectrum~$\sigma_\mathrm{disc}$
consisting of eigenvalues of finite multiplicity.
Since the entire spectrum~$\sigma_\mathrm{ess}\cup\sigma_\mathrm{disc}$ is confined
in a bounded region of the complex plane, there can be at most finitely many such eigenvalues.
These eigenvalues can be computed only numerically, and in the following we outline how this can be done.
\begin{proposition}
Let $\lambda$ be a complex number such that the equation
\begin{equation}
\df{V}{t} = \mathcal{A}(t) V + \mathcal{B}(t) V - \lambda V
\label{Eq:lambda}
\end{equation}
has a nontrivial $T$-periodic solution, then the number $\mu = e^{\lambda T}$
is an eigenvalue of the monodromy operator~$\mathcal{E}(T)$.
Conversely, for every nonzero $\mu\in\sigma_\mathrm{disc}$ there exists
a number~$\lambda\in\mathbb{C}$ such that Eq.~(\ref{Eq:lambda}) has a nontrivial $T$-periodic solution
and $\mu = e^{\lambda T}$.
\label{Proposition:Equivalence}
\end{proposition}
{\bf Proof:} Since the operator $\lambda \mathcal{I}$ commutes with $\mathcal{A}(t) + \mathcal{B}(t)$,
the monodromy operator of Eq.~(\ref{Eq:lambda}) equals $\mathcal{E}(T) e^{-\lambda T}$.
Therefore, Eq.~(\ref{Eq:lambda}) has a nontrivial $T$-periodic solution
if and only if there exists a nonzero~$V_0$ such that
$$
\mathcal{E}(T) e^{-\lambda T} V_0 = V_0,
\qquad\mbox{or equivalently}\qquad
\mathcal{E}(T) V_0 = e^{\lambda T} V_0.
$$
The latter identity holds if and only if $\mu = e^{\lambda T}$ is an eigenvalue
of the monodromy operator~$\mathcal{E}(T)$, which ends the proof.~\hfill \rule{2.3mm}{2.3mm}
\begin{remark}
Notice that the formula $\mu = e^{\lambda T}$ does not define a one-to-one relation between~$\lambda$ and~$\mu$.
For every~$\lambda\in\mathbb{C}$ it yields one value~$\mu$.
In contrast, given a nonzero~$\mu$ one obtains infinitely many corresponding values~$\lambda$,
namely $\lambda = (\log \mu + 2 \pi k i) / T$ with $k\in\mathbb{Z}$.
\label{Remark:Spectrum}
\end{remark}
\begin{proposition}
Let $\lambda$ be a complex number such that $e^{\lambda T}\notin\sigma_\mathrm{ess}$,
then for every continuous $T$-periodic function~$F(t)$ there exists a unique $T$-periodic solution of equation
$$
\df{V}{t} = ( \mathcal{A}(t) - \lambda \mathcal{I} ) V + F(t),
$$
which is given by
$$
V(t) = \int_0^T \mathcal{D}_\lambda(t,t') F(t') dt'
$$
where
$$
\mathcal{D}_\lambda(t,t') = ( \mathcal{I} - \mathcal{E}_\lambda(T) )^{-1} ( \mathcal{E}_\lambda(T) + \Theta(t - t') ( \mathcal{I} - \mathcal{E}_\lambda(T) ) ) \mathcal{E}_\lambda(t) \mathcal{E}_\lambda^{-1}(t')
$$
and
$$
\mathcal{E}_\lambda(t) = \exp\left( \int_0^t ( \mathcal{A}(t') - \lambda \mathcal{I} ) dt' \right) = \mathcal{E}_0(t) e^{-\lambda t}.
$$
\label{Proposition:Eq:Periodic}
\end{proposition}
{\bf Proof:} This assertion can be proved by analogy with~\cite[Proposition~A.1]{Ome2019}.~\hfill \rule{2.3mm}{2.3mm}
Proposition~\ref{Proposition:Eq:Periodic} implies that for every $\lambda\in\mathbb{C}$
such that $e^{\lambda T}\notin\sigma_\mathrm{ess}$ all $T$-periodic solutions~$V(t)$ of Eq.~(\ref{Eq:lambda})
satisfy also the integral equation
\begin{equation}
V(t) = \int_0^T \mathcal{D}_\lambda(t,t') \mathcal{B}(t') V(t') dt'.
\label{Eq:Integral:V}
\end{equation}
This fact can be used to compute the discrete spectrum~$\sigma_\mathrm{disc}$ numerically.
To this end let us choose $C > 0$ and consider Eq.~(\ref{Eq:Integral:V}) in the rectangular region
$$
\Pi = \{ \lambda\in\mathbb{C}\::\: | \mathrm{Re}\: \lambda | \le C,\: | \mathrm{Im}\: \lambda | \le \pi / T \}.
$$
If we find all $\lambda\in\Pi$ such that Eq.~(\ref{Eq:Integral:V}) has a bounded nontrivial solution~$V(t)$,
then according to Proposition~\ref{Proposition:Equivalence} and Remark~\ref{Remark:Spectrum}
we also find all eigenvalues $\mu = e^{\lambda T}$ of the monodromy operator~$\mathcal{E}(T)$
lying in the annulus $e^{- C T} \le |\mu| \le e^{C T}$.
Since the spectrum of the monodromy operator~$\mathcal{E}(T)$ is bounded,
this ensures that for sufficiently large~$C$ we determine all eigenvalues~$\mu\in\sigma_\mathrm{disc}$
relevant to the stability of the solution~(\ref{Ansatz:0}).
Indeed, considering Eq.~(\ref{Eq:Integral:V}) for $\lambda\in\Pi$
we may overlook eigenvalues~$\mu$ in the disc $|\mu| \le e^{- C T}$.
However, all these eigenvalues satisfy $|\mu| < 1$ and therefore have no impact
on the stability of the solution~(\ref{Ansatz:0}).
\subsection{Computation of the discrete spectrum}
\label{Sec:DiscreteSpectrum}
Recalling the definitions of~$\mathcal{B}(t)$ and~$\mathcal{D}_\lambda(t,t')$
it is easy to see that Eq.~(\ref{Eq:Integral:V}) is a homogeneous Fredholm integral equation.
In general, it cannot be solved explicitly, but its solutions can be found approximately using Galerkin's method.
For this one needs to choose a set of linearly independent functions~$\varphi_k(x,t)$, $k=1,\dots,K$,
which are $2\pi$-periodic with respect to~$x$ and $T$-periodic with respect to~$t$.
Without loss of generality it can be assumed that these functions are orthonormalized with respect to the scalar product
$$
\llangle v_1, v_2 \rrangle = \fr{1}{2\pi T} \int_0^T dt \int_{-\pi}^\pi \overline{v}_1(x,t) v_2(x,t) dx
$$
such that $\llangle \varphi_j, \varphi_k \rrangle = \delta_{jk}$.
Then one looks for an approximate solution to Eq.~(\ref{Eq:Integral:V}) in the form
$$
V(t) = \sum\limits_{k=1}^K V_k \varphi_k(\cdot,t),\quad\mbox{where}\quad V_k\in\mathbb{C}^2.
$$
Inserting this ansatz into Eq.~(\ref{Eq:Integral:V}) and writing the projected problem, we obtain
$$
V_n = \sum\limits_{k=1}^K \llangle[\Bigg] \varphi_n, \int_0^T \mathcal{D}_\lambda(t,t') \mathcal{B}(t') \varphi_k(\cdot,t') dt' \rrangle[\Bigg] V_k,\qquad n=1,\dots,K.
$$
This is a system of linear algebraic equations for the $K$ two-dimensional coefficients~$V_k$.
Obviously, it has a nontrivial solution if and only if~$\lambda$ satisfies the characteristic equation
\begin{equation}
\det\left( \mathbf{M}_{2K}(\lambda) - \mathbf{I}_{2K} \right) = 0,
\label{Eq:Characteristic}
\end{equation}
where
$$
\mathbf{M}_{2K}(\lambda) = \left(
\begin{array}{ccc}
\mathbf{B}_{11}(\lambda) & \dots & \mathbf{B}_{1K}(\lambda) \\[2mm]
\vdots & \ddots & \vdots \\[2mm]
\mathbf{B}_{K1}(\lambda) & \dots & \mathbf{B}_{KK}(\lambda)
\end{array}
\right)
$$
is a block matrix with the $(2\times 2)$-matrix entries
\begin{equation}
\mathbf{B}_{nk}(\lambda) = \llangle[\Bigg] \varphi_n, \int_0^T \mathcal{D}_\lambda(t,t') \mathcal{B}(t') \varphi_k(\cdot,t') dt' \rrangle[\Bigg].
\label{Def:Matrix:B}
\end{equation}
Solving Eq.~(\ref{Eq:Characteristic}) one obtains approximate eigenvalues~$\lambda$ of Eq.~(\ref{Eq:Integral:V})
and hence the corresponding approximate eigenvalues $\mu = e^{\lambda T}$ of the monodromy operator~$\mathcal{E}(T)$.
Taking into account that the functions~$\varphi_k(x,t)$ appearing in the definition of matrix~$\mathbf{M}_{2K}(\lambda)$
must be $2\pi$-periodic with respect to~$x$ and $T$-periodic with respect to~$t$,
it is convenient to choose them in the form of spatiotemporal Fourier modes.
More precisely, let $K_x$ and $K_t$ be two positive integers, then we assume
$$
\varphi_{nm}(x,t) = e^{i n x + 2 \pi i m t / T},\quad n=-K_x,\dots,K_x,\quad m=-K_t,\dots,K_t.
$$
Thus we obtain a set of $K = (2 K_x + 1) (2 K_t + 1)$ functions such that $\llangle \varphi_{nm}, \varphi_{n'm'} \rrangle = \delta_{nn'}\delta_{mm'}$.
Notice that the functions~$\varphi_{nm}(x,t)$ have an important property.
If the coupling kernel~$G(x)$ has the Fourier series representation
$$
G(x) = g_0 + \sum\limits_{k=1}^\infty 2 g_k \cos(k x)\quad\mbox{with}\quad
g_k = \fr{1}{2\pi} \int_{-\pi}^\pi G(x) e^{-i k x} dx = \fr{1}{2\pi} \int_{-\pi}^\pi G(x) \cos(k x) dx,
$$
then for all integer indices~$n$ and~$m$ we have
$$
\mathcal{G} \varphi_{nm} = 2\pi g_n \varphi_{nm}.
$$
This implies
$$
\mathcal{B}(t) \varphi_{nm} = 2\pi g_n \mathcal{B}_0(t) \varphi_{nm},\quad\mbox{where}\quad
\mathcal{B}_0(t) = \fr{1}{2}
\left(
\begin{array}{ccc}
e^{-i \alpha} & & - e^{i \alpha} a^2(\cdot,t) \\[2mm]
- e^{-i \alpha} \overline{a}^2(\cdot,t) & & e^{i \alpha}
\end{array}
\right),
$$
therefore, in the case of the functions~$\varphi_{nm}(x,t)$, formula~(\ref{Def:Matrix:B}) can be written as
\begin{equation}
\mathbf{B}_{nm n'm'}(\lambda) = \fr{g_{n'}}{2\pi T} \int_{-\pi}^\pi dx \int_0^T dt \int_0^T \overline{\varphi}_{nm}(x,t) \mathcal{D}_\lambda(t,t') \mathcal{B}_0(t') \varphi_{n'm'}(x,t') dt'.
\label{Def:Matrix:B_}
\end{equation}
The main advantage of the latter expression is that it does not contain any operators, but only explicitly known functions.
More precisely, the term $\mathcal{D}_\lambda(t,t')$ is a $(2\times 2)$-matrix
with entries depending on~$x$, $t$, $t'$ and~$\lambda$,
while $\mathcal{B}_0(t')$ is a $(2\times 2)$-matrix with entries depending on~$x$ and~$t'$.
Importantly, the triple integration in~(\ref{Def:Matrix:B_}) must be carried out
for each entry of the resulting product matrix separately.
As soon as all matrices $\mathbf{B}_{nm n'm'}(\lambda)$ are determined
they must be combined into the block matrix~$\mathbf{M}_{2K}(\lambda)$
and then the left-hand side of the characteristic equation~(\ref{Eq:Characteristic}) can be computed.
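In practice the left-hand side of Eq.~(\ref{Eq:Characteristic}) can be tabulated
over the rectangle~$\Pi$ and its zeros located numerically. The following minimal
Python sketch illustrates this step only; the routine \texttt{B\_block}, which must
return the $(2\times 2)$-blocks~(\ref{Def:Matrix:B_}) obtained by the triple
quadrature, is assumed to be supplied by the user (the flat indices $j,k$ enumerate
the $K$ pairs $(n,m)$), and the sketch is not meant as a definitive implementation.
\begin{verbatim}
# Minimal sketch: scan the characteristic determinant over Pi.
# Assumption: B_block(lam, j, k) returns the (2x2) complex block
# B_{jk}(lambda), precomputed elsewhere by the triple quadrature.
import numpy as np

def char_det(lam, B_block, K):
    """det( M_2K(lambda) - I_2K )."""
    M = np.zeros((2 * K, 2 * K), dtype=complex)
    for j in range(K):
        for k in range(K):
            M[2*j:2*j+2, 2*k:2*k+2] = B_block(lam, j, k)
    return np.linalg.det(M - np.eye(2 * K))

def scan_Pi(B_block, K, C, T, n_re=61, n_im=61):
    """Tabulate |det| on a grid over Pi = {|Re| <= C, |Im| <= pi/T};
    local minima are candidate eigenvalues lambda, hence candidate
    eigenvalues mu = exp(lambda * T) of the monodromy operator."""
    re = np.linspace(-C, C, n_re)
    im = np.linspace(-np.pi / T, np.pi / T, n_im)
    D = np.empty((n_im, n_re))
    for a, y in enumerate(im):
        for b, x in enumerate(re):
            D[a, b] = abs(char_det(x + 1j * y, B_block, K))
    return re, im, D   # refine the minima, e.g. by Newton's method
\end{verbatim}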
\begin{remark}
In Section~\ref{Sec:SC} we mentioned that Eq.~(\ref{Eq:OA:u}) has three continuous symmetries.
This implies that the monodromy operator~$\mathcal{E}(T)$ has three linearly independent
eigenfunctions corresponding to the unit eigenvalue $\mu = 1$.
Respectively the characteristic equation~(\ref{Eq:Characteristic})
has the triple root $\lambda = 0$, which is embedded in the essential spectrum.
\label{Remark:MultipleEV}
\end{remark}
\section{Example}
\label{Sec:Example}
Let us illustrate the performance of the methods developed in Sections~\ref{Sec:SC} and~\ref{Sec:Stability}.
For this we consider the breathing chimera state shown in Figure~\ref{Fig:QP}.
The primary frequency~$\Omega$ and the secondary frequency~$\omega$ of this state
can be extracted from the time trajectory of the global order parameter~$Z_N(t)$, see Section~\ref{Sec:Extraction}.
Then using formula~(\ref{Eq:a:approx}) we compute an approximate amplitude~$a(x,t)$
of the corresponding solution~(\ref{Ansatz:QP}) to Eq.~(\ref{Eq:OA}).
Inserting these results into formulas~(\ref{Scaling:omega:u}) and~(\ref{Scaling:s:w})
we obtain approximate values of the parameter~$s$ and function~$w(x,t)$.
Finally using continuous symmetries of Eq.~(\ref{Eq:OA:u}) we ensure
that~$w(x,t)$ is even with respect to~$x$ and satisfies the pinning conditions~(\ref{Eq:Pinning:1}) and~(\ref{Eq:Pinning:2}).
The obtained function~$w(x,t)$ can be represented as a Fourier series
$$
w(x,t) = \sum\limits_{k = -\infty}^\infty ( \hat{w}_{0,k} + \hat{w}_{1,k} \cos x ) e^{i k t}.
$$
Then the leading coefficients~$\hat{w}_{0,k}$, $\hat{w}_{1,k}$ with indices $k = -10,\dots, 10$
can be used as an initial guess in the Galerkin system~(\ref{System:Galerkin}) with $F = 10$.
The latter system was solved using Newton's method to an accuracy of~$10^{-9}$.
The obtained set of coefficients~$c_k$ was transformed into function~$w(x,t)$
using formulas~(\ref{Ansatz:w}) and~(\ref{Ansatz:v}).
Then the corresponding solution~$u(x,t)$ to Eq.~(\ref{Eq:OA:u})
was computed using the operator~$\mathcal{U}$ defined in Section~\ref{Sec:Operator:U}.
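The extraction of the leading Fourier coefficients from samples of~$w(x,t)$ can be
illustrated as follows (a minimal Python sketch, not our production code; we assume
that time has been rescaled so that the period equals~$2\pi$, in accordance with the
series above, and that \texttt{w} is an $N_t\times N_x$ array of samples on a uniform grid):
\begin{verbatim}
# Minimal sketch: leading coefficients of
# w(x,t) = sum_k ( w0_k + w1_k cos x ) exp(i k t)
# from samples w[t_index, x_index], x_j = -pi + 2*pi*j/Nx.
import numpy as np

def leading_coefficients(w, kmax=10):
    Nt, Nx = w.shape
    x = -np.pi + 2 * np.pi * np.arange(Nx) / Nx
    w0_t = w.mean(axis=1)                      # projection onto 1
    w1_t = 2.0 * (w * np.cos(x)).mean(axis=1)  # projection onto cos x
    # normalize the FFT so that w(t) = sum_k hat_w_k exp(i k t)
    hat0 = np.fft.fft(w0_t) / Nt
    hat1 = np.fft.fft(w1_t) / Nt
    k = np.fft.fftfreq(Nt, d=1.0 / Nt).astype(int)
    keep = np.abs(k) <= kmax
    return k[keep], hat0[keep], hat1[keep]
\end{verbatim}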
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{Oscillators_LOP_abs.pdf}\hspace{5mm}
\includegraphics[width=0.45\textwidth,angle=0]{LOP_abs.pdf}\\[2mm]
\includegraphics[width=0.45\textwidth,angle=0]{Oscillators_LOP_arg.pdf}\hspace{5mm}
\includegraphics[width=0.45\textwidth,angle=0]{LOP_arg.pdf}
\end{center}
\caption{(a), (b) Approximate complex amplitude~$a(x,t)$ of the solution~(\ref{Ansatz:QP})
corresponding to the chimera state in Fig.~\ref{Fig:QP}, see formula~(\ref{Eq:a:approx}).
(c), (d) The corresponding solution~$u(x,t)$ to Eq.~(\ref{Eq:OA:u})
obtained from the Galerkin system~(\ref{System:Galerkin}) with $F = 10$.}
\label{Fig:LOP}
\end{figure}
Figure~\ref{Fig:LOP}(a),(b) shows an approximate amplitude~$a(x,t)$
computed using formula~(\ref{Eq:a:approx}) for the breathing chimera state in Fig.~\ref{Fig:QP}.
We also solved the self-consistency equation~(\ref{Eq:SC}) for the same parameters~$A$ and~$\alpha$
using the Galerkin system~(\ref{System:Galerkin}) and found a time-periodic solution~$u(x,t)$ to Eq.~(\ref{Eq:OA:u}),
see Figure~\ref{Fig:LOP}(c),(d).
As expected, the graphs of~$a(x,t)$ and~$u(x,t)$ agree with each other on large scales,
but show some fine-structure differences which can be attributed to finite-size effects.
This assertion is confirmed by simulations of system~(\ref{Eq:Oscillators}) with more oscillators (not shown).
In particular, several darker filaments protruding into the coherent region (yellow/bright) in Fig.~\ref{Fig:LOP}(a)
become thinner for growing system size and disappear in the limit $N\to\infty$, see Fig.~\ref{Fig:LOP}(c),
in accordance with the coherence/incoherence invariance property described in Proposition~\ref{Proposition:Disc}.
The self-consistency equation~(\ref{Eq:SC}) allows us to predict almost perfectly
the graphs of the global order parameter~$Z_N(t)$, see~(\ref{Def:Z}),
and of the effective frequencies~$\Omega_{\mathrm{eff},k}$, see~(\ref{Def:Omega_eff}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.3\textwidth,angle=270]{Comparison.pdf}
\end{center}
\caption{(a) Global order parameter~$Z_N(t)$ and (b) effective frequencies~$\Omega_{\mathrm{eff},k}$
computed for the chimera state in Fig.~\ref{Fig:QP} and the corresponding theoretical predictions~$Z(t)$
and~$\Omega_\mathrm{eff}(x)$ obtained using Proposition~\ref{Proposition:Z:Omega_eff}.}
\label{Fig:Comparison}
\end{figure}
In Figure~\ref{Fig:Comparison} these quantities, computed
for a breathing chimera state in system~(\ref{Eq:Oscillators}),
are compared with their continuum limit counterparts~$Z(t)$ and~$\Omega_\mathrm{eff}(x)$
computed by the formulas from Proposition~\ref{Proposition:Z:Omega_eff}
where we inserted the functions~$w(x,t)$ and~$u(x,t)$ obtained from the Galerkin system~(\ref{System:Galerkin}).
Figure~\ref{Fig:Scan} illustrates another application of the self-consistency equation~(\ref{Eq:SC}).
We used it for computation of a branch of breathing chimera states in Eq.~(\ref{Eq:OA}).
The theoretically predicted primary and secondary frequencies are compared
with the corresponding values of~$\Omega$ and~$\omega$ observed in breathing chimera states
in the coupled oscillator system~(\ref{Eq:Oscillators}) with $N = 8192$.
Again, the agreement between the theoretical curve and the numerical points is very good.
A barely noticeable mismatch can be attributed to finite-size effects
or to the small number of Fourier modes ($F = 10$) in the Galerkin approximation.
Note that the curves in Figure~\ref{Fig:Scan} fold for $\alpha \approx \pi/2 - 0.145$.
This fact explains a sudden collapse of breathing chimera states to the completely synchronous state,
which we observed in system~(\ref{Eq:Oscillators}) for $\alpha > \pi/2 - 0.145$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.3\textwidth,angle=270]{Scan_fold.pdf}
\end{center}
\caption{(a) Primary frequency~$\Omega$ and (b) secondary frequency~$\omega$ of breathing chimera states
in the system~(\ref{Eq:Oscillators}) for cosine coupling kernel~(\ref{Coupling:Cos}) and $A = 1.05$.
The solid curve shows theoretical predictions made using the Galerkin system~(\ref{System:Galerkin}) with $F = 10$.
The points show frequencies extracted from the breathing chimera states
observed in the system~(\ref{Eq:Oscillators}) with $N = 8192$ oscillators.}
\label{Fig:Scan}
\end{figure}
For all breathing chimera states on the solution branch in Figure~\ref{Fig:Scan}
we also computed the corresponding essential spectra~$\sigma_\mathrm{ess}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.25\textwidth,angle=270]{Spectrum.pdf}%
\includegraphics[width=0.25\textwidth,angle=270]{SpectrumFold.pdf}%
\includegraphics[width=0.25\textwidth,angle=270]{SpectrumPD.pdf}%
\includegraphics[width=0.25\textwidth,angle=270]{SpectrumTorus.pdf}
\end{center}
\caption{All breathing chimera states on the solution branch in Fig.~\ref{Fig:Scan}
have identical essential spectra shown in panel~(a).
The point indicates a multiple eigenvalue embedded into the essential spectrum.
Other panels show hypothetical bifurcation scenarios for breathing chimera states:
(b) Fold and symmetry-breaking bifurcations, (c) period-doubling bifurcation, (d) torus bifurcation.
The arrows indicate directions in which one or two eigenvalues can appear from the essential spectrum.}
\label{Fig:Spectra}
\end{figure}
These spectra look identical, see Figure~\ref{Fig:Spectra}(a),
and have the maximal possible size, see Remark~\ref{Remark:EssentialSpectrum} for more detail.
The computation of the discrete spectra~$\sigma_\mathrm{disc}$ turned out
to be a computationally demanding task; therefore, we have so far been unable to carry it out.
However, because of Remark~\ref{Remark:MultipleEV} we assert that~$\sigma_\mathrm{disc}$
includes a triple eigenvalue $\mu = 1$ embedded into the essential spectrum.
Since for every breathing chimera state the unit circle~$|\mu| = 1$
is a subset of its essential spectrum~$\sigma_\mathrm{ess}$,
the destabilization of such chimera state cannot occur via a classical bifurcation of finite codimension.
Indeed, any unstable eigenvalue can emerge only from the essential spectrum
and therefore this eigenvalue cannot be isolated at the bifurcation point.
By analogy with other dynamical systems we may expect
that breathing chimera states in general can lose their stability
via a fold, symmetry breaking, period-doubling or torus bifurcation,
which are associated with the appearance of one or two unstable eigenvalues~$\mu$
from the essential spectrum on the unit circle as shown in Figure~\ref{Fig:Spectra}(b)-(d).
Note that proper consideration of such bifurcations requires the use of generalized spectral methods~\cite{ChiN2011,Die2016,ChiM2019},
which, however, must be adapted to a situation where the reference solution is a relative periodic orbit rather than a simple equilibrium.
\section{Conclusions}
\label{Sec:Conclusions}
In summary, we showed that breathing chimera states observed in large size systems~(\ref{Eq:Oscillators})
are properly represented by solutions of the form~(\ref{Ansatz:QP}) in Eq.~(\ref{Eq:OA}).
The self-consistency equation~(\ref{Eq:SC}) and Proposition~\ref{Proposition:Z:Omega_eff}
allow one to predict the most important features of such chimera states.
A general approach for stability analysis of breathing chimera states is formulated in Section~\ref{Sec:Stability}.
It relies on the consideration of the monodromy operator which describes the evolution of small perturbations in the system.
We found that the spectrum of this operator consists of two qualitatively different parts: essential and discrete spectra.
The former part was completely characterized in this paper,
while for the latter part we suggested a numerical algorithm for its approximate evaluation.
Although we were unable to compute the discrete spectrum of any breathing chimera state,
we obtained a theoretical indication of the fact that such chimera states lose their stability
in nonclassical bifurcation scenarios where one or two unstable eigenvalues
appear from the essential spectrum. Note that this statement is still speculative
and therefore needs to be confirmed by computable examples.
We emphasize that the consideration scheme suggested in this paper can be applied to systems~(\ref{Eq:Oscillators})
with arbitrary coupling kernels~$G(x)$, including exponential~\cite{BolSOP2018}
and top-hat~\cite{SudO2018,SudO2020} coupling.
In particular, using the self-consistency equation~(\ref{Eq:SC})
one can carry out a more rigorous asymptotic analysis of breathing chimera states
reminiscent of that in~\cite{SudO2020}.
Moreover, all above methods can be extended to the study of breathing spiral chimera states
in two-dimensional lattices of phase oscillators~\cite{XieKK2015,OmeWK2018}.
Furthermore, our results can also be applied to explore the appearance
of pulsing and alternating coherence-incoherence patterns~\cite{Ome2020a}
and modulated travelling chimera states~\cite{Ome2020}
in systems of heterogeneous nonlocally coupled phase oscillators,
though in this case one needs to modify the definition of the solution operator~$\mathcal{U}$.
In a more general context, we believe that many theoretical constructions of this paper
such as the solution operator of the periodic Riccati equation~(\ref{Eq:Riccati}),
the periodic self-consistency equation~(\ref{Eq:SC})
and the essential and discrete spectra of the monodromy operator~$\mathcal{E}(T)$
can be adapted to a broader class of problems
concerned with the application of the Ott-Antonsen ansatz~\cite{OttA2008}.
These are, for example, the Kuramoto model with a bimodal frequency distribution,
where one finds periodically breathing partially synchronized states~\cite{MarBSOSA2009},
modular Kuramoto networks~\cite{BicGLM2020} as well as networks of theta neurons~\cite{LukBS2013,Lai2014,ByrAC2019}
(or equivalently, quadratic integrate-and-fire neurons~\cite{MonPR2015,Esn-ARAM2017}).
\section*{Acknowledgment}
This work was supported by the Deutsche Forschungsgemeinschaft under grant OM 99/2-1.
\section{Introduction}
Interstellar dust grains absorb, scatter, and polarise radiation and
emit the absorbed radiation through non-thermal and thermal
mechanisms. Dust grains not only absorb and scatter stellar photons,
but also the radiation from dust and gas. In addition, interstellar
dust in the diffuse interstellar medium (ISM) and in other
environments that are illuminated by UV photons show photoluminescence
in the red part of the spectrum, a contribution known as Extended Red
Emission (ERE, Cohen et al. 1975; Witt et al. 1984).
Clues to the composition of interstellar dust come from observed
elemental depletions in the gas phase. It is generally assumed that
the abundances of the chemical elements in the ISM are similar to
those measured in the photosphere of the Sun. The abundances of the
elements of the interstellar dust (the condensed phase of matter) are
estimated as the difference between the elemental abundances in the
solar photosphere (Asplund et al. 2009) and those of the
gas phase. Absolute values of the interstellar gas-phase abundance of
element [X] are given with respect to that of hydrogen [H]. Such
[X]/[H] ratios are often derived from the analysis of absorption
lines (note that oscillator strengths of some species, e.g. CII
2325\,\AA, are uncertain by up to a factor of 2; see the discussion in Draine
2011). The most abundant condensible elements in the ISM are C, O,
Mg, Si and Fe. When compared to the values of the Sun, elements such
as Mg, Si and Fe, which form silicates, appear under--abundant in the
gas phase. By contrast, oxygen represents a striking exception,
as it appears over--abundant towards certain sight-lines
(Voshchinnikov \& Henning 2010). Another important dust
forming element, C, cannot be characterised in detail because it has
been analysed only in a restricted number of sight-lines, leading in
some cases to inaccurate values of its abundance (Jenkins 2009;
Parvathi et al. 2012). It is widely accepted that cosmic dust
consists of some form of silicate and carbon.
Stronger constraints on the composition of interstellar grains come
from the analysis of their spectroscopic absorption and emission
signatures. The observed extinction curves display various spectral
bands. The most prominent ones are the 2175\,\AA\ bump, where
graphite and polycyclic aromatic hydrocarbons (PAHs) have strong
electronic transitions, and the 9.7\,$\mu$m and 18\,$\mu$m features
assigned to Si-O stretching and O-Si-O bending modes of silicate
minerals, respectively. In addition, there are numerous weaker
features, such as the 3.4\,$\mu$m absorption assigned to C-H
stretching modes (Mennella et al. 2003), and the diffuse interstellar
bands in the optical (Krelowski 2002). The observed 9.7 and 18\,$\mu$m
band profiles can be better reproduced in the laboratory by amorphous
silicates than crystalline structures. Features that are assigned to
crystalline silicates, such as olivine (Mg$_{2x}$ Fe$_{2-2x}$
SiO$_{4}$ with $x \sim 0.8$), have been detected in AGB and T Tauri
stars and in comet Hale-Bopp (see Henning 2010 for a review). However,
since these features are not seen in the ISM, the fraction of
crystalline silicates in the ISM is estimated to be $\la 2$\,\% (Min et
al. 2007). Dust in the diffuse ISM appears free of ices that
are detected in regions shielded by $A_{\rm V} > 1.6$\,mag (Bouwman et
al. 2011, Pontoppidan et al. 2003, Siebenmorgen \& Gredel 1997,
Whittet et al. 1988).
In the IR, there are conspicuous emission bands at 3.3, 6.2, 7.7, 8.6,
11.3 and 12.7\,$\mu$m, as well as a wealth of weaker bands in the 12
-- 18\,$\mu$m region. These bands are ascribed to vibrational
transitions in PAH molecules. PAHs are planar structures, and consist
of benzene rings with hydrogen attached. Less perfect structures may be
present where H atoms are replaced by OH or CN and some of the C atoms
by N atoms (Hudgins et al. 2005). PAHs may be ionised: PAH$^+$ cations
may be created by stellar photons, and PAH$^-$ anions may be created
by collisions of neutral PAHs with free e$^-$. The ionisation degree
of PAH has little influence on the central wavelength of the emission
bands, but has a large impact on the feature strengths (Allamandola et
al. 1999, Galliano et al. 2008). Feature strengths also depend on the
hardness of the exciting radiation field, or on the hydrogenation
coverage of the molecules (Siebenmorgen \& Heymann 2012).
The extinction curve gives the dust extinction as a function of
wavelength. It provides important constraints on dust models, and in
particular on the size distribution of the grains. Important work has
been published by Mathis et al. (1977), who introduced their
so--called MRN size distribution, and by Greenberg (1978), who
presented his grain core--mantle model. Another important constraint
on dust models is provided by the IR emission. IRAS data revealed
stronger than expected 12 and 25\,$\mu$m emission from interstellar
clouds (Boulanger et al. 1985). At the same time, various PAH
emission bands have been detected (Allamandola et al. 1985, 1989,
Puget \& L\'{e}ger 1989). Both emission components can only be
explained by considering dust particles that are small enough to be
stochastically heated. A step forward was taken in the dust models by
D\'{e}sert et al. (1990), Siebenmorgen \& Kr\"ugel (1992), Dwek et
al. (1997), Li \& Draine (2001), and Draine \& Li (2007). In these
models, very small grains and PAHs are treated as an essential grain
component beside the (so far) standard carbon and silicate mixture of
large grains.
In all these studies, large grains are assumed to be spherical, and
the particle shape is generally not discussed further. However, in
order to account for the widely observed interstellar polarisation,
non--spherical dust particles partially aligned by some mechanism need
to be considered. This has been done e.g. by Hong \& Greenberg
(1980), Voshchinnikov (1989), Li \& Greenberg (1997) who considered
infinite cylinders. More realistic particle shapes such as spheroids
have been considered recently. The derivation of cross sections of
spheroids is an elaborate task, and computer codes have been made
available by Mishchenko (2000) and Voshchinnikov \& Farafonov (1993).
The influence of the type of spheroidal grain on the polarisation is
discussed by Voshchinnikov (2004). Dust models considering spheroidal
particles that fit the average Galactic extinction and polarization
curves were presented by Gupta et al. (2005), Draine \& Allaf-Akbari
(2006), Draine \& Fraisse (2009). The observed interstellar extinction
and polarization curves towards particular sight lines are modelled by
Voshchinnikov \& Das (2008) and Das et al. (2010).
In this paper we present a dust model for the diffuse ISM that
accounts for observations of elemental abundances, extinction,
emission, and interstellar polarisation by grains. The ERE is not
polarised and is not further studied in this work (see Witt \& Vijh
2004 for a review). We also do not discuss diffuse interstellar bands
as their origin remains unclear (Snow \& Destree, 2011) nor various
dust absorption features observed in denser regions. We first
describe the light scattering and alignment of homogeneous spheroidal
dust particles and discuss the absorption properties of PAHs. Then we
present our dust model. It is first applied to the observed average
extinction and polarisation data of the ISM, and the dust emission at
high Galactic latitudes. We present dust models towards four stars for
which extinction and IR data are available, and for which we have
obtained new spectro--polarimetric observations with the FORS
instrument of the VLT. We conclude with a summary of the main findings
of this work.
\section{Model of the interstellar dust}
\subsection{Basic definitions}
Light scattering by dust is a process in which an incident
electromagnetic field is scattered into a new direction after
interaction with a dust particle. The directions of the propagation of
the incident wave and the scattered wave define the so--called {\it
scattering plane}. In this paper we define the Stokes parameters
{\it I, Q, U, V} as in Shurcliff (1962), adopting as a reference
direction the one perpendicular to the scattering plane. This way, the
reduced Stokes parameter $P_Q = Q/I$ is given by the difference
between the flux perpendicular to the scattering plane, $F^{\bot}$,
minus the flux parallel to that plane, $F^{\Vert}$, divided by the sum
of the two fluxes. In the context of this paper, Stokes $U$ is always
identically zero, so that $P_Q$ is also the total fraction of linear
polarisation $p$.
\noindent
We now define $\tau^{\Vert}$ and $\tau^{\bot}$ as the extinction
coefficients in the two directions, and $\tau_{\rm eff} =
(\tau^{\Vert} + \tau^{\bot}) /2$. In case of weak extinction
($\tau_{\rm eff} \ll 1$), and since
$\vert \tau^{\Vert} - \tau^{\bot} \vert \ll 1$, the polarisation by
dichroic absorption is approximated by
\begin{equation}
p = {{F^{\bot} - F^{\Vert}} \over {F^{\bot} + F^{\Vert}} }
= {{e^{\tau^{\Vert}} - e^{\tau^{\bot}}} \over {e^{\tau^{\bot}} + e^{\tau^{\Vert}}} } \simeq
\frac{\tau^{\Vert} - \tau^{\bot}}{2} \,.
\label{linpol.eq}
\end{equation}
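To make the quality of this approximation explicit, write $F^{\bot} = F_0 \, e^{-\tau^{\bot}}$
and $F^{\Vert} = F_0 \, e^{-\tau^{\Vert}}$ for the attenuated fluxes; then the middle
expression in Eq.~(\ref{linpol.eq}) is exactly
$$
p = \tanh \left( \frac{\tau^{\Vert} - \tau^{\bot}}{2} \right)\,,
$$
of which Eq.~(\ref{linpol.eq}) is the leading term. As an illustrative example (our
numbers, not a measurement): a differential extinction $\tau^{\Vert} - \tau^{\bot} = 0.06$
yields $p = \tanh(0.03) \simeq 3$\%, and the relative error of the linear approximation,
$\sim (\tau^{\Vert} - \tau^{\bot})^2/12$, is only $3\times 10^{-4}$.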
A medium is birefringent if its refractive index depends on the
direction of the wave propagation. In this case, the phase velocity of
the radiation also depends on the wave direction, and the medium
introduces a phase retardation between two perpendicular components of
the radiation, transforming linear into circular polarisation. In the
ISM, a first scattering event in cloud 1 will linearly polarise the
incoming (unpolarised) radiation, and a second scattering event in
cloud 2 will transform part of this linear polarisation into circular
polarisation. Denoting by $p(1)$ the fraction of linear polarisation
induced during the first scattering event, and by $N_d(2)$ the dust
column density in cloud 2, and with $\Psi$ the difference of
positional angles of polarisation in clouds 1 and 2, the circular
polarisation $ p_{\rm c}$ is a second order effect and given by
\begin{equation}
p_{\rm c} = \frac{V}{I} = N_d(2) C_{c}(2) \ \times \ p(1) \sin(2\Psi) \,,
\label{cp.eq}
\end{equation}
where $C_{c}$ is the cross section of circular polarisation
(birefringence) calculated as the difference in phase lags introduced
by cloud 2. Light propagation in a medium in which the grain
alignment changes is discussed by Martin (1974) and Clarke
(2010). The wavelength dependence and degree of circular
polarisation is discussed by Kr\"ugel (2003) and that of single light
scattering by asymmetric particles by Guirado et al. (2007).
\subsection{Spheroidal grain shape \label{spheroids.sec}}
The phenomenon of interstellar polarisation cannot be explained by
spherical dust particles consisting of optically isotropic materials.
Therefore, as a simple representation of finite sized grains, we
consider spheroids. The shape of spheroids is characterised by the
ratio $a/b$ between major and minor semi-axes. There are two types of
spheroids: \textit{prolates}, such as needles, which are
mathematically described by rotation about the major axis of an
ellipse, and \textit{oblates}, such as pancakes, obtained from the
rotation of an ellipse about its minor axis. In our notation, the
volume of a prolate is the same as that of a sphere with radius $r = (a \cdot
b^2)^{1/3}$, and the volume of an oblate is that of a sphere with
radius $r = (a^2 \cdot b)^{1/3}$.
The extinction optical thickness, which is due to absorption plus
scattering by grains of radius $r$, is given by
\begin {equation}
\tau ({\nu}) = N_{\rm d} \ C_{\rm ext}({\nu})
\label{tauCext.eq}
\end {equation}
\noindent
and similarly the linear polarisation by
\begin {equation}
p(\nu) = N_{\rm d} \ C_{\rm p}({\nu})\,,
\label{pCp.eq}
\end {equation}
\noindent
where $N_{\rm d}$ is the total column density of the dust grains along
the line of sight and $C_{\rm {ext, p}}$ are the extinction and
linear polarisation cross sections.
\subsection{Dust alignment}
Stellar radiation may be polarised by partially aligned spheroidal
dust grains that wobble and rotate about the axis of greatest moment
of inertia. The question on how grain alignment works is not
settled. Various mechanisms such as magnetic or radiation alignment
are suggested, see Voshchinnikov (2012) for a review. We consider
grain alignment along the magnetic field $\vec{B}$ that is induced by
paramagnetic relaxation of particles having Fe impurities. This
so--called imperfect Davis-Greenstein (IDG) orientation of spheroids
can be described by
\begin{figure}
\centering
\includegraphics[width=8cm]{angles.eps}
\caption{Geometrical configuration of a spinning and wobbling
prolate spheroidal grain with notation by Das et al. (2010). The
major axis O$_1$O$_2$ of the particle spheroid is placed in the
spinning plane NO$_1$O$_2$ that is perpendicular to the angular
momentum $\vec{J}$. Direction of the light propagation $\vec{k}$
is set parallel to the $Z$-axis. We measure from $Z$ the angle $0
\leq \Omega \leq 90\degr$ to the magnetic field $\vec{B}$, the
angle $\alpha$ to the major rotation axis of the particle and the
angle $\theta$ to the angular momentum $\vec{J}$; $\varphi$ is
the spin angle, $\beta$ is the precession-cone angle, and
$\omega$ the current precession angle.\label{angles} }
\end{figure}
\begin{equation}
{f}(\xi, \beta) = \frac{\xi \sin \beta}{(\xi^2 \cos^2 \beta + \sin^2 \beta)^{3/2} } \,,
\end{equation}
where $\beta$ is the precession-cone angle defined in Fig.~\ref{angles}, and
\begin{equation}
\xi^2 = \frac{r +
\delta_0\ (T_{\rm d}/T_{\rm g})}{r +\delta_0}\,.
\label{align}
\end{equation}
\noindent The alignment parameter, $\xi$, depends on the size of the
particle, $r$. The parameter $\delta_0$ is related to the magnetic
susceptibility of the grain, its angular velocity and temperature, the
field strength and gas temperature (Hong \& Greenberg 1980).
Voshchinnikov \& Das (2008) show that the maximum value of the
polarisation depends on $\delta_0$, whereas the spectral shape of the
polarisation does not. Das et al. (2010) are able to fit polarisation
data by varying the size distribution of the dust particles and by
assuming different alignment functions. We simplify matters and choose
$\delta_0=10\mu$m and $T_{\rm g} = 10 \ T_{\rm d}$. If the grains are
not aligned ($\xi=1$) then ${f}(\xi, \beta)=\sin \beta$; in the case
of perfect rotational alignment $\xi=0$. In the IDG mechanism
(Eq.~\ref{align}) smaller grains are better aligned than larger ones.
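For illustration, with the simplifications adopted here ($\delta_0=10\,\mu$m,
$T_{\rm g} = 10\,T_{\rm d}$) the alignment weight reduces to a one-line formula;
the following minimal Python sketch evaluates $\xi(r)$ and ${f}(\xi,\beta)$
(radii in $\mu$m, numerical values purely illustrative):
\begin{verbatim}
# Minimal sketch of the IDG alignment weight with the simplifications
# adopted here: xi^2 = (r + 0.1*delta_0) / (r + delta_0), delta_0 = 10 um.
import numpy as np

DELTA0 = 10.0  # micron

def xi(r):
    """Alignment parameter; xi = 1 means random orientation."""
    return np.sqrt((r + 0.1 * DELTA0) / (r + DELTA0))

def f_idg(r, beta):
    """Orientation weight f(xi, beta) of the IDG mechanism."""
    x, s, c = xi(r), np.sin(beta), np.cos(beta)
    return x * s / (x**2 * c**2 + s**2) ** 1.5

# smaller grains are better aligned (smaller xi):
print(xi(0.05), xi(0.5))   # ~0.32 versus ~0.38
\end{verbatim}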
\subsection{Cross sections of spheroids \label{C.sec}}
Cross sections of spinning spheroids change periodically. We compute
the average cross section of spinning particles following Das et
al. (2010). Such mean extinction ${C}_{\rm ext}$, linear $ {C}_{\rm
p}$ and circular polarisation ${C}_{{\rm c}}$ cross sections of a
single-sized homogeneous spheroidal particle are obtained at a given
frequency $\nu$ by:
\begin{equation}
{C}_{{\rm ext}}(\nu) = \frac{2}{\pi}
\int (Q_{{\rm ext}}^{\rm{TM}} + Q_{{\rm ext}}^{\rm{TE}}) \, r^2 \,
f(\xi, \beta) \, \mathrm{d}{\varphi} \, \mathrm{d}{\omega} \, \mathrm{d}{\beta}\,,
\label{Cext.eq}
\end{equation}
\begin{equation}
{C}_{{\rm p}}(\nu) = \frac{1}{\pi} \int
(Q_{{\rm ext}}^{\rm{TM}} - Q_{{\rm ext}}^{\rm{TE}}) \, r^2 \,
f(\xi, \beta) \, \cos (2{\psi}) \,
\mathrm{d}{\varphi} \, \mathrm{d}{\omega} \, \mathrm{d}{\beta}\,,
\label{Cp.eq}
\end{equation}
\begin{equation}
{C}_{{\rm c}}(\nu) = \frac{1}{\pi} \int
(Q_{{\rm pha}}^{\rm{TM}} - Q_{{\rm pha}}^{\rm{TE}}) r^2 f(\xi, \beta) \cos (2{\psi}) \,
\mathrm{d}{\varphi} \, \mathrm{d}{\omega} \, \mathrm{d}{\beta}\,.
\label{Cc.eq}
\end{equation}
\noindent
Angles $\psi ,\, \varphi ,\, \omega ,\, \beta$ are shown in
Fig.~\ref{angles}. The relations between them are defined by Hong \&
Greenberg (1980). The efficiency factors $Q$ in Eqs.~(\ref{Cext.eq}
-- \ref{Cc.eq}), with suffix TM for the transverse magnetic and TE for
transverse electric modes of polarisation (Bohren \& Huffman 1983),
are defined as the ratios of the cross sections to the geometrical
cross-section of the equal volume spheres, $Q = C/\pi r^{2}$. The
extinction and scattering efficiencies $Q_{{\rm ext}}$, $Q_{{\rm
sca}}$ and phase lags $Q_{{\rm pha}}$ of the two polarisation
directions are computed with the program code provided by
Voshchinnikov \& Farafonov (1993). The average absorption and
scattering cross sections ${C}_{{\rm abs, sca}}$ are obtained similarly
to Eq.~(\ref{Cext.eq}), utilizing $Q_{{\rm abs, sca}}$, respectively.
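The orientation averages above amount to a triple quadrature. The following minimal
Python sketch shows the structure of Eq.~(\ref{Cext.eq}) only; the efficiency factors
\texttt{Q\_TM} and \texttt{Q\_TE} are assumed to be user-supplied callables (e.g.\
wrapping the Voshchinnikov \& Farafonov code), and the integration limits
\texttt{ranges} must be chosen in the angle convention of Hong \& Greenberg (1980)
and Das et al. (2010):
\begin{verbatim}
# Minimal sketch of the orientation average of Eq. (Cext.eq).
import numpy as np
from scipy.integrate import nquad

def C_ext_mean(Q_TM, Q_TE, r, f_align, ranges):
    """Mean extinction cross section of a spinning spheroid of radius r;
    Q_*(phi, omega, beta) and the (phi, omega, beta) ranges are inputs."""
    def integrand(phi, omega, beta):
        return ((Q_TM(phi, omega, beta) + Q_TE(phi, omega, beta))
                * r**2 * f_align(beta))
    value, _ = nquad(integrand, ranges)
    return 2.0 / np.pi * value
\end{verbatim}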
\begin{table*}[h!tb]
\begin{center}
\caption {Band parameters of astronomical PAHs.}
\label{pah.tab}
\begin{tabular}{|c|c|c|c|c|c|c|c|l|}
\hline
\hline
\multicolumn{2}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \\
\multicolumn{2}{|c|}
{Center Wavelength} & \multicolumn{3}{|c|}{Damping Constant} & \multicolumn{3}{|c|}{Integrated Cross Section$^{a}$} & Mode$^{b}$ \\
\multicolumn{2}{|c|}{$\lambda_0$ ($\mu$m)} & \multicolumn{3}{|c|}{
$\gamma$ ($10^{12}$s$^{-1}$)} & \multicolumn{3}{|c|}{ $ \sigma_{\rm int}$ ($10^{-22}$cm$^2$$\mu$m)} & \\
\hline
& & & & & & & & \\
ISM + & NGC\,2023 & Starbursts$^c$ & ISM$^d$& NGC\,2023$^d$ & Starbursts$^c$ & ISM$^d$& NGC\,2023$^d$ & \\
Starbursts& & & & & & & & \\
\hline
& & & & & & & & \\
0.2175 & & $\cdots$ &1800 &1800& $\cdots$ &8000&8000& $\pi^* \leftarrow \pi$ transitions \\
3.3 & $\cdots$ & 20 &20 &20& 10 &20 &20& C-H stretch \\
5.1 & $\cdots$ & 12 &20 &20& 1 &2.7 &2.7& C-C vibration \\
6.2 & $\cdots$ & 14 &14 &14& 21 &10 &30& C-C vibration \\
7.0 & $\cdots$ & 6 &6 &5& 13 &5 &10& C-H? \\
7.7 & $\cdots$ & 22 &22 &20& 55 &35 &55& C-C vibration \\
8.6 & $\cdots$ & 6 &6 &10& 35 &20 &35& C-H in-plane bend \\
11.3 & $\cdots$ & 4 &6 &4& 36 &250 &52& C-H solo out-of-plane bend \\
11.9 & $\cdots$ & 7 &7 &7& 12 &60 &12& C-H duo out-of-plane bend \\
12.7 & $\cdots$ & 3.5 &5&3.5& 28 &150 &28& C-H trio out-of-plane bend \\
13.6 & $\cdots$ & 4 &4 &4& 3.7 &3.7 &3.7& C-H quarto out-of-plane \\
14.3 & $\cdots$ & 5 &5 &5& 0.9 &0.9 &0.9& C-C skeleton vibration \\
15.1 & 15.4 & 3 &4 &4& 0.3 &0.3 &0.3& C-C skeleton vibration \\
15.7 & 15.8 & 2 &5 &4& 0.3 &0.3 &5& C-C skeleton vibration \\
16.5 & 16.4 & 3 &10 &1& 0.5 &5 &6& C-C skeleton vibration \\
18.2 & 17.4 & 3 &10 &0.2& 1 &3 &1& C-C skeleton vibration \\
21.1 & 18.9 & 3 &10 &0.2& 2 &2 &2& C-C skeleton vibration \\
23.1 & $\cdots$ & 3 &10 &4& 2 &2 &1& C-C skeleton vibration \\
\hline
\end{tabular}
\end{center}
{\bf {Notes.}}$^a$Cross sections are integrated over the band and are
given per H atom for C-H vibrations, and per C atom for C-C
modes; $^b$Assignment following Tielens (2008), Moutou et al. (1996),
Pauzat et al. (1997); $^c$ Siebenmorgen et al. (2001); $^d$ {\it {this
work}} (Sect.~\ref{ism.sec}, Sect.~\ref{hd37903}).
\end{table*}
\subsection {Cross sections of PAHs\label{Cpah.sec}}
Strong infrared emission bands in the 3 -- 13\,$\mu$m range are
observed in a variety of objects. The observations can be explained by
postulating as band carriers UV-pumped large PAH molecules that show
IR fluorescence. PAHs are an ubiquitous and important component of the
ISM. A recent summary of the vast literature of astronomical PAHs is
given by Joblin \& Tielens (2011).
Unfortunately, the absorption cross section of PAHs, $C_{\rm {PAH}}$,
remains uncertain. PAH cross sections vary by large factors from one
molecule species to the next and they depend strongly on their
hydrogenation coverage and charge state. Still in $C_{\rm {PAH}}$ one
notices a spectral trend: molecules have a cut-off at low (optical)
frequencies that depends on the PAH ionisation degree (Schutte et al.\
1993, Salama et al.\ 1996), a local maximum near the 2175\,\AA \,
extinction bump (Verstraete \& L\'eger\ 1992; Mulas et al.\ 2011), and a
steep rise in the far UV (Malloci et al.\ 2011; Zonca et al.\
2011).
We guide our estimates of the absorption cross section at photon
energies between 1.7--15\,eV of a mixture of ionised and neutral PAH
species to the theoretical studies by Malloci et al. (2007). We follow
Salama et al. (1996) for the cut-off frequency $\nu_{\rm {cut}}$ or
cut-off wavelength: $\lambda_{\rm {cut}}^{-1} \ = \ 1 + 3.8 \times
({0.4 \, N_{\rm C}})^{-0.5}$\,($\mu$m)$^{-1}$ and set $ \lambda_{\rm
{cut}} \geq 0.55\mu$m. The cross section towards the near IR is
given by Mattioda et al. (2008, their Eq.~1):
\begin{equation}
C_{\rm M}(\nu) =
N_{\rm C} \ \kappa_{\rm{UV}} \ {10 ^{-1.45 \lambda}}
\quad {\rm {:}} \quad \nu \leq \nu_{\rm {cut}}\,,
\label{cpahcon.eq}
\end{equation}
\noindent
where $N_{\rm {C}}$ is the number of carbon atoms of the PAHs,
$\kappa_{\rm{UV}} = 1.76 \times 10^{-19}$\, (cm$^2$/C-atom);
energetically unimportant features near 1$\mu$m are excluded. The
2175\,\AA\ bump is approximated by a Lorentzian profile
(Eq.~\ref{cpahl.eq}). In the far UV at $\lambda^{-1} > 6\,\mu$m$^{-1}$,
the cross section is assumed to follow that of similar sized graphite
grains. The influence of hard radiation components on PAHs is
discussed by Siebenmorgen \& Kr\"ugel (2010). The size of a PAH, which
is generally a non-spherical molecule, can be estimated following
Tielens (2005) by considering the radius of a disk of a centrally
condensed compact PAH, which is given by $r \sim 0.9\,{N_{\rm C}}^{0.5}$
(\AA).
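For orientation, the continuum terms introduced above can be evaluated as follows
(a minimal Python sketch of the cut-off formula and of Eq.~(\ref{cpahcon.eq});
wavelengths in $\mu$m, cross sections in cm$^2$ per molecule; the example values in
the comments are ours):
\begin{verbatim}
# Minimal sketch of the PAH continuum terms used in the text.
import numpy as np

KAPPA_UV = 1.76e-19  # cm^2 per C atom

def lambda_cut(NC):
    """Cut-off wavelength (Salama et al. 1996), floored at 0.55 micron."""
    lam = 1.0 / (1.0 + 3.8 * (0.4 * NC) ** -0.5)
    return max(lam, 0.55)

def C_nearIR(lam, NC):
    """Near-IR continuum of Eq. (cpahcon.eq), for lam >= lambda_cut."""
    return NC * KAPPA_UV * 10.0 ** (-1.45 * lam)

def pah_radius_AA(NC):
    """Disk radius of a compact PAH, r ~ 0.9 sqrt(N_C) Angstroem."""
    return 0.9 * np.sqrt(NC)

# e.g. for N_C = 50: lambda_cut ~ 0.54 -> floored to 0.55 um; r ~ 6.4 AA
\end{verbatim}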
With the advent of {\it{ISO}} and {\it{Spitzer}} more PAH emission
features and more details of their band structures have been detected
(Tielens\ 2008). We consider 17 emission bands and apply, for
simplicity, a damped oscillator model. Anharmonic band shapes are not
considered despite having been observed (Peeters, et al. 2002, van
Diedenhoven et al. 2004). The Lorentzian profiles are given by
\begin{equation}
C_{\rm L}(\nu) \ = \ N_{\rm {C, H}} \cdot \sigma_{\rm int} \;
{\nu_0^2\over c} \cdot
{\gamma \nu^2 \over \pi^2 (\nu^2-\nu_0^2)^2 + (\gamma \nu / 2)^2 }\,,
\label{cpahl.eq}
\end{equation}
\noindent
where $N_{\rm {C, H}}$ is the number of carbon or hydrogen atoms of
the PAHs in the particular vibrational mode at the central frequency
$\nu_0= c/\lambda_0$ of the band, $\sigma_{\rm{int}} = \int
{\sigma_{\lambda} \mathrm{d}\lambda}$ is the cross section of the band
integrated over wavelength, and $\gamma$ is the damping constant. PAH
parameters are calibrated by Siebenmorgen \& Kr\"ugel (2007) using
mid-IR spectra of starburst nuclei and are listed in
Table~\ref{pah.tab}. Their procedure first solved the radiative
transfer of a dust embedded stellar cluster, which contains young and
old stellar populations. A fraction of the OB stars are in compact
clouds that determine the mid IR emission. In a second step, the model
is applied to NGC\,1808, a particular starburst, and the mid-IR
cross-sections of PAHs are varied, until a satisfactory fit to the ISO
spectrum is found (Siebenmorgen et al. 2001). Finally, the so derived
PAH cross-sections are validated by matching the SED of several well
studied galaxies. In this latter step, the PAH cross-sections are
held constant and the luminosity, size and obscuration of the star
cluster is varied (Siebenmorgen et al. 2007). Efstathiou \&
Siebenmorgen (2009) and other colleagues, using the starburst library,
have further confirmed the applied PAH mid--IR cross--sections. The
total PAH cross section, $C_{\rm {PAH}}$, is given as the sum of the
Lorentzians (Eq.~\ref{cpahl.eq}) and the continuum absorption
(Eq.~\ref{cpahcon.eq}).
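A minimal Python sketch of a single band of Eq.~(\ref{cpahl.eq}) reads as follows;
note that the entries of Table~\ref{pah.tab} must be converted to cgs units
($\gamma$ is tabulated in $10^{12}$\,s$^{-1}$ and $\sigma_{\rm int}$ in
$10^{-22}$\,cm$^2\mu$m) before use:
\begin{verbatim}
# Minimal sketch of one damped-oscillator band, Eq. (cpahl.eq);
# the total PAH cross section is the sum over the bands of Table 1
# plus the continuum of Eq. (cpahcon.eq). Frequencies in Hz.
import numpy as np

C_LIGHT = 2.99792458e10  # cm/s

def C_lorentz(nu, lam0_um, gamma, sigma_int_cm2um, N_atoms):
    nu0 = C_LIGHT / (lam0_um * 1e-4)   # central frequency [Hz]
    s_int = sigma_int_cm2um * 1e-4     # cm^2 um -> cm^2 cm
    prof = gamma * nu**2 / (np.pi**2 * (nu**2 - nu0**2)**2
                            + (gamma * nu / 2.0)**2)
    return N_atoms * s_int * nu0**2 / C_LIGHT * prof

# e.g. the 11.3 um C-H band, ISM column of Table 1 (per H atom):
# C_lorentz(nu, 11.3, 6e12, 250e-22, N_H)
\end{verbatim}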
In Fig.\ref{PAHKappa.ps}, we show the PAH absorption cross-sections
suggested by Li \& Draine (2001), Malloci et al. (2007), this work, as
well as the cross-section of a particular PAH molecule,
Coronene (Mulas, priv.\ comm.). In the far UV the PAH cross
sections agree within a factor of two. Near the 2200\,\AA \ bump we
apply for the absorption maximum the same frequency and strength as Li
\& Draine (2001) but a slightly smaller width. Our choice of the width
is guided by Coronene, which has a feature shifted to 2066\,\AA.
Malloci et al. (2007) derive a mean PAH absorption cross section by
averaging over more than 50 individual PAHs, which are computed in
four charge states of the molecules. Such a procedure may cause a
slight overestimate of the width of the PAH band near the 2200\,\AA
\ bump, because different molecules show a peak at different
frequencies in that region (for an example, see Coronene in
Fig.~\ref{PAHKappa.ps}). Between 3 and 4\,$\mu$m$^{-1}$ the cross
sections by Malloci et al. (2007) and Li \& Draine (2001) are identical
and agree within a factor of $\sim 3$ with that derived for
Coronene. In the lower panel of Fig.~\ref{PAHKappa.ps} PAH
absorption cross sections between $1 - 30\,\mu$m are displayed for
Coronene, neutral PAH-graphite particles with 60 C atoms (Li
\& Draine, 2001), and fitting results of this work, one for the solar
neighbourhood (labelled ISM) and a second for the reflection nebulae
NGC\,2023 (Table~\ref{pah.tab}). In the near IR, below 3\,$\mu$m, the
PAH cross sections are orders of magnitude smaller than in the
optical. The scatter in the near IR cross sections of PAH is
energetically not important for computing the emission spectrum at
longer wavelengths. In the emission bands we find similar cross
sections as Li \& Draine for neutral species whereas those for ionised
PAHs are larger by a factor of 10. Our model differs to the one by Li
\& Draine as we explain most of the observed ratios of PAH emission
bands by de-hydrogenation rather than by variations of the PAH charge
state (see Fig.1 in Siebenmorgen \& Heymann, 2012). Beyond 15$\mu$m a
continuum term is often added to the PAH cross sections (D\'{e}sert et
al. 1990; Schutte et al. 1993; Li \& Draine 2001; Siebenmorgen et
al. 2001). We neglect such a component as it requires an additional
parameter and is not important for this work.
\begin{figure}
{\hspace{-0.75cm} \includegraphics[width=9.4cm]{PAHKappa.ps}}
\caption{Absorption cross-sections of PAHs in the optical/UV (top) and
infrared (bottom) as suggested by Li \& Draine (2001, dashed blue),
Malloci et al. (2007, dotted green) and {\it {this work}}, with
parameters of Table~\ref{pah.tab} for the ISM (dotted magenta) and
for NGC\,2023 (full magenta line). For comparison we show the
cross-section of Coronene (dash--dotted black). \label{PAHKappa.ps}}
\end{figure}
\subsection{Dust populations \label{dustpop.sec}}
We consider two dust materials: amorphous silicates and carbon. Dust
particles of various sizes are needed to fit an extinction curve from
the infrared to the UV. Our size range starts from the molecular domain
($r_{-} = 5$ \AA ) to an upper size limit of $r_{+} \la 0.5\,\mu$m that
we constrain by fitting the polarisation spectrum. For simplicity we
apply a power law size distribution $\mathrm{d}n(r)/\mathrm{d}r
\propto r^{-q}$ (Mathis et al. 1977).
We aim to model the linear and circular polarisation spectrum of
starlight so that some particles need to be of non--spherical shape
and partly aligned. Large homogeneous {\it spheroids} are made up of
silicate with optical constants provided by Draine (2003) and
amorphous carbon with optical constants by Zubko et al. (1996), using
their mixture labelled ACH2. The various cross sections of spheroids
are computed with the procedure outlined in Sect.~(\ref{C.sec}) for
100 particle sizes between {$60\,{\rm{\AA}} \leq r \leq 800$\,nm}. In
addition there is a population of {\it small} silicates and graphite.
For the latter we use optical constants provided by Draine (2003). The
small particles are of spherical shape and have sizes in the range
$5\,{\rm{\AA}} < r \la 40\,{\rm{\AA}}$. We take the same exponent $q$ of the size
distribution for small and large grains. For graphite the dielectric
function is anisotropic and average extinction efficiencies are
computed by setting ${Q} = 2 {Q}(\epsilon^\bot)/3 + {Q}(\epsilon^\Vert)/3
$, where $\epsilon^\bot$, $\epsilon^\Vert$ are dielectric constants
for two orientations of the $\vec{E}$ vector relative to the basal
plane of graphite (Draine \& Malhotra 1993). Efficiencies in both
directions are computed by Mie theory. We include small and large
PAHs with 50 C and 20 H atoms and 250 C and 50 H atoms, respectively.
Cross sections of PAHs are detailed in Sect.~\ref{Cpah.sec}. In
summary, we consider five different dust populations, which are labelled
in the following as large silicates (Si), amorphous carbon (aC), small
silicates (sSi), graphite (gr) and PAH.
\subsection{Extinction curve \label{extin.sec}}
The attenuation of the flux of a reddened star is described by the
dust extinction $ A(\nu) = 1.086 \ \tau_{\rm ext}(\nu)$, which is
wavelength dependent and approaches zero for long wavelengths.
Extinction curves are measured through the diffuse ISM towards
hundreds of stars, and are observed from the near IR to the UV. The
curves vary for different lines of sight. The extinction curve
provides information on the composition and size distribution of the
dust. For the $B$ and $V$ photometric bands it is customary to define
the ratio of total--to--selective extinction $R_{\rm {V}} = A_{\rm
{V}} / (A_{\rm {B}} - A_{\rm {V}})$ that varies between $2.1 \la
R_{\rm{V}} \la 5.7$. Flat extinction curves with large values of
$R_{\rm{V}}$ are measured towards denser regions.
We fit the extinction curve, or equivalently the observed optical
depth profile, along different sight lines by the extinction cross
section of the dust model, so that:
\begin{equation}
\left({\tau (\nu) \over \tau_{\rm V}}\right)_{\rm {obs}} \sim \left({K_{\rm {ext}}(\nu) \over K_{\rm {ext, V}}}\right)_{\rm {model}}\,,
\label{tauK.eq}
\end{equation}
\noindent
where $K_{\rm ext}({\nu})$ is the total mass extinction cross section
averaged over the dust size distribution in (cm$^2$/g-dust) given by
\begin{equation}
K_{\rm ext} = \sum_i \ \int_{r_-}^{r_+} K_{{\rm ext}, i}(r) \ \mathrm{d}r\,,
\label{Ktot.eq}
\end{equation}
\noindent
where index $i$ refers to the dust populations
(Sect.~\ref{dustpop.sec}). The extinction cross sections $K_{{\rm
{ext}}, i}(r)$\, (cm$^2$/g-dust) of a particle of population $i
\in \{ {\rm {Si, aC, sSi, gr}} \}$, of radius $r$ and density $\rho_i$
are
\begin{equation}
K_{{\rm ext}, i}(r) = {w_i \over {{ \displaystyle 4 \pi \over 3} \ \rho_i}} \
{r^{-q} \over \displaystyle \int_{r_{-,i}}^{r_{+,i}} r^{3-q} \ \mathrm{d}r} \ C_{{\rm ext}, i}(r)\,,
\label{K.eq}
\end{equation}
\noindent
where $w_i$ is the relative weight of dust component $i$, which, for
large amorphous carbon grains, is
\begin{equation}
w_{\rm {aC}} = {{\Upsilon_{\rm {aC}} \ \mu_{\rm C}} \over
{({\tiny{\Upsilon_{\rm {aC}}+\Upsilon_{\rm {gr}}+\Upsilon_{\rm {PAH}}) \mu_{\rm C}
+ (\Upsilon_{\rm {Si}}+\Upsilon_{\rm {sSi}}) \mu_{\rm Si}}}}}\; ,
\label{w.eq}
\end{equation}
\noindent
with molecular weight of carbon $\mu_{\rm C} =12$ and silicate grains
$\mu_{\rm Si} =168$. As bulk density we take $\rho_C \sim
2.3$\,(g/cm$^3$) for all carbon materials and $\rho_{\rm Si} \sim
3$\,(g/cm$^3$). Dust abundances are denoted by $\Upsilon$ together
with a subscript for each dust population (Sect.~\ref{dustpop.sec}).
The expressions of the relative weights of the other grain materials
are similar to expression Eq.~(\ref{w.eq}). The cross section
normalised per gram dust of a PAH molecule is
\begin{equation}
K_{\nu{\rm {, PAH}}} = {w_{_{ \rm {PAH}}} \over N_{\rm C} \
\mu_{\rm C} \ m_{\rm p} } \ C_{\nu \rm{, PAH}}\,,
\label{Kpah.eq}
\end{equation}
\noindent
where $m_{\rm p}$ is the proton mass.
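The size average of Eqs.~(\ref{Ktot.eq})--(\ref{K.eq}) for one population can be
illustrated by the following minimal Python sketch; the array \texttt{C\_ext} of
orientation-averaged cross sections on the radius grid is assumed to have been
computed beforehand as in Sect.~\ref{C.sec}:
\begin{verbatim}
# Minimal sketch of the size-averaged mass extinction cross section
# of one dust population, Eqs. (Ktot.eq)-(K.eq); cgs units.
import numpy as np

def K_ext_population(r, C_ext, w_i, rho_i, q=3.5):
    """Mass extinction cross section [cm^2 per g dust] of population i;
    r [cm] is the radius grid, C_ext [cm^2] the cross sections on it."""
    norm = np.trapz(r ** (3.0 - q), r)       # integral of r^{3-q} dr
    weight = r ** (-q) / norm                # normalized size spectrum
    per_gram = w_i / (4.0 * np.pi / 3.0 * rho_i)
    return per_gram * np.trapz(weight * C_ext, r)
\end{verbatim}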
Our grain model with two types of bare material is certainly
simplistic. ISM dust grains are bombarded by cosmic rays and atoms;
they grow and get sputtered. Therefore fluffy structures with
impurities and irregular grain shapes are more realistic. The cross
section of composite particles (Kr\"ugel \& Siebenmorgen 1994,
Ossenkopf \& Henning 1994, Voshchinnikov et al. 2005), that are porous
aggregates made up of silicate and carbon, vary when compared to
homogeneous particles by a factor of 2 in the optical, and by
larger factors in the far IR/submm. We study the influence of the
grain geometry on the cross section. For this we compare the cross
section of large prolate particles $K_{\rm {prolate}}$ with axial
ratios $a/b = 2$, 3, and 4, to that of spherical grains $K_{\rm
{sphere}}$ (Fig.~\ref{cmpC.fig}). With the exception of
Fig.~\ref{cmpC.fig}, throughout this work for large grains we use the
cross-sections computed for spheroids. The dust models with the two
distinct grain shapes are treated with the same size and mass distribution
as the large ISM grains above. The peak-to-peak variation of the ratio
$K_{\rm {prolate}} / K_{\rm {sphere}}$ for $\lambda \leq 2\,\mu$m is
$\sim 4$\% for an axial ratio of $a/b=2$, 9\% for $a/b=3$, and 14\% for
$a/b=4$. In the far IR the prolate particles have cross sections larger
by a factor of 1.5 -- 3 than those of spherical grains. In
that wavelength region the cross section varies roughly as $C_{abs}
\propto \nu^2$, so that the emissivity of a grain with radius $r$ is
about $\epsilon \propto r \ T_{\rm d}^6$, where $T_{\rm d}$ denotes
the dust temperature. Therefore spheroids with $a/b \leq 2$ obtain in
the same radiation environment $\sim 10\%$ lower temperatures than
their spherical cousins, an effect becoming larger for more elongated
particles (Voshchinnikov et al., 1999). Nevertheless, the larger far
IR/submm cross sections of the spheroids scale directly into the derived mass
estimates of the cloud ($M \propto 1/K$ in the optically thin case) and
are therefore important.
We keep in mind the above-mentioned simplifications and uncertainties
of the absorption and scattering cross sections $K_i$ and allow for
some fine-tuning of them, so that the observed extinction curve,
$K^{\rm {obs}}_{\rm {ext}}(\nu) = K_{\rm {V}} \left( \tau({\nu}) /
\tau_{\rm V}\right)_{\rm {obs}}$ (Eq.~\ref{tauK.eq}) is perfectly
matched. Another possibility to arrive at a perfect match of the
extinction curve can be achieved by ignoring uncertainties in the
cross sections and altering the dust size distribution (Kim et
al. 1994, Weingartner \& Draine 2001, Zubko et al. 2004). In our
procedure we apply initially the $K_i$'s as computed strictly
following the prescription of Sect.(\ref{C.sec}) and
Eq.~(\ref{K.eq}). Then for each wavelength new absorption and
scattering cross sections $K_i' = f \ K_i$ are derived using
\begin{equation}
f = {K^{\rm {obs}}_{\rm {ext}} \over K_{\rm {sca}}} \ \Lambda\,,
\label{f.eq}
\end{equation}
\noindent
where $0 < \Lambda \leq 1$ denotes the dust albedo from the initial
(unscaled) cross sections
\begin{equation}
\Lambda = K_{\rm {sca}} / K_{\rm {ext}}\,.
\label{albedo.eq}
\end{equation}
\noindent
We note that the procedure of Eq.~(\ref{f.eq}) is only a fine
adjustment. We vary the $K_i$'s of the large spheroids only at
$\lambda < 2\mu$m and the $K_i$'s of the small grains only in the UV
at $\lambda < 0.3\mu$m. At these wavelengths we allow variations of
the cross sections by never more than 10\%, so we set $\min {(f)} >
0.9$ and $\max {(f)} < 1.1$. Typically a few \% variation of the
initial cross sections is sufficient to perfectly match the observed
extinction profile; otherwise cross sections remain unchanged. In
Fig.~\ref{ism.fig} {\it {(top)}} both extinction models are indicated:
the one derived from the unscaled cross sections is labelled ``fit'',
and the other, using scaled cross sections, is marked ``best''. The
uncertainty of our physical description in explaining the extinction
curve is measured by the $f$--value. From this we conclude that the
initial model is accurate down to a few \%, which is within the
observational uncertainties. Therefore we prefer keeping the
description of the dust size distribution simple.
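The fine-tuning step itself is a one-liner; a minimal Python sketch of
Eqs.~(\ref{f.eq})--(\ref{albedo.eq}) with the 10\% cap reads:
\begin{verbatim}
# Minimal sketch of the fine-tuning of the cross sections: scale the
# model so that the observed extinction is matched, |f - 1| < 0.1.
import numpy as np

def fine_tune(K_ext_obs, K_ext, K_sca):
    albedo = K_sca / K_ext                 # Eq. (albedo.eq)
    f = K_ext_obs / K_sca * albedo         # Eq. (f.eq), i.e. K_obs/K_ext
    f = np.clip(f, 0.9, 1.1)               # never more than 10% change
    return f   # apply K_i' = f * K_i only at lambda < 2 um for the
               # large spheroids and lambda < 0.3 um for the small grains
\end{verbatim}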
\begin{figure}
\hspace{-1cm}
\includegraphics[width=10cm]{cmpC.ps}
\caption{Ratio of the total mass extinction cross section of large
prolate and spherical particles with same volume. Prolates, with
$a/b$ ratios as labelled, and spheres have same size distribution
and relative weights ($w_i$) as for dust in the solar neighbourhood
(Table~\ref{para.tab}). \label{cmpC.fig} }
\end{figure}
\subsection {Element abundances \label{abu.sec}}
Estimating the absolute elemental abundances is a tricky task, and in
the literature there is no consensus yet as to their precise values.
To give one example, for the cosmic (solar or stellar) C/H abundance
ratio, expressed in ppm, one finds values of: 417 (Cameron \&
Fegley\ 1982), 363 (Anders \& Grevesse\ 1989), 398 (Grevesse et
al.\ 1993), 330 (Grevesse \& Sauval\ 1998), 391 (Holweger\ 2001), 245
(Asplund \& Garcia-Perez\ 2004), 269 (Asplund et al.\ 2009), 316
(Caffau et al.\ 2010), 245 (Lodders\ 2010), and 214 (Nieva \&
Przybilla\ 2012). Towards 21 sight lines Parvathi et
al. (2012) derive C/H ratios between 69 and 414\,ppm. Nozawa \&
Fukugita (2013) consider a solar abundance of C/H = 251\,ppm with a
scatter between 125 and 500\,ppm. For dust models one extra
complication appears as one needs to estimate how much of the carbon
is depleted from the gas into the grains. Present estimates are that
60\% -- 70\% of all C atoms are locked in dust particles (Sofia et
al. 2011, Cardelli 1996), whereas earlier values range between 30 --
40 \% (Sofia et al. 2004). From extinction fitting, Mulas et
al. (2013) derive an average C abundance in grains of $145$\,ppm and
estimate an uncertainty of about a factor of two. The abundance of O
is uncertain within a factor of two. Variations of absolute abundance
estimates are noticed for elements such as Si, Mg and Fe, for which
one assumes that they are completely condensed (for a recent review on
dust abundances see Voshchinnikov et al. 2012). Averaging over all
stars of the Voshchinnikov \& Henning (2010) sample the total Si
abundance is $25 \pm 3$\,ppm.
We design a dust model where only relative abundances need to be
specified. These are the weight factors $w_i$ as introduced in
Eq.~(\ref{w.eq}). The weight factors prevent us from introducing
systematic errors of the absolute dust abundances into the model.
Still they can be easily converted into absolute abundances of element
$i$ in the dust. We find for the solar neighbourhood a total C
abundance of $w_{\rm {C}} = 37.2$\% (Table ~\ref{para.tab}). To
exemplify matters let us assume that the absolute C abundance in dust
is $183$\,ppm and that of Si is $22$\,ppm. In this case one converts the
weight factors into absolute element abundances of the dust
populations to $\Upsilon_{\rm {aC}} = 143.5$\,ppm, $\Upsilon_{\rm
{Si}} = 19.1$\,ppm, $\Upsilon_{\rm {gr}} = 21$\,ppm, $\Upsilon_{\rm
{sSi}} = 2.9$\,ppm, $\Upsilon_{\rm {PAHs}} = 6.7$\,ppm,
$\Upsilon_{\rm {PAHb}} = 11.5$\,ppm, respectively. These numbers can
be updated following Eq.~\ref{w.eq} and Table~\ref{para.tab} whenever
more accurate estimates of absolute element abundances in the dust
become available.
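The conversion is easily scripted; the following minimal Python sketch reproduces
the worked example above (the population names are merely dictionary keys, not new
notation):
\begin{verbatim}
# Minimal sketch of Eq. (w.eq): relative weights w_i from absolute
# dust abundances (ppm), using the example numbers quoted in the text.
MU_C, MU_SI = 12.0, 168.0

def weights(ppm_C_pops, ppm_Si_pops):
    mass = (sum(ppm_C_pops.values()) * MU_C
            + sum(ppm_Si_pops.values()) * MU_SI)
    wC = {k: v * MU_C / mass for k, v in ppm_C_pops.items()}
    wSi = {k: v * MU_SI / mass for k, v in ppm_Si_pops.items()}
    return wC, wSi

wC, wSi = weights({'aC': 143.5, 'gr': 21.0, 'PAHs': 6.7, 'PAHb': 11.5},
                  {'Si': 19.1, 'sSi': 2.9})
print(sum(wC.values()))    # ~0.372, i.e. w_C = 37.2% as quoted
\end{verbatim}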
\subsection {Optically thin emission}
For optically thin regions we model the emission spectrum of the
source computed for 1\,g of dust at a given temperature. The emission
$\epsilon_i (r)$ of a dust particle of material $i$ and radius $r$ is
\begin{equation}
\begin{array} {r l}
\epsilon_{i}(r) =& \mathlarger{\int} {K_{\nu{{, i}}}^{abs} (r) \, J_{\nu} \, \mathrm{d}\nu} \\
& \\
=& \mathlarger{\int} {K_{\nu{{, i}}}^{abs} (r) \, P(r,T) \, B_{\nu}(T) \, \mathrm{d}T \, \mathrm{d}\nu }\,,
\label{emis.eq}
\end{array}
\end{equation}
\noindent
where the mass absorption cross sections are defined in
Eqs.~(\ref{K.eq},~\ref{Kpah.eq}), $J_{\nu}$ denotes the mean
intensity, $B_{\nu}(T)$ is the Planck function and $P(r,T)$ is the
temperature distribution function that gives the probability of
finding a particle of material $i$ and radius $r$ at temperature
$T$. This function is evaluated using an iterative scheme that is
described by Kr\"ugel (2008). The $P(T)$ function needs only to be
evaluated for small grains as it approaches a $\delta$-function for
large particles. The total emission $\epsilon_{\nu}$ is given as the sum
of the emissions $\epsilon_{{i,} \nu}(r)$ of all dust components.
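In the large-grain limit, where $P(T)$ collapses to a $\delta$-function at the
equilibrium temperature $T_{\rm d}$, Eq.~(\ref{emis.eq}) reduces to
$\epsilon_\nu = K^{abs}_\nu \, B_\nu(T_{\rm d})$; a minimal Python sketch
(cgs units) reads:
\begin{verbatim}
# Minimal sketch of the optically thin emission in the large-grain
# limit: eps_nu = K_abs(nu) * B_nu(T_d).
import numpy as np

H, KB, C = 6.62607e-27, 1.38065e-16, 2.99792458e10  # cgs constants

def planck(nu, T):
    """B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def emission_large_grain(nu, K_abs, T_d):
    """Emission per gram of dust at the equilibrium temperature T_d."""
    return K_abs * planck(nu, T_d)
\end{verbatim}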
\subsection {Dust radiative transfer}
For dust enshrouded sources we compute their emission spectrum by
solving the radiative transfer problem. Dropping for clarity the
frequency dependency of the variables, the radiative transfer equation
of the intensity $I$ is
\begin{equation}
I(\tau) = I(0) \ e^{-\tau} \ + \ \int_0^{\tau} S(\tau') \ e^{-(\tau - \tau')} \mathrm{d}\tau'\,.
\end{equation}
\noindent
We take as source function
\begin{equation}
S = {{K_{\rm {sca}} J + \sum_{i} \epsilon_i} \over {K_{\rm {ext}}}}\,,
\end{equation}
where $\epsilon_i$ is the emission of dust component $i$ computed
according to Eq.~(\ref{emis.eq}). The problem is solved by ray tracing
and with the code described in Kr\"ugel (2008). Dust temperatures and
$P(T)$ are derived at various distances from the source. The star is
placed at the centre of the cloud, which is considered to be spherically
symmetric. Our solution of the problem for arbitrary dust geometries
is discussed by Heymann \& Siebenmorgen (2012).
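For illustration, the formal solution along a single ray can be discretized as
follows (a minimal Python sketch assuming a piecewise-constant source function on
an optical-depth grid; this is a didactic sketch, not the ray-tracing code of
Kr\"ugel 2008):
\begin{verbatim}
# Minimal sketch of the formal solution of the transfer equation
# along one ray: I(tau_n) = I0 e^{-tau_n} + int S e^{-(tau_n - t)} dt.
import numpy as np

def formal_solution(I0, tau, S):
    """tau[0..n] and S[0..n] at the working frequency; returns I(tau_n)."""
    I = I0
    for k in range(1, len(tau)):
        dtau = tau[k] - tau[k - 1]
        S_mid = 0.5 * (S[k] + S[k - 1])       # cell-averaged source
        I = I * np.exp(-dtau) + S_mid * (1.0 - np.exp(-dtau))
    return I
\end{verbatim}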
\begin{figure}
\hspace{-0.7cm}
\includegraphics[width=10.cm]{Polab.ps}
\caption{Linear (top) and circular (bottom) polarisation spectra of
silicates with $r_{-}=100$\,nm, $r_{+}=450$\,nm, $q=3.5$. The axial
ratios $a/b$ and grain shapes (prolate, oblate) are
indicated. The circular polarisation spectra are normalized to
their maxima.\label{Polab.fig} }
\end{figure}
\begin{figure}
\includegraphics[width=8.cm]{PolRpmax.ps}
\caption{Influence of the upper, $r_{+}$, and lower, $r_{-}$, limit
of the particle radii of aligned silicates on the spectral shape
of the dichroic polarisation. Shown are prolates with $a/b=2$
and $q=3.5$. Top: $r_{+}$ is held constant at 450\,nm
and polarisation spectra are computed for $r_{-} = 50$, 80, and
200\,nm that give rise to a maximum polarisation of $p/\tau_{\rm
V}=2.1$\%, 1.6\%, and 0.8\%, respectively. Bottom:
lower limit of $r_{-} = 100$\,nm and
varying $r_{+} = 300$, 450, and 600\,nm that produce a maximum
polarisation of $p/\tau_{\rm V}=1.7$\%, 1.4\%, and 1.2\%,
respectively. \label{PolRpmax.fig}}
\end{figure}
\begin{figure}
\includegraphics[width=8.cm]{ExtPolq.ps}
\caption{Influence of the exponent of the dust size distribution
$q$ on the extinction and linear polarisation curve. For the
polarisation we consider prolates made up of silicates with
$r_{+}=450$\,nm and other parameters as for the solar neighbourhood
(Table~\ref{para.tab}). \label{ExtPolq.fig}}
\end{figure}
\subsection {Linear polarisation \label{ismpol.sec}}
Observations of reddened stars frequently show linear polarisation of
several percent. These stars have often such thin dust shells, if any,
that the polarisation cannot be explained by circumstellar dust
(Scicluna et al. 2013). As discussed in Sect.~\ref{spheroids.sec},
partly aligned non--spherical grains have different extinction with
respect to their orientation, and therefore they polarise the
radiation. The polarisation scales with the amount of dust, hence with
the optical depth towards the particular sight line
(Eqs.~\ref{tauCext.eq}, \ref{pCp.eq}). In the model the linear
polarisation cross section $K_{\rm {p}}({\nu})$ is computed utilizing
${C}_{{\rm p}}(\nu)$ (Eq.~\ref{Cp.eq}) and replacing subscript {\it
{ext}} by {\it p} in Eqs.~(\ref{Ktot.eq}, \ref{K.eq}). Observations
of linear polarisation by dichroic extinction are modelled using:
\begin{equation}
p/\tau_{\rm V} = K_{\rm {p}}(\nu) / K_{\rm {ext, V}}\,.
\label{ptau.eq}
\end{equation}
\noindent
We apply the dust model with the parameters of Table~\ref{para.tab},
col.2. For grain alignment we assume the IDG mechanism of
Eq. (\ref{align}). We consider moderately elongated particles with
$a/b=2$ and assume that only large grains with sufficient inertia are
aligned and that grains smaller than about 50\,nm are randomly
oriented. We show in Fig.~\ref{Polab.fig} that larger axial ratios
increase the maximum polarisation and do not influence the spectral
shape of the polarisation curve strongly. However, for wavelengths
below $\lambda_{\rm {max}}$, one notices that oblate particles have a
stronger decline in the polarisation than prolates.
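Given pre-computed cross sections, Eq.~(\ref{ptau.eq}) is evaluated
directly. The sketch below assumes that $K_{\rm p}$ and $K_{\rm ext}$
have already been tabulated for partly aligned spheroids (here they
are replaced by synthetic placeholders) and simply forms the ratio
and locates the peak.
\begin{verbatim}
import numpy as np

# Sketch: dichroic polarisation per unit visual extinction,
#   p/tau_V = K_p(lambda)/K_ext(V),   (Eq. ptau.eq)
# assuming K_p and K_ext per g of dust were pre-computed for partly
# aligned spheroids; synthetic placeholders are used here.
lam   = np.linspace(0.2, 2.0, 400)                  # micron
K_ext = 1.0/lam                                     # placeholder
K_p   = 0.02*np.exp(-1.15*np.log(0.55/lam)**2)/lam  # placeholder

K_ext_V    = np.interp(0.55, lam, K_ext)            # V band
p_over_tau = K_p/K_ext_V
i = np.argmax(p_over_tau)
print("lambda_max = %.2f micron, p/tau_V = %.2f %%"
      % (lam[i], 100.0*p_over_tau[i]))
\end{verbatim}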
\begin{table*}
\caption{Log of FORS spectro-polarimetric observations, all of which were obtained on
2011-01-21.\label{Obs.tab} }
\begin{center}
\begin{tabular}{|ll|r|c|r|c|r|c|r|c|r|}
\hline \hline
\multicolumn{3}{|c} {Target} &
\multicolumn{4}{|c|}{Linear polarisation} &
\multicolumn{4}{c|}{Circular polarisation} \\
\hline
\multicolumn{2}{|c|} {Name} & PA &
\multicolumn{2}{c|}{grism 600\,B}&
\multicolumn{2}{c|}{grism 1200\,R}&
\multicolumn{2}{c|}{grism 600\,B}&
\multicolumn{2}{c|}{grism 1200\,R}\\
\hline
\multicolumn{2}{|c|}{ } &(\degr)&
\multicolumn{1}{c|}{(hh:mm)} &
\multicolumn{1}{c|}{$t$ (sec)} &
\multicolumn{1}{c|}{(hh:mm)} &
\multicolumn{1}{c|}{$t$ (sec)} &
\multicolumn{1}{c|}{(hh:mm)} &
\multicolumn{1}{c|}{$t$ (sec)} &
\multicolumn{1}{c|}{(hh:mm)} &
\multicolumn{1}{c|}{$t$ (sec)} \\
\hline
HD\,35149 &=\,HR~1770 & 0 & 00:24 & 29 & 00:44 & 24 & 00:49 & 34 & 00:52 & 40 \\
& & 90 & 01:38 & 12 & 01:21 & 16 & 01:47 & 20 & 01:29 & 32 \\[2mm]
HD\,37061 &=\,$\nu$\,Ori & 30 & 02:04 & 40 & 02:23 & 40 & 02:12 & 85 & 02:32 & 101 \\
& & 60 & 04:10 & 60 & $\cdots$ & $\cdots$& 04:17 & 60 & $\cdots$ & $\cdots$ \\
& & 90 & 03:51 & 90 & $\cdots$ & $\cdots$& 04:58 & 60 & $\cdots$ & $\cdots$ \\
& &120 & 03:08 & 51 & 03:27 & 36 & 04:18 & 150 & 00:00 & 300 \\
& &150 & 04:28 & 56 & 00:00 & 300 & 04:35 & 60 & $\cdots$ & $\cdots$\\
& &210 & 02:48 & 12 & 00:00 & 300 & 02:56 & 48 & $\cdots$ & $\cdots$\\[2mm]
HD\,37903 &=\,BD$-$02\,1345& 0 & 04:49 & 150 & 05:12 & 175 & 05:00 & 280 & 05:25 & 435 \\
& & 0 & 05:54 & 200 & 06:12 & 235 & 06:03 & 200 & 06:22 & 240 \\[2mm]
HD\,93250 &=\,CD$-$58\,3537& 0 & 06:47 & 67 & 07:07 & 80 & 06:59 & 160 & 07:19 & 115 \\
& & 90 & 07:36 & 25 & 07:57 & 140 & 07:46 & 200 & 08:05 & 180 \\[2mm]
HD\,99872 &=\,HR~4425 & 0 & 09:16 & 40 & 09:37 & 80 & 09:22 & 40 & 09:30 & 70 \\[2mm]
HD\,94660 &=\,HR~4263 & 0 & 08:27 & 24 & 00:00 & 300 & 08:34 & 28 & $\cdots$ & $\cdots$\\[2mm]
Ve 6-23 &=\,Hen\,3-248 & 0 & 09:00 & 720 & $\cdots$ & $\cdots$& $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
One can see from Fig.~\ref{PolRpmax.fig} that the curvature of the
derived polarisation spectrum is sensitive to the choice of the
lower, $r_{\rm {-}}$, and upper, $r_{+}$, particle radii of the
aligned grains. Reducing the lower size limit of aligned grains,
$r_{-}$, enhances the polarisation at short wavelengths, while
increasing the upper size limit, $r_{+}$, produces stronger
polarisation at longer wavelengths. A similar trend is obtained by
altering the exponent $q$ of the size distribution: for larger $q$
there are more small particles, the polarisation shifts to shorter
wavelengths, the maximum
polarisation shrinks, and the spectrum broadens. The increase of
$r_{\rm {-}}$ and the decrease of $q$ can be associated with the
growth of dust grains due to accretion and coagulation
processes. Voshchinnikov et al. (2013) find that both mechanisms shift
the maximum polarisation to longer wavelengths and narrow the
polarisation curve. These characteristics are parameterised by the coefficients $\lambda_{\max}$ and $k_p$ of
the Serkowski curve Eq.~(\ref{serk.eq}), respectively. However, this
effect is less pronounced than altering the range of particle sizes of
aligned grains. In Fig.\ref{ExtPolq.fig} we show the polarisation
curve for $q = 3$, 3.5, and 4 of silicates with prolate shape $a/b=2$,
$r_{\rm {-}} = 100$\,nm, $r_{+}=450$\,nm using IDG alignment as well as
the influence of $q$ on the derived extinction curve. One notices a
strong effect in which larger $q$ values produce steeper UV
extinction. In summary the extinction is sensitive to variations of
$q$ while the polarisation spectrum depends critically on the size
spectrum of aligned grains.
The model predicts a strong polarisation in the silicate band
(Fig.~\ref{PolRpmax.fig}). This is in agreement with the many
detections of a polarised signal at that wavelength (Smith et
al. 2000). However, in the mid IR two orthogonal mechanisms may be at
work and produce the observed linear polarisation. There is either
dichroic absorption as discussed in this work and observed in
proto-stellar systems (Siebenmorgen \& Kr\"ugel 2000) or dichroic
emission by elongated dust particles as observed on galactic scales
(Siebenmorgen et al., 2001).
The angle between the line of sight and the (unsigned) magnetic field is in the
limits $0\degr \leq \Omega \leq 90\degr$. We find that the spectral
shape of the linear polarisation only marginally depends on
$\Omega$, contrary to the maximum of the linear polarisation. The
polarisation is strongest for $\Omega =90\degr$. For prolate silicate
particles with size distribution and IDG alignment characteristic of
the ISM the maximum polarisation decreases for decreasing
$\Omega$. For $\Omega = 60\degr$ the polarisation decreases to 60\% of
that found at maximum, and for $\Omega=30\degr$ further down to
$\sim$\,20\%. The dependency of $p$ with $\Omega$ is used by
Voshchinnikov (2012) to estimate the orientation of the magnetic field
in the direction of the polarised source. Unless otherwise stated we
use $\Omega=90\degr$.
\subsection {Circular polarisation}
The dust model also predicts the observed circular polarisation of
light (Martin \& Campbell 1976, Martin 1978). The circular
polarisation spectrum, as given by Eq.~(\ref{cp.eq}), is shown in
Fig.~\ref{Polab.fig}, normalised to the maximum of $V/I$. We apply the
same dust parameters as for the linear polarisation spectrum described
above. We note that $V/I$ changes sign at wavelengths close to the
position of maximum of the linear polarisation (Voshchinnikov
2004). This may explain many null detections of circular polarisation
in the visual part of the spectrum because $\lambda_{\rm {max}} \sim
0.5\mu$m. The local maxima and minima of $V/I$ critically depend on
elongation and geometry of the grain: prolate versus oblate. In fact,
that circular polarisation can provide new insights into the optical
anisotropy of the ISM was proposed a long time ago (van de Hulst
1957, Kemp \& Wolstencroft 1972). Typically, for both particle shapes,
the maximum of $V/I$ is $\sim 7 \times 10^{-5}$ for $a/b=2$, and $\sim
25 \times 10^{-5}$ for $a/b=4$. Detecting this amount of polarisation is
at the very limit of the observational capabilities of the current
instrumentation.
\section{Spectro-polarimetric observations} \label{obs.sec}
Linear and circular spectro-polarimetric observations were obtained
with the FORS instrument of the VLT (Appenzeller et al. 1998). Our
main goal is to constrain the dust models with ultra-high accuracy
polarisation measurements. Stars were selected from the sample
provided by Voshchinnikov \& Henning (2010). Towards these sight lines
linear polarisation was previously detected and extinction curves are
available. Targets were chosen based on visibility constraints at the time of
the observations. Among the various available grisms, we adopted those
with higher resolution. The observed wavelength ranges are
340-610\,nm in grism 600\,B, and 580-730\,nm in grism 1200\,R. This
grism choice was driven by the practical need to accumulate
a very high signal-to-noise ratio (SNR) in the wavelength range where we
expect a change of sign of the circular
polarisation. Table~\ref{Obs.tab} gives our target list, the
instrument position angle on sky (counted counterclockwise from North
to East), the UT at mid exposure, and the total exposure time in
seconds for each setting.
Linear polarimetric measurements were obtained by setting the
$\lambda/2$ retarder waveplate at position angles 0\degr, 22\fdg5,
45\degr, and 67\fdg5. All circular polarisation measurements were
obtained by executing once or twice the sequence with the $\lambda/4$
retarder waveplate at 315\degr, 45\degr, 135\degr, and
225\degr. Because targets are bright, to minimise risk of saturation
we set the slit width to $0\farcs5$; this provides a spectral
resolution of $\sim 1500$ and $\sim 4200$ in grism 600\,B and 1200\,R,
respectively. Observations of HD\,99872 and Ve\,6-23 were performed
with a slit width of 1\arcsec\, providing a spectral resolution of
$\sim 800$ and $\sim 1200$ in grism 600\,B and 1200\,R, respectively.
Finally, spectra are rebinned by 256 pixels to achieve the highest
possible precision in the continuum. This gives a spectral bin of
$\sim 17$\,nm and $\sim 10$\,nm in grisms 600\,B and 1200\,R,
respectively. It allows us to push the signal-to-noise ratio of the
circular polarisation measurements to a level of several tens of
thousands over a 10\,nm spectral bin.
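As a consistency note, the quoted bins imply dispersions of roughly
0.07 and 0.04\,nm per pixel and a photon-noise gain of
$\sqrt{256}=16$ per bin; a back-of-the-envelope sketch (the
dispersions are inferred from the quoted bins, not taken from the
instrument manual):
\begin{verbatim}
import math

# Back-of-the-envelope check of the rebinning; the dispersions are
# inferred from the quoted bins, not taken from the FORS manual.
nbin = 256
for grism, bin_nm in (("600B", 17.0), ("1200R", 10.0)):
    print("grism %s: ~%.3f nm/pixel, photon-noise gain sqrt(256)=%d"
          % (grism, bin_nm/nbin, int(math.sqrt(nbin))))
\end{verbatim}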
\subsection{Standard stars}\label{Sect_Standard}
To verify the alignment of the polarimetric optics, and to measure the
instrumental polarisation, we observed two standard stars: HD\,94660
and Ve\,6-23. From the circular polarimetric observations of the
magnetic star HD\,94660 we expected a zero signal in the continuum,
and, in spectral lines, a signal consistent with a mean longitudinal
magnetic field of about -2\,kG (Bagnulo et al. 2002). Indeed we found
in the continuum a circular polarisation signal consistent with zero.
The magnetic field was measured following the technique described in
Bagnulo et al. (2002) on non-rebinned data. We found a value
consistent with the one above and concluded that the $\lambda/4$
retarder waveplate was correctly aligned.
In Ve\,6-23 we measured a linear polarisation consistent with
the expected values of about 7.1\,\% in the $B$ band, 7.9\,\% in the
$V$ band, and with a position angle of $\sim 173\degr$\ and 172\degr\ in
$B$ and $V$, respectively (Fossati et al. 2007). This demonstrates that
the $\lambda/2$ retarder waveplate was correctly set. An unexpected
variation of the position angle $\Theta$ is observed at $\lambda \la
400$\,nm, possibly related to a substantial drop in the
signal-to-noise ratio. Patat \& Romaniello (2006) identified such a
spurious and asymmetric polarisation field that is visible in FORS1
imaging through the B band. We measured the linear polarisation of
HD\,94660, expecting a very low level in the continuum, and we found a
signal of $\sim 0.18$\,\% that is discussed below.
\begin{figure}
\includegraphics[width=9.cm]{Fig_LIN_IP.ps}
\caption{\label{Fig_LIN_IP} Instrumental linear polarisation spectra
derived from observing pairs as labelled together with the continuum
polarisation of HD\,94660. Stokes parameters $Q/I$ (top) and
$U/I$ (bottom) are measured with respect to the instrument
reference system.}
\end{figure}
\begin{figure}
\includegraphics[width=9.cm]{Fig_HD37061.ps}
\caption{\label{Fig_HD37061} Linear polarisation spectra of HD\,37061
that are corrected for instrumental contribution at various PA as
labelled. Stokes parameters $Q/I$ (top) and $U/I$ (bottom)
are measured having as a reference direction the celestial meridian
passing through the target. }
\end{figure}
\begin{figure}
\includegraphics[width=8.5cm]{Fig_XTalk.ps}
\caption{\label{Fig_XTalk} Polarisation at $\lambda = 500$\,nm of
HD\,37061 at various instrument position angles. $Q/I$ (blue dashed
line) and $U/I$ (red dotted line) are measured in the instrument
reference system. The circular polarisation (black solid line) is
approximately one tenth of the linear polarisation in the principal
plane of the Wollaston prism. }
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{Fig_CIR_IP.ps}
\vspace{-5cm}
\caption{\label{Fig_CIR_IP} Circular polarisation in the continuum of
HD\,94660 at instrument PA=$0^\circ$ and of other stars from
observing pairs as labelled.}
\end{figure}
\subsection{Instrumental linear polarisation}\label{Sect_LIP}
We observed the stars at different instrument position angles on sky
to evaluate the instrumental polarisation. The rationale of this
observing strategy is that observations of circular polarisation
should not depend on the position angle (PA) of the instrument, while
linear polarisation measurements follow a well-known transformation
(see Eq.~(10) in Bagnulo et al. 2009). For example, denoting with
$P_Q^{(\alpha)}$, $P_U^{(\alpha)}$, $P_V^{(\alpha)}$ the reduced
Stokes parameters measured with the instrument position at PA=
$\alpha$ on sky, one should find:
\begin{equation}
\label{Eq_Rot}
\begin{array}{rcl}
P_Q^{(90)} &=& -P_Q^{(0)}\\[2mm]
P_U^{(90)} &=& -P_U^{(0)}\\[2mm]
P_V^{(90)} &=& P_V^{(0)}\; . \\
\end{array}
\end{equation}
Departures from this behaviour may be due to spurious instrumental
effects, which should not change as the instrument rotates. Hough et
al. (2007) suggest evaluating the instrumental linear polarisation as
\begin{equation}
\begin{array}{rcl}
P_Q^{\rm instr} &=& \frac{1}{2}\, \left(P_Q^{(0)} - P_Q^{(90)}\right)\\
P_U^{\rm instr} &=& \frac{1}{2}\, \left(P_U^{(0)} - P_U^{(90)}\right) \, . \\
\end{array}
\end{equation}
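In practice the pair method amounts to forming half-differences and
half-sums of each pair of measurements; a minimal sketch with made-up
numbers, following the conventions of the equations above:
\begin{verbatim}
# Sketch of the pair method: for two observations whose instrument
# position angles differ by 90 deg, the spurious instrumental signal
# is the half-difference and the source signal the half-sum of the
# reduced Stokes parameters (conventions as in the equations above).
def pair_method(P_0, P_90):
    """P_0, P_90: (Q/I, U/I) at the two instrument position angles."""
    instr  = tuple(0.5*(a - b) for a, b in zip(P_0, P_90))
    source = tuple(0.5*(a + b) for a, b in zip(P_0, P_90))
    return instr, source

# made-up numbers in per cent, for illustration only
instr, source = pair_method((0.46, -0.12), (0.14, 0.08))
print("instrumental:", instr, "source:", source)
\end{verbatim}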
Instrumental polarisation of $\sim 0.1$\,\% is identified in FORS1
data by Fossati et al. (2007), and is discussed by Bagnulo (2011).
From the analysis of our data we confirm that FORS2 measurements are
affected by an instrumental polarisation that depends on the adopted
grism. By combining the measurements of HD\,37061 obtained at
instrument position angles 30\degr\ and 120\degr\ in grism 600\,B, we
measure a spurious signal of linear polarisation of about $0.16$\,\%
that is nearly constant with wavelength along the principal plane of
the Wollaston prism. In grism 1200\,R, we find that the instrumental
contribution in the principal plane of the Wollaston prism is linearly
changing with wavelength from $\sim 0.2$\,\% at $\lambda=580$\,nm to
$\sim 0.32$\,\% at $\lambda=720$\,nm; and in the perpendicular plane
we find a value $\sim -0.1$\,\% that is nearly constant with
wavelength (Fig.~\ref{Fig_LIN_IP}). From the observations of the same
star obtained at instrument position angles 120\degr\ and 210\degr\
(grism 600\,B only) we retrieve $\sim 0.14$\,\% in the principal plane
of the Wollaston prism. Similar values of the instrumental
polarisation are derived for the observations of the other two targets
that are shown in Fig.~\ref{Fig_LIN_IP}. A higher instrumental
polarisation is observed at $\lambda \la 400$\,nm than at longer
wavelengths.
We conclude that the instrumental polarisation is either not constant
in time or depends on the telescope pointing. It is, at least in part,
related to the grism, and is much higher and wavelength dependent in
the measurements obtained with the holographic grism 1200\,R than
in those obtained with grism 600\,B. We note that holographic grisms
are known to have a transmission that strongly depends on the
polarisation of the incoming radiation. Nevertheless, the instrumental
polarisation may depend also on the telescope optics and the position
of the Longitudinal Atmospheric Dispersion Corrector (LADC, Avila et
al. 1997). Our experiments to measure the instrumental polarisation by
observing at different PA suggest that the linear polarisation signal
measured in HD\,94660 is at most instrumental (Fig.~\ref{Fig_LIN_IP}).
Figure~\ref{Fig_HD37061} shows all our measurements of HD\,37061
obtained at various instrument PA and corrected for our estimated
instrumental polarisation. All these spectra refer to the north
celestial meridian. These data show that, although the photon noise
in a spectral bin of 10\,nm is well below 0.01\,\%, the accuracy of
our linear polarisation measurements is limited by an instrumental
effect that we have calibrated, probably to within $\sim 0.05$\,\%.
\subsection{Instrumental circular polarisation}\label{Sect_CIP}
In our science targets, we also find that circular polarisation
measurements depend on the instrument PA. The fact that a zero
circular polarisation is measured in the continuum of HD\,94660, a star
not linearly polarised, suggests that the cross-talk from intensity
$I$ to Stokes $V$ is negligible, and that the spurious circular
polarisation is due to cross-talk from linear to circular
polarisation. The observations of HD\,37061 strongly support this
hypothesis, as the measured circular polarisation seems roughly
proportional to the $Q/I$ value measured in the instrument reference
system (Fig.~\ref{Fig_XTalk}). This phenomenon is discussed by
Bagnulo et al.\ (2009). It can be physically ascribed either to the
instrument collimator or the LADC. If cross-talk is stable, the
polarisation intrinsic to the source can be obtained by averaging the
signals measured at two instrument PAs that differ by 90\degr, i.e.,
using exactly the same method adopted for linear polarisation. For
HD\,37061, with grism 600\,B, the average signal is $ (P_V^{(30)} +
P_V^{(120)})/2 \sim 0.03$\,\% (Fig.~\ref{Fig_CIR_IP}). By combining
the pairs of observations at 120\degr\ and 210\degr\ we find $\sim
0.1$\,\%. Therefore the cross-talk from linear to circular
polarisation is not constant with time, telescope or instrument
position. In grism 1200\,R we obtain $(P_V^{(30)} + P_V^{(120)})/2
\sim 0.03$\,\%. In Fig.~\ref{Fig_CIR_IP} we also show the measurements
in grism 600\,B towards HD\,35149, giving $(P_V^{(0)} + P_V^{(90)})/2
\sim 0.05$\,\%, and towards HD\,93250, giving $\sim 0.02$\,\%.
Our observing strategy reduces the cross-talk from linear to circular
polarisation. Nevertheless, the instrumental issues
require a more accurate calibration. We are able to achieve in the
continuum of the circular polarisation spectrum an accuracy of $\sim
0.03$\,\%. This is, however, insufficient to test the theoretical
predictions computed in Fig.~\ref{Polab.fig}. Linear polarisation
spectra that are corrected for instrumental signatures of the stars
HD\,37061 (Fig.~\ref{HD37061.fig}), HD\,93250
(Fig.~\ref{HD93250.fig}), HD\,99872 (Fig.~\ref{HD99872.fig}), and
HD\,37903 (Fig.~\ref{exthd37903.fig}) are discussed below.
\section{Fitting results}
The dust model is applied to average properties of the ISM and towards
specific sight-lines. We set up models so that abundance constraints
are respected to within their uncertainties (Sect.~\ref{abu.sec}), and
we fit extinction, polarisation and emission spectra. The observed
extinction is a line-of-sight measurement to the star, while the IR
emission is integrated over a larger solid angle and along the
entire line-of-sight through the Galaxy. In principle the two
measurements probe different dust column densities. Therefore the
extra assumption is made that the dust responsible for extinction and
emission has similar physical characteristics. Dust emission from
dense and cold background regions may contribute significantly
to the FIR/submm. However, PAHs are mostly excited by UV photons,
which are only available close to the source, and the same holds for
warm dust that needs heating by a nearby source.
\subsection {Dust in the solar neighbourhood \label{ism.sec}}
\begin{figure}
{\vspace{-0.35cm}
\hspace{-0.075cm} \includegraphics[width=9.2cm]{Extin_ISM.ps}}
\vspace{-0.25cm}
{\hspace{-0.075cm} \includegraphics[width=9.2cm]{sedDirbe.ps}}
{\hspace{-0.075cm} \includegraphics[width=9.2cm]{PolsigISM.ps}}
{\vspace{-0.25cm}
\caption{Dust in the solar neighbourhood. Mean (dashed) and
1$\sigma$ variation (hatched area) of the observed extinction
curves in the ISM, (up to 8.6\,$\mu$m$^{-1}$ by Fitzpatrick (1999)
and $\le 10$\,$\mu$m$^{-1}$ by Fitzpatrick \& Massa (2007)). The
contribution of the individual dust components to the total
extinction of the model with scaled (magenta line) and unscaled
(magenta dotted) cross sections (Eq.~\ref{f.eq}) are given (top).
Emission normalized per H atom when dust is heated by the
ISRF. High Galactic latitude observations with $1\sigma$ error
bars (gray) from DIRBE (Arendt et al. 1998) and FIRAS (Finkbeiner
et al. 1999). The model fluxes convolved with the band passes of
the observations are shown as filled circles. The contribution of
the dust components to the total emission (black line) is shown
(middle). Mean (dashed) and 1$\sigma$ variation (hatched area) of
the observed linear polarisation normalised to the maximum
polarisation as given by Voshchinnikov et al. (2012). The
normalised linear polarisation of silicates with prolate (black
line) and oblate (magenta dotted) shapes is shown
(bottom). \label{ism.fig}}}
\end{figure}
The average of the extinction curves over many sight lines is taken to
be representative of the diffuse ISM of the Milky Way and gives
$R_{\rm{V}} = 3.1$. Such average extinction curves and their scatter
are given by Fitzpatrick (1999) up to 8.6$\mu$m$^{-1}$ and Fitzpatrick
\& Massa (2007) up to 10$\mu$m$^{-1}$. They are displayed in
Fig.~\ref{ism.fig} as a ratio of the optical depths. The average
extinction curve is approximated by varying the exponent of the dust
size distribution $q$ and the relative weights $w_i$ of the dust
populations. The upper size limit of large grains is derived by
fitting the mean polarisation spectrum of the Milky Way discussed
below. We find $r_{+} = 0.45\mu$m. In 1\,g of dust we choose: 546\,mg
to be in large silicates, 292\,mg in amorphous carbon, 43\,mg in
graphite, 82\,mg in small silicates, 14\,mg in small and 23\,mg in
large PAHs, respectively. A fit that is consistent to within the
errors of the mean extinction curve is derived with the parameters of
Table~\ref{para.tab} and is shown in Fig.~\ref{ism.fig}.
Comparing results of the extinction fitting with infrared observations
at high galactic latitudes has become a kind of benchmark for models
that aim to reproduce dust emission spectra of the diffuse ISM in the
Milky Way (D\'{e}sert et al. 1990; Siebenmorgen \& Kr\"ugel 1992; Dwek
et al. 1997; Li \& Draine 2001; Compiegne et al. 2011; Robitaille et
al. 2012). In these models the dust is heated by the mean intensity,
$J_{\nu}^{\rm {ISRF}}$, of the interstellar radiation field in the
solar neighbourhood (Mathis et al. 1983). Observations at Galactic
latitude $\vert b \vert \ga 25\degr$ using DIRBE (Arendt et al. 1998)
and FIRAS (Finkbeiner et al. 1999) on board COBE are given in
$\lambda I_{\lambda} /N_{\rm H}$ (erg/s/sr/H-atom), with hydrogen
column density $N_{\rm H}$ (H-atom/cm$^2$). Therefore we need to
convert the units and scale the dust emission spectrum computed by
Eq.~(\ref{emis.eq}) by the dust mass $m_d$ (g--dust/H-atom); this
gives $I_{\lambda} /N_{\rm H} = m_d \ \epsilon_{\lambda}$.
\begin{figure}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{Extin_HD37061.ps}}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{pl_linpolHD37061.ps}}
\caption{\label{HD37061.fig} Extinction curve (top) and polarised
spectrum (bottom) of HD\,37061. The observed extinction curve
(black dashed line) with 1\,$\sigma$ error bars (hatched area) and
the dust model (magenta solid line) with contribution from
individual dust components are shown as labelled. The FORS2 linear
polarisation spectra (circles) and the model (black solid line) is
shown. Model parameters are given in Table~\ref{para.tab}. }
\end{figure}
\begin{figure} [htb]
{\hspace{-0.75cm} \includegraphics[width=10.cm]{Extin_HD93250.ps}}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{pl_linpolHD93250.ps}}
\caption{\label{HD93250.fig} Same as Fig.\ref{HD37061.fig} for HD\,93250.}
\end{figure}
We derive the conversion factor by matching the model flux to that in
the 140$\mu$m DIRBE band pass. This gives $m_{\rm d}= 1.48 \times
10^{-26}$ (g-dust/H-atom) and a gas--to--dust mass ratio towards that
direction of $(1.36 \ m_{\rm p}/m_{\rm d}) \sim 153$. If one corrects
for the different specific densities $\rho_i$ of the dust materials,
our estimate is consistent within 8\% of that by Li \& Draine
(2001). A Kramers--Kronig analysis is applied by Purcell (1969),
finding as principal value an upper limit of $1.36 \ m_{\rm p}/m_{\rm
  d} <170$, where a specific density of the dust material of $\rho
\leq 2.5$\,(g/cm$^3$) and a correction factor of 0.95 for the grain
shape is assumed (cf. Eq.~(21.17) in Draine 2011). The dust mass in
the model should be taken as a lower limit because there could be
undetected dust components made up of heavy metals. For example, a
remaining part of Fe that is not embedded into the amorphous olivines
(Voshchinnikov et al. 2012) might be physically bonded in layers of
iron-fullerene clusters (Fe$\cdot$C$_{60}$, Lityaeva et al., 2006) or
other iron nanoparticles (Draine \& Hensley 2013). To our knowledge
there is no firm spectral signature of such putative components
established, so we do not consider them here. Nevertheless, in the
discussion of the uncertainty of $m_d$ one should consider the
observational uncertainties in estimates of $N_{\rm H}$ towards that
region as well.
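The quoted gas-to-dust ratio follows directly from $m_{\rm d}$; a
one-line consistency check in cgs units:
\begin{verbatim}
# Consistency check of the gas-to-dust mass ratio quoted above:
m_p = 1.6726e-24     # proton mass (g)
m_d = 1.48e-26       # g dust per H atom (this work)
print("gas-to-dust = %.1f" % (1.36*m_p/m_d))   # -> ~153.7
\end{verbatim}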
The total emission and the spectrum of each grain population is shown
in Fig.~\ref{ism.fig}. All observed in-band fluxes are fit within the
uncertainties. We compare the dust emission computed by applying
cross sections of the initial physical model with that of the
fine-tuned cross sections (Eq.~\ref{f.eq}). We find that the
difference of these models in the DIRBE band pass is less than
1\%. The graphite emission peaks in the 20--40$\mu$m region. This
local maximum in the emission is due to the optical constants. We
verified that the emission by graphite in that region becomes flatter
when applying different optical constants of graphite, such as those
provided by Laor \& Draine (1993) and Draine \& Lee (1984). Such
flatter graphite emission is shown by e.g., Siebenmorgen \& Kr\"ugel
(1992). The 12$\mu$m DIRBE band is dominated by the emission of the
PAHs. We have difficulties in fitting this band by adopting PAH cross
sections as derived for starburst nuclei. Li \& Draine (2001)
underestimate the emission in this DIRBE band by 40\%. In their model
the 11.3 and 12.7$\mu$m bands do not depend on the ionisation
degree. In the laboratory it is observed that the ratio of the C-H
stretching bands, at 11.3 and 12.7$\mu$m, over the C=C stretching
vibrations, at 6.2 and 7.7$\mu$m, decreases severalfold upon ionisation
(Tielens\ 2008). Therefore in the diffuse ISM we vary the PAH cross
section as compared to the ones derived in the harsh environment of OB
stars where PAHs are likely to be ionised (Table~\ref{pah.tab}). Data
may be explained without small silicates.
\noindent
In the optical and near IR the observed polarisation spectra of the
ISM can be fit by a mathematical formula, known as the Serkowski
(1973) curve:
\begin{equation}
\frac{p(\lambda)}{p_{\max}} = \exp \left[ -k_p \ \ln^2
\left( \frac{\lambda_{\max}}{\lambda} \right) \right]\,,
\label{serk.eq}
\end{equation}
\begin{figure} [h!tb]
{\hspace{-0.75cm} \includegraphics[width=10.cm]{Extin_HD99872.ps}}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{pl_linpolHD99872.ps}}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{sed_HD99872.ps} }
\caption{\label{HD99872.fig} Top and middle panels as in
  Fig.\ref{HD37061.fig}, but for HD\,99872. Bottom: 3 -- 36\,$\mu$m
emission observed with Spitzer/IRS (red filled circles) and WISE
(green filled circles). The photospheric emission of the star is
  represented by the dashed line.}
\end{figure}
\noindent
where the maximum polarisation is observed to be $p_{\rm {max}}/A_V
\la 3\%/$mag (Whittet 2003). In the thermal IR at $\lambda >
2.5\,\mu$m polarisation data are fit by a power--law where $p(\lambda)
\propto \lambda^{-t}$ with $1.6 \le t \le 2$ (Martin et al., 1992,
Nishiyama et al. 2006). This fit naturally breaks down in the 10$\mu$m
silicate band. The average observed linear polarisation of the ISM is
displayed in Fig.~\ref{ism.fig}. The Serkowski curve is fit without
carbon particles; only silicates are aligned. We find a good fit
assuming that silicates with particle sizes of radii between 100 --
450\,nm are aligned. Below 0.3$\mu$m the mean polarisation spectrum is
better explained by dust of prolate than oblate structure
(Fig.~\ref{ism.fig}).
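For reference, Eq.~(\ref{serk.eq}) is straightforward to evaluate;
the sketch below adopts $\lambda_{\max}=0.55\,\mu$m and $k_p=1.15$ as
typical ISM values for illustration, not as fit results.
\begin{verbatim}
import numpy as np

# Sketch: Serkowski curve, Eq. (serk.eq). lam_max = 0.55 micron and
# k_p = 1.15 are typical ISM values adopted here for illustration.
def serkowski(lam, lam_max=0.55, k_p=1.15):
    return np.exp(-k_p*np.log(lam_max/lam)**2)

lam = np.array([0.36, 0.44, 0.55, 0.70, 0.90])  # micron
print(np.round(serkowski(lam), 3))              # p/p_max per band
\end{verbatim}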
\subsection {HD\,37061}
So far we have modelled dust properties from extinction and
polarisation data averaged over various sight lines. In the following
we repeat the exercise of Sect.~\ref{ism.sec} towards particular
stars for which we present observations of the linear polarisation
spectrum, and model extinction, polarisation and, when available,
dust emission spectra.
HD\,37061 is of spectral type B1.5V and located 720\,pc from us. The
extinction curve is compiled by Fitzpatrick \& Massa (2007). The
selective extinction is $R_{\rm V} = 4.55 \pm 0.13$ and the visual
extinction $A_{\rm V} = 2.41 \pm 0.11$ (Voshchinnikov et al.
2012). ISO and Spitzer spectra of the dust emission are not
available. Polarisation spectra are observed by us with
of FORS/VLT. The spectra are consistent with earlier measurements of
the maximum linear polarisation of $p_{\rm {max}} = 1.54 \pm 0.2$\,\%
at $\lambda_{\rm {max}} = 0.64 \pm 0.04\,\mu$m by Serkowski et
al. (1975). A fit to the extinction curve and the polarisation
spectrum is shown in Fig.~\ref{HD37061.fig}. The observed
polarisation spectrum is fit by silicate grains that are of prolate
shape with IDG alignment and $\Omega \sim 55\degr$, other
parameters as of Table~\ref{para.tab}.
\subsection {HD\,93250}
This star is of spectral type O6V and located 1.25\,kpc from us. The
extinction curve is compiled by Fitzpatrick \& Massa (2007) and
between $3.3\mu \rm{m}^{-1} \la \lambda^{-1} \la 11\mu \rm{m}^{-1}$ by
Gordon et al. (2009), who present spectra of the Far Ultraviolet
Spectroscopic Explorer (FUSE), supplemented by spectra from the
International Ultraviolet Explorer (IUE). The selective extinction is
$R_{\rm V} = 3.55 \pm 0.34$ and the visual extinction $A_{\rm V} =
1.54 \pm 0.1$ (Gordon et al.\ 2009). Polarisation spectra are
observed by us in two orientations of the instrument; other
polarisation data as well as ISO or Spitzer spectra are not
available. A fit to the extinction curve and the polarisation spectrum
is shown in Fig.~\ref{HD93250.fig}. Dust parameters are summarised in
Table~\ref{para.tab}.
\subsection {HD\,99872}
The star has spectral type B3V and is located at 230\,pc from us. The
extinction curve is compiled by Fitzpatrick \& Massa (2007) and
between $3.3\mu \rm{m}^{-1} \la \lambda^{-1} \la 11\mu \rm{m}^{-1}$ by
Gordon et al. (2009). The selective extinction is $R_{\rm V} = 2.95
\pm 0.44$ and the visual extinction $A_{\rm V} = 1.07 \pm 0.04$
(Gordon et al.\ 2009). The spectral shape of the FORS polarisation
can be fit by adopting aligned silicate particles with a prolate
shape, while a contribution of aligned carbon grains is not required.
The observed maximum polarisation of 3.3\,\% is reproduced assuming
IDG alignment with efficiency computed by Eq.(\ref{align}) and $a/b
\sim 6$. Dust parameters are summarised in Table~\ref{para.tab}. A fit
to the extinction curve, polarisation spectrum, and IR emission is
shown in Fig.~\ref{HD99872.fig}. The IR emission is constrained by
WISE (Cutri et al.\ 2012) and by photometry and a Spitzer/IRS archival
spectrum (Houck et al.\ 2004). The 3 -- 32$\mu$m observations are
consistent with a 20000\,K blackbody stellar spectrum, and do not
reveal spectral features from dust. At wavelengths $\ga 32\,\mu$m the
observed flux is in excess of the photospheric emission. The excess
has such a steep rise that it is not likely to be explained by
free--free emission. It points towards emission from a cold dust
component such as a background source or a faint circumstellar dust
halo. The optical depth of such a putative halo is too small to
contribute to the observed polarisation that is therefore of
interstellar origin (Scicluna et al.\ 2013). Unfortunately it has not
been possible to find in the literature high quality data at longer
wavelengths to confirm and better constrain the far IR excess.
\section {The Reflection Nebula NGC\,2023 \label{hd37903}}
We present observations towards the star HD\,37903, which is the primary
heating source of the reflection nebula NGC\,2023. It is located in
Orion at a distance of $\sim 400$\,pc (Menten et al. 2007). The
star is of spectral type B1.5V. It has carved a quasi--spherical
dust--free HII region of radius $\leq 0.04$\,pc (Knapp et al.\
1975, Harvey et al.\ 1980). Dust emission is detected further out, up
to several arcmin, and is distributed in a kind of bubble--like
geometry (Peeters et al. 2012). In that envelope ensembles of dust
clumps, filaments and a bright southern ridge are noted in near IR
and HST images (Sheffer et al.\ 2011).
\begin{figure} [h!tb]
\hspace{-0.cm}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{Extin_HD37903.ps}}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{pl_albedo.ps}}
{\hspace{-0.75cm} \includegraphics[width=10.cm]{pl_linpolHD37903.ps}}
\caption{Extinction (top, with notation of Fig.~\ref{HD37061.fig}.),
albedo (middle) and linear polarisation spectrum (bottom) of
HD\,37903. The albedo of the dust model of this work (magenta line)
and by Weingartner \& Draine (2001, dashed) is displayed together
with estimates of the dust albedo of the reflection nebulae NGC\,7023
(square) and NGC\,2023 (circle). The observed polarisation spectrum
with 1$\sigma$ error bars of this work (red symbols) and as compiled
by Efimov (2009, gray symbols) is displayed together with the model
(black line) that is the sum of aligned silicates (green) and
amorphous carbon (orange) particles. Dust parameters are summarised
in Table~\ref{para.tab}. \label{exthd37903.fig}}
\end{figure}
\begin{figure} [h!tb]
{\hspace{-0.7cm} \includegraphics[width=10cm]{sedHD37903.ps}}
\caption{Spectral energy distribution of HD\,37903 that is heating
NGC\,2023 (top) is shown together with a zoom into the 5 -- 50 $\mu$m
region (bottom): photometric data (symbols); spectra by IUE (dark
gray), Spitzer/IRS (magenta) and ISOSWS (blue) with 1$\sigma$ error
(gray hatched area) and the model (black lines) in apertures as
indicated. \label{sedhd37903.fig} }
\end{figure}
\subsection {Extinction \label{hd37903dust.sec}}
The extinction curve towards HD\,37903 is observed by Fitzpatrick \&
Massa (2007) and Gordon et al. (2009). The extinction towards HD\,37903
is low and estimates range between A$_{\rm V} \sim 1.2 - 1.5$\,mag
(Burgh et al.\ 2002, Compiegne et al.\ 2008, Gordon et al.\ 2009). The
selective extinction is $R_{\rm V} = 4.11 \pm 0.44$ (Gordon et
al. 2009). We fit the extinction curve with parameters as of
Table~\ref{para.tab}. The resulting fit together with the
contribution of the various grain populations is shown in
Fig.~\ref{exthd37903.fig}.
The fraction of the energy removed by dust from the impinging light
beam that is absorbed equals $(1-\Lambda)$, where $\Lambda$ is the
particle albedo (Eq.~\ref{albedo.eq}); the angular distribution of the
scattered light is determined by the asymmetry parameter
$g$. The albedo and the asymmetry parameter cannot be derived
separately and are only observed in a combined form (Voshchinnikov,
2002). These two quantities have been estimated for several reflection
nebulae in the Galaxy (Gordon 2004). Reflection nebulae are bright
and often heated by a single star. By assuming a simple scattering
geometry, estimates of $\Lambda$ are given for the reflection nebula
NGC\,2023 (Burgh et al. 2002) and NGC\,7023 (Witt et al. 1982, Witt et
al. 1993). They are shown in Fig.~\ref{exthd37903.fig} together with
our dust model of HD\,37903 and for comparison the $R_{\rm V}=3.1$ model
by Weingartner \& Draine (2001). The albedos of both dust models are
similar. They are consistent within the $1\sigma$ uncertainties of the
data of NGC\,2023, but deviate from the peak observed at 0.14$\mu$m
for NGC\,7023.
\subsection {Dust envelope}
PAH emission of NGC\,2023 is detected by Sellgren (1985), with ISOCAM by
Cesarsky et al. (2000) and Spitzer/IRS by Joblin et
al. (2005). Emission features at 7.04, 17.4 and 18.9\,$\mu$m are
detected and assigned to neutral fullerene (C$_{60}$) with an
abundance estimate of $<1$\% of interstellar carbon (Sellgren et
al. 2010). The fullerenes show a spatial distribution distinct from
that of the PAH emission (Peeters et al. 2012). The dust emission
between 5 -- 35\,$\mu$m in the northern part of the nebula is modelled
by Compiegne et al. (2008), who apply a dust model fitting the mean
extinction curve of the ISM. The authors approximate that part of the
object in a plane parallel geometry and simplify the treatment of
scattering. Compiegne et al. (2008) find that the relative weight and
hence abundance of PAHs is five times smaller in the denser part of
the cloud than in the diffuse ISM.
We compile the spectral energy distribution (SED) towards HD\,37903
using photometry in the optical UBVRI bands by Comeron (2003), in the
near-infrared IJHKL filters by Burgh et al. (2002), IRAS (Neugebauer
et al. 1984; Joint IRAS Science 1994) and at 1.3mm by Chini et
al. (1984). The SED is complemented by spectroscopy between
0.115$\,\mu$m and 0.32\,$\mu$m using data of the International Ultraviolet
Explorer (IUE), as available in the Mikulski archive for space
telescopes{\footnote {http://archive.stsci.edu/iue/}}, the
Spitzer/IRS archival spectrum (Houck et al. 2004) and ISO/SWS
spectroscopy (Sloan et al. 2003). The observed SED is shown in
Fig.~\ref{sedhd37903.fig}.
The observed bolometric IR luminosity is used as an approximation of
the stellar luminosity of $L_* \sim 10^4$\,\Lsun. We take a blackbody
as the stellar spectrum. Its temperature of $T_* = 21000$\,K is derived by
fitting the IUE spectrum and optical, NIR photometry first in a
dust-free model and correcting for a foreground extinction of $\tau
\sim D/(1\,{\rm kpc}) = 0.4$. Both parameters, $L_*$ and $T_*$, are appropriate
to the spectral type of HD\,37903. A weak decline of the dust density
with radius is assumed in the models by Witt et al. (1984). For
simplicity, we assume in the nebula a constant dust density of
$\rho = 10^{-23}$\,g/cm$^3$. The inner radius of the dust shell is set at
$r_{\rm{in}} = 10^{17}$\,cm. For the adopted outer radius of
$r_{\rm{out}} = 1$\,pc the total dust mass in the envelope is
$M_{\rm{dust}} = 0.62$\,\Msun \/ and the optical depth, measured
between $r_{\rm {in}}$ and $r_{\rm {out}}$, is $\tau_{\rm V}=0.8$. We
build up SEDs of the cloud within apertures of different angular
sizes. The models are compared to the observations in
Fig.~\ref{sedhd37903.fig}. The 1.3\,mm flux is taken as upper limit as
it might be dominated by cold dust emission of the molecular cloud
behind the nebula. Model fluxes computed for different diaphragms
envelope the data obtained at various apertures reasonably well.
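The quoted envelope mass follows from the assumed constant density
and the shell volume; a quick consistency check:
\begin{verbatim}
import math

# Consistency check: dust mass of the constant-density shell assumed
# above (rho = 1e-23 g/cm^3, r_in = 1e17 cm, r_out = 1 pc).
rho, r_in, r_out = 1e-23, 1e17, 3.086e18       # cgs
M_sun = 1.989e33                               # g
V = 4.0/3.0*math.pi*(r_out**3 - r_in**3)
print("M_dust = %.2f Msun" % (rho*V/M_sun))    # -> ~0.62
\end{verbatim}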
Fullerenes are not treated as an individual dust component. The cross
sections of the PAH emission bands are varied to better fit the
Spitzer/IRS spectrum. They are listed in Table~\ref{pah.tab} and are
close to the ones estimated for starburst galaxies. The cross
sections of the C-H bands at 11.3 and 12.7\,$\mu$m are much smaller,
and those of the C=C vibrations at 6.2 and 7.7\,$\mu$m slightly larger, than
those derived for the solar neighbourhood. This finding is consistent
with a picture of dominant emission by ionised PAHs near B stars and
neutral PAHs in the diffuse medium. We note that alternatively, the
hydrogenation coverage of PAHs can be used in explaining a good part
of observed variations of PAH band ratios in various galactic and
extra--galactic objects (Siebenmorgen \& Heymann, 2012). We do not
find a good fit to the 10\,$\mu$m continuum emission if we ignore the
contribution of small silicate grains.
\begin{table*}[h!tb]
\begin{center}
\caption {Parameters of the dust models. \label{para.tab}}
\begin{tabular}{|l|c|c|c|c|c|}
\hline\hline
& & & & & \\
Parameter $^a$ & Solar & HD\,37061 & HD\,93250 & HD\,99872 & HD\,37903 \\
&neighbourhood & & & &(NGC\,2023) \\
\hline
$r_+$ (nm) & 440 & 485 & 380 & 400 & 485\\
$r_{\rm {-}}$ (nm)& 100 & 125 & 100 & 140 & 120\\
$q$ & 3.4 & 3.0 & 3.3 & 3.3 & 3.2\\
$a/b$ & 2 & 2 & 2.2 & 6 & 2 \\
$w_{\rm {aC}}$ & 29.2 & 40.1 & 28.8 & 31.5 & 34.3\\
$w_{\rm {Si}}$ & 54.6 & 56.2 & 58.2 & 55.1 & 48.1\\
$w_{\rm {gr}}$ & 4.3 & 1.0 & 0.8 & 2.4 & 5.7\\
$w_{\rm {sSi}}$ & 8.2 & 0.3 & 8.9 & 5.4 & 11.2\\
$w_{\rm {PAHs}}$ & 1.4 & 0.8 & 1.3 & 2.4 & 0.2\\
$w_{\rm {PAHb}}$ & 2.3 & 1.6 & 1.9 & 3.2 & 0.5\\
\hline
\end{tabular}
\end{center}
{\bf {Notes.}} $^a$ Upper ($r_+$) and lower ($r_{\rm
{-}}$) particle radius of aligned grains, exponent of the dust size
distribution ($q$), axial ratio of the particles ($a/b$), and relative
weight per g--dust (\%) of amorphous carbon ($w_{\rm{aC}}$), large
silicates ($w_{\rm{Si}}$), graphite ($w_{\rm{gr}}$), small silicates
($w_{\rm{sSi}}$), small ($w_{\rm{PAHs}}$) and large ($w_{\rm{PAHb}}$) PAHs,
respectively.
\end{table*}
\subsection {Polarisation}
Spectro--polarimetric data acquired with the polarimeters on board the
WUPPE and HPOL satellites{\footnote{www.sal.wisc.edu/}} and from
ground (Anderson et al. 1996) are compiled by Efimov (2009). They
agree to within the errors of the polarisation spectra observed by us
with FORS2/VLT. We find a good fit to the polarisation spectrum only
when both silicates and amorphous carbon are aligned. The observed
polarisation spectrum is fit by grains that are of prolate shape,
$\Omega \sim 65\degr$ and other parameters as of Table~\ref{para.tab}.
As most of the extinction towards HD\,37903 is coming from the nebula,
one may also consider other alignment mechanisms of the grains such as
the radiative torque alignment proposed by Lazarian (2007). Another
way to explain the observed polarisation could be due to dust
scattering. This would require significant inhomogeneities of the dust
density distribution of the nebula within the aperture of the
instruments. The contribution of scattering to the polarisation might
be estimated from a polarisation spectrum of a nearby star outside the
reflection nebula that is not surrounded by circumstellar dust.
\section{Conclusion \label{conclusion.sec}}
The main results of this paper are as follows: \\
{\bf 1)} We have presented an interstellar dust model that includes a
population of carbon and silicate grains with a power--law size
distribution ranging from the molecular domain (5\AA) up to 500\,nm.
Small grains are graphite, silicates and PAHs, and large spheroidal
grains are made of amorphous carbon and silicates. The relative
weight of each dust component is specified, so that absolute
abundances of the chemical elements are not direct input parameters
and used as a consistency check (Eq.~\ref{w.eq}). We apply the
imperfect Davis--Greenstein alignment mechanism to spheroidal dust
particles, which spin and wobble along the magnetic field. Their far
IR/submm absorption cross section is a factor 1.5 -- 3 larger than
that of spherical grains of identical volume. Mass estimates derived
from submillimeter observations that ignore this effect are
overestimated by the same amount. The physical model fits
observed extinction curves to within a few percent and a perfect match
is found after fine adjustment of the computed cross sections
(Eq.~\ref{f.eq}).
{\bf 2)} The wavelength-dependent absorption cross-sections of PAHs have
been revised to give better agreement with recent laboratory and
theoretical work. PAH cross-sections of the emission bands are
calibrated to match observations in different radiation
environments. We have found that in harsh environments, such as in
starbursts, the integrated cross-sections of the C=C bands at 6.2 and
7.7\,$\mu$m are a factor $\sim 2$ larger, and those of the C--H bands at
11.3 and 12.7\,$\mu$m a factor $\sim 5$ weaker, than in the neutral
regions of the ISM. The cross sections near OB stars are similar to
the ones derived for starbursts.
{\bf 3)} With the FORS instrument of the VLT, we have obtained new
ultra-high signal-to-noise linear and circular spectro-polarimetric
observations for a selected sample of sight lines. We have
performed a detailed study of the instrumental polarisation in an
attempt to achieve the highest possible accuracy. We show that
circular polarisation provides a diagnostic on grain shape and
elongation. However, the predicted signal is beyond the limit of the
observations that we have obtained.
{\bf 4)} The dust model reproduces extinction, linear and circular
polarisation curves and emission spectra of the diffuse ISM. It is
set up to keep the number of key parameters to a minimum
(Table~\ref{para.tab}). The model accounts for IR observations at
high galactic latitudes. It can be taken as representative of the
local dust in the solar neighbourhood.
{\bf 5)} We have applied our dust model on individual sight lines
ranging between $1.2\la A_{\rm V}/{\rm {mag}} \la 2.4$ and $2.9 \la
R_{\rm V} \la 4.6$, towards the early type stars: HD\,37061,
HD\,37903, HD\,93250, and HD\,99872. For these stars we present
polarisation spectra and measure a maximum polarisation of 1 -- 3\,\%.
The IR emission of the star HD\,37903 that is heating the reflection
nebula NGC\,2023 is computed with a radiative transfer program
assuming spherical symmetry. In the Spitzer/IRS spectrum of the
massive star HD\,99872 we detect an excess emission over its photosphere
that is steeply rising towards longer wavelengths and pointing towards
a cool dust component.
{\bf 6)} Linear polarisation depends on the type of the spheroid,
prolate or oblate, its elongation, and the alignment efficiency. We
have found that the spectral shape of the polarisation is critically
influenced by the assumed lower and upper radius of dust that is
aligned. In conclusion, polarisation helps to determine the otherwise
poorly constrained upper size limit of the dust particles. The
observed linear polarisation spectra are better fit by prolate than
by oblate grains. To account for the polarisation, typically only
silicates, with an elongation of about 2 and radii between 100\,nm
and 500\,nm, need to be aligned.
\begin{acknowledgements} {We are grateful to Endrik Kr\"ugel for
helpful discussions and thank Giacomo Mulas for providing their
PAH cross sections in electronic form. NVV was partly supported by
the RFBR grant 13-02-00138. This work is based on observations
collected at the European Southern Observatory, VLT program
386.C-0104. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France. This work is based on data
products of the following observatories: Spitzer Space Telescope,
which is operated by the Jet Propulsion Laboratory, California
Institute of Technology under a contract with NASA. Infrared Space
Observatory funded by ESA and its member states. Two Micron All
Sky Survey, which is a joint project of the University of
Massachusetts and the Infrared Processing and Analysis
Center/California Institute of Technology, funded by the National
Aeronautics and Space Administration and the National Science
Foundation. Wide-field Infrared Survey Explorer, which is a joint
project of the University of California, Los Angeles, and the Jet
Propulsion Laboratory/California Institute of Technology, funded
by the National Aeronautics and Space Administration. Data
available at the Space Astronomy Laboratory (SAL), a unit of the
Astronomy Department at the University of Wisconsin. }
\end{acknowledgements}
|
2,869,038,156,662 | arxiv | \section{Introduction}
\label{section:introduction}
Let $(M,g)$ be an $m$-dimensional closed Riemannian manifold
with a Riemannian metric $g$,
and let $(N,J,h)$ be a $2n$-dimensional compact almost Hermitian manifold
with an almost complex structure $J$ and a Hermitian metric $h$.
Consider the initial value problem for Schr\"odinger maps
$u:\mathbb{R}{\times}M{\to}N$ of the form
\begin{alignat}{2}
\frac{\partial{u}}{\partial{t}}
& =
J_u\tau(u)
&
\quad\text{in}\quad
& \mathbb{R}{\times}M,
\label{equation:pde}
\\
u(0,x)
& =
u_0(x)
&
\quad\text{in}\quad
& M,
\label{equation:data}
\end{alignat}
where
$t\in\mathbb{R}$ is the time variable,
$x{\in}M$,
$\partial{u}/\partial{t}=du(\partial/\partial{t})$,
$du$ is the differential of the mapping $u$,
$u_0$ is a given map of $M$ to $N$,
$\tau(u)=\operatorname{trace}\nabla{du}$
is the tension field of the map $u(t):{M}\to{N}$,
and
$\nabla$ is the induced connection.
Here we observe local expression of $\tau(u)$.
Let $x^1,\dotsc,x^m$ be local coordinates of $M$,
and let $z^1,\dotsc,z^{2n}$ be local coordinates of $N$.
We denote
$$
g
=
\sum_{i,j=1}^m
g_{ij}dx^i{\otimes}dx^j,
\quad
\sum_{k=1}^m
g_{ik}g^{kj}
=
\delta_{ij},
\quad
G=\det(g_{ij}),
\quad
\Delta_g
=
\sum_{i,j=1}^m
\frac{1}{\sqrt{G}}
\frac{\partial}{\partial{x^i}}
g^{ij}\sqrt{G}\frac{\partial}{\partial{x^j}},
$$
$$
h=\sum_{a,b=1}^{2n}h_{ab}dz^a{\otimes}dz^b,
\quad
\sum_{c=1}^{2n}
h_{ac}h^{cb}
=
\delta_{ab},
\quad
\Gamma^a_{bc}
=
\frac{1}{2}
\sum_{d=1}^{2n}
h^{ad}
\left(
\frac{\partial{h_{bd}}}{\partial{z^c}}
+
\frac{\partial{h_{cd}}}{\partial{z^b}}
-
\frac{\partial{h_{bc}}}{\partial{z^d}}
\right),
$$
where $\delta_{ij}$ is Kronecker's delta.
If we set $u^a=z^a{\circ}u$,
the local expression of $\tau(u)$ is given by
$$
\tau(u)
=
\sum_{i,j=1}^m
g^{ij}
\nabla{du}\left(\frac{\partial}{\partial{x^i}},\frac{\partial}{\partial{x^j}}\right)
=
\sum_{a=1}^{2n}
\left\{
\Delta_gu^a
+
\sum_{i,j=1}^m
\sum_{b,c=1}^{2n}
g^{ij}(x)
\Gamma^{a}_{bc}(u)
\frac{\partial{u^b}}{\partial{x^i}}
\frac{\partial{u^c}}{\partial{x^j}}
\right\}
\left(\frac{\partial}{\partial{z^a}}\right)_u.
$$
Then, \eqref{equation:pde} is a $2n\times2n$ system of
quasilinear Schr\"odinger equations.
\par
The equation \eqref{equation:pde} geometrically generalizes
two-sphere valued partial differential equations
modeling the motion of vertex filament,
ferromagnetic spin chain system and etc.
See, e.g., \cite{darios}, \cite{hasimoto}, \cite{SSB}
and references therein.
In the last decade, these physics models have been generalized and
studied from a point of view of geometric analysis in mathematics.
In other words, the relationship between
the structure of the partial differential equation \eqref{equation:pde}
and geometric settings have been investigated in the recent ten years.
There are apparently two directions in the geometric analysis of partial
differential equations like \eqref{equation:pde}.
\par
One of them is a geometric reduction of equations to
simpler ones with values in the real or complex Euclidean space.
This direction originated from Hasimoto's work \cite{hasimoto}.
In their pioneering work \cite{CSU},
Chang, Shatah and Uhlenbeck first rigorously studied the PDE structure of
\eqref{equation:pde} when $(M,g)$ is the real line with the usual metric
and $(N,J,h)$ is a compact Riemann surface.
They constructed a good moving frame along the map and
reduced \eqref{equation:pde} to a simple complex-valued equation
when $u(t,x)$ has a fixed base point as $x\rightarrow+\infty$.
Similarly, Onodera reduced a one-dimensional third or fourth order
dispersive flow to a complex-valued equation in \cite{onodera2}.
In \cite{NSU1} and \cite{NSU2},
Nahmod, Stefanov and Uhlenbeck obtained
a system of semilinear Schr\"odinger equations
from the equation of the Schr\"odinger map
of the Euclidean space to the two-sphere
when the Schr\"odinger map never takes values
in some open set of the two-sphere.
Nahmod, Shatah, Vega and Zeng constructed a moving frame along
the Schr\"odinger map of the Euclidean space to a K\"ahler manifold
in \cite{NSVZ}.
Generally speaking, these reductions require some restrictions on
the range of the mappings, and one cannot make use of them
to solve the initial value problem for the original equations
without restrictions on the range of the initial data.
\par
The other direction of geometric analytic approach
to partial differential equations like \eqref{equation:pde}
is to consider how to solve the initial value problem.
In his pioneering work \cite{koiso},
Koiso first reformulated the equation of
the motion of a vortex filament geometrically,
and proposed the equation \eqref{equation:pde}
when $(M,g)$ is the one-dimensional torus and
$(N,J,h)$ is a compact K\"ahler manifold.
Moreover, Koiso established the standard short-time existence theorem,
and proved that if $(N,J,h)$ is locally symmetric,
that is, $\nabla^NR=0$, then the solution exists globally in time,
where $\nabla^N$ and $R$ are the Levi-Civita connection
and the Riemannian curvature tensor of $N$ respectively.
See \cite{PWW} also.
Similarly, Onodera studied local and global existence theorem
of solutions to a third-order dispersive flow for closed curves
into K\"ahler manifolds in \cite{onodera1}.
Gustafson, Kang and Tsai studied time-global stability
of the Schr\"odinger map of the two-dimensional Euclidean space
to the two-sphere around equivariant harmonic maps in \cite{GKT}.
In \cite{DW}, Ding and Wang gave a short-time existence theorem of
\eqref{equation:pde}-\eqref{equation:data}
when $(M,g)$ is a general closed Riemannian manifold and
$(N,J,h)$ is a compact K\"ahler manifold.
However, they actually gave the proof only for the case that
$(M,g)$ is the Euclidean space or the flat torus.
We do not know whether their method of proof can work
for a general closed Riemannian manifold $(M,g)$.
Generally speaking,
Schr\"odinger evolution equations are very delicate on lower order terms,
in contrast with the heat equations,
which can be easily treated together with any lower order term
by the classical G{\aa}rding inequality.
\par
Both of these two directions of geometric analysis of equations like
\eqref{equation:pde} are deeply concerned with the relationship between
the geometric settings of equations and
the theory of linear dispersive partial differential equations.
For the latter subject, see, e.g.,
\cite{chihara2},
\cite{doi},
\cite{ichinose1},
\cite{ichinose2},
\cite[Lecture VII]{mizohata}
and references therein.
Being concerned with the compactness of the source manifold,
we need to mention local smoothing effect of dispersive partial
differential equations.
It is well-known that solutions to the initial value problem for some
kinds of dispersive equations gain extra smoothness in comparison with
the initial data.
In his celebrated work \cite{doi},
Doi characterized the existence of microlocal smoothing effect of
Schr\"odinger evolution equations on complete Riemannian manifolds
according to the global behavior of the geodesic flow on the unit
cotangent sphere bundle over the source manifolds.
Roughly speaking,
the local smoothing effect occurs if and only if
all the geodesics go to ``infinity''.
In particular, if the source manifold is compact,
then no smoothing effect occurs.
For this reason, it is essential to study the initial value problem
\eqref{equation:pde}-\eqref{equation:data} when $(M,g)$ is compact.
\par
We should also mention the influence of the K\"ahler condition
$\nabla^NJ=0$ on the structure of the equation \eqref{equation:pde}.
All the preceding works on \eqref{equation:pde} assume that
$(N,J,h)$ is a K\"ahler manifold.
If $\nabla^NJ=0$, then \eqref{equation:pde} behaves like
symmetric hyperbolic systems,
and the classical energy method works effectively.
See \cite{koiso} for the detail.
If $\nabla^NJ\ne0$, then \eqref{equation:pde} has first
order terms in some sense, and the classical energy method breaks down.
\par
The purpose of the present paper is to show
a short-time existence theorem for
\eqref{equation:pde}-\eqref{equation:data}
without the K\"ahler condition.
To state our results, we here introduce function spaces of mappings.
Set $\nabla_i=\nabla_{\partial/\partial{x^i}}$ for short.
For a positive integer $k$,
$H^k(M;TN)$ is the set of all continuous mappings
$u:M{\rightarrow}N$ satisfying
$$
\lVert{u}\rVert_{H^k}^2
=
\sum_{l=1}^k
\int_M
\lvert\nabla^lu\rvert^2
d\mu_g
<\infty,
$$
$$
\lvert\nabla^lu\rvert^2
=
\sum_{\substack{i_1,\dotsc,i_l \\ j_1,\dotsc,j_l=1}}^m
g^{i_1j_1}{\dotsb}g^{i_lj_l}
h
\left(
\nabla^lu\left(\frac{\partial}{\partial{x^{i_1}}},\dotsc,\frac{\partial}{\partial{x^{i_l}}}\right),
\nabla^lu\left(\frac{\partial}{\partial{x^{j_1}}},\dotsc,\frac{\partial}{\partial{x^{j_l}}}\right)
\right),
$$
where $d\mu_g=\sqrt{G}dx^1{\dotsb}dx^m$
is the Riemannian measure of $(M,g)$.
See e.g., \cite{hebey} for the Sobolev space of mappings.
The Nash embedding theorem shows that
there exists an isometric embedding
$w{\in}C^\infty(N;\mathbb{R}^d)$ with some integer $d>2n$.
See \cite{GR}, \cite{gunther} and \cite{nash}.
Let $I$ be an interval in $\mathbb{R}$.
We denote by $C(I;H^k(M;TN))$ the set of all
$H^k(M;TN)$-valued continuous functions on $I$.
In other words, we define it
by the pullback of the function space by the isometry $w$ as
$C(I;H^k(M;TN))=C(I;{w^\ast}H^k(M;\mathbb{R}^d))$,
where $H^k(M;\mathbb{R}^d)$ is the usual Sobolev space
of $\mathbb{R}^d$-valued functions on $M$.
\par
Here we state our main results.
\begin{theorem}
\label{theorem:main}
Let $k$ be a positive integer satisfying $2k>m/2+5$,
and let $k_0$ be the minimum of such $k$.
Then, for any $u_0{\in}H^{2k}(M;TN)$, there exists
$T=T(\lVert{u_0}\rVert_{H^{2k_0}})>0$ such that
{\rm \eqref{equation:pde}-\eqref{equation:data}}
possesses a unique solution
$u{\in}C([-T,T];H^{2k}(M;TN))$.
\end{theorem}
Our strategy of proof consists of fourth order parabolic regularization
and the uniform energy estimates of approximating solutions.
Let $\Gamma(u^{-1}TN)$ be the set of sections of $u^{-1}TN$.
Our idea of avoiding the difficulty due to $\nabla_iJ_u$ comes from
diagonalization technique for some $2n{\times}2n$ system of
Schr\"odinger evolution equations developed in our work \cite{chihara1}.
If we see \eqref{equation:pde} as a $2n{\times}2n$ system,
$\nabla^NJ$ corresponds to some off-diagonal blocks
of the coefficient matrices of the first order terms.
We introduce a bounded pseudodifferential operator acting on
$\Gamma(u^{-1}TN)$ to eliminate $\nabla_iJ_u$
by using a transformation of $u$ with this pseudodifferential operator.
We here remark that
$(\nabla_{\partial/\partial{z^a}}J)J=-J(\nabla_{\partial/\partial{z^a}}J)$
since $J^2=C^2_1J{\otimes}J=-I$,
where $I$ is the identity mapping on $TN$ and
$C^2_1$ is a contraction of $(2,2)$-tensor.
This fact is the key to construct the pseudodifferential operator.
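Indeed, since the identity mapping is parallel,
differentiating $J^2=-I$ by the Leibniz rule gives
$$
0
=
\nabla_{\partial/\partial{z^a}}(J^2)
=
(\nabla_{\partial/\partial{z^a}}J)J
+
J(\nabla_{\partial/\partial{z^a}}J).
$$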
We evaluate $\tilde{\Delta}_g^{l-1}\tau(u)$, ($l=1,\dotsc,k$),
since
$$
\tilde{\Delta}_g
=
\frac{1}{\sqrt{G}}
\sum_{i,j=1}^m
\nabla_ig^{ij}\sqrt{G}\nabla_j
$$
commutes with $\tau(u)$ and
is never an obstruction to the energy estimates.
For this reason, we need to use even order Sobolev space $H^{2k}(M;TN)$.
It is easy to check that
$\tilde{\Delta}_g$ is invariant under the change of
variables of $M$ and $N$.
Indeed, one can check that $\tilde{\Delta}_g$ is invariant
under the change of variables of $M$ in the same way as $\Delta_g$,
and $\nabla_iV$ is invariant under the change of variables of $N$
for any section $V{\in}\Gamma(u^{-1}TN)$.
Hence $\tilde{\Delta}_g$ is globally well-defined on $\Gamma(u^{-1}TN)$.
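We also record the elementary identity behind the assertion that
$\tilde{\Delta}_g$ is never an obstruction to the energy estimates:
since $M$ is closed and the induced connection is compatible with $h$,
integration by parts gives, for any $V{\in}\Gamma(u^{-1}TN)$,
$$
\int_M
h(\tilde{\Delta}_gV,V)
d\mu_g
=
-
\sum_{i,j=1}^m
\int_M
g^{ij}
h(\nabla_iV,\nabla_jV)
d\mu_g
\leqslant
0.
$$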
\par
The plan of the present paper is as follows.
Section~\ref{section:approximation} is devoted to parabolic regularization.
In Section~\ref{section:proof} we prove Theorem~\ref{theorem:main}.
\section{Parabolic Regularization}
\label{section:approximation}
This section is devoted to the short-time existence of solutions to
the forward initial value problem
for a fourth order parabolic equation of the form
\begin{alignat}{2}
\frac{\partial{u}}{\partial{t}}
& =
-\varepsilon\tilde{\Delta}_g\tau(u)+J_u\tau(u)
&
\quad\text{in}\quad
& (0,\infty){\times}M,
\label{equation:pde-ep}
\\
u(0,x)
& =
u_0(x)
&
\quad\text{in}\quad
& M,
\label{equation:data-ep}
\end{alignat}
where $\varepsilon\in(0,1]$ is a parameter.
Roughly speaking,
\eqref{equation:pde-ep}-\eqref{equation:data-ep}
can be solved in the same way as the equation of harmonic heat flow
$\partial{u}/\partial{t}=\tau(u)$.
See e.g., \cite[Chapters~3 and 4]{nishikawa}
for the study of the harmonic heat flow.
In this section we shall show the following.
\begin{lemma}
\label{theorem:approximation}
Let $l$ be an integer satisfying
$l\geqslant{l_0}=[m/2]+4$.
Then, for any $u_0{\in}H^l(M;TN)$,
there exists
$T_\varepsilon=T(\varepsilon,\lVert{u_0}\rVert_{H^{l_0}})>0$
such that the forward initial value problem
{\rm \eqref{equation:pde-ep}-\eqref{equation:data-ep}}
possesses a unique solution
$u_\varepsilon{\in}C([0,T_\varepsilon];H^l(M;TN))$.
\end{lemma}
We consider the $\mathbb{R}^d$-valued equation pushed forward by $dw$.
We split the proof of Lemma~\ref{theorem:approximation} into two steps.
Firstly, we construct a solution
taking values in a tubular neighborhood of $w(N)$.
Secondly, we check that the solution is $w(N)$-valued.
\par
If $u$ is a solution to \eqref{equation:pde-ep},
then $v=w{\circ}u$ satisfies
$$
\frac{\partial{v}}{\partial{t}}
=
dw\left(\frac{\partial{u}}{\partial{t}}\right)
=
dw\left(-\varepsilon\tilde{\Delta}_g\tau(u)+J_u\tau(u)\right)
=
-\varepsilon\Delta_g^2v+F(v),
$$
where $F(v)$ is of the form
$$
F(v)
=
\varepsilon
F_1(x,v,\bar{\nabla}v,\bar{\nabla}^2v,\bar{\nabla}^3v)
+
F_2(x,v,\bar{\nabla}v,\bar{\nabla}^2v)
$$
satisfying
$F_1(x,y,0,0,0)=0$, $F_2(x,y,0,0)=0$
for any $(x,y){\in}M{\times}w(N)$,
and $\bar{\nabla}$ is the connection induced by
$v(t):M\rightarrow{w(N)}$.
\par
To solve this equation for $v$,
we need elementary facts on the fundamental solution.
Let $H^s(M)=(1-\Delta_g)^{-s/2}L^2(M)$
be the usual Sobolev space on $M$ of order $s\in\mathbb{R}$.
We denote by $\mathscr{L}(\mathscr{H}_1,\mathscr{H}_2)$
the set of all bounded linear operators from
a Hilbert space $\mathscr{H}_1$ to a Hilbert space $\mathscr{H}_2$.
Set
$\mathscr{L}(\mathscr{H}_1)=\mathscr{L}(\mathscr{H}_1,\mathscr{H}_1)$
for short.
The existence and the properties of the fundamental solution
are the following.
\begin{lemma}
\label{theorem:psdo}
There exists an operator $E(t)$ satisfying
$$
E(t)
\in
C((0,\infty);\mathscr{L}(H^s(M)))
\cap
C^1((0,\infty);\mathscr{L}(H^{s+4}(M),H^s(M)))
$$
for any $s\in\mathbb{R}$, such that
$$
\left(
\frac{\partial}{\partial{t}}
+
\varepsilon\Delta_g^2
\right)
E(t)
=
0
\quad\text{in}\quad
C((0,\infty);\mathscr{L}(H^{s+4}(M),H^s(M))),
$$
$$
\lim_{t\downarrow0}E(t)v=v
\quad\text{for any}\quad
v{\in}H^s(M)
$$
$$
\left\{E(t)t^{3/4}\right\}_{t>0}
\quad\text{is bounded in}\quad
\mathscr{L}(H^{s-3}(M),H^s(M)).
$$
\end{lemma}
Lemma~\ref{theorem:psdo} is proved
by the symbolic calculus of pseudodifferential operators.
See \cite[Chapter~7, Theorem~4.1]{kumano-go} and \cite{iwasaki}.
\par
Let $\delta>0$ be a sufficiently small constant.
We denote by $w_\delta(N)$
a tubular neighborhood of $w(N)$ in $\mathbb{R}^d$,
that is,
$$
w_\delta(N)
=
\left\{
v_1+v_2\in\mathbb{R}^d
\ \vert \
v_1{\in}w(N),\
v_2{\in}T_{v_1}w(N)^{\perp},\
\lvert{v_2}\rvert<\delta
\right\},
$$
where
$\lvert{v_2}\rvert=\sqrt{(v_2^1)^2+\dotsb+(v_2^d)^2}$
for
$v_2=(v_2^1,\dotsc,v_2^d)\in\mathbb{R}^d$.
Let $\pi:w_\delta(N){\rightarrow}w(N)$ be the projection
defined by
$\pi(v_1+v_2)=v_1$
for
$v_1{\in}w(N)$,
$v_2{\in}T_{v_1}w(N)^{\perp}$,
$\lvert{v_2}\rvert<\delta$.
We will solve the initial value problem
\begin{equation}
\frac{\partial{v}}{\partial{t}}
=
-\varepsilon\Delta_g^2v+F(\pi(v)),
\quad
v(0,x)=w(u_0)(x),
\label{equation:aux}
\end{equation}
which is equivalent to an integral equation
$v=\Phi(v)$, where
$$
\Phi(v)(t)
=
E(t)w(u_0)
+
\int_0^t
E(t-s)
F(\pi(v(s)))
ds.
$$
We apply the contraction mapping theorem to the integral equation
in the framework
$$
X_T^l
=
\left\{
v{\in}C([0,T];H^l(M;Tw_\delta(N)))
\ \vert\
\lVert{v(t)}\rVert_{H^l}
\leqslant
2\lVert{w(u_0)}\rVert_{H^l}
\quad\text{for}\quad
t\in[0,T]
\right\}
$$
with some small $T>0$ determined below.
By using Lemma~\ref{theorem:psdo} and the Sobolev embeddings, we have
\begin{align*}
\sup_{t\in[0,T]}
\lVert\Phi(v)(t)-E(t)w(u_0)\rVert_{H^l}
& \leqslant
C_lT^{1/4},
\\
\sup_{t\in[0,T]}
\lVert\Phi(v)(t)-\Phi(v^\prime)(t)\rVert_{H^l}
& \leqslant
C_lT^{1/4}
\sup_{t\in[0,T]}
\lVert{v(t)-v^\prime(t)}\rVert_{H^l}
\end{align*}
for any $v,v^\prime{\in}X_T^l$,
where $C_l$ is a positive constant depending on $\lVert{w(u_0)}\rVert_{H^l}$.
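The factor $T^{1/4}$ arises from the third assertion of
Lemma~\ref{theorem:psdo}: roughly speaking,
$$
\left\lVert
\int_0^t
E(t-s)F(\pi(v(s)))
ds
\right\rVert_{H^l}
\leqslant
C
\int_0^t
(t-s)^{-3/4}
\lVert{F(\pi(v(s)))}\rVert_{H^{l-3}}
ds
\leqslant
4CT^{1/4}
\sup_{s\in[0,T]}
\lVert{F(\pi(v(s)))}\rVert_{H^{l-3}},
$$
and the last supremum is bounded on $X_T^l$ by the Sobolev embeddings.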
Thus, $\Phi$ is a contraction mapping of $X_T^l$ to itself
provided that $T$ is sufficiently small.
The contraction mapping theorem shows the existence of a unique solution
to the integral equation.
\par
Next we check that the solution
$v{\in}C([0,T];H^l(M;Tw_\delta(N)))$
to \eqref{equation:aux} is $w(N)$-valued.
Set $\rho(v)=v-\pi(v)$ for short.
We remark that
\begin{align*}
\frac{\partial{v}}{\partial{t}}
& =
-
\varepsilon\Delta_g^2v
+
F(\pi(v))
\\
& =
-
\varepsilon\Delta_g^2\rho(v)
-
\varepsilon\Delta_g^2\pi(v)
+
F(\pi(v))
\\
& =
-
\varepsilon\Delta_g^2\rho(v)
+
dw
\left(
-\varepsilon\tilde{\Delta}_g\tau(w^{-1}\circ\pi(v))
+
J_{w^{-1}\circ\pi(v)}\tau(w^{-1}\circ\pi(v))
\right).
\end{align*}
Since
$\partial\pi(v)/\partial{t}\perp\rho(v)$
and
$\rho(v){\in}T_{\pi(v)}w(N)^{\perp}$,
we deduce
\begin{align*}
\frac{d}{dt}
\int_M
\lvert\rho\rvert^2
d\mu_g
& =
2
\int_M
\left\langle\frac{\partial{\rho(v)}}{\partial{t}},\rho(v)\right\rangle
d\mu_g
\\
& =
2
\int_M
\left\langle\frac{\partial{v}}{\partial{t}},\rho(v)\right\rangle
d\mu_g
\\
& =
2
\int_M
\left\langle-\varepsilon\Delta_g^2\rho(v)+dw(\dotsb),\rho(v)\right\rangle
d\mu_g
\\
& =
-2\varepsilon
\int_M
\left\langle\Delta_g^2\rho(v),\rho(v)\right\rangle
d\mu_g
\\
& =
-2\varepsilon
\int_M
\left\langle\Delta_g\rho(v),\Delta_g\rho(v)\right\rangle
d\mu_g
\leqslant
0,
\end{align*}
where $\langle\cdot,\cdot\rangle$ is
the standard inner product in $\mathbb{R}^d$,
and the $dw(\dotsb)$ term vanishes
since it takes values in $T_{\pi(v)}w(N)$
while $\rho(v){\in}T_{\pi(v)}w(N)^{\perp}$.
We conclude that $\rho(v)=0$ and therefore $v$ is $w(N)$-valued
since $\rho(v(0))=\rho(w(u_0))=0$.
This completes the proof of Lemma~\ref{theorem:approximation}.
\section{Uniform Energy Estimates}
\label{section:proof}
The present section proves Theorem~\ref{theorem:main}
in three steps: existence, uniqueness and recovery of continuity.
Let $u_\varepsilon{\in}C([0,T_\varepsilon];H^{2k}(M;TN))$ be the unique solution to
\eqref{equation:pde-ep}-\eqref{equation:data-ep}.
\begin{proof}[Existence]
We shall show that there exists $T>0$ which is independent of $\varepsilon\in(0,1]$,
such that $\{u_\varepsilon\}_{\varepsilon\in(0,1]}$ is bounded in
$L^\infty(0,T;H^{2k}(M;TN))$,
which is the set of all $H^{2k}$-valued essentially bounded functions on $(0,T)$.
If this is true, then the standard compactness argument shows that
there exist $u$ and a subsequence $\{u_\varepsilon\}$ such that
\begin{align*}
u_\varepsilon \longrightarrow u
& \quad\text{in}\quad
C([0,T];H^{2k-1}(M;TN)),
\\
u_\varepsilon \longrightarrow u
& \quad\text{in}\quad
L^\infty(0,T;H^{2k}(M;TN))
\quad\text{weakly star},
\end{align*}
as $\varepsilon\downarrow0$,
and $u$ solves \eqref{equation:pde}-\eqref{equation:data}
and is $H^{2k}$-valued weakly continuous in time.
\par
For $V\in\Gamma(u^{-1}TN)$, set
$$
\lVert{V}\rVert^2
=
\int_Mh(V,V)d\mu_g
$$
for short.
In view of the Sobolev embeddings,
$\lVert{u}\rVert_{H^{2k}}$ is equivalent to
$$
\left\{
\sum_{l=1}^k\lVert{\tilde{\Delta}_g^lu}\rVert^2
\right\}^{1/2},
\quad
\tilde{\Delta}_g^lu
=
\tilde{\Delta}_g^{l-1}\tau(u),
$$
as a norm.
We shall evaluate this for $u_\varepsilon$ instead of $\lVert{u_\varepsilon}\rVert_{H^{2k}}$.
\par
The properties of the torsion tensor and the Riemannian curvature tensor
show that for any vector fields $X$ and $Y$ on $M$,
and for any $V\in\Gamma(u^{-1}TN)$,
\begin{align}
\nabla_Xdu(Y)
& =
\nabla_Ydu(X)
+
du([X,Y]),
\label{equation:commutator1}
\\
\nabla_X\nabla_YV
& =
\nabla_Y\nabla_XV
+
\nabla_{[X,Y]}V
+
R(du(X),du(Y))V.
\label{equation:commutator2}
\end{align}
Set
$\nabla_t=\nabla_{\partial/\partial{t}}$,
$X_i=\partial/\partial{x^i}$
and
$Y_i=\sum_{j=1}^mg^{ij}X_j$
for short.
\par
In what follows we simply write $u$ for $u_\varepsilon$;
no confusion will occur.
We sometimes write $\tilde{\Delta}_gu=\tau(u)$ and $Xu=du(X)$.
We evaluate $\mathcal{N}_k(u)$ defined by
$$
\mathcal{N}_k(u)^2
=
\sum_{l=1}^{k-1}
\lVert\tilde{\Delta}_g^lu\rVert^2
+
\lVert\Lambda\tilde{\Delta}_g^ku\rVert^2,
$$
where $\Lambda=\Lambda_\varepsilon$ is a pseudodifferential operator defined later.
Set
$$
T_\varepsilon^\ast
=
\sup\{
T>0
\ \vert \
\mathcal{N}_k(u(t))\leqslant2\mathcal{N}_k(u_0)
\ \text{for}\
t\in[0,T]
\}.
$$
To obtain the uniform estimates of $\mathcal{N}_k(u)$,
we need to compute
$$
\tilde{\Delta}_g^l
\left(
\frac{\partial{u}}{\partial{t}}
+
\varepsilon\tilde{\Delta}_g^2u
-
J_u
\tilde{\Delta}_gu
\right)
=
0,
\quad
l=1,\dotsc,k.
$$
\par
In view of \eqref{equation:commutator1} and \eqref{equation:commutator2},
we have
\begin{align*}
\tilde{\Delta}_g\frac{\partial{u}}{\partial{t}}
& =
\frac{1}{\sqrt{G}}
\sum_{i=1}^m
\nabla_{X_i}
\left\{
\sqrt{G}\nabla_{Y_i}
du(\partial/\partial{t})
\right\}
\\
& =
\frac{1}{\sqrt{G}}
\sum_{i=1}^m
\nabla_{X_i}
\left\{
\nabla_t\sqrt{G}du(Y_i)
+
\sqrt{G}
du([Y_i,\partial/\partial{t}])
\right\}
\\
& =
\frac{1}{\sqrt{G}}
\sum_{i=1}^m
\nabla_{X_i}
\left\{
\nabla_t\sqrt{G}du(Y_i)
\right\}
\\
& =
\nabla_t
\left\{
\frac{1}{\sqrt{G}}
\sum_{i=1}^m
\nabla_{X_i}\sqrt{G}du(Y_i)
\right\}
\\
& \quad
+
\frac{1}{\sqrt{G}}
\sum_{i=1}^m
\nabla_{[X_i,\partial/\partial{t}]}\sqrt{G}du(Y_i)
\\
& \quad
+
\sum_{i=1}^m
R(du(X_i),du(\partial/\partial{t}))du(Y_i)
\\
& =
\nabla_t\tilde{\Delta}_gu
+
\sum_{i=1}^m
R(du(X_i),du(\partial/\partial{t}))du(Y_i).
\end{align*}
Repeating this computation and using \eqref{equation:pde-ep}, we obtain
\begin{align}
\tilde{\Delta}_g^l
\frac{\partial{u}}{\partial{t}}
& =
\nabla_t\tilde{\Delta}_g^lu
+
\sum_{p=0}^{l-1}
\sum_{i=1}^m
\tilde{\Delta}_g^{l-1-p}
\left\{
R(du(X_i),du(\partial/\partial{t}))\nabla_{Y_i}\tilde{\Delta}_g^pu
\right\}
\nonumber
\\
& =
\nabla_t\tilde{\Delta}_g^lu
+
\sum_{p=0}^{l-1}
\sum_{i=1}^m
\tilde{\Delta}_g^{l-1-p}
\left\{
R(du(X_i),-\varepsilon\tilde{\Delta}_g^2u+J_u\tilde{\Delta}_gu)
\nabla_{Y_i}\tilde{\Delta}_g^pu
\right\}
\nonumber
\\
& =
\nabla_t\tilde{\Delta}_g^lu
-
\varepsilon{P_{1,l}}
-
Q_{1,l},
\label{equation:aibu}
\\
P_{1,l}
& =
\sum_{p=0}^{l-1}
\sum_{i=1}^m
\tilde{\Delta}_g^{l-1-p}
\left\{
R(du(X_i),\tilde{\Delta}_g^2u)
\nabla_{Y_i}\tilde{\Delta}_g^pu
\right\},
\nonumber
\\
Q_{1,l}
& =
-
\sum_{p=0}^{l-1}
\sum_{i=1}^m
\tilde{\Delta}_g^{l-1-p}
\left\{
R(du(X_i),J_u\tilde{\Delta}_gu)
\nabla_{Y_i}\tilde{\Delta}_g^pu
\right\}.
\nonumber
\end{align}
The Sobolev embeddings show that for $l=1,\dotsc,k$,
there exists a constant $C_k>1$ depending only on $k\geqslant{k_0}$ and
$\mathcal{N}_k(u_0)$ such that for $t\in[0,T_\varepsilon^\ast]$
$$
\lVert{P_{1,l}}\rVert
\leqslant
C_k
\sum_{p=1}^{l+1}
\lVert\tilde{\Delta}_g^pu\rVert,
\quad
\lVert{Q_{1,l}}\rVert
\leqslant
C_k
\sum_{p=1}^l
\lVert\tilde{\Delta}_g^pu\rVert.
$$
Different positive constants depending only on $k\geqslant{k_0}$ and
$\mathcal{N}_k(u_0)$ are denoted by the same notation $C_k$ below.
\par
A direct computation shows that
\begin{align}
\tilde{\Delta}_g^l\left(J_u\tau(u)\right)
& =
\frac{1}{\sqrt{G}}
\sum_{i=1}^m
\nabla_{X_i}
\left(
J_u\sqrt{G}\nabla_{Y_i}
\tilde{\Delta}_g^lu
\right)
\nonumber
\\
& \quad
+
l
\sum_{i=1}^m
(\nabla_{Y_i}J_u)\nabla_{X_i}
\tilde{\Delta}_g^lu
\nonumber
\\
& \quad
+
(l-1)
\sum_{i=1}^m
(\nabla_{X_i}J_u)\nabla_{Y_i}
\tilde{\Delta}_g^lu
+
Q_{2,l}
\nonumber
\\
& =
\frac{1}{\sqrt{G}}
\sum_{i,j=1}^m
\nabla_i
\left(
g^{ij}\sqrt{G}J_u\nabla_j
\tilde{\Delta}_g^lu
\right)
\nonumber
\\
& \quad
+
(2l-1)
\sum_{i,j=1}^m
g^{ij}
(\nabla_iJ_u)\nabla_j
\tilde{\Delta}_g^lu
+
Q_{2,l},
\label{equation:saki}
\end{align}
where $Q_{2,l}$ is a linear combination of terms of the form
$(\nabla^{p+2}J_u)\nabla^{2l-p}u$,
$p=0,1,\dotsc,2l-2$,
and has the same estimate as $Q_{1,l}$.
\par
Combining \eqref{equation:aibu} and \eqref{equation:saki}, we have
\begin{align}
\left\{
\nabla_t
+
\varepsilon\tilde{\Delta}_g^2
-
\frac{1}{\sqrt{G}}
\sum_{i,j=1}^m
\nabla_i
g^{ij}\sqrt{G}J_u\nabla_j
-
(2l-1)
\sum_{i,j=1}^m
g^{ij}
(\nabla_iJ_u)\nabla_j
\right\}
\tilde{\Delta}_g^lu
&
\nonumber
\\
=
{\varepsilon}P_{1,l}+Q_{1,l}+Q_{2,l}.
&
\label{equation:pde2}
\end{align}
Set
$$
P_{2,l}
=
(2l-1)
\sum_{i,j=1}^m
g^{ij}
(\nabla_iJ_u)\nabla_j
\tilde{\Delta}_g^lu
+
{\varepsilon}P_{1,l}+Q_{1,l}+Q_{2,l}
$$
for short.
$P_{2,l}$ can be estimated in the same way as $P_{1,l}$.
Using \eqref{equation:pde2}, we deduce
\begin{align}
\frac{d}{dt}
\sum_{l=1}^{k-1}
\lVert\tilde{\Delta}_g^lu\rVert^2
& =
\frac{d}{dt}
\sum_{l=1}^{k-1}
\int_M
h(\tilde{\Delta}_g^lu,\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& =
2
\sum_{l=1}^{k-1}
\int_M
h(\nabla_t\tilde{\Delta}_g^lu,\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& =
-2\varepsilon
\sum_{l=1}^{k-1}
\int_M
h(\tilde{\Delta}_g^2\tilde{\Delta}_g^lu,\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& \quad
+
2
\sum_{l=1}^{k-1}
\sum_{i,j=1}^m
\int_M
\frac{1}{\sqrt{G}}
h\left(
\nabla_ig^{ij}\sqrt{G}J_u\nabla_j
\tilde{\Delta}_g^lu,
\tilde{\Delta}_g^lu
\right)
d\mu_g
\nonumber
\\
& \quad
+
2
\sum_{l=1}^{k-1}
\int_M
h(P_{2,l},\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& =
-2\varepsilon
\sum_{l=1}^{k-1}
\int_M
h(\tilde{\Delta}_g^{l+1}u,\tilde{\Delta}_g^{l+1}u)
d\mu_g
\nonumber
\\
& \quad
-
2
\sum_{l=1}^{k-1}
\sum_{i,j=1}^m
\int_M
g^{ij}
h(J_u\nabla_j\tilde{\Delta}_g^lu,\nabla_i\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& \quad
+
2
\sum_{l=1}^{k-1}
\int_M
h(P_{2,l},\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& =
-
2\varepsilon
\sum_{l=1}^{k-1}
\lVert\tilde{\Delta}_g^{l+1}u\rVert^2
+
2
\sum_{l=1}^{k-1}
\int_M
h(P_{2,l},\tilde{\Delta}_g^lu)
d\mu_g
\nonumber
\\
& \leqslant
2C_k
\sum_{l=1}^k
\lVert\tilde{\Delta}_g^lu\rVert^2.
\label{equation:shiota}
\end{align}
\par
To complete the energy estimates,
we need to eliminate the first order term in \eqref{equation:pde2}.
For this purpose, we here give the definition of
the pseudodifferential operator $\Lambda$.
Let $\{M_\nu\}$ be a finite set of local coordinate neighborhoods of $M$,
and let $x_\nu^1,\dotsc,x_\nu^m$ be the local coordinates in $M_\nu$.
Let $\{N_\alpha\}$ be a set of local coordinate neighborhoods of $N$,
and let $z_\alpha^1,\dotsc,z_\alpha^{2n}$ be the local coordinates of $N_\alpha$.
We denote by $C^\infty_0$ the set of all smooth functions with a compact support.
Pick up partitions of unity
$\{\phi_\nu\}{\subset}C^\infty_0(M)$
and
$\{\Phi_\alpha\}{\subset}C^\infty_0(N)$
subordinated to $\{M_\nu\}$ and $\{N_\alpha\}$ respectively.
Take $\{\psi_\nu\}, \{\xi_\nu\} \subset C^\infty_0(M)$
and $\{\Psi_\alpha\}, \{\Xi_\alpha\} \subset C^\infty_0(N)$ so that
\begin{alignat*}{3}
\xi_\nu=1
& \quad\text{in}\quad
\operatorname{supp}[\phi_\nu],
& \quad
\psi_\nu=1
& \quad\text{in}\quad
\operatorname{supp}[\xi_\nu],
& \quad
\operatorname{supp}[\psi_\nu]
& \subset
M_\nu,
\\
\Xi_\alpha=1
& \quad\text{in}\quad
\operatorname{supp}[\Phi_\alpha],
& \quad
\Psi_\alpha=1
& \quad\text{in}\quad
\operatorname{supp}[\Xi_\alpha],
& \quad
\operatorname{supp}[\Psi_\alpha]
& \subset
N_\alpha.
\end{alignat*}
We define a local $(1,1)$-tensor by
$$
B_{\nu,\alpha,j}
=
-(2k-1)
\sum_{i=1}^m
g^{ij}(\nabla_iJ_u)
\quad\text{in}\quad
M_\nu{\cap}u^{-1}(N_\alpha).
$$
Note that $J_uB_{\nu,\alpha,j}=-B_{\nu,\alpha,j}J_u$.
Using the partitions of unity and the local tensors,
we define a properly supported pseudodifferential operator of order $-1$
acting on $\Gamma(u^{-1}TN)$ by
\begin{align*}
\tilde{\Lambda}
& =
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
J_uB_{\nu,\alpha,j}\nabla_j
\xi_\nu(x)\Xi_\alpha(u)
(1-\Delta_g)^{-1}
\psi_\nu(x)\Psi_\alpha(u)
\\
& -
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
B_{\nu,\alpha,j}J_u\nabla_j
\xi_\nu(x)\Xi_\alpha(u)
(1-\Delta_g)^{-1}
\psi_\nu(x)\Psi_\alpha(u),
\end{align*}
where $\nabla_j$ is expressed by the coordinates
$x_\nu^1,\dotsc,x_\nu^m$
and
$z_\alpha^1,\dotsc,z_\alpha^{2n}$.
Here we remark that for
$$
V
=
\sum_{a=1}^{2n}
V_{\nu,\alpha}^a
\left(\frac{\partial}{\partial{z_\alpha^a}}\right)_u
$$
supported in $M_\nu{\cap}u^{-1}(N_\alpha)$,
$$
\xi_\nu(x)\Xi_\alpha(u)(1-\Delta_g)^{-1}V
=
\sum_{a=1}^{2n}
\left(
\xi_\nu(x)\Xi_\alpha(u)(1-\Delta_g)^{-1}V_{\nu,\alpha}^a
\right)
\left(\frac{\partial}{\partial{z_\alpha^a}}\right)_u
$$
is well-defined and supported in $M_\nu{\cap}u^{-1}(N_\alpha)$ also.
We make use of elementary theory of pseudodifferential operators freely.
We remark that
each term in $\tilde{\Lambda}$ is properly supported
and invariant under the change of coordinates in $M$ and $N$
up to cut-off functions.
Then, we can deal with each term as if it were
a pseudodifferential operator on $\mathbb{R}^m$
acting on $\mathbb{R}^d$-valued functions.
Then, we can make use of pseudodifferential operators
whose symbols have limited smoothness.
See \cite[Section~2]{chihara1} and \cite{nagase}.
In other words, we do not have to take care of the type of
this pseudodifferential operator.
It is well-known that the type of pseudodifferential operators
on manifolds has some restrictions in general.
\par
Set $\Lambda=I-\tilde{\Lambda}$ and $\Lambda^\prime=I+\tilde{\Lambda}$,
where $I$ is the identity mapping.
Since $\Lambda^\prime\Lambda=I-\tilde{\Lambda}^2$
and $\tilde{\Lambda}^2$ is a pseudodifferential operator of order $-2$,
we deduce that for $t\in[0,T_\varepsilon^\ast]$,
$$
C_k^{-1}\mathcal{N}_k(u)^2
\leqslant
\sum_{l=1}^k
\lVert\tilde{\Delta}_g^lu\rVert^2
\leqslant
C_k\mathcal{N}_k(u)^2.
$$
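Indeed, writing
$\tilde{\Delta}_g^ku
=\Lambda^\prime\Lambda\tilde{\Delta}_g^ku+\tilde{\Lambda}^2\tilde{\Delta}_g^ku$
and using that $\tilde{\Lambda}^2$ is of order $-2$, we have, roughly,
$$
\lVert\tilde{\Delta}_g^ku\rVert
\leqslant
C_k
\left(
\lVert\Lambda\tilde{\Delta}_g^ku\rVert
+
\lVert\tilde{\Delta}_g^{k-1}u\rVert
\right),
$$
while the opposite bound is immediate from
the $L^2$-boundedness of $\Lambda$.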
\par
We apply $\Lambda$ to \eqref{equation:pde2} with $l=k$.
A direct computation shows that
$$
\Lambda\nabla_t=\nabla_t\Lambda-\frac{\partial\Lambda}{\partial{t}},
\quad
\left\lVert
\frac{\partial\Lambda}{\partial{t}}
\tilde{\Delta}_g^ku
\right\rVert
\leqslant
C_k\mathcal{N}_k(u)
$$
for $t\in[0,T_\varepsilon^\ast]$.
The matrices of principal symbols of $\Lambda$ and $\tilde{\Delta}_g^2$
commute with each other since the matrix of the principal symbol of
$\tilde{\Delta}_g^2$ is $(g^{ij}\xi_i\xi_j)^2I_{2n}$,
where $I_{2n}$ is the $2n\times2n$ identity matrix.
Then,
$[\Lambda,\tilde{\Delta}_g^2]=-[\tilde{\Lambda},\tilde{\Delta}_g^2]$
is a pseudodifferential operator of order $2$.
Hence we have
$$
\Lambda\tilde{\Delta}_g^2
=
\tilde{\Delta}_g^2\Lambda
+
[\Lambda,\tilde{\Delta}_g^2],
\quad
\lVert[\Lambda,\tilde{\Delta}_g^2]\tilde{\Delta}_g^ku\rVert
\leqslant
C_k\lVert\tilde{\Delta}_g^{k+1}u\rVert
$$
for $t\in[0,T_\varepsilon^\ast]$.
\par
Here we set
$$
A
=
\sum_{i,j=1}^m
\frac{1}{\sqrt{G}}
\nabla_ig^{ij}\sqrt{G}J_u\nabla_j
$$
for short.
Since $I=\Lambda^\prime\Lambda+\tilde{\Lambda}^2$,
we deduce that
$$
\Lambda(-A)
=
-\Lambda{A}(\Lambda^\prime\Lambda+\tilde{\Lambda}^2)
=
-A+\tilde{\Lambda}A-A\tilde{\Lambda}
+(\tilde{\Lambda}A\tilde{\Lambda}+A\tilde{\Lambda}^2),
$$
and $\tilde{\Lambda}A\tilde{\Lambda}+A\tilde{\Lambda}^2$
is $L^2$-bounded for $t\in[0,T_\varepsilon^\ast]$.
The principal symbol of $(1-\Delta_g)^{-1}$ is globally defined as
$I_{2n}/g^{ij}\xi_i\xi_j$.
We deduce that modulo $L^2$-bounded operators,
\begin{align*}
\tilde{\Lambda}A
& \equiv
-
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
B_{\nu,\alpha,j}J_u^2\nabla_j
\xi_\nu(x)\Xi_\alpha(u)
(1-\Delta_g)^{-1}
\psi_\nu(x)\Psi_\alpha(u)
\tilde{\Delta}_g
\\
& \equiv
-
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
B_{\nu,\alpha,j}\nabla_j
\xi_\nu(x)\Xi_\alpha(u)
\psi_\nu(x)\Psi_\alpha(u)
\\
& =
-
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
B_{\nu,\alpha,j}\nabla_j
\\
& =
\frac{(2k-1)}{2}
\sum_{i,j}g^{ij}(\nabla_iJ_u)\nabla_j,
\\
-A\tilde{\Lambda}
& \equiv
-
\frac{1}{2}
\sum_\nu\sum_\alpha
\tilde{\Delta}_g
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
J_u^2B_{\nu,\alpha,j}\nabla_j
\xi_\nu(x)\Xi_\alpha(u)
(1-\Delta_g)^{-1}
\psi_\nu(x)\Psi_\alpha(u)
\\
& \equiv
-
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
B_{\nu,\alpha,j}\nabla_j
\xi_\nu(x)\Xi_\alpha(u)
\psi_\nu(x)\Psi_\alpha(u)
\\
& =
-
\frac{1}{2}
\sum_\nu\sum_\alpha
\phi_\nu(x)\Phi_\alpha(u)
\sum_{j=1}^m
B_{\nu,\alpha,j}\nabla_j
\\
& =
\frac{(2k-1)}{2}
\sum_{i,j}g^{ij}(\nabla_iJ_u)\nabla_j.
\end{align*}
\par
Combining computations above, we obtain
$$
\left\{
\nabla_t
+
\varepsilon\tilde{\Delta}_g^2
-
\frac{1}{\sqrt{G}}
\sum_{i,j=1}^m
\nabla_i
g^{ij}\sqrt{G}J_u\nabla_j
\right\}
\Lambda
\tilde{\Delta}_g^ku
=
{\varepsilon}P_k+Q_k,
$$
where $P_k$ and $Q_k$ are estimated as
$$
\lVert{P_k}\rVert
\leqslant
C_k(\lVert\tilde{\Delta}_g\Lambda\tilde{\Delta}_g^ku\rVert+\mathcal{N}_k(u)),
\quad
\lVert{Q_k}\rVert
\leqslant
C_k\mathcal{N}_k(u).
$$
In the same way as \eqref{equation:shiota}, we deduce
\begin{align}
\frac{d}{dt}\lVert\Lambda\tilde{\Delta}_g^ku\rVert^2
& \leqslant
-2\varepsilon
\lVert\tilde{\Delta}_g\Lambda\tilde{\Delta}_g^ku\rVert^2
+
C_k\varepsilon
\lVert\tilde{\Delta}_g\Lambda\tilde{\Delta}_g^ku\rVert
\lVert\Lambda\tilde{\Delta}_g^ku\rVert
+
C_k
\mathcal{N}_k(u)
\lVert\Lambda\tilde{\Delta}_g^ku\rVert
\nonumber \\
& \leqslant
C_k
\mathcal{N}_k(u)
\lVert\Lambda\tilde{\Delta}_g^ku\rVert.
\label{equation:reiko}
\end{align}
Combining \eqref{equation:shiota} and \eqref{equation:reiko}, we obtain
\begin{equation}
\frac{d}{dt}\mathcal{N}_k(u)
\leqslant
C_k
\mathcal{N}_k(u)
\quad\text{for}\quad
t\in[0,T_\varepsilon^\ast].
\label{equation:joe}
\end{equation}
By the Gronwall inequality, \eqref{equation:joe} yields
$\mathcal{N}_k(u(t))\leqslant\mathcal{N}_k(u_0)e^{C_kt}$
for $t\in[0,T_\varepsilon^\ast]$.
If we take $t=T_\varepsilon^\ast$, then we have
$2\mathcal{N}_k(u_0)\leqslant\mathcal{N}_k(u_0)e^{C_kT_\varepsilon^\ast}$,
which implies that
$T_\varepsilon^\ast{\geqslant}T=\log2/C_k>0$.
Thus $\{u_\varepsilon\}_{\varepsilon{\in(0,1]}}$ is bounded in $L^\infty(0,T;H^{2k}(M;TN))$.
This completes the proof.
\end{proof}
\begin{proof}[Uniqueness]
Let $u_1,u_2{\in}L^\infty(0,T;H^{2k}(M;TN))$ be solutions to
\eqref{equation:pde}-\eqref{equation:data}.
Set $v_1=w{\circ}u_1$ and $v_2=w{\circ}u_2$ for short.
We denote by $\Pi_v:\mathbb{R}^d \rightarrow \mathbb{R}^d$
the projection to $T_vw(N)$ for $v{\in}w(N)$.
Since
$$
\frac{\partial{v_j}}{\partial{t}}
=
\tilde{J}_{v_j}\Pi_{v_j}\Delta_gv_j,
\quad
\tilde{J}_{v_j}
=
dw{\circ}J_{u_j}{\circ}dw^{-1},
\quad
j=1,2,
$$
$v=v_1-v_2$ solves
\begin{align*}
\frac{\partial{v}}{\partial{t}}
& =
\tilde{J}_{v_1}\Pi_{v_1}\Delta_gv
+
\left(\tilde{J}_{v_1}\Pi_{v_1}-\tilde{J}_{v_2}\Pi_{v_2}\right)\Delta_gv_2
\\
& =
\tilde{J}_{v_1}\Pi_{v_1}\Delta_gv
+
A(v_1,v_2,\Delta_gv_2)v.
\end{align*}
Here we applied the mean value theorem to the second term of
the right hand side of the above equation,
and $A(v_1,v_2,\Delta_gv_2)$ is an appropriate $d{\times}d$ matrix.
\par
Using the integration by parts, we have
\begin{align}
\frac{d}{dt}
\int_M\langle{v,v}\rangle
d\mu_g
& =
2
\int_M
\left\langle\frac{\partial{v}}{\partial{t}},v\right\rangle
d\mu_g
\nonumber
\\
& =
2
\int_M
\left\langle{
\tilde{J}_{v_1}\Pi_{v_1}\Delta_gv
+
A(v_1,v_2,\Delta_gv_2)v,
v
}\right\rangle
d\mu_g
\nonumber
\\
& \leqslant
C
\int_M
\left\{
\langle{v,v}\rangle
+
\sum_{i,j=1}^m
g^{ij}
\left\langle
\frac{\partial{v}}{\partial{x^i}},
\frac{\partial{v}}{\partial{x^j}}
\right\rangle
\right\}
d\mu_g.
\label{equation:katase}
\end{align}
Using the properties of $\tilde{J}_{v_1}$ and the projection,
and the integration by parts, we deduce
\begin{align}
\frac{d}{dt}
\int_M
\sum_{i,j=1}^m
g^{ij}
\left\langle
\frac{\partial{v}}{\partial{x^i}},
\frac{\partial{v}}{\partial{x^j}}
\right\rangle
d\mu_g
& =
2
\int_M
\sum_{i,j=1}^m
g^{ij}
\left\langle
\frac{\partial^2{v}}{\partial{t}\partial{x^i}},
\frac{\partial{v}}{\partial{x^j}}
\right\rangle
d\mu_g
\nonumber
\\
& =
-2
\int_M
\left\langle
\frac{\partial{v}}{\partial{t}},
\Delta_gv
\right\rangle
d\mu_g
\nonumber
\\
& =
-
2
\int_M
\left\langle
\tilde{J}_{v_1}\Pi_{v_1}\Delta_gv+Av,
\Delta_gv
\right\rangle
d\mu_g
\nonumber
\\
& =
-
2
\int_M
\left\langle
\tilde{J}_{v_1}\Pi_{v_1}\Delta_gv,
\Pi_{v_1}\Delta_gv
\right\rangle
d\mu_g
\nonumber
\\
& \quad
-
2
\int_M
\langle{Av,\Delta_gv}\rangle
d\mu_g
\nonumber
\\
& =
-
2
\int_M
\langle{Av,\Delta_gv}\rangle
d\mu_g
\nonumber
\\
& \leqslant
C
\int_M
\left\{
\langle{v,v}\rangle
+
\sum_{i,j=1}^m
g^{ij}
\left\langle
\frac{\partial{v}}{\partial{x^i}},
\frac{\partial{v}}{\partial{x^j}}
\right\rangle
\right\}
d\mu_g.
\label{equation:nana}
\end{align}
Combining \eqref{equation:katase} and \eqref{equation:nana}, we have
$$
\frac{d}{dt}
\int_M
\left\{
\langle{v,v}\rangle
+
\sum_{i,j=1}^m
g^{ij}
\left\langle
\frac{\partial{v}}{\partial{x^i}},
\frac{\partial{v}}{\partial{x^j}}
\right\rangle
\right\}
d\mu_g
\leqslant
C
\int_M
\left\{
\langle{v,v}\rangle
+
\sum_{i,j=1}^m
g^{ij}
\left\langle
\frac{\partial{v}}{\partial{x^i}},
\frac{\partial{v}}{\partial{x^j}}
\right\rangle
\right\}
d\mu_g,
$$
which implies $v=0$.
This completes the proof.
\end{proof}
\begin{proof}[Continuity in time]
Let $u{\in}L^\infty(0,T;H^{2k}(M;TN))$
be the unique solution to \eqref{equation:pde}-\eqref{equation:data}.
We remark that
$u{\in}C([0,T];H^{2k-1}(M;TN))$
and $\tilde{\Delta}_g^ku$ is a weakly continuous
$L^2(M;TN)$-valued function on $[0,T]$.
We identify $N$ and $w(N)$ below.
Let $\{u_\varepsilon\}_{\varepsilon\in(0,1]}$ be a sequence of solutions
to \eqref{equation:pde-ep}-\eqref{equation:data-ep},
which approximates $u$.
We can easily check that for any $\phi{\in}C^\infty([0,T]\times{M};\mathbb{R}^d)$,
\begin{align*}
\Lambda_\varepsilon\phi \longrightarrow \Lambda\phi
& \quad\text{in}\quad
L^2((0,T){\times}M;\mathbb{R}^d),
\\
u_\varepsilon \longrightarrow u
& \quad\text{in}\quad
L^2((0,T){\times}M;\mathbb{R}^d),
\\
\Lambda_\varepsilon\tilde{\Delta}_g^ku_\varepsilon \longrightarrow \tilde{u}
& \quad\text{in}\quad
L^2((0,T){\times}M;\mathbb{R}^d)
\quad\text{weakly star},
\end{align*}
as $\varepsilon\downarrow0$
with some $\tilde{u}$.
Then, $\tilde{u}=\Lambda\tilde{\Delta}_g^ku$
in the sense of distributions.
The time-continuity of $\tilde{\Delta}_g^ku$ is equivalent to
that of $\Lambda\tilde{\Delta}_g^ku$ since
$\Lambda{\in}C([0,T];\mathscr{L}(L^2(M;\mathbb{R}^d)))$.
\par
It suffices to show that
\begin{equation}
\lim_{t\downarrow0}
\Lambda(t)\tilde{\Delta}_g^ku(t)
=
\Lambda(0)\tilde{\Delta}_g^ku_0
\quad\text{in}\quad
L^2(M;\mathbb{R}^d),
\label{equation:gillian}
\end{equation}
since the other cases can be proved in the same way.
Estimate \eqref{equation:joe} and the lower semicontinuity of the $L^2$-norm imply
$$
\sum_{l=1}^{k-1}
\lVert{\tilde{\Delta}_g^lu(t)}\rVert^2
+
\lVert{\Lambda(t)\tilde{\Delta}_g^ku(t)}\rVert^2
\leqslant
\sum_{l=1}^{k-1}
\lVert{\tilde{\Delta}_g^lu_0}\rVert^2
+
\lVert{\Lambda(0)\tilde{\Delta}_g^ku_0}\rVert^2
+
C_k\mathcal{N}_k(u_0)^2t.
$$
Letting $t\downarrow0$, we have
$$
\limsup_{t\downarrow0}
\lVert{\Lambda(t)\tilde{\Delta}_g^ku(t)}\rVert^2
\leqslant
\lVert{\Lambda(0)\tilde{\Delta}_g^ku_0}\rVert^2,
$$
which implies \eqref{equation:gillian}.
This completes the proof.
\end{proof}
The visual cognitive ability of machines has significantly improved, mostly due to the recent development of deep learning techniques. Owing to its high complexity, the decision process is inherently a black-box, and therefore, much research has focused on making the machine explain the reason behind its decision to verify its trustworthiness.
The present study particularly focuses on building a system that not only classifies a video, but also explains why a given sample is {\bf not} predicted to one class but another, whereas almost all existing research pursues the reason for the positive class. Concretely, we intend to generate the explanation in the form ``X is classified to A not B because C and D exist in X.''\footnote{rephrased as ``X would be classified as B not A if C and D were not in X.''}
This type of explanation is referred to as {\it counterfactual explanation}~\cite{wachter2017counterfactual} in this paper. It may aid us in understanding why the model prediction is different from what we think, or when discrimination is difficult between two specific classes.
The explanation is valuable especially for long videos because it efficiently provides information on content that humans cannot inspect immediately.
\begin{figure}[t!]
\begin{center}
\begin{tabular}{c}
\includegraphics[clip, width=0.97\linewidth, height=3.9cm]{./img/example.eps}
\end{tabular}
\end{center}
\vspace{-0.35cm}
\caption{Our model not only classifies a video to a category (Pole vault), but also generates explanations why the video is not classified to another class (Long jump). It outputs several pairs of attribute (e.g., using pole) and spatio-temporal region (e.g., red box) as an explanation.}
\vspace{-0.45cm}
\label{fig:example}
\end{figure}
We first need to discuss the desired output. This work treats an explanation as the one satisfying two conditions as follows:
\begin{enumerate}[label=(\Alph*), noitemsep,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item The output should be interpretable for humans,
\item The output should have fidelity to the explained target.
\end{enumerate}
Related to (A), one natural way of obtaining interpretable output would be to assign an importance to each element in the input space and visualize it. Although this can help us to perceive the important region for the model prediction, the interpretation leading to it cannot be uniquely determined. We enhance the interpretability of the explanation by leveraging not only parts of visual input but also linguistics, which is compatible with the visual information similar to the previous work~\cite{Hendricks_2018_ECCV}.
More specifically, dealing with a spatio-temporal region of the target video and (the existence of) an attribute as elements, we concatenate them to generate explanations.
An example is shown in Fig.~\ref{fig:example}.
To realize (B) while satisfying (A), the expected output of the visual-linguistic explanation should have the following two properties:
\begin{enumerate}[noitemsep,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item[(1)] Visual explanation is the region which retains high positiveness/negativeness on model prediction for specific positive/negative classes,
\item[(2)] Linguistic explanation is compatible with the visual counterpart.
\end{enumerate}
The score to measure how the requirements above are fulfilled is hereafter referred to as the {\it counterfactuality score}, or simply {\it counterfactuality}.
The above-listed requirements cannot be achieved by naively exploiting the output of existing methods that consider only positiveness for explanation, such as~\cite{Hendricks_2018_ECCV}, where the authors attempt to generate visual-linguistic explanations of the positive class.
This is mainly because positiveness/negativeness need to be considered simultaneously for specific positive/negative classes in the same region.
To build a system that generates explanations satisfying both (1) and (2), we propose a novel framework for generating counterfactual explanations based on predicting counterfactuality. The outline of the framework is depicted in two steps:
(a) Train a classification model that is the target of the explanation,
(b) Train an auxiliary explanation model in a post-hoc manner by utilizing output and mid-level features of the target classifier after freezing its weights to prevent output change.
An explanation model predicts counterfactuality scores for all negative classes.
It is trained by exploiting the fact that supervised information ``X is classified to category A.'' can be translated into ``X is not classified to any category B except A.''
The proposed explanation model holds a trainable classifier that predicts simultaneous existence of the pair of class and attribute.
Counterfactuality for a specific visual-linguistic explanation (or region-attribute) can be simply calculated by subtracting classifier outputs corresponding to positive/negative classes of interest.
When the system outputs the explanation, several pairs of [attribute, region] are selected, whose counterfactuality is large for input positive/negative classes.
Maximization (or minimization) of the prediction score with regard to the region is required during the training/inference process, which is computationally intractable if computed naively. Under the natural assumption that candidate regions are tube-shaped, the maximum (or minimum) value and the corresponding region path can be efficiently computed by dynamic programming. We construct the algorithm such that it can be implemented as a layer of a neural network with only standard functions (e.g., max pooling, relu) pre-implemented in most deep learning libraries~\cite{paszke2017automatic, tensorflow2015-whitepaper, chollet2015, jia2014caffe, chainer_learningsys2015} by changing the computation order, which enables it to be easily combined with Convolutional Neural Networks (CNNs) on GPUs.
The proposed simple and intuitive framework for predicting counterfactuality is justified as the maximization of the lower bound of conditional mutual information, as discussed later, providing an information-theoretical point of view toward it.
Moreover, we assigned additional annotations to existing action-recognition datasets to enable quantitative evaluation for this task, and evaluated our method utilizing the created dataset.
The contributions of this work are as follows:
\begin{itemize}[noitemsep,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item Introduce a novel task, which is generating counterfactual explanations with spatio-temporal region,
\item Propose a novel method based on simultaneous estimations of visual-linguistic compatibility and discriminativeness of two specific classes,
\item Propose a novel neural network layer for efficiently solving the dynamic programming problem which appears during training/inference procedures,
\item Derive a connection between the proposed method and conditional mutual information maximization, providing better understanding of the proposed model from the information-theoretical viewpoint,
\item Propose a metric as well as extending datasets for this task, enabling quantitative evaluation,
\item Demonstrate the effectiveness of the proposed approach by experiment.
\end{itemize}
\section{Related Work}\label{sec:relatedwork}
We divide existing research on visual explanation into two categories, i.e., justification, or introspection.
Methods for justification are expected to explain as humans do, by projecting the input to a {\it correct} reason obtained from the outside~\cite{ross2017right}. Methods exploiting textual~\cite{hendricks2016generating} or multimodal~\cite{park2018multimodal, Hendricks_2018_ECCV,Kanehira_2019_CVPR_Learning} supervision belong to this category. True explanations, as given by humans, are expected to be generated regardless of the type of model, and evaluation is performed by comparison with ground-truth supervision.
In the latter category, the main goal is to know where the model actually ``looks'' in the input by propagating the prediction to the input space
~\cite{simonyan2013deep, bach2015pixel, zhang2016top, selvaraju2016grad, zhou2016learning, fong2017interpretable, zhou2018interpretable}, or by learning instance-wise importance of elements~\cite{chen2018learning, dabkowski2017real} with an auxiliary model.
As opposed to the methods of the former category, it is important to show the region where the model focuses for prediction rather than whether the prediction is true for humans. The evaluation is often performed by investigating the output of the model before/after the elements considered to be important are changed.
Although almost all previous research pursues the reason for positiveness, we attempt to provide the reason for negativeness as well. While our work is categorized into the latter, it also has an aspect of the former; the important region for the model prediction is outputted, while exploiting the linguistic attribute for enhancing interpretability for humans.
Besides works attempting counterfactual explanations by example selection~\cite{goyal2017making, wachter2017counterfactual, Kanehira_2018_CVPR}, \cite{Hendricks_2018_ECCV} conducted research similar to the present study, stating an application of grounding visual explanations to counterfactual explanation, where textual explanations (without the region) are generated by comparing the outputs of generated explanations for the target sample and the sample nearest to it.
Because their work is in the former category where negativeness cannot be well-defined, no quantitative evaluation was provided.
The main differences between our work and this previous study are:
\begin{itemize}[noitemsep,topsep=0pt]
\item We set generating counterfactual explanations as the main goal, and propose a method which utilizes multimodal information specifically for this task,
\item Our work belongs to the latter category where negativeness can be well-defined by model output, and therefore, quantitative evaluation is possible,
\item We provide quantitative evaluation with the metric.
\end{itemize}
\begin{figure*}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[clip, width=0.9\linewidth, height=6.3cm]{./img/pipeline.eps}
\end{tabular}
\end{center}
\vspace{-0.8cm}
\caption{Pipeline of the proposed method. Our model holds two modules, the classification and explanation module. The outline of the framework follows two steps: (a) Train a classification model to be explained, (b) Train an auxiliary explanation model in a post-hoc manner by utilizing output and mid-level features of the target classifier after freezing its weights.}
\label{fig:system}
\vspace{-0.4cm}
\end{figure*}
\section{Method}\label{sec:proposed}
We describe the details of our proposed method in this section. The main goal of this work is to build a system that not only classifies a video, but also explains why a given sample is not predicted to one class but another. As stated earlier, we utilize the combination of the spatio-temporal region (tube) and the attribute as the element of explanations. The expected output explanation should have the following two properties:
\begin{enumerate}[noitemsep,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item[(1)] Visual explanation is the region which retains high positiveness/negativeness on the model prediction for specific positive/negative classes.
\item[(2)] Linguistic explanation is compatible to the visual counterpart.
\end{enumerate}
First, we formulate the task addressed in subsection~\ref{subsec:form}, and describe the outline of the framework as well as the actual training/inference process of the explanation model from subsection~\ref{subsec:outline} to~\ref{subsec:train}. In the subsequent subsection~\ref{subsec:mspool} and~\ref{subsec:multiple}, we elucidate the method and its implementation for efficiently solving the optimization problem in the training/inference step. Theoretical background of our method will be discussed in subsection~\ref{subsec:mutalinfo}.
\subsection{Task formulation}\label{subsec:form}
Notations used throughout this paper and formulation of the task we deal with are described in this subsection.
Let ${\mathbf x} \in {\mathbb R}^{W'\times H' \times T' \times 3}$ be the input video, where $W', H',T'$ are the width, height, and the number of frames of the video, respectively. We denote the class and the attribute by $c \in {\mathcal C}$ and $s \in {\mathcal S}$, respectively. We assume that attributes are assigned to each sample used for training, and that the assigned set of attributes is represented as ${\mathcal S}({\mathbf x}) \subset {\mathcal S}$.
${\mathbf R} \in {\mathcal R}$ denotes the spatio-temporal region used for visual explanation, and its element $[i, j, t, {\rm scale}, {\rm shape}] \in {\mathbf R}$ holds the spatio-temporal coordinate, scale, and shape of one element of the visual region. ${\mathcal R}$ is a possible set of ${\mathbf R}$. In this work, we particularly limit ${\mathcal R}$ to the set containing all possible tubes.
In other words, ${\mathbf R}$ contains at most one element corresponding to the time step $t$, and all the elements are spatially and temporally continuous.
We build an explainable model, having the following two functions:
(a) Classify the video ${\mathbf x}$ to a specific class $c_{\rm pos} \in {\mathcal C}$,
(b) Explain the reason for specific negative class $c_{\rm neg} \in {\mathcal C}\backslash c_{\rm pos}$ by the combination of attribute $s$ (linguistic) and spatio-temporal tube ${\mathbf R}$ (visual).
Our model predicts several pairs of $(s, {\mathbf R})$ for specific class pair $c_{\rm pos}, c_{\rm neg}$, and simply puts them together as final output.
\subsection{System pipeline}\label{subsec:outline}
We outline the pipeline of the proposed method in Fig.~\ref{fig:system}.
Our model holds two modules, namely, the classification module and the explanation module. The outline of the framework follows two steps:
(a) Train a classification model, which is the target of the explanation,
(b) Train an auxiliary explanation model in a post-hoc manner as in existing research (e.g.,\cite{hendricks2016generating}) by utilizing output and mid-level activation of the target classifier after freezing its weights to prevent change in output.
Specifically, we explicitly represent feature extraction parts in the pre-trained classification network as
\begin{eqnarray}\label{eq:classifier}
p(c|{\mathbf x}) = f({\mathbf h}({\mathbf x})),\ {\mathbf h}({\mathbf x}) \in {\mathbb R}^{W \times H \times T \times d},
\end{eqnarray}
where $W, H, T$ indicate the width, height, and the number of frames in the mid-level feature, respectively. The $d$-dimensional feature vector corresponding to each physical coordinate is denoted by ${\mathbf h}({\mathbf x})[i, j, t] \in {\mathbb R}^d$.
We introduce an auxiliary model ${\mathbf g}$, which is responsible for the explanation. It predicts {\it counterfactuality}, which measures (1) the positiveness/negativeness of the region ${\mathbf R}$ on the model prediction $p(c|{\mathbf x})$ for a specific pair of $c_{\rm pos}, c_{\rm neg}$, and (2) the compatibility of the linguistic explanation $s$ to the visual counterpart ${\mathbf R}$. By fixing the parameter of the feature extraction part ${\mathbf h}(\cdot)$, we obtain
\begin{eqnarray}
y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}} = {\mathbf g}({\mathbf h}({\mathbf x})) \in [0, 1 ]^{(|{\mathcal C}| - 1)\times |{\mathcal S}|\times |{\mathcal R}|}
\end{eqnarray}
which holds a counterfactuality score corresponding to one combination of $c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}$ in each dimension.
Positive class $c_{\rm pos}$ is sampled from $p(c|{\mathbf x})$ during training, and $c_{\rm pos}= \argmax_{c} p(c|{\mathbf x})$ is applied in the inference step. Any remaining class $c_{\rm neg}\in {\mathcal C}\backslash c_{\rm pos}$ is regarded as negative.
We consider the element of ${\mathbf R}$ in the space of ${\mathbf h}({\mathbf x})$. In other words, the coordinate $(i, j, t)$ of ${\mathbf R}$ corresponds to that of ${\mathbf h}({\mathbf x})$. The shape of ${\mathbf R}$ is fixed to $[W/W', H/H']$ for the sake of simplicity. The extension to multiple scales and the aspect ratio will be discussed in subsection~\ref{subsec:multiple}.
\subsection{Predicting counterfactuality}\label{subsec:predict}
Our explanation model predicts counterfactuality, that is, (1) how much the region ${\mathbf R}$ retains high positiveness/negativeness on the $p(c|{\mathbf x})$ for $c_{\rm pos}, c_{\rm neg}$, and (2) how much $s$ is compatible to ${\mathbf R}$. The counterfactuality score is predicted in the following steps.
A target sample ${\mathbf x}$ is inputted to obtain the mid-level representation ${\mathbf h}({\mathbf x})$ and the conditional probability $p(c|{\mathbf x})$.
The explanation model holds classifiers ${\hat {\mathbf g}_{cs}}$ for each pair of $(c, s)$. These classifiers are applied to each element feature of ${\mathbf h}({\mathbf x})$, and predict simultaneous existence of $(c, s)$ as
\begin{eqnarray}\label{eq:inner-func}
{\mathbf m}_{cs} = {\hat {\mathbf g}_{cs}}({\mathbf h}({\mathbf x}))\in {\mathbb R}^{W \times H \times T},
\end{eqnarray}
for each pair of $(c, s)$.
For simplicity, we utilize a linear function that preserves geometrical information, that is, the convolutional layer, as the classifier ${\hat {\mathbf g}}_{cs}$.
To measure how likely the region element is considered to be $c_{\rm pos}$, not $c_{\rm neg}$, with linguistic explanation $s$, we element-wise subtract the value of (\ref{eq:inner-func}) for all $c_{\rm neg} \in {\mathcal C} \backslash \{c_{\rm pos}\}$ and $s$ as
\begin{eqnarray}\label{eq:deltam}
\Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s} = {\mathbf m}_{c_{\rm pos}s} - {\mathbf m}_{c_{\rm neg}s}
\end{eqnarray}
To obtain the score for the region ${\mathbf R}$, we define the procedure of aggregating scalar values through ${\mathbf R}$ in the 3-dimensional tensor $\Delta {\mathbf m}$ as
\begin{eqnarray}
\Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[{\mathbf R}]
= \sum_{(i, j, t) \in {\mathbf R}} \Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[i, j, t].
\end{eqnarray}
Please note $\Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[{\mathbf R}] \in {\mathbb R}$.
By applying a sigmoid activation function to the output, we obtain the counterfactuality score $y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}} \in [0, 1]$ as
\begin{eqnarray}\label{eq:out}
y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}} = \sigma(\Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[{\mathbf R}])
\end{eqnarray}
where $\sigma(a) = \frac{1}{1 + {\rm exp}(-a)}$.
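The computation of (\ref{eq:inner-func}) and (\ref{eq:deltam}) admits a compact implementation. The following PyTorch sketch (not the authors' code; all names and the tensor layout are our own assumptions) realizes the classifiers ${\hat {\mathbf g}_{cs}}$ for all pairs $(c,s)$ as a single $1{\times}1{\times}1$ 3d convolution:
\begin{verbatim}
# Minimal sketch: per-(class, attribute) linear classifiers as one conv.
import torch
import torch.nn as nn

B, d, T, H, W = 2, 256, 8, 7, 7         # batch and feature sizes (ours)
n_classes, n_attrs = 10, 20

h = torch.randn(B, d, T, H, W)          # mid-level features h(x)
g_hat = nn.Conv3d(d, n_classes * n_attrs, kernel_size=1)

m = g_hat(h).view(B, n_classes, n_attrs, T, H, W)  # m_{cs}
c_pos, c_neg = 3, 7
delta_m = m[:, c_pos] - m[:, c_neg]     # Delta m for all s: (B, n_attrs, T, H, W)
\end{verbatim}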
\subsection{Training and inference}\label{subsec:train}
We illustrate the loss function optimized in the training step and the procedure in the inference step.
\textbf{Loss function}:
The supervised information ``A sample is classified to category $c_{\rm pos}$.'' can be translated to ``A sample is not classified to any category $c_{\rm neg} \in {\mathcal C}\backslash c_{\rm pos}$.'' By utilizing this, the model is trained to enlarge the counterfactuality score corresponding to the class pair $(c_{\rm pos}, c_{\rm neg})$ and attributes $s \in {\mathcal S}({\mathbf x})$. The output obtained after sigmoid activation in (\ref{eq:out}) can be interpreted as a probability, and its negative log likelihood is minimized.
Because computing output $y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}}$ for all pairs of $c_{\rm neg} \in {\mathcal C}\backslash c_{\rm pos}$, $s\in{\mathcal S}$ and ${\mathbf R}\in {\mathcal R}$ is not feasible, only ${\mathbf R}$ maximizing the loss is utilized for each pair of $c_{\rm neg}, s$ while training. Formally, for a given ${\mathbf x}$ and $c_{\rm pos}$, the loss
\begin{eqnarray}\label{eq:train}
{\ell}({\mathbf x}, c_{\rm pos}) = \frac{1}{|{\mathcal S({\mathbf x})}|}\sum_{s\in {\mathcal S({\mathbf x})}}\sum_{c_{\rm neg}\in {\mathcal C}\backslash c_{\rm pos}}-{\rm log}\ {\hat y_{c_{\rm pos}, c_{\rm neg}, s}}\nonumber \\
{\rm where}\ \ {\hat y_{c_{\rm pos}, c_{\rm neg}, s}} = \mathop{\rm min}\limits_{\mathbf R}\ y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}}
\end{eqnarray}
is calculated.
As stated in the next subsection~\ref{subsec:mspool},
${\hat y_{c_{\rm pos}, c_{\rm neg}, s}}$ can be efficiently computed by dynamic programming under the condition where ${\mathcal R}$, a possible set of ${\mathbf R}$, is limited to the set of all spatio-temporal tubes.
The overall loss function to be optimized is obtained by taking the expectation of (\ref{eq:train}) over ${\mathbf x}$ and $c_{\rm pos}$ as
\begin{eqnarray}\label{eq:loss}
{\mathcal L} = {\mathbb E}_{p(\mathbf{x})p(c_{\rm pos}|\mathbf{x})}
\left[ {\ell}({\mathbf x}, c_{\rm pos})\right].
\end{eqnarray}
$p(\mathbf x)$ indicates the true sample distribution, and $p(c_{\rm pos}|{\mathbf x})$ is the pre-trained network in (\ref{eq:classifier}).
Empirically, the expectation over ${\mathbf x}$ is calculated by summing up all $N$ training samples, and that over $c_{\rm pos}$ is achieved by sampling from the conditional distribution $p(c_{\rm pos}|{\mathbf x})$ given ${\mathbf x}$.
\textbf{Inference}: During inference, provided with positive and negative class $c_{\rm pos}, c_{\rm neg}$ as well as input ${\mathbf x}$, pairs of the attribute $s$ and the region ${\mathbf R}$ are outputted whose score is the largest. Formally,
\begin{eqnarray}\label{eq:inference}
s^{\star}, {\mathbf R}^{\star} = \argmax_{s, {\mathbf R}}\ y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}}
\end{eqnarray}
is calculated as the element of explanation.
For computing $k$ multiple outputs, we compute $\mathop{\rm max}\limits_{\mathbf R}\ y_{c_{\rm pos}, c_{\rm neg}, s, {\mathbf R}}$ for all $s$ and pick $k$ pairs whose scores are the largest. Minimization for ${\mathbf R}$ is also efficiently calculated as is the case in the training step.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[clip, width=\linewidth, height=4.5cm]{./img/mspool.eps}
\end{tabular}
\end{center}
\vspace{-0.5cm}
\caption{The illustration of maximum subpath pooling. Finding the subpath in the 3d tensor whose summation is maximum can be implemented by sequentially applying the elementwise sum, relu, and 2d max pooling in the time direction, followed by global 2d max pooling.}
\vspace{-0.5cm}
\label{fig:motivation}
\end{figure}
\subsection{Maximum subpath pooling}\label{subsec:mspool}
We describe in detail the maximization (minimization) problem for ${\mathbf R}$ appearing in (\ref{eq:train}) and (\ref{eq:inference}) in this subsection.
We limit ${\mathcal R}$, a possible set of ${\mathbf R}$, to the set containing all possible tubes. A tube can be expressed as a path in the 3d tensor, starting at one spatial coordinate $(i, j)$ in time $t$, and move to the neighbor $\{(i+l, j+m)\ |\ -k\le l, m \le k\}$ in time $t+1$ where $k$ controls how much movement of the spatial coordinate $(i, j)$ is allowed when the time changes from $t$ to $t+1$.
The path can start and end at any time. The ${\mathbf R}$ consists of coordinates $(i, j, t)$ satisfying the path condition.
With this limitation, the maximization problem with regard to ${\mathbf R}$ can be cast as an extension of finding a sub-array whose summation is maximum, and it can be efficiently solved by the algorithm proposed in~\cite{tran2014video} (shown in the supplementary material), which is an extension of Kadane's algorithm~\cite{bentley1984programming}.
Although~\cite{tran2014video} utilized it only for inference, we need to train the parameters, especially by back-propagation on GPUs to combine it with CNNs. To realize this, we construct the algorithm such that it can be implemented as a layer of a neural network with only standard functions pre-implemented in most deep learning libraries~\cite{paszke2017automatic, tensorflow2015-whitepaper, chollet2015, jia2014caffe, chainer_learningsys2015}. Interestingly, as shown in Fig.~\ref{fig:motivation}, the same result can be achieved by sequentially applying relu, 2d maxpooling, and element-wise summation in the time direction, followed by global 2d maxpooling.
The kernel size of maxpooling is a hyper-parameter corresponding to $k$ mentioned above. We fixed it to $3\times3$, which means that $k=1$. The computational cost of this algorithm is $O(WHT)$, and it can be solved in a single forward pass, since the iteration over $W$ and $H$ can be parallelized on GPU without significant overhead.
In the case of minimization of the objective, the same algorithm can be applied just by inverting sign of input and output.
To acquire the path ${\mathbf R}^{\star}$ whose summation is maximum, we simply need to calculate the partial derivative of the maximum value with regards to each input. Because
$\frac{\partial \Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[{\mathbf R}^{\star}]}{\partial \Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}}[i, j, t] = \begin{cases}
1 & ((i, j, t) \in {\mathbf R}^{\star}) \\
0 & ({\rm otherwise})
\end{cases}
$ \\
we can obtain the path corresponding to the maximum by extracting the element whose derivative is equal to 1. Implementation is likewise easy for the library, which has the function of automatic differentiation.
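A minimal PyTorch sketch of this layer is given below (this is not the authors' implementation; the function name and tensor layout are our own assumptions, and $k=1$ is realized by the $3{\times}3$ max pooling as described above). The recurrence keeps, for every spatial location, the best score of a subpath ending at the current frame; minimization is obtained by negating the input and the output.
\begin{verbatim}
# Minimal sketch of maximum subpath pooling and path recovery.
import torch
import torch.nn.functional as F

def max_subpath_pool(m):
    # m: (B, T, H, W) score map (e.g., Delta m for one attribute).
    # Returns the maximum over all tubes of the summed score.
    s = m[:, 0]                          # best subpath ending at t = 0
    best = s.flatten(1).max(dim=1).values
    for t in range(1, m.shape[1]):
        # extend the best neighboring subpath, or start a new one (relu)
        prev = F.max_pool2d(s.unsqueeze(1), 3, stride=1,
                            padding=1).squeeze(1)
        s = m[:, t] + F.relu(prev)
        best = torch.maximum(best, s.flatten(1).max(dim=1).values)
    return best

m = torch.randn(1, 8, 7, 7, requires_grad=True)
best = max_subpath_pool(m)
best.sum().backward()
path = (m.grad == 1)                     # the maximizing tube R*
\end{verbatim}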
This procedure can be interpreted as a kind of pooling.
To observe this, we denote the aggregated feature after applying sum pooling to mid-level local features throughout the region ${\mathbf R}^{\star}$ by ${\rm pool}({\mathbf h}({\mathbf x}), {\mathbf R}^{\star})$, and redefine $\Delta{\mathbf w}={\mathbf w}_{c_{\rm pos}s}-{\mathbf w}_{c_{\rm neg}s}$, where ${\mathbf w}_{cs}$ is the parameter of the convolutional layer ${\hat {\mathbf g}}_{cs}$. The summation of the score inside the sigmoid function in (\ref{eq:inference}) can be written as
\begin{eqnarray}
\mathop{\rm max}\limits_{\mathbf R} \Delta {\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[{\mathbf R}]
&=&\sum_{(i, j, t)\in{\mathbf R}^{\star}} \Delta{\mathbf w}^{\top} {\mathbf h}({\mathbf x})[i, j, t] \nonumber \\
&=&\Delta{\mathbf w}^{\top} \sum_{(i, j, t)\in{\mathbf R}^{\star}} {\mathbf h}({\mathbf x})[i, j, t] \nonumber \\
&=&\Delta{\mathbf w}^{\top} {\rm pool}({\mathbf h}({\mathbf x}), {\mathbf R}^{\star}) \nonumber
\end{eqnarray}
We refer to the sequence of this process as the maximum subpath pooling layer.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{./img/mask.pdf}
\vspace{-0.4cm}
\caption{The spatial weights multiplied with the parameters of the convolutional layer. Each element of the weight has a value proportional to the overlap to the outputted shape (in red). These values are normalized such that the summation equals 1.}
\vspace{-0.6cm}
\label{fig:mask}
\end{figure}
\subsection{Multiple scales and aspect ratio}\label{subsec:multiple}
So far, we only considered the situation where the scale and aspect ratio of ${\mathbf R}$ are fixed to $[W/W', H/H']$ and $1:1$, respectively.
We modify the algorithm to treat different scales and shapes of the region.
As described in~\ref{subsec:mspool},
the input of the optimization algorithm $\Delta {\mathbf m}_{c_{\rm pos},c_{\rm neg},s}$ (defined in (\ref{eq:deltam})) is a 3d tensor, which corresponds to each physical coordinate in the region. We expand the input from 3d to 5d by expanding ${\mathbf m}_{c,s}$ (defined in (\ref{eq:inner-func})),
taking scales and shapes into consideration.
To obtain the 5d tensor, we prepare multiple parameters ${\mathbf w}$ of the convolutional layer ${\hat {\mathbf g}_{cs}}$ (\ref{eq:inner-func}) for each scale and shape. (For simplicity, the subscripts $c, s$ of ${\mathbf w}$ will be omitted below).
To treat different scales, we extract mid-level representations from different layers of the target classifier to construct ${\mathbf h}({\mathbf x})$.
After convolution is applied separately, they are appended for the scale-dimension.
When considering different shapes at each scale, we prepare different parameters by multiplying importance weights corresponding to the shape of a region. Formally, ${\mathbf w}_{i}={\mathbf w} \odot {\mathbf a}_i$ is computed, where ${\mathbf a}_i$ has the same window size as ${\mathbf w}$ and consists of an importance weight for each position of the parameter, determined by the overlap ratio between the region and each local element. They are normalized to satisfy $|{\mathbf a}_i| = 1$. Different ${\mathbf w}_i$ are applied separately and the outputs are appended along the shape-dimension of the tensor. Concretely, we compute the scores corresponding to four kinds of shapes $(2\times 2, 2\times 3, 3\times 2, 3 \times 3)$ from a $3 \times 3$ convolution for each scale, as in Fig.~\ref{fig:mask}.
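As an illustration, a minimal sketch of computing such a weight ${\mathbf a}_i$, assuming it is proportional to the overlap between a centered $h\times w$ region and each cell of the $3\times 3$ window and normalized to sum to 1 (as in Fig.~\ref{fig:mask}), is:
\begin{verbatim}
import torch

def shape_weights(h, w, ksize=3):
    # overlap of a centered h-by-w rectangle with each unit cell
    y0, y1 = (ksize - h) / 2, (ksize + h) / 2
    x0, x1 = (ksize - w) / 2, (ksize + w) / 2
    a = torch.zeros(ksize, ksize)
    for i in range(ksize):
        for j in range(ksize):
            oy = max(0.0, min(i + 1, y1) - max(i, y0))
            ox = max(0.0, min(j + 1, x1) - max(j, x0))
            a[i, j] = oy * ox
    return a / a.sum()  # normalize so the weights sum to 1

shape_weights(2, 2)  # corners 1/16, edges 1/8, center 1/4
\end{verbatim}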
The optimization problem can be solved by applying the same algorithm described in subsection~\ref{subsec:mspool} to the obtained 5d tensor.
\subsection{Theoretical background}\label{subsec:mutalinfo}
To demonstrate that the linguistic explanation $s$ for region ${\mathbf R}$ becomes strongly dependent on the output $c$ of the target classifier by minimizing the loss function proposed above,
we reveal the relationship between the loss function and conditional mutual information.
Conditional mutual information of the distribution parameterized by our model is denoted by ${\rm MI}(c, s| {\mathbf x}, {\mathbf R})$ and can be bounded as
\begin{eqnarray}\label{eq:mi}
{\rm MI}(c, s| {\mathbf x}, {\mathbf R}) &=& \mathbb{E}_{p(c, s, {\mathbf x}, {\mathbf R})}\left[ \rm{log}\ \frac{{\hat p}(c, s | {\mathbf x}, {\mathbf R})}{{\hat p}(c| {{\mathbf x}, \mathbf R}){\hat p}(s | {\mathbf x}, {\mathbf R})}\right] \nonumber \\
&\ge& \mathbb{E}_{p(c, s, {\mathbf x}, {\mathbf R})}\left[{\rm log}\ {\hat p}(c | s, {\mathbf x}, {\mathbf R})\right]\label{eq:a}
\end{eqnarray}
(\ref{eq:a}) is derived from ${\rm H}(c|{\mathbf x}, {\mathbf R})\ge 0$ and ${\rm KL}[p \,\|\, q]\ge 0$ for any distributions $p, q$, where ${\rm H}(\cdot)$ and ${\rm KL}[\cdot\|\cdot]$ indicate the entropy and KL divergence, respectively. In our case, we parameterize the joint distribution as
\begin{table}[t!]
\small
\centering
\begin{tabular}{|c||c|c|c|c|} \hline
dataset & video & class & attribute & bbox \\ \hline\hline
Olympic & 783 & 16 & 39 & 949 \\ \hline
UCF101-24 & 3204 & 24 & 40 & 6128 \\ \hline
\end{tabular}
\vspace{-0.3cm}
\caption{Statistics of the datasets used in the experiments}
\vspace{-0.9cm}
\label{tb:statistics}
\end{table}
\begin{eqnarray}\label{eq:score}
{\hat p}(c, s | {\mathbf x}, {\mathbf R}) &=& \frac{\rm{exp}({\mathbf m}_{cs}[{\mathbf R}])}{\sum_{c'\in C, s'\in S}\rm{exp}({\mathbf m}_{c's'}[{\mathbf R}])}
\end{eqnarray}
(\ref{eq:a}) is further bounded as
\begin{flalign}
(\ref{eq:a})
\ge& \mathbb{E}_{p(c, s, {\mathbf x})}\left[ \mathop{\rm min}\limits_{\mathbf R}\ \rm{log}\frac{\rm{exp}({\mathbf m}_{cs}[{\mathbf R}])}
{\sum_{c's'}\rm{exp}({\mathbf m}_{c's'}[{\mathbf R}])} \right] \label{eq:c}&&\\
\ge& \mathbb{E}_{p(c_{\rm pos}, s, {\mathbf x})}\!\left[\! \sum_{c_{\rm neg}\in {\mathcal C}\backslash c_{\rm pos}} \!\!\!\!\!\!\mathop{\rm min}\limits_{\mathbf R} \sigma(\Delta{\mathbf m}_{c_{\rm pos}, c_{\rm neg}, s}[{\mathbf R}]) \! \right] \label{eq:d} &&\\
=& \mathbb{E}_{p(c_{\rm pos}, s, {\mathbf x})}\left[ \sum_{c_{\rm neg}\in {\mathcal C}\backslash c_{\rm pos}} {\hat y_{c_{\rm pos}, c_{\rm neg}, s}} \right] &&
\end{flalign}
$\sigma(\cdot)$ is the sigmoid function and (\ref{eq:c}) is derived from the fact that $\mathbb{E}_{a}[{\rm f}(a)] \ge \mathop{\rm min}\limits_{a} {\rm f}(a)$.
To obtain the bound (\ref{eq:d}), we utilize the relationship $(1+\sum_{i} a_i) \le \prod_{i} (1 + a_i)$~\cite{aueb2016one}.
Finally, by decomposing $p(s, c| {\mathbf x}) = p(s | c, {\mathbf x})p(c | {\mathbf x})$, setting $p(c | {\mathbf x})$ to the target classifier and
$p(s | c, {\mathbf x}) = [s \in {\mathcal S}({\mathbf x})] / |{\mathcal S}({\mathbf x})|$, and inverting the sign, (\ref{eq:loss}) is obtained. The minimization of the loss function can thus be justified as the maximization of a lower bound of the conditional mutual information.
It may be beneficial to investigate the relationship with other methods proposed for the explanation task, such as~\cite{chen2018learning}, based on mutual information maximization.
\section{Experiment}\label{sec:experiment}
We describe experiments to demonstrate the effectiveness of the proposed method, in particular the explanation module, which is the main proposal for this task. After the experimental settings, including the datasets and metrics used for quantitative evaluation, are described
in subsection~\ref{subsec:setting}, the obtained results are discussed in subsection~\ref{subec:neg}.
\subsection{Setting}\label{subsec:setting}
Given an input video ${\mathbf x}$ and a pair of positive/negative classes $c_{\rm pos}, c_{\rm neg}$, the explanation module outputs several attribute/region pairs $\{(s_i, {\mathbf R}_i)\}_{i=1}^{k}$. We evaluate each output pair separately.
\begin{figure}[t!]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_olympic_fc1.pdf}
\end{minipage}\\ \vspace{-0.1cm}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_ucf_fc1.pdf}
\end{minipage}
\vspace{-0.3cm}
\caption{The negative class accuracy on the Olympic Sports dataset (above) and the UCF101-24 dataset (below). The y-axis depicts the mean accuracy and the x-axis denotes the number of top-scoring negative classes used for averaging.}
\label{fig:negacc}
\vspace{-0.70cm}
\end{figure}
\textbf{Dataset}:
Two existing video datasets for action recognition were used in the experiments: Olympic Sports~\cite{niebles2010modeling} and UCF101-24~\cite{THUMOS14}.
The Olympic Sports dataset consists of 16 categories of sports actions. UCF101-24 is a subset of the UCF101 dataset~\cite{soomro2012ucf101}, containing 24 of the 101 general action classes. We utilized the original train/test split provided by the datasets.
We additionally annotated these datasets using Amazon Mechanical Turk (AMT) to make the evaluation possible as follows:
(a) Assign a set of attributes to all videos in the dataset,
(b) Assign a bounding box of assigned attributes for samples in the test split.
Statistics of the datasets are shown in Table~\ref{tb:statistics}, and a few examples of the annotations are provided in the supplementary material.
\textbf{Metric}:
As stated earlier, the method for this task is expected to satisfy the following two requirements:
\begin{enumerate}[noitemsep,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item[(1)] Visual explanation is the region which retains high positiveness/negativeness on the model prediction for specific positive/negative classes,
\item[(2)] Linguistic explanation is compatible with its visual counterpart,
\end{enumerate}
and methods are evaluated based on them.
To make the quantitative evaluation possible, we propose two metrics, both of which are based on the accuracy.
As for (1), we need to evaluate whether the obtained region truly explains why
the target classifier predicts a sample as $c_{\rm pos}$ rather than $c_{\rm neg}$. More specifically, we would like to confirm whether the region explains the specific negative class $c_{\rm neg}$, not other negatives ${\hat c_{\rm neg}} \in {\mathcal C}\backslash \{c_{\rm pos}, c_{\rm neg} \}$. To evaluate this quantitatively, we investigate how the output of the target classifier corresponding to $c_{\rm neg}$ changes when region ${\mathbf R}$ is removed from the input.
A mask ${\mathbf z} \in \{0, 1\}^{W'\times H' \times T'\times 3}$ is prepared, which takes the value 0 if the corresponding pixel location is contained in ${\mathbf R}$ restored to the input space, and 1 otherwise. Applying the masked sample to the target classifier again, the difference from the original output
$f({\mathbf h}({\mathbf x} \odot {\mathbf z})) - f({\mathbf h}({\mathbf x}))$ is calculated for all negative classes, where $\odot$ denotes the Hadamard product.
We then calculate the accuracy: we pick the largest values out of the obtained difference scores and examine whether $c_{\rm neg}$ exists within this set. We refer to this metric as negative class accuracy.
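A sketch of this metric for a single sample is given below; the names are illustrative, and \texttt{model} is assumed to map a clip to a vector of class scores:
\begin{verbatim}
import torch

def negative_class_hit(model, x, z, c_pos, c_neg, n=3):
    # z: binary mask, 0 inside the region R and 1 elsewhere
    with torch.no_grad():
        diff = model(x * z) - model(x)  # change of each class score
    diff[c_pos] = float('-inf')         # exclude the positive class
    return int(c_neg in diff.topk(n).indices)
\end{verbatim}
The negative class accuracy is the mean of this hit over the test set.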
As for (2), we assess how the region ${\mathbf R}$ makes the concept $s$ identifiable by humans for each output pair $(s, {\mathbf R})$. To quantify this, we exploit the bounding boxes assigned for each attribute in the test set, and compute the accuracy as follows.
IoU (intersection over union) is calculated between the given ${\mathbf R}$ and all bounding boxes ${\mathbf R}'$, each of which corresponds to an attribute $s'$. We measure the accuracy by selecting the attribute $s'$ with the largest IoU score and checking its consistency with $s$, the counterpart of ${\mathbf R}$. This metric is referred to as concept accuracy in the following parts.
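A sketch of this computation for one output pair, assuming axis-aligned boxes $(x_1, y_1, x_2, y_2)$ and a list of annotated \texttt{(box, attribute)} pairs per sample, is:
\begin{verbatim}
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def concept_hit(region, attr, annotations):
    # annotations: list of (bounding box, attribute) pairs
    best_attr = max(annotations, key=lambda p: iou(region, p[0]))[1]
    return int(best_attr == attr)
\end{verbatim}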
\begin{table}[t!]
\small
\centering
\begin{tabular}{|c||c|c|} \hline
method & Olympic & UCF101-24 \\ \hline\hline
baseline & 0.76 & 0.68 \\ \hline
proposed & 0.89 & 0.88 \\ \hline
\end{tabular}
\vspace{-0.3cm}
\caption{The ratio of samples for which the probability $p(c_{\rm pos}|{\mathbf x})$ of the positive class $c_{\rm pos}$ decreases after the region is masked out.}
\label{tab:posacc}
\vspace{-0.1cm}
\end{table}
\begin{table}[t!]
\small
\centering
\label{multiprogram}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\ & \multicolumn{3}{|c|}{Olympic Sports}& \multicolumn{3}{|c|}{UCF101-24}\\
\hline
method & top1 & top3 & top5 & top1 & top3 & top5 \\
\hline
\hline
baseline & 0.02 & 0.07 & 0.12 & 0.02 & 0.07 & 0.13 \\
\hline
proposed & 0.13 & 0.38 & 0.59 & 0.14 & 0.41 & 0.65 \\
\hline
\end{tabular}
\vspace{-0.3cm}
\caption{The concept accuracy on the Olympic Sports dataset and the UCF101-24 dataset.}
\label{tb:cocept}
\vspace{-0.5cm}
\end{table}
\textbf{Detailed settings}:
In the classification module, the output of the convolutional layers was used as ${\mathbf h}(\cdot)$. The fully connected (fc) layers following the convolutional layers were considered as $f(\cdot)$.
Specifically, we dealt with C3D-resnet~\cite{tran2017convnet, hara3dcnns} as the target classifier in the experiments, which is based on the spatio-temporal convolution~\cite{tran2015learning} with
residual architecture~\cite{he2016deep}.
Our network for classification consists of nine convolutional layers and one fully connected layer, accepting an input of size $112\times 112\times 16$. ReLU activation was applied to all layers except the final fc layer.
We selected the outputs of the last and the 2nd to last convolutional layers to construct ${\mathbf h}(\cdot)$.
Moreover, we replaced 3d max pooling with 2d max pooling to guarantee $T = T'$ for all activations.
The target classifier was trained with SGD, where the learning rate, momentum, weight decay, and batch size were set to 0.1, 0.9, 1e-3, and 64, respectively. To train the model on Olympic Sports, we pre-trained it on UCF101-24.
For the training of the explanation module, all the settings (e.g., learning rate) were the same as in the training of the classification module, except that the batch size was 30. To reduce the number of parameters, we decomposed the weight of the convolutional layer ${\mathbf w}_{cs}$ corresponding to the pair $(c, s)$ as ${\mathbf w}_{cs}={\mathbf w}_{c}\odot {\mathbf w}_{s} +{\mathbf w}_{c}+{\mathbf w}_{s}$,
where ${\mathbf w}_{s}$ and ${\mathbf w}_{c}$ are the parameters shared by the same attribute and class, respectively.
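A sketch of this decomposition with illustrative shapes (treating each ${\mathbf w}_{cs}$ as a $D$-dimensional vector) is:
\begin{verbatim}
import torch

C, S, D = 24, 40, 512  # classes, attributes, dim (illustrative)
w_c = torch.nn.Parameter(torch.randn(C, D))
w_s = torch.nn.Parameter(torch.randn(S, D))

def w_cs(c, s):
    # w_cs = w_c * w_s + w_c + w_s: (C + S) * D parameters
    # instead of C * S * D for independent per-pair weights
    return w_c[c] * w_s[s] + w_c[c] + w_s[s]
\end{verbatim}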
\subsection{Identifiability of negative class}\label{subec:neg}
To assess whether the obtained region ${\mathbf R}$ is truly an explanation of the reason for a specific negative class, we compared the negative class accuracy with a baseline.
Because there is no previous work for this task, we employed a simple baseline. For the UCF101-24 dataset, we exploited the bounding box for the action detection provided in~\cite{singh2016online}, which provides an upper limit of the performance for the action detection task.
As bounding boxes are not provided for the Olympic Sports dataset, we cropped the center ($32 \times 32$) from all frames. This forms a simple but strong baseline because the instance containing category information usually appears in the center of the frame in this dataset.
\begin{table}[t!]
\small
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\ & \multicolumn{3}{|c|}{Olympic Sports}& \multicolumn{3}{|c|}{UCF101-24}\\
\hline
method & fc1 & fc2 & fc3 & fc1 & fc2 & fc3 \\
\hline
\hline
baseline & 0.45 & 0.44 & 0.46 & 0.45 & 0.43 & 0.44 \\
\hline
proposed & 0.62 & 0.64 & 0.59 & 0.64 & 0.62 & 0.61 \\
\hline
\end{tabular}
\vspace{-0.3cm}
\caption{The top3 negative class accuracy on the Olympic Sports and UCF101-24 datasets, averaged over the 3 negative classes whose prediction probability is the largest, while varying the number of fully-connected layers.}
\label{tb:fc}
\vspace{-0.5cm}
\end{table}
The results of the negative class accuracy for the Olympic Sports and the UCF101-24 datasets are shown in Fig.~\ref{fig:negacc}. The accuracy is averaged over the negative classes having the largest $p(c | {\mathbf x})$, and the x-axis of the figures depicts the number of negative classes used.
Our method performed consistently better than the baseline on both datasets, demonstrating that the identifiability of negativeness generalizes to unseen samples. The gap between the accuracy of our method and that of the baseline decreases when negative classes having small $p(c | {\mathbf x})$ are included. We conjecture the reason is that such an {\it easy negative} class, which is highly dissimilar to the positive class, does not share common patterns that identify it. For example, for the positive class `pole vault', the negative class `high jump' is more similar than `bowling' for the classifier, and detecting the negativeness of such a {\it hard negative} is relatively easy (e.g., from the region of the `pole'), although it is difficult for an {\it easy negative} class.
We believe the low accuracy for such an {\it easy negative} class does not have a significant impact because in real applications,
we may be interested in the reason for the negativeness of the {\it hard negative} class, which is difficult to discriminate.
In addition, we also report in Table~\ref{tab:posacc} the ratio of samples for which the probability $p(c_{\rm pos}|{\mathbf x})$ of the positive class $c_{\rm pos}$ decreases after the region is masked out. From these results, we claim that our method better identifies the regions responsible for the negativeness/positiveness of the negative/positive classes.
\subsection{Identifiability of concept}
To evaluate whether the obtained region makes the concept (linguistic explanation) identifiable by humans, we measured the concept accuracy described above for the cases where the category prediction is correct. The same baseline as in subsection~\ref{subec:neg} was applied for region selection, and the attribute was randomly selected.
Results are shown in Table~\ref{tb:cocept}. In both datasets, our method is consistently better than the baseline. Finding the region by which a specific concept can be identified is a significantly challenging task, where methods need to identify small objects or atomic actions~\cite{gu2017ava}. Although it is conceivable that there is still room for improvement on this metric, we believe that our simple method can serve as a base for future work regarding this novel task.
\subsection{Influence of the complexity of classifier}
To investigate the influence of the complexity of the classifier module on the generalization ability of the explanation module, we measured the negative class accuracy by changing the number of fc-layers of $f(c | {\mathbf x})$ from 1 to 3. The other settings remain the same as those in subsection~\ref{subec:neg}.
The top3 accuracy averaged over 3 negative classes is shown in Table~\ref{tb:fc} (other results are shown in the supplementary material). In both datasets, the gap between the baseline and the proposed method is consistent regardless of the number of fc-layers, demonstrating the robustness of the proposed method to the complexity of the classifiers to be explained.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{./img/smp.eps}
\vspace{-0.35cm}
\caption{Example output from our system for the samples of the UCF101-24 dataset.}
\label{fig:examples}
\vspace{-0.6cm}
\end{figure}
\subsection{Output examples}
We show a few output examples in Fig.~\ref{fig:examples} for sample videos of the UCF101-24 dataset. As observed in the figure, our model appropriately localizes the areas compatible with the linguistic explanations. Other examples are shown in the supplementary material.
\section{Conclusion}\label{sec:conclusion}
In this work, we particularly focused on building a model that not only categorizes a sample, but also generates an explanation consisting of a linguistic attribute and a region. To this end, we proposed a novel algorithm to predict {\it counterfactuality}, while identifying the important region for the linguistic explanation. Furthermore, we demonstrated the effectiveness of the approach on two existing datasets extended in this work.
\section{Acknowledgement}
This work was partially supported by JST CREST Grant Number JPMJCR1403, Japan, and partially supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) as "Seminal Issue on Post-K Computer." The authors would like to thank Kosuke Arase, Mikihiro Tanaka, and Yusuke Mukuta for helpful discussions.
\bibliographystyle{ieee}
\section{Algorithm for finding maximum subpath in the 3D tensor}
We show below the algorithm for finding the maximum subpath in a 3D tensor, based on the dynamic programming procedure proposed by~\cite{tran2014video}.
\begin{algorithm}[h!]
\caption{Algorithm for finding maximum subpath in the 3D tensor~\cite{tran2014video}}
\label{alg1}
\begin{framed}
\begin{algorithmic}
\REQUIRE
\STATE{$M(u,t):$ the local discriminative scores;}
\COMMENT{$u = (i,j):$ the 2D index of spatial coordinate}
\COMMENT{$t:$ the frame in the video}
\ENSURE
\STATE{$S(u,t):$ the accumulated scores of the best path leads to $(u,t)$;}
\STATE{$P(u,t):$ the best path record for tracing back;}
\STATE{$S^* :$ the accumulated score of the best path;}
\STATE{$l^* :$ the ending location of the best path;}
\STATE{\\}
\STATE{$S(u,1) = M(u,1), \forall u$;}
\STATE{$P(u,t) = null, \forall (u,t);$}
\STATE{$S^* = -\infty;$}
\STATE{$l^* = null;$}\\
\FOR{$i \leftarrow 2$ \bf{to} $n$}
\FORALL{$u \in [1..w] \times [1..h]$}
\STATE{$v_0 \leftarrow \argmax_{v \in N(u)} S(v,i-1);$}
\IF{$S(v_0,i-1)>0$}
\STATE{$S(u,i) \leftarrow S(v_0,i-1) + M(u,i);$}
\STATE{$P(u,i) \leftarrow (v_0, i-1);$}
\ELSE
\STATE{$S(u,i) \leftarrow M(u,i);$}
\ENDIF
\IF{$S(u,i) > S^*$}
\STATE{$S^* \leftarrow S(u,i);$}
\STATE{$l^* \leftarrow (u,i);$}
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{framed}
\end{algorithm}
\newpage
\
\section{The influence of the complexity of classification model on the negative class accuracy}
\begin{figure*}[h!]
\centering
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_olympic_fc1_supp.pdf}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_ucf_fc1_supp.pdf}
\end{minipage}
\\
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_olympic_fc2_supp.pdf}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_ucf_fc2_supp.pdf}
\end{minipage}
\\
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_olympic_fc3_supp.pdf}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.9\linewidth]{./img/acc_ucf_fc3_supp.pdf}
\end{minipage}
\\
\vspace{1cm}
\caption{The negative class accuracy on the Olympic Sports dataset (left) and the UCF101-24 dataset (right). Each row corresponds to the number of fully-connected layers of the classification module. The y-axis indicates the mean accuracy and the x-axis denotes the number of top-scoring negative classes used for averaging.}
\label{fig:negacc_supp}
\end{figure*}
\newpage
\section{List of attributes}
\subsection{Olympic Sports dataset}
'Run', 'Slow run', 'Fast run', 'Indoor', 'outdoor', 'Ball', 'small ball', 'big ball', 'Jump', 'Small local Jump', 'Local jump up', 'Jump Forward', 'Track', 'Bend', 'StandUp', 'Lift something', 'Raise Arms', 'One Arm Open', 'Turn Around', 'Throw Up', 'Throw away', 'water', 'Down Motion in Air', 'Up Motion in Air', 'Up Down Motion Local', 'Somersault in Air', 'With Pole', 'Two hand holding pole', 'One hand holding pole', 'Spring Platform', 'Motion in the air', 'one arm swing', 'Crouch', 'Two Arms Open', 'Two Arms Swing overhead', 'Turn around with two arms open', 'Run in Air', 'Big Step', 'Open Arm Lift', 'With Pat'
\subsection{UCF101-24 dataset}
'Body Motion is Flipping', 'Body Motion is Walking', 'Body Motion is Running', 'Body Motion is Riding', 'Body Motion is Up down', 'Body Motion is Pulling', 'Body Motion is Lifting', 'Body Motion is Pushing', 'Body Motion is Diving', 'Body Motion is Jumping Up', 'Body Motion is Jumping Forward', 'Body Motion is Jumping Over Obstacle', 'Body Motion is Spinning', 'Body Motion is Climbing Up', 'Body Motion is Horizontal', 'Body Motion is Vertical Up', 'Body Motion is Vertical Down', 'Body Motion is Bending', 'Object is Ball Like', 'Object is Big Ball Like', 'Object is Stick Like', 'Object is Rope Like', 'Object is Sharp', 'Object is Circular', 'Object is Cylinderical', 'Object is Musical Instrument', 'Object is Portable Musical Instrument', 'Object is Animal', 'Object is Boat Like', 'Posture is Sitting', 'Posture is Sitting In Front Of Table Like Object', 'Posture is Standing', 'Posture is Lying', 'Posture is Handstand', 'Body Parts Used is Head', 'Body Parts Used is Hands', 'Body Parts Used is Arms', 'Body Parts Used is Legs', 'Body Parts Used is Foot'
\clearpage
\newpage
\section{Output Examples}
\input{./img_supp_counter/readimg}
\newpage
\section{Dataset collection}
\begin{figure*}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[clip, width=0.8\linewidth, height=0.85\textheight]{./img/beaverdam.pdf}
\end{tabular}
\end{center}
\caption{Screenshot of the instructions for collecting bounding box annotations on AMT.}
\label{fig:exp}
\end{figure*}
\newpage
\section{Dataset Examples}
\input{./img_supp_counter/readdataset}
\clearpage
\section{Introduction}
A knowledge graph (KG) is a graph-structured database~\cite{miller1995wordnet}, in which nodes represent entities (e.g., \emph{Hedgehog in the Fog}, \emph{Sergei Kozlov}) and edges reflect the relations between entities (e.g., \emph{Hedgehog in the Fog} - \emph{written\_by} - \emph{Sergei Kozlov}). Users can get crisp answers by querying KGs with natural language questions, and this process is called Question Answering over Knowledge Graph (KGQA). Recently, the consumer market has witnessed a widespread application of this technique in a variety of virtual assistants, such as Apple Siri, Google Home, Amazon Alexa, and Microsoft Cortana.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.30\textheight]{figure1.pdf}\vspace{-5pt}
\caption{(Q1) Example question involving multi-hop reasoning, and (Q2) Example question with constraints}\label{fig:complex_questions}
\end{figure}
Early works~\cite{bordes2015large, golub2016character} on KGQA mainly focus on simple questions, such as \textit{where is the hometown of Obama?} This question involves only one relation path (e.g.,~\textit{hometown} or~\textit{birth-place}) in the KG and is relatively easy to solve. However, many questions in daily QA sessions are often more complex, manifested by multi-hop reasoning or multiple constraints. Therefore, there has recently been a flurry of interest in complex KGQA~\cite{shi2021transfernet,yadati2021knowledge}.
There are two types of complexity when dealing with complex KGQA, i.e., multi-hop questions and questions with multiple constraints (see Figure~\ref{fig:complex_questions} for examples). Question 1 in Figure~\ref{fig:complex_questions} is a typical multi-hop question, whose answer is related to \texttt{Yuriy Norshteyn} through two-hop relations: \texttt{directed.by.reverse} and \texttt{written.by}. In response to this challenge, \citet{Kun2019Enhancing} enhances the traditional Key-Value Memory Neural Networks (KV-MemNNs)~\cite{Miller2016Key} for multi-hop question answering. They design a query updating strategy to decompose the question and predict the relevant relation paths at each hop. TransferNet~\cite{shi2021transfernet} is an effective and transparent model for multi-hop questions; it attends to different parts of the question at each hop and computes activated scores for relation path prediction. Despite the promising results, it is still challenging for these models to predict relation paths accurately at each hop, and they thus suffer from error propagation over multi-hop reasoning. Similarly, Question 2 in Figure~\ref{fig:complex_questions} is an example of a question with constraints. Apparently, there is a single relation path between the topic entity \textit{Cher} and the answer \textit{Chaz Bono}, but the constraint~\texttt{person.gender=Male} must be satisfied. To handle this type of complex question, many works built on the idea of query ranking have been proposed~\cite{yih2015semantic,lan2020query,chen2020formal}, which rank candidate queries by the similarity scores between the question and candidate queries. Specifically, these ranking methods use query graphs to represent queries, and explore various strategies to generate candidate query graphs for ranking. Typical strategies assume the answers are within $n$ hops of the topic entity, and enumerate all the relation paths within $n$ hops to generate candidate query graphs. Although this candidate generation strategy can yield all valid query graphs from a topic entity, it has two main limitations: (1) The generated candidate query graphs are very noisy. As shown in Figure~\ref{fig:query_graph}(a), a candidate query graph with an incorrect structure is presented; this candidate query graph is generated by the traditional enumeration strategy but lacks the constraint on \texttt{person.gender}, which can incur considerable error in query graph ranking (see Table~\ref{ref:with ss}). For the example in Figure~\ref{fig:query_graph}(a), both \texttt{parent} and \texttt{birth.place} are relevant to the question; even though this candidate query graph has an incorrect semantic structure (to be defined in Sec.~\ref{section:SS}), it is still challenging for ranking models to demote it below the correct query graph -- the one in Figure~\ref{fig:query_graph}(b). (2) When building a ranking model to rank query graphs, recent works~\cite{lan2020query} treat the candidate query graph and question as a sequence of words and leverage BERT~\cite{devlin2018bert} to extract a feature representation from its pooled output. However, this pooled output is usually not a good representation of the semantics of the input sequence~\cite{khodeir2021bi}. Therefore, improved ranking models are to be developed.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.30\textheight]{figure2.pdf}\vspace{-10pt}
\caption{Example two-hop candidate query graphs for a question with constraints. (a) A candidate query graph with an incorrect semantic structure, (b) A candidate query graph with a correct semantic structure.}\label{fig:query_graph} \vspace{-10pt}
\end{figure}
To mitigate the aforementioned issues, this paper proposes \emph{SSKGQA}, a Semantic Structure based framework for complex KGQA. We represent both the multi-hop questions and questions with constraints in a way similar to query graphs\footnote{For example, one candidate query graph for Question~1 in Figure~\ref{fig:complex_questions} can be written as $\textit{Yuriy Norshteyn} \to directed.by.reverse \to y \to written.by \to x$.}
and rank the generated candidate query graphs by the similarity scores between the question and candidate query graphs. Inspired by~\citet{chen2020formal}, we observe that if the structure of a question is known in advance, the noise in candidate query graphs can be reduced significantly by filtering. Thus, SSKGQA first predicts the semantic structure of a natural language question, which is then used to filter out noisy candidate query graphs (those with incorrect structures).
Specifically, we define six semantic structures based on the question topology that is introduced by~\citet{srivastava-etal-2021-complex}. With the defined semantic structures, our SSKGQA processes a natural language question in two stages. In the first stage, we develop a novel Structure-BERT to predict the semantic structure of a natural language question, which is then used to filter out noisy candidate query graphs and produce a set of query graph candidates that match the predicted structure. In the second stage, we rank the remaining candidate query graphs of a question by a BERT-based ranking model and identify the top-1 candidate query graph, which is then issued to retrieve the final answer from a KG. Our experiments demonstrate that this semantic structure based query graph prediction strategy is very effective and enables SSKGQA to outperform state-of-the-art methods.
Our main contributions are summarized as follows. (1) We propose SSKGQA, a semantic structure based method to predict query graphs from natural language questions. SSKGQA can handle both multi-hop questions and questions with constraints and is a unified framework for complex KGQA. (2) We develop a novel Structure-BERT to predict the semantic structure of each question, and a BERT-based ranking model with a triplet loss to identify the top-1 query graph candidate. (3) Compared to state-of-the-arts methods, our SSKGQA demonstrates superior performance on two popular complex KGQA benchmarks.
\section{Related Work}
\subsection{Multi-hop Question Answering}
Current works on multi-hop question answering mainly focus on how to retrieve answers by calculating the relation paths step by step. In general, the right answer can be retrieved if the relation paths are identified correctly at each hop. \citet{Kun2019Enhancing} enhances the traditional Key-Value Memory Neural Networks (KV-MemNNs)~\cite{Miller2016Key} and designs a query updating strategy to decompose the question and predict the relevant relation paths at each hop. TransferNet~\cite{shi2021transfernet} also calculates the relation path scores based on an updated question at each hop, but leverages the attention mechanism to update question representations over multiple hops.
More recently, \citet{cai2021deep} introduces the dual process theory to predict the relation paths at each hop. Although these methods achieve promising results, they suffer from error propagation when predicting the relation paths step by step. To mitigate this issue, SSKGQA identifies the top-1 query graph by directly calculating the similarity scores between question and candidate query graphs (or similarly relation paths).
On the other hand, \citet{Sun2019PullNet,Sun2018Open} incorporate external corpora to enhance the performance of KGQA. They focus on how to get the answers by constructing a subgraph for each question. A challenge of this method is that it is difficult to construct a subgraph around the topic entity, because relevant entities must be identified from the external corpus, and this process is error-prone. \citet{Apoorv2020Improving} predicts the answers by utilizing a KG embedding model. However, complex questions with long relation paths can reduce the learnability of KG embeddings significantly. Our SSKGQA does not need an external corpus to improve prediction accuracy and can solve complex multi-hop questions by semantic structure based ranking.
\subsection{Complex Questions with Constraints}
For questions with constraints, a line of works focuses on how to reach the answers by generating query graphs. \citet{yih2015semantic} enumerates all possible entities and relation paths that are connected to a topic entity to generate candidate query graphs, and uses a CNN-based ranking model to identify the query graph. Following a candidate query graph generation similar to that of \cite{yih2015semantic}, \citet{maheshwari2019learning} propose a novel query graph ranking method based on self-attention. \citet{qin2021improving} introduces a query graph generation method based on their proposed relation subgraphs. However, these methods largely ignore the noise introduced when generating the candidate query graphs, which undermines the predictive performance during query graph ranking. To mitigate this issue, SSKGQA first predicts the semantic structure of a question, which is then used to reduce the noise in candidate query graphs.
\subsection{Query Graph Ranking}
Current research on KGQA mainly focuses on how to generate the candidate query graphs, and there are only a few works exploring how to rank the candidate query graphs. \citet{lan2020query} concatenates the question and a candidate query graph into a single sequence, and leverages BERT~\cite{devlin2018bert} to process the whole sequence for ranking. However, \citet{reimers2019sentence} show that this strategy is inefficient, as it can incur massive computation due to the combinatorial nature of concatenating questions and candidate query graphs, leading to duplicated calculation. \citet{chen2020formal} explore GRUs to encode the question and query graph information, and utilize a hinge loss to learn a ranking model. However, GRUs can only learn limited interactions among words in a sentence, while global interactions among words have proven to be critical for text representation in various NLP applications~\cite{khan2020mmft}. To solve the aforementioned issues, SSKGQA exploits separate BERT models to process questions and query graphs, respectively; it reuses the extracted features to avoid duplicated calculation and leverages a triplet loss to train a BERT-based ranking model.
\section{Preliminaries}
\subsection{Query Graph}
\label{query graph}
A query graph is a graph representation of a natural language question~\cite{yih2015semantic}; see Figure~\ref{fig:query_graph} for an example. A query graph usually contains four types of components: (1) a grounded entity, which is an entity in the KG and is often the topic entity of a question, e.g., \texttt{Queen Isabella} in Figure~\ref{fig:query_graph}; (2) an existential variable, which is an ungrounded entity, e.g., \texttt{Isabel de Portugal} in Figure~\ref{fig:query_graph}; (3) a lambda variable, which is the answer to the question but usually an ungrounded entity, e.g., \texttt{Lisboa} in Figure~\ref{fig:query_graph}; (4) some constraints on a set of entities, e.g., \texttt{gender} in Figure~\ref{fig:query_graph}. A question can be correctly answered if its query graph is built correctly, and the right answer can then be retrieved by issuing the query graph (represented by a SPARQL~\cite{perez2009semantics} command) to the KG.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textheight]{figure3.pdf}\vspace{-10pt}
\caption{Six semantic structures defined in the paper. There are three semantic structures for questions in MetaQA: \textit{(SS1, SS2, SS3)}, and five semantic structures for questions in WSP: \textit{(SS1, SS2, SS4, SS5, SS6)}.}\label{fig:SS}
\end{figure}
\subsection{Semantic Structures}~\label{section:SS}
As observed by~\cite{chen2020formal}, if the structure of a question is known in advance, the noise in candidate query graphs can be reduced significantly by filtering. Thus, in this paper we define six semantic structures based on the question topology introduced by~\citet{srivastava-etal-2021-complex}. These six semantic structures are listed in Figure~\ref{fig:SS}, and example questions for each semantic structure can be found in Figure~\ref{fig:excample SS} in the Appendix. As we can see, a semantic structure is a graph that is an \emph{abstraction} of the query graphs of the same pattern. Typically, a semantic structure consists of four components $\{E, r, v, C\}$, where $E$ denotes an entity, $r$ refers to all types of relations, $v$ is an existential variable, and $C$ denotes a constraint.
To identify the semantic structure of a question, we can train a classifier for prediction. But first we need to annotate each training question with its semantic structure. Fortunately, this annotation can be achieved readily for questions in MetaQA and WebQuestionsSP (WSP) since these questions are either partitioned by number of hops or accompanied by the SPARQL commands. Details on question annotation are provided in Sec.~\ref{Labeled SS} in Appendix. By annotating the questions in MetaQA and WSP, we found that these six semantic structures can cover 100\% of questions in MetaQA, and 77.02\% of questions in the test set of WSP as shown in Table~\ref{tab:coverage}. It is challenging to design additional semantic structures to cover 100\% of questions in WSP because there are some unusual operators in WSP, such as \texttt{Or} and \texttt{<=}, which are difficult to map to a common semantic structure. Even though there is only a 77.02\% coverage on the WSP test questions, our experiments show that SSKGQA already outperforms state-of-the-art methods on WSP. As a future work, we plan to explore new techniques to cover the remaining 22.98\% of questions in WSP.
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.3}
\begin{tabular} {l|ccc|c}
\hline
\hline
\multirow{2}*{Dataset}& \multicolumn{3}{|c|}{MetaQA} & \multirow{2}*{WSP}\\
& Hop-1 &Hop-2 & Hop-3 & \\
\hline
Train&100 & 100& 100 &91.37 \\
Test&100 &100 & 100&77.02 \\
Dev& 100& 100&100& n/a\\
\hline
\hline
\end{tabular}
\caption{Semantic structure coverage (\%) for questions in the training, test and development sets of MetaQA and WSP.}\label{tab:coverage}
\end{table}
\section{The Proposed Method}
We first provide an overview of SSKGQA, and then discuss its main components: (1) Structure-BERT and (2) query graph ranking in details.
\begin{figure}[t]
\centering
\includegraphics[width=0.30\textheight]{framework.pdf}\vspace{-10pt}
\caption{Overview of SSKGQA. A subgraph related to \texttt{Yuriy Norshteyn} from a KG is provided for illustration.}\label{fig:framework}
\end{figure}
\subsection{Overview}
The overview of our proposed SSKGQA is depicted in Figure~\ref{fig:framework}. Given a question $q$, following previous works~\cite{Apoorv2020Improving,chen2020formal,cai2021deep} we assume the topic entity of $q$ has been obtained by preprocessing. Then the answer to $q$ is generated by the following steps. First, the semantic structure of $q$ is predicted by a novel Structure-BERT classifier. For the example in Figure~\ref{fig:framework}, $q$ is a 2-hop question and the classifier predicts its semantic structure as \textit{SS2}. Second, we retrieve all the candidate query graphs (CQG) of $q$ by enumeration\footnote{For clarity, only 1-hop and 2-hop candidate query graphs are considered in this example.}, and use the predicted semantic structure \textit{SS2} as the constraint to filter out noisy candidate query graphs and keep the candidate query graphs with correct structure (CQG-CS). Afterwards, a BERT-based ranking model is used to score each candidate query graph in CQG-CS, and the top-1 highest scored candidate is selected as the query graph $g$ for question $q$. Finally, the selected query graph is issued to the KG to retrieve the answer \texttt{Sergei Kozlov}.
\subsection{Structure-BERT}
Given a question $q$, we first need to predict its semantic structure, which is a multi-class classification problem that classifies $q$ into one of the six semantic structures defined in Figure~\ref{fig:SS}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.30\textheight]{figure4.pdf}\vspace{-10pt}
\caption{Structure-BERT: given a question and its topic entity, the model predicts the semantic structure of the question.}\label{fig:Structure-BERT}
\end{figure}
Figure~\ref{fig:Structure-BERT} depicts the architecture of Structure-BERT. The input to Structure-BERT is question $q$ and its topic entity $te$. The output of Structure-BERT is a probability distribution over six semantic structures, i.e., $p(y|q,\theta)$, where $\theta$ denotes the model parameters of Structure-BERT. Structure-BERT contains three sub-modules. \textbf{Question Encoder} encodes question $q$ by a BERT language model~\cite{devlin2018bert},
and the final hidden state corresponding to token \texttt{[CLS]} is used as the question embedding $e^q$. \textbf{Entity Encoder} leverages a pre-trained knowledge embedding model, such as
TransE~\cite{bordes2013translating} or ComplEx~\cite{trouillon2016complex}, to extract the entity embedding $e^h$. Next, the extracted question embedding $e^q$ and entity embedding $e^h$ are fed to a \textbf{RotatE} module for information fusion. First, we utilize a pre-trained RotatE~\cite{Sun2019RotatE} model to calculate a ``tail'' embedding $e^t$ using $e^h$ and $e^q$, and then fuse the topic entity, question and ``tail'' embeddings by combining $e^h$, $e^q$ and $e^t$ into a latent representation $s$ by
\begin{align}
&\bar{e}^h=e^h_{lh}+ie^h_{hh}\nonumber\\
&\bar{e}^q=e^q_{lh}+ie^q_{hh}\nonumber\\
&\bar{e}^t=\bar{e}^h\!\times\!\bar{e}^q\!=\!e^h_{lh}e^q_{lh}\!-\!e^h_{hh}e^q_{hh}\!+\!i(e^h_{hh}e^q_{lh}\!+\!e^h_{lh}e^q_{hh})\nonumber\\
&s=e^h+e^q+e^t,
\end{align}
where $e_{lh}^*$ ($e_{hh}^*$) denotes the lower (higher) half of vector $e^*$. As such, $\bar{e}^*$ is a complex vector whose real (imaginary) part is from $e_{lh}^*$ ($e_{hh}^*$). Therefore, we can convert between $e^*$ and $\bar{e}^*$ readily.
Finally, the latent representation $s$ is fed to a fully connected layer, followed by a softmax for classification. The whole network is fully differentiable and can be optimized by minimizing the traditional cross-entropy loss.
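A minimal sketch of the fusion computation defined by the equations above (names are illustrative; the halves of each real embedding are treated as real and imaginary parts) is:
\begin{verbatim}
import torch

def rotate_fuse(e_h, e_q):
    # element-wise complex product of topic-entity and
    # question embeddings, RotatE style
    h_re, h_im = e_h.chunk(2)
    q_re, q_im = e_q.chunk(2)
    t_re = h_re * q_re - h_im * q_im
    t_im = h_im * q_re + h_re * q_im
    e_t = torch.cat([t_re, t_im])
    return e_h + e_q + e_t  # latent representation s
\end{verbatim}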
To train Structure-BERT, we need to annotate questions with their semantic structures to develop training, validation and test sets. As discussed in Sec.~\ref{section:SS}, this annotation can be conducted readily for questions in MetaQA and WSP. The details are provided in Sec.~\ref{Labeled SS} in Appendix.
\subsection{Query Graph Ranking}
Another important component of SSKGQA is a BERT-based ranking model for query graph ranking that can be trained by a triplet loss~\cite{facenet}. Specifically, the ranking model has three inputs: (1) question $q$=\{\texttt{[CLS]}, $w_1$, $w_2$, $\cdots$, $w_m$, \texttt{[SEP]}\}, where $w_i$ is the $i$-th word of $q$; (2) positive query graph\footnote{The positive query graph of a question can be found from the relation paths provided in MetaQA or the SPARQL command provided in WSP.} $g^p$=\{\texttt{[CLS]}, $u_1$, $u_2$, $\cdots$, $u_n$, \texttt{[SEP]}\}, where $u_i$ is the $i$-th unit of a query graph, split by spaces or special symbols such as ``.''. For example, given query graph (\texttt{Natalie Portman, film.actor.film, v}), $u_1$=\texttt{Natalie}, $u_2$=\texttt{Portman}, \dots, and $u_6$=\texttt{v}; and (3) negative query graph $g^n$, which is any candidate query graph in CQG-CS except the positive candidate of the question.
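For instance, a sketch of this unit splitting (illustrative; the special symbols are limited to `.' here) is:
\begin{verbatim}
import re

def graph_units(query_graph):
    units = [u for u in re.split(r'[ .]+', query_graph) if u]
    return ['[CLS]'] + units + ['[SEP]']

graph_units('Natalie Portman film.actor.film v')
# ['[CLS]', 'Natalie', 'Portman', 'film', 'actor',
#  'film', 'v', '[SEP]']
\end{verbatim}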
We utilize a BERT model $f(.)$ to extract the semantic representations $f(q)$, $f(g^p)$, $f(g^n)$ for $q, g^p, g^n$, respectively. This BERT model is built on a pre-trained BERT from Hugging Face\footnote{https://huggingface.co/bert-base-uncased}; we add one extra multi-head attention layer on top of the hidden state of the pre-trained BERT (See the ablation study in Sec.~\ref{sec:exp}). This BERT-based ranking model $f(.)$ is then optimized by minimizing the triplet loss~\cite{facenet}
\begin{align}
\max(\|f(q)\!\!-\!\!f(g^p)\|\!-\!\|f(q)\!\!-\!\!f(g^n)\|\!+\!\alpha,\!0),
\end{align}
where $\|.\|$ denotes the Euclidean distance and $\alpha$ is a margin parameter, which we set to 1 as default.
During training, the triplet loss reduces the distance between $f(q)$ and $f(g^p)$, while enlarging the distance between $f(q)$ and $f(g^n)$. At inference time, we calculate the similarity scores between question and its candidate query graphs from CQG-CS, and choose the top-1 highest scored candidate as query graph $g$ to retrieve final answer from KG.
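A sketch of the training objective and inference scoring, assuming the embeddings produced by $f(\cdot)$ are batched row vectors, is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def triplet_ranking_loss(f_q, f_gp, f_gn, margin=1.0):
    d_pos = F.pairwise_distance(f_q, f_gp)  # ||f(q) - f(g_pos)||
    d_neg = F.pairwise_distance(f_q, f_gn)  # ||f(q) - f(g_neg)||
    return F.relu(d_pos - d_neg + margin).mean()

def rank_candidates(f_q, f_cands):
    # f_q: (D,) question embedding; f_cands: (N, D) graph embeddings
    # inference: a smaller distance means a higher similarity score
    d = torch.cdist(f_q[None], f_cands)[0]
    return d.argmin()  # index of the top-1 candidate query graph
\end{verbatim}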
\section{Experiments}\label{sec:exp}
We evaluate the performance of SSKGQA on two popular KGQA benchmarks, MetaQA and WebQuestionsSP (WSP), and compare it with seven state-of-the-art methods. An ablation study is also conducted to understand the effectiveness of the different components of SSKGQA.
Our PyTorch source code is provided as supplementary material. All our experiments are performed on Nvidia RTX GPUs.
\subsection{Datasets}
\begin{itemize}
\item \textbf{MetaQA}~\cite{Zhang2017Variational} is a large scale KGQA dataset with more than 400k questions. It contains questions with 1, 2 or 3 hops. In our experiments, we use the vanilla version of the QA dataset. MetaQA also provides a KG from the movie domain with 43,233 entities, 9 relations and 134,741 triples.
\item \textbf{WebQuestionsSP (WSP)}~\cite{Yih2016The} is a small scale KGQA dataset with 5,119 questions, which are answerable through the Freebase KG. Since Freebase has more than 338,580,000 triples, for ease of experimentation we use a light version provided by~\citet{Apoorv2020Improving}. This smaller KG has 1.8 million entities and 5.7 million triples.
\end{itemize}
The statistics of the training, development and test sets of MetaQA and WSP are provided in Table~\ref{tab:statistics}. Compared to MetaQA, WSP is a relatively small QA dataset, even though its KG is much larger than that of MetaQA.
\begin{table}[h]
\centering
\renewcommand\arraystretch{1.3}
\begin{tabular} {lrrr}
\hline
\hline
Dataset& Train & Dev & Test \\
\hline
MetaQA-hop1&96,106 & 9,992& 9,947 \\
MetaQA-hop2& 118,980 & 14,872 & 14,872 \\
MetaQA-hop3& 114,196& 14,274&14,274\\
\hline
WSP& 3,304 & -- & 1,815\\
\hline
\hline
\end{tabular}
\caption{Statistics of the MetaQA and WSP datasets}\label{tab:statistics}
\end{table}
\subsection{Hyperparameter Settings}
\paragraph{Structure-BERT}
We set the dropout rate to 0.1 and the batch size to 32, and use the AdamW optimizer~\cite{loshchilov2017decoupled} with a learning rate of 5e-8. We also apply gradient clipping to constrain the maximum $L_2$-norm of the gradients to 1. To extract the latent representations of topic entities, pre-trained ComplEx~\cite{trouillon2016complex} and TransE~\cite{bordes2013translating} models are adopted for MetaQA and WSP, respectively.
\paragraph{BERT-based Ranking Model} We add one extra multi-head attention layer on top of the hidden state of the pre-trained BERT. This extra multi-head attention layer contains three attention heads and a 3072-dim fully connected layer. The dropout rate is set to 0.5. We use the AdamW optimizer~\cite{loshchilov2017decoupled} with a learning rate of 2e-5. We also use gradient clipping to constrain the maximum $L_2$-norm of the gradients to 1.
\subsection{Baselines}
We compare our SSKGQA against seven state-of-the-art complex KGQA models: 1) GraftNet~\cite{Sun2018Open}, which answers the questions based on the subgraphs it creates. 2) PullNet~\cite{Sun2019PullNet}, which proposes a ``pull" operation to retrieve the relevant information from KG and external corpus. 3) Key-Value Memory Network (KV-MemNN)~\cite{Miller2016Key}, which uses key-value pairs as the memory units to answer questions. 4) EmbedKGQA~\cite{Apoorv2020Improving}, which proposes a knowledge embedding method for Complex KGQA. 5) TransferNet~\cite{shi2021transfernet}, which utilizes an interpretable model for complex KGQA. 6) DCRN~\cite{cai2021deep}, which proposes a Bayesian network to retrieve the final answers. For MetaQA, we also include 7) VRN~\cite{Zhang2017Variational} as the baseline, which proposes an embedding reasoning graph and utilizes variational inference to improve the performance of Complex KGQA.
\begin{table}
\centering
\renewcommand\arraystretch{1.1} \begin{tabular} {lcccc}
\hline
\hline
Model& Hop-1 & Hop-2 & Hop-3 & WSP \\
\hline
KV-MemNN& 96.2 & 82.7& 48.9&46.7\\
VRN&97.5& 89.2& 62.5&- \\
GraftNet& 97.0&94.8& 77.7&66.4 \\
PullNet& 97.0&\textbf{99.9}&91.4&68.1\\
EmbedKGQA& 97.5 & 98.8 & 94.8&66.1\\
DCRN& 97.5 &\textbf{ 99.9 }& 99.3&67.8\\
TransferNet& 96.0 & 98.5 & 94.7&\textbf{71.4}\\
\hline
SSKGQA& \textbf{99.1} & 99.7 &\textbf{99.6}&\textbf{71.4}\\
\hline
\hline
\end{tabular}
\caption{Hits@1 values of different KGQA methods on MetaQA and WSP. Hop-$n$ denotes the hop-$n$ questions of MetaQA.}
\label{Ref:hit1 inMetaQA and WSP}
\end{table}
\subsection{Comparison with State-of-the-Arts} Table~\ref{Ref:hit1 inMetaQA and WSP} reports the performances of SSKGQA and seven state-of-the-art methods on MetaQA and WSP. As can be seen, the performance of KV-MemNN is limited by the error propagation over multi-hop reasoning, i.e., as the number of hops increases, its performance degrades significantly. GraftNet and PullNet perform similarly well on all datasets (except MetaQA-hop3), as both of them rely on subgraphs to retrieve the answers. Compared to GraftNet, PullNet has much improved results on MetaQA-hop3, indicating that the proposed pull operation is more suitable for complex questions. EmbedKGQA achieves a good performance on MetaQA, but a relatively lower performance on WSP. This is because treating a question as a relation path in a triple may introduce more noise, especially when the question is more complex. Even though DCRN achieves the best performance on MetaQA-hop2, it still suffers from error propagation when inferring the reasoning paths. For the more complex WSP questions, DCRN has a 3.6-point lower accuracy than our method. In general, TransferNet is the most competitive baseline to our SSKGQA. While both methods have the best results on WSP, SSKGQA has an improved performance on MetaQA over TransferNet. Overall, SSKGQA outperforms or achieves comparable exact-match hits@1 performances to the other methods, demonstrating the effectiveness of our proposed method.
\subsection{Ablation Study}
We further investigate the effectiveness of different components of SSKGQA, including semantic structure based filtering, Structure-BERT, the BERT-based ranking model, etc.
\subsubsection{Impact of Semantic Structure based Filtering}
One of the core ideas of SSKGQA is the semantic structure based filtering. In this section, we evaluate the effectiveness of this operator by enabling / disabling it and report the final performances of SSKGQA, which correspond to the w/ SS and w/o SS results in Table~\ref{ref:with ss}. For the purpose of illustration, when we enable the filtering (w/ SS), we assume that our Structure-BERT classifier can correctly predict the semantic structures of all the questions with 100\% accuracy, so that the impact of the filtering is not affected by the accuracy of the classifier. For ease of experimentation, we use a BiGRU as the ranking model in this experiment.
Table~\ref{ref:with ss} reports the impacts of the semantic structure based filtering. It can be observed that for simple questions, e.g., MetaQA-hop1, SSKGQA w/ SS and w/o SS have very similar performances. However, when the questions are more complex, SSKGQA w/ SS achieves significantly higher accuracies (sometimes over 10\%) than SSKGQA w/o SS, demonstrating the effectiveness of the semantic structure based filtering for complex questions.
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.1}
\begin{tabular} {lllll }
\hline
\hline
&Hop-1& Hop-2 & Hop-3& WSP\\
\hline
w/o SS&99.11& 93.71 & 62.10& 45.89\\
\hline
w/ SS&\textbf{ 99.26}&\textbf{ 99.03} & \textbf{95.69}& \textbf{ 58.51}\\
\hline
\hline
\end{tabular}
\caption{Hits@1 values of SSKGQA w/ SS and w/o SS on MetaQA and WSP.}
\label{ref:with ss}
\end{table}
\subsubsection{Accuracy of Structure-BERT}
Structure-BERT plays a critical role in SSKGQA as it predicts the semantic structure of a question, which is then used to filter out noisy candidate query graphs. In this section, we evaluate the accuracy of Structure-BERT and compare it with other design choices.
Specifically, we compare the performance of Structure-BERT with four other classifiers: BiGRU and three pre-trained language models, BERT~\cite{devlin2018bert}, DistilBERT~\cite{budzianowski2019hello} and CamemBERT~\cite{martin2020camembert}. These four classifiers classify a question directly into one of the six semantic structures, without considering the topic entity and information fusion as in Structure-BERT.
\begin{table}[h]
\centering
\renewcommand\arraystretch{1.1}
\begin{tabular} {lllll}
\hline
\hline
Model& Hop1&Hop2&Hop3&WSP\\
\hline
BiGRU& 96.44&94.49&98.83&80.95\\
BERT& 94.52&98.70&96.22&82.62\\
DistilBERT& 95.66&98.30&97.02& 83.37\\
CamemBERT& 96.66&97.30&98.26& 81.90\\
Structure-BERT&\textbf{99.24} &\textbf{99.87}&\textbf{99.73}&\textbf{86.97}\\
\hline
\hline
\end{tabular}
\caption{Classification accuracies of different classifiers on predicting semantic structures of questions from MetaQA and WSP.} \label{ref:structure:bert}
\end{table}
Table~\ref{ref:structure:bert} reports the classification accuracies of different classifiers on the questions from MetaQA and WSP. As can be seen, our Structure-BERT achieves nearly 100\% accuracy on MetaQA and 86.97\% accuracy on WSP, demonstrating the effectiveness of Structure-BERT for semantic structure prediction. Further, Structure-BERT consistently outperforms all the other classifiers by notable margins, indicating the importance of leveraging both question and topic entity for information fusion in semantic structure prediction. We also notice that the classification accuracy on WSP is much lower than that on MetaQA. This is likely due to: (1) the class imbalance of the WSP questions, and (2) the much smaller number of training questions in WSP (3,304) than in MetaQA (329,282). We leave further improvements of Structure-BERT on WSP to future work.
\subsubsection{Performance of BERT-based Ranking Model}
The BERT-based ranking model decides which candidate query graph is used to retrieve the final answer; its performance is therefore of paramount importance to SSKGQA. In this section, we evaluate the effectiveness of our proposed BERT-based ranking model and compare it with three other ranking methods: 1) CNN~\cite{yih2015semantic}, which uses a CNN to learn the representations of the question and candidate query graphs for ranking; 2) BiGRU, which uses a BiGRU to learn the representations of the question and candidate query graphs for ranking;
3) BERT~\cite{devlin2018bert}, which uses a pre-trained BERT\footnote{https://huggingface.co/bert-base-uncased} to extract the representations of the question and candidate query graphs for ranking.
\begin{table}[h]
\centering
\renewcommand\arraystretch{1.1}
\begin{tabular} {lllll}
\hline
\hline
Model & Hop-1 & Hop-2 & Hop-3 & WSP\\
\hline
CNN & 97.70 & 99.21 & 92.91 & 50.24\\
BiGRU & 98.87 & 98.95 & 95.43 & 56.51\\
BERT & \textbf{99.49} & 99.26 & 99.54 & 71.02\\
BERT$^*$ (ours) & 99.10 & \textbf{99.69} & \textbf{99.64} & \textbf{71.40}\\
\hline
\hline
\end{tabular}
\caption{Hits@1 values of different ranking models on MetaQA and WSP. BERT$^*$ denotes our BERT-based ranker.}
\label{tab:ranking model}
\end{table}
Table~\ref{tab:ranking model} reports the performances of different ranking models on MetaQA and WSP, where BERT$^*$ denotes our proposed BERT-based ranking model with one extra multi-head attention layer on top of the hidden states of the pre-trained BERT. As we can see, the BERT-based ranking models (BERT and BERT$^*$) outperform the traditional CNN and BiGRU based ranking models, since the former can leverage the large-scale pre-trained BERT for transfer learning. Our BERT$^*$ further improves on the pre-trained BERT thanks to the additional attention layer, which enables the model to reweight the attention values over the semantic units in the input and enhances the semantic representations of question and candidate query graphs for ranking.
To validate the design choices of our BERT-based model, we run additional ablation studies on different factors of the ranking model, such as the number of negative query graphs used in the triplet-loss training and the number of heads in the added multi-head attention layer. The details are relegated to Sec.~\ref{ref:more_ablation} in Appendix.
\section{Conclusions}
This paper introduces SSKGQA, a semantic structure based method to predict query graphs from natural language questions. Compared to prior query graph prediction based methods, SSKGQA filters out noisy candidate query graphs based on the semantic structures of input questions. To this end, we define six semantic structures from common questions of the KGQA benchmarks. A novel Structure-BERT classifier is then introduced to predict the semantic structure of each question, and a BERT-based ranking model with a triplet loss is proposed for query graph ranking. Extensive experiments on MetaQA and WSP demonstrate the superior performance of SSKGQA over seven state-of-the-art methods.
As for future work, we plan to investigate techniques to design additional semantic structures to cover the remaining 22.98\% of questions in WSP. We would also like to improve Structure-BERT's accuracy on WSP by addressing its class imbalance and data scarcity issues.
\bibliographystyle{acl_natbib}
\section{Related Work}
\subsection{Multi-hop Question Answering}
Current works on multi-hop question answering mainly focus on retrieving answers by calculating relation paths step by step. In general, the right answer can be retrieved if the relation path is identified correctly at each hop. \citet{Kun2019Enhancing} enhance the traditional Key-Value Memory Neural Networks (KV-MemNNs)~\cite{Miller2016Key} for multi-hop question answering: they design a query updating strategy to decompose the question and predict the relevant relation paths at each hop. TransferNet~\cite{shi2021transfernet} also calculates relation path scores based on an updated question at each hop, but leverages the attention mechanism to update question representations over multiple hops. More recently, \citet{cai2021deep} introduce the dual process theory to predict the relation paths at each hop. Although these methods achieve promising results, they suffer from error propagation when predicting the relation path step by step. To mitigate this issue, SSKGQA identifies the top-1 query graph by directly calculating the similarity scores between the question and candidate query graphs (or, equivalently, relation paths).
On the other hand, \citet{Sun2019PullNet,Sun2018Open} incorporate an external corpus to enhance the performance of KGQA. First, they construct a subgraph for each question from the KG and the Wikipedia documents; this subgraph stores the entities and text that have a high probability of being relevant to the topic entity and the question. Afterwards, a Graph Neural Network (GNN)~\cite{Kipf2016Semi} is exploited to identify the final answer. A challenge of this method is that constructing a subgraph around the topic entity is difficult, because relevant entities must be identified from the external corpus, and this process is error-prone. \citet{Apoorv2020Improving} predict answers by training a KG embedding model on tuples of (topic entity, question, answer). However, complex questions with long relation paths can significantly reduce the learnability of KG embeddings. Our SSKGQA does not need an external corpus to improve prediction accuracy, and can solve complex multi-hop questions with a semantic structure based ranking model.
\subsection{Complex Questions with Constraints}
For questions with constraints, a sequence of works focuses on how to reach the answers by generating query graphs. \citet{yih2015semantic} enumerate all possible entities and relation paths connected with a topic entity to generate candidate query graphs, and use a CNN-based ranking model to identify the query graph. Following a candidate query graph generation method similar to that of \cite{yih2015semantic}, \citet{maheshwari2019learning} propose a novel query graph ranking method based on self-attention. \citet{qin2021improving} introduce a query graph generation method based on their proposed relation subgraphs. However, these methods largely ignore the noise introduced when generating the candidate query graphs, which undermines the predictive performance during query graph ranking. To mitigate this issue, SSKGQA predicts the semantic structure of a question, which is then used to reduce the noise in the candidate query graphs.
\subsection{Query Graph Ranking}
Current research on KGQA mainly focuses on how to generate the candidate query graphs, and only a few works explore how to rank them. \citet{lan2020query} concatenate question and candidate query graph into a single sequence, and leverage BERT \cite{devlin2018bert} to process the whole sequence for ranking. However, \citet{reimers2019sentence} show that this strategy is inefficient, as the combinatorial nature of concatenating questions with candidate query graphs incurs massive duplicated computation. \citet{chen2020formal} explore GRUs to encode the question and query graph information, and utilize a hinge loss to learn a ranking model. However, GRUs can only learn limited interactions among the words in a sentence, while global interactions among words have proven critical for text representation in various NLP applications~\cite{khan2020mmft}. To solve the aforementioned issues, SSKGQA exploits separate BERT models to process questions and query graphs, respectively, reuses the extracted features to avoid duplicated calculation, and leverages a triplet loss to train the BERT-based ranking model.
One of the most important problems of contemporary materials science is the investigation of the onset and development of gas porosity in materials. The creation of materials with improved radiation hardness is important for the development of atomic energetics as well as for other sectors of industry. Along with vacancy pores, gas-filled pores were discovered that form when metals are irradiated by fast neutrons or by charged-particle fluxes in accelerators. Theoretical investigation of these problems was first performed in the works \cite{1s}-\cite{7s}. In the same works, the growth of pores filled with noble gases was considered as applied to material swelling, which is, to a large degree, connected with pore coalescence. The physical cause of material swelling as a consequence of gas porosity consists in the absorption of thermal vacancies during the redistribution of pores in the course of coalescence. Pore behaviour becomes even more complicated if the pore is filled with a chemically active gas (or gases) that, at coalescence temperatures, can interact with the matrix material or with other gases, forming one or several gaseous compounds inside the pore. Such a situation can take place, for example, under irradiation, when fragments in the form of chemically active gas molecules are formed in the material. The process of gas-filled bubble formation can probably occur in many materials, since practically all real materials contain interstitial impurities in the form of oxide, carbide, nitride, and other phases \cite{4s}.
Another up-to-date trend connected with the investigation of gas porosity relates to the creation of new nano- and mesomaterials. Such materials are formed via consolidation of nano- and mesoscale particles that initially possess a complex defect structure. The properties of such particles are, to a large degree, determined by this defect structure \cite{8s}-\cite{17s}. The regularities of diffusive growth, healing and motion of such defects in nanoparticles present an important problem for the further compactification of nanoparticles and the creation of new materials.
Such materials find important applications in optical spectroscopy, biomedicine, electronics and other areas \cite{18s}-\cite{19s}.
Creation of the theory of diffusive interaction of pores in bounded media, for example, in spherical particles, is an exceptionally complicated task.
In bounded matrix particles, the presence of a nearby boundary strongly complicates pore behaviour. The closeness of the boundary leads to principally different pore behaviour as compared to that in unbounded materials. It is worth noting that pore formation in spherical nanoshells was discovered relatively recently \cite{8s}. In the review \cite{20s} the results are presented of theoretical and numerical investigations related to the formation and disappearance of pores in spherical and cylindrical nanoparticles. Great attention in \cite{20s} is paid to the problem of hollow nanoshell stability, i.e. to the case when a large vacancy pore is situated in the nanoparticle center.
The analytical theory of diffusive interaction between the nanoshell and a pore situated at an arbitrary distance from the particle center was considered in the work \cite{21s}, where the behaviour of a vacancy pore inside a solid matrix of spherical shape was studied. Under the supposition of quasi-equilibrium diffusive fluxes, equations have been obtained analytically for the change of the radii of the pore and the spherical granule as well as of the center-to-center distance between them. The absence of a critical pore size has been demonstrated, unlike the case of an infinite matrix. In the general case, a pore in such particles dissolves diffusively, diminishing in size and shifting towards the granule center.
In the present work, the simple case of zero diffusion coefficient of the gas in the matrix is considered. It is shown that the behaviour of a gas-filled pore differs qualitatively from that of a vacancy pore in a spherical matrix. Unlike a vacancy pore, the gas-filled one tends to a stable size that is determined by the gas density. Asymptotic regimes as well as the main regularities of gas-filled pore behaviour have been established.
\section{Evolution equations of gas-filled pore}
Let us consider a spherical granule of radius $R_s$ containing a gas-filled pore of radius $R < R_{s}$ (see Fig.\ref{fg1}).
Let us designate the initial values of these radii (at time moment $t=0$) as $R_s(0)$ and $R(0)$ correspondingly.
Suppose that the granule and pore centers are separated by the distance $l$. We assume that the gas can be found only inside the pore and neglect gas diffusion through the pore boundary. We are interested in the pore and granule evolution under the influence of diffusive vacancy fluxes. A complete description of such evolution assumes the knowledge of the pore and granule size change with time as well as of the time change of their center-to-center distance. In order to obtain the equations describing such evolution, boundary conditions are required; they are determined by the equilibrium vacancy concentrations near the pore and granule surfaces. The equilibrium vacancy concentration near a spherical pore surface is determined, with account of the gas pressure, by the relation (see e.g. \cite{7s},\cite{22s})
\begin{equation}\label{eq1}
c_{R}=c_{V}\exp\left(\frac{2\gamma\omega}{kTR}-\frac{P\omega}{kT}\right)\,,
\end{equation}
where $c_V$ is the equilibrium vacancy concentration near a plane surface, $\gamma$ is the surface energy per unit area, $T$ is the granule temperature, $\omega$ is the atomic volume (the volume per lattice site), and $P$ is the gas pressure inside the pore, which satisfies the ideal-gas equation of state:
\[P\cdot \frac{4\pi}{3}\cdot R^3=N_g kT,\]
here $N_g$ is the number of gas atoms (molecules) inside the pore.
In the same way, the equilibrium concentration of vacancies near the free surface of the spherical granule is determined. Taking into account that granules of nano- and meso-scale sizes are of interest here, it is natural to assume that the external pressure is small compared to the Laplace pressure. Then the equilibrium vacancy concentration near the free surface of the spherical granule is determined as
\begin{equation}\label{eq2}
c_{R_{s}}=c_{V}\exp\left(-\frac{2\gamma\omega}{kTR_s}\right)\,,
\end{equation}
These concentration values will determine the vacancy fluxes. In the further consideration we will suppose that the equilibrium concentrations adjust quickly to the change of the pore and granule sizes; in other words, the equilibrium concentrations tune themselves to the evolving geometry. Certainly, the problem remains extremely complicated. For the sake of simplicity, it is natural to make one more assumption, namely, to suppose that stationary vacancy fluxes inside the granule are established quickly. There are two arguments in favour of this.
\begin{figure}
\centering
\includegraphics[width=7 cm]{pr1.eps}\\
\caption{Gas-filled pore in a spherical granule in the bispherical coordinate system. The pore and granule surfaces in this system are coordinate surfaces $\eta =\textrm{const}$.} \label{fg1}
\end{figure}
First of all, even beyond the limits of this assumption, the vacancy distribution inside the granule is unknown. Besides, in a number of cases stationary fluxes are established quickly enough: the characteristic time of their establishment is of the order of $\tau \sim l^2/D$, which is small compared with the characteristic time of pore evolution.
Under such assumptions, diffusion flux of vacancies onto pore and granule boundaries is determined by stationary diffusion equation and corresponding boundary conditions
\begin{equation}\label{eq3}
\Delta c =0 ,
\end{equation}
\[c(r)|_{r=R}=c_{R} ,\]
\[c(r)|_{r=R_{s}}=c_{R_{s}} .\]
The geometry of the pore and granule boundaries dictates the use of the bispherical coordinate system \cite{23s} as the most convenient one. In the bispherical coordinate system (see Fig.\ref{fg1}) each point $A$ of space is matched to three numbers $(\eta,\xi,\varphi)$, where $\eta=\ln(\frac{|AO_1|}{|AO_2|})$, $\xi=\angle O_1AO_2$, and $\varphi$ is the polar angle.
Let us cite the relations connecting bispherical coordinates with Cartesian ones:
\begin{equation}\label{eq4}
x=\frac{a\cdot \sin\xi\cdot\cos\varphi}{\cosh\eta-\cos\xi}, \quad
y=\frac{a\cdot \sin\xi\cdot\sin\varphi}{\cosh\eta-\cos\xi}, \quad
z=\frac{a\cdot \sinh\eta}{\cosh\eta-\cos\xi},
\end{equation}
where $a$ is a parameter that, at fixed values of the pore and granule radii and of their center-to-center distance, is determined by the relation
$$a=\frac{\sqrt{[(l-R)^2-R_s^2][(l+R)^2-R_s^2]}}{2\cdot l}\,.$$
Pore and granule surfaces in such coordinate system are given by relations
\begin{equation}\label{eq5}
\eta_1=\textrm{arsinh} \left(\frac{a}{R}\right), \quad
\eta_2=\textrm{arsinh} \left(\frac{a}{R_s}\right).
\end{equation}
These relations determine the values of $\eta_1$ and $\eta_2$ from the pore and granule radii, while $a$ additionally involves the center-to-center distance between the pore and the granule. In the bispherical coordinate system the equation for the vacancy concentration and the boundary conditions take on the following form:
\begin{equation}\label{eq6}
\frac{\partial}{\partial\eta}\left(\frac{1}{\cosh\eta-\cos\xi}\frac{\partial c}{\partial\eta}\right)+
\frac{1}{\sin\xi}\frac{\partial}{\partial\xi}\left(\frac{\sin\xi}{\cosh\eta-
\cos\xi}\frac{\partial c}{\partial\xi}\right)+\frac{1}{(\cosh\eta-
\cos\xi)\cdot \sin^2\xi}\frac{\partial^2
c}{\partial\varphi^2}= 0
\end{equation}
\[c(\eta , \xi, \varphi)|_{\eta_1}=c_{R}\]
\[c(\eta , \xi, \varphi)|_{\eta_2}=c_{R_s}\]
Due to the symmetry of the problem, the vacancy concentration does not depend on the variable $\varphi$. Consequently, equation (\ref{eq6}) reduces to
\begin{equation}\label{eq7}
\frac{\partial}{\partial\eta}\left(\frac{1}{\cosh\eta-\cos\xi}\frac{\partial c}{\partial\eta}\right)+
\frac{1}{\sin\xi}\frac{\partial}{\partial\xi}\left(\frac{\sin\xi}{\cosh\eta-
\cos\xi}\frac{\partial c}{\partial\xi}\right)= 0
\end{equation}
The substitution $c(\eta,\xi)=\sqrt{\cosh\eta-\cos\xi}\cdot
F(\eta,\xi)$ yields an equation for the function $F(\eta,\xi)$ in the following form:
\begin{equation}\label{eq8}
\frac{\partial^2F}{\partial\eta^2}+\frac{1}{\sin\xi}\frac{\partial}{\partial\xi}
\left(\sin\xi\frac{\partial F}{\partial\xi}\right)-\frac14F=0\,,
\end{equation}
Let us seek a solution of this equation by separation of variables: $F(\eta,\xi)= F_1(\eta)\cdot F_2(\xi)$. As a result, the following equations are obtained:
\[\frac{d^2F_1}{d\eta^2}=\left(k+\frac{1}{2} \right)^2\cdot F_1 ,\]
\[ \frac{1}{\sin\xi}\frac{d}{d\xi}
\left(\sin\xi\frac{d F_2}{d\xi}\right)=-k\cdot(k+1)\cdot F_2\,.\]
Here the parameter $k$ is a separation constant.
The solutions of these equations are easily found, taking into account that the second one coincides with the Legendre equation. The general solution can then be written in the form:
\begin{equation}\label{eq9}
c(\eta,\xi) = \sqrt{\cosh\eta-\cos\xi}\,\sum_{k=0}^{\infty}\left(A_k\, e^{(k+1/2)\eta}+B_k\, e^{-(k+1/2)\eta}\right)P_k(\cos\xi)\,,
\end{equation}
where $A_k$ and $B_k$ are arbitrary constants, and $P_k(x)$ are the Legendre polynomials:
\[P_k(x) = \frac1{2^k\cdot k!}\frac{d^k}{dx^k}(x^2-1)^k,\,\quad P_0(x)\equiv 1.\]
We still have to determine the arbitrary constants from the boundary conditions; the solution of the boundary problem (\ref{eq6}) is found as
\begin{equation}\label{eq10}
\begin{split}
c(\eta, \xi ) =
\sqrt{2(\cosh\eta-\cos\xi)}\Bigg\{{c_R}\sum_{k=0}^\infty
\frac{\sinh\left[(k+1/2)(\eta-\eta_2)\right]}{\sinh\left[(k+1/2)(\eta_1-\eta_2)\right]}\,e^{-(k+1/2)\eta_1}P_k(\cos\xi)\,-
\\
-\, c_{R_s}\sum_{k=0}^\infty
\frac{\sinh\left[(k+1/2)(\eta-\eta_1)\right]}{\sinh\left[(k+1/2)(\eta_1-\eta_2)\right]}\,e^{-(k+1/2)\eta_2}P_k(\cos\xi)
\Bigg\}\,.
\end{split}
\end{equation}
Let us note that here the boundary concentration $c_R$ is expressed through $\eta_1$ and $a$, and $c_{R_s}$ through $\eta_2$ and $a$. This solution determines the stationary vacancy concentration anywhere inside the spherical granule of radius $R_s$ and outside the pore of radius $R$. The knowledge of the vacancy concentration allows one to find the vacancy fluxes onto the pore and onto the granule boundary at their given positions. These fluxes change the size and position of the pore. With account of this, one can write down the equations for the time change of the pore and granule radii as well as of their center-to-center distance. The vacancy flux is determined by Fick's first law as
\begin{equation}\label{eq11}
\vec{j}=-\frac {D}{\omega} \nabla c\,,
\end{equation}
where $D$ is the diffusion coefficient. Let us denote the outer normal of the pore surface as $\vec{n}$. Then the vacancy flux onto the pore surface is determined by the scalar product $\vec{n} \cdot \vec{j}|_{\eta=\eta_1}$. Let us write down the expression for the vacancy flux onto a unit area of the pore surface using the expression for the gradient in bispherical coordinates \cite{23s}
\begin{equation}\label{eq12}
\vec{n} \cdot \vec{j}|_{\eta=\eta_1}=\frac {D}{\omega} \cdot
\frac{\cosh\eta_1-\cos\xi}{a}\frac{\partial
c}{\partial\eta}\left|_{\eta=\eta_1}\right.\,.
\end{equation}
A similar expression determines the vacancy flux onto a unit area of the granule surface
\begin{equation}\label{eq13}
\vec{n} \cdot \vec{j}|_{\eta=\eta_2}=\frac {D}{\omega} \cdot
\frac{\cosh\eta_2-\cos\xi}{a}\frac{\partial
c}{\partial\eta}\left|_{\eta=\eta_2}\right.\,.
\end{equation}
Here $\vec{n}$ is the granule surface normal. Evidently, the total vacancy flux onto the pore surface determines the rate of the pore volume change. It is natural to suppose that surface diffusion, whose coefficient usually much exceeds that of the bulk, manages to restore the spherical shapes of the pore and the granule. Thus, it is easy to write down the equation for the pore radius change in the form
\[\dot{R}=-\frac{\omega}{4\pi R^2} \oint\vec{n}\vec{j}|_{\eta=\eta_1} \, dS\]
In the same way one obtains the equation that determines granule radius:
\[\dot{R_s}=-\frac{\omega}{4\pi R_s^2}\oint \vec{n}\vec{j}|_{\eta=\eta_2} dS\]
After substituting the exact solution and performing the integration, one obtains the equation for the time change of the pore radius:
\begin{equation}\label{eq14}
\dot{R}=-\frac{D}{R}\left[\frac{c_R}{2}
+\sinh\eta_1\cdot(c_R\cdot(\Phi_1+\Phi_2)-2c_{R_s}\cdot\Phi_2)\right],
\end{equation}
where the functions $\Phi_1$ and $\Phi_2$ are introduced as sums of exponential series:
\[\Phi_1=\sum_{k=0}^\infty \frac{ e^{-(2k+1)\eta_1}}{e^{(2k+1)(\eta_1-\eta_2)}-1}, \quad \Phi_2=\sum_{k=0}^\infty \frac{ e^{-(2k+1)\eta_2}}{e^{(2k+1)(\eta_1-\eta_2)}-1}\]
The details of the derivation are given in the appendix. Here $\eta_1$ and $\eta_2$ are expressed through the pore and granule radii in correspondence with relations (\ref{eq5}), while $c_R$ and $c_{R_s}$ through (\ref{eq1}) and (\ref{eq2}). Thus, the right part of this equation depends nonlinearly on $R$, ${R_s}$ and $l$. In a similar way we obtain the equation
\begin{equation}\label{eq15}
\dot{R_s}=-\frac{D}{R_s}\left[\frac{c_{R_s}}{2}
+\sinh\eta_2\cdot(2c_{R}\cdot\Phi_2 - c_{R_s}\cdot(\Phi_2+\Phi_3)) \right]\,,
\end{equation}
where the following definition for the function $\Phi_3$ is introduced:
\[\Phi_3=\sum_{k=0}^\infty \frac{ e^{-(2k+1)(2\eta_2-\eta_1)}}{e^{(2k+1)(\eta_1-\eta_2)}-1}=\sum_{k=0}^\infty \frac{ e^{-(2k+1)\eta_3}}{e^{(2k+1)(\eta_1-\eta_2)}-1}\]
In order to obtain a closed set of equations determining the granule and pore evolution, one needs to complement these equations with an equation for the rate of change of the center-to-center distance between the pore and the granule. The displacement rate of a vacancy pore relative to the granule center is also determined by the diffusion fluxes of vacancies onto the pore surface (see e.g. \cite{7s},\cite{22s}). In the present case, the displacement rate is determined by the relation
\begin{equation}\label{eq16}
\vec{v}=-\frac{3\omega}{4\pi R^2}\oint \vec{n}(\vec{n}\cdot\vec{j}_v)|_{\eta=\eta_1} dS.
\end{equation}
Using again the exact solution (\ref{eq10}) and performing integration (see Appendix), one obtains:
\begin{equation}\label{eq17}
\begin{split}
\vec{v}=\vec{e_z}\cdot\frac{3 D}{R}
\Big[\sinh^2\eta_1 \cdot(c_R\cdot
(\widetilde{\Phi}_1+\widetilde{\Phi}_2)-2c_{R_s}\cdot \widetilde{\Phi}_2)- \\
-\frac{1}{2}\sinh2\eta_1\cdot(c_R\cdot(\Phi_1+\Phi_2)-2c_{R_s}\cdot \Phi_2)\Big]
\end{split}
\end{equation}
Here new functions $\widetilde{\Phi}_1$ and $\widetilde{\Phi}_2$ are defined:
\[\widetilde{\Phi}_1=\sum_{k=0}^\infty \frac{(2k+1)e^{-(2k+1)\eta_1}}{e^{(2k+1)(\eta_1-\eta_2)}-1}, \quad \widetilde{\Phi}_2=\sum_{k=0}^\infty \frac{(2k+1)e^{-(2k+1)\eta_2}}{e^{(2k+1)(\eta_1-\eta_2)}-1}\,.\]
Taking into account that displacement rate along $z$ coincides with $dl/dt$, let us write down the equation in the final form
\begin{equation}\label{eq18}
\begin{split}
\frac{dl}{dt} = \frac{3 D}{R}
\Big[\sinh^2\eta_1 \cdot(c_R\cdot
(\widetilde{\Phi}_1+\widetilde{\Phi}_2)-2c_{R_s}\cdot \widetilde{\Phi}_2)- \\
-\frac{1}{2}\sinh2\eta_1\cdot(c_R\cdot(\Phi_1+\Phi_2)-2c_{R_s}\cdot \Phi_2)\Big]
\end{split}
\end{equation}
The obtained equation set (\ref{eq14}), (\ref{eq15}) and (\ref{eq18}) completely determines the evolution of the gas-filled pore and the granule with time. In the limiting case when the gas is absent, $P=0$ (vacancy pore), equations (\ref{eq14}), (\ref{eq15}) and (\ref{eq18}) agree with the results of the work \cite{21s}. Let us discuss several general properties of the obtained equation set. First of all, it is clear that the volume of the granule material does not change with time: vacancies only carry away ``emptiness''. Indeed, it follows easily from the obtained equations that
\[R_s(t)^2\dot{R}_s(t)-R(t)^2\dot{R}(t)=0\]
The validity of this conservation law is closely connected with the quasi-stationary approximation adopted here: the vacancy fluxes leaving the pore and arriving at the granule surface balance each other. Thus, the volumes of the pore and of the granule are connected by the simple relation
\begin{equation}\label{eq19}
R_s(t)^3 ={V+R(t)^3}
\end{equation}
where $V=R_s(0)^3 -R(0)^3$ is the initial volume of the granule material (the multiplier $4\pi /3$ is omitted for convenience). From the very statement of the problem, a second conservation law follows, namely the conservation of the gas amount inside the pore: $m_g=\textrm{const}$, or $N_g=\textrm{const}$. The existence of the conservation law (\ref{eq19}) enables us to reduce the number of unknown quantities. As a result, we obtain a Cauchy problem for a system of two differential equations for $R$ and $l$, whose solution describes the evolution of the gas-filled pore inside the nanoparticle:
\begin{equation}\label{eq20}
\begin{cases}
\frac{d l}{d t}=\frac{3Dc_V}{R}\cdot\exp\left(\frac{2\gamma\omega}{kTR}-\frac{3\omega N_g}{4\pi R^3}\right) \cdot \left[\frac{a^2}{R^2} \cdot (\widetilde \Phi _1 + \widetilde \Phi _2 ) - \frac{a}{R} \cdot \sqrt {1 + \frac{a^2}{R^2}} \cdot (\Phi _1 + \Phi _2 )\right]-\\
-\frac{6Dc_V}{R}\cdot\exp\left(-\frac{2\gamma\omega}{kTR_{s}}\right)\cdot \left[\frac{a^2}{R^2} \cdot \widetilde \Phi _2 - \frac{a}{R} \cdot \sqrt {1 + \frac{a^2}{R^2}} \cdot \Phi _2 \right],\\
\frac{d R}{d t}=-\frac{Dc_V}{R}\cdot\exp\left(\frac{2\gamma\omega}{kTR}-\frac{3\omega N_g}{4\pi R^3}\right) \cdot \left[\frac{1}{2} + \frac{a}{R} \cdot (\Phi _1 + \Phi _2 )\right]+\\
+\frac{2Dc_V}{R}\cdot\exp\left(-\frac{2\gamma\omega}{kTR_{s}}\right)\cdot \frac{a}{R} \cdot \Phi _2,\\
R_s =\sqrt[3]{V+R^3},\\
R|_{t=0}=R(0),\\
l|_{t=0}=l(0).
\end{cases}
\end{equation}
For the sake of convenience, let us make equation set (\ref{eq20}) dimensionless, with characteristic length $R_0 = R(0)$ (the pore radius at the initial time moment $t = 0$) and characteristic time $t_D = R_0^2/(Dc_V)$. Let us go over to the following dimensionless variables:
\[r = \frac{R}{R_0},\quad r_s = \frac{R_{s}}{R_0},\quad L = \frac{l}{R_0},\quad {\tau} = \frac{t}{{t_D }}, \quad \alpha = \frac{a}{R_0},\quad \frac{{2\gamma \omega }}{{kTR}} = \frac{A}{{r}}, \]
\[\frac{{2\gamma \omega }}{{kTR_s}} = \frac{A}{{r_s}},\quad A = \frac{{2\gamma \omega }}{{kTR_0}},\quad \frac{3\omega N_g}{4\pi R^3} = \frac{B}{r^3}, \quad B=\frac{3\omega N_g}{4\pi R_0^3}.\]
Ultimately, the equation system (\ref{eq20}) can be rewritten in dimensionless form:
\begin{equation}\label{eq21}
\begin{cases}
\frac{d L}{d \tau}=\frac{3\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r} \cdot \left[\frac{\alpha^2}{r^2} \cdot (\widetilde \Phi _1 + \widetilde \Phi _2 ) - \frac{\alpha}{r} \cdot \sqrt {1 + \frac{\alpha^2}{r^2}} \cdot (\Phi _1 + \Phi _2 )\right]-\\
-\frac{6\exp\left(-\frac{A}{r_s}\right)}{r}\cdot \left[\frac{\alpha^2}{r^2} \cdot \widetilde \Phi _2 - \frac{\alpha}{r} \cdot \sqrt {1 + \frac{\alpha^2}{r^2}} \cdot \Phi _2 \right],\\
\frac{d r}{d \tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r} \cdot \left[\frac{1}{2} + \frac{\alpha}{r} \cdot (\Phi _1 + \Phi _2 )\right]+\frac{2\exp\left(-\frac{A}{r_s}\right)}{r}\cdot \frac{\alpha}{r} \cdot \Phi _2,\\
r_s=\sqrt[3]{V+r^3},\\
r|_{\tau =0}=1,\\
L|_{\tau =0}=\frac{l(0)}{R(0)}.
\end{cases}
\end{equation}
The obtained non-linear system of evolution equations (\ref{eq21})
\begin{figure}
\centering
\includegraphics[width=7 cm, height=7 cm]{pr26.eps}
\includegraphics[width=7 cm, height=7 cm]{pr27.eps}\\
\caption{Phase portraits of trajectories in the plane $(r,L)$, obtained via numerical solution of Eqs. (\ref{eq21}). On the left: the phase portrait for trajectories of "small" pores, obtained at initial conditions $r|_{t=0}=1$, $r_s|_{t=0}=100$, $A=10^{-1}$ and $B=0.25$; on the right: the phase portrait for trajectories of "large" pores, obtained at initial conditions $r|_{t=0}=1$, $r_s|_{t=0}=1.5$, $A=10^{-1}$ and $B=0.25$.}
\label{fg2}
\end{figure}
is rather complicated. However, we can analyse the gas-filled pore evolution numerically via the vector field determined by the right-hand sides of equation system (\ref{eq21}). The corresponding vector field in the plane $(r,L)$ is demonstrated in Fig.\ref{fg2}, where $r$ and $L$ are the pore radius and its position relative to the granule center, correspondingly. Integral lines of this vector field determine the phase portrait of equation system (\ref{eq21}).
The vector field for the case of "small" pores is shown in Fig. \ref{fg2} on the left, and for "large" pores on the right. The exact classification of pores into "small" and "large" will be described below. It is easy to note that there is a limiting pore size $r_{cr}$ toward which a pore evolves; this size depends on the pore position either slightly or not at all. This can be understood from the physical point of view if one takes into account that the boundary conditions (\ref{eq1}), (\ref{eq2}) in this approximation do not depend on the pore position. Pore motion, which is caused by vacancy fluxes onto the boundary of the spherical surface, is limited by the value of the gas pressure. Thus, after reaching the size $r_{cr} \approx \sqrt{3 N_g kT/8 \pi \gamma}$, at which the boundary concentrations equalize due to the gas pressure, the pore ceases to change its size and to move.
Thus the evolution of a gas-filled pore consists in its tendency to reach a stationary size, while its position changes slowly and insignificantly. As the pore size approaches its stationary value, the pore motion ceases. For large pores, the direction of motion depends on the pore size. If the pore is larger than its stationary size, then, while shrinking down to the stationary size, it moves towards the granule center (see the right part of Fig. \ref{fg2}). If the pore is smaller than its stationary size, then, while growing up to the stationary size, it moves away from the granule center (see the right part of Fig. \ref{fg2}).
\section{Asymptotic evolution modes}
Let us consider the asymptotic modes of equation set (\ref{eq21}). The possible modes are determined by three dimensionless values: $R/R_s$, $l/R_s$ and $R/l$. Let us suppose that the pore is situated at the distance $l$ from the granule center and its radius equals $R$. The condition that such a pore is situated inside the granule leads to the purely geometrical inequality
\begin{equation}\label{eq22}
R/R_s+l/R_s < 1
\end{equation}
This inequality holds for all evolution modes of a gas-filled pore. In different cases, the above-mentioned characteristic dimensionless values are of different orders of smallness. Thus, the value $\delta=R/R_s < 1$ is always smaller than unity; the same relates also to $l/R_s < 1$. Assuming smallness of some of these values, with account of the geometrical restriction, we obtain the possible asymptotic modes, which are discussed in more detail below. Besides, it is clear that the character of pore evolution is influenced by the concentration (or pressure) of the gas inside the pore. Here, three cases can be distinguished. The first is the case of high gas concentration, which leads to pore "swelling" up to the stationary size. The second case corresponds to such a value of gas concentration inside the pore that the initial radius and position of the pore do not change during the evolution time. The third case relates to small gas concentration, at which the pore size decreases down to some stationary value.
\subsection{Small pores}
First of all, let us consider the case of small pores, $R/R_s \ll 1$. At this, the distance from the pore to the granule center can take on different values.
Thus, the case is possible when
\[ R/R_s \ll 1, \quad l/R_s \ll 1.\]
Here, the relation between these values can vary. The possibility exists that
\[R/R_s \ll l/R_s \Rightarrow R/l \ll 1\]
This means that the distance from the small pore to the granule center is large as compared to the pore radius. Thus, the mode exists when
\begin{equation}\label{eq23}
1) \quad R/R_s \ll 1, \quad l/R_s \ll 1,\quad R/l \ll 1
\end{equation}
Of course, another disposition is possible, when a small pore is situated close to the granule center. In this case, the relation between the values is opposite:
\[R/R_s \gg l/R_s \Rightarrow R/l \gg 1\]
Then, the next possible mode is determined by the relations of values
\begin{equation}\label{eq24}
2) \quad R/R_s \ll 1, \quad l/R_s \ll 1,\quad R/l \gg 1
\end{equation}
\begin{figure}
\centering
\includegraphics[ height=7 cm]{pr2.eps}
\includegraphics[ height=7 cm]{pr3.eps}\\
\includegraphics[ height=7 cm]{pr4.eps}
\includegraphics[ height=7 cm]{pr5.eps}\\
\caption{On the upper left, the dependence is shown of pore radius $r$ on time $\tau$ for different values of gas parameter $B$: $B=0.25$, $B=0.10101$ and $B=0.025$. The solid line corresponds to the numerical solution of the complete equation set (\ref{eq26}), the dash-and-dot line to the numerical solution of the approximate equations (\ref{eq30})-(\ref{eq31}); on the upper right, the dependence is presented of distance $L$ on time $\tau$ for parameter $B=0.25$; below on the left, for parameter $B=0.10101$; below on the right, for parameter $B=0.025$. All solutions are obtained at initial conditions $r|_{\tau=0}=1$, $r_s|_{\tau=0}=100$, $L|_{\tau=0}=10$ and $A=10^{-1}$.}
\label{fg3}
\end{figure}
Moreover, small pores can be situated at a significant distance from the granule center, comparable with the granule size. In this case, the following relations are realized:
\begin{equation}\label{eq25}
3) \quad R/R_s \ll 1, \quad l/R_s \simeq 1, \quad R/l \ll 1.
\end{equation}
In this case, pore is situated close to granule boundary.
Let us note that the case of small pores is distinguished by one more simplifying circumstance. It can easily be seen that the healing of small pores,
$\frac{R(0)}{R_s(0)} \ll 1$, cannot be accompanied by a significant change of the granule dimensions. Indeed, using the relation (\ref{eq19}), one can estimate the order of the granule size change during the evolution. According to (\ref{eq19}) this change can be written down in the form:
\[\frac{R_s(t)}{R_s(0)}=\sqrt[3]{1-\frac{R(0)^3}{R_s(0)^3}+\frac{R(t)^3}{R_s(0)^3}} \simeq 1-\frac{1}{3}\frac{R(0)^3}{R_s(0)^3} \]
Hence, within the small-pore approximation, the granule size does not change, $R_s(t) \approx R_s(0)=R_{0s}$, up to the cubic order of smallness $\frac{R(0)^3}{R_s(0)^3}$. Then, neglecting the granule radius change, the equation system (\ref{eq21}) takes on a simpler form:
\begin{equation}\label{eq26}
\begin{cases}
\frac{d L}{d \tau}=\frac{3\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r} \cdot \left[\frac{\alpha^2}{r^2} \cdot (\widetilde \Phi _1 + \widetilde \Phi _2 ) - \frac{\alpha}{r} \cdot \sqrt {1 + \frac{\alpha^2}{r^2}} \cdot (\Phi _1 + \Phi _2 )\right]-\\
-\frac{6\exp\left(-\frac{A}{r_{s0}}\right)}{r}\cdot \left[\frac{\alpha^2}{r^2} \cdot \widetilde \Phi _2 - \frac{\alpha}{r} \cdot \sqrt {1 + \frac{\alpha^2}{r^2}} \cdot \Phi _2 \right],\\
\frac{d r}{d \tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r} \cdot \left[\frac{1}{2} + \frac{\alpha}{r} \cdot (\Phi _1 + \Phi _2 )\right]+\frac{2\exp\left(-\frac{A}{r_{s0}}\right)}{r}\cdot \frac{\alpha}{r} \cdot \Phi _2,\\
r|_{\tau =0}=1,\\
r_s|_{\tau =0}=r_{s0}, \\
L|_{\tau =0}=\frac{l(0)}{R(0)}.
\end{cases}
\end{equation}
Let us now consider the asymptotic case (\ref{eq23}). By virtue of $L \gg r$, the expression for the parameter $\alpha$ in Eqs. (\ref{eq26}) simplifies to
\begin{equation}\label{eq27}
\alpha \approx \frac{r_{s0}^2}{2L}\sqrt{\left(1-\frac{L^2}{r_{s0}^2}\right)^2}= \frac{r_{s0}^2}{{2L}}\left(1-\frac{L^2}{r_{s0}^2}\right),\end{equation}
and the bispherical coordinates $\eta_{1,2}$, defined according to (\ref{eq5}), are, correspondingly, equal to:
\begin{equation}\label{eq28}
\eta _1 = \textrm{arsinh} \left( {\frac{r_s^2}{{2rL}}}\left(1-\frac{L^2}{r_s^2}\right) \right),\;\eta _2 = \textrm{arsinh} \left( \frac{r_s}{2L} \left(1-\frac{L^2}{r_s^2}\right) \right).\end{equation}
Since $\frac{\sinh\eta_1}{\sinh\eta_2}=\frac{r_s}{r} \gg 1$, we have $\eta_1 \gg \eta_2$. In this case, the series sums can be estimated via the following expressions:
\begin{equation}\label{eq29}
\begin{split}
\Phi_1 \approx \frac{1}{2\sinh2\eta_1},\quad \Phi_2 \approx \frac{1}{\sinh2\eta_1},\quad \widetilde{\Phi}_1 \approx \frac{1+2\sinh^2\eta_1}{8\sinh^2\eta_1\cosh^2\eta_1},\quad \widetilde{\Phi}_2 \approx \frac{\cosh\eta_1}{2\sinh^2\eta_1}, \\
\sinh\eta_1=\frac{\alpha}{r},\quad \cosh\eta_1=\sqrt{1+\frac{\alpha^2}{r^2}}.
\end{split}
\end{equation}
By substituting expressions (\ref{eq27})-(\ref{eq29}) into equation set (\ref{eq26}), we find simplified equation set:
\begin{equation}\label{eq30}
\frac{d L}{d \tau}=-\frac{3}{2}\,\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)\cdot\frac{r \left(\frac{L^2}{r_{s0}^2}\right)}{r_{s0}^2} ,\end{equation}
\begin{equation}\label{eq31}
\frac{d r}{d \tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r} \cdot \left[1 + \frac{r}{2L}\cdot\left(\frac{L^2}{r_{s0}^2}\right) \right]+\frac{\exp\left(-\frac{A}{r_{s0}}\right)}{r}
\end{equation}
Equations (\ref{eq30})-(\ref{eq31}) are written down up to terms of order $L^2/r_{s0}^2$. This nonlinear set shows that, at high gas concentration, the pore size increases monotonously while the pore moves towards the granule center. Besides, the smallness of the right part of (\ref{eq30}) means that the pore displacement towards the granule center during the characteristic time of establishing the stationary pore radius is small.
It is interesting to compare the pore behaviour in this asymptotic mode with the solutions of the complete equation set (\ref{eq26}). In Fig. \ref{fg3} the numerical solutions of the exact (\ref{eq26}) and approximate (\ref{eq30})-(\ref{eq31}) equation sets are shown for the same initial conditions $r|_{\tau =0}=1$, $r_s|_{\tau =0}=100$, $L|_{\tau =0}=10$, $A=10^{-1}$ and different values of the parameter $B$, connected with the gas amount $N_g$: $B=0.25$ $(N_{g}=1.05\cdot 10^{5})$, $B=0.10101$ $(N_{g}=3.17 \cdot 10^{4})$, $B=0.025$ $(N_{g}=1.05 \cdot 10^{4})$. The upper left part of Fig. \ref{fg3} demonstrates good agreement between the approximate solution and the solution of the complete equation set for the pore radius time dependence at various gas concentrations. In Fig. \ref{fg3} the plots are also shown for the time change of the center-to-center distance between the pore and the granule. These plots demonstrate that, at "high" gas concentrations, the approximate solution underestimates the pore displacement speed most strongly as compared to the other modes.
Such good agreement allows us to consider the pore radius change at zeroth order in $L^2/r_{s0}^2 \ll 1$. In this case we obtain a simple equation for the radius of an immobile pore:
\begin{equation}\label{eq32}
\frac{d r}{d \tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r}+\frac{\exp\left(-\frac{A}{r_{s0}}\right)}{r}\end{equation}
It can be seen that the sign of the right part of Eq. (\ref{eq32}) depends on the gas parameter $B$. One can easily obtain the general solution of Eq. (\ref{eq32}) in integral form
\begin{equation}\label{eq33} \tau+\textrm{const}= e^{A/r_{s0}}\int\frac{rdr}{1-e^{A/r-B/r^3+A/r_{s0}}} \end{equation}
\begin{figure}
\centering
\includegraphics[ height=5.3 cm]{pr8.eps}
\includegraphics[ height=5.3 cm]{pr10.eps}
\includegraphics[ height=5.3 cm]{pr9.eps}\\
\caption{Solid lines: numerical solution of equation set (\ref{eq26}); dash-and-dot lines: analytical solution (\ref{eq34}) of Eq. (\ref{eq32}) for initial conditions $r|_{\tau =0}=1$, $r_s|_{\tau =0}=100$, $L|_{\tau =0}=10$, $A=10^{-1}$ and different values of the gas parameter $B$.}
\label{fg4}
\end{figure}
Here $\textrm{const}$ is determined by initial conditions.
Let us consider the following evolution stage of "small" ($r \ll r_{s0}$) gas-filled pores, where $A \ll r$ and $B \ll r^3$:
\begin{equation}\label{eq34}
\begin{split}
\tau \approx -\frac{\exp\left(\frac{A}{r_{s0}}\right)}{A}\Bigg(\frac{r(\tau)^3}{3}-\frac{r(0)^3}{3}+\left(\frac{B}{A}\right)\cdot \left(r(\tau)-r(0)\right)- \\
-\frac{1}{2}\left(\frac{B}{A}\right)^{3/2}\cdot \ln \frac{\left(\sqrt{B/A}+r(\tau)\right)\left(\sqrt{B/A}-r(0)\right)}{\left(\sqrt{B/A}-r(\tau)\right)\left(\sqrt{B/A}+r(0)\right)} \Bigg)
\end{split}
\end{equation}
In the absence of gas inside the pore, $B=0$, this dependence coincides with that obtained in \cite{21s}, where it was shown that the vacancy pore healing time is proportional to the third power of the initial pore radius $r(0)$ and to the material temperature $T$, since $A \sim 1/T$ (without taking into account the temperature dependence of the diffusion coefficient). In the presence of the gas, $B\neq 0$, one can consider the following modes (illustrated by the short sketch after the list):
\begin{enumerate}
\item gas density is "large" $B \gg A$;
\item "equilibrium" gas concentration $B \cong A$, at which pore radius practically does not change;
\item "low" gas density $B\ll A$.
\end{enumerate}
In Fig. \ref{fg4} the dash-and-dot lines show the analytical solutions (\ref{eq34}) at different values of the parameter $B$ corresponding to the modes described above. The solid lines in Fig. \ref{fg4} show the numerical solution of equation set (\ref{eq26}). We observe a good agreement between the analytical and numerical solutions.
Let us now turn to the case (\ref{eq24}) of a small pore situated close to the granule center:
\begin{equation}\label{eq35}
R/R_s \ll 1,\quad l/R_s \ll 1,\quad R \gg l. \end{equation}
These inequalities comply with the geometrical condition $R/R_s+l/R_s\leq 1$. Taking into account (\ref{eq35}), it is easy to find the expressions for the parameter $\alpha$ and the bispherical coordinates $\eta_{1,2}$:
\begin{equation}\label{eq36} \alpha \approx \frac{r_s^2}{2L}\left(1- \frac{r^2}{r_{s0}^2}\right),\; \eta _1 \approx \textrm{arsinh} \left( {\frac{r_{s0}^2}{{2rL}}}\left(1-\frac{r^2}{r_{s0}^2}\right) \right),\;\eta _2 \approx \textrm{arsinh} \left( \frac{r_s}{2L} \left(1-\frac{r^2}{r_{s0}^2}\right) \right)
\end{equation}
It can be seen that, for small pores, the relation $\eta_1 \gg \eta_2$ is valid. Using the estimates of the series sums by formulas (\ref{eq29}), we approximate the pore evolution equations for this case.
\begin{figure}
\centering
\includegraphics[ height=7 cm]{pr11.eps}
\includegraphics[ height=7 cm]{pr12.eps}\\
\includegraphics[ height=7 cm]{pr14.eps}
\includegraphics[ height=7 cm]{pr13.eps}\\
\caption{On the upper left, the dependence is shown of pore radius $r$ on time $\tau$ for different values of gas parameter $B$: $B=0.25$, $B=0.10101$ and $B=0.025$. The solid line corresponds to the numerical solution of the complete equation set (\ref{eq26}), the dash-and-dot line to the numerical solution of the approximate equations (\ref{eq37})-(\ref{eq38}); on the upper right, the dependence is shown of distance $L$ on time $\tau$ for parameter $B=0.25$; below on the left, for parameter $B=0.10101$; below on the right, for parameter $B=0.025$. All solutions are obtained at initial conditions $r|_{t=0}=1$, $r_s|_{t=0}=100$, $L|_{t=0}=0.1$ and $A=10^{-1}$.}
\label{fg5}
\end{figure}
\begin{equation}\label{eq37} \frac{d L}{d \tau}=-\frac{3}{2}\cdot \exp\left(\frac{A}{r}-\frac{B}{r^3}\right)\cdot \frac{r\left(\frac{L}{r_{s0}}\right)^2}{r_{s0}^2 \left(1-\frac{r^2}{r_{s0}^2}\right)^2} \end{equation}
\begin{equation}\label{eq38} \frac{d r}{d \tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r}\cdot \left[1+\frac{1}{2}\cdot \frac{rL}{r_{s0}^2\left(1-\frac{r^2}{r_{s0}^2}\right)}\right]+\frac{\exp\left(-\frac{A}{r_{s0}}\right)}{r} \end{equation}
In Fig. \ref{fg5} the numerical solutions of Eqs. (\ref{eq26}) (solid line) and Eqs. (\ref{eq37})-(\ref{eq38}) (dashed line) are shown for initial conditions satisfying inequalities (\ref{eq35}): $r|_{\tau=0}=1$, $r_s|_{\tau=0}=100$, $L|_{\tau=0}=0.1$ and $A=10^{-1}$, for different variants of the gas concentration: 1) $B=0.25$ $(N_{g}=1.05\cdot 10^{5})$, 2) $B=0.10101$ $(N_{g}=3.17 \cdot 10^{4})$ and 3) $B=0.025$ $(N_{g}=1.05 \cdot 10^{4})$. The upper left part of Fig. \ref{fg5} demonstrates a good agreement between the numerical solutions of Eqs. (\ref{eq26}) and (\ref{eq37})-(\ref{eq38}) for the pore radius change. The upper right part of Fig. \ref{fg5} shows the time change of the center-to-center distance between the pore and the granule for equation set (\ref{eq26}) and Eqs. (\ref{eq37})-(\ref{eq38}) at $B=0.25$; the lower left and lower right parts show the same at $B=0.10101$ and $B=0.025$, correspondingly. Similarly to the previous case, the pore is almost immobile: $L(t) \approx L(0)$. Therefore, we can confine ourselves to the zeroth approximation for the pore evolution analysis; in this case, Eq. (\ref{eq32}) is recovered. The analytical solution of this equation agrees well with the numerical solution of equation set (\ref{eq26}) for initial conditions $r|_{\tau=0}=1$, $r_s|_{\tau=0}=100$, $L|_{\tau=0}=0.1$, $A=10^{-1}$ at various parameters $B$: $B=0.25$, $B=0.10101$ and $B=0.025$. These solutions are shown in Fig. \ref{fg6}.
\begin{figure}
\centering
\includegraphics[ height=5.3 cm]{pr15.eps}
\includegraphics[ height=5.3 cm]{pr17.eps}
\includegraphics[ height=5.3 cm]{pr16.eps}\\
\caption{Solid lines: numerical solution of equation set (\ref{eq26}); dash-and-dot lines: analytical solution of Eq. (\ref{eq38}) for initial conditions $r|_{\tau =0}=1$, $r_s|_{\tau =0}=100$, $L|_{\tau =0}=0.1$, $A=10^{-1}$ and different values of the gas parameter $B$.}
\label{fg6}
\end{figure}
Let us, finally, turn to the discussion of the mode (\ref{eq25}), when the pore is situated close to the granule boundary. In this case, the ratio $l/R_s$ is close to unity:
\[\frac{l}{R_s}=1-\varepsilon,\]
where $\varepsilon$ is the small parameter in which the asymptotic expansion is carried out.
\begin{figure}
\centering
\includegraphics[ height=7 cm]{pr18.eps}
\includegraphics[ height=7 cm]{pr19.eps}\\
\includegraphics[ height=7 cm]{pr20.eps}
\includegraphics[ height=7 cm]{pr21.eps}\\
\caption{On the upper left, dependencies of pore radius $r$ on time $\tau$ are given for different values of gas parameter $B$: $B=0.25$, $B=0.10101$ and $B=0.025$. The solid line corresponds to the numerical solution of the complete equation set (\ref{eq26}), the dash-and-dot line to the numerical solution of the approximate equations (\ref{eq37a})-(\ref{eq38a}); on the upper right, the dependence is shown of distance $L$ on time $\tau$ for parameter $B=0.25$; below on the left, for parameter $B=0.10101$; below on the right, for parameter $B=0.025$. All solutions have been obtained at initial conditions $r|_{t=0}=1$, $r_s|_{t=0}=100$, $L|_{t=0}=90$ and $A=10^{-1}$.}
\label{fg7}
\end{figure}
The value of the parameter $\varepsilon$ is restricted by the geometrical inequality (the pore is inside the granule)
\[\frac{R}{R_s} \leq \varepsilon .\]
In the asymptotic expansion we will take into account terms up to the order of $\varepsilon^2$. With account of this remark, the parameter $\alpha$ and, correspondingly, the bispherical coordinates $\eta_{1,2}$, obtained within the small pore approximation $R\ll R_s$ and $R \ll l$, take on the form:
\begin{equation}\label{eq39}
\alpha \approx \frac{r_s^2}{2L}\varepsilon(2-\varepsilon), \; \eta_1 \approx \ln \left(\frac{r_s}{r}(2\varepsilon+\varepsilon^2)\right),\; \eta_2 \approx \varepsilon + \frac{\varepsilon^2}{2}. \end{equation}
It can be seen that $\eta_1 \gg \eta_2$; therefore we can use the previous estimates for the sums of the series given by formulas (\ref{eq29}). Substituting (\ref{eq29}) and (\ref{eq39}) into the right part of Eq. (\ref{eq26}), we obtain the pore evolution equations within the approximation (\ref{eq25}):
\begin{equation}\label{eq37a} \frac{d L}{d \tau}=-\frac{3}{8}\cdot \exp\left(\frac{A}{r}-\frac{B}{r^3}\right)\cdot \frac{r\left(\frac{L}{r_{s0}}\right)^2}{r_{s0}^2 \left(1-\frac{L}{r_{s0}}\right)^2} \end{equation}
\begin{equation}\label{eq38a} \frac{d r}{d \tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3}\right)}{r}\cdot \left[1+\frac{1}{2}\cdot \frac{rL}{r_{s0}^2\left(1-\frac{L^2}{r_{s0}^2}\right)}\right]+\frac{\exp\left(-\frac{A}{r_{s0}}\right)}{r} \end{equation}
In Fig. \ref{fg7}, the numerical solutions are shown both of the exact equation set (\ref{eq26}) and of the approximate one (\ref{eq37a})-(\ref{eq38a}) with the same initial conditions $r|_{\tau =0}=1$, $r_s|_{\tau =0}=100$, $L|_{\tau =0}=90$ and $A=10^{-1}$. The upper left part of Fig. \ref{fg7} demonstrates very good agreement of the time dependences of the pore radius. In Fig. \ref{fg7} the plots are also shown for the time dependence of the center-to-center distance between the pore and the granule. It can be seen from the figure that the displacement of the pore towards the granule center obtained from the exact equation set (\ref{eq26}) exceeds that observed for the approximate equation set (\ref{eq37a})-(\ref{eq38a}).
\subsection{Large pores}
Let us now proceed to discussing the evolution of large pores. Let us begin by noting that the asymptotic mode
\begin{equation}\label{eq40}
R/R_s \cong 1,\quad l/R_s \cong 1,\quad R/l \cong 1. \end{equation}
is not, in fact, realized. Indeed, let us take into account the closeness of the first two relations to unity:
\begin{equation}\label{eq41}
\frac{R}{R_s}=1-\varepsilon_1,\quad \frac{l}{R_s}=1-\varepsilon_2, \end{equation}
where $\varepsilon_1 \ll 1$ and $\varepsilon_2 \ll 1$ are small parameters. Substituting (\ref{eq41}) into geometrical condition (\ref{eq22}), we find $1 \leq \varepsilon_1+\varepsilon_2$. Since $\varepsilon_{1,2}$ are small parameters, this inequality does not hold. Thus, mode (\ref{eq40}) is not compatible with geometrical condition (\ref{eq22}).
Let us consider the valid regime of large pore evolution when relations between values $R$, $R_s$, $l$ are the following:
\begin{equation}\label{eq42}
4) \quad R/R_s \cong 1,\quad l/R_s \ll 1,\quad R \gg l. \end{equation}
Let us write down the first relation as $R/R_s=1-\epsilon$, where $\epsilon$ is a small parameter of the asymptotic expansion. With account of the conservation law for the volume of the granule material, we find from Eq. (\ref{eq19}) the granule radius change
\[R_s(t)=\left(R_s(0)^3-R(0)^3+R(t)^3 \right)^{1/3} \]
or, in dimensionless units,
\begin{equation}\label{eq43} r_s(t)=\left(r_s(0)^3-r(0)^3+r(t)^3 \right)^{1/3} \end{equation}
Using this relation, we can describe the large pore evolution by the dimensionless equations (\ref{eq21}). The numerical solution of this equation set
\begin{figure}
\centering
\includegraphics[ height=7 cm]{pr22.eps}
\includegraphics[ height=7 cm]{pr23.eps}\\
\caption{Dependencies are shown, for the case of a large pore, of radius $r$ on time $\tau$ (on the left) and of distance $L$ on time $\tau$ (on the right) for different values of gas parameter $B$: $B=0.25$, $B=0.1668$ and $B=0.025$. All plots correspond to numerical solutions of equation set (\ref{eq21}) for initial conditions $r|_{t=0}=1$, $r_s|_{t=0}=1.5$, $L|_{t=0}=0.15$ and $A=10^{-1}$.}
\label{fg8}
\end{figure}
is shown in Fig. \ref{fg8} for initial conditions $r|_{\tau =0}=1$, $r_s|_{\tau =0}=1.5$, $L|_{\tau=0}=0.15$, $A=10^{-1}$ at different values of the gas parameter: $B=0.25$, $B=0.1668$ and $B=0.025$. The values of the gas parameter $B$ are chosen so as to demonstrate the three characteristic evolution modes of a large pore. The first mode corresponds to the case of "high" gas density inside the pore ($B=0.25$, or $N_{g}=1.05\cdot 10^{5}$). It can be seen from the plot at $B=0.25$ that the large pore evolution is accompanied by an increase of the pore radius up to some stationary value $r_{cr}^{h}$, while the pore shifts relative to the granule center to some critical distance $L_{cr}^{h}$. The second evolution mode is the case of the pore "at rest": at a definite value of the gas parameter, $B=0.1668$, the radius and the position of the pore do not change. Finally, the third evolution mode corresponds to "small" gas pressure ($B=0.025$, or $N_{g}=1.05 \cdot 10^{4}$), at which the pore radius decreases down to some stationary value $r_{cr}^{l}$, accompanied by the pore shifting towards the granule center to some critical distance $L_{cr}^{l}$ (see Fig. \ref{fg8}).
Let us consider the asymptotic mode (\ref{eq42}) for a large pore, confining ourselves, in the relations (\ref{eq43})-(\ref{eq48}) below, to second-order terms in $\epsilon$; the conservation law then gives
\begin{equation}\label{eq44} \epsilon(1-\epsilon)=\frac{V}{3r_s^3}, \end{equation}
where $V=r_s(0)^3-r(0)^3$ is the initial volume of the granule material.
Substituting the value $\epsilon=1-r/r_s$ into (\ref{eq44}), we obtain a quadratic equation for the granule radius $r_s$, whose solution, to the adopted accuracy, has the form:
\begin{equation}\label{eq45} r_s=r\left(1+\frac{V}{3r^3}\right) \end{equation}
Thus, within the asymptotic approximation (\ref{eq42}), the connection between the pore and granule radii is obtained. Let us now proceed to the calculation of the parameter $\alpha$, taking into account the condition $r \gg L$:
\begin{equation}\label{eq46} \alpha\approx\frac{r_s^2}{2L}\left(1+(1-\epsilon)^4-2(1-\epsilon)^2\right)^{1/2}=\frac{r_s^2\epsilon}{L} \end{equation}
Hence, according to the definition (\ref{eq5}) one finds bispherical coordinates $\eta_{1,2}$:
\begin{equation}\label{eq47} \quad \eta _1 = \textrm{arsinh}\left(\frac{r_s^2\epsilon}{rL}\right),\quad
\eta _2 = \textrm{arsinh}\left(\frac{r_s}{L}\epsilon\right) \end{equation}
Because of the geometrical conditions, the inequality $\epsilon r_s/L\geq 1$ is valid. Thus, bispherical coordinates $\eta_{1,2}$ can be approximated for the case $\epsilon r_s/L \gg 1$ in the following form:
\begin{equation}\label{eq48}
\eta _1 \approx \ln \left(\frac{2r_s^2\epsilon}{rL}\right),\quad \eta _2 \approx \ln \left(\frac{2r_s\epsilon}{L}\right) \end{equation}
\begin{figure}
\centering
\includegraphics[ height=7 cm]{pr24.eps}
\includegraphics[ height=7 cm]{pr25.eps}\\
\caption{On the left, the solid line shows the dependence of the radius $r$ of a large pore on time $\tau$, obtained by numerical solution of equation set (\ref{eq21}); the dash-and-dot line relates to the numerical solution of Eqs. (\ref{eq50})-(\ref{eq52}). On the right, the dependence of distance $L$ on time $\tau$ is shown; the solid line corresponds to the numerical solution of equation set (\ref{eq21}), while the dash-and-dot line relates to the solution of Eqs. (\ref{eq50})-(\ref{eq52}). All solutions are obtained for initial conditions $r|_{t=0}=1$, $r_s|_{t=0}=1.5$, $L|_{t=0}=0.15$, $A=10^{-1}$ and $B=0.025$.}
\label{fg9}
\end{figure}
Then, we find the difference $\eta_1-\eta_2 =\ln \left(\frac{r_s}{r}\right) \approx \ln(1+\epsilon)\approx \epsilon$ and, correspondingly, make an estimate of series sums:
\[ \Phi_1 =\frac{1}{2\sinh(\eta_1+\epsilon)} \approx \frac{1}{2(\sinh\eta_1+\epsilon\cosh\eta_1) }\approx\frac{r}{2\alpha(1+\epsilon)}=\frac{r}{2\alpha}\left(1-\epsilon+\epsilon^2+\cdots\right),\]
\begin{equation}\label{eq49}
\Phi_2 =\frac{1}{2\sinh(\eta_2+\epsilon)} \approx \frac{1}{2(\sinh\eta_2+\epsilon\cosh\eta_2) }\approx \frac{r_s}{2\alpha(1+\epsilon)}=\frac{r_s}{2\alpha}\left(1-\epsilon+\epsilon^2+\cdots\right),
\end{equation}
\[ \widetilde{\Phi}_1 = \frac{\cosh(\eta_1+\epsilon)}{2\sinh^2(\eta_1+\epsilon)} \approx \frac{\cosh\eta_1+\epsilon\sinh\eta_1 }{2(\sinh\eta_1+\epsilon\cosh\eta_1)^2 } \approx \frac{r}{2\alpha(1+\epsilon)}= \frac{r}{2\alpha}\left(1-\epsilon+\epsilon^2+\cdots\right),\]
\[ \widetilde{\Phi}_2 =\frac{\cosh(\eta_2+\epsilon)}{2\sinh^2(\eta_2+\epsilon)} \approx \frac{\cosh\eta_2+\epsilon\sinh\eta_2 }{2(\sinh\eta_2+\epsilon\cosh\eta_2)^2 } \approx \frac{r_s}{2\alpha(1+\epsilon)}=\frac{r_s}{2\alpha}\left(1-\epsilon+\epsilon^2+\cdots\right). \]
Substituting relations (\ref{eq45}), (\ref{eq46}) and (\ref{eq48}) into the equation set (\ref{eq21}), we obtain evolution equations for a large pore with accuracy up to second-order terms $\epsilon^2$:
\begin{equation}\label{eq50} \frac{dL}{d\tau}=O(\epsilon^3) \end{equation}
\begin{equation}\label{eq51} \frac{dr}{d\tau}=-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3} \right)}{r}\cdot\left[\frac{1}{2}+\frac{1}{2}\cdot\left(\frac{2-\epsilon}{1-\epsilon}\right)(1-\epsilon+\epsilon^2)\right]+\frac{\exp\left(-\frac{A}{r\left(1+\frac{V}{3r^3}\right)}\right)}{r}\cdot\frac{1-\epsilon+\epsilon^2}{1-\epsilon} \end{equation}
It follows from Eq. (\ref{eq50}) that, within the considered asymptotic approximation, the change of the distance $L(\tau)$ is quite small: $L(\tau) \approx L(0)$. Eq. (\ref{eq51}) does not depend on $L(\tau)$; thus, substituting into it the value $\epsilon=1-r/r_s$, we find the pore radius evolution equation:
\begin{equation}\label{eq52}
\begin{aligned}
\frac{dr}{d\tau}=&-\frac{\exp\left(\frac{A}{r}-\frac{B}{r^3} \right)}{r}\cdot\left[1+\frac{1}{2}\cdot\frac{1}{1+\frac{V}{3r^3}} +\left(\frac{\frac{V}{3r^3}}{1+\frac{V}{3r^3}}\right)^2 \right]+\\
&+\frac{\exp\left(-\frac{A}{r\left(1+\frac{V}{3r^3}\right)}\right)}{r}\cdot\left(1+\left(\frac{\frac{V}{3r^3}}{1+\frac{V}{3r^3}}\right)^2\right)
\end{aligned}
\end{equation}
In Fig. \ref{fg9}, the dash-and-dot line indicates the numerical solution of the approximate equations (\ref{eq50})-(\ref{eq52}) with initial conditions $r|_{\tau =0}=1$, $r_s|_{\tau =0}=1.5$, $L|_{\tau=0}=0.15$, $A=10^{-1}$ and gas parameter $B=0.025$. It can be seen from Fig. \ref{fg9} that the numerical solutions of the exact (\ref{eq21}) and approximate (\ref{eq50})-(\ref{eq52}) equations agree well with each other at "low" gas pressure inside the pore. Here, the deviation $\Delta L(\tau)$ stays within the accuracy order: $\Delta L(\tau) \ll \epsilon^2$.
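As an illustration (not part of the original numerical scheme), Eq.~(\ref{eq52}) can be integrated with any standard ODE solver. A minimal Python sketch is given below; the solver and the integration interval are our own choices, while the parameter values repeat the initial conditions quoted above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

A, B = 1e-1, 0.025          # surface and gas parameters from the text
r0, rs0 = 1.0, 1.5          # initial pore and granule radii
V = rs0**3 - r0**3          # material volume, V = r_s(0)^3 - r(0)^3

def drdtau(tau, y):
    """Right-hand side of the pore-radius equation (52)."""
    r = y[0]
    u = V / (3.0 * r**3)
    out = (-np.exp(A / r - B / r**3) / r
           * (1.0 + 0.5 / (1.0 + u) + (u / (1.0 + u))**2)
           + np.exp(-A / (r * (1.0 + u))) / r
           * (1.0 + (u / (1.0 + u))**2))
    return [out]

sol = solve_ivp(drdtau, (0.0, 5.0), [r0], max_step=0.01)
print(sol.y[0, -1])   # r(tau) decreases towards its stationary value
\end{verbatim}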
\subsection{Gas-filled pore in the center of spherical granule}
Here, we will consider the simplest limiting case $(l=0)$, when the gas-filled pore is situated in the center of the spherical granule. The geometrical inequality (\ref{eq22}) turns into the evident one: $R<R_s$. It is convenient to consider this case in a spherical coordinate system. The boundary conditions for the concentration remain the same and are determined by formulas (\ref{eq1}) and (\ref{eq2}), correspondingly. Then the equations determining the vacancy concentration, together with the boundary conditions, take on a simple form with account of the symmetry of the problem:
\begin{equation}\label{eq53}
\Delta_r c = 0, \quad c(r)|_{r=R}=c_R, \quad c(r)|_{r=R_s}=c_{R_s},
\end{equation}
where $\Delta_r= \frac{1}{r^2}\frac{d}{d r}\left(r^2\frac{d}{d r}\right)$ is the radial part of the Laplacian in spherical coordinates.
One can easily find the expression for the vacancy concentration from Eq. (\ref{eq53}):
\begin{equation}\label{eq54} c(r)=-\frac{C_1}{r}+C_2, \end{equation}
where $C_{1,2}$ are arbitrary constants that are determined by the boundary conditions. The vacancy flux $\vec j$ is determined by Fick's first law (\ref{eq11}), while the vacancy fluxes per unit surface of the pore and the granule equal, correspondingly,
\begin{equation}\label{eq55}
\vec n \cdot \vec j|_{r=R}=\frac{D}{\omega}\frac{\partial c}{\partial r}|_{r=R},\quad \vec n \cdot \vec j|_{r=R_s}=\frac{D}{\omega}\frac{\partial c}{\partial r}|_{r=R_s}
\end{equation}
Substituting these expressions into the equations for the change of the volume of the pore and the granule
\[\dot{R}=-\frac{\omega}{4\pi R^2} \oint\vec{n}\vec{j}|_{r=R} dS, \; \dot{R_s}=-\frac{\omega}{4\pi R_s^2}\oint \vec{n}\vec{j}|_{r=R_s} dS \]
we find the equations for the time change of the radii of the pore and the granule:
\begin{equation}\label{eq56}
\left\{
\begin{aligned}
\dot{R}=-\frac{D}{R}\cdot\frac{(c_{R_s}-c_R)R_s}{R_s-R} \\
\dot{R_s}=-\frac{D}{R_s}\cdot\frac{(c_{R_s}-c_R)R}{R_s-R} \\
\end{aligned}
\right.
\end{equation}
It can easily be checked that the following conservation law follows from the evolution equations (\ref{eq56}) for the gas-filled pore in the granule center:
\[R_s(t)^2\dot{R}_s(t)-R(t)^2\dot{R}(t)=0.\]
Thus, the granule radius is connected with the pore radius by the simple relation:
\begin{equation}\label{eq57}
R_s(t) =\sqrt[3]{V+R(t)^3},
\end{equation}
where $V=R_s(0)^3 -R(0)^3$ is the initial volume of the granule material. The critical radius of the pore is determined from the first equation of the set (\ref{eq56}), setting $\dot{R}=0$ and using formulas (\ref{eq1})-(\ref{eq2}) together with the ideal gas equation:
\begin{equation}\label{eq58}
R_{cr}=\frac{1}{2}\sqrt{\frac{3N_g kT}{2\pi\gamma}}\end{equation}
As can be seen from (\ref{eq58}), for a vacancy pore ($N_g=0$) there exists no stationary radius ($R_{cr}=0$), which agrees with the conclusions of the work \cite{21s}. Moreover, we see that the stationary radius of a gas-filled pore grows with an increase of the gas quantity $N_g$ and of the gas temperature $T$. Generally speaking, a stationary radius is retained also in the more complicated case of an arbitrarily situated pore.
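Equation (\ref{eq58}) expresses the mechanical balance between the ideal-gas pressure $3N_g kT/4\pi R^3$ and the capillary pressure $2\gamma/R$. A short numerical check is sketched below; the temperature and surface-tension values are illustrative assumptions, not data from the text.
\begin{verbatim}
import numpy as np

kB    = 1.380649e-23   # Boltzmann constant, J/K
T     = 1.0e3          # assumed temperature, K
gamma = 1.0            # assumed surface tension, J/m^2
Ng    = 1.0e5          # number of gas atoms inside the pore

# Balance 2*gamma/R = 3*Ng*kB*T/(4*pi*R^3) reproduces Eq. (58):
R_cr = 0.5 * np.sqrt(3.0 * Ng * kB * T / (2.0 * np.pi * gamma))
print(R_cr)            # ~1.3e-8 m for these illustrative numbers

# Vacancy pore, Ng = 0: no stationary radius, R_cr = 0
print(0.5 * np.sqrt(3.0 * 0.0 * kB * T / (2.0 * np.pi * gamma)))
\end{verbatim}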
\section{Conclusions}
Let us finally discuss general regularities of the behaviour of a gas-filled pore inside a spherical granule in the hydrodynamical approximation. First of all, in the limiting case of the absence of gas diffusion in the matrix, there exists a stationary pore radius that is ultimately reached by the pore. This stationary radius is determined by the quantity of gas inside the pore as well as by the granule temperature. Thus, depending on the relation between the stationary radius and the initial pore radius, the pore size can either increase or decrease with time. In the particular case when the initial pore radius coincides with the stationary one, the pore size does not change. In the general case, the gas-filled pore shifts towards the granule center if its size is diminishing down to the stationary value, or away from the granule center if its size is increasing up to the stationary value. It should be noted that such a shift is small, since the pore motion stops as soon as the pore radius reaches its stationary value. The particular case of the concentric position of the pore in the granule yields simple equations for the pore and granule size change, which are in good agreement with the more complicated case of an arbitrary position of the pore in the spherical granule.
\section*{Appendix}
\subsection*{1. Auxiliary relations.}
\begin{equation}\label{EQ1}
\int_{-1}^1\frac{P_k(t)dt}{\sqrt{\cosh\eta-t}} = \frac{\sqrt{2}\cdot
e^{-(k+1/2)\eta}}{k+1/2}\,.
\end{equation}
Differentiating relation (\ref{EQ1}) with respect to the parameter $\eta$, we subsequently find
\begin{equation}\label{EQ2}
\int_{-1}^1\frac{P_k(t)dt}{(\cosh\eta-t)^{3/2}} =
\frac{2\sqrt{2}\cdot e^{-(k+1/2)\eta}}{\sinh\eta}\,,
\end{equation}
\begin{equation}\label{EQ3}
\int_{-1}^1\frac{P_k(t)dt}{(\cosh\eta-t)^{5/2}} =
\frac{4\sqrt{2}\cdot
e^{-(k+1/2)\eta}(\cosh\eta+(k+1/2)\sinh\eta)}{3\cdot\sinh^3\eta}\,.
\end{equation}
\subsection*{2. Calculation of pore radius change.}
\begin{equation}\label{EQ4} \dot{R}=-\frac{\omega}{4\pi
R^2}\oint\vec{n}\vec{j}dS\,,
\end{equation}
where
\begin{equation}\label{EQ5}
\vec{n}\vec{j}|_{\eta=\eta_1}=\frac {D}{\omega} \cdot
\frac{\cosh\eta_1-\cos\xi}{a}\frac{\partial
c}{\partial\eta}|_{\eta=\eta_1}\,,
\end{equation}
\begin{equation}\label{EQ6}
dS = \frac{a^2\cdot \sin\xi
d\xi d\varphi}{(\cosh\eta_1-\cos\xi)^2}\,,
\end{equation}
After application of Fubini's theorem, with account of the independence of $\xi$ and $\varphi$, the expression for $\dot{R}$ takes on the form
\begin{equation}\label{EQ7}
\dot{R}=-\frac{a\cdot D}{2\cdot
R^2}\int_0^{\pi}\frac{\partial
c}{\partial\eta}|_{\eta=\eta_1}\frac{\sin\xi d\xi
}{\cosh\eta_1-\cos\xi}\,,
\end{equation}
Substituting
$$\frac{\partial
c}{\partial\eta}|_{\eta=\eta_1} = \sqrt{2}\left(
\frac{c_{R}\cdot\sinh\eta_1}{\sqrt{{\cosh\eta_1-\cos\xi}}}\cdot\sum_{k=0}^{\infty}
P_k(\cos\xi)\exp(-\eta_1(k+1/2))+
\sqrt{{\cosh\eta_1-\cos\xi}}\times\right.$$
$$\left.\times\sum_{k=0}^\infty\frac{(k+1/2)\cdot
P_k(\cos\xi)}{\sinh(k+1/2)(\eta_1-\eta_2)}\left[c_{R}\cdot\cosh(k+1/2)(\eta_1-\eta_2)e^{-\eta_1(k+1/2)}
-c_{R_s}\cdot e^{-\eta_2(k+1/2)}\right] \right)$$
into the expression for the speed of pore radius change, and exchanging integration and summation signs on the strength of convergence of corresponding sums and integrals, after substituting $\cos\xi=t$, we obtain
$$ \dot{R}=-\frac{a\cdot D\sqrt{2}}{2\cdot
R^2}\left[\frac{c_{R}\cdot\sinh\eta_1}{2}\sum_{k=0}^\infty
e^{-\eta_1(k+1/2)}\int_{-1}^1\frac{P_k(t)dt}{(\cosh\eta_1-t)^{3/2}}+\right.$$
$$\left. +\sum_{k=0}^\infty\frac{(k+1/2)}{\sinh(k+1/2)(\eta_1-\eta_2)}\left[c_{R}\cdot\cosh(k+1/2)
(\eta_1-\eta_2)e^{-\eta_1(k+1/2)} -\right.\right. $$
$$\left.\left.- c_{R_s}\cdot
e^{-\eta_2(k+1/2)}\right]\cdot
\int_{-1}^1\frac{P_k(t)dt}{\sqrt{\cosh\eta_1-t}}\right] \,.$$
Using values of integrals (\ref{EQ1}) and (\ref{EQ2}), we can reformulate this expression
$$ \dot{R}=-\frac{a\cdot D\sqrt{2}}{2\cdot
R^2}\left[\frac{c_{R}\cdot\sinh\eta_1}{2}\sum_{k=0}^\infty
e^{-\eta_1(k+1/2)}\cdot \frac{2\sqrt{2}\cdot
e^{-\eta_1(k+1/2)}}{\sinh\eta_1} +\right.$$
$$\left. +\sum_{k=0}^\infty\frac{(k+1/2)}{\sinh(k+1/2)(\eta_1-\eta_2)}\left[c_{R}\cdot\cosh(k+1/2)
(\eta_1-\eta_2)e^{-\eta_1(k+1/2)} -\right.\right. $$
$$\left.\left.- c_{R_s}\cdot
e^{-\eta_2(k+1/2)}\right]\cdot \frac{\sqrt{2}\cdot
e^{-(k+1/2)\eta_1}}{k+1/2} \right]= -\frac{a\cdot D}{R^2}\left[
c_{R}\cdot\sum_{k=0}^\infty e^{-\eta_1(2k+1)} +\right.$$
$$\left. +\sum_{k=0}^\infty \frac{c_{R}\cdot\cosh(k+1/2)
(\eta_1-\eta_2)e^{-\eta_1(k+1/2)} - c_{R_s}\cdot e^{-\eta_2(k+1/2)}}
{\sinh(k+1/2)(\eta_1-\eta_2)}\cdot e^{-\eta_1(k+1/2)}\right] =
-\frac{a\cdot D}{R^2}\times$$
$$\times\left[\frac{c_{R}}{2\cdot\sinh\eta_1} +\sum_{k=0}^\infty
\frac{c_{R}\cdot\cosh(k+1/2) (\eta_1-\eta_2)e^{-\eta_1(k+1/2)} -
c_{R_s}\cdot e^{-\eta_2(k+1/2)}} {\sinh(k+1/2)(\eta_1-\eta_2)}\cdot
e^{-\eta_1(k+1/2)}\right]\,.$$
Substituting $a=R\cdot \sinh\eta_1$ and transforming the summed terms, we ultimately obtain:
\begin{equation}\label{EQ8} \dot{R}=-\frac{D}{R}\left[\frac{c_{R}}{2}
+\sinh\eta_1\cdot\sum_{k=0}^\infty
\frac{c_{R}\cdot(e^{-(2k+1)\eta_1}+e^{-(2k+1)\eta_2}) -2\cdot
c_{R_s}\cdot e^{-(2k+1)\eta_2}}
{e^{(2k+1)(\eta_1-\eta_2)}-1}\right]\,.
\end{equation}
\subsection*{3. Calculation of the speed of pore motion.}
$$ \vec{v}=\vec{e_z}\cdot\frac{3\cdot D\cdot
a}{2\cdot R^2}\int_0^\pi \frac{\partial
c}{\partial\eta}|_{\eta=\eta_1}\frac{\cosh\eta_1\cdot
\cos\xi-1}{(\cosh\eta_1-\cos\xi)^2}\cdot\sin\xi d\xi = $$
$$=\vec{e_z}\cdot\frac{3\cdot D\cdot
a}{2\cdot R^2}\int_0^\pi \frac{\partial
c}{\partial\eta}|_{\eta=\eta_1}\left(-\frac{\cosh\eta_1}{\cosh\eta_1-\cos\xi}+
\frac{\sinh^2\eta_1}{(\cosh\eta_1-\cos\xi)^2}\right) \cdot\sin\xi
d\xi =$$
$$=\vec{e_z}\cdot\frac{3\sqrt{2}\cdot D\cdot
a}{2\cdot R^2}
\int_0^\pi\left(-\frac{\cosh\eta_1}{\cosh\eta_1-\cos\xi}+
\frac{\sinh^2\eta_1}{(\cosh\eta_1-\cos\xi)^2}\right) \cdot\sin\xi
d\xi\times$$
$$\times
\left[ \frac{c_R\cdot\sinh\eta_1}{2\cdot\sqrt{\cosh\eta_1-\cos\xi}}\cdot\sum_{k=0}^\infty P_k(\cos\xi)
e^{-\eta_1(k+1/2)} + \sqrt{\cosh\eta_1-\cos\xi}\times\right.$$
$$\left.\times\left( \sum_{k=0}^\infty\frac{(k+1/2)
P_k(\cos\xi)(c_R\cdot e^{-\eta_1(k+1/2)}\cosh(k+1/2)(\eta_1-\eta_2) -c_{R_s}\cdot
e^{-\eta_2(k+1/2)})}{\sinh(k+1/2)(\eta_1-\eta_2)}
\right)\right]\,.$$
The substitution $t=\cos\xi$ and exchange of summation and integration signs yield the expression
$$\vec{v}=\vec{e_z}\cdot\frac{3\sqrt{2}\cdot D\cdot
a}{2\cdot R^2}
\sum_{k=0}^\infty\left[\int_{-1}^1\frac{P_k(t)dt}{(\cosh\eta_1-t)^{5/2}}\cdot
\frac{c_R\cdot e^{-\eta_1(k+1/2)}\sinh^3\eta_1}{2}
\right.+$$
$$+\int_{-1}^1\frac{P_k(t)dt}{(\cosh\eta_1-t)^{3/2}}\times\left(\frac{-c_R\cdot e^{-\eta_1(k+1/2)}
\cosh\eta_1\sinh\eta_1}{2}\right.+\sinh^2\eta_1\times$$
$$\left.
\times\frac{(k+1/2)
(c_R\cdot e^{-\eta_1(k+1/2)}\cosh(k+1/2)(\eta_1-\eta_2) -c_{R_s}\cdot
e^{-\eta_2(k+1/2)})}{\sinh(k+1/2)(\eta_1-\eta_2)}
\right)+\int_{-1}^1\frac{P_k(t)dt}{\sqrt{\cosh\eta_1-t}}\times$$
$$ \left.\times\left(
-\cosh\eta_1\cdot\frac{(k+1/2)
(c_R\cdot e^{-\eta_1(k+1/2)}\cosh(k+1/2)(\eta_1-\eta_2) -c_{R_s}\cdot
e^{-\eta_2(k+1/2)})}{\sinh(k+1/2)(\eta_1-\eta_2)}
\right)\right]\,.$$
Now let us substitute the values of the corresponding integrals into the obtained expression and transform the result
$$\vec{v}=\vec{e_z}\cdot\frac{3\sqrt{2}\cdot D\cdot
a}{2\cdot R^2} \sum_{k=0}^\infty\left[\frac{4\sqrt{2}\cdot
e^{-(k+1/2)\eta_1}(\cosh\eta_1+(k+1/2)\sinh\eta_1)}{3\cdot\sinh^3\eta_1}\times
\right. $$
$$\times\frac{c_R\cdot e^{-\eta_1(k+1/2)}\sinh^3\eta_1}{2}+\frac{2\sqrt{2}\cdot
e^{-(k+1/2)\eta_1}}{\sinh\eta_1}\times
\left(\frac{-c_R\cdot e^{-\eta_1(k+1/2)}\cosh\eta_1}{2}
+\sinh^2\eta_1\times\right.$$
$$\left. \times\frac{(k+1/2)
(c_R\cdot e^{-\eta_1(k+1/2)}\cosh(k+1/2)(\eta_1-\eta_2) -c_{R_s}\cdot
e^{-\eta_2(k+1/2)})}{\sinh(k+1/2)(\eta_1-\eta_2)}
\right)+\frac{\sqrt{2}\cdot e^{-(k+1/2)\eta_1}}{k+1/2}\times$$
$$\left. \times\left(
-\cosh\eta_1\cdot\frac{(k+1/2)
(c_R \cdot e^{-\eta_1(k+1/2)}\cosh(k+1/2)(\eta_1-\eta_2) -c_{R_s}\cdot
e^{-\eta_2(k+1/2)})}{\sinh(k+1/2)(\eta_1-\eta_2)}
\right)\right]=$$
\begin{equation}\label{eq53s}=\vec{e_z}\cdot\frac{3 D
a}{ R^2}
\sum_{k=0}^\infty\frac{((2k+1)\sinh\eta_1-\cosh\eta_1)\left(c_R\cdot(e^{-(2k+1)\eta_1}+e^{-(2k+1)\eta_2})-2c_{R_s}\cdot
e^{-(2k+1)\eta_2}\right)}{e^{(2k+1)(\eta_1-\eta_2)}-1}\,.
\end{equation}
The theory of open quantum dynamics
\cite{Leggett87,Slichter90,Walls94} is an important and active area
of physics with applications in nanotechnology, chemical physics,
quantum biology, and quantum information. In open quantum theories,
the system under consideration is assumed to interact with an
environment that has many degrees of freedom. Because details of the
environmental Hamiltonian are usually unknown, measurable quantities
such as temperature $T$ or noise spectrum $S(\omega)$ are used to
describe the average statistical behavior of the environment. An
open quantum model, therefore, provides a set of differential
equations that describe the statistical dynamics of the quantum
system, taking the temperature and the spectrum of the bath as input
parameters.
Increased research and development of technology in quantum
computing is renewing interest in open quantum modeling. One such
promising computation scheme is quantum annealing (QA)
\cite{Kadowaki98,Santoro02,Johnson11} (in particular, adiabatic
quantum computation \cite{Farhi01}). In QA, the system is evolved
slowly so that it stays at or near the ground state throughout the
evolution. At the end of the evolution, the system will occupy a
low-energy state of the final Hamiltonian, which may represent a
solution to an optimization or a sampling problem.
Open quantum dynamics of a QA processor have been studied
theoretically \cite{Sahel06,AminLove,Albash12,Albash15}. These
models assume weak coupling to an environment, which is typically
taken to have Ohmic spectrum with large high-frequency content. This
limit is well described by the Bloch-Redfield theory
\cite{Leggett87,Slichter90,Walls94,AminLove,ES81}. Realistic qubits
\cite{Harris10}, however, suffer from strong interaction with
low-frequency noise (in particular, noise with 1/f-like spectrum).
Incoherent dynamics of a qubit coupled to such an environment are
described by the Marcus theory \cite{Marcus93,Cherepanov01,LNT17}. A
complete ({\em hybrid}) open quantum model should account for both
low-frequency and high-frequency environments. Such a model for a
single qubit has been developed and agreement with experiment has
been demonstrated \cite{Amin08,AminBrito09,Lanting11}. A
generalization of this theory to multiqubit systems has also been
developed and compared with experimental observation
\cite{Boixo16,Boixo14}. Several attempts to combine the
Bloch-Redfield and Marcus methods have been undertaken in chemical
physics and quantum biology (see, for example,
Refs.\cite{Yang02,Ghosh11,Lambert13}).
In this paper, we expand the work of Refs.~\cite{Boixo16,Boixo14}.
We provide a systematic and detailed derivation of a hybrid open
quantum model, which agrees with the results of
Ref.~\cite{Boixo16,Boixo14} for problems with large spectral gaps.
Our theory, however, can also be applied to small-gap problems with
nonstationary Hamiltonians, for which the model in
\cite{Boixo16,Boixo14} is not applicable. We provide an intuitively
appealing and computationally convenient form for the transition
rates in terms of a convolution between Redfield and Marcus
formulas. As an example, we investigate a dissipative evolution of a
16-qubit system strongly interacting with low-frequency noise and
weakly coupled to a high-frequency environment. The problem is
characterized by an extremely small gap in the energy spectrum of
qubits in the middle of annealing. Solving the problem requires the
right combination of Bloch-Redfield and Marcus approaches as well as
a proper consideration of the calculation basis, which takes into
account the nonstationary effects. The results of the present paper
can also be applied to any other open quantum system.
The paper is organized in the following way. Section~\ref{SingleQ}
describes a single-qubit system to provide the necessary intuition
before moving to more complicated multi-qubit problems.
Section~\ref{HamSec} formulates the Hamiltonian and introduces
important definitions and notations for the system and the bath.
Master equations for the probability distribution of the quantum
system are derived in Section~\ref{IntPic}. Section~\ref{BRM}
presents the relaxation rates as convolution integrals of the
Bloch-Redfield and Marcus envelopes. In Section~\ref{Limits} we
show that in equilibrium the master equations obey the detailed
balance conditions. We also demonstrate that the convolution
expression for the relaxation rates turns into the Marcus or to
Bloch-Redfield formulas in the corresponding limits. Dissipative
dynamics of a 16-qubit system with an extremely small energy gap is
considered in Section~\ref{Dickson16}. A brief compilation of the
commonly encountered notations is presented in Appendix
\ref{ApxNot}. In other Appendixes we provide a detailed derivation
of many important formulas.
\section{Single-qubit system}
\label{SingleQ}
We begin with a single-qubit system that has a Hamiltonian
\ba \label{HamTLS} H_S = - \frac{\Delta}{2} \, \sigma_x -
\frac{h}{2}\, \sigma_z, \ea
with the Pauli matrices $\sigma_x, \sigma_y, \sigma_z$, tunneling
amplitude $\Delta$, and bias $h$. The ground state, $\ket{1}$, and
the excited state, $\ket{2}$, of the Hamiltonian (\ref{HamTLS}) have
energies $E_{1,2} = \mp \frac{\Omega_0}{2}$, with the energy
splitting $\Omega_0 = \sqrt{\Delta^2 + h^2}.$ We assume that the
system-bath interaction is determined by the Hamiltonian
\ba H_{\rm int} = - Q\, \sigma_z, \ea
where $Q$ is a quantum-mechanical operator of the bath. The bath
itself has a Hamiltonian $H_B$, so that the total Hamiltonian $H$ of
the problem is a sum of three terms:
\ba \label{H1} H = H_S + H_{\rm int} + H_B. \ea
We assume that the free bath (with no coupling to the system) has a
Gaussian statistics \cite{ES81} determined by a spectrum of
fluctuations: \be
S(\omega) = \int dt e^{i\omega t}\langle Q(t) Q(0) \rangle ,
\ee where $Q(t) = e^{iH_Bt} Q e^{-iH_Bt}$. Frequently, the Gaussian
bath is represented as a collection of harmonic oscillators
\cite{Leggett87}. Within our general formalism we do not have to
resort to any specific representation of the bath.
In many realistic situations (see, for example,
Refs.~\cite{Lanting11}), the noise may come from different sources,
some dominating at low frequencies (such as 1/f noise) and others
dominating at high frequencies. As such, we consider $S(\omega)$ to
be a sum of two terms:
\ba \label{SpLH} S(\omega)~=~S_L(\omega)~+~S_H(\omega), \ea
where $S_L(\omega)$ and $S_H(\omega)$ are functions that are peaked
at low and high frequencies, respectively. Each function may tail
into the other function's region. Hereafter we refer to the noise
with the spectrum (\ref{SpLH}) as the \emph{hybrid} noise. The
formula for the high-frequency spectrum $S_H$ is given in Sec. III E. For
the explicit expression for $S_L(\omega)$ we refer to Eq.~(B3) shown
in the Supplementary Information Section of Ref.~\cite{Johnson11},
although there is no need for these formulas here. Notice also that in the present paper we operate with the experimentally measured parameters of the low-frequency bath, such as the noise intensity $W^2$ and the reorganization energy $\varepsilon_L$, which are defined below.
The relaxation dynamics of the qubit become simple in two
situations. First, when the qubit is weakly coupled to only a
high-frequency (HF) bath and the energy splitting of the qubit is
larger than the broadening of the qubit's energy levels. In this case,
the relaxation is described by the Bloch-Redfield rate
\cite{Leggett87,Slichter90,Walls94},
\ba \label{GaBR} \Gamma = \frac{\Delta^2}{\Delta^2 + h^2} \,
S_H(\Omega_0), \ea
which is valid when $\Gamma \ll \Omega_0$. Herein, we set the Boltzmann and Planck constants to unity: $k_B = 1, \; \hbar = 1.$
The second case is when the qubit is coupled only to a
low-frequency (LF) bath and its tunneling amplitude is much smaller
than the energy broadening caused by noise. The qubit dynamics
therefore becomes incoherent and the resulting macroscopic resonant
tunneling (MRT) rate is given by \cite{Amin08, AminBrito09}
\ba \label{GaM} \Gamma = \frac{\Delta^2}{8}\, \sqrt{\frac{2
\pi}{W^2}}\, \exp\left[ - \frac{(h - 4\, \varepsilon_L )^2}{ 8 W^2}
\right], \ea
where
\ba \label{eW} W^2 = \int \frac{d\omega}{2\pi} S_L(\omega),
\;\;\varepsilon_L = \int \frac{d\omega}{2\pi}
\frac{S_L(\omega)}{\omega} \ea
determine the intensity of the noise and the reorganization energy
(the shift of the bath energy due to the change of the qubit
state), respectively. The fluctuation-dissipation theorem leads to
$W^2 = 2 \,\varepsilon_L\, T,$ where $T$ is the equilibrium
temperature of the bath \cite{Amin08}. Equation (\ref{GaM}),
commonly known as the Marcus formula
\cite{Marcus93,Cherepanov01,Yang02, Ghosh11}, is valid when the
tunneling amplitude $\Delta$ is much smaller than the MRT line-width
$W$: $\Delta \ll W.$ This equation has been successful in
explaining experimental data from flux qubits
\cite{Lanting11,Harris08}.
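For orientation, the rate (\ref{GaM}) is easy to evaluate numerically. The sketch below (in units with $k_B = \hbar = 1$; all parameter values are illustrative assumptions) shows the resonance at $h = 4\varepsilon_L$ and the exponential suppression away from it.
\begin{verbatim}
import numpy as np

Delta = 0.01                  # tunneling amplitude, Delta << W
eps_L = 1.0                   # low-frequency reorganization energy
T     = 0.25                  # bath temperature
W2    = 2.0 * eps_L * T       # fluctuation-dissipation: W^2 = 2 eps_L T

def mrt_rate(h):
    """MRT (Marcus) rate: Gaussian in the bias h, peaked at h = 4 eps_L."""
    return (Delta**2 / 8.0) * np.sqrt(2.0 * np.pi / W2) \
           * np.exp(-(h - 4.0 * eps_L)**2 / (8.0 * W2))

print(mrt_rate(4.0 * eps_L))  # resonance peak
print(mrt_rate(0.0))          # far detuned: exponentially suppressed
\end{verbatim}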
In practice, low and high-frequency noises coexist and both have to
be considered in the dynamics of the qubit. In
Refs.~\cite{Lanting11,Amin08} the formula (\ref{GaM}) has been
generalized to include effects of high-frequency noise on the MRT
rate. For small tunneling amplitudes $\Delta$ the modified rate
$\Gamma$ is described by the following integral:
\ba \label{GaLan}\Gamma = \frac{\Delta^2}{4} \, \int d\tau \; e^{i
(h - 4 \varepsilon_L) \tau - 2 W^2 \tau^2 } \times \nn \exp\left[ 4
\int \frac{d\omega}{2 \pi} \,\frac{S_H(\omega)}{\omega^2} \, \left(
e^{- i \omega \tau} - 1 \right) \right]. \ea
We notice that the integrand of Eq.~(\ref{GaLan}) is equal to the
product of the low-frequency component, $ e^{-2 W^2 \tau^2 -4 i
\varepsilon_L \tau}$, multiplied by the high-frequency factor, which
depends on the spectrum $S_H$. The low-frequency component has the
Gaussian Fourier image,
\ba G^L(\omega) = \sqrt{\frac{\pi}{ 2 W^2} }\, \exp\left[ -
\frac{(\omega - 4 \varepsilon_L)^2 }{8 W^2} \right]. \ea
The high-frequency factor is characterized by the more complicated
integral:
\ba \label{GaH} G^H(\omega) = \int_{-\infty}^{+\infty} d \tau\;
e^{i\omega \tau} \times \nn \exp\left[ 4 \int \frac{d\Omega}{2
\pi} \,\frac{S_H(\Omega)}{\Omega^2} \, \left( e^{- i \Omega \tau} -
1 \right) \,\right]. \ea
We notice that both functions, $G^L(\omega)$ and $G^H(\omega)$,
satisfy the normalization condition,
\ba \label{GaNorm} \int \frac{d\omega}{2\pi}\, G^\mu(\omega) = 1,
\ea
where $\mu = L, H.$
The rate (\ref{GaLan}) can be represented as a convolution of the
Gaussian envelope $G^L(\omega)$ and the function $G^H(\omega)$,
\ba \label{GCon} \Gamma = \frac{\Delta^2}{4}\, \int \frac{d
\omega}{2\pi}\, G^L(h - \omega)\, G^H(\omega). \ea
In the Markovian case, where the spectrum $S_H(\omega)$ is flat,
$S_H(\omega) = S_H(0)$, the function $G^H(\omega)$ has a Lorentzian
shape,
\ba \label{GLoR} G^H(\omega) = \frac{ 4 S_H(0)}{\omega^2 + [ 2
S_H(0) ]^2 }. \ea
Here we need not assume that the qubit-bath coupling is small. Equation~(\ref{GLoR}) is valid at frequencies $0 \leq \omega \leq 1/\tau_H$, where $\tau_H$ is the correlation time of the high-frequency fluctuations described by the function $S_H(\omega).$ Later we introduce a spectral density $S_H$ of the Ohmic noise characterized by the correlation time $\tau_H \sim 1/T.$
The Bloch-Redfield limit follows from Eq.~(\ref{GaH}) at frequencies $\omega$ much larger than the coupling to the environment given by the spectrum $S_H(\omega)$: $\frac{S_H(\omega)}{\omega } \ll 1. $ With this small parameter, we can expand the dissipative factor in Eq.~(\ref{GaH}), and the function $G^H(\omega)$ takes the form
\ba \label{GBRed} G^H(\omega) = 4 \frac{ S_H(\omega)}{\omega^2}. \ea
Equations~(\ref{GLoR}) and (\ref{GBRed}) can be approximately
combined into one Lorentzian formula that has a frequency-dependent
numerator,
\ba \label{GaLR} G^H(\omega) = \frac{ 4 S_H(\omega)}{\omega^2 + [ 2
S_H(0) ]^2 }. \ea
The Markovian and Bloch-Redfield expressions follow from this
formula in the corresponding limits. Notice also that the function
(\ref{GaLR}) is normalized according to Eq.~(\ref{GaNorm}).
Thus, the single-qubit relaxation rate (\ref{GaLan}) can be
conveniently represented as a convolution of the Gaussian and
Lorentzian line shapes,
\ba \label{GaMR}
\Gamma= \frac{\Delta^2}{4}\, \int \frac{d\omega}{2\pi} \, \frac{S_H(\omega)}{
\omega^2 + [ 2 S_H(0) ]^2 } \times \nn
\sqrt{\frac{\pi}{ 2 W^2} }\, \exp\left[ -
\frac{(h - \omega - 4 \varepsilon_L)^2 }{8 W^2} \right]. \ea
The convolution integral in (\ref{GaMR}) has a simple
interpretation. One can think of the low-frequency noise as a random
shift in energy bias: $h\to h+h_{\rm noise}$, where $h_{\rm noise}$
has a Gaussian distribution with standard deviation $2W$. The high-frequency
relaxation rate, given by the Lorentzian line-shape, will therefore
be shifted by $h_{\rm noise}$. Ensemble averaging over low-frequency
fluctuations will lead to a convolution integral similar to
(\ref{GCon}) and (\ref{GaMR}). The reorganization energy
$\varepsilon_L$ is a result of the action of the qubit on the
environment.
The intuitive description above holds beyond the validity of
Eq.~(\ref{GaMR}). In the next sections, we will generalize this
approach to multiqubit systems without resorting to the small
tunneling amplitude approximation.
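As a consistency check of the convolution picture, Eq.~(\ref{GaMR}) can be evaluated directly on a frequency grid. A minimal Python sketch, assuming the Ohmic spectrum $S_H$ introduced later in Eq.~(\ref{SH}) and illustrative parameter values (units with $k_B=\hbar=1$), reads:
\begin{verbatim}
import numpy as np

Delta, h = 0.1, 2.0             # qubit tunneling amplitude and bias
eps_L, W2 = 1.0, 0.5            # LF reorganization energy and W^2
eta, T, wc = 0.1, 0.25, 100.0   # HF Ohmic coupling, temperature, cutoff

w = np.linspace(-40.0, 40.0, 800000)   # even count: grid avoids w = 0
S_H = eta * w * np.exp(-np.abs(w) / wc) / (1.0 - np.exp(-w / T))

G_H = 4.0 * S_H / (w**2 + (2.0 * eta * T)**2)   # Lorentzian, S_H(0) = eta*T
G_L = np.sqrt(np.pi / (2.0 * W2)) \
      * np.exp(-(h - w - 4.0 * eps_L)**2 / (8.0 * W2))  # Gaussian envelope

Gamma = (Delta**2 / 4.0) * np.trapz(G_L * G_H, w) / (2.0 * np.pi)
print(Gamma)
\end{verbatim}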
\section{Definitions and notations}
\label{HamSec}
\subsection{ The Hamiltonian}
We are interested in dissipative evolution of a quantum annealer
\cite{Johnson11, Lanting14, Dickson13} treated as a system of $N$
qubits coupled to a heat bath. The qubits are described by the
Hamiltonian:
\ba \label{HS} H_S = {\cal A}(s) H_D + {\cal B}(s) H_P, \ea
where $H_D$ and $H_P$ are the driving (tunneling) and problem
Hamiltonians defined as
\ba \label{HDP} H_D &=& - \frac{1}{2} \, \sum_\alpha \,
\Delta_{\alpha} \, \sigma_x^{\alpha} , \nn H_P &=& \frac{1}{2}
\,\sum_{\alpha} h_{\alpha} \sigma_z^{\alpha} + \frac{1}{2}
\,\sum_{\alpha \neq \beta} J_{\alpha \beta} \,\sigma_z^{\alpha}
\sigma_z^{\beta}. \ea
The energy functions ${\cal A}(s)$ and ${\cal B}(s)$ determine the
annealing schedule, with $s = t/t_f$ being the dimensionless annealing parameter ($ 0\leq s \leq 1$), where $t$ and $t_f$ are the running time and the total annealing time, respectively. Details
of the annealing schedule are unimportant for the current discussion
as long as the time-dependent Hamiltonian changes slowly, which is
exactly the case for quantum annealing algorithms.
We assume an interaction with a bath of the form:
\ba \label{Hint} H_{\rm int} = - \sum_{\alpha=1}^N Q_{\alpha} \,
\sigma_z^{\alpha}, \ea
with operators $Q_{\alpha}$ characterized by Gaussian statistics
with zero average values, $\langle Q_{\alpha}\rangle = 0.$ We also
suppose that different qubits, labeled as $\alpha$ and $\beta$, are
coupled to statistically independent environments, such that
$\langle Q_{\alpha} Q_{\beta} \rangle = 0$ if $\alpha \neq \beta.$
This has been experimentally confirmed for flux qubits
\cite{Lanting10}.
\subsection{Schr\"odinger picture}
The total system-bath Hamiltonian $H$ written in the Schr\"odinger
representation has the form given by Eq.~(\ref{H1}). Here, the time
evolution of the system-bath can be described by the density matrix
$\rho_{SB} = \ket{\psi_{SB}}\bra{\psi_{SB}}$, where
$\ket{\psi_{SB}}$ is the system-bath wave function. The
time-evolution of $\rho_{SB}$ is governed by the von Neumann
equation,
\ba\label{rhoSB} i \dot \rho_{SB} = [ H_S + H_B + H_{\rm int},
\rho_{SB} ], \ea
where $[A,B]$ means a commutator of operators $A$ and $B$.
We assume that the initial system-bath matrix can be
factorized into the product
\ba \label{rhoF}\rho_{SB}(0) = \rho_S(0)\otimes \rho_B \ea
of the initial density matrix of the qubits, $\rho_S(0)$, and the
equilibrium matrix of the bath \cite{Blum12},
\ba \label{rhoB} \rho_B = \frac{e^{-H_B/T}}{{\rm Tr}_B
(e^{-H_B/T})}. \ea
Here, ${\rm Tr}_B$ denotes a trace over bath variables, and $T$ is
the bath temperature.
With the unitary matrix $U_B = e^{- i\, H_B t}$, the Hamiltonian $H$
(\ref{H1}) turns into the form
\ba \label{H2} H' &=& U_B^\dag\,( H_S + H_{\rm int} + H_B ) U_B -
i\, U_B^\dag \frac{\partial}{\partial t}\, U_B \nn &=& H_S
-\sum_\alpha Q_\alpha(t)\, \sigma^z_\alpha, \ea
where $ Q_\alpha(t) $ is the free-evolving bath operator,
\ba \label{Qa} Q_\alpha(t) = e^{i H_B t}\, Q_\alpha \, e^{-i H_B
t}. \ea
The evolution of the density matrix can now be defined in terms of
the unitary operator \ba \label{UIa} U(t) = {\cal T}\, e^{-
i\,\int_0^t d\tau\, H'(\tau) }. \ea
Hereafter, for simplicity of notation, we remove time dependences
from unitary matrices. A consecutive application of the operators
$U_B$ and $U$ produces the system-bath density matrix $\rho_{SB}(t)$
at time $t$,
\ba \label{rhoSBa} \rho_{SB}(t) = U_B U \rho_{SB}(0) U^\dag
U_B^\dag. \ea
This time-dependent matrix is the solution of the von Neumann
equation (\ref{rhoSB}). The average value of an arbitrary
Schr\"odinger operator ${\cal O}$, which describes a physical
variable of the qubits or of the bath, is determined by the
density matrix $\rho_{SB}(t)$ (\ref{rhoSBa}) taken at
time $t$,
\ba \label{sAv} \langle {\cal O} \rangle_{SB}(t) = {\rm Tr} [
\,\rho_{SB}(t) \, {\cal O}\, ] = \nn {\rm Tr}_B \sum_k \langle k\, |
\,\rho_{SB}(0) \, U^\dag\, U_B^\dag \,{\cal O} \,U_B\, U \,|\, k
\rangle. \ea
Here the total trace ${\rm Tr}$ includes the trace ${\rm Tr}_B$ over
free-bath variables and also the trace ${\rm Tr}_S$,
\ba \label{TrS} {\rm Tr}_S = \sum_k \langle k | \ldots | k \rangle,
\ea
over a full set of qubit states $\{\ket{k}\}.$
\subsection{Heisenberg picture}
In the density matrix approach, the state of the system is described
via the reduced density matrix, which is obtained by averaging
$\rho_{SB}$ over the bath fluctuations. Some information about
quantum fluctuations is lost after the averaging. This limits the
method to calculations of only the averages and same-time
correlation functions. Other properties such as different-time
correlations remain beyond the reach of this approach. In the
Heisenberg picture, the equations are written in terms of the
operators without taking averages. This allows calculations of
correlation functions to any order as long as the equations can be
solved.
In the Heisenberg representation, the average value of an arbitrary
operator ${\cal O}$ in (\ref{sAv}) can be written as
\ba \label{sAvH} \langle {\cal O} \rangle_{SB}(t) = {\rm Tr} [
\,\rho_{SB}(0) \,{\cal O}^H(t) \,], \ea
where
\ba \label{sH} {\cal O}^H(t) = U^\dag U_B^\dag \,{\cal O} \,U_B U
\ea
is the Heisenberg operator of the variable ${\cal O}.$ The
Schr\"odinger operator ${\cal O} $ may explicitly depends on time.
In this case, its partial derivative over time, $\frac{\partial
{\cal O}}{\partial t}$, is not equal to zero. It follows from
Eq.~(\ref{sH}) that the time evolution of the operator ${\cal
O}^H(t)$ is described by the Heisenberg equation
\ba \label{dsdt} i \, \frac{d}{dt}\, {\cal O}^H = [\, {\cal O}^H,
H^H \,] + (U_B U)^\dag i \;\frac{\partial {\cal O}}{\partial t}
\;U_B U, \ea
where the total Hamiltonian (\ref{H1}) is written in the Heisenberg
picture as
\ba \label{Ha}
H^H = U^\dag U_B^\dag \,H \,U_B U \ea
\subsection{The bath}
\label{SecBath} We assume that the bath coupled to the $\alpha$-th qubit is
described by Gaussian statistics \cite{ES81}. These statistics are
characterized by a correlation function $K_{\alpha}(t,t')$
\ba \label{KS} K_{\alpha}(t,t') = \langle Q_{\alpha}(t)
Q_{\alpha}(t') \rangle . \ea
Here $Q_{\alpha}(t)$ is a free-evolving bath operator (\ref{Qa}).
The brackets $ \left< \ldots \right>$ denote the average of ${\cal
O}$ over the free-bath fluctuations,
\ba \label{TrB} \left< {\cal O} \right> = {\rm Tr}_B [\, \rho_B \,
{\cal O} \,], \ea
unless otherwise specified. For stationary processes,
$K_{\alpha}(t,t')$ depends on the time difference, hence allowing
the spectral density to be defined as
\ba S_\alpha(\omega) = \int dt e^{i \omega t} K_{\alpha}(t). \ea
In addition to the correlator (\ref{KS}), we introduce dissipative
functions $f_{\alpha}(t)$ and $g_{\alpha}(t)$ defined as
\ba \label{fgA} f_{\alpha}(t) = \int \frac{d \omega}{2 \pi} \,
\frac{S_{\alpha}(\omega)}{\omega^2} \, (1 - e^{- i \omega t }), \nn
g_{\alpha}(t) = - i\, \dot f_{\alpha}(t) = \int \frac{d \omega}{2
\pi} \, \frac{S_{\alpha}(\omega)}{\omega} \, e^{- i \omega t }.
\ea
Notice that $K_{\alpha}(t)~=~\ddot f_{\alpha}(t).$ The total
reorganization energy of the bath is defined as \ba \label{epA}
\varepsilon_\alpha = \int \frac{d \omega}{2 \pi} \,
\frac{S_{\alpha}(\omega)}{\omega}. \;\ea
The response of the bath to an external field is described by the
retarded Green function
\ba \label{phi1} \varphi_{\alpha}(t{-}t') = \langle i [
Q_{\alpha}(t), Q_{\alpha}(t') ] \rangle \, \theta(t{-}t').
\ea
The causality is provided by the Heaviside step function
$\theta(t-t').$ The response function $\varphi_{\alpha}$ is related
to the susceptibility of the bath defined through
\ba \label{phi2} \chi_{\alpha}(\omega) = \int d\tau \, e^{i \omega
\tau }\varphi_{\alpha}(\tau). \ea
According to the fluctuation-dissipation theorem, in equilibrium
$S_{\alpha}(\omega)$ is proportional to the imaginary part
$\chi_{\alpha}''(\omega)$ of the bath susceptibility,
\ba \label{FDT} S_{\alpha}(\omega) = \chi_{\alpha}''(\omega)\,
\left[ \coth\left(\frac{\omega}{2 T}\right) + 1 \right], \ea
where $T$ is the temperature of the equilibrium bath.
\subsection{Hybrid noise}
\label{HybridNoise}
Hereafter we assume that the dissipative
environments coupled to different qubits, although uncorrelated,
have the same spectral density of bath fluctuations:
$S_\alpha(\omega) = S(\omega)$. The same is true of the functions
$f_\alpha(\tau) = f(\tau)$, $g_\alpha(\tau) = g(\tau)$, and
$\chi_\alpha(\omega) = \chi(\omega).$ In the case of hybrid noise,
$S(\omega)$ is given by Eq.~(\ref{SpLH}).
The dissipative functions $f$
and $g$ can be split into low and high-frequency components,
\ba \label{fgB} f = f_L + f_H, \qquad g = g_L + g_H. \ea
For the low-frequency part of the function $f$, one can expand
$e^{-i\omega \tau}$ in Eq.~(\ref{fgA}), assuming $\omega\tau \ll 1$. Keeping terms up to second order in $\omega\tau$, we obtain
\ba \label{fL} f_L(\tau ) = i\, \varepsilon_L \, \tau +
\frac{1}{2}\, W^2 \, \tau^2, \ea
with $\varepsilon_L$ and $W$ defined in Eq.~(\ref{eW}).
To treat the high-frequency parts, we assume Ohmic noise
\ba \label{SH} S_H(\omega) = \frac{\eta \omega}{ 1 - e^{-\omega/T}}
\, e^{-|\omega|/\omega_c}. \ea
Here $\eta$ is a small dimensionless coupling constant and
$\omega_c$ is a large cutoff frequency of the high-frequency noise.
This assumption is justified experimentally \cite{Lanting11} and
also theoretically \cite{Leggett87}. The dissipative functions
$f_H$ and $g_H$ are calculated in Appendix~\ref{ApxDF}. The total
reorganization energy, $\varepsilon_\alpha \equiv \varepsilon,$ is
defined by (\ref{epA}), so that $\varepsilon = \varepsilon_L +
\varepsilon_H.$ Here $\varepsilon_L$ is defined in (\ref{eW}), and
$\varepsilon_H$ is the high-frequency component of the
reorganization energy (\ref{epA}). For the Ohmic spectrum (\ref{SH})
of the bath, we have $\varepsilon_H = \frac{\eta \omega_c}{2 \pi}.$
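The identity $\varepsilon_H = \frac{\eta \omega_c}{2 \pi}$ can be verified numerically, using the fact that $[1-e^{-\omega/T}]^{-1}+[1-e^{\omega/T}]^{-1}=1$; a minimal sketch with illustrative parameter values is:
\begin{verbatim}
import numpy as np

eta, T, wc = 0.1, 0.25, 10.0
w = np.linspace(-150.0, 150.0, 600000)   # even count: grid avoids w = 0
SH_over_w = eta * np.exp(-np.abs(w) / wc) / (1.0 - np.exp(-w / T))

eps_H = np.trapz(SH_over_w, w) / (2.0 * np.pi)
print(eps_H, eta * wc / (2.0 * np.pi))   # both ~ 0.159
\end{verbatim}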
\subsection{Selection of the basis}
\label{Basis}
The dynamical equations we aim to derive must be represented in a
convenient basis, which we denote by $\{\ket{n(t)}\}$. This basis
could be the instantaneous eigenstates of the system Hamiltonian or
some superpositions of those. The system-bath Hamiltonian $H'$ in
(\ref{H2}) can be written as
\ba \label{H4} &&H' = \sum_n [ \,E_n - Q_n(t) \,]\, \ket{n}\bra{n} +
\nn &&\sum_{m\neq n} [ \,T_{mn} - Q_{mn}(t) \,]\,\ket{m}\bra{n}.
\ea
where
\ba \label{ETmn} E_n = \langle n | H_S | n \rangle, \quad T_{mn} =
\langle m | H_S | n \rangle, \ea
\ba \label{Qmn} &&Q_n(t) = \sum_{\alpha=1}^N \sigma^\alpha_n\,
Q_\alpha(t), \nn &&Q_{mn}(t) = \sum_{\alpha=1}^N
\sigma^\alpha_{mn}\, Q_\alpha(t), \ea
with $Q_\alpha(t)$ defined in (\ref{Qa}), and
\ba \label{sAmn} \sigma^\alpha_n = \langle n | \sigma_z^\alpha | n
\rangle, \;\; \sigma^\alpha_{mn} = \langle m | \sigma_z^\alpha | n
\rangle. \ea
We also introduce the following notations, which we will use later:
\ba \label{abc} a_{mn} = \sum_{\alpha} (\sigma_m^\alpha -
\sigma_n^\alpha)^2, \; b_{mn} = \sum_\alpha |\sigma^\alpha_{mn}|^2,
\hspace{0.75cm}
\\ c_{mn} = \sum_\alpha \sigma^\alpha_{mn} (\sigma_m^\alpha
- \sigma_n^\alpha), \; d_{mn} = \sum_\alpha \sigma^\alpha_{mn}
(\sigma_m^\alpha +\sigma_n^\alpha). \nonumber \ea
Hereafter, we will refer to the parameter $a_{mn}$ as the Hamming
distance between states $\ket{m}$ and $\ket{n}$. We also notice that
the parameters $a_{mn}$ and $b_{mn}$ are real and positive, and
$c^*_{mn} = - \,c_{nm}, \; d_{mn}^* = d_{nm}. $
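Given the basis vectors, the parameters (\ref{abc}) reduce to sums of matrix elements of $\sigma_z^\alpha$. A minimal sketch for an illustrative two-qubit Hamiltonian (toy parameters of our own, not an instance considered in this paper) is:
\begin{verbatim}
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Toy two-qubit Hamiltonian: Delta = h = 1 and one ZZ coupling
H = (-0.5 * (np.kron(sx, I2) + np.kron(I2, sx))
     + 0.5 * (np.kron(sz, I2) + np.kron(I2, sz))
     + 0.5 * np.kron(sz, sz))
E, Vec = np.linalg.eigh(H)

sz_ops = [np.kron(sz, I2), np.kron(I2, sz)]        # sigma_z^alpha
m, n = 1, 0                                        # a pair of eigenstates

s_m  = [Vec[:, m] @ S @ Vec[:, m] for S in sz_ops] # sigma_m^alpha
s_n  = [Vec[:, n] @ S @ Vec[:, n] for S in sz_ops] # sigma_n^alpha
s_mn = [Vec[:, m] @ S @ Vec[:, n] for S in sz_ops] # sigma_mn^alpha

a_mn = sum((p - q)**2 for p, q in zip(s_m, s_n))   # Hamming distance
b_mn = sum(abs(x)**2 for x in s_mn)
c_mn = sum(x * (p - q) for x, p, q in zip(s_mn, s_m, s_n))
d_mn = sum(x * (p + q) for x, p, q in zip(s_mn, s_m, s_n))
print(a_mn, b_mn, c_mn, d_mn)
\end{verbatim}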
\section{System evolution in the interaction representation}
\label{IntPic}
Properties of the system of qubits are determined by the reduced
density matrix
\ba \label{rhoSa} \rho_S = {\rm Tr}_B \rho_{SB} = {\rm Tr}_B [ U
\rho_{SB}(0) U^\dag ], \ea
where the system-bath density matrix $\rho_{SB}$ is given by
Eq.~(\ref{rhoSBa}). A system-bath average of an arbitrary operator
${\cal O}_S$ of the system is written as
\ba \langle {\cal O} \rangle _{SB} = {\rm Tr}_S [ \rho_S(t) {\cal
O}_S ]. \ea
In the basis introduced in Sec.~\ref{Basis}, the matrix $\rho_S$
has the form
\ba \label{rhoSn} \rho_S = \sum_{mn} \rho_{nm} \ket{n}\bra{m}, \ea
with the matrix elements defined as
\ba \label{rhoMN} \rho_{nm} = \langle n | \rho_S | m \rangle. \ea
Our goal is to derive a set of master equations for the probability
distribution of the qubits, $P_n$, over the states $\{\ket{n}\}$,
where
\ba \label{Pna} P_n = \rho_{nn} = \langle n | \rho_S | n \rangle.
\ea
The time evolution of the matrix (\ref{rhoSa}) is determined by the
unitary operator $U$ defined by (\ref{UIa}) where the Hamiltonian
$H'$ is given by Eq.~(\ref{H4}). The objective is to go beyond the
perturbation theory in the system-bath coupling. This can be done by
treating $Q_n$ exactly, but $Q_{mn}$ perturbatively. The interaction
representation is best suited for this goal.
A transition to the interaction picture, although straightforward
for time-independent bases, becomes more involved if the basis
changes in time. Let us introduce a unitary operator
\ba \label{Ua} U_0(t) = \sum_n e^{- i \phi_n(t) } \, {\cal S}_n(t)
\, \ket{n}\bra{n}, \ea
where $\ket{n}$ is a time-dependent basis of the system, and
\ba \label{phin}
\phi_n(t) = \int_0^t d\tau
E_n(\tau) \ea
is written in terms of average energies $E_n(t)~=~\langle n(t) |
H_S(t) | n(t) \rangle$. We also introduce the $S$-matrix:
\ba \label{snA} {\cal S}_n(t) = {\cal T} \exp\left[ i \sum_\alpha
\sigma_n^\alpha(t) \int_0^t dt_1\, Q_\alpha(t_1) \right], \ea
with ${\cal T}$ being the time-ordering operator for $t_1$. Notice
that the time-dependent matrix element $\sigma_n^\alpha(t)$ is taken
out of the integral over $t_1$. This becomes necessary when we want
to express correlation functions in terms of dissipative functions.
The interaction Hamiltonian is given by the expression
\ba \label{HIa} H_I = U_0^\dag H' U_0 - i U_0^\dag \dot U_0. \ea
This Hamiltonian defines the unitary evolution operator
\ba \label{UI} U_I(t) = {\cal T} e^{- i \int_0^t d\tau H_I(\tau) }.
\ea
We expect that the Hamiltonian $H_I$ does not contain the
nonperturbative diagonal terms $Q_n$. To calculate the
time-derivative $\dot U_0$ in (\ref{HIa}), we need
\ba \label{sndot} - i \dot{\cal S}_n(t) = Q_n(t) \, {\cal S}_n(t) +
\nn \sum_\alpha \dot \sigma_n^\alpha (t) \, {\cal S}_n(t)
\, \int_0^t d\tau \, \tilde {\cal S}_n^\dag(t,\tau) Q_\alpha(\tau)
\, \tilde {\cal S}_n(t,\tau), \ea
where
\ba \label{snB} \tilde {\cal S}_n(t,\tau) = {\cal T} \exp\left[ i
\sum_\alpha \sigma_n^\alpha(t) \int_0^\tau dt_1\, Q_\alpha(t_1)
\right]. \ea
Notice that $ {\cal S}_n(t)= \tilde {\cal S}_n(t,t)$. In the
interaction picture, the system-bath Hamiltonian $H_I$ (\ref{HIa})
takes the form
\ba \label{HIc} H_I = i \, \sum_{n} | \dot n\rangle \bra{n} -
\sum_{mn} \tilde Q_{mn}(t)\, \ket{m(t)}\bra{n(t)}.
\ea
The modified bath operator $\tilde Q_{mn}$ has diagonal terms
\ba \label{QTn} \tilde Q_{nn}(t) = \nn - \sum_\alpha \dot
\sigma_n^\alpha (t) \int_0^t d\tau \, \tilde{\cal S}_n^\dag(t,\tau)
\, Q_\alpha(\tau) \, \tilde{\cal S}_n(t,\tau) , \hspace{0.25cm} \ea
and off-diagonal ($m\neq n$) terms,
\ba \label{QTmn} \tilde Q_{mn} &=& e^{i \phi_{mn}(t)} \,{\cal
S}_m^\dag(t)\, [ Q_{mn}(t) - \tilde T_{mn} ] \, {\cal S}_n(t).
\ea
Here
\ba \label{phiMN} \phi_{mn}(t) = \phi_m(t) - \phi_n(t) = \int_0^t d
\tau \, \omega_{mn}(\tau), \ea
is defined in terms of
\ba \label{omA} \omega_{mn}(t) = E_m(t) - E_n(t), \ea
and
\ba \label{dotSz} \dot \sigma_n^\alpha (t) = \sum_{m\neq n} [\;
\sigma_{mn}^\alpha \langle \dot n | m\rangle + \langle m | \dot n
\rangle\, \sigma^\alpha_{nm} \;]. \ea
We also introduce
\ba \label{TmnT} \tilde T_{mn} = T_{mn} - i \langle m | \dot n
\rangle , \ea
with $T_{mn}$ defined in (\ref{ETmn}). When the basis $\{\ket{n}\}$
is formed by the instantaneous eigenstates of the Hamiltonian $H_S$,
$ H_S \ket{n} = E_n \ket{n},$ we obtain
\ba \label{MdotN} \langle m| \dot n\rangle = \frac{1}{t_f} \, \frac{
\langle m (s) | \frac{d H_S(s)}{ds} | n(s)\rangle }{E_n(s) -
E_m(s)}. \ea
Here we assume that the spectrum
$E_n$ is nondegenerate and that $H_S$ (\ref{HS}) is characterized by
real parameters. In this case we have $\langle n| \dot n\rangle =
0.$
In Appendix \ref{ApxQC}, we calculate correlation functions $\tilde
K_{mn}^{m'n'}(t,t')$ of the bath variables (\ref{QTmn}),
\ba \label{KmnA} \tilde K_{mn}^{m'n'}(t,t') = \langle \tilde
Q_{mn}(t), \tilde Q_{m'n'}(t') \rangle. \ea
We show that the only terms that survive during the annealing run
are characterized by the relation
\ba \label{KmnB} \tilde K_{mn}^{m'n'}(t,t') = \delta_{m n'}
\delta_{n m'} \, \tilde K_{mn}(t,t'), \ea
where the function $\tilde K_{mn}(t,t')$ is given by
Eq.~(\ref{Cor7a}). In addition, we demonstrate that, during
annealing, correlations between diagonal operators $\tilde Q_{kk}$
(\ref{QTn}) and off-diagonal bath variables $\tilde Q_{mn}$
(\ref{QTmn}) rapidly disappear in time, such that $\langle \tilde
Q_{mn}(t), \tilde Q_{kk}(t') \rangle \sim 0.$ The same is true for
the average values of the operators (\ref{QTmn}): $\langle \tilde
Q_{mn}(t)\rangle \sim 0.$
\subsection{Time evolution}
The evolution of the matrix $\rho_S$ is determined by the unitary
matrix (\ref{UIa}), which can be written as: $U = U_0 U_I$, where
$U_0$ and $U_I$ are given by Eqs.~(\ref{Ua}) and (\ref{UI}). In the
interaction picture, we have
\ba \label{rhoNM} \rho_{nm} &=& e^{ i \phi_{mn}} {\rm Tr} [
\rho_{SB}(0) U_I^\dag \ket{m}{\cal S}_m^\dag {\cal S}_n \bra{n} U_I
] \nn &=& e^{ i \phi_{mn}} {\rm Tr} [ \rho_{SB}(0) U_I^\dag {\cal
S}_m^\dag {\cal S}_n U_I \Lambda_{mn} ], \nn &=& e^{ i \phi_{mn}}
{\rm Tr}_S [ \rho_{S}(0) \langle U_I^\dag {\cal S}_m^\dag {\cal S}_n
U_I \Lambda_{mn} \rangle ]. \ea
Here, we have used (\ref{rhoF},\ref{rhoSa},\ref{rhoMN}) and have
introduced the interaction picture operator
\ba \label{rhoTa} \Lambda_{mn} = U_I^\dag \, \ket{m}\bra{n}\, U_I,
\ea
which will play an important role in our theory. The bath average
$\langle ... \rangle$ is defined in (\ref{TrB}). Equation
(\ref{rhoNM}) simplifies for the diagonal elements:
\ba \label{Pnb} P_n = \rho_{nn} = {\rm Tr}_S [ \rho_S(0) \langle
\Lambda_{nn} \rangle ]. \ea
In the following, we consider time evolution of the operators
$\Lambda_{nn}$ instead of working with the elements (\ref{rhoNM})
and (\ref{Pnb}) of the system density matrix. Working with operators
instead of averages allows derivation of more accurate master
equations.
Taking the derivative of (\ref{rhoTa}) and using (\ref{HIc}), we
obtain
\ba \label{rTa} i \frac{d}{dt} \,\Lambda_{mn} =\sum_{k} ( Q_{km}^I
\, \Lambda_{kn} - Q_{nk}^I \, \Lambda_{mk}), \ea where
\ba \label{QTa} Q_{mn}^I = U_I^\dag\, \tilde Q_{mn} \, U_I. \ea
Here we use the fact that $\Lambda_{mn}$ and $ Q_{kl}^I$, taken at
the same moment of time $t$, commute: $[ \Lambda_{mn}, Q_{kl}^I
]=0,$ for any set of indexes $m,n,k,l.$ The evolution of the
diagonal elements $\Lambda_{nn}$ is of prime interest since these
elements determine the probabilities (\ref{Pnb}):
\ba \label{rTb} i \frac{d}{dt} \, \Lambda_{nn} = \sum_{m\neq n} (
Q_{mn}^I \, \Lambda_{mn} - Q_{nm}^I \, \Lambda_{nm}
). \ea
Notice that the diagonal elements of the bath, $Q_{mm}^I $ and
$Q_{nn}^I$, have no influence on the evolution of $\Lambda_{nn}$.
Averaging over free bath fluctuations leads to
\ba \label{rT1} i \frac{d}{dt} \, \langle \Lambda_{nn}\rangle =
\sum_{m\neq n} ( \langle Q^I_{mn} \, \Lambda_{mn}\rangle - \langle
Q^I_{nm} \, \Lambda_{nm}\rangle ). \ea
This equation is exact and difficult to solve without
approximations.
To simplify Eq.~(\ref{rT1}), we use a perturbation expansion, assuming
that $\tilde Q_{mn}$ is small. Appendix \ref{ApxMeq} shows that the
probability distribution $P_n$ of the system (\ref{Pnb}) follows the
master equation:
\ba \label{rT5a} \dot P_n + \Gamma_n P_n = \sum_m \Gamma_{nm} \,
P_m , \ea
where $\Gamma_n = \sum_m\Gamma_{mn}$ and
\ba \label{GamB} \Gamma_{nm} = \int_{-\infty}^{+\infty} d\tau\, e^{i
\omega_{mn} \tau - a_{mn} f(\tau)} \times \nn \{ b_{mn} \ddot
f(\tau) + [ \bar T_{mn} - c_{mn} g(\tau) ] \,[ \bar T_{mn}^* -
c_{mn}^* g(\tau) ] \}.
\ea
The coefficients $a_{mn}, b_{mn}, c_{mn}, d_{mn}$ are defined by Eq.~(\ref{abc}), and
\ba \label{TkC} \bar T_{mn} = T_{mn} - i \langle m | \dot n \rangle
- d_{mn} \,\varepsilon, \ea
where $\varepsilon$ is the total reorganization energy described in
Sec.~\ref{HybridNoise}, $\bar{T}_{mn}^* = \bar{T}_{nm}.$ All matrix
elements of the system operators in Eq.~(\ref{GamB}) are taken at
the running moment of time $t$.
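For a quasistatic spectrum, Eqs.~(\ref{rT5a}) form a linear rate system that is straightforward to integrate. The sketch below uses an illustrative $3\times 3$ rate matrix of our own rather than rates computed from Eq.~(\ref{GamB}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative static rate matrix: G[n, m] = Gamma_{nm} (m -> n)
G = np.array([[0.0, 0.8, 0.3],
              [0.2, 0.0, 0.5],
              [0.1, 0.4, 0.0]])

def master(t, P):
    # dP_n/dt = sum_m Gamma_{nm} P_m - Gamma_n P_n, Gamma_n = sum_m Gamma_{mn}
    return G @ P - G.sum(axis=0) * P

sol = solve_ivp(master, (0.0, 100.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1], sol.y[:, -1].sum())   # stationary P_n; norm preserved
\end{verbatim}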
The rate $\Gamma_{nm}$ can be written in a form similar to the single-qubit expression
(\ref{GaLan}) and also to the multiqubit rate $\Gamma_{1 \rightarrow 0}$ given by Eq.~(5)
from Ref. \cite{Boixo16} and by Eq.~(68) from Ref.~\cite{Boixo14},
\ba \label{GamTime} \Gamma_{nm} = \int_{-\infty}^{+\infty} d \tau \;
e^{i \omega_{mn} \tau}\, e^{- i \varepsilon_{mn} \tau - \frac{1}{2}
W^2_{mn} \tau^2 } \times \nn \left[ ( 1 + i \omega_c \tau ) \frac{
\sinh(\pi T \tau)}{\pi T \tau} \right] ^{- \frac{\eta_{mn}}{2 \pi} }
\times \nn \{ b_{mn} \ddot f(\tau) + [ \bar T_{mn} - c_{mn} g(\tau)
] \,[ \bar T_{mn}^* - c_{mn}^* g(\tau) ] \}.
\ea
Here we use Eqs.~(\ref{fL}) and (\ref{fH}) and introduce the
following parameters:
\ba \label{Wmn} \varepsilon_{mn} = a_{mn} \varepsilon_L, \, W^2_{mn}
= a_{mn} W^2,\, \eta_{mn} = a_{mn} \eta. \ea
We notice that, compared to previous results (see Eqs.~(5), (6) in
\cite{Boixo16} and Eqs.~(43),(52),(54),(68) in \cite{Boixo14}), the
rate (\ref{GamTime}) does not contain any polaron shifts to the
frequency $\omega_{mn}$. Moreover, we have no need to represent the
bath as a system of harmonic oscillators as done in Refs.
\cite{Boixo16} and \cite{Boixo14}.
\subsection{Applicability conditions}
The master equations (\ref{rT5a}) have been derived in
Appendix~\ref{ApxMeq} with the proviso that
\ba \label{GamTau} \Gamma_{nm} \tau_{mn} \ll 1, \ea
where $\Gamma_{nm}$ is the relaxation rate (\ref{GamB}). The inverse
correlation time of the bath, $\tau_{mn}^{-1}$, is estimated in
Appendix~\ref{ApxQC} as the maximum of two parameters: the average
energy distance $|E_m - E_n|$ between the states $\ket{m}$ and
$\ket{n}$ and the MRT line-width $W_{mn} = W \sqrt{a_{mn}}$,
\ba \label{tauC} \frac{1}{\tau_{mn}} = {\rm max} \{ |E_m - E_n|,
W_{mn} \}. \ea
\section{Relaxation rate as a convolution of Bloch-Redfield and
Marcus envelopes} \label{BRM}
In this section we show that, in addition to the expression
(\ref{GamTime}) of the rate $\Gamma_{nm}$ as the integral over time,
the same rate can be conveniently represented as a convolution
integral over frequencies of the Gaussian envelope multiplied by the
Lorentzian function. As in the case of a single qubit described in
Sec.~\ref{SingleQ}, the Gaussian curve is produced by low-frequency
bath noise. The Lorentzian factor is due to effects of the
high-frequency environment. Here, our aim is a generalization of the single-qubit formula (\ref{GaMR}) to the multiqubit case, where both the low-frequency noise and the single-qubit tunneling can be large.
\subsection{Convolution form of the rate $\Gamma_{nm}$} \label{ConGL}
Using integration by parts and $\dot f(\tau)=ig(\tau)$, we obtain
\ba \int d\tau\; e^{i \omega \tau} e^{-a_{mn} f(\tau)} g(\tau) =
\frac{\omega}{a_{mn}} \int d\tau \; e^{i \omega \tau} e^{-a_{mn}
f(\tau)},\nonumber \ea
and
\ba \int d\tau \; e^{i \omega \tau} e^{-a_{mn} f(\tau)} g^2(\tau) =
\nn \int d\tau \;e^{i \omega \tau} e^{-a_{mn} f(\tau)} \left[
\left(\frac{\omega}{a_{mn}}\right)^2 - \frac{1}{a_{mn}}\,\ddot
f(\tau) \right]. \nonumber \ea
Equation (\ref{GamB}) can therefore be represented as
\ba \label{Gx2} \Gamma_{nm} = \;\int d\tau \,e^{i \,\omega_{mn}
\tau} \, e^{-a_{mn} \,f(\tau)} \times \hspace{0.5cm} \nn \left[
\left( b_{mn} -
\frac{|c_{mn}|^2}{a_{mn}} \right)\ddot f(\tau) + \left| \bar T_{mn}
- \omega_{mn} \, \frac{c_{mn}}{a_{mn}} \right|^2 \right].
\hspace{0.25cm} \ea
Let us introduce Fourier transformations
\ba \label{GLH} G^{\mu}_{mn}(\omega) = \int_{-\infty}^{\infty}\,
d\tau \, e^{i\omega \tau} \; e^{-a_{mn} f_{\mu}(\tau)}, \ea
where $\mu = L,H$ for low and high-frequency noise, respectively.
Our goal is to write (\ref{Gx2}) as a convolution of the two
functions $ G_{mn}^L(\omega)$ and $ G_{mn}^H(\omega)$. The integrand
of Eq.~(\ref{Gx2}) contains a term $e^{- a_{mn} f(\tau)}$, which can
be written in the following form,
\ba e^{- a_{mn} f(\tau)} = e^{- a_{mn} f_L(\tau)} e^{- a_{mn}
f_H(\tau)} = \nn \int \frac{d \omega_1}{2\pi} \int \frac{d
\omega_2}{2\pi} \,e^{- i (\omega_1 + \omega_2) \tau}
\,G_{mn}^L(\omega_1)\, G_{mn}^H(\omega_2). \nonumber \ea
Substituting in (\ref{Gx2}) and taking the integral over $\tau$, we
obtain:
\ba \label{GamD} \Gamma_{nm} = \int
\frac{d\omega}{2\pi}\;\Delta^2_{mn}(\omega)\,
G_{mn}^L(\omega_{mn}-\omega)\, G_{mn}^H(\omega), \ea
where
\ba \label{DeKN} \Delta^2_{mn}(\omega) =
| A_{mn}|^2 +
B_{mn}\, (\omega^2 + W^2_{mn}), \ea
with
\ba \label{AB} A_{mn} = \bar T_{mn} - \omega_{mn} \,
\frac{c_{mn}}{a_{mn}},
\nn B_{mn} = \frac{a_{mn} b_{mn} - |c_{mn}|^2}{a_{mn}^2} = \nn
\frac{1}{2 a_{mn}^2} \; \sum_{\alpha \beta} | (\sigma_m^{\alpha} -
\sigma_n^{\alpha} ) \,\sigma_{mn}^{\beta} - (\sigma_m^{\beta} -
\sigma_n^{\beta} ) \,\sigma_{mn}^{\alpha} |^2. \ea
Here, we have used $\ddot f = W^2 + \ddot f_H$ and neglected $\dot
f_H^2$, which is $O(\eta^2)$ in the weak coupling approximation.
Notice that $B_{mn}$ is always positive and disappears in a
single-qubit case where $\alpha = \beta = 1.$ Also, we have
$A_{mn}^* = A_{nm}$ and $ B_{nm} = B_{mn}.$
Appendix~\ref{ApxGL} shows that the low-frequency function
$G_{mn}^L(\omega)$ has a Gaussian shape,
\ba \label{GLa} G^L_{mn}(\omega) = \sqrt{\frac{2 \pi}{W^2_{mn}} }\,
\exp \left[ - \frac{(\omega - \varepsilon_{mn} )^2}{ 2 W^2_{mn}}
\right]. \ea
A similar line shape describes the rate of macroscopic resonant
tunneling (MRT) in a system of qubits \cite{Amin08}. The
high-frequency component $G_{mn}^H(\omega)$ can be approximated by a
Lorentzian form, combining both Bloch-Redfield and Markovian rates:
\ba \label{GHa}
G_{mn}^H(\omega) = \frac{a_{mn}\, S_H(\omega)}{\omega^2 +
\gamma_{mn}^2}. \ea
The parameter $\gamma_{mn} = \frac{a_{mn}}{2}\, S_H(0)$ does not
depend on frequency $\omega$. It follows from Eq.~(\ref{SH}) that
$S_H(0) = \eta T.$
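Collecting Eqs.~(\ref{GamD}), (\ref{GLa}) and (\ref{GHa}), the rate $\Gamma_{nm}$ becomes a single frequency integral. A minimal numerical sketch, with the overlap parameters passed in as inputs and illustrative values in the example call (units with $k_B=\hbar=1$), is:
\begin{verbatim}
import numpy as np

def rate_nm(w_mn, a, A2, Bc, eps_L, W2, eta, T, wc=20.0):
    """Gamma_nm as the Gaussian-Lorentzian convolution derived above.

    a, A2, Bc stand for a_mn, |A_mn|^2 and B_mn, respectively."""
    w = np.linspace(-40.0, 40.0, 800000)    # even count: grid avoids w = 0
    S_H = eta * w * np.exp(-np.abs(w) / wc) / (1.0 - np.exp(-w / T))
    gam = 0.5 * a * eta * T                 # gamma_mn = (a_mn/2) S_H(0)
    G_H = a * S_H / (w**2 + gam**2)
    W2mn, eps_mn = a * W2, a * eps_L
    G_L = np.sqrt(2.0 * np.pi / W2mn) \
          * np.exp(-(w_mn - w - eps_mn)**2 / (2.0 * W2mn))
    Delta2 = A2 + Bc * (w**2 + W2mn)        # effective |Delta_mn(w)|^2
    return np.trapz(Delta2 * G_L * G_H, w) / (2.0 * np.pi)

print(rate_nm(w_mn=1.0, a=4.0, A2=1e-2, Bc=1e-3,
              eps_L=0.5, W2=0.25, eta=0.1, T=0.25))
\end{verbatim}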
\section{Special cases}
\label{Limits}
In this section we verify detailed balance conditions for the
equilibrium distribution of qubits and also consider the
Bloch-Redfield and Marcus limits of the rates (\ref{GamD}). In
addition, we apply the results of the previous section to a single
qubit interacting with a hybrid environment.
\subsection{Equilibrium condition}
We conclude from Eq.~(\ref{GamD}) that
\ba \label{GeqB} \Gamma_{mn} = \exp\left(
-\frac{\omega_{mn}}{T}\right) \, \Gamma_{nm}, \ea
where $\omega_{mn}$ is defined by Eqs.~(\ref{ETmn}) and (\ref{omA}).
It follows from Eq.~(\ref{rT5a}) that the equilibrium
probabilities $P_n^{\rm eq}$ and $P_m^{\rm eq}$ to observe the
qubits in the states $\ket{n}$ and $\ket{m}$, respectively, obey the
equation:
\ba \label{PeqA} \sum_m (\Gamma_{mn} P_n^{\rm eq} - \Gamma_{nm}
P_m^{\rm eq} ) = 0. \ea
The solution of this equation follows the detailed balance
condition:
\ba \label{PeqB} \frac{ P_m^{\rm eq} }{ P_n^{\rm eq} } = \exp\left[
- \frac{E_m - E_n }{T} \right], \ea
with the local energy levels $E_m$ and $E_n$ (\ref{ETmn}) and the
bath temperature $T$.
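The relation (\ref{GeqB}) can be checked numerically with the rate sketch of the previous section; note that the parameters below obey $W^2 = 2\varepsilon_L T$, which is what enforces the Boltzmann ratio.
\begin{verbatim}
import numpy as np

# Reusing rate_nm() from the previous sketch:
args = dict(a=4.0, A2=1e-2, Bc=1e-3, eps_L=0.5, W2=0.25, eta=0.1, T=0.25)
w0 = 1.0
print(rate_nm(-w0, **args) / rate_nm(+w0, **args))
print(np.exp(-w0 / args["T"]))   # both ~ e^{-w0/T}: detailed balance
\end{verbatim}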
The set of master equations (\ref{rT5a}) with the rates
$\Gamma_{nm}$ given by Eq.~(\ref{GamD}) provides a description of
the dissipative dynamics of a quantum annealer during the entire
annealing process. This description should be complemented by the
equation for the off-diagonal elements $\rho_{nm}$ of the system
density matrix. The time evolution of $\rho_{nm}$ is approximately
described by the formula
\ba \label{rNM} \rho_{nm} = e^{i \phi_{mn}} \langle {\cal S}_m^\dag
{\cal S}_n \rangle {\rm Tr}_S [ \rho_S(0) \langle \Lambda_{mn}
\rangle ] \simeq \nn e^{i \phi_{mn}(t)} \langle {\cal S}_m^\dag(t)
{\cal S}_n(t) \rangle {\rm Tr}_S [ \rho_S(0) \langle \Lambda_{mn}(0)
\rangle ] . \ea
To derive this relation, we start with Eq.~(\ref{rhoNM}) and move
out the dephasing factor $\langle {\cal S}_m^\dag(t) {\cal S}_n(t)
\rangle $ assuming that the matrices ${\cal S}_m^\dag$ and $ {\cal
S}_n$ are weakly correlated with the operator $\Lambda_{mn} $
(\ref{rhoTa}). In Eq.~(\ref{rNM}) we have two possibilities: in the
first case the energy gap between states $\ket{m}$ and $\ket{n}$ is
large, therefore the factor $e^{i\phi_{mn}} \simeq e^{i \omega_{mn}
t}$ rapidly oscillates in time; in the second case the factor
$\langle {\cal S}_m^\dag(t) {\cal S}_n(t) \rangle $, which is given
by Eq.~(\ref{SmSn}), is the fast-decaying function of time. In both
cases, the correlation time $\tau_{mn}$ defined by Eq.~(\ref{tauC})
is much shorter than the time scale $\Gamma_{nm}^{-1}$ of the
variables $\langle \Lambda_{mn}\rangle $ and $\langle
\Lambda_{nn}\rangle $. Therefore, in Eq.~(\ref{rNM}) the function
$\langle \Lambda_{mn}(t) \rangle$ can be replaced by its initial
value $\langle \Lambda_{mn}(0) \rangle$. Equation~(\ref{rNM})
describes fast dephasing of the system of qubits.
\subsection{Bloch-Redfield and Marcus limits}
For the qubits weakly interacting with the high-frequency noise in
the absence of low-frequency noise, the parameters of the
low-frequency bath go to zero: $W = 0, \varepsilon_L = 0.$ The
Gaussian envelope $G^L_{mn}(\omega)$ (\ref{GLa}) is approximated by
the function $ 2 \pi \delta(\omega)$, and the rate $\Gamma_{mn}$
(\ref{GamD}) takes the form
\ba \label{GamBR} \Gamma_{nm}^R = \Delta^2_{mn}(\omega_{mn})\,
\frac{ a_{mn}\, S_H(\omega_{mn})}{\omega_{mn}^2 + \gamma_{mn}^2}.
\ea
At a sufficiently large distance $\omega_{mn}$ between the
energy levels $E_m$ and $E_n$, we find that
$$\Delta^2_{mn}(\omega_{mn}) =
\frac{ b_{mn}}{a_{mn}}\, \omega_{mn}^2.$$
It is evident from Eq.~(\ref{GamBR}) that, at $|\omega_{mn}| \gg
\gamma_{mn}$, the relaxation rate $\Gamma_{mn}$ is proportional to
the noise spectrum $S_H(\omega_{mn})$,
\ba \label{GBR} \Gamma_{nm}^R = b_{mn}\,S_H(\omega_{mn}), \ea
with the coefficient $b_{mn} = \sum_{\alpha=1}^N |\langle m |
\sigma_z^\alpha | n \rangle |^2$, as it should be for the
Bloch-Redfield rate. Transitions between states $\ket{m}$ and
$\ket{n}$ separated by a zero Hamming distance ($a_{mn} = 0$) are
also described by the Redfield rate~(\ref{GBR}).
In the absence of high-frequency noise, with $\eta = 0$ and $S_H =
0$, the function (\ref{GHa}) peaks at zero frequency:
$G_{mn}^H(\omega) = 2 \pi \delta(\omega)$. In this case the
relaxation rate (\ref{GamD}) of the many-qubit system is determined
by the Gaussian line shape,
\ba \label{GMarc} \Gamma_{nm}^M = \Delta^2_{mn} \, \sqrt{\frac{2
\pi}{W^2_{mn}} } \exp \left[ - \frac{ (E_m - E_n -
\varepsilon_{mn})^2}{2 W^2_{mn}} \right]. \hspace{0.25cm} \ea
This line shape is typical of the Marcus formulas
\cite{Amin08,Yang02}. The multiqubit tunneling amplitude
$\Delta^2_{mn}(0)$ is determined by the expression
\ba \label{DM}\Delta^2_{mn} \equiv \Delta^2_{mn}(0) = \left( b_{mn}
- \frac{|c_{mn}|^2}{a_{mn}} \right)\, W^2 + \nn
\left| \,T_{mn} - i \langle m | \dot n \rangle - d_{mn}\,
\varepsilon_L - \omega_{mn} \, \frac{c_{mn}}{a_{mn}} \,\right|^2 .
\ea
\subsection{Relaxation rate of the single qubit}
We assume that the single qubit is described by a Hamiltonian
(\ref{HamTLS}),
\ba H_S = -\frac{h}{2} \, \sigma_z - \frac{\Delta}{2} \, \sigma_x,
\nonumber \ea
with a bias $h$, a tunneling amplitude $\Delta$, and energy
splitting $\Omega_0 = \sqrt{\Delta^2 + h^2}.$ The energy basis
$\{\ket{k}\}$ has only two states, $\ket{1}$ and $\ket{2}$. These
states can be found from the equation: $H_S \ket{m} = E_m \ket{m}$.
In Eq.~(\ref{GamD}) for the rate $\Gamma_{nm}$ we assume that $n =
1$ and $m = 2$. The ground state $\ket{n}$ and the first excited
state $\ket{m}$ have the energies: $E_m = - E_n = \Omega_0/2.$ We
work in the energy basis where $T_{mn} = 0.$ For the single qubit we
obtain the following set of parameters,
\ba \label{abS} a_{mn} = 4\,\frac{ h^2}{ \Omega_0^2},\; b_{mn} =
\frac{\Delta^2}{\Omega_0^2 }, \, c_{mn} = - 2\,\frac{ h \Delta}{
\Omega_0^2}, \, d_{mn} =0, \hspace{0.5cm} \ea
so that $B_{mn} = 0$ and $\Delta^2_{mn}(\omega) = \Delta^2/a_{mn} $
(see sections \ref{Basis} and \ref{ConGL} for definitions). It
follows from Eq.~(\ref{GamD}) that in the case of hybrid noise the
single-qubit relaxation rate combines both, Bloch-Redfield and
Marcus, formulas,
\ba \label{GamS} \Gamma_{nm} = \Delta^2 \, \int
\frac{d\omega}{2\pi}\; \frac{S_H(\omega)}{\omega^2 + \gamma_{mn}^2}
\times \nn \sqrt{\frac{2 \pi}{a_{mn} W^2} }\; \exp \left[ - \frac{
(\Omega_{0} - \omega - a_{mn} \,\varepsilon_L )^2}{2\, a_{mn}\, W^2}
\right], \ea
where $ \gamma_{mn} = a_{mn}\frac{ \eta T}{2}. $ In the limit of
small $\Delta$ the rate (\ref{GamS}) corresponds to the formula
(\ref{GaMR}) shown in Sec.~\ref{SingleQ}.
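For illustration, the convolution (\ref{GamS}) is straightforward to evaluate numerically. In the sketch below the Ohmic form $S_H(\omega) = 2\pi \eta\, \omega / (1 - e^{-\omega/T})$ is an assumption made for the example only and enters through a single function:
\begin{verbatim}
import numpy as np

def S_H(w, eta, T):
    # Assumed Ohmic high-frequency spectrum with detailed balance
    w = np.where(np.abs(w) < 1e-9, 1e-9, w)
    return 2 * np.pi * eta * w / (1.0 - np.exp(-w / T))

def gamma_hybrid(Delta, h, eps_L, W, eta, T):
    # Single-qubit rate, Eq. (GamS): Lorentzian-weighted S_H convolved
    # with the Gaussian MRT envelope of width sqrt(a_mn) * W
    Omega0 = np.hypot(Delta, h)
    a = 4 * h**2 / Omega0**2            # a_mn from Eq. (abS)
    gamma = a * eta * T / 2             # Lorentzian width gamma_mn
    w = np.linspace(-40 * T, 40 * T, 200001)
    lorentz = S_H(w, eta, T) / (w**2 + gamma**2)
    gauss = np.sqrt(2 * np.pi / (a * W**2)) * np.exp(
        -(Omega0 - w - a * eps_L)**2 / (2 * a * W**2))
    return Delta**2 * np.trapz(lorentz * gauss, w) / (2 * np.pi)

# Example call; all energies in the same units (e.g. mK):
print(gamma_hybrid(Delta=1.0, h=5.0, eps_L=0.0, W=20.0, eta=0.1, T=30.0))
\end{verbatim}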
\section{Dissipative evolution of a 16-qubit system}
\label{Dickson16}
In this section we analyze the dynamics of the 16-qubit structure
depicted in Fig.~\ref{DStructure}. The structure is determined by
the Dickson instance, which was proposed in Ref.~\cite{Dickson11}
and investigated in detail in Ref.~\cite{Dickson13}. The energy
spectrum of the problem features an extremely small gap between the
ground and first excited states. The existence of such a gap
presents a computational bottleneck for quantum annealing. An
experimental technique to overcome this difficulty by individually
tuning the transverse fields of the qubits has been demonstrated in
Ref.~\cite{King17}. Nevertheless, a theoretical analysis of
dissipative dynamics in this system presents a real challenge.
The probability distribution of the qubits is governed by the master
equation (\ref{rT5a}) with the relaxation matrix given by
Eq.~(\ref{GamD}).
The qubits are described by the Hamiltonian $H_S$
(\ref{HS}). In the problem Hamiltonian $H_P$ (\ref{HDP}) we have
ferromagnetic couplings between qubits, $J_{ij} = -1$, for every
pair of coupled qubits. Two internal qubits have zero biases,
$h_4=h_{10}=0$, whereas the other internal qubits are negatively
biased, with
$$h_1=h_2=h_3=h_9=h_{11}=h_{12}=-1.$$
All external qubits have positive biases:
$$ h_5 = h_6 = h_7 = h_8 = h_{13} = h_{14} = h_{15} = h_{16} = 1.$$
We use the annealing curves $\Delta_\alpha(s) = \Delta_\alpha {\cal
A } (s)$ and ${\cal B}(s)$ plotted in Fig.~\ref{AnFun}. We also take
into account minor variations of the annealing schedule between the
qubits.
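To make the setup concrete, here is a minimal sketch of the diagonal (classical) part of the problem Hamiltonian; the biases follow the text, while the coupling list is a placeholder to be read off Fig.~\ref{DStructure}:
\begin{verbatim}
import numpy as np

# H_P = sum_i h_i s_i + sum_(i,j) J_ij s_i s_j over spins s_i = +/-1,
# with J_ij = -1 on the FM-coupled pairs. Qubits are 1-based in the
# text and 0-based here.
h = np.empty(16)
h[[0, 1, 2, 8, 10, 11]] = -1.0           # h_1, h_2, h_3, h_9, h_11, h_12
h[[3, 9]] = 0.0                          # h_4, h_10
h[[4, 5, 6, 7, 12, 13, 14, 15]] = 1.0    # external qubits 5-8, 13-16

edges = []   # placeholder: FM-coupled pairs (i, j) from Fig. (DStructure)

def classical_energy(s):
    # Energy of a configuration s in {-1, +1}^16 (J_ij = -1 on edges)
    return float(h @ s) - sum(s[i] * s[j] for i, j in edges)

print(classical_energy(-np.ones(16)))    # the all-down GM configuration
\end{verbatim}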
The spectrum of the system has an extremely
small energy gap, $E_2-E_1 = 0.011$~mK, between the ground and the
first excited states \cite{Dickson13}. This gap is located at $ s^*
= 0.6396.$ In Fig.~\ref{EnLev16} we show the four lowest energy
levels of the system near the anticrossing. The most interesting
annealing dynamics occur in the interval $ s_1 < s < s_2,$ where
$s_1 = 0.625$ and $s_2 = 0.65$.
\begin{figure}
\includegraphics[width=0.4\textwidth]{Hyb1.eps}
\caption{\label{DStructure} The 16-qubit instance. Qubits are
denoted as circles, FM couplings as black lines. Colors correspond
to biases applied to the qubits.}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Hyb2.eps}
\caption{\label{AnFun} Annealing parameters ${\cal B}(s)$ (black
line) and tunneling amplitudes $\Delta_1(s), \dots, \Delta_{16}(s)$
(all other colors)
plotted as functions of $s$. }
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Hyb3.eps}
\caption{\label{EnLev16} Four energy levels of the 16-qubit system
as functions of the annealing parameter near the anticrossing of two
lowest energy levels. Energies are counted from the energy $E_1$ of
the ground state.}
\end{figure}
The two diabatic states with the lowest energies, $\ket{GM}$ and
$\ket{\Sigma}$, are given by the expressions
\ba \label{GM} \ket{\rm GM} = \ket{ \downarrow_1 \downarrow_2
\downarrow_3 \downarrow_4 \downarrow_9 \downarrow_{10}
\downarrow_{11} \downarrow_{12}} \otimes \nn \ket{ \downarrow_5
\downarrow_6 \downarrow_7 \downarrow_8 \downarrow_{13}
\downarrow_{14} \downarrow_{15} \downarrow_{16} },\;\, \nn
\ket{\Sigma} = \ket{ \uparrow_1 \uparrow_2 \uparrow_3 \uparrow_4
\uparrow_9 \uparrow_{10} \uparrow_{11} \uparrow_{12} } \otimes
\nn\ket{\rightarrow_5 \rightarrow_6 \rightarrow_7 \rightarrow_8
\rightarrow_{13} \rightarrow_{14} \rightarrow_{15} \rightarrow_{16}
}. \ea
Here we introduce the eigenstates $\ket{\uparrow_\alpha}$ and
$\ket{\downarrow_\alpha}$ of the matrix $\sigma_z^\alpha$, and also
their superposition $\ket{\rightarrow_\alpha}$,
\ba
\sigma_z^\alpha \ket{\uparrow_\alpha } =
\ket{\uparrow_\alpha },\quad \sigma_z^\alpha \ket{\downarrow_\alpha
} = - \ket{\downarrow_\alpha },\nn \ket{\rightarrow_\alpha} =
\frac{1}{\sqrt{2}} ( \ket{\uparrow_\alpha } + \ket{\downarrow_\alpha
} ).\quad \; \nonumber \ea
More details can be found in Ref.~\cite{Dickson13} and in
the supplementary information for that paper. It follows from
Fig.~2c of Ref.~\cite{Dickson13} that, before the anticrossing at $
s < s^* $, the instantaneous eigenstates of the 16-qubit system
coincide with the diabatic states: $\ket{1} = \ket{\Sigma},$
$\ket{2} = \ket{\rm GM}. $ After the anticrossing point at $s >
s^*$, we have the reverse situation, with $\ket{1} = \ket{\rm GM}$
and $\ket{2} = \ket{\Sigma}.$ Although the experimental results
reported in Ref.~\cite{Dickson13} were in accordance with the
physical intuition given in the paper, no theoretical analysis was
provided. This was due to the lack of an open quantum theory that
takes into account both low-frequency and high-frequency noise.
Here, we apply our approach to provide a theoretical explanation of
the experimental results of Ref.~\cite{Dickson13}.
The presence of a very small gap and the time dependence of the
system Hamiltonian, which renders the evolution nonadiabatic near the
minimum gap, make the problem instance in Fig.~\ref{DStructure}
difficult to analyze within one theoretical framework in all regions
during the annealing. As such, some tricks are necessary to choose
the proper basis, as we discuss next.
\subsection{Rotation of the basis}
The dissipative dynamics of the qubits coupled to a heat bath is
described by the master equations (\ref{rT5a}). These equations are
derived with the proviso that the rate $\Gamma_{nm}$ of the
relaxation (\ref{GamD}) between the states $\ket{n}$ and $\ket{m}$
is much less than the inverse time scale $\tau_{mn}^{-1}$ given by
Eq.~(\ref{tauC}), so that: $ \Gamma_{nm} \tau_{mn} \ll 1. $ For the
system of 16 qubits under study the perturbation requirement breaks
down at the anticrossing point, as is evident from
Fig.~\ref{GapTheta}a. Here we plot the energy gap, $E_2 - E_1$,
between two instantaneous eigenstates of the Hamiltonian $H_S$ (see
dot-dashed blue line), and also the MRT line width, $W_{21} = a_{21}
W $ (see continuous green line), as functions of the annealing
parameter $s$. At $s = s^*$ both parameters, $E_2-E_1$ and $W_{21}$,
become extremely small, leading to a diverging correlation time
$\tau_{mn}$ (\ref{tauC}). At the same time, $\Gamma_{12}$ becomes
very large due to the contribution from $\tilde{T}_{21}$. Both of
these break the applicability condition (\ref{GamTau}) in the
instantaneous energy basis. Moreover, the time dependence of the
Hamiltonian can create nonzero off-diagonal elements of the density
matrix near the minimum gap due to nonadiabatic transitions. These
terms do not decay quickly as required by our theory. As we shall
see, all these issues can be resolved by rotating the basis. This is
equivalent to the introduction of the pointer basis as described in
Refs.~\cite{Boixo16, Boixo14,Zurek03}.
\begin{figure}
\includegraphics[width=0.6\textwidth]{Hyb4.eps}
\caption{\label{GapTheta} (a) The energy scales $E_2-E_1$ and the
line width $ W_{21} = W a_{21}$ calculated in the instantaneous
basis of qubit states. These variables are shown as functions of the
annealing parameter $s$ near the anticrossing point. According to
(\ref{tauC}), the scales $E_2-E_1$ and $W_{21}$ determine the
inverse correlation time $\tau_{21}^{-1}$ of the bath. (b) The
$s$-dependence of the matrix element $\langle 2 | \frac{d}{d s} |
1\rangle$ calculated with Eq. (\ref{MdotN}). This matrix element is
a part of the renormalized tunneling coefficient $\bar T_{mn}$
(\ref{TkC}) and, thus, of the rate $\Gamma_{21}$ (\ref{GamB}). (c)
The optimal rotation angle $\Theta /\pi$ obtained as a solution of
Eq.~(\ref{ThetaS}). }
\end{figure}
We rotate the two anticrossing states as:
\ba \label{rot1}\ket{1'} &=& \cos\Theta \;\ket{1} + \sin \Theta
\;\ket{2}, \nn \ket{2'} &=& -\sin\Theta \;\ket{1} + \cos \Theta\;
\ket{2}. \ea
The rotation angle $\Theta$ can depend on the annealing parameter
$s$ and therefore on time $t$. For real eigenstates $\ket{1}$ and
$\ket{2}$ it follows that
\ba \langle 2' | \frac{d}{ds} | 1'\rangle = \langle 2 |
\frac{d}{ds} | 1\rangle + \frac{d \Theta}{ds}. \nonumber \ea
We choose the rotation angle $\Theta(s)$ such that in the rotated
basis
\ba \label{LZC} \langle 2' | \frac{d}{ds} | 1'\rangle = 0.\ea
This means that
the $s$-dependence of the angle $\Theta$ is determined by
\ba \label{ThetaS} \frac{d \Theta}{ds} = \frac{\langle 2 | \frac{d
H_S}{ds} | 1\rangle }{E_2 - E_1}. \ea
Notice that Eq.~(\ref{LZC}) assures minimal quantum transitions between
the two states $\ket{1'}$ and $\ket{2'}$ near the anticrossing and
therefore minimal generation of off-diagonal elements of the density
matrix. As we show in Appendix \ref{ApxRates}, it also resolves all
issues with the applicability condition (\ref{GamTau}) discussed
above.
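Before turning to the 16-qubit instance, it is instructive to integrate Eq.~(\ref{ThetaS}) for a two-level Landau-Zener toy model (our assumption, used only for illustration): $H(s) = -\frac{1}{2}[\varepsilon(s)\,\sigma_z + \Delta\,\sigma_x]$ with $\varepsilon(s) = v(s - s^*)$. Then $\langle 2 | \frac{dH}{ds} | 1 \rangle = v\Delta/2\Omega$ and $E_2 - E_1 = \Omega = \sqrt{\varepsilon^2 + \Delta^2}$, so Eq.~(\ref{ThetaS}) integrates in closed form:
\begin{verbatim}
import numpy as np

# dTheta/ds = v*Delta / (2 * (v^2 (s - s*)^2 + Delta^2)), Theta(-inf) = 0
Delta, v, s_star = 0.011, 400.0, 0.6396   # toy values; gap in mK

def Theta(s):
    return 0.5 * np.arctan(v * (s - s_star) / Delta) + np.pi / 4

for s in (0.625, s_star, 0.65):
    print(f"s = {s:.4f}   Theta/pi = {Theta(s) / np.pi:.4f}")
# Theta switches from ~0 to ~1/2 in a window of width ~Delta/v around s*
\end{verbatim}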
A solution of Eq.~(\ref{ThetaS}) is shown in Fig.~\ref{GapTheta}c.
In the beginning of annealing $\Theta \simeq 0$, so that the rotated
basis coincides with the instantaneous basis. Near the anticrossing
point, at $s = s^*$, the angle $\Theta$ rapidly switches to $\pi/2$.
We notice that, with the condition (\ref{LZC}), the renormalized
matrix element $\tilde T_{2'1'}$ (\ref{TmnT}) is
\ba \label{T21} \tilde T_{2'1'} = \langle 2' | H_S | 1'\rangle =
\frac{E_2 - E_1}{2} \, \sin 2 \Theta. \ea
This is zero before ($\Theta = 0$) and after ($\Theta = \pi/2$) the
anticrossing due to the sine function and is very small at the
anticrossing due to the small gap ($E_2-E_1\approx 0$). This means
that $\tilde T_{2'1'}$ does not contribute to the rate
$\Gamma_{1'2'}$ keeping it small, within the applicability range of
our model. For this example we rotate only the two lowest-energy
states that have an anticrossing. However, the rotation can be
applied to anticrossing excited states as well, if necessary.
\subsection{Thermal enhancement of the success probability}
The goal of annealing is to reach the ground state at the end of the
evolution. It follows from Fig.~2c of Ref.~\cite{Dickson13} that at
the end of annealing, at $t = t_f$, the ground state of the 16-qubit
system coincides with the state $\ket{\rm GM}$ shown in
Eq.~(\ref{GM}). We therefore define the success probability as the
probability $P_{\rm GM}$ to observe the system in $\ket{\rm GM}$ at
$t = t_f$. Figure 3 of Ref.~\cite{Dickson13} demonstrates the
temperature dependence of $P_{\rm GM}$. It is clear from this figure
that, at sufficiently fast annealing ($t_f \leq 100$~ms), $P_{\rm GM}$
grows with increasing temperature from 20 to 40~mK and decreases
thereafter. Our goal is to reproduce this non-monotonic
behavior with our open quantum model. Notice that we do not aim to
precisely fit the experimental data to the results of our model.
We solve numerically the master equations (\ref{rT5a}) with the
relaxation rates given by the convolution formula (\ref{GamD})
written in the rotated basis (\ref{rot1}). We perform simulations for
total anneal times $t_f$ between 0.04 and 4~ms. An example of
$P_{\rm GM}$ as a function of time is shown in Fig.~\ref{PGM}b in
Appendix \ref{ApxRates}. Fig.~\ref{PTF} plots the success
probability, $P_{\rm GM}(t_f),$ as a function of temperature $T$ for
different speeds of annealing characterized by the anneal time
$t_f$. In the theoretical calculations we assume that $\eta = 0.1$
and $W = 20$~mK.
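For reference, the integration scheme itself is elementary; a minimal sketch (not the production code behind Fig.~\ref{PTF}) propagates the populations with a forward-Euler step, the rate matrix from Eq.~(\ref{GamD}) in the rotated basis being supplied as a user function:
\begin{verbatim}
import numpy as np

def evolve(P0, Gamma_of_s, t_f, steps=100000):
    # Pauli master equation (rT5a):
    #   dP_n/dt = sum_m [Gamma[n, m] P_m - Gamma[m, n] P_n],
    # with Gamma_of_s(s)[n, m] the rate of the transition m -> n.
    P = np.asarray(P0, float).copy()
    dt = t_f / steps
    for k in range(steps):
        G = Gamma_of_s(k / steps)
        P += dt * (G @ P - G.sum(axis=0) * P)
        P = np.clip(P, 0.0, None)
        P /= P.sum()                   # guard against numerical drift
    return P

# Toy example with a constant 2x2 rate matrix:
G = np.array([[0.0, 2.0], [1.0, 0.0]])
print(evolve([1.0, 0.0], lambda s: G, t_f=10.0))  # -> [2/3, 1/3]
\end{verbatim}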
\begin{figure}
\includegraphics[width=0.5\textwidth]{Hyb5.eps}
\caption{\label{PTF} End-of-annealing probability to be in the
$\ket{\rm GM}$ state, $P_{\rm GM}$, as a function of temperature $T$
and the anneal time $t_f$ at $\eta = 0.1$ and $W = 20$~mK.}
\end{figure}
Figure~\ref{PTF} reproduces the results shown in Fig.~3 of
Ref.~\cite{Dickson13}, including the enhancement of $P_{\rm GM}$ at low
temperatures and its reduction at $T \geq 40$~mK. As mentioned in
Ref.~\cite{Dickson13}, this decrease may be related to the
excitation of higher energy levels separated from the two lowest
states by a gap of order 40~mK (see the spectrum in
Fig.~\ref{EnLev16}).
\section{Conclusions}
In this paper, we have derived a set of master equations describing
the dissipative evolution of an open quantum system interacting with a
complex environment. The environment has low-frequency and
high-frequency components, as in the case of realistic qubits
affected by the hybrid bath, which includes $1/f$ and Ohmic noise. A
part of the system-bath interaction is treated in a nonperturbative
way. This treatment allows us to combine the Bloch-Redfield and
Marcus approaches to the theory of open quantum systems and obtain
the relaxation rates, which are well-suited for the description of
dissipative dynamics of many-qubit quantum objects, such as quantum
annealers. The relaxation rates are expressed in the convenient
convolution form clearly showing the interplay between the low- and
high-frequency noise. The main results of the paper are given by the
master equations (\ref{rT5a}) with the relaxation rates
(\ref{GamD}). As an illustration, we apply the theory to the
16-qubit quantum annealer investigated in Ref.~\cite{Dickson13}. The
instance studied there features an extremely small gap between the
ground and first excited states. With the proper rotation of the
basis, we have solved the master equations and theoretically
confirmed the main experimental findings of Ref.~\cite{Dickson13}.
The results of the paper may be useful for understanding the
dissipative evolution of various systems, from chromophores in
quantum biology \cite{Yang02,Ghosh11,Lambert13} to qubits in
real-world quantum processors \cite{Gibney17, Mohseni17}.
\acknowledgements
We acknowledge fruitful discussions with Evgeny Andriyash, Mark
Dykman, Andrew King, Chris Rich and Vadim Smelyanskiy. We also thank
Joel Pasvolsky and Fiona Hanington
for careful reading of the paper.
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
\hskip 1.5em
In \cite[C6, p. 87]{croft1991} the following problem, attributed to S.K. Stein, is posed:
``Whether it is possible to partition the unit circle into congruent pieces so that the center is in the interior of one of the pieces?''.
At present, for an arbitrary number of pieces, it is considered unsolved (\cite{mathoverflow17313}).
It can be generalized and varied in many ways, as stated in the same place (\cite[p. 88]{croft1991}), not only in dimensionality.
Some related questions have been studied and answered more or less completely; see
\cite{douwen1993}, \cite{edelstein1988}, \cite{haddley2016}, \cite{kiss2016}, \cite{richter2008}, \cite{wagon1983}, to name a few.
A problem of this kind may depend greatly on the meaning of the terms involved, such as ``piece'', ``partition'', ``congruence'':
do we allow the pieces to intersect at their boundaries?
does congruence include reflection? should the pieces be connected? measurable?
For example, it is shown in \cite{wagon1983} that the ball in $\mathbb{R}^m$ cannot be ``strictly'' dissected
into $n\in [2;m]$ topologically congruent pieces, to say nothing of the centre; see also \cite{waerden1949}, \cite[25.A.6, p. 599]{gleason1980}, \cite{edelstein1988}.
Hereinafter, we distinguish between 3 types of ``decomposition'' of the set $B$ (in particular, the~ball) into congruent (sub)sets $\{ A_i \}_{i\in I}$,
so that $B = \bigcup\limits_{i \in I} A_i$ (cf. \cite[p. 79]{croft1991}, \cite[p. 49]{hertel2003}):
$\bullet$ {\it partition}: $\{ A_i \}$ are pairwise disjoint;
$\bullet$ {\it dissection}: interiors of $\{ A_i \}$ are pairwise disjoint;
$\bullet$ {\it covering} (or {\sl intra}{\it covering} to emphasize $A_i \subseteq B$): no additional constraints are {\sl required}.
These terms are not ``standardized'' and may have quite different meanings in other works.
Any partition is a dissection, and any dissection is a covering. Therefore, the impossibility of a covering satisfying certain additional conditions (e.g. relating to the centre)
implies that a dissection and a partition satisfying the same conditions are impossible as well.
However, when such a covering exists, the corresponding dissection or partition may not exist.
Here we consider ``decompositions'' of the (intra)covering type, in certain specific cases, while the original problem almost surely belongs to the dissection type;
and the majority of the referenced papers, temporally ordered from \cite{waerden1949} to \cite{kiss2016}, deal with partitions.
{\ }
{\sl The routine nature of the inference suggests that some, perhaps most or all, of the presented ``results'' are well known, even if not claimed explicitly or publicly;
the aim is rather to remove the delusion that they are not...}
There are works concerning the original centre-in-interior dissection problem, under ``natural'' (or ``physical'') assumptions
(such as the space being Euclidean, boundaries of parts being rectifiable, parts being connected): cf. \cite{haddley2016}, \cite{kanelbelov2002}, \cite{banakh2010}.
In our opinion, the most similar negative result relating to covering is obtained for pre-Hilbert spaces in \cite[Th. 1.1]{douwen1993}:
in spite of the terms ``indivisibility'' and ``partitioning'', it is actually a covering that is considered there, under the assumption that exactly one set contains the centre.
See Rem. \ref{remIneqCounterEx} below.
{\ }
We try to attain generality by considering spaces and coverings with as few additional properties and constraints as possible.
Thereby a few different interpretations of the problem (the ball is closed/open, etc.) are aggregated.
{\ }
Hereinafter, we consider a normed linear space $X$ over the field of reals $\mathbb{R}$, $\theta$ is the zero of $X$.
Where we need a specific space, such as $\mathbb{R}^m$, we note it.
Completeness of $X$ is not assumed.
$\| x \|$ is the norm of $x\in X$, inducing the metric $\rho (x, y) = \| x - y \|$.
The balls: open $B(x,r) = \{ y\in X\colon \| y - x \| < r \}$, closed $\overline{B}(x,r) = \{ y\in X\colon \| y - x \| \leqslant r \}$;
the (closed) sphere $S(x,r) = \{ y\in X\colon \| y - x \| = r \}$. $r > 0$ is assumed.
We call the sets $A\subseteq X$ and $B\subseteq X$ {\it congruent}, $A\cong B$, iff there is an isometric surjective mapping ({\it motion})
$f\colon X \leftrightarrow X$: $\forall x, y \in X$: $\| f(x) - f(y) \| = \| x - y \|$ (surjectivity implies that $f^{-1} \colon X \leftrightarrow X$ is a motion too), and $f(A) = B$.
The identity map $\mathcal{I}$: $\mathcal{I}(x) = x$ is a motion.
$\mathrm{Int}\, A = \{ x\in A\mid \exists \varepsilon > 0\colon B(x,\varepsilon) \subseteq A \}$ and
$\overline{A} = \{ x \in X \mid \forall \varepsilon > 0 \colon B(x, \varepsilon) \cap A \ne \varnothing \}$ are the interior and the closure of $A$, respectively.
{\ }
We assume that $X$ has these additional properties:
$\bullet$ $\dim X > 1$: $\exists a, b \in X$, which are linearly independent.
$\bullet$ NCS: $\| \cdot \|$ is strictly convex, that is, $\forall x, y \in S(\theta, 1)$, $x \ne y$: $\lambda \in (0;1)$ $\Rightarrow$ $\| \lambda x + (1 - \lambda ) y\| < 1$.
Conventional examples of non-NCS spaces are $\mathbb{R}^m_1$ and $\mathbb{R}^m_{\infty}$ when $m \geqslant 2$, or $L_1$ and $L_{\infty}$.
\section{Preliminaries}
\hskip 1.5em
A watery warning: some of the following lemmas seem ``folkloric''; their proofs are included for the sake of completeness and are probably present elsewhere.
\begin{lemma}\label{lmMotionSph}
If $f\colon X \leftrightarrow X$ is a motion, then $\forall S(x,r)$: $f\bigl( S(x,r) \bigr) = S(f(x), r)$.
\end{lemma}
\begin{proof}
a) $\forall y \in S(x,r)$: $\| f(y) - f(x) \| = \| y - x \| = r$ $\Rightarrow$ $f\bigl(S(x,r)\bigr) \subseteq S(f(x),r)$.
b) $\forall z\in S(f(x),r)$: $z = f(y)$, $\| y - x \| = \| f(y) - f(x) \| = \| z - f(x) \| = r$ $\Rightarrow$ $y \in S(x,r)$ $\Rightarrow$ $S(f(x), r) \subseteq f\bigl(S(x,r)\bigr)$.
\end{proof}
\begin{lemma}\label{lmMotionBall}
If $f\colon X \leftrightarrow X$ is a motion, then $\forall B(x,r)$: $f\bigl( B(x,r) \bigr) = B(f(x), r)$.
\end{lemma}
\begin{proof}
$f\bigl( B(x,r) \bigr) = f\bigl( \{ x \} \cup \bigcup\limits_{u\in (0;r)} S(x,u) \bigr) = \{ f(x) \} \cup \bigcup\limits_{u\in (0;r)} f\bigl( S(x,u) \bigr)
\stackrel{\text{Lemma \ref{lmMotionSph}}}{=}$
\hfill $= \{ f(x) \} \cup \bigcup\limits_{u\in (0;r)} S\bigl( f(x), u \bigr) = B\bigl( f(x), r \bigr)$
\end{proof}
\begin{lemma}\label{lmMotionDecomp}
Let $f\colon X \leftrightarrow X$ be a motion. Then $f = h \circ g$ (that is, $f(x) = h\bigl(g(x)\bigr)$),
where $g\colon X \leftrightarrow X$ and $h\colon X \leftrightarrow X$ are uniquely determined motions such that
1) $\forall x \in X$: $\| g(x) \| = \| x \|$ ($\Leftrightarrow$ $g(\theta) = \theta$);
2) $\exists a \in X$: $\forall x \in X$: $h(x) = x + a$.
\end{lemma}
\begin{proof}
Consider $g(x) = f(x) - f(\theta)$ and $h(x) = x + f(\theta)$. Obviously, $h \circ g = f$.
$\| g(x) \| = \| f(x) - f(\theta) \| = \| x - \theta \| = \| x \|$. (Implied by $g(\theta) {=} \theta$: $\| g(x) \| {=} \| g(x) {-} g(\theta) \| {=} \| x {-} \theta \|$.)
$g$ and $h$ are motions: $\| g(x) - g(y) \| = \| f(x) - f(y) \| = \| x - y \|$ and $\| h(x) - h(y) \| = \| x - y \|$ (isometry),
inverse maps $g^{-1}(x) = f^{-1}(x + f(\theta))$ and $h^{-1}(x) = x - f(\theta)$ imply surjectivity.
Uniqueness: $f(\theta) = h(g(\theta)) = h(\theta) = a$, $g(x) = h^{-1}(f(x)) = f(x) - a = f(x) - f(\theta)$.
\end{proof}
Here, we call $h$ the {\it shift} component and $g$ the {\it non-shift} component of the motion $f$. If $h=\mathcal{I}$ or $g=\mathcal{I}$, the respective component is called {\it trivial}.
It is easy to see that if $f$ has trivial shift or non-shift component, then the respective component of $f^{-1} = g^{-1} \circ h^{-1}$ is trivial as well.
{\ }
\begin{theorem}\label{thmIsomZero2Zero} (Mazur-Ulam, \cite{mazur1932}; \cite[5.3, Th. 12]{lax2002}).
The motion that maps $\theta$ to $\theta$ is linear.
\end{theorem}
{\bf Remark.} We consider the isometries that map $X$ onto itself,
while the theorem holds true for any bijective isometry between two normed spaces $X$ (with $\theta_X$) and $Y$ (with $\theta_Y$).
{\bf Corollary.} Non-shift component $g$ of the motion $f$ is linear: $g(\lambda x + \mu y) = \lambda g(x) + \mu g(y)$.
{\ }
\begin{lemma}\label{lmDiamTrivShift}
If the motion $f\colon X \leftrightarrow X$ is such that $\exists x \in X$: $\| f(x) \| \leqslant \| x \|$ and $\| f(-x) \| \leqslant \| x \|$, then the shift component of $f$ is trivial.
\end{lemma}
\begin{proof}
Using the notation of Lemma \ref{lmMotionDecomp}, let $f = h \circ g$ and $y = g(x)$.
For $x = \theta$: $y = \theta$, so $f(x) = a$, and $\| a \| \leqslant 0$ $\Leftrightarrow$ $a = \theta$. Suppose $x \ne \theta$.
By Th. \ref{thmIsomZero2Zero}, $-y = g(-x)$, so $f(x) = y + a$ and $\| y + a \| \leqslant \| x \| = \| y \|$,
$f(-x) = -y + a$ and $\| -y + a\| \leqslant \| y \|$ $\Leftrightarrow$ $\| y - a \| \leqslant \| y \|$.
If $\| y + a\| < \| y \|$ or $\| y - a \| < \| y \|$, then by triangle inequality $2 \| y \| = \| y - (-y) \| \leqslant \| y - a \| + \| a - (-y) \| < 2 \| y \|$, --- a contradiction;
thus $\| y - a \| = \| y + a \| = \| y \|$.
$y = \frac{1}{2}(y-a) + \frac{1}{2}(y+a)$. Assume $a \ne \theta$ $\Leftrightarrow$ $y - a \ne y + a$. For $s = (y-a) / \| y \|$, $t = y / \| y \|$, $u = (y+a) / \| y \|$:
$s,t,u \in S(\theta, 1)$, $s \ne u$, $\| \frac{1}{2}s + \frac{1}{2}u \| = \| t \| = 1$, contradicting NCS. So $a = \theta$.
\end{proof}
{\ }
Let $a_1$, ..., $a_m$ be linearly independent (LI) elements of $X$ (thus $\dim X \geqslant m$).
We denote by $M(a_1, ..., a_m) = \bigl\{ \sum\limits_{i=1}^m x_i a_i \mid x_i \in \mathbb{R} \bigr\}$ the $m$-dimensional linear manifold generated by them.
It follows from LI that $\forall x \in M(a_1, ..., a_m)$ the coordinates $\{ x_i \}$ are determined uniquely.
Suppose $x^{(k)}, y \in M(a_1, ..., a_m)$. Since $\| x^{(k)} - y \| \leqslant \sum\limits_{i=1}^m |x^{(k)}_i - y_i| \cdot \| a_i \|$ by triangle inequality,
we immediately see that $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} y_i$ for $i=\overline{1,m}$
implies $x^{(k)} \xrightarrow[k\rightarrow \infty]{} y$, that is, $\| x^{(k)} - y \| \xrightarrow[k\rightarrow \infty]{} 0$.
The converse implication and the closedness of $M(a_1, ..., a_m)$ (making it a subspace of $X$),
though known well enough (see \cite[1.2.3]{cotlar1974}, \cite[5.2, Ex. 4]{lax2002}), are obtained in the next lemma by ``elementary'' reasonings,
without resort to norm equivalence or functionals.
\begin{lemma}\label{lmFinDimManifClosed}
If $x^{(k)} \in M(a_1, ..., a_m)$ and $x^{(k)} \xrightarrow[k\rightarrow \infty]{} x \in X$,
then $x \in M(a_1, ..., a_m)$, which is closed therefore, and $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} x_i$ for $i=\overline{1,m}$.
\end{lemma}
\begin{proof}
The proof is by induction over $\dim M(a_1, ..., a_m)$.
Let $m = 1$. The sequence $\{ x^{(k)} \}_k = \{ x^{(k)}_1 a_1 \}_k$ is convergent (conv.), therefore it is fundamental (fund.).
Assume that $\{ x^{(k)}_1 \}_k$ is not conv., then it isn't fund. due to completeness of $\mathbb{R}$:
\centerline{$\exists \varepsilon_0 > 0$: $\forall N \in \mathbb{N}$: $\exists k_1, k_2 > N$: $|x^{(k_1)}_1 - x^{(k_2)}_1| \geqslant \varepsilon_0$}
But then $\| x^{(k_1)} - x^{(k_2)} \| = | x^{(k_1)}_1 - x^{(k_2)}_1 | \cdot \| a_1 \| \geqslant \varepsilon_0 \| a_1 \| > 0$, which contradicts the fund. of $\{ x^{(k)} \}_k$.
Hence $\exists \lim\limits_{k\rightarrow \infty} x^{(k)}_1 = \widetilde{x}_1$. Let $\widetilde{x} = \widetilde{x}_1 a_1$.
$\| x^{(k)} - \widetilde{x} \| = |x^{(k)}_1 - \widetilde{x}_1| \cdot \| a_1 \| \xrightarrow[k\rightarrow \infty]{} 0$ $\Rightarrow$ $x^{(k)} \rightarrow \widetilde{x}$ as $k\rightarrow \infty$.
This means that $x = \widetilde{x} \in M(a_1)$ and $x^{(k)}_1 \rightarrow x_1$ as $k\rightarrow \infty$.
Now suppose that the statement holds true for $\dim M\bigl(\{ a_i \}\bigr) = 1, 2, ..., m-1$.
Consider the conv. $\{ x^{(k)} \}_k = \{ \sum\limits_{i=1}^m x^{(k)}_i a_i \}_k$, it is fund.
Take any $i_0 = \overline{1,m}$, for instance $i_0 = m$. Assume that $\{ x^{(k)}_m \}$ isn't conv., then it isn't fund.,
$\exists \varepsilon_0 > 0$: $\forall N \in \mathbb{N}$: $\exists k_1, k_2 > N$: $|x^{(k_1)}_m - x^{(k_2)}_m| \geqslant \varepsilon_0$, and
\centerline{$\| x^{(k_1)} - x^{(k_2)} \| = \bigl\| \sum\limits_{i=1}^m (x^{(k_1)}_i - x^{(k_2)}_i) a_i \bigr\| =$}
\centerline{$= |x^{(k_1)}_m - x^{(k_2)}_m| \cdot \bigl\| a_m + \sum\limits_{i=1}^{m-1} \frac{x^{(k_1)}_i - x^{(k_2)}_i}{x^{(k_1)}_m - x^{(k_2)}_m} a_i \bigr\| =
|x^{(k_1)}_m - x^{(k_2)}_m| \cdot \| a_m - z \|$}
\noindent
where $z \in M (a_1, ..., a_{m-1}) = M_{m-1}$. It follows from $\dim M_{m-1} = m-1$ and the induction hypothesis that $M_{m-1}$ is closed.
$a_m \notin M_{m-1}$ due to LI, therefore $\| a_m - z \| \geqslant \rho (a_m, M_{m-1}) > 0$. And we obtain
$\| x^{(k_1)} - x^{(k_2)} \| \geqslant \varepsilon_0 \rho (a_m, M_{m-1}) > 0$,
which contradicts the fund. of $\{ x^{(k)} \}_k$. Hence $\exists \lim\limits_{k\rightarrow \infty} x^{(k)}_m = \widetilde{x}_m$,
and similarly $\exists \lim\limits_{k\rightarrow \infty} x^{(k)}_i = \widetilde{x}_i$ for $i = \overline{1,m-1}$. Let $\widetilde{x} = \sum\limits_{i=1}^m \widetilde{x}_i a_i$.
$\| x^{(k)} - \widetilde{x} \| \leqslant \sum\limits_{i=1}^m |x^{(k)}_i - \widetilde{x}_i| \cdot \| a_i \| \xrightarrow[k\rightarrow \infty]{} 0$, so $x^{(k)} \xrightarrow[k\rightarrow \infty]{} \widetilde{x}$.
Consequently, $x = \widetilde{x} \in M(a_1, ..., a_m)$ and $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} x_i$ for $i = \overline{1,m}$.
By induction principle, the statement is true for $\forall m \in \mathbb{N}$.
\end{proof}
{\ }
\begin{theorem}\label{thmLSB} (Lusternik-Schnirelmann-Borsuk (LSB), \cite[II.5]{lusternik1930}, \cite{borsuk1933}; see also \cite{matousek2008}).
Let the sphere $S_m = \bigl\{ x\in \mathbb{R}^m \colon \| x \|_m = \sqrt{\sum\limits_{j=1}^m x_j^2} = r \bigr\} = \bigcup\limits_{i=1}^m A_i$, where $A_i$ are closed.
Then $\exists i_0$, $\exists x\in S_m$: $\{ x, -x \} \subseteq A_{i_0}$, --- one of $A_i$ contains the pair of antipodal points of $S_m$.
\end{theorem}
{\ }
The immediate corollary of LSB theorem is this generalization for normed spaces:
\begin{lemma}\label{lmLSBNormSpace}
Let $\dim X \geqslant m \in \mathbb{N}$, that is, $\exists a_1, ..., a_m \in X$, which are linearly independent.
If $S(\theta, r) = \bigcup\limits_{i=1}^m A_i$, where $A_i$ are closed, then $\exists A_{i_0}$, $\exists x \in S(\theta, r)$: $\{ x, -x \} \subseteq A_{i_0}$.
\end{lemma}
See \cite[p. 119]{bollobas2006}; most likely it is also mentioned in \cite{steinlein1985}; a more general form is given in, e.g., \cite[p. 39]{arandjelovic1999}.
\begin{proof}
Let $L = M(a_1, ..., a_m)$ be the subspace of $X$ generated by $\{ a_i \}$ (by Lemma \ref{lmFinDimManifClosed}, $L$ is closed),
$C = S(\theta, r) \cap L$, $S_m = \bigl\{ y \in \mathbb{R}^m \colon \| y \|_m = 1 \bigr\}$.
$\forall x\in L$ has the unique representation $x = (x_1; ...; x_m) = \sum\limits_{i=1}^m x_i a_i$.
Therefore the mapping $s \colon C \rightarrow S_m$: $s(x) = \bigl( x_1 / \| x \|_m ; ...; x_m / \| x \|_m \bigr)$ is well defined.
Moreover, we claim that $s$ is a homeomorphism.
1) $s$ is injective. Indeed, if $s(x') = s(x'')$, where $x', x'' \in C$, then $\frac{x'_i}{\| x' \|_m} = \frac{x''_i}{\| x'' \|_m}$ for $i = \overline{1,m}$,
thus $x'_i = \alpha x''_i$ for $\alpha = \| x' \|_m / \| x'' \|_m > 0$. So $x' = \alpha x''$ $\Rightarrow$ $r = \| x' \| = |\alpha| \cdot \| x'' \| = r \alpha$ $\Rightarrow$ $\alpha = 1$.
2) $s$ is surjective. $\forall y=(y_1;...;y_m) \in S_m$: $s^{-1}(y) = \frac{r}{\| x \|} x$, where $x = \sum\limits_{i=1}^m y_i a_i$.
3) $s$ is continuous. Let $C \ni x^{(k)} \xrightarrow[k\rightarrow \infty]{} x$. Using closedness of $S(\theta, r)$ and Lemma \ref{lmFinDimManifClosed}, we obtain:
$x \in S(\theta, r) \cap L = C$ and $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} x_i$.
Therefore $\| x^{(k)} \|_m \xrightarrow[k\rightarrow \infty]{} \| x \|_m$, and $s(x^{(k)}) \xrightarrow[k\rightarrow \infty]{} s(x)$.
4) $s^{-1}$ is continuous too. For $S_m \ni y^{(k)} \xrightarrow[k\rightarrow \infty]{} y$: $S_m$ is closed $\Rightarrow$ $y \in S_m$, and $y^{(k)}_i \xrightarrow[k\rightarrow \infty]{} y_i$.
Let $x^{(k)} = \sum\limits_{i=1}^m y^{(k)}_i a_i$, $x = \sum\limits_{i=1}^m y_i a_i$, then $x^{(k)} \xrightarrow[k\rightarrow \infty]{} x$,
$\| x^{(k)} \| \xrightarrow[k\rightarrow \infty]{} \| x \|$, so $s^{-1}(y^{(k)}) \xrightarrow[k\rightarrow \infty]{} s^{-1}(y)$.
Consider $C_i = A_i \cap C = A_i \cap S(\theta, r) \cap L$, they are closed.
Hence the image $s(C_i) \subseteq S_m$, under homeomorphic mapping $s$, is closed too (\cite[XII, \S 3]{kuratowski1961}).
$\bigcup\limits_{i=1}^m C_i = \bigl( \bigcup\limits_{i=1}^m A_i \bigr) \cap C = S(\theta, r) \cap C = C$, so $\bigcup\limits_{i=1}^m s(C_i) = S_m$.
By LSB theorem, $\exists i_0$, $\exists y\in S_m$: $\{ y,-y \} \subseteq s(C_{i_0})$.
Since $s^{-1}(-y) = - s^{-1}(y)$ (by (2)), we obtain $x = s^{-1}(y) \in C$: $\{ x, -x \} \subseteq C_{i_0} \subseteq A_{i_0}$.
\end{proof}
{\ }
In the Main section, a certain infinite-dimensional ball covering will be considered, for which the following Hilbert-space-related lemmas are needed.
{\ }
We denote by $H = l_2$ the separable infinite-dimensional Hilbert space over $\mathbb{R}$.
Until the end of this section, $\| \cdot \| = \| \cdot \|_H$ denotes the norm in $H$. $S = S(\theta, 1)$ is the unit sphere of $H$.
$\inprod{x}{y}$ is the scalar/inner product of $x, y\in H$,
$\angle(x, y) = \arccos \frac{\inprod{x}{y}}{\| x \| \cdot \| y \|} \in [0;\pi]$ is the angle between $x$ and $y$
($\angle(x,y) = 0$ if $x = \theta$ or $y = \theta$). $x \perp y$ means $\inprod{x}{y} = 0$.
The ``basic'' properties of $H$ and $\inprod{\cdot}{\cdot}$ (like $\inprod{x}{x} = \| x \|^2$) are assumed to be known; see e.g. \cite[II.3]{cotlar1974}, \cite[6]{lax2002}.
\begin{lemma}\label{lmCountDenseSphGeod}
$\exists D = \{ d_i \}_{i\in \mathbb{N}} \subset S$ such that $\forall \beta > 0$, $\forall x \in S$: $\exists d \in D$: $\angle(x,d) < \beta$.
\end{lemma}
In other words, there is a countable subset $D$ of $S$ which is everywhere dense (ED) in the ``geodesic'' metric $\rho_S(x,y) = \angle(x,y)$ on $S$
(see \cite[6.4, 17.4]{deza2009}). Such $D$ is said to be {\it geodesically dense in $S$}.
\begin{proof}
$H$ is separable: $\exists R \subset H$, countable and ED in $H$. Let $D = \{ \frac{y}{\| y \|} \mid y\in R, y\ne \theta \}$; $D \subset S$ and $D$ is countable.
Take any $x \in S$. $\forall \delta > 0$ $\exists y \in R$: $\| x - y \| < \delta$. Then, by triangle inequality,
\centerline{$ 1 - \delta < \| x \| - \| x - y \| \leqslant \| y \| \leqslant \| y - x \| + \| x \| < 1 + \delta$}
\noindent
hence $\| x - \frac{y}{\| y \|} \| = \frac{1}{\| y \|} \cdot \bigl\| x \cdot \| y \| - y \bigr\| \leqslant \frac{1}{\| y \|} \Bigl[\bigl\| (\| y \| - 1) x \bigr\| + \| x - y \| \Bigr] \leqslant
\frac{1}{1 - \delta} \bigl[ \delta \| x \| + \delta \bigr] = \frac{2\delta}{1 - \delta}$.
Since $\frac{2\delta}{1 - \delta} \rightarrow 0$ as $\delta \rightarrow 0$, we obtain for $\forall \varepsilon > 0$: $\exists d = \frac{y}{\| y \|} \in D$: $\| x - d \| < \varepsilon$.
Consider $\varepsilon = \frac{1}{n}$ to get $\{ d_n \}_{n\in \mathbb{N}} \subset D$: $d_n \xrightarrow[n\rightarrow \infty]{} x$.
(So, $D$ is ED on $S$ in $\| \cdot \|$-induced metric).
It follows from continuity of $\| \cdot \|$ and $\inprod{\cdot}{\cdot}$ that
$\frac{\inprod{d_n}{x}}{\| d_n \| \cdot \| x \|} \xrightarrow[n\rightarrow \infty]{} \frac{\inprod{x}{x}}{\| x \|^2} = 1$.
In turn, continuous $\arccos \frac{\inprod{d_n}{x}}{\| d_n \| \cdot \| x \|} \xrightarrow[n\rightarrow \infty]{} 0$,
therefore $\exists d_{n_{\beta}} \in D$: $\angle(x,d_{n_{\beta}}) < \beta$.
\end{proof}
{\bf Remark.} Given such $D$, it is easy to see that $\{ A_i \}_{i\in \mathbb{N}} = \{ \overline{B}(d_i, \varepsilon) \cap S \}_{i\in \mathbb{N}}$ for $\varepsilon < 1$
is a covering of $S$ by closed subsets (moreover, $A_i \cong A_j$, see Lemma \ref{lmOmtdCongr}).
$\dim H = \aleph_0 = |\{ A_i\}|$, however, $\mathrm{diam} A_i \leqslant \mathrm{diam} \overline{B}(d_i, \varepsilon) = 2\varepsilon < 2$, thus no $A_i$ contains antipodal points of $S$, ---
the ``straightforward'' attempt at an infinite-dimensional generalization of the LSB theorem fails. Cf. \cite{cutler1973}.
\begin{lemma}\label{lmHilbertNonShiftInprod}
If $g\colon H \leftrightarrow H$ is a non-shift motion, $g(\theta) = \theta$, then $\forall x, y \in H$: $\inprodbig{g(x)}{g(y)} = \inprod{x}{y}$.
\end{lemma}
\begin{proof}
$\inprodbig{g(x)}{g(y)} = \inprodbig{g(x) - g(y) + g(y)}{g(y)} = \inprodbig{g(x) - g(y)}{g(y)} + \| g(y) \|^2 =$
\centerline{$= \inprodbig{g(x) - g(y)}{g(y) - g(x) + g(x)} + \| y \|^2 \stackrel{\text{Th. \ref{thmIsomZero2Zero}}}{=}$}
\noindent
$= - \inprodbig{g(x - y)}{g(x - y)} + \| g(x) \|^2 - \inprodbig{g(x)}{g(y)} + \| y \|^2 =
\| x \|^2 + \| y \|^2 - \| x - y \|^2 - \inprodbig{g(x)}{g(y)}$,
therefore $\inprodbig{g(x)}{g(y)} = \frac{1}{2} \bigl[ \| x \|^2 + \| y \|^2 - \inprod{x - y}{x - y} \bigr] = \frac{1}{2} \bigl[ 2 \inprod{x}{y} \bigr] = \inprod{x}{y}$.
\end{proof}
{\ }
{\bf Definition.} Let $H \ni s \ne e \in H$, $\gamma \in [0;\pi]$. We call the set
\centerline{$C(s, e, \gamma) = \bigl\{ x \in H \colon \| x - s \| \leqslant \| e - s \| \text{ and } \angle(x - s, e - s) \leqslant \gamma \bigr\} \subseteq \overline{B}(s, \| e - s \|)$}
\noindent
the (closed) {\it ommatidium}, with origin at $s$, around $[s, e]$, of angle $\gamma$ and of radius $\| e - s \|$.
It is actually a ``sector'' of the ball $\overline{B}(s, \| e - s \|)$; in $\mathbb{R}^2$ it would be a usual disk sector.
\begin{lemma}\label{lmOmtdSegm}
If $s \ne x \in C(s, e, \gamma)$, then $\forall \lambda \in [0; \frac{\| e - s\|}{\| x - s \|}]$: $s + \lambda (x - s) \in C(s, e, \gamma)$.
\end{lemma}
\begin{proof}
This follows directly from the definition.
\end{proof}
\begin{lemma}\label{lmOmtdCongr}
Two ommatidiums of the same angle and radius are congruent in $H$.
\end{lemma}
\begin{proof}
Evidently, a parallel shift $h(x) = x + a$ transforms $C(s, e, \gamma)$ onto $C(s + a, e + a, \gamma)$.
Thus we consider, without loss of generality, $C_1 = C(\theta, e_1, \gamma)$ and $C_2 = C(\theta, e_2, \gamma)$,
where $\| e_1 \| = \| e_2 \| = r$, $e_1 \ne e_2$. We are going to find the non-shift motion $g$ such that $g(C_1) = C_2$.
It suffices to obtain $g$ such that $g(e_1) = e_2$. Indeed, $\forall x \in C_1$ we have then $\| g(x) \| = \| x \| \leqslant r$ and
$\angle(g(x), e_2) = \arccos \frac{\inprod{g(x)}{g(e_1)}}{\| g(x) \| \cdot \| g(e_1) \|} \stackrel{\text{Lemma \ref{lmHilbertNonShiftInprod}}}{=}
\arccos \frac{\inprod{x}{e_1}}{\| x \| \cdot \| e_1 \|} = \angle(x, e_1) \leqslant \gamma$, so $g(x) \in C_2$.
Conversely, $\forall x \in C_2$: $g^{-1} (x) \in C_1$, because $g^{-1}$ is a non-shift motion as well, and $g^{-1} (e_2) = e_1$.
We apply the ``coordinate'' approach to define such $g$.
Let $e_1' = e_1 / r$, $e_2' = e_2 / r$. They generate the 2-dimensional subspace $M = M(e_1', e_2')$ of $H$.
$\exists u \in M$ such that $\| u \| = 1$ and $u \perp e_1'$, hence $M = M(e_1', u)$ and $\forall z\in M$: $z = z_1 e_1' + z_2 u$, $\| z \|^2 = z_1^2 + z_2^2$.
Then $e_2' = (\cos \alpha) e_1' + (\sin \alpha) u$ for some $\alpha \in (0;2\pi)$.
$H = M \oplus L$, where $L$ is the orthogonal complement of $M$. It follows that $\forall x \in H$ has unique representation $x = x_1 e_1' + x_2 u + w_x$, where $w_x \in L$,
and $\| x \|^2 = x_1^2 + x_2^2 + \| w_x \|^2$. In particular, $e_1 = r e_1'$ and $e_2 = (r \cos \alpha) e_1' + (r \sin \alpha) u$.
Let $g(x) = (x_1 \cos \alpha - x_2 \sin \alpha) e_1' + (x_1 \sin \alpha + x_2 \cos \alpha) u + w_x$. It has the required properties:
1) $g$ is isometric: $\| g(x) - g(y) \|^2 =$
\centerline{$= \bigl[ (x_1 - y_1) \cos \alpha - (x_2 - y_2) \sin \alpha \bigr]^2 + \bigl[ (x_1 - y_1) \sin \alpha + (x_2 - y_2) \cos \alpha \bigr]^2 + \| w_x - w_y \|^2 =$}
\centerline{$ = (x_1 - y_1)^2 + (x_2 - y_2)^2 + \| w_x - w_y \|^2 = \| x - y \|^2$}
2) $g$ is surjective: $g^{-1}(x) = (x_1 \cos \alpha + x_2 \sin \alpha) e_1' + (-x_1 \sin \alpha + x_2 \cos \alpha) u + w_x$.
3) $g(\theta) = \theta + \theta + \theta = \theta$, and 4) $g(e_1) = (r \cos \alpha) e_1' + (r \sin \alpha) u + \theta = e_2$.
\end{proof}
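The construction above is easy to test numerically; the following sketch realizes the motion $g$ in $\mathbb{R}^m$ (a finite-dimensional stand-in for $H$, assumed here purely for the experiment) and checks isometry and $g(e_1) = e_2$ on random data:
\begin{verbatim}
import numpy as np

def plane_rotation(e1, e2):
    # Rotation in the plane M(e1', u) that fixes the orthogonal
    # complement L and maps e1 to e2 (assumes ||e1|| = ||e2||, e1 != e2)
    r = np.linalg.norm(e1)
    e1n = e1 / r
    u = e2 - (e2 @ e1n) * e1n          # Gram-Schmidt: u in M, u _|_ e1
    u /= np.linalg.norm(u)
    cos_a, sin_a = (e2 @ e1n) / r, (e2 @ u) / r
    def g(x):
        x1, x2 = x @ e1n, x @ u
        w = x - x1 * e1n - x2 * u      # component w_x in L
        return ((x1 * cos_a - x2 * sin_a) * e1n
                + (x1 * sin_a + x2 * cos_a) * u + w)
    return g

rng = np.random.default_rng(1)
e1, e2 = rng.normal(size=6), rng.normal(size=6)
e2 *= np.linalg.norm(e1) / np.linalg.norm(e2)      # equal radii
g = plane_rotation(e1, e2)
x, y = rng.normal(size=6), rng.normal(size=6)
assert np.isclose(np.linalg.norm(g(x) - g(y)), np.linalg.norm(x - y))
assert np.allclose(g(e1), e2)
\end{verbatim}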
\begin{lemma}\label{lmOmtdCoverBall}
If $D = \{ d_i \}_{i\in \mathbb{N}} \subset S$ is geodesically dense in $S$, then $\forall \beta > 0$:
$\overline{B}(\theta, 1) = \bigcup\limits_{i \in \mathbb{N}} C(\theta, d_i, \beta)$.
\end{lemma}
\begin{proof}
$C(\theta, d_i, \beta) \subseteq \overline{B}(\theta, 1)$ is obvious.
$\theta \in C(\theta, d_i, \beta)$ for any $i$. Take $\forall x \in \overline{B}(\theta, 1) \backslash \{ \theta \}$, then $x' = x / \| x \| \in S$ and $x = \| x \| \cdot x'$.
By definition of $D$, $\exists d \in D$: $\angle(x', d) < \beta$, thus $x' \in C(\theta, d, \beta)$.
By Lemma \ref{lmOmtdSegm}, $x \in C(\theta, d, \beta)$ too.
\end{proof}
\begin{lemma}\label{lmOmtdInsideBall}
Let $d \in S$ and $\gamma \leqslant \arccos \frac{1}{4}$. Then $C_0 = C(-\frac{1}{2}d, \frac{1}{2}d, \gamma) \subset \overline{B}(\theta, 1)$.
\end{lemma}
\begin{proof}
Due to convexity of $\overline{B}(\theta, 1)$, we only need to prove that $\forall x \in S(-\frac{1}{2}d, 1) \cap C_0$: $\| x \| \leqslant 1$,
because $\forall y \in C_0 \backslash \{ -\frac{1}{2}d \}$: $y = \lambda x + (1 - \lambda) (-\frac{1}{2} d)$
for $x = -\frac{1}{2}d + \frac{y + \frac{1}{2}d}{\|y + \frac{1}{2}d\|} \in S(-\frac{1}{2}d, 1) \cap C_0$ ($\in C_0$ follows from Lemma \ref{lmOmtdSegm})
and $\lambda = \| y + \frac{1}{2} d\| \in [0;1]$.
Let $H = M(d) \oplus T$, where $T$ is the orthogonal complement of $M(d)$; then $\forall y \in H$:
$y = y_1 d + y_2 u$ for some unit vector $u \in T$ (depending on $y$), $d \perp u$, and $\| y \|^2 = y_1^2 + y_2^2$.
In particular, for $y = x + \frac{1}{2}d$: $\| y \| = 1$, hence we can represent $y_1 = \cos \beta \geqslant 0$, $y_2 = \sin \beta$ for $\beta \in [0;2\pi)$.
Moreover, $y_1 = \inprod{y}{d} = \frac{\inprod{y}{d}}{\| y \| \cdot \| d \|} = \cos \angle(x + \frac{1}{2}d,d)$.
$x \in C_0$, so $y_1 = \cos \beta \geqslant \cos \gamma \geqslant \frac{1}{4}$,
\hfill $\| x \|^2 = \| y - \frac{1}{2} d \|^2 = \| (\cos \beta - \frac{1}{2}) d + \sin \beta \cdot u \|^2 = (\cos \beta - \frac{1}{2})^2 + \sin^2 \beta =
\frac{5}{4} - \cos \beta \leqslant 1$
\end{proof}
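A numerical sanity check of the lemma (a sketch only; $\mathbb{R}^m$ serves as a finite-dimensional stand-in for $H$, which suffices since the computation takes place in a two-dimensional subspace):
\begin{verbatim}
import numpy as np

# Sample points of C(-d/2, d/2, gamma), gamma = arccos(1/4), and verify
# that each sampled point lies in the closed unit ball.
rng = np.random.default_rng(2)
m, gamma = 8, np.arccos(0.25)
d = np.zeros(m); d[0] = 1.0            # any unit vector, by symmetry
s, e = -0.5 * d, 0.5 * d               # origin s and apex point e of C

hits = 0
while hits < 10**4:
    v = rng.normal(size=m)             # uniform point of the ball B(s, 1)
    x = s + v / np.linalg.norm(v) * rng.random() ** (1 / m)
    v = x - s
    cosang = v @ (e - s) / (np.linalg.norm(v) * np.linalg.norm(e - s))
    if np.arccos(np.clip(cosang, -1.0, 1.0)) <= gamma:
        hits += 1                      # x belongs to C(s, e, gamma)
        assert np.linalg.norm(x) <= 1 + 1e-12
print("all sampled ommatidium points lie in the closed unit ball")
\end{verbatim}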
\begin{lemma}\label{lmOmtdOriginNonIntr}
If $\gamma < \pi$, then $s \notin \mathrm{Int}\, C(s, e, \gamma)$.
\end{lemma}
\begin{proof}
Let $v = e - s$, then $\forall \varepsilon > 0$: $\angle (-\varepsilon v, e - s) = \arccos \frac{\inprod{-\varepsilon v}{v}}{\| -\varepsilon v \| \cdot \| v \|} = \arccos (-1) = \pi > \gamma$,
hence $B(s, 2 \varepsilon \| e - s\|) \ni s -\varepsilon v \notin C(s, e, \gamma)$, and $B(s, 2 \varepsilon \| e - s\|) \nsubseteq C(s, e, \gamma)$.
\end{proof}
\begin{lemma}\label{lmOmtdMiddleInt}
If $\gamma > 0$, then $\frac{1}{2}(s + e) \in \mathrm{Int}\, C(s, e, \gamma)$.
\end{lemma}
\begin{proof}
Without loss of generality, assume that $s = \theta$, $\| e \| = 1$, and $\gamma \leqslant \frac{\pi}{4}$
(otherwise move the ommatidium so that its origin becomes $\theta$ by Lemma \ref{lmOmtdCongr}, scale it to attain $\| e \| = 1$ ($x\leftrightarrow x / \| e \|$),
and consider $C(\theta, e, \frac{\pi}{4}) \subseteq C(\theta, e, \gamma)$).
We need to show that $\exists \varepsilon > 0$: $B(\frac{1}{2}e, \varepsilon) \subseteq C(\theta, e, \gamma)$ $\Leftrightarrow$ $\forall x\in B(\frac{1}{2}e, \varepsilon)$:
$\| x\| \leqslant 1$ and $\angle (x, e) \leqslant \gamma$; the latter inequality is equivalent to $\cos \angle(x, e) \geqslant \cos \gamma$.
For arbitrary $\varepsilon > 0$ and $\forall x \in B(\frac{1}{2}e, \varepsilon)$: $x = \frac{1}{2}e + b$, where $\| b \| < \varepsilon$.
Then $\| x\| \leqslant \| \frac{1}{2} e\| + \| b\| < \frac{1}{2} + \varepsilon$; the constraint $\varepsilon < \frac{1}{2}$ ensures $\| x\| < 1$.
\centerline{$\cos \angle(x, e) = \frac{\inprod{x}{e}}{\| x \| \cdot \| e\|} = \frac{1}{\| \frac{1}{2}e + b \|} \bigl[ \inprod{\frac{1}{2}e}{e} + \inprod{b}{e} \bigr] =
\frac{1}{\| e + 2b\|} + \frac{2}{\| e + 2b\|} \inprod{b}{e}$}
1) $\| e + 2b \| \leqslant \| e \| + 2 \| b \| < 1 + 2\varepsilon$, hence $\frac{1}{\| e + 2b\|} > \frac{1}{1 + 2 \varepsilon}$.
2) On the other hand, $\| e + 2b\| \geqslant \| e \| - \| - 2b\| > 1 - 2 \varepsilon$ $\Rightarrow$ $\frac{2}{\| e + 2b\|} < \frac{2}{1 - 2\varepsilon}$,
and Cauchy-Bunyakowsky-Schwartz inequality implies $\bigl| \inprod{b}{e} \bigr| \leqslant \| b \| \cdot \| e \| < \varepsilon$,
therefore $\frac{2}{\| e + 2b\|} \inprod{b}{e} > - \frac{2\varepsilon}{1 - 2\varepsilon}$.
Consequently (for sufficiently small $\varepsilon$) $\cos \angle(x, e) > \frac{1}{1 + 2 \varepsilon} - \frac{2\varepsilon}{1 - 2 \varepsilon} \rightarrow 1$ as $\varepsilon \rightarrow 0$,
thus for some $\varepsilon_0 \in (0;\frac{1}{2})$ we obtain: $\cos \angle(x, e) \geqslant \cos \gamma$ for each $x\in B(\frac{1}{2}e, \varepsilon_0)$.
\end{proof}
\begin{lemma}\label{lmOmtdConvex}
If $\gamma \leqslant \frac{\pi}{2}$, then $C(s, e, \gamma)$ is convex.
\end{lemma}
\begin{proof}
Again, we assume $s = \theta$ and $\| e \| = 1$ without loss of generality.
Let $x, y \in C(\theta, e, \gamma)$. We claim that $\forall \lambda \in [0;1]$: $z = \lambda x + (1 - \lambda) y \in C(\theta, e, \gamma)$.
If $x = \theta$ or $y = \theta$, then $z \in C(\theta, e, \gamma)$ by Lemma \ref{lmOmtdSegm}.
If not: clearly $\theta \in C(\theta, e, \gamma)$, suppose $z \ne \theta$.
1) $\| z\| \leqslant \lambda \| x \| + (1 - \lambda) \| y \| \leqslant \lambda \cdot 1 + (1 - \lambda) \cdot 1 = 1$;
2) $\cos \angle(z, e) = \frac{\inprod{z}{e}}{\| z \| \cdot \| e\|} = \bigl[ \lambda \frac{\inprod{x}{e}}{\| z \|} + (1 - \lambda) \frac{\inprod{y}{e}}{\| z \|} \bigr] =
\bigl[ \lambda \frac{\inprod{x}{e}}{\| x\|} \cdot \frac{\| x \|}{\| z \|} + (1 - \lambda) \frac{\inprod{y}{e}}{\| y\|} \cdot \frac{\| y \|}{\| z \|} \bigr] =$
\hfill $= \bigl[ \frac{\lambda \| x \|}{\| z\|} \cos \angle(x, e) + \frac{(1 - \lambda) \| y\|}{\| z \|} \cos \angle(y, e) \bigr] \geqslant
\frac{\lambda \| x\| + (1 - \lambda) \| y\|}{\| \lambda x + (1 - \lambda) y \|} \cos \gamma \stackrel{\cos \gamma \geqslant 0}{\geqslant} \cos \gamma$
$\Leftrightarrow$ $\angle(z, e) \leqslant \gamma$
\end{proof}
\section{Main}
\begin{prop}\label{propMain}
Let $\dim X \geqslant m \in \mathbb{N}$, $B(\theta, 1) \subseteq E \subseteq \overline{B}(\theta, 1)$, and $E = \bigcup\limits_{i=1}^m A_i$, where $A_i \cong A_j$.
Then either $\theta \in \bigcap\limits_{i=1}^m \mathrm{Int}\, A_i$, or $\theta \notin \bigcup\limits_{i=1}^m \mathrm{Int}\, A_i$.
\end{prop}
\begin{proof}
Suppose $m \geqslant 2$ (the case $m = 1$ is trivial). Let $K = \overline{B}(\theta, 1)$, $S = S(\theta, 1)$.
$K = \overline{B(\theta, 1)} \subseteq \overline{E} \subseteq \overline{\overline{B}(\theta, 1)} = K$ $\Leftrightarrow$ $\overline{E} = K$.
Let $f_{ij}$ be the motion transforming $A_i$ to $A_j$, so that $f_{ij}(A_i) = A_j$, and $f_{ji} = f^{-1}_{ij}$ ($f_{ii} = \mathcal{I}$).
Consider $S_i = \overline{A_i} \cap S$. They are closed and $\bigcup\limits_{i=1}^m S_i = \bigl( \bigcup\limits_{i=1}^m \overline{A_i} \bigr) \cap S =
\overline{\bigcup\limits_{i=1}^m A_i} \cap S = K \cap S = S$. By Lemma \ref{lmLSBNormSpace}, $\exists S_k$, $\exists d \in S$: $\{ d, -d \} \subseteq S_k$.
Take any $i \ne k$. Let $A_k' = A_k \cup S_k \cup f^{-1}_{ki}(S_i)$ and $A_i' = A_i \cup S_i \cup f_{ki}(S_k)$.
1) $A_i' \subseteq K$. Indeed, a) $A_i \subseteq E \subseteq K$, b) $S_i \subseteq S \subset K$, c) $\forall x \in S_k \subseteq \overline{A_k}$
$\exists \{ x_l \}_{l=1}^{\infty}$, $x_l \in A_k$: $x_l \xrightarrow[l\rightarrow \infty]{} x$, then continuous $f_{ki}(x_l) \xrightarrow[l\rightarrow \infty]{} f_{ki}(x)$.
$f_{ki}(x_l) \in A_i$, hence $f_{ki}(x) \in \overline{A_i} \subseteq \overline{E} = K$.
2) $f_{ki}(A_k') = f_{ki} (A_k) \cup f_{ki}(S_k) \cup f_{ki} \bigl( f^{-1}_{ki} (S_i) \bigr) = A_i \cup S_i \cup f_{ki}(S_k) = A_i'$.
Since $\{ d, -d \} \subseteq S_k \subseteq A_k'$, we obtain $f_{ki} \bigl( \{d, -d \} \bigr) \subseteq A_i' \subseteq K$,
so $\| f_{ki} (d) \| \leqslant 1 = \| d \|$ and $\| f_{ki}(-d) \| \leqslant \| d \|$.
By Lemma \ref{lmDiamTrivShift}, the shift component $h_{ki}$ of $f_{ki} = h_{ki} \circ g_{ki}$ is trivial. Then the shift component $h_{ik}$ of $f_{ik} = f^{-1}_{ki}$ is also trivial.
There are 2 possible cases: either $\exists i$: $\theta \in \mathrm{Int}\, A_i$, or $\forall i$: $\theta \notin \mathrm{Int}\, A_i$ $\Leftrightarrow$ $\theta \notin \bigcup\limits_{i=1}^m \mathrm{Int}\, A_i$.
Consider the former case: $\exists B(\theta, \varepsilon) \subseteq A_i$. Take any $j \ne i$.
By Lemma \ref{lmMotionBall}, $f_{ik} \bigl( B(\theta, \varepsilon) \bigr) = B\bigl( f_{ik}(\theta), \varepsilon \bigr) = B\bigl( g_{ik}(\theta), \varepsilon \bigr) = B(\theta, \varepsilon)$ (this step is vacuous if $i = k$),
hence $B(\theta, \varepsilon) \subseteq A_k$. Apply Lemma \ref{lmMotionBall} again:
$f_{kj} \bigl( B(\theta, \varepsilon) \bigr) = B\bigl( f_{kj}(\theta), \varepsilon \bigr) = B(\theta, \varepsilon) \subseteq A_j$, and $\theta \in \mathrm{Int}\, A_j$.
Therefore $\theta \in \bigcap\limits_{i=1}^m \mathrm{Int}\, A_i$.
\end{proof}
{\bf Corollary.} If $\dim X \geqslant \aleph_0$, then the statement of Prop. \ref{propMain} holds true for $\forall m \in \mathbb{N}$:
a ball in such an $X$ cannot be covered by any finite number of congruent subsets so that its centre belongs to the interiors of some of them
and does not belong to the interiors of the others.
As for infinite coverings, see Ex. \ref{exmpUnivInfCover} and Ex. \ref{exmpHilbertCountCover} below.
{\ }
\begin{remark}\label{remIneqCounterEx}
One may ask why we do not generalize the approach from \cite{douwen1993} instead.
The reasoning there essentially makes use of the inequality
\centerline{$\forall x, y, z \in X$: $\| x - y \|^2 + \| z \|^2 \leqslant \| x \|^2 + \| y \|^2 + \| x - z \|^2 + \| y - z \|^2$}
\noindent
which is the implication of the inequality \cite[p. 184, (c)]{douwen1993} (for $p \leftarrow \theta$, $q \leftarrow z = \sigma_A(\theta)$),
established for Euclidean/pre-Hilbert $X$. Unfortunately, it is not true for arbitrary NCS $X$: consider $X = \mathbb{R}^2_{3/2}$ with
$\| x \| = \bigl\| (x_1;x_2) \bigr\|_{3/2} = \bigl( |x_1|^{3/2} + |x_2|^{3/2} \bigr)^{2/3}$
and let $x = (1;0)$, $y = (0;1)$, $z = (1;1)$. Then
\centerline{$\| x - y \|^2 + \| z \|^2 = 2 \cdot 2^{\frac{4}{3}} = 4 \cdot 2^{\frac{1}{3}} >
4 = 1 + 1 + 1 + 1 = \| x \|^2 + \| y \|^2 + \| x - z \|^2 + \| y - z \|^2$}
(Maybe some subtler form of the inequality would work.)
\end{remark}
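The failure of the inequality is immediate to verify numerically (a throwaway check of the numbers above):
\begin{verbatim}
import numpy as np

def norm_p(v, p=1.5):
    # The norm of R^2_{3/2}
    return (abs(v[0])**p + abs(v[1])**p) ** (1 / p)

x, y, z = np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])
lhs = norm_p(x - y)**2 + norm_p(z)**2
rhs = norm_p(x)**2 + norm_p(y)**2 + norm_p(x - z)**2 + norm_p(y - z)**2
print(lhs, rhs, lhs > rhs)   # 5.0396..., 4.0, True
\end{verbatim}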
{\ }
\begin{remark}
On the other hand, the LSB theorem is applied here too, being a ``foundation stone'' of the inference;
another pebble is that the motions transforming the subsets onto each other do not include a parallel shift, since otherwise one of the antipodal points would move outside the ball.
Antipodal/``diametral'' points and the constraints they impose are exploited, --- without resort to LSB theorem, --- in \cite[\S 4]{edelstein1988},
where NCS Banach spaces are considered; see Rem. \ref{remPlaneNonNCSBanachNCS} below.
\end{remark}
{\ }
\begin{remark}
If we replace the condition ``$A_i \cong A_j$'' by ``$\mathrm{Int}\, A_i \cong \mathrm{Int}\, A_j$'', then ``$\theta \in \mathrm{Int}\, A_1$ and $\theta \notin \mathrm{Int}\, A_2$'' becomes possible, evidently;
for example, in $\mathbb{R}^2$ take $z\in K$ with $\| z \| = \frac{1}{2}$, and let
\centerline{$A_1 = B(\theta, \frac{1}{8}) \cup \bigl\{ (x;y) \in K \backslash \bigl( B(\theta, \frac{1}{8}) \cup B(z, \frac{1}{8}) \bigr) \mid x\in \mathbb{Q} \text{ and } y\in \mathbb{Q} \bigr\}$, \quad $A_2 = K \backslash A_1$;}
\noindent
then $A_1 \cup A_2 = K$, $\mathrm{Int}\, A_1 = B(\theta, \frac{1}{8}) \ni \theta$, $\mathrm{Int}\, A_2 = B(z, \frac{1}{8})$, so $\mathrm{Int}\, A_1 \cong \mathrm{Int}\, A_2$, $A_1 \cap A_2 = \varnothing$.
The same happens if we replace ``congruence'' by ``homotheticity'': take $A_1 = K$ and let $A_2$, $A_3$, ... be balls of sufficiently small radius $\rho$, so that
all of them can be placed within $K$ and do not contain its centre (in other words, $A_i = \rho K + c_i$, $\rho < \| c_i \| < 1 - \rho$ for $i \geqslant 2$).
\end{remark}
{\ }
\begin{example}\label{exmpNonNCS}
Without NCS, the Prop. \ref{propMain} statement can become false.
Consider non-NCS $l_{\infty} = \bigl\{ x = (x_1;x_2;...)\colon \| x \|_{\infty} = \sup\limits_{i\in \mathbb{N}} |x_i| < \infty \bigr\}$
and its unit ball $\overline{B}(\theta,1) = \bigl\{ x\in l_{\infty} \colon \sup\limits_i |x_i| \leqslant 1 \bigr\}$. For any odd $n \geqslant 3$
the subsets $A_i = \bigl\{ x\in \overline{B}(\theta, 1) \colon x_1 \in [-1+2\frac{i-1}{n};-1 + 2 \frac{i}{n}]\bigr\}$, $i=\overline{1,n}$, are congruent
(motion $f_{ij}(x) = (x_1 + 2\frac{j-i}{n};x_2;x_3;...)$ transforms $A_i$ to $A_j$), $\theta \in B(\theta, \frac{1}{2n}) \subset \mathrm{Int}\, A_{1 + \lfloor\frac{n}{2}\rfloor}$,
while $\theta \notin A_i$ for $i \ne 1 + \lfloor\frac{n}{2}\rfloor$, and $A_1 \cup ... \cup A_n = \overline{B}(\theta, 1)$.
Instead of $l_{\infty}$, we can take $\mathbb{R}^m_{\infty}$ if $m \geqslant n$.
Note that this decomposition of $\overline{B}(\theta, 1)$ is a dissection and a covering (but not a partition).
\end{example}
{\ }
\begin{remark}\label{remPlaneNonNCSBanachNCS}
Consider $\mathbb{R}^2_p$, $\| x \|_{2;p} = \bigl( |x_1|^p + |x_2|^p \bigr)^{\frac{1}{p}}$.
For $p = 2$, usual Euclidean metric, the original dissection problem posed in \cite[C6]{croft1991} remains unsolved.
For $p = 1$ or $p = \infty$, --- non-NCS case, --- $\overline{B}(\theta, 1)$ is a square
(sides being parallel to $Ox_i$ for $p = \infty$, rotated by $\frac{\pi}{4}$ for $p = 1$),
trivially dissectable into 3 (5, 7, ...) congruent rectangles such that the centre $\theta$ is within one of them.
It is shown in \cite[\S 2]{edelstein1988} that $\overline{B}(\theta, 1)$ in non-NCS $c_0$, $C_{[0;1]}$ is partitionable into $n$ congruent subsets for $\forall n \leqslant \aleph_0$,
while in NCS Banach $X$ there's no such partition if $2 \leqslant n < \min\{ \dim X , \aleph_0 \} + 1$.
\end{remark}
{\ }
\begin{example}\label{exmpCoverDisk}
Obviously, as Fig. \ref{figCoverDiskGen} illustrates, the ball/disk in $\mathbb{R}^2$ can be covered by $n \geqslant 4$ congruent and convex subsets such that its centre belongs to the interior of exactly one set; moreover, the centre is at a positive distance from the other sets.
\begin{figure}[h]
\centerline{\begin{tabular}{ccc}
\includegraphics[width=3cm]{cbcs_pic_4.pdf}
&
\includegraphics[width=3cm]{cbcs_pic_5.pdf}
&
\includegraphics[width=3cm]{cbcs_pic_17.pdf}
\\
\includegraphics[height=1.5cm]{cbcs_pic_4_one.pdf}
&
\includegraphics[height=1.5cm]{cbcs_pic_5_one.pdf}
&
\includegraphics[height=1.5cm]{cbcs_pic_17_one.pdf}
\\
$n=4$ & $n=5$ & $n=17$
\end{tabular}}
\caption{Covering a disk by $n \geqslant 4$ congruent subsets}
\label{figCoverDiskGen}
\end{figure}
The case $n=3$ is slightly different: the sets are not convex and not 1-connected, each one has a circular hole in one of two symmetric segments it consists of.
In Fig. \ref{figCoverDisk3}, $\angle AOB = 150^{\circ}$ (for instance).
We do not know whether there is any such covering by three 1-connected congruent subsets.
\begin{figure}[h]
\centerline{
\begin{tabular}{ccc}
\includegraphics[width=3.6cm]{cbcs_pic_3.pdf}
&\quad\quad&
\includegraphics[width=3.5cm]{cbcs_pic_3_one.pdf}
\end{tabular}}
\caption{Covering a disk by $n=3$ congruent subsets (resembles ``Biohazard'' symbol)}
\label{figCoverDisk3}
\end{figure}
In fact, the case $n=k$ can be extended to all $n > k$ (which makes Fig. \ref{figCoverDiskGen} redundant),
because covering allows $A_i = A_j$ (not so for dissection): take $A_1$, ..., $A_{k-1}$, and $A_k = A_{k+1} = ... = A_n$.
Similar constructions can be used in $\mathbb{R}^m$. In particular, when $n = m + 2$,
note that the ``hollow'' around the centre in Fig. \ref{figCoverDiskGen}, case $n=4$, is an equilateral triangle, i.e. a 2-simplex in $\mathbb{R}^2$.
\end{example}
{\ }
\begin{remark}
Convexity of parts implies the negative answer not only to the original dissection problem, but also to its generalization:
the closed disk in $\mathbb{R}^2$ cannot be dissected into $n \geqslant 2$ homothetic, convex, and closed parts such that the interior of exactly one part contains the centre.
\begin{proof}
Let $K = \overline{B}(\theta, 1)$, $S = S(\theta, 1)$ in $\mathbb{R}^2$, and let $\{A_i\}_{i=1}^n$ be the parts, so that $K=\bigcup\limits_{i=1}^n A_i$,
$A_i \sim A_j$, $\mathrm{Int}\, A_i \cap \mathrm{Int}\, A_j = \varnothing$ for $i \ne j$.
Also, let $\partial A_i = \overline{A_i} \cap \overline{\mathbb{R}^2 \backslash A_i} \subset A_i$ be the boundary of $A_i$.
1) Claim: if $\partial A_i$ contains $2n+4$ different points $x_1$, ..., $x_{2n+4}$ that belong to some circle $S(a, r)$,
then $S(a,r) = S$.
(``The strictly convex section of $\partial A_i$ has to be on $\partial K = S$, not inside $K$.'')
To show that this claim is true, assume the contrary: $\exists x_j \notin S$. Let $N_+$ be the number of $x_j \in S$, and $N_- = \bigl|\{ x_j\colon x_j \notin S \}\bigr|$.
$N_+ + N_- = 2n + 4$.
If $N_+ \geqslant 3$, then 3 of the points $x_1$, ..., $x_{2n+4}$ that belong to $S$ determine the circle $S(a, r)$ uniquely (see e.g. \cite[2.3, Cor. 7]{agricola2008}),
so $S(a,r) = S$, which contradicts the assumption.
Thus $N_+ \leqslant 2$ $\Rightarrow$ $N_- \geqslant 2n+2$: we can take $2n+2$ points on $S(a,r)$ in $\mathrm{Int}\, K = B(\theta, 1)$.
Enumerate them sequentially, for instance, counter-clockwise: $x_1'$, ..., $x_{2n+2}'$.
\begin{figure}[h]
\centerline{\includegraphics[width=3.5cm]{cbcs_conv_diss.pdf}}
\caption{Points $x_1'$, ..., $x_{2n+2}'$ of $S(a,r)$ inside $K$ (step 1 of the proof)}
\label{figDissDiskConvex}
\end{figure}
Let $x_1' x_3' ... x_{2n+1}'$ be the convex polygon, with interior, inscribed into $S(a,r)$;
it follows from convexity of $A_i$ and $n+1\geqslant 3$ that $x_1' x_3' ... x_{2n+1}' \subseteq A_i$ and $\varnothing \ne \mathrm{Int}\, x_1' x_3' ... x_{2n+1}' \subseteq \mathrm{Int}\, A_i$.
Consider the rest of points: $x_2'$, $x_4'$, ..., $x_{2n+2}'$.
Since $x_j' \in \partial A_i \cap \mathrm{Int}\, K$, each of these $n+1$ points belongs to the boundary $\partial A_{k_j}$ of at least one other part, $k_j \ne i$.
There are $n-1$ other parts, hence two of these points, $x_{(1)}' $ and $x_{(2)}'$, belong to the boundary of the same $A_k$, $k \ne i$.
Then $[x_{(1)}', x_{(2)}'] \subset A_k$.
It is easy to see that $[x_{(1)}', x_{(2)}']$ intersects $\mathrm{Int}\, x_1' x_3' ... x_{2n+1}'$, hence $\mathrm{Int}\, A_i \cap \mathrm{Int}\, A_k \ne \varnothing$
(because $\forall y \in [x_{(1)}', x_{(2)}']$, $\forall B(y, \varepsilon)$: $B(y,\varepsilon) \cap \mathrm{Int}\, A_k \ne \varnothing$), --- a contradiction.
2) $\bigcup\limits_{i=1}^n (A_i \cap S) = S$ and $|S| > \aleph_0$ imply $\exists A_i$: $\partial A_i \supseteq A_i \cap S \supseteq \{ x_1, ..., x_{2n+4} \}$.
Consequently, the boundary $\partial A_k = f_{ik}(\partial A_i)$ of any other $A_k$ contains $x^{(k)}_j = f_{ik}(x_j)$, $j=\overline{1,2n+4}$,
which belong to some circle $S(a, r) = f_{ik}(S)$; here $f_{ik}\colon X \leftrightarrow X$, $\| f_{ik} (x) - f_{ik}(y) \| = \alpha_{ik} \| x - y \|$ is the homothety transforming $A_i$ to $A_k$.
By (1), $x^{(k)}_j \in S$ for any $j$ and $k$.
3) Now assume that there's exactly one part $A_{i_0}$ such that $\theta \in \mathrm{Int}\, A_{i_0}$: $B(\theta, \delta) \subseteq A_{i_0}$.
$\{ x^{(i_0)}_j \}_{j=1}^{2n+4} \subseteq S \cap \partial A_{i_0}$, and the point $\theta \in A_{i_0}$ is at distance 1, equidistant, from each $x^{(i_0)}_j$.
Take any $k \ne i_0$. As above, $\{y^{(k)}_j = f_{i_0 k}(x^{(i_0)}_j)\}_{j=1}^{2n+4} \subseteq S \cap A_k$.
And there must be $z \in A_k$, which is equidistant from each $y^{(k)}_j$; clearly, $z = \theta$ ($y^{(k)}_1$, $y^{(k)}_2$, $y^{(k)}_3$ determine it uniquely).
Also, $B(\theta, \delta) \subseteq A_k$ (apply similar arguments to $\forall x \in B(\theta, \delta) \subseteq A_{i_0}$),
so $\theta \in \mathrm{Int}\, A_k$. A contradiction.
\end{proof}
(The 1st step becomes shorter if we assume that some $\partial A_i$ contains an arc $\breve{a}$: the arc then has to lie on $S$,
since otherwise 2 internal points of $\breve{a}$ would belong to $\partial A_k$, $k\ne i$, too, implying a contradiction.)
\end{remark}
{\ }
\begin{example}\label{exmpUnivInfCover}
Without the upper bound for the cardinal number of covering,
there is a ``universal'' covering of $\overline{B}(\theta, 1)$ such that the interior of exactly one subset contains the centre:
let $\mathcal{C} = \{ A_{\theta} \} \cup \bigl\{ A_y \bigr\}_{y \in S(\theta, \frac{1}{2})}$, where $A_{\theta} = \overline{B}(\theta, \frac{1}{2})$ and
$A_y = \overline{B}(y, \frac{1}{2})$.
Indeed, $\theta \in \mathrm{Int}\, A_{\theta} = B(\theta, \frac{1}{2})$, while $\forall y \in S(\theta, \frac{1}{2})$: $\theta \notin \mathrm{Int}\, A_y = B(y, \frac{1}{2})$,
and for $\forall x \in \overline{B}(\theta, 1) \backslash \{ \theta \}$ we take $y_x = \frac{1}{2 \| x \|} x \in S(\theta, \frac{1}{2})$,
then $\| x - y_x \| = |1 - \frac{1}{2 \| x\|}| \cdot \| x \| = \bigl| \| x \| - \frac{1}{2} \bigr| \leqslant \frac{1}{2}$, thus $x \in A_{y_x}$.
Certainly, $A_i \cong A_j$.
This covering doesn't require NCS or $\dim X > 1$; meanwhile $\dim X > 1$ implies $| \mathcal{C} | > \aleph_0$.
\end{example}
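As a quick numeric sanity check of the computation above (no substitute for the argument), the following Python sketch samples the closed unit ball of Euclidean $\mathbb{R}^3$ and verifies that every sampled nonzero point $x$ lies in $A_{y_x}$, while the centre $\theta$ is at distance exactly $\frac{1}{2}$ from every $y_x$, hence never interior to $A_{y_x}$; the sampling scheme is an assumption of the sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Sample 10^4 points of the closed unit ball in Euclidean R^3:
# uniform directions scaled by radii r = U^(1/3).
v = rng.normal(size=(10_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
x = v * rng.uniform(0.0, 1.0, size=(10_000, 1)) ** (1 / 3)
x = x[np.linalg.norm(x, axis=1) > 0]  # theta itself lies in A_theta

for xi in x:
    y = xi / (2.0 * np.linalg.norm(xi))          # y_x = x / (2 ||x||)
    assert np.linalg.norm(xi - y) <= 0.5 + 1e-12  # x in A_{y_x}
    assert abs(np.linalg.norm(y) - 0.5) < 1e-12   # ||theta - y_x|| = 1/2
print("covering verified on the sample")
\end{verbatim}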
{\ }
\begin{example}\label{exmpHilbertCountCover}
Consider the Hilbert space over $\mathbb{R}$, $X = H = l_2$, and its closed unit ball $\overline{B_H} = \overline{B}(\theta, 1)$, unit sphere $S_H = S(\theta, 1)$.
We claim that there is a countable covering of $\overline{B_H}$ by its congruent and convex subsets $\{ A_i \}$ such that the interior of exactly one set contains the centre.
\begin{proof}
It's a direct corollary of Lemmas \ref{lmCountDenseSphGeod}, \ref{lmOmtdCongr}--\ref{lmOmtdConvex}:
1) Lemma \ref{lmCountDenseSphGeod} and Lemma \ref{lmOmtdCoverBall} along with Lemma \ref{lmOmtdCongr} provide
the countable covering of $\overline{B_H}$ by congruent ommatidiums $A_i = C(\theta, d_i, \frac{\pi}{4})$, $i \in \mathbb{N}$, where $d_i \in S_H$.
By Lemma \ref{lmOmtdOriginNonIntr}, $\theta \notin \mathrm{Int}\, A_i$.
2) Then Lemma \ref{lmOmtdInsideBall} allows to add the ommatidium $A_0 = C(-\frac{1}{2}d_1, \frac{1}{2}d_1, \frac{\pi}{4})$,
which is contained in $\overline{B_H}$ since $\frac{\pi}{4} < \arccos\frac{1}{4}$ and congruent with $A_i$ by Lemma \ref{lmOmtdCongr}.
3) Finally, by Lemma \ref{lmOmtdMiddleInt}, $\theta = \frac{1}{2}\bigl( -\frac{1}{2}d_1 + \frac{1}{2} d_1 \bigr) \in \mathrm{Int}\, A_0$.
By Lemma \ref{lmOmtdConvex}, $A_i$ are convex.
$|\{ A_i \}_{i \in \mathbb{N} \cup \{ 0\}}| = \aleph_0 = \dim H$.
\end{proof}
This covering somewhat resembles those from Fig. \ref{figCoverDiskGen}, except that
a)~the sets intersect ``a~lot'', b)~there's no ``hollow'' at the centre (corrigible by erasing sufficiently small neighborhood of template ommatidium's origin), and
c)~it's infinite-dimensional.
\end{example}
{\ }
The covering problem turns out to be easier about ``positive'' results than the problems of dissection and partition types.
\section{Introduction}
One of the main aims of nuclear physics
is to determine the equation of state (EOS) of nuclear matter and
constrain effective nucleon-nucleon (NN)
interactions. Nucleon single particle (s.p.) potentials, especially their
isospin dependence in asymmetric nuclear matter, are basic inputs for the
dynamic simulations of heavy ion collisions and are expected to play an
important role in constraining effective NN interactions in asymmetric
nuclear matter~\cite{li:2008}.
The phenomenological NN effective interactions such as the Skyrme and Skyrme-like interactions play an
important role in predicting the properties of finite
nuclei~\cite{vautherin:1972,friedrich:1986,dobaczewski:1996,goriely:2002,goriely:2003,lesinski:2007,brito:2007}, nuclear matter
and neutron
stars~\cite{onsi:2002,stone:2003,stone:2006,meissner:2007},
nucleus-nucleus interaction potential~\cite{denisov:2002,wang:2006}
and fission barriers~\cite{goriely:2007}. The parameters of the
effective interactions are usually constrained by the ground state
properties of stable nuclei and/or the saturation properties of nuclear
matter, and thus they are shown to be quite successful for
describing nuclear phenomena related to nuclear systems not far from
the normal nuclear matter density ($\rho_0=0.17$fm$^{-3}$) at small
isospin-asymmetries. However, as soon as the density deviates from
the normal nuclear matter density and the isospin-asymmetry becomes
large, the discrepancy among the predictions of the
Skyrme-Hartree-Fock (SHF) approach by adopting different Skyrme
parameters could be extremely large~\cite{babrown:2000,chen:2005}.
As for the isospin dependence of nucleon single-particle properties,
different Skyrme parameters may lead to an opposite isospin
splitting of the neutron and proton effective masses in neutron-rich
nuclear matter even at densities around
$\rho_0$~\cite{lesinski:2006}. In order to improve the predictive
power of the Skyrme interaction at high densities and large isospin
asymmetries, some work was done in recent years to constrain
the Skyrme parameters by fitting the bulk properties of asymmetric
nuclear matter obtained by the SHF approach to those predicted by
microscopic many-body theories. For example, in Ref.~\cite{chabanat}
Chabanat {\it et al.} proposed a number of sets of Skyrme parameters
by reproducing the equation of states (EOSs) of symmetric nuclear
matter and pure neutron matter predicted by the microscopic
variational approach~\cite{pudliner:1995}. In Ref.~\cite{cao:2006},
the authors constructed the LNS parameters for the Skyrme
interaction by fitting to the EOS of asymmetric nuclear matter and
the neutron/proton effective mass splitting in neutron-rich matter
around saturation density obtained within the Brueckner-Hartree-Fock
(BHF) approach extended to include a microscopic three-body force
(TBF)~\cite{zuo:2002,zuo:2005}. Although these recent
parametrizations of Skyrme interaction can reproduce fairly well the
EOSs of symmetric nuclear matter and pure neutron matter predicted
by microscopic approaches (variational method and BHF approach), the
deviation from the microscopic results is shown to become
significantly large even for symmetric nuclear matter as soon as the
EOS is decomposed into different spin-isospin
channels~\cite{lesinski:2006}, and what is more,
the single particle (s.p.) properties (especially their isospin and momentum dependence)
obtained by the Skyrme-Hartree-Fock calculation could be significantly different
from the predictions of microscopic approaches.
In Ref.~\cite{zuo:2009}, we investigated the EOS of asymmetric
nuclear matter and its isospin dependence in various spin-isospin
$ST$ channels within the framework of the microscopic BHF approach.
In the present paper, we shall extend our work to study the proton
and neutron s.p. potentials in different spin-isospin channels for a
deeper understanding of the mechanism of the isospin dependence of
the nuclear matter properties and for providing more elaborate microscopic
constraints for effective \mbox{NN} interactions. We shall discuss
particularly the isovector part and the isospin dependence of the
neutron and proton s.p. potentials in asymmetric nuclear matter in different spin-isospin $ST$
channels.
\section{Theoretical Approaches}
Our present investigation is based on the Brueckner theory~\cite{day:1967}.
The Brueckner approach for asymmetric
nuclear matter and its extension to include a microscopic TBF can be
found in Refs.~\cite{zuo:2002,zuo:1999}. Here we simply give a brief
review for completeness. The starting point of the BHF approach is
the reaction $G$-matrix, which satisfies the following isospin
dependent Bethe-Goldstone (BG) equation,
\begin{eqnarray}
G(\rho, \beta, \omega )&=& \upsilon_{NN} +\upsilon_{NN}
\nonumber \\ &\times&
\sum_{k_{1}k_{2}}\frac{ |k_{1}k_{2}\rangle Q(k_{1},k_{2})\langle
k_{1}k_{2}|}{\omega -\epsilon (k_{1})-\epsilon (k_{2})}G(\rho,
\beta, \omega ) ,
\end{eqnarray}
where $k_i\equiv(\vec k_i,\sigma_i,\tau_i)$ denotes the momentum, the
$z$-component of the spin, and the isospin of a nucleon, respectively.
$\upsilon_{NN}$ is the realistic NN interaction, and $\omega$ is the
starting energy. The asymmetry parameter is defined as
$\beta=(\rho_n-\rho_p)/\rho$, where $\rho, \rho_n$, and $\rho_p$
denote the total, neutron and proton number densities, respectively.
In solving the BG equation for the $G$-matrix, the continuous
choice~\cite{jeukenne:1976} for the auxiliary potential $U(k)$ is
adopted since it provides a much faster convergence of the hole-line
expansion than the gap choice~\cite{song:1998}. Under the continuous
choice, the auxiliary potential describes the BHF mean field felt by
a nucleon during its propagation in nuclear
medium~\cite{lejeune:1978}.
The BG equation has been solved in the total angular momentum
representation~\cite{zuo:1999}. By using the standard
angular-averaging scheme for the Pauli operator and the energy
denominator, the BG equation can be decoupled into different partial
wave $\alpha=\{JST\}$ channels~\cite{baldo:1999}, where $J$ denotes
the total angular momentum, $S$ the total spin and $T$ the total
isospin of a two-particle state.
For the NN interaction, we adopt the Argonne $V_{18}$ ($AV_{18}$)
two-body interaction~\cite{wiringa:1995} plus a microscopic TBF based on
the meson-exchange current approach~\cite{grange:1989}. The
parameters of the TBF model have been self-consistently determined
so as to reproduce the $AV_{18}$ two-body force by using the
one-boson-exchange potential model~\cite{zuo:2002}. The TBF contains
the contributions from different intermediate virtual processes such
as virtual nucleon-antinucleon pair excitations, and nucleon
resonances (for details, see Ref.~\cite{grange:1989}). The TBF
effects on the EOS of nuclear matter and its connection to the
relativistic effects in the DBHF approach have been reported in
Ref.~\cite{zuo:2002}.
The TBF contribution has been included by reducing the TBF to an
equivalent effective two-body interaction via a suitable average
with respect to the third-nucleon degrees of freedom according to
the standard scheme~\cite{grange:1989}. The effective two-body
interaction ${\tilde v}$ can be expressed in $r$-space
as\cite{zuo:2002}
\begin{equation}
\begin{array}{lll}
\langle\vec r_1 \vec r_2| {\tilde v} |
\vec r_1^{\ \prime} \vec r_2^{\ \prime} \rangle = \displaystyle
\frac{1}{4} {\rm Tr}\sum_n \int {\rm d} {\vec r_3} {\rm d} {\vec
r_3^{\ \prime}}\phi^*_n(\vec r_3^{\ \prime})(1-\eta(r_{23}'))
\\[5mm]\displaystyle
\times
\displaystyle (1-\eta(r_{13}' ))W_3(\vec r_1^{\ \prime}\vec r_2^{\
\prime} \vec r_3^{\ \prime}|\vec r_1 \vec r_2 \vec r_3)
(1-\eta(r_{13}))\\[3mm] \times (1-\eta(r_{23})) \phi_n(r_3)
\end{array}\label{eq:TBF}
\end{equation}
where the trace is taken with respect to the spin and isospin of the
third nucleon. The function $\eta(r)$ is the defect function. Since
the defect function is directly determined by the solution of the BG
equation\cite{grange:1989}, it must be calculated self-consistently
with the $G$ matrix and the s.p. potential $U(k)$\cite{zuo:2002} at
each density and isospin asymmetry. It is evident from
Eq.~(\ref{eq:TBF}) that the effective force ${\tilde v}$ arising from
the TBF in nuclear medium is density dependent. A detailed
description and justification of the method can be found in
Ref.~\cite{grange:1989}.
\section{Results and Discussion}
\begin{center}
\begin{figure}\includegraphics[width=8cm]{fig1.eps}
\caption{
Decomposition of the neutron (the dash-dotted curves) and proton (the dashed curves)
s.p. potentials into various spin-isospin $ST$ channels in
asymmetric nuclear matter at density $\rho=0.17$fm$^{-3}$ and isospin-asymmetry $\beta=0.6$.
The results for symmetric matter are also shown by the solid curves for comparison.}
\label{fig1}
\end{figure}\end{center}
In Fig.~\ref{fig1} we display the neutron (the dash-dotted curves) and
proton (the dashed curves) s.p. potentials in various spin-isospin
channels of $ST =00, 10, 01, 11$, and $T=0, 1$ at a density of
$\rho=0.17$fm$^{-3}$ and an isospin-asymmetry of $\beta=0.6$.
Shown in Fig.~\ref{fig2} are the results for
$\rho=0.34$fm$^{-3}$. In the two figures, the s.p. potentials in
symmetric nuclear matter are also plotted (solid lines) for
comparison. It is seen that in symmetric nuclear matter, the s.p.
potentials in the isospin-singlet $T=0$ channel and in the
isospin-triplet $T=1$ channel are attractive and comparable in
magnitude. One may notice that the contributions in the two even
channels ($ST=10$ and $ST=01$) are considerably larger in magnitude
than those in the two odd channels ($ST=00$ and $ST=11$).
Consequently, the attraction in both the $T=0$ and $T=1$ channels
mainly comes from the two even channels. In asymmetric nuclear matter,
the neutron and proton s.p. potentials will split (i.e. become
different) with respect to their common values in symmetric nuclear
matter. It can be seen that the splitting of the proton and neutron
s.p. potentials is much larger in the isospin-singlet $T=0$ channel
than that in the isospin-triplet $T=1$ channel. And thus the
splitting is dominated by the isospin-singlet $T=0$ channel. This
result is consistent with the prediction for the EOS of asymmetric
nuclear matter in~\cite{zuo:1999,zuo:2005} where it is shown that
the isovector part of the EOS of asymmetric nuclear matter is
determined by the contribution from the $T=0$ channel.
As the isospin-asymmetry increases, the proton potential in the $ST=10$ channel
becomes more attractive and the neutron one in the $ST=10$ channel becomes less attractive.
The isospin-asymmetry dependence of the proton and neutron potentials in the $ST=00$
channel turns out to be opposite
to that in the $ST=10$ channel.
The isospin dependence of the proton and neutron potentials in the
two isospin-triplet ($ST=01$ and $ST=11$) channels is quite weak.
At densities around the nuclear saturation density $\rho=0.17$fm$^{-3}$,
as the asymmetry increases, the attraction in the $T=1$ channel decreases
slightly for the proton s.p. potential and increases slightly for the neutron one.
At a high density ($\rho=0.34$fm$^{-3}$), the attraction in the $T=1$ channel becomes
slightly smaller for both proton and neutron at a higher asymmetry.
It is also seen from Fig.~\ref{fig1} and Fig.~\ref{fig2} that
the isospin dependence of the neutron and proton s.p. potentials in the $T=0$
channel becomes weaker as the momentum increases, which accounts for the
rapid decrease of the nuclear symmetry potential as a function of
momentum~\cite{zuo:2005}.
\begin{center}
\begin{figure}\includegraphics[width=8cm]{fig2.eps}
\caption{
The same as Fig.~\ref{fig1} but for a density $\rho=0.34$fm$^{-3}$.
}
\label{fig2}
\end{figure}\end{center}
To see more clearly the isospin dependence of the s.p. potentials in asymmetric nuclear matter,
we show in Fig.~\ref{fig3} the contribution to the nuclear symmetry potential from different $ST$
channels at two densities $\rho=0.17$fm$^{-3}$ and 0.34fm$^{-3}$.
The symmetry potential is defined as $U_{\rm sym} = (U_n-U_p)/(2\beta)$.
It describes the isovector part of the neutron and
proton s.p. potentials in neutron-rich nuclear matter and is crucial for predicting the isospin observables in
heavy ion collisions at medium and high energies~\cite{zuo:2005,li:2008}.
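For illustration, the following Python sketch evaluates this definition channel by channel; the tabulated potential values below are hypothetical placeholders introduced for the sketch, not the BHF results shown in the figures.
\begin{verbatim}
import numpy as np

beta = 0.6                    # isospin asymmetry, as in Figs. 1 and 2
k = np.linspace(0.0, 3.0, 7)  # momentum grid (fm^-1)

# Hypothetical channel-decomposed s.p. potentials (MeV); in the actual
# calculation these are obtained from the BHF G-matrix.
U_n = {"T=0": -20.0 + 8.0 * k, "T=1": -35.0 + 6.0 * k}   # neutron
U_p = {"T=0": -45.0 + 12.0 * k, "T=1": -38.0 + 6.5 * k}  # proton

# U_sym = (U_n - U_p) / (2 beta), per channel and in total.
U_sym = {ch: (U_n[ch] - U_p[ch]) / (2.0 * beta) for ch in U_n}
U_sym_total = sum(U_sym.values())

for ch, u in U_sym.items():
    print(ch, np.round(u, 2))
print("total", np.round(U_sym_total, 2))
\end{verbatim}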
The symmetry potential in the isospin-singlet channel ($T=0$) is shown to be positive,
while that in the isospin-triplet channel $T=1$ is negative.
From Fig.~\ref{fig3}, one may see again that the contribution from the $T=0$ channel is much larger in magnitude
than that from the $T=1$ channel especially at low momenta, which indicates that
the momentum-dependence of the isovector part of nucleon s.p. potential in neutron-rich
nuclear matter is governed by the contribution from the $T=0$ channel. The positive symmetry
potential in the isospin-singlet $T=0$ channel implies that it is repulsive on neutrons and
attractive on protons in the momentum range considered here.
In the $T=0$ channel, the contribution from the odd partial wave $ST=00$ channel is negative,
while the contribution from the even partial
wave $ST=10$ channel is positive. At relatively low momenta, the contribution from the $ST=10$
channel turns out to be much larger in magnitude
than that in the $ST=00$ channel. As a consequence, the contribution from the even partial
wave $ST=10$ channel dominates the symmetry potential in the $T=0$
channel and it determines the total symmetry potential to a large extent. It is also noticed
that the symmetry potential in the $T=0$ channel is almost independent of density, while the
symmetry potential in the $T=1$ channel becomes slightly larger at a higher density.
In the $T=0$ channel, the symmetry potential is a decreasing function of momentum and
the decreasing rate is almost completely determined by the contribution from the even channel $ST=10$.
\begin{center}
\begin{figure}
\includegraphics[width=8cm]{fig3.eps}
\caption{
Decomposition of nuclear symmetry potential into various spin-isospin channels for
two values of density $\rho=0.17$ and 0.34fm$^{-3}$,
respectively.
}
\label{fig3}
\end{figure}
\end{center}
In symmetric nuclear matter, the present investigation shows that the contribution from the isospin-singlet $T=0$ channel is
almost the same as the contribution from the
isospin-triplet $T=1$ channel, indicating that both the interactions between the like
nucleons (i.e., neutron-neutron and proton-proton) and
between the unlike nucleons (neutron-proton) play a decisive role in determining the isoscalar properties of nuclear matter.
In asymmetric nuclear matter, it is found that the isovector part of the s.p. potential
stems mainly from the $T=0$ channel, implying that the isospin-dependence
of the s.p. potential in asymmetric nuclear matter is governed by the interaction between
proton and neutron. We also checked that the contribution of the $T=0$ channel stems almost
completely from the contribution of the $T=0$ tensor $SD$ coupled channel, while
the contributions of the other isospin-singlet $T=0$ channels cancel almost completely.
On the one hand, the above result reveals a microscopic mechanism of the symmetry potential.
In asymmetric nuclear matter (for example, $\beta=0.6$), the neutron number is greater than the
proton number. As a result, a neutron may feel less interaction from surrounding protons and a proton
may feel stronger interaction from surrounding neutrons in asymmetric nuclear matter as compared with
the case of symmetric nuclear matter. Consequently, the effect of the symmetry
potential (i.e., the isovector part of the s.p. potential) is repulsive on neutrons and attractive on
protons due to the attraction of the tensor $SD$ coupled channel. On the other hand,
the present result on the s.p. potential is consistent with our previous conclusion for
the EOS of nuclear matter\cite{zuo:2009}. Therefore, we may conclude that both the isospin-dependence
of the s.p. potential and the EOS of asymmetric nuclear matter are determined mainly by the contribution
of the $T=0$ channel, implying a direct correspondence between the symmetry potential and symmetry energy.
In the transport models for heavy ion collisions, the direct input is the symmetry potential rather than the
symmetry energy. The present result confirms that one can use transport model simulations to extract
information on the symmetry energy by comparing the predicted isospin observables with experiments.
\section{Summary}
In the present paper, we have extended our previous work of Ref.\cite{zuo:2009} and
investigated the proton and neutron s.p. potentials
in isospin asymmetric nuclear matter by decomposing the potentials into
various spin-isospin $ST$ channels within the framework of the BHF approach extended to
include a microscopic three-body force. In symmetric nuclear matter, the s.p. potentials in both
isospin-singlet $T=0$ and isospin-triplet $T=1$ channels are attractive and they are comparable in magnitude.
In asymmetric nuclear matter, the isospin-dependence of the s.p. potentials in the $T=1$ channel turns out to be quite weak as compared with that in the $T=0$ channel. As a consequence, the isovector part of the proton and neutron s.p. potentials is shown to be essentially determined by the contribution from the $T=0$ channel, consistent with the conclusion obtained for the EOS of nuclear matter~\cite{zuo:2009}. The symmetry potential has also been decomposed into various spin-isospin channels. It is shown that the symmetry potential in the $T=0$ channel is much larger than that in the $T=1$ channel at not too high momenta, and the momentum dependence of the symmetry potential is governed by the contribution from the $T=0$ channel.
The present results are expected to provide some microscopic information for
constraining the isospin dependence of effective nucleon-nucleon
interactions in asymmetric nuclear medium.
\section*{Acknowledgments}
{The work was supported by the National Natural Science
Foundation of China (11175219, 10875151, 10740420550), the Major State Basic
Research Developing Program of China under No. 2007CB815004, the
Knowledge Innovation Project (KJCX2-EW-N01) of the Chinese Academy of
Sciences, the Chinese Academy of Sciences Visiting Professorship for Senior International
Scientists (Grant No.2009J2-26), and the CAS/SAFEA International Partnership Program for Creative
Research Teams (CXTD-J2005-1).}
\vskip 8mm
\section{Introduction}
Entity resolution (ER) aims at finding the records that refer to the same real-world entity. Usually considered as a classification task, ER is challenging in that the records may contain incomplete and dirty values. ER can be performed based on rules, probabilistic theory or machine learning \cite{Singh2017, christen2012data}. However, the traditional machine-based solutions may not be able to produce satisfactory results in many practical scenarios. Therefore, there is an increasing need to involve the human in the resolution process for improved quality \cite{wang2012crowder}. For instance, the active learning approach \cite{sarawagi2002interactive} proposed to select the instances for manual verification based on the benefit they can bring to a machine classifier. The approach of crowdsourcing \cite{Jain2017, wang2012crowder} instead investigated how to make the human work efficiently and effectively on a given workload. Depending on pre-specified assumptions (e.g. partial order relationship \cite{chai2016cost}), it usually makes the human label some instances in a workload so that the remaining instances can be automatically labeled by the machine with high accuracy.
It can be observed that the existing hybrid approaches select the instances for manual verification to maximize the benefit they can bring to a given workload as a whole. However, the marginal benefit of additional manual work usually decreases (sometimes dramatically) with the cost. For instance, in active learning, it has been well recognized \cite{schohn2000less} that increasing the number of training data points may quickly become ineffectual in improving classification performance after initial iterations. In the application scenarios where fast response is required, it is also desirable that a limited amount of human effort can be exclusively spent on the instances at high risk of being mislabeled by the machine.
In this paper, we investigate the problem of human and machine cooperation for improved quality from a risk perspective. Given a limited human cost budget, we propose to select the machine-labeled instances at high risk of being mislabeled for manual verification. The proposed risk-based solution is supposed to be used in the scenario where increasing training points for a learning model has become ineffectual or not cost-effective in improving classification performance. It can therefore serve as a valuable complement to the existing learning-based solutions. On the other hand, even though some of the proposed techniques for active learning (e.g. training instance selection based on uncertainty \cite{mozafari2014scaling}) can be naturally applied for this task, our work is the first to introduce the concept of risk and propose a formal risk model for the task. The major contributions of this paper can be summarized as follows:
\begin{itemize}
\item We investigate the problem of human and machine cooperation for ER from a risk perspective and define the corresponding optimization problem (Section.~\ref{sec:problem});
\item We present a risk model for prioritizing the machine-labeled instances for manual verification (Section.~\ref{sec:riskmodel});
\item We evaluate the performance of the proposed approach on real data by a comparative study. The experimental results validate its efficacy (Section.~\ref{sec:experiment}).
\end{itemize}
\section{Problem Definition} \label{sec:problem}
Given an ER workload consisting of record pairs, a machine classifier labels each pair as {\em match} or {\em unmatch}. Due to the inherent challenge of entity resolution, a classifier may be prone to mislabeling some of the pairs. In this paper, we investigate the problem of how to improve the results of machine resolution by manually correcting machine errors. Since human work is expensive, we impose a budget on the amount of spent human effort. For the sake of presentation simplicity, we quantify the budget by the number of manually-inspected pairs. Given a budget $k$, an ideal solution would identify $k$ mislabeled pairs. In this case, each manual inspection effectively corrects a machine error. However, in practice, it is more likely that a solution chooses both mislabeled and correctly labeled pairs. We formally define the optimization problem as follows:
\begin{definition}
\label{problemdefinition}
{\bf [Optimization Problem of Improving Machine Resolution by Manual Inspection].} Given an ER workload, $D$, which consists of $n$ record pairs, \{$d_1$,$d_2$,$\ldots$,$d_n$\}, a machine classifier labels each pair in $D$ as {\em match} or {\em unmatch}. Given the budget $k$ on human work, the optimization problem is to identify a set of $k$ machine-labeled pairs in $D$, denoted by $D_H$, for manual inspection such that the number of pairs misclassified by the machine in $D_H$ is maximized.
\end{definition}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/overview.pdf}
\vspace{-0.3cm}
\caption{The Risk-based Solution.}
\label{fig:overview}
\vspace{-0.4cm}
\end{figure}
\hspace{-0.14in}{\bf Risk-based Solution.} The optimization problem defined in Definition.~\ref{problemdefinition} is challenging due to the fact that the match probabilities of the machine-labeled pairs are difficult to estimate. In this paper, we propose to solve the optimization problem from a risk perspective. In other words, the machine-labeled pairs at higher risk of being mislabeled should be chosen first for manual inspection. It can be observed that if risk measurement is accurate given all the available information, the strategy of selecting by risk-wise order can be considered optimal. The workflow of the risk-based solution is presented in Figure.~\ref{fig:overview}. It iteratively selects the most risky machine-labeled pairs for manual inspection until the budget limit is reached. After each iteration, the set of manually-labeled pairs is updated, and is used to re-evaluate the risk of the remaining machine-labeled pairs.
It is worth pointing out that the risk-based solution can work properly with both supervised and unsupervised classifiers. Given a supervised classifier, risk analysis can be initially performed based on the human-labeled pairs as well as machine resolution. Given an unsupervised classifier, risk analysis can only start with machine resolution; after initial iterations, it can then be similarly performed based on the human-labeled pairs as well as machine resolution.
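The workflow of Fig.~\ref{fig:overview} can be summarized by the following Python sketch; the callables \texttt{risk} and \texttt{manual\_label} and the batch size are placeholders for the components developed below, not a fixed interface.
\begin{verbatim}
def risk_based_inspection(pairs, machine_labels, budget, batch_size,
                          risk, manual_label):
    """Iteratively route the riskiest machine-labeled pairs to the human.

    risk(d, human_labeled) -> risk score of pair d (e.g. its CVaR),
    manual_label(d)        -> ground-truth label supplied by the human.
    """
    human_labeled = {}   # pairs verified (and possibly corrected)
    remaining = set(pairs)
    while budget > 0 and remaining:
        # Re-evaluate risk, then pick the riskiest pairs for inspection.
        batch = sorted(remaining,
                       key=lambda d: risk(d, human_labeled),
                       reverse=True)[:min(batch_size, budget)]
        for d in batch:
            human_labeled[d] = manual_label(d)
            remaining.discard(d)
        budget -= len(batch)
    final = dict(machine_labels)
    final.update(human_labeled)  # human labels override machine labels
    return final
\end{verbatim}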
\section{Risk Analysis} \label{sec:riskmodel}
In this section, we propose the technique of risk analysis for prioritizing pair selection. Given an instance pair $d_i$ in $D$, we represent its match probability by a random variable, $P_i$. As usual, we model $P_i$ by a normal distribution, $\mathcal{N}(\mu_i, \sigma_i^2)$, where $\mu_i$ and $\sigma_i^2$ denote its expectation and variance respectively. In the rest of this section, we first describe how to estimate the match probability distribution in Subsection~\ref{sec:bayesian-analysis}, and then present the metric for risk measurement in Subsection~\ref{sec:cvar}.
\vspace{-3pt} \subsection{Distribution Estimation} \label{sec:bayesian-analysis}
It can be observed that there exist two information sources for the estimation of match probability distribution. Firstly, even though a machine classifier may fail to produce satisfactory resolution results, it can provide valuable hints about the status of the pairs. Therefore, the results of machine resolution can generally serve as a starting point for the estimation. The second source consists of the human-labeled results. Compared with machine labels, the labels provided by the human are usually more accurate, i.e. they can provide more information beyond the capability of machine resolution.
We employ the classical Bayesian inference \cite{berger1985statistical} to estimate the distribution. The inference process takes the match probability estimated by the machine as the prior expectation, and uses the human-labeled pairs as samples to estimate the posterior expectation and variance. The proposed approach has the desirable property that it can seamlessly integrate the hints provided by both the human and the machine into a unified inference process.
\vspace{-3pt} \subsubsection{Prior expectation estimation by machine}
A machine classifier labels instance pairs as match or unmatch based on a classification metric. Generally, the match probability of a pair can be considered to be monotonous with its metric value. In this paper, we use the SVM (Support Vector Machine) classifier based on active learning as the illustrative example. It classifies pairs through a hyperplane. Instead of randomly selecting training data points, it iteratively chooses the instance pair which is closest to the hyperplane of the current SVM as the next training data point, and updates the SVM until a preset training budget is exhausted. Note that an SVM classifier usually provides a pair's distance from the hyperplane, rather than a match probability, as the evidence for its given label. We therefore use Platt's probabilistic outputs for SVM \cite{platt1999probabilistic} to translate the distance into a match probability.
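For illustration, a minimal Python sketch of this step is given below. In scikit-learn, setting \texttt{probability=True} on an \texttt{SVC} makes the library calibrate the decision values into probabilities via Platt's method; the toy feature vectors and labels are hypothetical placeholders for the similarity features of Section~\ref{sec:experiment}.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

# Toy similarity features of training pairs (1 = match, 0 = unmatch).
X_train = np.array([[0.90, 0.80], [0.85, 0.90], [0.80, 0.75],
                    [0.95, 0.70], [0.70, 0.90],
                    [0.10, 0.20], [0.20, 0.15], [0.15, 0.30],
                    [0.25, 0.10], [0.30, 0.20]])
y_train = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
X_all = np.array([[0.70, 0.60], [0.30, 0.40], [0.50, 0.50]])

# probability=True: distances to the hyperplane are translated into
# match probabilities via Platt scaling.
clf = SVC(kernel="linear", probability=True,
          random_state=0).fit(X_train, y_train)

mu0 = clf.predict_proba(X_all)[:, 1]  # prior match expectations
labels = np.where(mu0 >= 0.5, "match", "unmatch")
print(np.round(mu0, 3), labels)
\end{verbatim}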
\vspace{-3pt} \subsubsection{Sample observation generation by human}
We generate the sample observations on the status of a target pair based on features. Features serve as the medium to convey valuable information from the human-labeled pairs to a target pair. Desirably, the features used for information conveyance should have the following three properties:
\begin{enumerate}
\item They can be easily extracted from the human-labeled pairs;
\item They should be evidential, or indicative of the status of a pair;
\item They should be to a large extent independent of the metric used by the machine classifier.
\end{enumerate}
The final property ensures that the sample observations can provide additional valuable information not implied by machine labels. To this aim, we extract two types of features from pairs, \emph{Same($t_i$)} and \emph{Diff($t_i$)}, where $t_i$ represents a token, \emph{Same($t_i$)} indicates that both records in a pair contain $t_i$, and \emph{Diff($t_i$)} indicates that one and only one record in a pair contains $t_i$. It can be observed that these two features are evidential and easily extractable. Moreover, they were not used in the existing classification metrics proposed for ER.
Suppose that a target pair, $d_i$, contains $m$ features, which are denoted by \{$f_1$, $f_2$, $\ldots$, $f_m$\}. A human-labeled pair containing all the $m$ features can be naturally considered to be a valid observation on the status of $d_i$. Unfortunately, due to their limited number in practical scenarios, the human-labeled pairs with this property may not provide sufficient observations. Therefore, we also consider the human-labeled pairs that contain only a portion of the $m$ features in $d_i$. Suppose that a human-labeled pair, $d_j^h$, contains $k$ of the features in $d_i$, say \{$f_1$, $f_2$, $\ldots$, $f_k$\}, but does not contain the remaining $(m-k)$ features. Inspired by the portfolio investment theory \cite{rockafellar2002conditional}, we treat features as stocks, and a feature's match probability as its investment reward. Then, the match probability of $d_i$ corresponds to the combined reward of an investment portfolio consisting of $m$ stocks, \{$f_1$, $f_2$, $\ldots$, $f_m$\}.
Based on the label of $d_j^h$, we generate the corresponding sample observation on the status of $d_i$ by
\begin{equation}
O_j(d_i)=\frac{L(d_j^h)+\sum_{k<r\leq m}{w_rE(f_r)}}{1 + \sum_{k<r\leq m}{w_r}},
\label{eq:observation}
\end{equation}
in which $w_r$ denotes the feature weight, $L(d_j^h)$ denotes the manual label of $d_j^h$, and $E(f_r)$ denotes the expectation of $f_r$'s match probability. In Eq.~\ref{eq:observation}, $L(d_j^h)$=1 if the label is {\em match} and $L(d_j^h)$=0 otherwise. We estimate $E(f_r)$ by
\begin{equation}
E(f_r)=\frac{\sum_{1\leq s\leq n}{L(d_s^r)}}{n},
\end{equation}
in which $d_s^r$ denotes a human-labeled pair containing the feature $f_r$ and $n$ denotes its total number. An example of sample observation generation is shown in Example~\ref{exam:observation-generation}. More details can be found in our technical report \cite{chen2018riskerreport}. It is worth pointing out that in the generation of sample observations for $d_i$, we only consider the features contained in the human-labeled pairs. If a feature of $d_i$ never appears in the human-labeled pairs, we lack reliable information to reason about its match probability. It is therefore ignored in the observation generation process.
\begin{example} \label{exam:observation-generation}
Suppose that a target pair, $d_1$, contains 3 features, \{$f_1$,$f_2$,$f_3$\}, and a pair manually labeled as {\em unmatch} by the human, $d_2^h$, contains $f_1$ and $f_2$, but not $f_3$. For the sake of presentation simplicity, we also suppose that feature weights are equally set to be 1.
With the expectation of the match probability of $f_3$ being estimated at 0.3, the sample observation provided by $d_2^h$ for the status of $d_1$ is approximated by $O_2(d_1)=\frac{0+0.3}{2}=0.15$.
\end{example}
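The following Python sketch implements Eq.~\ref{eq:observation} together with the estimate of $E(f_r)$ and reproduces the numbers of Example~\ref{exam:observation-generation}; the helper names and the list of human-labeled pairs are hypothetical, introduced only for this sketch.
\begin{verbatim}
def feature_match_expectation(feature, human_labeled):
    """E(f_r): fraction of human-labeled pairs containing `feature`
    that are labeled match (1)."""
    labels = [lab for feats, lab in human_labeled if feature in feats]
    return sum(labels) / len(labels) if labels else None

def sample_observation(target_feats, labeled_feats, label,
                       human_labeled, w=None):
    """O_j(d_i): sample observation from one human-labeled pair."""
    w = w or {}
    num, den = float(label), 1.0
    for f in target_feats:
        if f in labeled_feats:
            continue
        e = feature_match_expectation(f, human_labeled)
        if e is None:
            continue  # feature absent from all human-labeled pairs
        num += w.get(f, 1.0) * e
        den += w.get(f, 1.0)
    return num / den

# Example 3.1: d_1 has {f1, f2, f3}; d_2^h contains f1, f2 and is
# labeled unmatch (0); ten labeled pairs containing f3 give E(f3) = 0.3.
human = [({"f1", "f2"}, 0)] + [({"f3"}, 1)] * 3 + [({"f3"}, 0)] * 7
print(sample_observation({"f1", "f2", "f3"}, {"f1", "f2"}, 0, human))
# -> 0.15
\end{verbatim}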
\vspace{-3pt} \subsubsection{Bayesian inference}
Given a random variable $V$ following a known prior distribution, $\pi(V)$, the technique of Bayesian inference~\cite{berger1985statistical} estimates the posterior distribution of $V$ by combining the prior information provided by $\pi(V)$ and the sample observations. In our example, the prior distribution of the match probability of a target pair, $d_i$, is represented by the normal distribution $\mathcal{N}(\mu_i, \sigma_i^2)$. Suppose that the prior expectation of $\mu_i$ provided by the machine classifier is $\mu_i^0$ and the human-labeled pairs provide $n$ sample observations.
As usual, we suppose that $\mu_i$ and $\sigma_i^2$ follow a combined conjugate prior distribution, or a {\em normal-inverse-gamma distribution}. The prior distributions of $\mu_i$ and $\sigma_i^2$ can thus be represented by
\begin{equation}
p(\mu_i|\sigma_i^2; \mu_i^0, n^0) \sim \mathcal{N}(\mu_i^0, \frac{\sigma_i^2}{n^0}),
\label{eq:mean-dist}
\end{equation}
and
\begin{equation}
p(\sigma_i^2; \alpha, \beta) \sim InvGamma(\alpha^0, \beta^0),
\label{eq:variance-dist}
\end{equation}
where $n^0$, $\alpha^0$ and $\beta^0$ are the hyperparameters, and $InvGamma()$ denotes an {\em inverse-gamma distribution}. Denoting the posteriors by $\mathcal{N}(\mu_i^1, \frac{\sigma_i^2}{n^1})$ and $InvGamma(\alpha^1, \beta^1)$, we have
\begin{equation}
\begin{split}
\mu_i^1 &= \frac{n^0\cdot\mu_i^0 + n\cdot \bar{p}_i}{n^0 + n}, \\
n^1 &= n^0 + n, \\
\alpha^1 &= \alpha^0 + \frac{n}{2}, \\
\beta^1 &= \beta^0 + \frac{1}{2}\sum_{j=1}^{n}(p_i^j - \bar{p}_i)^2 + \frac{1}{2}\cdot \frac{n^0 n}{n^0 + n}\cdot (\mu_i^0 - \bar{p}_i)^2,
\end{split}
\end{equation}
where $\bar{p}_i$ denotes the average value of observed samples.
In Eq.~\ref{eq:mean-dist} and Eq.~\ref{eq:variance-dist}, the hyperparameters $n^0$, $\alpha^0$ and $\beta^0$ are used to convey the belief about the prior information. Specifically, given a confidence level of $\theta$ on the prior expectation $\mu_i^0$, we set $n^0=\theta n / (1 - \theta)$, which means that the prior expectation $\mu_i^0$ receives the weight $\theta$ in the estimation of $\mu_i$. Similarly, we set $\alpha^0=\frac{n}{2}\cdot\frac{\theta}{1-\theta} + 1$ and $\beta^0 = S_n^2 \cdot (\alpha^0 - 1)$, in which $S_n^2$ represents the variance of all the samples; this means that the prior contributes the fraction $\theta$, i.e. $\theta S_n^2$, to the estimation of $\sigma_i^2$.
Based on the obtained posterior distributions of $\mu_i$ and $\sigma_i^2$, a point estimate $\hat{\mu}_i$ for the random variable $\mu_i$ (resp. $\hat{\sigma}_i^2$ for $\sigma_i^2$) can be inferred using a metric of Bayes risk. More details on the Bayesian inference can be found in our technical report~\cite{chen2018riskerreport}.
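The update above is straightforward to implement. In the following Python sketch the posterior means are returned as the point estimates $\hat{\mu}_i$ and $\hat{\sigma}_i^2$; this is one common choice of Bayes estimator, assumed here for concreteness, while the technical report~\cite{chen2018riskerreport} discusses the general case.
\begin{verbatim}
import numpy as np

def posterior_update(mu0, samples, theta=0.8):
    """Normal-inverse-gamma update for one pair.

    mu0     : prior match expectation from the machine classifier
    samples : sample observations O_j(d_i) from human-labeled pairs
    theta   : confidence level on the prior information
    """
    p = np.asarray(samples, dtype=float)
    n = len(p)
    p_bar, S2 = p.mean(), p.var()    # sample mean and variance S_n^2

    # Hyperparameters encoding the belief theta in the prior:
    n0 = theta * n / (1.0 - theta)
    alpha0 = 0.5 * n * theta / (1.0 - theta) + 1.0
    beta0 = S2 * (alpha0 - 1.0)

    # Posterior parameters:
    mu1 = (n0 * mu0 + n * p_bar) / (n0 + n)
    alpha1 = alpha0 + 0.5 * n
    beta1 = (beta0 + 0.5 * np.sum((p - p_bar) ** 2)
             + 0.5 * n0 * n / (n0 + n) * (mu0 - p_bar) ** 2)

    mu_hat = mu1                      # posterior mean of mu_i
    var_hat = beta1 / (alpha1 - 1.0)  # mean of InvGamma(alpha1, beta1)
    return mu_hat, var_hat

# Hypothetical example: machine prior 0.7, five sample observations.
print(posterior_update(0.7, [0.9, 0.8, 0.95, 0.85, 0.9]))
\end{verbatim}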
\vspace{-3pt} \subsection{Risk Model} \label{sec:cvar}
Inspired by the portfolio investment theory~\cite{rockafellar2002conditional}, we employ the metric of Conditional Value at Risk (CVaR) to measure the risk of pairs being mislabeled by the machine. Given a confidence level of $\theta$, CVaR is the expected loss incurred in the $1-\theta$ worst cases. Formally, given the loss function $z(X)\in L^p(\mathcal{F})$ of a portfolio $X$ and $\theta$, the metric of CVaR is defined as follows:
\begin{equation}
CVaR_\theta(X) = \frac{1}{1-\theta}\int_0^{1-\theta}VaR_{1-\gamma}(X)d\gamma,
\end{equation}
where $VaR_{1-\gamma}(X)$ represents the minimum loss incurred at or below $\gamma$ and can be formally represented by
\begin{equation}
VaR_{1-\gamma}(X)=inf\{z_*: P(z(X)\geq z_*)\leq \gamma\}.
\end{equation}
Given a pair, $d_i$, we denote its match probability by $x$, and its probability density function and cumulative distribution function by $pdf_{d_i}(x)$ and $cdf_{d_i}(x)$ respectively. If $d_i$ is labeled by the machine as {\em unmatch}, its probability of being mislabeled by the machine is equal to $x$. Accordingly, its worst-case loss corresponds to the case that $x$ is maximal. Therefore, given the confidence level of $\theta$, the CVaR of $d_i$ is the expectation of $z=x$ in the $1-\theta$ cases where $x$ is from $cdf_{d_i}^{ - 1}(\theta)$ to $+ \infty$. Formally, the CVaR risk of a pair $d_i$ with the machine label of {\em unmatch} can be estimated by
\begin{equation}
CVa{R_\theta }(d_i) =
\frac{1}{1 - \theta }\int\limits_{{{cdf}_{d_i}}^{ - 1}(\theta )}^{ + \infty } {pdf_{d_i}} (x) \cdot xdx.
\end{equation}
Otherwise, if $d_i$ is labeled by the machine as {\em match}, its potential loss of being mislabeled by the machine is equal to 1-$x$. Therefore, the CVaR risk of a pair $d_i$ with the machine label of {\em match} can be similarly estimated by
\begin{equation}
CVa{R_\theta }(d_i) =
\frac{1}{1 - \theta}\int\limits_{ - \infty }^{{cdf_{d_i}}^{ - 1}(1 - \theta )} {pdf_{d_i}} (x) \cdot (1 - x)dx.
\end{equation}
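Numerically, both expressions reduce to a one-dimensional integral. The following Python sketch evaluates them for a match probability following a normal distribution truncated to $[0,1]$ (the truncation used in Section~\ref{sec:experiment}); the example parameters are hypothetical.
\begin{verbatim}
from scipy.integrate import quad
from scipy.stats import truncnorm

def cvar(mu, sigma, machine_label, theta=0.8):
    """CVaR of a pair whose match probability follows
    N(mu, sigma^2) truncated to [0, 1]."""
    a, b = (0.0 - mu) / sigma, (1.0 - mu) / sigma
    dist = truncnorm(a, b, loc=mu, scale=sigma)
    if machine_label == "unmatch":
        lo, hi = dist.ppf(theta), 1.0        # worst cases: largest x
        loss = lambda t: t
    else:
        lo, hi = 0.0, dist.ppf(1.0 - theta)  # worst cases: smallest x
        loss = lambda t: 1.0 - t
    val, _ = quad(lambda t: loss(t) * dist.pdf(t), lo, hi)
    return val / (1.0 - theta)

# Hypothetical pair: posterior mean 0.4, posterior std 0.15.
print(round(cvar(0.4, 0.15, "unmatch"), 4))
print(round(cvar(0.4, 0.15, "match"), 4))
\end{verbatim}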
\section{Empirical Evaluation} \label{sec:experiment}
We have evaluated the performance of the proposed risk model, denoted by CVAR, on real data by a comparative study. We compare it with both a baseline alternative and a state-of-the-art technique proposed for active learning \cite{mozafari2014scaling}. The baseline method, denoted by BASE, selects the machine-labeled pairs solely based on the match expectation estimated by the machine. Specifically, given a pair $d_i$ and its match probability $\mu_i^0$ provided by a classifier, the risk of $d_i$ with the machine label of {\em unmatch} (resp. {\em match}) is simply estimated to be $\mu_i^0$ (resp. ($1-\mu_i^0$)). Since the two algorithms proposed in \cite{mozafari2014scaling}, {\em Uncertainty} and {\em MinExpError}, perform very similarly in our experiments, we only report the results of {\em Uncertainty}. We denote the algorithm of {\em Uncertainty} by UNCT. Intuitively, UNCT iteratively selects the pairs that the classifier is most uncertain about for manual verification.
Additionally, we also compare the proposed risk-based solution (denoted by RISK) with the active learning solution (denoted by ACTL) on the achieved resolution quality given the same amount of human cost budget. Note that the ACTL solution tunes classifier parameters after additional manual verification, and can thus potentially improve classification accuracy, while RISK does not.
We used the real datasets DBLP-Scholar~\footnote{\url{https://dbs.uni-leipzig.de/file/DBLP-Scholar.zip}} and Abt-Buy~\footnote{\url{https://dbs.uni-leipzig.de/file/Abt-Buy.zip}} in the empirical study. As usual, we use the standard blocking technique to filter the instance pairs unlikely to match. After blocking, the DBLP-Scholar workload contains a total of $41416$ instance pairs, and the Abt-Buy workload contains a total of $20314$ instance pairs. We employ SVM as the machine classifier. On DBLP-Scholar, we use the Jaccard similarity over the attributes {\em title} and {\em authors}, the edit distance over the attributes {\em title}, {\em authors} and {\em venue}, and the number equality over {\em publication year} as the input features for SVM. With only $1\%$ of input data as training data, the achieved precision and recall of the SVM classifier are $0.917$ and $0.875$ respectively. On Abt-Buy, we use the Jaccard similarity and edit distance over the attributes {\em product name} and {\em description} respectively as the input features for SVM. With only 2\% of input data as training data, the achieved precision and recall are $0.567$ and $0.338$ respectively. In the implementation of risk analysis, the confidence level $\theta$ is set to 0.8. Since a valid match probability should be between 0 and 1, we transform the inferred normal distribution to a {\em truncated normal distribution} in the range of 0 to 1 \cite{burkardt2014truncated}.
\begin{figure}
\centering
\subfigure[The DBLP-Scholar dataset.]
{\includegraphics[width=0.45\linewidth]{figures/dscorrections_g.pdf}}
\subfigure[The Abt-Buy dataset.]
{\includegraphics[width=0.45\linewidth]{figures/abcorrections_g.pdf}}
\vspace{-0.5cm}
\caption{Pick-up accuracy comparison.}
\label{fig:pairs-number}
\vspace{-0.5cm}
\end{figure}
\begin{figure}
\centering
\subfigure[The DBLP-Scholar dataset.]
{\includegraphics[width=0.45\linewidth]{figures/dsf1_g.pdf}}
\subfigure[The Abt-Buy dataset.]
{\includegraphics[width=0.45\linewidth]{figures/abf1_g.pdf}}
\vspace{-0.5cm}
\caption{Resolution quality comparison between RISK and ACTL.}
\label{fig:f1-score}
\vspace{-0.4cm}
\end{figure}
The comparative results on pick-up accuracy are presented in Figure~\ref{fig:pairs-number}. It can be observed that, given the same budget, CVAR consistently picks up more mislabeled pairs than BASE and UNCT. Since both BASE and UNCT reason about the risk based on the match expectation estimated by the machine, it should not be surprising that they perform similarly. The improvement margins of CVAR over the alternatives first widen as the budget increases, but then gradually narrow down as expected. Since the number of mislabeled pairs decreases with additional manual inspections, the performance differences between the approaches tend to decrease as well. These experimental results clearly validate the efficacy of the proposed risk model.
The comparative results on resolution quality between RISK and ACTL, measured by the F-1 metric, are also presented in Figure~\ref{fig:f1-score}. The achieved quality is measured on the results consisting of both manually labeled pairs and the pairs labeled by the classifier. It can be observed that after initial iterations, RISK achieves considerably better quality than ACTL. Even though ACTL uses the additional labeled data to update its classifier, the marginal benefit of additional training data points drops quickly with the increase of budget as expected. These experimental results show that the risk-based approach can be more effective than the active learning approach in improving resolution quality.
\section{Conclusion} \label{sec:conclusion}
\vspace{-2pt}
In this paper, we propose to investigate the problem of human and machine cooperation for ER from a risk perspective. We have presented a risk model and empirically validated its efficacy. It is worth pointing out that the proposed risk-based framework can potentially be generalized to other classification tasks. It is interesting to investigate its application in scenarios beyond ER in future work.
\section*{Acknowledgment}
\vspace{-3.5pt}
This work was supported by the National Key R\&D Program of China (2016YFB1000703), NSF of China (61732014, 61332006, 61472321, 61502390 and 61672432) and Shaanxi NSBR Plan (2018JM6086).
\section{Introduction}\label{sect:introduction}
The observation of gravitational waves~\cite{LIGOScientific:2016aoc} and the reconstruction of the image of a black hole shadow~\cite{EventHorizonTelescope:2019dse} have provided impressive support to Einstein's General Relativity (GR), and to the existence of astrophysical objects whose properties reflect very closely those of GR's black holes.
Yet, it is expected that GR eventually breaks down at Planckian scales, leaving the stage to a more fundamental theory accounting for the quantum nature of gravity. Among the different theories of quantum gravity, string theory~\cite{Berkovits:2022ivl}, loop quantum gravity~\cite{Perez:2012wv,Ashtekar:2021kfp}, asymptotically safe gravity~\cite{Percacci:2017fkn,Reuter:2019byg}, and non-local gravity~\cite{Buoninfante:2016iuf,Modesto:2017sdr} have gained considerable attention. While seemingly diverse, a common feature is that the effective action and field equations stemming from their ultraviolet (UV) completions display additional higher-derivative terms~\cite{Gross:1986mw,Grimm:2004uq,Modesto:2013ioa,Hohm:2014sxa,Hohm:2016lim,Donoghue:2021cza,Buoninfante:2018mre,Knorr:2019atm,Mayer:2020lpa,Borissova:2022clg} which complement the Einstein-Hilbert dynamics. These corrections are expected to play an important role in determining the quantum spacetimes allowed by a principle of least action~\cite{Knorr:2022kqp} and their dynamics. In this respect, black holes and their alternatives are particularly important avenues: on the one hand, quantum gravity is expected to yield non-singular solutions~\cite{Ayon-Beato:1999qin,Bonanno:2000ep,Bronnikov:2000vy,Dymnikova:2004zc,Modesto:2004xx,Hayward:2005gi,Ansoldi:2006vg,Modesto:2008im,Ansoldi:2008jw,Nicolini:2008aj,Hossenfelder:2009fc,Modesto:2010rv,Spallucci:2011rn,Sprenger:2012uc,Bambi:2013caa,Culetu:2014lca,Frolov:2014jva,Casadio:2014pia,Carr:2015nqa,Frolov:2016pav,Bonanno:2016dyv,Bonanno:2017kta,Bonanno:2017zen,Adeifeoba:2018ydh,Buoninfante:2018stt,Carballo-Rubio:2018jzw,Carballo-Rubio:2019fnb,Simpson:2019mud,Platania:2019kyx,Bonanno:2019ilz,Bosma:2019aiu,Carballo-Rubio:2019nel,Lan:2020fmn,Lobo:2020ffi,Franzin:2021vnj,Maeda:2021jdc,Bokulic:2022cyk,Cadoni:2022chn,Casadio:2022ndh} or spacetimes with integrable singularities~\cite{Lukash:2011hd,Lukash:2013ts}; on the other hand, the quantum dynamics could shed light on how these objects are formed in a gravitational collapse, and on what the final stages of the evaporation process could be. Moreover, accounting for the quantum dynamics is key to establish whether the linear instabilities that potentially affect the inner horizon of regular or rotating black holes~\cite{DiFilippo:2022qkl,Bonanno:2020fgp} are damped or enhanced by quantum effects. Finally, the number of derivatives in the effective action is crucially related to the type of allowed solutions: truncating the full effective action to quartic order in a derivative expansion, the phase space of all possible solutions is dominated by wormholes and singular black holes~\cite{Lu:2015cqa,Lu:2015tle,Lu:2015psa,Bonanno:2019rsq,Bonanno:2022ibv}. Adding terms with six or more derivatives, the field equations instead admit spherically symmetric regular solutions~\cite{Holdom:2002xy,Knorr:2022kqp}.
Determining the shape and properties of quantum black holes from first principles is highly challenging: it requires resumming quantum-gravitational fluctuations, deriving an effective action or a similar mathematical object parametrized by finitely many free parameters, and finally determining the spacetime solutions to the corresponding field equations. In turn, computing the effective action requires solving either the path integral or its integro-differential re-writing in terms of functional renormalization group (FRG)~\cite{Dupuis:2020fhh} equations. To avoid these complications, the so-called renormalization group (RG) improvement has been used extensively in the framework of asymptotically safe gravity to investigate how quantum-gravitational effects could impact the short-distance behavior of gravity beyond GR and its solutions. This approach has emerged in the context of gauge theories~\cite{Coleman1973:rcssb,Migdal:1973si,Adler:1982jr,Dittrich:1985yb} as a way to access leading-order quantum effects while avoiding the computation of quantum loops or a full solution of the beta functions. It consists of promoting the classical constants to running couplings and subsequently replacing the RG scale with a characteristic energy scale of the system.
At a qualitative level, the application of the RG improvement to gravity~\cite{Bonanno:2006eu,Falls:2010he,Cai:2010zh,Falls:2012nd,torres15,Koch:2015nva,Bonanno:2015fga,Emoto:2005te,Bonanno:2016rpx,Kofinas:2016lcz,Falls:2016wsa,Bonanno:2016dyv,Bonanno:2017gji,Bonanno:2017kta,Bonanno:2017zen,Bonanno:2018gck,Liu:2018hno,Majhi:2018uao,Anagnostopoulos:2018jdq,Adeifeoba:2018ydh,Pawlowski:2018swz,Gubitosi:2018gsl,Bonanno:2019ilz,Held:2019xde,Platania:2019qvo,Platania:2019kyx,Ishibashi:2021kmf,Chen:2022xjk,Scardigli:2022jtt} has pointed to the following tentative conclusions. First, classical static black holes are replaced by regular black holes~\cite{Bonanno:1998ye,Bonanno:2000ep,Cai:2010zh,Falls:2010he,Torres:2014gta,Kofinas:2015sna,Emoto:2005te,Torres:2017ygl,Adeifeoba:2018ydh, Pawlowski:2018swz, Platania:2019kyx} or by compact objects~\cite{Bonanno:2019ilz,Borissova:2022jqj}. Secondly, accounting for the formation of black holes from the gravitational collapse of a massive star makes singularity resolution less straightforward and typically results in a weaker condition: black hole singularities are not fully resolved, but are rather replaced by so-called integrable singularities~\cite{Fayos:2011zza,torres15,Bonanno:2016dyv,Bonanno:2017kta,Bonanno:2017zen}. Thirdly, singularity resolution in cosmology leads to either bouncing cosmologies or to cyclic universes~\cite{Kofinas:2016lcz, Bonanno:2017gji}. Finally, in a cosmological context, the spectrum of temperature fluctuations in the cosmic microwave background radiation is intuitively understood in terms of fundamental scale invariance~\cite{Wetterich:2019qzx,Wetterich:2020cxq} in the UV---a key requirement for a theory to be UV-complete at a fixed point of the RG flow, cf.~\cite{Bonanno:2001xi,Bonanno:2001hi,Bonanno:2002zb,Guberina:2002wt,Reuter:2005kb,Bonanno:2007wg,Bonanno:2008xp,Bonanno:2010mk,Cai:2011kd,Bonanno:2015fga,Bonanno:2016rpx,Bonanno:2017pkg,Bonanno:2018gck,Gubitosi:2018gsl,Platania:2019qvo,Platania:2020lqb}.
Yet, the connection of these results with asymptotic safety and the FRG seems vague, as the application of the RG improvement to gravity is subject to ambiguities. In particular, in the context of gravity it is not obvious how to identify the RG scale consistently, as several characteristic physical scales may compete in a given process or phenomenon. This has led to a plethora of applications of the RG improvement in gravity, where the scale is identified based on physical intuition. In addition to this ambiguity, it is not clear whether the RG improvement should be implemented at the level of the action, field equations, or solutions. While these details typically do not affect the qualitative conclusions obtained via the RG improvement (at least when the scale is reasonably motivated and not manifestly inconsistent, e.g., with diffeomorphism invariance~\cite{Babic:2004ev}), a more rigorous approach might allow to determine the connection of these results with first-principle computations in quantum gravity, and in particular with the form factors program~\cite{Knorr:2018kog,Knorr:2019atm,Bosma:2019aiu,Draper:2020knh,Draper:2020bop,Knorr:2021niv,Knorr:2021iwv,Knorr:2022lzn}. The importance of the latter lies in the possibility to compute (via FRG calculations) the effective action in a curvature expansion, including infinitely many higher-derivative terms, and thus to determine formal properties of the theory~\cite{Gies:2016con,Draper:2020bop,Platania:2020knd,Knorr:2021slg,Knorr:2021niv,Bonanno:2021squ,Fehre:2021eob,Platania:2022gtt,Pastor-Gutierrez:2022nki} and of its solutions~\cite{Bosma:2019aiu}.
The aim of this work is to fill the gap between such FRG calculations and the current practice of the RG improvement. We do so by exploiting the so-called decoupling mechanism~\cite{Reuter:2003ca}: if below a certain critical RG scale---dubbed the decoupling scale---there are infrared (IR) scales dominating over the regulator which implements the Wilsonian integration of quantum gravitational fluctuations, then the RG flow freezes out and the scale-dependent effective action at the decoupling scale provides a good approximation to the effective action. In particular, identifying the decoupling scale typically grants access to some higher-derivative interaction terms which were not taken into account in the original truncation. For instance, this is the case in scalar electrodynamics, where the decoupling mechanism allows one to derive the logarithmic interaction term in the Coleman-Weinberg effective potential (see~\cite{Reuter:2003ca,Platania:2020lqb} for details).
In this paper we present the first application of the decoupling mechanism in gravity. In particular, we will use it to determine qualitative features of the dynamics of black holes beyond GR, from formation to evaporation. As a first attempt in this direction, we will start from the Einstein-Hilbert truncation and use a simple model for the gravitational collapse where the mass function is linear in the advanced time.
Our key results can be summarized as follows. The dynamics of quantum-corrected black holes is governed by an effective Newton coupling which decreases both in time (down to a certain non-zero value), and along the radial direction. In particular, its radial dependence smoothly interpolates between the observed value at large distances and zero at the would-be singularity. As a consequence, the curvature of the quantum-corrected spacetime is weakened compared to its classical counterpart. Although we started from the Einstein-Hilbert truncation, the effective Newton coupling also features characteristic damped oscillations reminiscent of black hole solutions in higher-derivative gravity with specific non-local form factors: free oscillations in the lapse function are typical of black holes in local quadratic gravity assuming a specific sign of the Weyl-squared coupling~\cite{Bonanno:2013dja,Bonanno:2019rsq}, whereas their damping requires the presence of non-local form factors in the quadratic part of the action~\cite{Zhang:2014bea}. This is an expected outcome of the decoupling mechanism and provides evidence that a careful application of the RG improvement, where the scale is not set by physical intuition, but rather by rigorously exploiting the decoupling condition, might provide important insights into quantum gravity phenomenology~\cite{Addazi:2021xuf}.
Finally, within some approximations, a standard study of the black hole evaporation leads to conclusions in line with the literature~\cite{Bonanno:2006eu}: in the evaporation process, quantum black holes get hotter, and after reaching a maximum temperature, they start cooling down, eventually resulting in a cold black hole remnant.
The present paper is organized as follows. In Sect.~\ref{sect:FRG-RGimp-DecMech} we introduce the FRG and the decoupling mechanism. Next, we show how the decoupling mechanism can be exploited to access some of the higher-derivative terms in the effective action, and thus how to derive corrections to the solutions of GR. We present our setup in Sect.~\ref{sect:setup}, where we also derive the equations governing the dynamics of the quantum-corrected spacetime. We provide numerical and analytical solutions to these equations in Sects.~\ref{sect:collapse}, \ref{sect:Solutionsstatic}, and \ref{sect:evaporation}, where we study the dynamics of quantum-corrected black holes in three distinct regimes: formation, static configuration at the end of a collapse, and evaporation, whereby we assume that the evaporation starts only after the collapse is over. We discuss our results in Sect.~\ref{sect:conclu}.
\section{Functional renormalization group and decoupling mechanism}\label{sect:FRG-RGimp-DecMech}
This section introduces the key novel ingredient in our derivation of the dynamics of black holes beyond GR: the decoupling mechanism~\cite{Reuter:2003ca}. To this end, we shall start by briefly summarizing the FRG, its relation to quantum field theory, and its use in quantum gravity. Next, we shall clarify the difference between the RG scale built into the FRG and the physical running appearing in the effective action and in scattering amplitudes (see also~\cite{Donoghue:2019clr,Bonanno:2020bil}). Finally, we will review the idea behind the decoupling mechanism and we will explain how it can be exploited to extract qualitative information on quantum spacetimes and their dynamics.
\subsection{Effective actions and the functional renormalization group}
Schwarzschild black holes and the Friedmann-Lema\^{i}tre-Robertson-Walker cosmology can be found as solutions to the Einstein field equations
\begin{equation}
\frac{\delta S_{\text{EH}}}{\delta g_{\mu\nu}}=0\,,
\end{equation}
$S_{\text{EH}}$ being the classical Einstein-Hilbert action. In a quantum theory of gravity these field equations are replaced by their quantum counterpart,
\begin{equation}\label{eq:field-eqs-eff-act}
\frac{\delta \Gamma_0}{\delta g_{\mu\nu}}=0\,,
\end{equation}
where $\Gamma_0$ is the gravitational quantum effective action. The knowledge of the effective action thus paves the way to the investigation of quantum black holes and quantum cosmology.
Yet, computing the effective action is extremely challenging. One should solve either the gravitational path integral
\begin{equation}\label{eq:path-integral}
\int \mathcal{D} g_{\mu\nu}\, e^{i\,S_{\text{bare}}[g_{\mu\nu}]} \, ,
\end{equation}
equipped with a suitable regularization, or the FRG equation. Within the FRG, the idea is to first regularize the path integral by introducing an \emph{ad hoc} regulator term in the bare action, and then transform the integral over field configurations~\eqref{eq:path-integral} into a functional integro-differential equation for a scale-dependent version of the effective action, $\Gamma_k$, called effective average action. The resulting flow equation for $\Gamma_k$~\cite{Wetterich:1992yh,Reuter:1996cp} reads
\begin{equation}\label{eq: flow equation}
k \partial_k \Gamma_k = \frac{1}{2} \text{STr}\qty[\qty(\Gamma_k ^{(2)}+\mathcal{R}_k)^{-1}k \partial_k \mathcal{R}_k]\,.
\end{equation}
Here $\Gamma_k ^{(2)}$ denotes the matrix of second functional derivatives of the effective average action with respect to the quantum fields at fixed background. The function $\mathcal{R}_k$ is a regulator whose properties guarantee the suppression of IR and UV modes in the flow equation, such that the main contribution to $\Gamma_k$ comes from momentum modes at the scale $k$. Finally, the supertrace ``STr'' denotes a sum over discrete indices as well as an integral over momenta.
The solution to Eq.~\eqref{eq: flow equation} for a given initial condition identifies a single RG trajectory. The set of all RG trajectories defines the RG flow. A solution $\Gamma_k$ is physically well defined (i.e., the corresponding theory is renormalizable) if its RG trajectory approaches a fixed point in the UV, $k\to\infty$. In this limit~$\Gamma_k$ ought to approach the bare action $S_{\text{bare}}$, up to the reconstruction problem, see, e.g.,~\cite{Manrique:2008zw,Morris:2015oca,Fraaije:2022uhg}. The opposite limit, $k\to0$, corresponds to the case where all quantum fluctuations are integrated out, and yields the standard quantum effective action $\Gamma_0$. First steps towards computing the gravitational effective action have been taken in~\cite{Codello:2015oqa,Knorr:2018kog,Knorr:2019atm,Ohta:2020bsc,Bonanno:2021squ,Basile:2021krr,Knorr:2021niv} in the context of asymptotically safe gravity and in~\cite{Fradkin:1985ys,Gross:1986mw,Veneziano:1991ek,Meissner:1991zj,Meissner:1996sa,Tseytlin:2006ak,Hohm:2015doa,Hohm:2019ccp,Hohm:2019jgu,Basile:2021euh,Basile:2021krk,Hu:2022myf} within string theory. While deriving the coefficients and form factors in the effective action is highly challenging, one may attempt to find solutions to Eq.~\eqref{eq:field-eqs-eff-act} using alternative strategies. Before describing one of them, that is based on the decoupling mechanism, in the next subsection we shall first clarify a fundamental difference between the momentum scale $k$ in Eq.~\eqref{eq: flow equation} and the physical momentum dependence of $\Gamma_k$, as this difference is often a source of confusion.
\subsection{Clarifying nomenclature: RG scale dependence versus physical running}
The effective average action $\Gamma_k$ is constructed as an RG scale dependent action functional, where all couplings or functions are promoted to $k$-dependent quantities. The action $\Gamma_k$ can thus be parametrized by an infinite-dimensional coordinate vector containing the couplings associated with all possible diffeomorphism-invariant operators. In full generality, the flow equation~\eqref{eq: flow equation} can be associated with infinitely many coupled ordinary differential equations for the couplings. However, in practice the technical complexity requires a truncation of the theory space to a manageable subspace. For instance, at quadratic order in a curvature expansion one has
\begin{equation}\label{eq:EAA-quadratic}
\Gamma_k=\int \dd[4]x \sqrt{-g}\left(\frac{1}{16\pi G_k}\qty(R-2\Lambda_k)+R\, g_{R,k}(\Box)\,R+ C_{\mu\nu\sigma\rho}\,g_{C,k}(\Box)\,C^{\mu\nu\sigma\rho}\right) \,,
\end{equation}
where $G_k=g_k k^{-2}$ and $\Lambda_k=\lambda_k k^{2}$ are the RG scale dependent versions of the Newton and cosmological constants, $g_k$ and $\lambda_k$ being their dimensionless counterparts, and $g_{R,k}(\Box)$ and $g_{C,k}(\Box)$ are the couplings of the curvature-squared terms, which can generally depend on the d'Alembertian operator. The $k$-dependence is associated with the Wilsonian integration of fluctuating modes from the UV to the IR. In particular, it is typically used to study the fixed point structure of the action, as the existence of suitable fixed points relates to renormalizability and guarantees that observables computed using the effective action $\Gamma_0$ are finite. Provided that such a suitable fixed point exists, one can integrate the flow down to the physical limit $k=0$, where the effective average action reduces to the quantum effective action $\Gamma_0$.
It is important to remark that the $k$-dependence is not related to the physical momentum dependence of couplings, which is to be read off from the effective action $\Gamma_0$. Specifically, the structure of the effective average action~\eqref{eq:EAA-quadratic} should be contrasted with that of the effective action~\cite{Knorr:2019atm}
\begin{equation}\label{eq:eff-action}
\Gamma_0=\int \dd[4]x \sqrt{-g}\left(\frac{1}{16\pi G_N}\qty(R-2\Lambda)+R\,\mathcal{F}_R(\Box)\,R+ C_{\mu\nu\sigma\rho}\,\mathcal{F}_C(\Box)\,C^{\mu\nu\sigma\rho}\right) \,,
\end{equation}
where the Newton coupling and the cosmological constant are constants whose values are fixed by observations, while the physical running---encoded in the form factors $\mathcal{F}_i(\Box)\equiv g_{i,0}(\Box)$---is carried by the couplings of the terms at least quadratic in curvature. Note that the dependence on the d'Alembertian is the curved-spacetime generalization of the dependence of couplings on a physical momentum~$p^2$~\cite{Knorr:2019atm}.
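As a simple illustration of this correspondence, consider perturbations around a flat background: acting on a Fourier mode, and with the sign convention $\Box\equiv-D^2$ adopted below, the d'Alembertian reduces to the squared momentum, so that each form factor becomes an ordinary momentum-dependent coupling,
\begin{equation}
\Box\, e^{ip\cdot x}=p^2\, e^{ip\cdot x}\qquad\Longrightarrow\qquad \mathcal{F}_i(\Box)\,e^{ip\cdot x}=\mathcal{F}_i(p^2)\,e^{ip\cdot x}\,.
\end{equation}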
The so-called RG improvement was originally devised as a method to obtain an approximation to the effective action~\eqref{eq:eff-action} (or to the solutions to its field equations) by starting from its $k$-dependent counterpart~\eqref{eq:EAA-quadratic} (typically a local version of it) and subsequently replacing the RG scale $k$ with a physical momentum or energy scale. This seems to be a viable strategy in the context of gauge and matter theories~\cite{Coleman1973:rcssb,Migdal:1973si,Adler:1982jr,Dittrich:1985yb}, and is based on the decoupling mechanism, which we review in the following subsection.
\subsection{Effective actions from the decoupling mechanism}
The flow of the effective average action $\Gamma_k$ from the UV fixed point to the physical IR is governed by the FRG equation~\eqref{eq: flow equation}. In particular, the variation of $\Gamma_k$ on the left-hand side of~\eqref{eq: flow equation} is induced by the artificial regulator $\mathcal{R}_k$. The latter is an effective mass-squared term, $\mathcal{R}_k\sim k^2$, suppressing fluctuations with momenta $p^2\lesssim k^2$.
The decoupling mechanism~\cite{Reuter:2003ca}, if at work, could provide a short-cut linking (a truncated version of) $\Gamma_k$ to the effective action $\Gamma_0$ and relies on the following observation.
In the flow towards the IR, $\mathcal{R}_k$ decreases as $\sim k^2$, and at a certain scale $k_{dec}$ the running couplings and other physical scales in the action, for instance a mass term, may overcome the effect of the cutoff $\mathcal{R}_k$. As a result, the flow of the effective average action $\Gamma_k$ would freeze out, so that at the decoupling scale $\Gamma_{k=k_{dec}}$ approximates the standard effective action~$\Gamma_0$ (cf.~Fig.~\ref{fig:decoupl}).
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{decoupl.pdf}
\caption{\label{fig:decoupl} Idea behind the decoupling mechanism. If one or a combination of physical IR scales in the effective average action overcomes the effect of the regulator $\mathcal{R}_k$ in the flow equation~\eqref{eq: flow equation}, the flow freezes out and the effective average action at the critical scale $k_{dec}$ approximates the full effective action~$\Gamma_0$.}
\end{figure}
By identifying the decoupling scale, certain terms appearing in the full effective action can be predicted which were not contained in the original truncation. An emblematic example is scalar electrodynamics, where the RG improvement, combined with the decoupling mechanism, is able to correctly generate the logarithmic corrections in the Coleman-Weinberg effective potential~\cite{Coleman1973:rcssb}, see~\cite{Reuter:2003ca,Platania:2020lqb} for details and~\cite{Migdal:1973si,Adler:1982jr,Dittrich:1985yb} for other examples in the context of quantum electrodynamics and quantum chromodynamics.
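To recall schematically how this works (up to numerical factors and the detailed one-loop coefficients): in scalar electrodynamics the quartic coupling runs logarithmically, $\lambda(k)\simeq\lambda(k_0)+c\,e^4\ln(k/k_0)$, while the scalar field itself provides the dominant physical IR scale, $k_{dec}^2\propto e^2\phi^2$, since the gauge boson acquires an effective mass on a scalar background. The RG improved potential then contains
\begin{equation}
V_{\text{eff}}(\phi)\sim \lambda(k_{dec})\,\phi^4 \supset c'\,e^4\,\phi^4\,\ln\frac{\phi}{\mu}\,,
\end{equation}
which reproduces the characteristic structure of the Coleman-Weinberg potential.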
It is important to notice that generally $k_{dec}$ will depend on a non-trivial combination of several physical IR scales appearing in the action, e.g., curvature invariants, masses, or field strengths. To make this statement more precise, one has to look at the structure of the regularized inverse propagator. Neglecting any tensorial structure one would have, schematically,
\begin{equation}
\Gamma^{(2)}_k+\mathcal{R}_k= c\,( p^2+A_k[\Phi]+\tilde{\mathcal{R}}_k)\,,
\end{equation}
where $c$ is a constant, $\Phi$ denotes the set of fields in the theory defined by $\Gamma_k$, and by definition $A_k[\Phi]\equiv \Gamma^{(2)}_k/c -p^2 $ and $\mathcal{R}_k\equiv c \, \tilde{\mathcal{R}}_k$. The regulator $\tilde{\mathcal{R}}_k$ efficiently suppresses modes with $p^2\lesssim k^2$ when it is the largest mass scale in the regularized inverse propagator. By contrast, if $A_k[\Phi]$ contains physical IR scales, there might be a critical momentum~$k_{dec}$ below which~$\tilde{\mathcal{R}}_k$ becomes negligible. Based on these arguments, the decoupling condition reads
\begin{equation}\label{eq:dec-condition}
\tilde{\mathcal{R}}_{k_{dec}} \approx A_{k_{dec}}[\Phi] \,,
\end{equation}
and provides an implicit definition of the decoupling scale $k_{dec}$. It is worth noticing that this equation might not have real solutions, in which case the RG improvement would not be applicable.
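As a minimal example, consider a field of mass $m$, for which $A_k[\Phi]\simeq m^2$ up to field-dependent corrections. With a mass-type regulator $\tilde{\mathcal{R}}_k\simeq k^2$, the decoupling condition~\eqref{eq:dec-condition} then yields
\begin{equation}
k_{dec}^2\simeq m^2\,,
\end{equation}
i.e., the flow effectively freezes once $k$ drops below the mass threshold, in line with the standard decoupling of heavy modes.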
\subsection{Decoupling mechanism versus the practice of RG improvement}
Originally motivated by the decoupling mechanism, the procedure of RG improvement consists of promoting the coupling constants to RG scale dependent couplings, and ``identifying'' the IR cutoff $k$ with a physical scale in order to capture qualitatively the effect of higher-order and non-local terms in the full effective action~\eqref{eq:eff-action}.
The scale dependence is governed by the beta functions which can be computed using functional RG methods within a given truncation of the effective action. For instance, if the effective average action for gravity is approximated by monomials up to first order in the curvature, its form reduces to the Einstein-Hilbert action with a scale-dependent Newton coupling and cosmological constant. Neglecting the cosmological constant, the flow equation~\eqref{eq: flow equation} for the running Newton coupling gives rise to the approximate scale dependence~\cite{Bonanno:2000ep}\footnote{Although the RG scale dependence~\eqref{eq: G running} has been first derived in~\cite{Bonanno:2000ep} via computations in Euclidean signature, Eq.~\eqref{eq: G running} still appears to be a good approximation in Lorentzian signature~\cite{Fehre:2021eob}.}
\begin{equation}\label{eq: G running}
G(k) = \frac{G_0}{1+\omega G_0 k^2},
\end{equation}
where $\omega = 1/g_*$, $g_*$ being the non-Gaussian fixed point value of the dimensionless Newton coupling $g(k) = G(k)k^2$. The existence of such a fixed point in the UV, combined with the requirement of a finite number of relevant directions, is a key ingredient in the definition of asymptotically safe theories.
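It is instructive to note that the scale dependence~\eqref{eq: G running} interpolates between the two regimes relevant for asymptotic safety: in the IR, $G(k)\to G_0$, while in the UV the dimensionless coupling approaches its fixed-point value,
\begin{equation}
g(k)=G(k)\,k^2=\frac{G_0\,k^2}{1+\omega\, G_0\, k^2}\;\xrightarrow{\;k\to\infty\;}\;\frac{1}{\omega}=g_*\,.
\end{equation}
Equivalently, Eq.~\eqref{eq: G running} solves the approximate beta function $k\,\partial_k\, g=2g\,(1-\omega\, g)$, whose fixed points are the Gaussian one, $g=0$, and the non-Gaussian one, $g_*=1/\omega$.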
The application of the RG improvement in gravity suffers from the following problems:
\begin{itemize}
\item \emph{The challenge to relate the cutoff $k$ to a physical scale of the system}: A priori, the role of $k$ is to provide a way to parametrize the RG flow from the UV towards the IR. In principle, if one ignores the original idea behind the decoupling mechanism, there exists no general prescription for how to perform the scale identification on curved spacetimes and in situations where spacetime symmetries are insufficient to fix $k$ uniquely. Scale-setting procedures relying on diffeomorphism invariance and minimal scale dependence of the action were proposed in~\cite{Reuter:2003ca,Babic:2004ev,Domazet:2010bk,Koch:2010nn,Domazet:2012tw,Koch:2014joa}, but these are either not always applicable, or they provide insufficient information to completely fix the function~$k(x)$. Moreover, in generic physical situations, there is more than one scale.
\item \emph{Ambiguity in the application at the level of action, field equations, and solutions}: Although the RG improvement is motivated by the decoupling of the RG flow of the action functional $\Gamma_k$, the sequence of replacements $g_i\to g_i(k)\to g_i[k(x)]$ can in principle be implemented at the level of the action, field equations, or solutions. Typically, the latter two implementations are more straightforward than that at the level of the action, since in physical applications one could skip the step of deriving the field equations or their solutions, respectively. Nevertheless, the three procedures can yield different results. This can be intuitively understood, as, for instance, the replacement $k\mapsto k(x)$ at the level of the action would generate higher-derivative operators which would in turn reflect in additional terms in the field equations.
\item \emph{Backreaction effects in gravity}: In the context of quantum field theories other than gravity, the RG improvement can be applied straightforwardly~\cite{Coleman1973:rcssb,Migdal:1973si,Adler:1982jr,Dittrich:1985yb}. The reason is that in this case coordinates and momenta are related trivially, $p\sim 1/r$, and the background metric is fixed and typically flat. In the context of gravity this procedure is less controlled, since the definition of any diffeomorphism-invariant (i.e., scalar) quantity requires a metric. Classical spacetimes are however singular, and their metric cannot be trusted in the proximity of the would-be singularities. Moreover, the Newton coupling itself, which in a standard RG improvement procedure is supposed to be replaced with its running counterpart, is part of the metric needed for the definition of the map $k(x)$. Finally, when a one-step RG improvement is performed at the level of the solutions, this induces a change in the effective Einstein equations and a change in the spacetime metric, which in turn would lead to a different map $k(x)$. This suggests that the application of the RG improvement in gravity requires taking backreaction effects into account and determining the effective metric self-consistently, e.g., via the iterative procedure devised in~\cite{Platania:2019kyx}.
\end{itemize}
Summarizing, the RG improvement was originally motivated by the decoupling mechanism, and the scale identification $k(x)$ was meant to act as a short-cut to determine the effective action. Yet, in most works on RG improved spacetimes the function $k(x)$ has been fixed based on physical intuition only, accounting neither for the consistency constraints stemming from Bianchi identities (aside from a few examples, e.g.~\cite{Reuter:2003ca,Babic:2004ev,Domazet:2010bk,Koch:2010nn,Domazet:2012tw,Koch:2014joa}), nor for the decoupling mechanism. In this work we will for the first time exploit the decoupling mechanism to derive the function $k(x)$ from the decoupling condition~\eqref{eq:dec-condition} and to determine the dynamics of quantum-corrected black holes.
\section{Modified spacetimes from the decoupling mechanism: setup}\label{sect:setup}
In this section we derive the differential equations describing the evolution of the metric of a spherically-symmetric, asymptotically flat black hole spacetime, including quantum corrections computed by exploiting the gravitational beta functions and the decoupling mechanism.
\subsection{Generalized Vaidya spacetimes} \label{sec: classical Vaidya spacetimes}
One of the key lessons of Einstein's GR is the formation of black holes from the gravitational collapse of matter and radiation. Scenarios for the collapse of a sufficiently massive object have been developed and remain the subject of debate in relation to the cosmic censorship conjecture~\cite{Penrose:1969pc}. In its weak form, the conjecture posits that the maximal Cauchy development possesses a complete future null infinity for generic initial data. In other words, an event horizon should exist which prevents an observer at future null infinity from seeing the singularity. The conjecture is however known to be violated in various models for the gravitational collapse. In particular, well-known classical models which violate this conjecture are the Tolman-Bondi spacetime for the spherical collapse of dust clouds~\cite{Tolman:1934za,Bondi:1947fta,Eardley:1978tr} or the imploding Vaidya spacetime describing the spherical collapse of radiation~\cite{Vaidya:1951zza,Vaidya1966AnAS}. In the latter case the singularity appears when the ingoing radiation hits a chosen spacetime point---typically the origin of the given coordinate system. In this classical model the singularity is initially naked provided that the rate of concentration of the radiation is sufficiently low~\cite{Kuroda:1984}. In the following we will introduce these Vaidya spacetimes as well as an important generalization of the corresponding class of metrics that will be key in our construction.
The classical imploding Vaidya solution in advanced Eddington-Finkelstein coordinates reads~\cite{Vaidya:1951zza,Vaidya1966AnAS}
\begin{equation}\label{eq: Vaidya metric classical}
\dd{s^2} = -f(r,v)\dd{v^2} + 2 \dd{v}\dd{r} + r^2\dd{\Omega^2} \,,
\end{equation}
with the lapse function
\begin{equation}\label{eq: lapse function classical}
f(r,v) = 1-\frac{2 G_0 m(v)}{r}\,,
\end{equation}
where $G_0$ denotes the observed value of the Newton coupling. The mass function $m(v)$ depends on the advanced time coordinate and can be used to model a gravitational collapse or evaporation. The metric~\eqref{eq: Vaidya metric classical} with lapse function~\eqref{eq: lapse function classical} is an exact solution to the Einstein equations with vanishing cosmological constant,
\begin{equation}\label{eq: field equations}
G_{\mu\nu} = 8\pi G_0 T_{\mu\nu}\,,
\end{equation}
and an energy-momentum tensor corresponding to a pressureless perfect fluid~\cite{Vaidya:1951zza,Wang:1998qx},
\begin{equation}\label{eq: energy-momentum tensor Vaidya}
T_{\mu\nu} = \mu \,u_\mu u_\nu \,.
\end{equation}
Here $u^\mu$ is the fluid's four-velocity and
\begin{equation}\label{eq: energy density classical}
\mu = \frac{\dot{m}(v)}{4\pi G_0\, r^2}
\end{equation}
is its energy density. A dot denotes differentiation with respect to the advanced time. More generally, one could consider a generalized mass function depending both on the advanced time $v$, as well as on the radial coordinate~$r$. The resulting generalized Vaidya spacetime~\cite{Wang:1998qx} is described by a metric of the form~\eqref{eq: Vaidya metric classical} with lapse function
\begin{equation}\label{eq: lapse function}
f(r,v)=1-\frac{2M(r,v)}{r}\,,
\end{equation}
where the Newton constant $G_0$ is now absorbed in the generalized mass function $M(r,v)$.
These spacetimes are solutions to the classical Einstein equations~\eqref{eq: field equations} with an effective energy momentum tensor
\begin{equation}\label{eq: energy-momentum tensor for generalized Vaidya}
T_{\mu\nu} = \mu \,l_\mu l_\nu + \qty(\rho + p)\qty(l_\mu n_\nu+l_\nu n_\mu) + p g_{\mu\nu}\,,
\end{equation}
where the two null vectors $l_\mu$ and $n_\mu$ satisfy $l_\mu n^\mu = -1$. The functions $\mu $ and $\rho$ are the two contributions to the energy density associated with the first advanced time and radial derivatives of the generalized mass function $M(r,v)$, while $p$ is the classical pressure computed from its second derivative,
\begin{equation}\label{eq:rhomupclass}
\mu = \frac{\dot{M}(r,v)}{4\pi G_{0} r^2}\,, \quad\quad \rho = \frac{M'(r,v)}{4\pi G_{0} r^2}\,, \quad\quad p = -\frac{M''(r,v)}{8\pi G_{0} r}\,.
\end{equation}
In the previous definition, dots and primes denote derivatives with respect to~$v$ and~$r$, respectively.
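As a simple consistency check, for a constant mass function all derivatives in Eq.~\eqref{eq:rhomupclass} vanish and the vacuum Schwarzschild solution is recovered,
\begin{equation}
M(r,v)\equiv G_0\, m \qquad\Longrightarrow\qquad \mu=\rho=p=0\,,\qquad f(r,v)=1-\frac{2\,G_0\, m}{r}\,.
\end{equation}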
Generalized Vaidya spacetimes can be used to model deviations from the Schwarzschild solution. In the next subsection we will make use of these dynamical solutions to describe the collapse of black holes in the presence of quantum-gravitational fluctuations. We will take into account both the backreaction generated by the modifications induced on the spacetime by quantum effects~\cite{Platania:2019kyx}, as well as the dynamical evolution of the spacetime triggered by a time-varying mass function $m(v)$. To this end, we will combine the techniques developed in~\cite{Platania:2019kyx}, which we review below, with the ideas in~\cite{Bonanno:2006eu,Bonanno:2016dyv,Bonanno:2017zen,Bonanno:2017kta}, and with the decoupling mechanism~\cite{Reuter:2003ca,Platania:2020lqb}.
\subsection{Determining the decoupling scale}
The aim of this subsection is to determine the decoupling scale $k_{dec}$ at which the RG scale dependent effective action $\Gamma_k$ approximates the full effective action. To this end, one first needs to derive the Hessian $\Gamma_k^{(2)}$ and its regularized version.
Within the Einstein-Hilbert truncation the effective average action is given by
\begin{equation}\label{eq: effective action}
\Gamma_k = \int \dd[d]x \sqrt{g}\qty(\frac{1}{16\pi G_k}\qty(2\Lambda_k-R) + \mathcal{L}_m) \,,
\end{equation}
where $G_k$ and $\Lambda_k$ are the Newton coupling and cosmological constant, $d$ is the number of spacetime dimensions, and $\mathcal{L}_m$ is a matter Lagrangian. In our case, since one of our main goals is to describe the quantum-corrected gravitational collapse, we limit ourselves to the Lagrangian of a perfect fluid with energy-momentum tensor~\eqref{eq: energy-momentum tensor Vaidya}, which reads~\cite{Ray:1972}
\begin{equation}\label{eq: matter Lagrangian}
\mathcal{L}_m = \mu(r,v) \,,
\end{equation}
$\mu(r,v)$ being the energy density of the pressureless fluid as defined in~\eqref{eq: energy density classical}. The quadratic part of the action is constructed by writing the metric as $g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}$, where $\bar{g}$ is the background metric and $h$ describes fluctuations about this background, and by expanding the action about $\bar{g}$ up to quadratic order in $h$. The metric fluctuations are split as $h_{\mu\nu}=h_{\mu\nu}^{TL}+d^{-1}\bar{g}_{\mu\nu}\phi$, where $\phi\equiv\bar{g}^{\mu\nu}h_{\mu\nu}$ is the trace part of the metric and $\bar{g}^{\mu\nu}h_{\mu\nu}^{TL}=0$ expresses the orthogonality condition of the traceless mode $h_{\mu\nu}^{TL}$. Further restricting the background to a maximally symmetric spacetime\footnote{This choice is not ideal for a generic black hole background. However, it significantly simplifies the expressions and we do not expect it to impact the qualitative aspects of our results. This expectation comes from two independent considerations. First, within the model of gravitational collapse that we will employ, the spacetime is initially a flat Minkowski background. Thus, a maximally symmetric background is a consistent choice for the early stages of the gravitational collapse. Thereafter, deviations from a maximally symmetric background ought to be automatically encoded in the dynamical adjustment of all physical energy scales and equations involved.
Secondly, we checked that in $d=4$ the corrections induced by a generic Vaidya background would only change the numerical prefactors of our expressions---at least to leading order in the radial coordinate, in the two opposite regions $r\ll l_{Pl}$ and $r\gg l_{Pl}$.} and using a harmonic gauge fixing, with
\begin{equation}
\Gamma_{gf}=\frac{1}{2} \int \dd[d]x \sqrt{g}\frac{1}{16\pi G_k} \qty[\bar{g}^{\mu\nu}\qty(\bar{D}^\sigma h_{\mu\sigma}-\frac{1}{2}\bar{D}_\mu \bar{g}^{\alpha\beta}{h}_{\alpha\beta})\qty(\bar{D}^\rho h_{\nu\rho}-\frac{1}{2}\bar{D}_\nu \bar{g}^{\alpha\beta}{h}_{\alpha\beta})]\,,
\end{equation}
the regularized Hessian becomes diagonal in field space. Its elements in the traceless, trace, and Faddeev–Popov ghost sectors~\cite{Reuter:2019byg} are
\begin{equation}
\begin{aligned}
& \left.\Gamma_k^{(2)}+\mathcal{R}_k\right|_{h}={G_k^{-1}}\left( \Box+k^2 r_k(\Box/k^2)-2\lambda_k+C_T \bar{R}+\mu G_k \right)\,, \\
& \left.\Gamma_k^{(2)}+\mathcal{R}_k\right|_{\phi}=-\frac{d-2}{2d}{G_k^{-1}}\left(\Box+k^2 r_k(\Box/k^2)-2\lambda_k+C_S \bar{R}+\mu G_k \right)\,,\\
& \left.\Gamma_k^{(2)}+\mathcal{R}_k\right|_{gh}={G_k^{-1}}\left( \Box+k^2 r_k(\Box/k^2)-2\lambda_k+C_V \bar{R} \right)\,,
\end{aligned}
\end{equation}
where $\Box\equiv-\bar{D}^2$ is the d'Alembertian operator built with background covariant derivatives, $\bar{R}$ is the background Ricci scalar, $r_k$ is the dimensionless version of the regulator $\mathcal{R}_k$, and
\begin{equation}
C_T=\frac{d(d-3)+4}{d(d-1)}\,,\qquad C_S=\frac{d-4}{d}\,,\qquad C_V=-\frac{1}{d}\,.
\end{equation}
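For later reference, in $d=4$ these coefficients evaluate to
\begin{equation}
C_T=\frac{2}{3}\,,\qquad C_S=0\,,\qquad C_V=-\frac{1}{4}\,,
\end{equation}
so that $C_T$ is the largest of the three in absolute value.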
As we are interested in asymptotically flat spacetimes, in the following we shall neglect the contribution from the cosmological constant. At this point, the decoupling condition~\eqref{eq:dec-condition} reads
\begin{equation}\label{eq:decoupl-cond}
G_{k_{dec}}^{-1}(\gamma \bar{R} +G_{k_{dec}} \mu-k^2_{dec} r_{k_{dec}})=0 \,,
\end{equation}
where $\gamma\equiv \max\{|C_T|,|C_V|,|C_S|\}$, and $\gamma=2/3$ for $d=4$. In particular, to determine the exact form of the decoupling scale a simple mass-type regulator $\mathcal{R}_k\simeq k^2$ suffices. This is tantamount to setting $r_k=1$, so that the decoupling condition reads
\begin{equation}\label{eq:dec-cond}
k_{dec}^2\equiv G_{k_{dec}} \mu +\gamma \bar{R} \,.
\end{equation}
Accounting for the decoupling condition at the level of the action, in combination with the expression~\eqref{eq: G running} for the running Newton coupling~$G_k$, would thus yield an effective action
\begin{equation}
\Gamma_0\approx \Gamma_{k=k_{dec}}=\int d^4 x \sqrt{g} \left(-\frac{1+\mu\, \omega \,G_0^2}{16\pi G_0}\,\bar{R}-\frac{\gamma \omega (1-\mu \,\omega\, G_0^2)}{16\pi}\,\bar{R}^2 + \mathcal{O}(\bar{R}^3) \right) \,,
\end{equation}
where we have expanded all terms in a curvature expansion and linearized the final expression with respect to the energy density $\mu$. The resulting effective action contains some of the higher-derivative terms in~\eqref{eq:eff-action}, as expected, as well as a non-minimal coupling with matter, encoded in the terms $\mu \bar{R}$ and $\mu \bar{R}^2$.
We thus expect that the dynamical spacetimes stemming from the implementation of the decoupling mechanism at the level of the solutions---which are the focus of our work---will reflect the presence of the higher-derivative operators and of the non-minimal coupling~$\mu R$. As we will see, our findings are consistent with this expectation. Note that this is non-trivial: in past applications of the RG improvement, accounting neither for the backreaction effects of~\cite{Platania:2019kyx} nor for the decoupling condition, the implementation of the replacement $k\mapsto k(x)$ at the level of the action or at the level of the solutions would generally yield different results.
Before proceeding, we briefly comment on the possibility to further constrain the function $k(x)$ by requiring the validity of the Bianchi identities~\cite{Reuter:2003ca,Babic:2004ev,Domazet:2010bk,Koch:2010nn,Domazet:2012tw,Koch:2014joa}.
The simplest starting point is the RG scale dependent version of the Einstein-Hilbert action, Eq.~\eqref{eq: effective action}. If the stress-energy tensor for the matter is separately conserved, the Bianchi identities impose a consistency condition on the function~$k(x)$. The specific form of the modified Bianchi identities relies on whether one makes the replacement $k\mapsto k(x)$ at the level of the action or at the level of the field equations (see~\cite{Platania:2020lqb} for details). If the scale dependence is first introduced at the level of the action, this condition reads~\cite{Reuter:2003ca}
\begin{equation}\label{eq: Bianchi condition}
2 G(k) \Lambda^\prime(k) + G^\prime(k)\qty(R-2\Lambda(k)) = 0 \,,
\end{equation}
where primes denote differentiation with respect to $k$. The requirement~\eqref{eq: Bianchi condition} expresses the fact that the sum of the effective energy-momentum tensor introduced by the coordinate dependence of the Newton coupling and the cosmological constant term should be conserved to guarantee consistency with the covariant conservation of the Einstein tensor. Such a requirement turns out to be redundant in our case, since the effective spacetimes are solutions to field equations of the form~\eqref{eq: field equations} which are found self-consistently.
We thus conclude that in our case the consistency conditions~\cite{Reuter:2003ca,Babic:2004ev,Domazet:2010bk,Koch:2010nn,Domazet:2012tw,Koch:2014joa} are automatically satisfied and therefore do not add additional constraints.
\subsection{Effective dynamics from the decoupling mechanism}\label{sect:effeqs-sol}
In this subsection we derive the equations governing the effective dynamics of a spherically symmetric black hole by combining the decoupling mechanism with the iterative procedure devised in~\cite{Platania:2019kyx}. The latter replaces the standard RG improvement at the level of the solutions with a self-consistent approach accounting for the backreaction effects generated by the introduction of quantum effects on dynamical spacetimes. We will first review this procedure and will subsequently combine it with the decoupling mechanism to derive quantum-corrected spacetimes of the Vaidya type and to study their dynamics.
The starting point is the classical (static) Schwarzschild spacetime with lapse function given by~\eqref{eq: lapse function classical}, where $m(v)\equiv m$ is the mass of the black hole as measured by an observer at infinity. While the exterior Schwarzschild metric is a solution to the vacuum Einstein equations, a non-zero effective energy-momentum tensor $T_{\mu\nu}$ is expected to be present on the right-hand side of the field equations~\eqref{eq: field equations}. The latter can arise in the presence of (quantum) matter, or via quantum-gravitational effects in the form of higher derivatives in the gravitational effective action, cf. Eq.~\eqref{eq:eff-action}.
Due to these additional terms, the metric of a static spherically-symmetric black hole is expected to be modified with respect to the classical case.
Assuming that the time and radial components of the metric are inversely related, $g_{rr}=g_{tt}^{-1}$, as is the case for Schwarzschild black holes, the action of quantum effects can be encoded in the radial dependence of an effective Newton coupling $G(r)$. The radial dependence is introduced via the replacement $G_0 \rightarrow G[k(r)]$, and leads to an effective metric of the form
\begin{equation}\label{eq: RG step 1}
f(r) = 1- \frac{2 m G[k(r)]}{r}\,,
\end{equation}
where $k(r)$ is the map between the RG scale $k$ and the radial coordinate $r$, and is initially constructed by means of the classical metric. The spacetime~\eqref{eq: RG step 1} describes an exact solution to the Einstein equations in the presence of a generalized effective energy-momentum tensor $T_{\mu\nu}^{\text{eff}}$ with energy density $\rho_{\text{eff}} \propto \partial_r G$ and pressure $p_{\text{eff}} \propto \partial_{r}^2G$~\cite{Wang:1998qx,Platania:2019kyx}. This effective energy-momentum tensor has the role of mimicking the higher-derivative terms in the full quantum effective action~\eqref{eq:eff-action}.
Yet, in a gravitational context the simple replacement $k\to k(x)$ is not expected to yield a good approximation to the effective field equations~\cite{Platania:2019kyx} since: \emph{(i)} the scalar quantity~$k(r)$ (e.g., the proper distance, or a curvature invariant) is necessarily built on the original Schwarzschild metric which fails to give an accurate description of the spacetime in the region of interest, i.e., where quantum gravity effects are important, \emph{(ii)} the new metric~\eqref{eq: RG step 1} is no longer a solution to the vacuum field equations and this backreaction effect might in turn impact the spacetime metric, and \emph{(iii)} a new scale $k(r)$ built with~\eqref{eq: RG step 1} will not match the function $k(r)$ constructed using the Schwarzschild metric.
This points to the conclusion that in gravity backreaction effects induced
by the replacement $k\to k(r)$ have to be taken into account. This can be done iteratively, until a self-consistent solution is reached. In other words, one should iteratively apply the RG improvement until the scale $k_n(r)$ used to define the lapse function $f_{n+1}(r)$ matches the decoupling scale $k_{n+1}$ constructed using the metric $g_{\mu\nu}^{(n+1)}$ at the step $n+1$.
The iterative procedure is implemented by defining the lapse function $f_n(r)$ at a step $n>0$ as
\begin{equation}
f_{n}(r) = 1- \frac{2 m G_{n}[k_{n-1}(r)]}{r} \, ,
\end{equation}
i.e., in terms of a scale $k_{n-1}(r)$ constructed by means of the metric $g_{\mu\nu}^{(n-1)}$ at the step $n-1$. In general, this will be a function of the first and second derivatives of the effective Newton coupling $G_{n-1}(r)$. If the sequence $\{G_n\}$ defined in this way converges, the fixed function $G_\infty(r)$ satisfies a differential equation which is fully determined by the functional form of the scale $k_n(r)$. Specifically, based on the RG running~\eqref{eq: G running}, this yields the differential equation
\begin{equation}\label{eq: G infinity}
G_\infty(r) = \frac{G_0}{1+\omega\, G_0 k_\infty ^2(r)}\,,
\end{equation}
with $k_\infty ^2$ depending on $G_\infty$ and its derivatives.
In~\cite{Platania:2019kyx} the scale was fixed to be $k^2 \propto \rho$, giving rise to an analytically solvable first-order ordinary differential equation for $G_\infty$. Its solution is given by
\begin{equation}
G_\infty(r) = G_0\qty(1- e^{-\frac{r^3}{l^3}})\,,
\end{equation}
where $l$ is a length scale of the order of the Planck length $l_P$. As a key result, the limit of the sequence of metrics is described by a Dymnikova black hole~\cite{Dymnikova:1992ux} with a regular de Sitter core.
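Expanding this solution for $r\ll l$ makes the regularity of the core explicit: since $G_\infty(r)\approx G_0\, r^3/l^3$, the lapse function behaves as
\begin{equation}
f(r)=1-\frac{2\,m\,G_\infty(r)}{r}\approx 1-\frac{2\,m\,G_0}{l^3}\,r^2\,,
\end{equation}
i.e., the would-be singularity is replaced by a de Sitter core with effective cosmological constant $\Lambda_{\text{eff}}=6\,m\,G_0/l^3$.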
We now proceed by generalizing the framework in~\cite{Platania:2019kyx} to the generalized Vaidya spacetimes of Sect.~\ref{sec: classical Vaidya spacetimes}, including a general dynamical mass $m(v)$ in the lapse function~\eqref{eq: lapse function}. This has the ultimate goal of describing the dynamics of quantum black holes from formation to evaporation. In this construction, the scale $k(x)$ will be derived by explicit use of the decoupling mechanism, as this is key to connecting the RG scale dependent description~\eqref{eq:EAA-quadratic} with the physics of the effective action~\eqref{eq:eff-action}. Specifically, $k(r)$ should be equated to the decoupling scale $k_{dec}$ in Eq.~\eqref{eq:dec-cond} in order for the metric~\eqref{eq: RG step 1} to be an approximate solution to the field equations stemming from an effective action of the type~\eqref{eq:eff-action}.
In the case of dynamical spacetimes, one can set up the iterative procedure by using the lapse function
\begin{equation}\label{eq: lapse functions RG-iterated}
f_{n}(r,v) = 1-\frac{2 M_{n}(r,v)}{r} \, .
\end{equation}
Here the generalized mass function $M_{n}(r,v)=m(v)\,G_{n}[k_{n-1}(r,v)]$ is defined by the classical mass function and the running Newton coupling at the step $n$ of the iteration. The metric defined by the lapse function~\eqref{eq: lapse functions RG-iterated} belongs to the class of generalized Vaidya spacetimes~\cite{Wang:1998qx} introduced in Sect.~\ref{sec: classical Vaidya spacetimes}. The corresponding metric satisfies the effective Einstein equations
\begin{equation}\label{eq: RG-improved field equations}
G_{\mu\nu} ^{n} = 8 \pi G_n T_{\mu\nu} ^{n}\,,
\end{equation}
where the effective energy-momentum tensor takes the form~\eqref{eq: energy-momentum tensor for generalized Vaidya} with energy densities and pressure redefined as
\begin{equation}\label{eq: energy densities and pressure}
\mu_{n} = \frac{\dot{M}_{n}(r,v)}{4\pi G_{n}(r,v) r^2}\,, \quad\quad \rho_{n} = \frac{M' _{n}(r,v)}{4\pi G_{n}(r,v) r^2}\,, \quad\quad p_{n} = -\frac{M'' _{n}(r,v)}{8\pi G_{n}(r,v) r}\,.
\end{equation}
The effective Newton coupling $G_{n}$ will itself depend on the self-adjusting cutoff $k_{n}$ which needs to be determined by the properties of the spacetime at the previous step of the iteration. In particular, the effective Newton coupling will generally depend on both the radial coordinate~$r$ and the advanced time $v$, $G_n=G_n(r,v)$; in the remainder of this section we will omit this dependence for brevity. Finally, in order to make contact with the FRG and determine solutions which approximate those stemming from the full effective action $\Gamma_0$, we shall fix~$k$ to be the decoupling scale $k_{dec}$. In particular, for a fully consistent implementation of the decoupling mechanism, the decoupling scale has to be built using the iterative procedure detailed above.
Setting $r_k=1$ as before and evaluating the decoupling condition~\eqref{eq:dec-cond} on-shell finally yields the definition of the decoupling scale at the step $n+1$,
\begin{equation}\label{eq: cutoff identification}
k_{n+1} ^2 = G_n\qty(\mu_n + \gamma 16 \pi (\rho_n - p_n))\,,
\end{equation}
where we have dropped the label ``dec'' from the decoupling scale and we have written the background Ricci scalar (for metrics of type~\eqref{eq: lapse functions RG-iterated}) in terms of the generalized energy density and pressure~\eqref{eq: energy densities and pressure}, according to
\begin{equation}\label{eq: Ricci scalar}
R = 4 \frac{M' _{n} (r,v)}{r^2} + 2 \frac{M'' _{n} (r,v)}{r} = 16 \pi G_{n}( \rho_{n} - p_{n})\,.
\end{equation}
As will be important later, we note at this point that the high-energy regime $k\gg m_{Pl}$, where the flow is close to the UV fixed point $g_\ast$ of the dimensionless Newton coupling, corresponds to the large-curvature regime. For a spherically symmetric black hole spacetime this means that the UV fixed point regime corresponds to the region close to the classical singularity, while the IR corresponds to large radii.
Finally, taking the limit $n\to \infty$, the dynamical equation for the effective gravitational coupling becomes %
\begin{equation}\label{eq: G infinity differential equation}
G_\infty = \frac{G_0}{1+ G_0 \omega G_\infty \qty(\mu_\infty + \frac{2}{3} 16 \pi \qty(\rho_\infty - p_\infty))}\,,
\end{equation}
where $\mu_\infty$, $\rho_\infty$ and $p_\infty$ are defined by~\eqref{eq: energy densities and pressure}, with $M_\infty(r,v) \equiv G_\infty(r,v)\, m(v)$, i.e.,
\begin{equation}\label{eq:explicitdependence}
\mu_{\infty} = \frac{\dot{m}(v)}{4\pi r^2}+\frac{m(v)\dot{G}_{\infty}}{4\pi G_{\infty} r^2}\,, \quad\quad \rho_{\infty} = \frac{m(v)\,G' _{\infty}}{4\pi G_{\infty} r^2}\,, \quad\quad p_{\infty} = -\frac{m(v)\,G'' _{\infty}}{8\pi G_{\infty} r}\,.
\end{equation}
Inserting these expressions for the energy densities and pressure in terms of the effective Newton coupling and mass~$m(v)$ in Eq.~\eqref{eq: G infinity differential equation}, we obtain the following second-order non-linear partial differential equation for $G_\infty(r,v)$
\begin{equation}\label{eq: G infinity differential equation rough}
\qty(G_0 \omega m \qty(16\pi r G_\infty '' + 32 \pi G_\infty ' + 3 \dot{G}_\infty) + 3 G_0 \omega \dot{m}G_\infty + 12\pi r^2)G_\infty - 12 G_0 \pi r^2 = 0\,,
\end{equation}
where the classical Vaidya mass function $m(v)$ is still to be specified. The partial differential equation~\eqref{eq: G infinity differential equation rough} is our first main result and will be used to determine the dynamics underlying the quantum-corrected gravitational collapse and black hole evaporation. Specifically, once a solution to Eq.~\eqref{eq: G infinity differential equation rough} is found, the resulting spacetime metric takes the form of a generalized Vaidya spacetime with lapse function
\begin{equation}
f_\infty(r,v)=1-\frac{2\,m(v)\,G_{\infty}(r,v)}{r}\,.
\end{equation}
We will use the framework introduced in this section to study the effective metric in different regimes, from formation to evaporation.
\section{Dynamics of the collapse process}\label{sect:collapse}
As a result of the decoupling mechanism, the dynamics of the effective Newton coupling~$G(r,v)$ is governed by Eq.~\eqref{eq: G infinity differential equation rough} where it remains to specify the Vaidya mass function~$m(v)$. In this work we will use one of the simplest models for the gravitational collapse of a massive star, known as Vaidya-Kuroda-Papapetrou (VKP) model~\cite{Vaidya1966AnAS,Kuroda:1984,Papapetrou:1985}. The same model was considered in~\cite{Bonanno:2016dyv,Bonanno:2017kta,Bonanno:2017zen} to study the quantum-corrected collapse based on a one-step RG improvement not accounting for the decoupling mechanism.
We will present two distinct analytical results showing the expected functional dependence of the effective Newton coupling at early times and for small values of the radial coordinate. Finally, by using these solutions as boundary conditions, together with the requirement of matching the observed value of the Newton constant at large distances and early times, we will provide a complete numerical solution to the partial differential equation~\eqref{eq: G infinity differential equation rough}. Non-trivial corrections to the classical black hole spacetime, which describe the outcome of the gravitational collapse, will be the subject of Sect.~\ref{sect:Solutionsstatic}, while the evaporation will be described separately in Sect.~\ref{sect:evaporation}.
\subsection{Vaidya-Kuroda-Papapetrou collapse model}\label{subsec: VKP spacetime}
The VKP spacetime~\cite{Vaidya1966AnAS,Kuroda:1984,Papapetrou:1985} is a simplified model for the gravitational collapse of a massive star. Its geometry is characterized by a linear mass function,
\begin{equation}\label{eq: mass function}
m(v) = \begin{cases}
0, & v \leq 0\,; \\
\lambda v, & 0 < v < \overline{v}\,; \\
m, & v\geq \overline{v}\,,
\end{cases}
\end{equation}
as shown in Fig.~\ref{fig: mass function VKP}. While for advanced times $v \leq 0$ the spacetime is a flat Minkowski vacuum, at $v=0$ shells of ingoing radiation originating from the star are injected and concentrated towards the origin, $r=0$. The linear increase in mass at the rate $\lambda$ stops at $v=\overline{v}$, when the object settles down to the static classical Schwarzschild spacetime with mass $m$. Historically, the VKP model was one of the first counterexamples to the cosmic censorship conjecture~\cite{Kuroda:1984}.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{img-massfunction-VKP.pdf}
\caption{\label{fig: mass function VKP} Mass function of the classical VKP spacetime as given in Eq.~\eqref{eq: mass function}. The spacetime is initially flat. At $v=0$ the gravitational collapse starts and the mass $m(v)$ increases linearly with an injection rate $\lambda$. The collapse lasts until $v=\bar{v}$, where the mass function $m(v)$ reaches the plateau $m(v)=m$, $m$ denoting the final mass of the black hole.}
\end{figure}
\subsection{Identifying possible boundary conditions}
In this section we determine the solutions to the dynamical equation~\eqref{eq: G infinity differential equation rough} in two asymptotic regimes, where this equation can be solved analytically. This will provide us with the boundary conditions to solve the full dynamics numerically.
\subsubsection{Dynamics at early times}\label{subsec:ApprVdependence}
The radial dependence of the effective Newton coupling at early times $v\ll \bar{v}$ is dictated by the differential equation~\eqref{eq: G infinity differential equation rough}, with the dominant contribution stemming from the energy density $\mu_\infty$. Indeed, for $v\ll \bar{v}$ the spacetime is approximately Minkowski and thus the radial derivatives of the Newton coupling---defining $\rho_\infty$ and $p_\infty$---are approximately zero. Moreover, since during the collapse $m(v)$ is modeled as a power of the advanced time, $m(v)\sim v^n$ (with exponent $n=1$ in our case), it further suppresses $\rho_\infty$ and $p_\infty$ at early times, cf. Eq.~\eqref{eq:explicitdependence}. In contrast, $m(v)$ enters $\mu_\infty$ via its advanced time derivative, and since in our case this derivative is constant, $\dot{m}=\lambda$, its contribution to $\mu_\infty$ will dominate over all terms in $\rho_\infty$ and~$p_\infty$. As a consequence, at early times the effective Newton coupling has only a very weak dependence on the advanced time~$v$, which even drops out if $\rho_\infty$ and~$p_\infty$ are neglected.
Dropping the $\rho_\infty$ and $p_\infty$ contributions from Eq.~\eqref{eq: G infinity differential equation}, the effective Newton coupling reduces to a function of the radial coordinate only, $G_\infty = G_\infty(r)$, and obeys the equation
\begin{equation}\label{eq: G infinity differential equation - approximate}
\qty(3 G_0 \,\omega \, \lambda\,G_\infty(r) + 12\pi r^2)G_\infty(r) - 12 G_0 \pi r^2 = 0\,.
\end{equation}
The non-negative root of the previous quadratic equation reads
\begin{equation}
G_\infty(r) = \frac{2}{G_0 \lambda \omega}\qty(-\pi r^2 + \sqrt{\pi^2 r^4 + {G_0}^2 \lambda \, \omega \pi r^2})\,.
\end{equation}
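The two limits of this solution are easily extracted: at large radii one finds $G_\infty(r)\to G_0$, so that the observed Newton coupling is recovered far away from the collapsing region, while at small radii
\begin{equation}
G_\infty(r)\approx 2\sqrt{\frac{\pi}{\lambda\,\omega}}\;r\,,
\end{equation}
so that the effective coupling vanishes linearly at the origin and the combination $2\,m(v)\,G_\infty(r)/r$ entering the lapse function remains finite.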
The resulting metric is obtained by inserting the full solution above into the lapse function~\eqref{eq: lapse function}. At small radii, the corresponding Kretschmann scalar scales as
\begin{equation}
R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} \propto \frac{1}{r^4}\,.
\end{equation}
Compared to the classical curvature divergence $\propto r^{-6}$, the antiscreening of gravity stemming from the fixed point thus implies a weakening of the curvature singularity already at early times. Moreover, following our later analysis in Sect.~\ref{subsubsec: Solutions static fixed-point}, the divergence of the local curvature at the origin is expected to be further weakened during the collapse. In fact, the curvature for the static solutions at the end of the collapse settles down to a scaling $\propto r^{-3}$.
\subsubsection{Dynamics close to the classical singularity}\label{subsubsec:FPapprox}
In the following we shall consider solutions to the field equations in the region of spacetime close to the would-be singularity, i.e., for $r\ll l_{Pl}$. Our goal is to determine a boundary condition of the form $G_{\infty}(r_{min},v)=J(v)$, at a fixed $r_{min}\ll l_{Pl}$, for the numerical integration of the partial differential equation~\eqref{eq: G infinity differential equation rough}. In contrast to the case~$v\sim0$ studied in the previous subsection, the asymptotic analysis of Eq.~\eqref{eq: G infinity differential equation rough} for $r\sim0$ is extremely involved, and standard techniques based, e.g., on expansions in power laws, are not effective in this case. Yet, as we are only interested in finding a boundary condition $G_{\infty}(r_{min},v)=J(v)$ at a fixed $r_{min}\ll l_{Pl}$, we will utilize two complementary strategies, in combination with physical arguments, which we describe in the following.
In our first approach, we neglect the two terms proportional to $r^2$ in Eq.~\eqref{eq: G infinity differential equation rough}, as they are small for $r\sim0$, and dropping them significantly reduces the complexity of the equation. Separating the variables, the ansatz
\begin{equation}\label{eq: G infinity small r ansatz}
G_\infty(r,v) = G_0 \qty(1-H(v))F(r)
\end{equation}
further simplifies the remaining differential equation to
\begin{equation}\label{eq: F and H together}
3 F(r)(-1+H(v)+v H'(v)) + 16 \pi v (-1 + H(v))(2 F'(r)+r F''(r)) = 0\,,
\end{equation}
which can be rewritten as
\begin{equation}
\frac{2 F'(r) + rF''(r)}{F(r)} = -\frac{3}{16 \pi v}\frac{1-H(v)-vH'(v)}{1-H(v)} \equiv c_0\,.
\end{equation}
Here we have used the fact that the left-hand and right-hand sides of the equation can depend only on $r$ and $v$, respectively, and thus must be equal to a constant $c_0$. As a result we obtain two differential equations determining the functions $F$ and $H$,
\begin{equation}\label{eq: F and H}
\begin{aligned}
2 F'(r) + r F''(r) & = c_0 F(r)\, ,\\
1- v\frac{H'(v)}{1-H(v)} & = -\frac{16 \pi}{3}c_0 v\, .
\end{aligned}
\end{equation}
The solution for the radial function $F(r)$ is
\begin{equation}\label{eq:r-dependence-small-r}
F(r)=c_1+\frac{c_2}{r}\,,
\end{equation}
for $c_0=0$, while for $c_0\neq0$ it is given by
\begin{equation}
F(r)= c_1 \frac{I_1(2\sqrt{c_0}\sqrt{r})}{\sqrt{c_0}\sqrt{r}}+c_2 \frac{K_1(2\sqrt{c_0}\sqrt{r})}{\sqrt{c_0}\sqrt{r}}\,,
\end{equation}
where $I_n(x)$ and $K_n(x)$ are modified Bessel functions of the first and second kind, respectively, and $c_0>0$ in order for the solution to be real. In both solutions for $F(r)$, $c_1$ and $c_2$ are integration constants. The dependence on the advanced time is instead encoded in the function
\begin{equation}\label{eq: H(v)}
H(v) = 1 + b_1 \frac{e^{-\frac{16\pi}{3}c_0 v}}{v}\, ,
\end{equation}
where $b_1\equiv -v_0$ is an integration constant.
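One can verify by direct substitution that~\eqref{eq: H(v)} solves the second equation in~\eqref{eq: F and H}: writing $a\equiv\frac{16\pi}{3}c_0$, one has $1-H(v)=-b_1\, e^{-a v}/v$ and $v\,H'(v)=-b_1\, e^{-a v}\qty(a+1/v)$, so that
\begin{equation}
1-v\,\frac{H'(v)}{1-H(v)}=1-\qty(1+a\, v)=-\frac{16\pi}{3}c_0\, v\,,
\end{equation}
as required.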
Expanding the exponential function produces a term $\simeq 1/v$ to first order. At next order, $H(v)$ receives a constant contribution. In general, terms coming with an even power in the series expansion of the exponential give rise to positive odd powers of $v$ with a positive pre-factor in the overall expression for~$H(v)$. They would therefore yield positive contributions to the $v$-dependence of the effective Newton coupling and dominate at late times. As we expect the effective Newton coupling to be dynamically weakened during the collapse process, we set these terms to zero by the choice $c_0\equiv 0$ in~\eqref{eq: F and H}, i.e., we proceed by requiring
both summands in Eq.~\eqref{eq: F and H together} to vanish simultaneously. In this case the time dependence is encoded in Eq.~\eqref{eq: H(v)}, with $c_0=0$, while the function $F(r)$ is given by Eq.~\eqref{eq:r-dependence-small-r}, where the integration constant~$c_2$ must be zero, as otherwise the magnitude of the effective Newton coupling would diverge in the limit $r\to0$. This requirement follows from the existence of a fixed point of the RG flow, as in this case the RG scale dependent Newton coupling scales as $G_k\sim g_\ast k^{-2}$ in the UV, and vanishes in the high-energy limit. The anti-screening of the gravitational interaction, making gravity weaker in the UV, is the reason behind the expectation of singularity resolution in asymptotically safe gravity. Finally, the remaining integration constant of the function $F(r)$ is fixed to $c_1 = 1$, which guarantees that the ansatz~\eqref{eq: G infinity small r ansatz} is compatible with the observed value of the Newton constant at early times. In summary, for small radii $r$ and times $v>0$ the form of the effective Newton coupling can be approximated by
\begin{equation}\label{eq: dynamics in UV FP regime}
G_\infty(r,v) = G_0 \frac{v_0}{v}\,.
\end{equation}
This result indicates that the injection of radiation into an initially flat Minkowski spacetime, within the quantum-corrected VKP model, describes a highly non-perturbative process at early times and close to the would-be singularity, after which the strength of the coupling rapidly decreases with time.
A similar conclusion is also reached by employing an alternative strategy. We shall use once again the ansatz~\eqref{eq: G infinity small r ansatz}, and then proceed by replacing it in the full partial differential equation~\eqref{eq: G infinity differential equation rough} (including the last two terms proportional to $r^2$) and by expanding about $r=0$ up to linear order in the radial coordinate. This procedure yields a differential equation for the function $H(v)$, whose solution reads
\begin{equation}\label{eq:sol-uv-fp-alternative}
H(v)=1+{b_1}\frac{e^{-\frac{16\pi}{3}c_0 v}}{v}\,,
\end{equation}
where we have defined
\begin{equation}\label{eq:expre-c0}
c_0=\frac{\left(r\,F(0) F''(0)+2 \,r\, F'(0)^2+2 F(0) F'(0)\right)}{3 F(0) \left(2\, r\, F'(0)+F(0)\right)}\,.
\end{equation}
On the one hand, Eq.~\eqref{eq:sol-uv-fp-alternative} resembles the solution in Eq.~\eqref{eq: H(v)}. On the other hand, its explicit dependence on the radial coordinate $r$---encoded in the expression~\eqref{eq:expre-c0} for $c_0$---shows that a simple separation of variables of the form~\eqref{eq: G infinity small r ansatz}, while generally successful in studying the asymptotics of differential equations, is not effective in our case and leads to contradictions. After all, as already mentioned, the exponent $\alpha$ governing the leading-order scaling of $G_\infty \sim r^\alpha$ for $r\sim 0$ is expected to be a function of the advanced time $v$. The asymptotics~\eqref{eq: dynamics in UV FP regime} is thus not expected to be accurate, and the absence of an $r$ dependence in Eq.~\eqref{eq: dynamics in UV FP regime} should not come as a surprise. Specifically, a weak dependence on the radial coordinate at small radii, one that makes the effective Newton coupling vanish at $r=0$, is expected on physical grounds.
Despite these issues, as the two derivations presented above yield the same $v$ dependence at a fixed spatial slice, and since the aim of this subsection is solely to find a second, reasonable input for the numerical integration, we will assume that Eq.~\eqref{eq: dynamics in UV FP regime} provides a consistent boundary condition at $r=r_{min}\ll l_{Pl}$, and we will use it as an input for the numerical integration. Whether this assumption is consistent can then be verified a posteriori, based on the outcome of the numerical integration. In particular, the $r$ dependence ought to be restored in the full solution. We anticipate here that the numerical solution will be compatible with this expectation, and specifically with an effective Newton coupling that vanishes in the limit $r\to 0$.
\subsection{Full numerical solution} \label{subsubsec:fullsol}
In this section we combine the previous results and provide a numerical solution to the partial differential equation~\eqref{eq: G infinity differential equation rough} for the VKP model. The numerical integration will be performed in the region $(v,r) \in \qty[v_0, \overline{v}] \cross \qty[r_{min},r_{max}]$. Hereby $v_0\ll1$ and $\overline{v}$ denote the start and end time, respectively, for the numerical integration of the equations along the advanced time direction. Similarly, $r_{min}$ and $r_{max}$ are the integration boundaries for the radial coordinate. In particular, for the numerical integration we fixed $v_0/t_{Pl}=0.01$, $\bar{v}/t_{Pl}=1$, $r_{min}/l_{Pl}=10^{-4}$, and $r_{max}/l_{Pl}=50$. Moreover, we set the parameters~$\lambda$ and~$\omega$ to one and we chose the mass $m$ of the black hole to be Planckian, $m/m_{Pl}=1$. In general, in the collapse model introduced in Sect.~\ref{subsec: VKP spacetime}, the infusion rate $\lambda$ and the duration of the collapse~$\bar{v}$ determine the mass~$m$ of the configuration at the end of the collapse. Consequently, different choices of $m$ will have an impact on the properties of the final static object, as we shall see in Sect.~\ref{sect:Solutionsstatic}.
In Eq.~\eqref{eq: G infinity differential equation rough} the time derivative and second spatial derivative occur with the same sign on the left-hand side of the partial differential equation, resulting in a structure reminiscent of negative diffusion. It is well known that the numerical analysis of differential equations of this type is very involved. We present here a numerical solution stemming from the following initial and boundary conditions. First, we require that the effective Newton coupling reduces to the observed value $G_0$ both at early times and at large distances. This results in the initial and boundary conditions $G_\infty(r,v_0) = G_\infty(r_{max},v) = G_0$. Secondly, we make use of the result~\eqref{eq: dynamics in UV FP regime}, which describes the dynamics in the proximity of the classical singularity, to fix the remaining boundary condition near the origin, at $r_{min}/l_{Pl} = 10^{-4}$. More explicitly, this boundary condition reads $G_\infty(r_{min},v) = G_0\, v_0/v$. Finally, according to our simplified model for the gravitational collapse, see Sect.~\ref{subsec: VKP spacetime}, we evolve the system until a final time $\overline{v}/t_{Pl} = 1$. Fig.~\ref{fig: numerical solution G infinity} shows the result of the numerical integration.
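To make the numerical setup concrete, the following Python fragment sketches one possible method-of-lines discretization implementing the initial and boundary conditions discussed above. It is meant purely as a structural illustration, not as the actual code behind Fig.~\ref{fig: numerical solution G infinity}: the function \texttt{rhs\_interior}, which should return $\partial_v G_\infty$ as dictated by Eq.~\eqref{eq: G infinity differential equation rough} on the interior grid points, is left as a placeholder, and all names and numerical choices are ours.
\begin{verbatim}
# Method-of-lines sketch (structural illustration only; Planck units).
import numpy as np
from scipy.integrate import solve_ivp

G0, v0, vbar = 1.0, 0.01, 1.0
r = np.geomspace(1e-4, 50.0, 400)        # radial grid from r_min to r_max

def rhs_interior(G, r):
    # Placeholder: insert here dG/dv obtained from the PDE for G_infinity.
    return np.zeros_like(G)

def rhs(v, G):
    dG = rhs_interior(G, r)
    dG[0] = -G0 * v0 / v**2              # enforces G(r_min, v) = G0 v0 / v
    dG[-1] = 0.0                         # enforces G(r_max, v) = G0
    return dG

G_init = G0 * np.ones_like(r)             # initial condition G(r, v0) = G0
sol = solve_ivp(rhs, (v0, vbar), G_init, method="LSODA", rtol=1e-8)
\end{verbatim}
Note that at $v=v_0$ the boundary data $G_0 v_0/v$ equals $G_0$, so the initial and boundary conditions are mutually consistent.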
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{img-G-full-pde-solution.pdf}
\caption{\label{fig: numerical solution G infinity}Numerical solution to the partial differential equation~\eqref{eq: G infinity differential equation} for the effective Newton coupling $G_\infty$. We use as initial condition at early times and boundary condition in the IR the observed value of the Newton constant, i.e.~$G_\infty(r,v_0) = G_\infty(r_{max},v) = G_0$. For the remaining boundary condition near the origin at $r_{min}$ we use the result~\eqref{eq: dynamics in UV FP regime} associated with the dynamics in the proximity of the would-be singularity. At early times for all $r$, as well as at large distances for all $v$, the effective Newton coupling reproduces the observed value of Newton's constant, $G_0$. When the collapse process starts, the effective Newton coupling decays as $\simeq v^{-1}$ until the shell-focusing is over. At the end of the collapse, the effective Newton coupling converges to a function which interpolates between $G_0$ (large-distance limit) and zero (for small radii). This function features in addition damped oscillations along the radial direction.
}
\end{figure}
Whereas at early times and large distances the effective Newton coupling is well approximated by its classical value, the situation is drastically different for small radii. As soon as the collapse process has started, the effective Newton coupling at
small distances from the radial center becomes weaker, thus providing a direct illustration
of the anti-screening effect of gravity in the UV. Its dependence on the advanced time~$v$ approximately follows an inverse power, cf. Eq.~\eqref{eq: H(v)}. In particular, at the end of the gravitational collapse, the effective Newton coupling interpolates between the classical observed value
$G_0$, which is recovered at large distances, and zero in the limit $r\to0$, i.e., where quantum gravity effects
become stronger. Importantly, we checked that these qualitative features are insensitive to the initial and boundary conditions.
An additional striking feature of the effective Newton coupling lies in its damped oscillations along the radial direction at late times. Such oscillations have been observed in certain black hole solutions of higher-derivative gravity with specific non-local form factors~\cite{Zhang:2014bea}. This seems to be consistent with the arguments in Sect.~\ref{sect:FRG-RGimp-DecMech}, and specifically with the insight that the decoupling mechanism might fulfil the original scope of RG improvement, granting access to some of the quantum corrections in the effective action. We will come back to this topic in the next section.
Finally, Fig.~\ref{fig: numerical solution lapse function} shows the time-evolution of the $(0,0)$-component of the resulting metric according to the defining equation~\eqref{eq: lapse function}. The collapse drives the formation of a black hole horizon whose location lies initially at a radius smaller than its classical counterpart, the latter being approximately located at $r_h/l_{Pl} = 2 m(v)/m_{Pl}$. In fact, the classical Schwarzschild spacetime at the end of the collapse is reproduced well for sufficiently large masses of the final configuration, and only outside of the Planckian region, $r\gg l_{Pl}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{img-f-full-pde-solution.pdf}
\hfill
\includegraphics[width=0.53\textwidth]{img-f-full-pde-solution-different-v-slices.pdf}
\caption{\label{fig: numerical solution lapse function}Time evolution and radial dependence of the lapse function $f_\infty(r,v)$, i.e.~the $(0,0)$-component, of the VKP spacetime with effective Newton coupling $G_\infty(r,v)$. The collapse drives the formation of a black hole horizon whose location lies initially at a radius smaller than for the classical VKP model. For sufficiently large masses, the final configuration is approximated well by the Schwarzschild spacetime, at least outside of the Planckian region, i.e., for~$r\gg l_{Pl}$.
}
\end{figure}
In the next section, Sect.~\ref{sect:Solutionsstatic}, we will analyse possible outcomes of our collapse model and provide an analytical explanation for the origin of the oscillations by studying the static limit of the partial differential equation~\eqref{eq: G infinity differential equation rough}.
\section{Static spacetimes at the end of the collapse}\label{sect:Solutionsstatic}
A key aspect of classical gravitational collapse models, including the VKP model considered here, is the formation of an event horizon and a spacetime singularity after a finite amount of time. In spherically symmetric settings, the metric at the end of the collapse is the static Schwarzschild geometry and contains a curvature singularity at $r=0$. In terms of the Kretschmann scalar the degree of divergence of the final static configuration is $R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} \propto {r^{-6}}$. On the other hand, for the quantum-corrected VKP spacetime with the effective gravitational coupling determined by~\eqref{eq: G infinity}, a weakening of the curvature singularity is expected due to the anti-screening character of gravity encoded in Eq.~\eqref{eq: G running}. Within the VKP model, the system is static for advanced times $v>\overline{v}$, as the mass function reaches the constant value $m(v)=m$. In such a static limit the effective Newton coupling becomes independent of the advanced time $v$, $G_\infty = G_\infty(r)$, and the energy density $\mu_\infty$ in Eq.~\eqref{eq: cutoff identification} vanishes. Altogether, the static limit of the effective Newton coupling is defined by the differential equation~\eqref{eq: G infinity differential equation} with all advanced time derivatives set to zero, and it describes the final spacetime configuration at the end of the gravitational collapse.
To investigate the properties of the resulting static spacetime, in this section we first study the analytical properties of the effective Newton coupling in two opposite limits: in the small radii regime, close to the classical singularity, and in the large distance limit. In the latter the solution displays the same damped oscillations appearing in the collapse phase. Neglecting such oscillations, we will find a function that interpolates between the small- and large-radii behaviors. This interpolating function will provide us with the starting point to study the evaporation phase, which is the focus of the next section.
\subsection{Analytical solution close to the classical singularity}\label{subsubsec: Solutions static fixed-point}
In the following we study the outcome of the quantum-corrected VKP model for small radii. Neglecting for a moment the evaporation effects, at the end of the collapse the effective Newton coupling $G_{\infty}$ will be a function of the radial coordinate only, governed by the differential equation~\eqref{eq: G infinity differential equation} with constant ADM mass $m(v)=m$. Focusing on the small-$r$ region, corresponding to the UV fixed point regime, the RG scale dependence of the dimensionful Newton coupling is given by $G(k) \simeq {g_*}{k^{-2}}$. Since the fixed point regime is reached for $k^2\gg m_{Pl}^2/\omega $ and $\omega = 1/g_* \sim \order{1}$ according to FRG computations, this scaling can be obtained by neglecting the $1$ in the denominator of Eq.~\eqref{eq: G running}. Accordingly, we can study the static spacetime solutions resulting from the collapse, and in the proximity of the classical singularity, by setting $m(v)=m$ (static limit) and by neglecting the $1$ in the denominator of Eq.~\eqref{eq: G infinity differential equation} (fixed point regime). In these limits the effective Newton coupling $G_\infty=G_\infty(r)$ is completely determined by the ordinary differential equation
\begin{equation}\label{eq: G infinity differential equation static fixed-point}
G_0 \omega m \qty(4 r G_\infty '' + 8 G_\infty ')G_\infty - 3 G_0 r^2 = 0\,.
\end{equation}
As we are interested in determining the leading-order scaling of $G_{\infty}(r)$ in the proximity of the would-be singularity, we assume that $G_{\infty}(r)\sim C\,r^n$ close to $r=0$ and determine the parameters $(C,n)$ by inserting this power law ansatz in Eq.~\eqref{eq: G infinity differential equation static fixed-point}. Following this approach we find
\begin{equation}\label{eq: G infinity fixed point static solution}
G_\infty(r) = \frac{1}{\sqrt{5 \omega m}}r^{3/2}\,.
\end{equation}
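As a quick consistency check, for a power law $G_\infty(r) = C\,r^n$ one finds $4 r G_\infty'' + 8 G_\infty' = 4n(n+1)\,C\,r^{n-1}$, so that Eq.~\eqref{eq: G infinity differential equation static fixed-point} reduces to
\begin{equation}
4n(n+1)\,G_0\,\omega m\, C^2\, r^{2n-1} = 3\, G_0\, r^2\,.
\end{equation}
Matching the powers and the coefficients on both sides requires $2n-1=2$ and $15\,\omega m\,C^2 = 3$, i.e., $n=3/2$ and $C = 1/\sqrt{5\omega m}$.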
Let us stress that Eq.~\eqref{eq: G infinity fixed point static solution} is expected to approximate the effective gravitational coupling at the end of the collapse and at sufficiently small radial distances, $r/l_{Pl} \ll 1$. The $r^{3/2}$-scaling implies that at the origin the effective Newton coupling goes to zero. A positive exponent for the leading power in a series expansion around the origin is consistent with previous results on RG improved black holes in spherical symmetry, e.g.~\cite{Bonanno:2000ep}. In particular, the specific exponent $3/2$ was also found in~\cite{Pawlowski:2018swz}. Using the expression~\eqref{eq: G infinity fixed point static solution} for the Newton coupling in the lapse function of the classical Schwarzschild spacetime,
\begin{equation}\label{eq: f infinity}
f_\infty(r) = 1 - \frac{2 m G_\infty(r)}{r} \simeq 1 - \frac{2 m}{r}\frac{r^{3/2}}{\sqrt{5 \omega m}}\,,\quad \mathrm{for}\,\, r\ll l_{Pl} \,,
\end{equation}
allows us to investigate properties of the geometry close to the origin. In contrast to the classical solution, the lapse function is regular and takes the value $f_{\infty}=1$ at $r=0$, as a consequence of the vanishing effective Newton coupling in the limit $r\to0$. However, the regularity of the metric at the origin does not imply a curvature singularity-free de Sitter core. Indeed, the Kretschmann scalar of the quantum-corrected VKP model at the end of the collapse becomes
\begin{equation}\label{eq: Kretschmann new}
R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} \propto \frac{1}{r^3}\,.
\end{equation}
It diverges due to the divergent metric derivative at the origin. Nonetheless, the degree of divergence is lowered compared to the classical singularity and reproduces the exponent found in~\cite{Pawlowski:2018swz} using an alternative cutoff scheme, but a similar self-consistent approach. In summary, the anti-screening character of the gravitational force at high energies reduces the strength of the curvature singularity in comparison to the classical Vaidya model. In previous studies it was shown that such an anti-screening effect might in certain cases even lead to singularity resolution~\cite{Bonanno:1998ye,Bonanno:2000ep,Bonanno:2006eu,Torres:2014gta,Torres:2017ygl}, cf.~also~\cite{Adeifeoba:2018ydh} for an analysis of necessary conditions.
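The power $r^{-3}$ can be verified directly: for a static, spherically symmetric metric with lapse function $f(r)$ (and $g_{rr}=f(r)^{-1}$), the Kretschmann scalar takes the standard form $\qty(f'')^2 + 4\qty(f')^2/r^2 + 4\qty(1-f)^2/r^4$. Inserting the near-origin lapse $f_\infty \simeq 1 - 2\sqrt{m/(5\omega)}\,r^{1/2}$ from Eq.~\eqref{eq: f infinity} yields, at leading order for $r\to 0$,
\begin{equation}
R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} \simeq \qty(\frac{1}{16}+1+4)\frac{4m}{5\omega}\,\frac{1}{r^3} = \frac{81\,m}{20\,\omega}\,\frac{1}{r^3}\,,
\end{equation}
to be contrasted with the classical result $48\,(G_0 m)^2/r^6$.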
To investigate global properties of the static solutions, an approximate scale dependence of the Newton coupling must be derived from~\eqref{eq: G running}. This is the focus of the next section.
\subsection{Analytical solution at large distances and interpolating function}\label{subsubsec: Solutions static large r}
The aim of this section is to determine static analytic solutions at large distances, complementing the analytic solution~\eqref{eq: f infinity} found in the previous section, which instead describes the endpoint of the VKP collapse for small radii. To this end, we need to solve the differential equation~\eqref{eq: G infinity differential equation} with constant mass function $m(v) =m$ and for large radii. Setting $m(v) =m$, the differential equation~\eqref{eq: G infinity differential equation} simplifies to
\begin{equation}\label{eq: G infinity differential equation static}
\qty(G_0 \omega m \qty(4 r G_\infty '' + 8 G_\infty ') + 3 r^2)G_\infty - 3 G_0 r^2 = 0\,.
\end{equation}
Moreover, at radii $r/l_{Pl}\gg 1$, we can make the ansatz
\begin{equation}\label{eq: G infinity large r ansatz}
G_\infty(r) = G_0\qty(1 - \frac{F(r)}{r})\,,
\end{equation}
where $\abs{F(r)/r}\ll 1$ at large $r$ and $\abs{F(r)/r}\to 0$ as $r\to \infty$, such that the classical lapse function is recovered at infinity. Inserting the ansatz~\eqref{eq: G infinity large r ansatz} into the differential equation~\eqref{eq: G infinity differential equation static} leads to
\begin{equation}
4 G_0^{2} m \omega \qty(F(r)-r)F''(r) - 3 r^2 F(r) = 0\,.
\end{equation}
Using that $|F(r)|\ll r$ at large distances, the previous equation reduces to a Stokes differential equation
\begin{equation}
F''(r) + \frac{3}{4 G_0^2 m \omega} r F(r) = 0\,.
\end{equation}
Solutions are linear combinations of Airy functions of the form
\begin{equation}\label{eq: Airy F}
F(r) = \mathfrak{Re} \big[c_1 \text{Ai}(a(m,\omega) r) + c_2 \text{Bi}(a(m,\omega)r)\big]\,,
\end{equation}
where $a(m,\omega) = 2^{-2/3}3^{1/3}(-G_0^2 m \omega)^{-1/3} $. The two integration constants are taken to be $c_i \propto 1/m$ on dimensional grounds (see also~\cite{Carballo-Rubio:2018pmi}). The left panel of Fig.~\ref{fig: G infinity static} shows the analytic solution for the effective Newton coupling at large $r$ according to~\eqref{eq: G infinity large r ansatz} with the function~$F(r)$ given in Eq.~\eqref{eq: Airy F}, together with the analytic solution~\eqref{eq: G infinity fixed point static solution} valid at small radii, for a mass parameter corresponding to one Planck mass, $m=m_{Pl}$. The power-law behavior in the UV remains valid up to approximately one Planck length $r\approx l_{Pl}$ away from the origin. Beyond the transition at the Planck scale where no analytic solution to the differential equation~\eqref{eq: G infinity differential equation static} is available (corresponding to the blue region in Fig.~\ref{fig: G infinity static}), the analytic solution~\eqref{eq: G infinity large r ansatz} characterized by damped Airy functions~\eqref{eq: Airy F} takes over. The presence of Airy functions causes characteristic oscillations around the classical value $G_0 = m_{Pl}^{-2}$ with decaying amplitude and wavelength at increasing radii. In the limit $r\to \infty$ the amplitude of the oscillations goes to zero, such that the classical lapse function is recovered. In particular, since the effective Newton coupling approaches the observed value of Newton's constant in the large-distance limit, the resulting spacetimes are asymptotically flat. In the right panel of Fig.~\ref{fig: G infinity static} the analytic solution~\eqref{eq: G infinity large r ansatz} at large radii is displayed for different masses. At a given radius $r$, the amplitudes of the oscillations decrease, whereas their wavelengths increase as the mass parameter $m$ grows. In particular, for astrophysical black holes the mass is $m/m_{Pl} \approx m_\odot /m_{Pl} \approx 10^{38}$ and thus the amplitude of the oscillations becomes tiny and hard to resolve. Accordingly, the energy associated with the inverse wavelength of the oscillations becomes microscopic for large masses. To sum up, the amplitude of the oscillations decreases with both the radial coordinate $r$ and the black hole mass~$m$ (while their wavelength decreases with $r$ but grows with $m$), making the oscillations negligible for astrophysical black holes.
We have additionally confirmed these findings, which are based on our analytic results, through different numerical methods, such as a direct integration of the second-order differential equation, a transformation to a first-order system, and a shooting method with boundary conditions imposed at the origin and at large radii.
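As an illustration of the first of these checks, a minimal Python sketch (ours, with arbitrary parameter values and initial data chosen purely for display) integrates the Stokes equation directly and exhibits the damped oscillations of the resulting effective Newton coupling:
\begin{verbatim}
# Direct integration of F'' + (3/(4 G0^2 m w)) r F = 0 (Planck units).
import numpy as np
from scipy.integrate import solve_ivp

G0, m, w = 1.0, 1.0, 1.0
kappa = 3.0 / (4.0 * G0**2 * m * w)

def stokes(r, y):                 # y = (F, F')
    F, Fp = y
    return [Fp, -kappa * r * F]

r = np.linspace(1.0, 50.0, 2000)
sol = solve_ivp(stokes, (r[0], r[-1]), [1.0 / m, 0.0],
                t_eval=r, rtol=1e-10, atol=1e-12)

G_inf = G0 * (1.0 - sol.y[0] / sol.t)   # ansatz G = G0 (1 - F/r)
# G_inf oscillates around G0; amplitude and wavelength shrink as r grows.
\end{verbatim}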
Let us now comment on the interpretation of these oscillations of the lapse function. On the one hand, similar oscillation patterns were found in certain models of quadratic gravity~\cite{Bonanno:2013dja,Bonanno:2019rsq} (where, however, the amplitude of the oscillations does not decrease with increasing $r$, nor does the period increase), in higher-derivative gravity with specific non-local form factors~\cite{Zhang:2014bea}, and in the context of corpuscular gravity~\cite{Giusti:2021shf,Casadio:2021eio,Casadio:2022ndh}. On the other hand, the RG improvement procedure was originally introduced as a way to explore the leading effects of operators occurring at higher order in the expansion of the effective action. In particular, operators quadratic in the curvature will appear naturally beyond the Einstein-Hilbert truncation. Reproducing solutions to a quadratic action with non-local form factors may therefore be viewed as an indication that the results of the iterative RG improvement coupled with the decoupling mechanism are consistent.
Next, we need to determine an analytic approximation to the full static solution. If the oscillations on top of the effective Newton coupling are neglected, we find that the analytic power-law solution~\eqref{eq: G infinity fixed point static solution} at the origin and the classical constant Newton coupling at large~$r$ are smoothly connected by the interpolating function
\begin{equation}\label{eq: approximate static solution}
G_\infty(r) = G_0\qty(1 - e^{-\frac{r^{3/2}}{\sqrt{5 \omega r_h/2}\,l_{Pl}}})\,,
\end{equation}
with $r_h = 2 m G_0$, which is shown in the left panel of~Fig.~\ref{fig: G infinity static}.
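Note that for $r^{3/2} \ll \sqrt{5\omega r_h/2}\,l_{Pl}$ the exponential in Eq.~\eqref{eq: approximate static solution} can be expanded; using $r_h = 2 m G_0$ and $G_0 = l_{Pl}^2$ (in units $\hbar = c = 1$), one finds
\begin{equation}
G_\infty(r) \simeq G_0\,\frac{r^{3/2}}{\sqrt{5\,\omega\, m\, G_0}\;l_{Pl}} = \frac{r^{3/2}}{\sqrt{5\,\omega\, m}}\,,
\end{equation}
so that the interpolating function indeed reproduces the fixed-point scaling~\eqref{eq: G infinity fixed point static solution} at small radii.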
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{img-G-Airy-vs-fixed-point-solution.pdf}
\hfill
\includegraphics[width=0.47\textwidth]{img-G-Airy-analytic-solution-different-m.pdf}
\caption{\label{fig: G infinity static}Effective Newton coupling $G_\infty$ as a function of the radial coordinate in the static limit at the end of the collapse, for $\omega\equiv1$. The left panel shows static solutions to Eq.~\eqref{eq: G infinity differential equation} in different approximations and for $m=m_{Pl}$. Below the Planck scale, $r \lesssim l_{Pl}$, the effective Newton coupling is approximated by the solution~\eqref{eq: G infinity fixed point static solution} to the equations in the fixed point regime, and scales as $\sim r^{3/2}$ (dotted line). The solid line displays the analytic solution~\eqref{eq: G infinity large r ansatz} for large radii and is characterized by damped Airy functions of the form~\eqref{eq: Airy F} where we set $c_1,c_2\equiv 1/m$. The blue region is where the transition between these two analytical solutions~\eqref{eq: G infinity fixed point static solution} and~\eqref{eq: G infinity large r ansatz} should occur. Finally, the dashed line shows the exponential function~\eqref{eq: approximate static solution} which smoothly interpolates between the analytic solution in the UV and Newton's constant $G_0 = m_{Pl}^{-2}$ in the IR, and solves Eq.~\eqref{eq: G infinity differential equation} in the static limit and in the special case where the amplitude of the oscillations vanishes.
The right panel depicts the analytic solution~\eqref{eq: G infinity large r ansatz} with the function $F(r)$ specified by Eq.~\eqref{eq: Airy F} at large radii for different black hole masses. At a given radius, the amplitudes of the oscillations decrease, whereas their wavelengths increase for growing mass parameter $m$. All solutions are valid at large radii, and are not expected to provide a good approximation in the blue region where the transition to the scaling solution~\eqref{eq: G infinity fixed point static solution} occurs.}
\end{figure}
In the limit of large masses or large radii the exponential becomes negligible, and deviations from the classical Schwarzschild solution are strongly suppressed. The exponential nature of the interpolating function seems to be a feature of the self-consistent approach, which in the static case leads to a Dymnikova solution, cf.~\cite{Platania:2019kyx}, corresponding to an effective Newton coupling of the type~\eqref{eq: approximate static solution}, with characteristic scaling $\sim r^3$ close to the origin, in place of $\sim r^{3/2}$. The Dymnikova scaling is physically more appealing, as it makes curvature invariants finite at $r=0$. This is to be contrasted with our case, where the characteristic scaling $\sim r^{3/2}$ of the effective Newton coupling is not strong enough to remove the singularity, although it makes it weaker, cf.~\eqref{eq: Kretschmann new}. This result is not surprising: even at the level of a one-step RG improvement, adding quantum corrections to the static Schwarzschild solution leads to singularity resolution~\cite{Bonanno:2000ep}, while, starting from a dynamical spacetime, the dynamics of the quantum-corrected gravitational collapse typically lead to black holes with gravitationally weak (or integrable~\cite{Lukash:2011hd}) singularities~\cite{Bonanno:2017zen}. Replacing the one-step RG improvement with the self-consistent procedure in~\cite{Platania:2019kyx} does not change this intriguing result. Yet, in contrast to the one-step RG improvement, self-consistency favors the appearance of exponential lapse functions. Such an exponential behavior is a highly desirable feature, as it gives hope that the corresponding spacetime can come from a principle of least action in quantum gravity~\cite{Knorr:2022kqp}. In particular, it is conceivable that these spacetimes characterized by exponential lapse functions could stem from an effective action of the type~\eqref{eq:eff-action} with exponential form factors. Notably, this resonates with the findings in~\cite{Zhang:2014bea}, where it was shown that quadratic effective actions with exponential form factors lead to damped oscillations resembling those that we have observed. In contrast, the typical polynomial lapse functions obtained from the one-step RG improvement, such as the Bonanno-Reuter metric~\cite{Bonanno:2000ep} and the Hayward black hole~\cite{Hayward:2005gi}, seem to be incompatible with a principle of least action~\cite{Knorr:2022kqp}, making their relation with quantum gravity questionable.
\section{Dynamics of the evaporation process}\label{sect:evaporation}
In the previous section we obtained an approximate analytical result for the effective Newton coupling at the end of the collapse. According to Eq.~\eqref{eq: approximate static solution}, the resulting static metric is characterized by the approximate lapse function
\begin{equation}\label{eq: lapse function static solution}
f_\infty(r) = 1-\frac{2 m G_0}{r}\qty(1 - e^{-\frac{r^{3/2}}{\sqrt{5 \omega r_h/2}\,l_{Pl}}})\,,
\end{equation}
where we remind the reader that $m$ is the ADM mass measured by an observer at infinity. Although the lapse function~\eqref{eq: lapse function static solution} neglects the oscillations encountered in the previous section, it provides an analytical approximation to the endpoint of the gravitational collapse and sets our starting point to study its evaporation.
\subsection{Causal structure and critical mass}\label{subsubsec:alltogether}
The interpolating function~\eqref{eq: approximate static solution} allows us to study the causal structure of the quantum-corrected static spacetime at the end of the collapse, and to determine the approximate location of its horizon(s). While the classical lapse function has a single horizon at $r_h=2G_0 m$, the causal structure of the quantum-corrected spacetime is more complicated and, similarly to other proposed alternatives to Schwarzschild black holes, it depends on the ratio $m/m_{Pl}$. At a critical value $m=m_c$ there is exactly one horizon. For masses below the critical mass there is no horizon and the curvature singularity is naked and timelike. Above the critical mass instead, as is typical for regular black holes, there are two horizons. We note at this point that our construction does not eliminate the problem of mass inflation characterizing most black holes with two horizons, as the lapse function~\eqref{eq: lapse function static solution} is such that the surface gravity at the inner horizon $\kappa_-$ is non-zero~\cite{Carballo-Rubio:2022kad}.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{img-f-Airy-vs-fixed-point-solution.pdf}
\caption{\label{fig: lapse function} Lapse function $f_\infty$ of the final static configuration as a function of $r$ for different masses, each depicted with a different color. The analytic solution based on the oscillating effective Newton coupling~\eqref{eq: G infinity large r ansatz} with the function $F(r)$ defined by~\eqref{eq: Airy F} (dashed lines) is valid at large radii, $r/l_{Pl}\gg 1$. In the deep UV instead, for $r\ll l_{Pl}$, the lapse function is approximated by the analytic solution in Eq.~\eqref{eq: G infinity fixed point static solution} (solid lines), and takes the value $f_\infty(0) = 1$ at the origin. The region highlighted in blue is where the transition between these two analytic solutions, which are valid in opposite asymptotic regimes, should occur.}
\end{figure}
The critical ratio $m_c/m_{Pl}$ is expected to be of order one, since no scale other than the Planck mass is included in our physical description. The critical mass parameter can be estimated analytically as follows.
First, we start by determining the condition to have a horizon close to the classical singularity, where the Newton coupling is described by the function~\eqref{eq: G infinity fixed point static solution}. This is done by inserting~\eqref{eq: G infinity fixed point static solution} into~\eqref{eq: f infinity} and searching for zeros of the resulting lapse function. There is one zero at
\begin{equation}\label{eq: horizon location approximated}
r_h = \frac{5 \omega}{4 m/m_{Pl}} l_{Pl}\,.
\end{equation}
Next, we recall that the fixed point scaling of the Newton coupling is valid only at high energies, i.e.,~at small distances, $r/l_{Pl} \ll 1$. The previous condition is saturated in Eq.~\eqref{eq: horizon location approximated} at $r=r_h$, if the mass parameter is chosen to be $m_c/m_{Pl} \approx \sqrt{5/4} \simeq 1.12$. This derivation however is valid only if the horizon lies in the region where the analytic approximation for $G_\infty$ based on the fixed point solution is adequate and we must verify this assumption a posteriori. It turns out that $m_c/m_{Pl} \approx \sqrt{5/4}$ can only provide a rough estimate for the critical mass $m_c$. In fact, the horizon location for this value of the mass parameter would be at $r\approx 2 G_0 m$, cf.~Fig.~\ref{fig: lapse function}, which is outside the regime of validity of Eq.~\eqref{eq: G infinity fixed point static solution} but within the same order of magnitude. A numerical analysis utilizing the analytical approximation~\eqref{eq: G infinity large r ansatz} for the lapse function at large radii shows that the correct value for the critical mass lies slightly above the analytical one derived above, and is $m_c/m_{Pl} \approx 1.18$, cf.~Fig.~\ref{fig: lapse function}.
Finally, when neglecting the oscillations, i.e., when considering the interpolating lapse function~\eqref{eq: lapse function static solution} as a starting point, the qualitative causal structure is similar: depending on the value of $m$, the spacetime exhibits two, one or no horizons, cf. Fig.~\ref{fig: f infinity}. The location $r_+$ of the outer black hole horizon approximates the location of the classical Schwarzschild radius at $r_h = 2 G_0 m$ as the mass is increased. At the same time, as $m$ increases the inner Cauchy horizon $r_-$ moves closer to the origin in units of $G_0 m$. At the critical value $m_c$ both horizons coincide, while spacetimes characterized by smaller masses are horizon-free. In this case, however, the critical mass is $m_c \approx 1.55 \, m_{Pl}$. The difference with the one previously discussed stems from neglecting the oscillations of the lapse function, as is clear from Fig.~\ref{fig: lapse function}. As the specific position of the horizon does not impact the qualitative aspects of the evaporation process, and since we do not have a full solution featuring both the oscillations~\eqref{eq: G infinity large r ansatz} at large radii and the correct $\sim r^{3/2}$ scaling at short distances, we will neglect the oscillations and we will use the interpolating lapse function~\eqref{eq: lapse function static solution} as a starting point to study the evaporation process of the corresponding black hole.
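For the reader who wishes to reproduce this number, the critical configuration can be located numerically by imposing a double root of the lapse, $f_\infty(r) = f_\infty'(r) = 0$. The short Python sketch below (ours, with $G_0=\omega=1$ and a hand-picked initial guess) implements this:
\begin{verbatim}
# Critical mass of the interpolating lapse (Planck units, G0 = w = 1).
import numpy as np
from scipy.optimize import fsolve

def f(r, m):
    rh = 2.0 * m
    return 1.0 - (2.0 * m / r) * (1.0 - np.exp(-r**1.5 / np.sqrt(5.0 * rh / 2.0)))

def double_root(x):
    r, m = x
    eps = 1e-6
    fprime = (f(r + eps, m) - f(r - eps, m)) / (2.0 * eps)
    return [f(r, m), fprime]

r_c, m_c = fsolve(double_root, x0=[1.7, 1.5])
# Yields m_c of order one in Planck units (approximately 1.55, cf. the text).
\end{verbatim}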
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{img-f-contourplot-r-m.pdf}
\caption{\label{fig: f infinity} Density plot of the lapse function $f_\infty(r)$, highlighting its positivity for increasing values of the mass parameter~$m$ and as a function of the radial coordinate in units of $r_h = 2 G_0 m$. In the figure $r_-$ and $r_+$ denote the inner and outer horizon, respectively. For $r> r_h$ or $m<m_c$ the lapse function $f_\infty(r)$ is strictly positive. The causal structure is instead non-trivial for $m\geq m_c$, where $f_\infty(r)$ can also be negative or vanish. Specifically, the lapse function is negative between the two horizons, positive outside, and vanishes on the boundary. Thus, for masses $m/m_{Pl}$ greater than, equal to or less than the critical value $m_c/m_{Pl}$, the spacetime has two, one or no horizon(s) respectively.}
\end{figure}
\subsection{Evaporation process}
For $m > m_c$ the lapse function $f_\infty$ exhibits a simple zero at $r=r_+$. In particular, its derivative is non-negative for $r\geq r_+$ and its value increases monotonically from zero at $r=r_+$ to one at infinity. We may thus associate a temperature to this black hole configuration by following Hawking's analysis of black hole radiance~\cite{Hawking:1975vcx} in the language of Euclidean path integrals and thermal Green's functions~\cite{Gibbons:1976pt,Gibbons:1976ue,Hawking:1978jz}.
To this end, let us consider a static spherically symmetric spacetime of the form
\begin{equation}\label{eq: Vaidya metric}
\dd{s^2} = -f(r)\dd{t^2} + f(r)^{-1} \dd{r^2} + r^2\dd{\Omega^2}\,.
\end{equation}
A positive definite Euclidean metric can be defined by performing a Wick rotation, i.e., by complexifying the time coordinate, $t\to i\tau$. Expanding the lapse function in a Taylor series in the near-horizon region it can be shown that to first order the metric locally describes a Rindler space. A coordinate transformation $(\tau, r) \to (\phi, \rho)$, where $\phi = \abs{f'(r_+)}\tau/2$ and $\rho^2 = 4 (r-r_+)/f'(r_+)$, allows us to write the metric in the neighborhood of the horizon as
\begin{equation}
\dd{{s_E}^2} = \dd{\rho^2} + \rho^2 \dd{\phi^2} + r_+^2\dd{\Omega^2}\,.
\end{equation}
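Explicitly, using $f(r) \simeq f'(r_+)\,(r-r_+)$ near the horizon, one checks that
\begin{equation}
f\,\dd{\tau^2} \simeq \frac{\qty(f'(r_+))^2\rho^2}{4}\,\dd{\tau^2} = \rho^2\,\dd{\phi^2}\,, \qquad
f^{-1}\,\dd{r^2} \simeq \frac{1}{f'(r_+)\,(r-r_+)}\qty(\frac{f'(r_+)\,\rho}{2})^2\dd{\rho^2} = \dd{\rho^2}\,,
\end{equation}
where we used $\dd{r} = f'(r_+)\,\rho\,\dd{\rho}/2$.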
By requiring smoothness of the metric, one is led to identify $\phi$ with an angle variable having period $2\pi$ and to restrict the range of possible values of the radial variable to $r> r_+$. In this case the first two terms in the Euclidean metric correspond to the line element of a 2-dimensional flat plane written in polar coordinates $(\phi, \rho)$. The resulting manifold is a Euclidean black hole with topology $\mathbb{R}^2 \times S^2$.
The periodicity of $\phi$ translates into one of $\tau$, i.e.~$\tau \to \tau + \beta$ with period $\beta = 4\pi / f'(r_+) $. If quantized matter fields are considered on the Euclidean black hole background, their Green's functions become thermal with the temperature determined by the inverse of the parameter $\beta$,
\begin{equation}\label{eq: Temperature}
T_{BH} = \frac{1}{\beta} = \frac{f'(r_+)}{4\pi}\,.
\end{equation}
For a Schwarzschild black hole this temperature reproduces the well-known result due to Hawking~\cite{Hawking:1975vcx},
\begin{equation}\label{eq: Hawking temperature}
T_{Schwarzschild} = \frac{1}{8\pi G_0 m}\,.
\end{equation}
In order to apply the previous formula to our case, starting from the configuration with two horizons, one has to identify the location of the outer horizon $r_+$. The latter is given by the largest positive zero of the lapse function~\eqref{eq: lapse function static solution}, see~Fig.~\ref{fig: f infinity}. We determine this root numerically for varying mass and insert the result into~\eqref{eq: Temperature}. Thereby we arrive at the temperature as a function of $m$, shown in the left panel of Fig.~\ref{fig: temperature and evaporation}. For large masses the spacetime is well approximated by the Schwarzschild solution. As a consequence, in this limit the temperature of the quantum black hole reduces to the Hawking temperature~\eqref{eq: Hawking temperature}. Lowering the mass, deviations between the classical and quantum spacetime become significant, with the quantum-corrected temperature always lying below the semi-classical one. While the latter diverges as $\propto 1/m$ for small $m$, when lowering the mass of the quantum black hole its temperature reaches a maximum and subsequently falls to zero. This happens when the two horizons coincide, i.e., when the black hole mass $m$ has reached the critical value~$m_c$. Initial configurations characterized by a smaller mass parameter have no horizon and thus the derivation of~\eqref{eq: Temperature} does not apply.
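For illustration, the temperature curve can be generated with a few lines of Python (ours; Planck units, $\omega=1$, with a root bracket that assumes $m$ sufficiently above $m_c$):
\begin{verbatim}
# T(m) = f'(r_+)/(4 pi) for the interpolating lapse (Planck units, w = 1).
import numpy as np
from scipy.optimize import brentq

def f(r, m):
    return 1.0 - (2.0 * m / r) * (1.0 - np.exp(-r**1.5 / np.sqrt(5.0 * m)))

def temperature(m, eps=1e-6):
    r_plus = brentq(f, 1.5 * m, 4.0 * m, args=(m,))  # outer horizon near 2 G0 m
    fprime = (f(r_plus + eps, m) - f(r_plus - eps, m)) / (2.0 * eps)
    return fprime / (4.0 * np.pi)

masses = np.linspace(2.0, 20.0, 100)
T = [temperature(m) for m in masses]   # approaches 1/(8 pi m) for large m
\end{verbatim}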
Our results are in remarkable agreement with the RG improved spacetimes studied in~\cite{Bonanno:2000ep}, whereby a cutoff function is constructed from the radial proper distance of an observer to the center: after reaching a maximum temperature, the quantum black hole begins to cool down. The evaporation process comes to an end when its mass is lowered to $m=m_c=\order{m_{Pl}}$. The critical mass therefore represents a final state of evaporation, leaving behind a Planck-size black hole remnant.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{img-temperature-m.pdf}
\hfill
\includegraphics[width=0.47\textwidth]{img-mass-evaporation.pdf}
\caption{\label{fig: temperature and evaporation} Temperature and mass of an evaporating classical and quantum black hole. All quantities are in Planck units. The left panel depicts the temperature of the quantum black hole (solid blue line) compared to the classical Hawking temperature of a Schwarzschild black hole (dashed dark blue line). The temperature is displayed as a function of the black hole mass. In the classical evaporation process, the black hole becomes hotter and hotter, leading to a complete evaporation which eventually leaves the classical singularity naked after a finite amount of time. In the quantum version, the temperature at first increases as the mass decreases. However, in contrast to the classical case, it reaches a maximum and then slowly goes to zero. The right panel shows the black hole mass as a function of the proper time measured by an observer at infinity. The function $m(t)$ is determined by solving Eq.~\eqref{eq:temperaturevariation} numerically, with the initial value $m_i \equiv m(0)$ set to $m_i/m_{Pl}=2 >m_c$ and with $\sigma\equiv 1$. The dashed dark green line corresponds to the classical case, and shows that the evaporation process occurs in a finite amount of time. By contrast, in the quantum-corrected model (solid green line), a black hole with initial mass $m_i$ requires an infinite amount of proper time to convert the mass $(m_i-m_c)$ into Hawking radiation, eventually leading to a black hole remnant with mass $m_c$ (dotted black line).}
\end{figure}
We now evaluate how much time is needed for the evaporation of a black hole from an initial mass $m_i$ to its final value $m_f$. The mass loss per unit proper time measured by an observer is given by Stefan-Boltzmann's law
\begin{equation}\label{eq:temperaturevariation}
\dot{m} = - \sigma A(m) T_{BH}^4(m)\,,
\end{equation}
where a dot denotes differentiation with respect to the proper time $t$, $\sigma$ is a constant and $A(m)=4\pi r_+ ^2$ is the area of the outer horizon. For a Schwarzschild black hole the radiation power decreases as $\propto m^{-2}$, which leads to a finite amount of time $\propto {m_i}^3$ for the complete evaporation $m_i\to 0$ to take place, as shown in the right panel of Fig.~\ref{fig: temperature and evaporation}.
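Indeed, inserting $A = 4\pi(2G_0 m)^2$ and the temperature~\eqref{eq: Hawking temperature} into Eq.~\eqref{eq:temperaturevariation} gives
\begin{equation}
\dot{m} = -\frac{\sigma}{256\,\pi^3\,G_0^2\, m^2}
\quad\Longrightarrow\quad
\Delta t = \frac{256\,\pi^3 G_0^2}{3\,\sigma}\,m_i^3\,,
\end{equation}
which, for $\sigma = 1$ and $m_i = 2\,m_{Pl}$, reproduces the value $\Delta t = (2048\,\pi^3/3)\,t_{Pl}$ quoted in the caption of Fig.~\ref{fig: evaporation lapse function}.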
The situation is notably different in the quantum case. Starting from an initial value $m_i>m_c$, the critical final value $m_c$ is reached only asymptotically, at infinitely late times. This can be explained as follows. As the quantum black hole evaporates it eventually reduces its mass to the value associated with the maximum temperature peak, displayed in Fig.~\ref{fig: temperature and evaporation}. Thereafter the cooling process begins and the temperature gradient becomes negative. When the temperature is close to zero, the mass change per time---which obeys a $T^4$-behaviour according to Stefan-Boltzmann's law---becomes tiny. At this stage the black hole cannot radiate away power efficiently anymore. In particular, it is impossible to reach the final stage of evaporation. This result is consistently interpreted in view of the third law of black hole thermodynamics, according to which a zero surface gravity cannot be achieved in a physical process, as has already been observed in~\cite{Bonanno:2000ep}.
The time dependence of the metric is obtained by plugging the time-dependent mass function $m(t)$ into the Vaidya lapse function~\eqref{eq: lapse function}. Fig.~\ref{fig: evaporation lapse function} shows the evaporation of a Schwarzschild black hole compared to the time evolution of its quantum counterpart. A Schwarzschild black hole evaporates completely within a time $t\propto m_i ^3$, leaving behind empty Minkowski space, modulo a naked singularity. By contrast, a quantum black hole gradually approaches the critical configuration for which the inner and outer horizon coincide. Following our previous considerations, however, it will take infinitely long to get there. Remarkably, as these remnants are only produced asymptotically, the entropic arguments against them~\cite{Bekenstein:1993dz,Susskind:1995da} might not apply to this case.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{img-f-Schwarzschild-evaporation.pdf}
\hfill
\includegraphics[width=0.47\textwidth]{img-f-quantum-BH-evaporation.pdf}
\caption{\label{fig: evaporation lapse function}Schwarzschild (left panel) and quantum-corrected lapse function (right panel) at different times of evaporation. Classically, the Hawking temperature increases monotonically as the black hole mass is converted into Hawking quanta. The evaporation thus continues for a finite amount of time $\Delta t=2048 \pi^3/3 t_{Pl}$ until the ADM mass $m$ vanishes and the classical black hole reduces to a Minkowski spacetime with a naked singularity at $r=0$. The corresponding lapse function is one everywhere except at the origin, where it diverges. In the quantum model, evaporation takes an infinite amount of time and a black hole remnant with mass $m_c\sim m_{Pl}$ (solid red line) is only realized asymptotically.}
\end{figure}
\section{Conclusions}\label{sect:conclu}
A key challenge of quantum gravity is to derive spacetimes whose properties and dynamics are valid at all resolution scales. Such dynamical solutions are expected to emerge from a principle of least action, in which the classical action is replaced by its quantum (or ``effective'') counterpart. Yet, determining such an effective action as well as finding solutions to the corresponding quantum field equations is technically extremely involved. One should first evaluate the gravitational path integral or, equivalently, solve the RG equations of a scale-dependent version of the effective action~\cite{Dupuis:2020fhh}. By taking its infrared limit, all quantum fluctuations are integrated out and the scale-dependent effective action reduces to the standard quantum effective action.
As a way to circumvent these technical challenges, in the past decades studies of quantum gravity phenomenology in the context of asymptotically safe gravity have strongly relied on the use of ``RG improvement''~\cite{Coleman1973:rcssb,Migdal:1973si,Adler:1982jr,Dittrich:1985yb}. The latter was originally devised in the context of quantum field theory to provide insights on the quantum dynamics while avoiding the complex procedures of solving RG equations or computing quantum loops in perturbation theory. Its necessary ingredients are an action, the beta functions governing the scale dependence of its couplings, and a functional relation between the RG scale and the characteristic energy of a given phenomenon, e.g., the center of mass energy in a scattering process. Although the use of RG improvement in quantum field theory has been incredibly successful, its application to gravity is subject to several ambiguities (see, e.g.,~\cite{Platania:2020lqb} for a summary), making its connection to the asymptotic safety program unclear. In particular, the lack of a clear recipe to relate the RG scale with the variety of competing physical energy scales involved in gravitational phenomena is one of its most severe problems.
In this work we put forth a method to address this issue and to determine some of the leading-order quantum corrections to classical spacetimes. Our strategy relies on the so-called decoupling mechanism~\cite{Reuter:2003ca}: when a system is characterized by one or more physical infrared scales, their combination can overcome the regulator term implementing the shell-by-shell integration of fast-fluctuating modes in the path integral, thus slowing down the flow of the scale-dependent effective action. At the ``decoupling scale''---the critical scale below which the flow freezes out---the scale-dependent effective action approximates the quantum effective action. The decoupling mechanism thus provides a short-cut to the effective action and generally grants access to higher-order terms which were not part of the original truncation. In this work we derived a condition to identify the decoupling scale, given an ansatz for the action, and subsequently exploited this condition to study the dynamics of quantum-corrected black hole spacetimes in asymptotic safety, starting from the Einstein-Hilbert truncation.
Our results are remarkably promising. On the one hand, they are in qualitative agreement with previous studies based on the RG improvement. Specifically:
(i) Accounting for the dynamics of a gravitational collapse makes full singularity resolution less straightforward than in static settings. Nevertheless, quantum effects make the singularity gravitationally weaker, in agreement with preliminary indications from first-principle computations~\cite{Bosma:2019aiu}; (ii) Black holes can have up to two horizons depending on whether their mass is below, equal, or above a critical Planckian mass scale. Astrophysical black holes would thus be characterized by two horizons and their evaporation would resemble closely the one of known black holes in the literature~\cite{Dymnikova:1992ux,Hayward:2005gi,Bonanno:2006eu}. On the other hand, in our construction we find additional striking features reminiscent of higher-derivative operators with specific non-local form factors. In particular, the lapse function characterizing quantum-corrected black holes decreases exponentially, and displays damped oscillations along the radial direction. Although we started from the Einstein-Hilbert truncation, free oscillations are typical of black holes in local quadratic gravity assuming a specific sign of the Weyl-squared term~\cite{Bonanno:2013dja,Bonanno:2019rsq}. This result is consistent with the expectation that the decoupling mechanism ought to grant access to higher-derivative terms that were not included in the original truncation, and provides encouraging evidence that our construction could lead to results in qualitative agreement with first-principle calculations in quantum gravity. In addition, the damping of the oscillations indicates the presence of non-local form factors in the quadratic part of the effective action. Specifically, given the exponential nature of the dynamical lapse function we derived, one could speculate that these black holes could stem from an effective action with exponential form factors. In turn, this hypothesis is supported by the findings in~\cite{Zhang:2014bea}, where it was shown that exponential form factors in the action yield black holes whose lapse functions oscillate along the radial direction, with a characteristic damped amplitude.
Altogether, the decoupling mechanism provides an intriguing novel avenue to systematically compute leading-order corrections to classical spacetime solutions. While in this work we focused on black holes and we started from a simple ansatz for the action, our construction also applies to cosmological frameworks and can be extended to include higher-derivative terms. We hope to tackle these points in future works.
\acknowledgments
The authors would like to thank Niayesh Afshordi, Ivano Basile, Benjamin Knorr, and Nobuyoshi Ohta for interesting discussions, and Benjamin Knorr for very helpful comments on the manuscript. JNB is supported by NSERC grants
awarded to Niayesh Afshordi and Bianca Dittrich. The authors acknowledge support by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. AP also acknowledges Nordita for support within the ``Nordita Distinguished Visitors'' program and for hospitality during the last stages of development of this work. Nordita is supported in part by NordForsk.
\printbibliography{}
\end{document}
De Sitter space is widely accepted as a probable early-universe
cosmological solution, as it describes the state of the universe
during inflation. Provided our universe possesses a
completely stable positive cosmological constant, it should also
asymptotically approach de Sitter space at late times\footnote{Such an idea is at the heart of de Sitter Equilibrium, an alternative to eternal inflation as an initial-conditions-independent cosmological framework \cite{AlbrechtSorbo2004, Albrecht2009}, but the motivation for this paper is broader than this.}.
The standard lore of de Sitter space is that it acts as a heat bath,
in such a way that an Unruh-DeWitt detector for a quantum field in a de Sitter background
will register a thermal spectrum for the number of particles in a
given momentum mode (\cite{BirrellDavies1982}
and references therein). But how
does this happen? If de Sitter space is past and future eternal and the state is de Sitter invariant, then it should not come as a
surprise that the Green's functions of the quantum field embedded in
this background should partake of its thermal behavior as evidenced by
the periodicity in imaginary time inherent in the
metric \cite{GibbonsHawking1977}. However, suppose we start the de
Sitter evolution at an initial time, as might happen in inflation,
say, and further assume that we start the field in a state that is not de Sitter
invariant. What happens next?
We address this question here in a particular scheme chosen to be both relevant to the question and technically tractable. Consider the situation where, for
conformal times $\eta<\eta_0$, the background geometry is that of
Minkowski flat space-time, and a minimally coupled free field is taken
to be in the free field vacuum state of the flat space
Hamiltonian. Then at $\eta =\eta_0$, the background is changed to
become de Sitter space with an expansion rate $H$ so that the
Gibbons-Hawking temperature is $T_{\rm de\ S} = \frac{H}{2\pi}$. Since
we are only considering free field theory in a time-dependent
background, we can solve the functional Schr\"odinger equation for the
wave functional describing the state of the field explicitly, and use
this wave functional to understand to what extent does this state
approximate the ``thermal'' Bunch-Davies (BD) state \cite{BunchDavies1978} (extending the work on the corresponding modes by Schomblond and Spindel \cite{SchomblondSpindel1976}), by analyzing ratios of various correlators and momentum-energy tensors, evaluated in our vacuum state to the quantities considered in the BD vacuum state.
This issue is not only of conceptual relevance, but could have
observational consequences as well. The state of the field prior to
inflation need not be one that matches on smoothly to the BD state at
the onset of inflation, and if the number of e-folds is close to the
minimum it
is not an outlandish thought that some remnants of this
pre-inflationary state might have survived to imprint themselves on
the CMB and/or large scale structure. Conversely, given how well the
power spectrum of CMB fluctuations has been
measured \cite{WMAP2012,Planck2013}, and how closely this spectrum
follows what would have been expected from the assumptions of an
initial BD state, we can use this data and our calculation to
constrain the space of allowed initial states for inflationary
fluctuations.
It is worth noting that the question we are asking can be recast as: to what extent are there no-hair theorems for the quantum state of a test field in de Sitter space? There has been some prior work in this direction, starting from the seminal work of Ford and Vilenkin \cite{VilenkinFord1982} as well as the more recent one of Anderson, Eaker, Habib, and Molina-Par\'{i}s \cite{AndersonHabibMottolaParis2000}. In both cases, an attractor behavior was found for sufficiently well-behaved states. Related issues were also addressed in \cite{FischlerKunduPedraza2013}, \cite{FischlerNguyenPedrazaTangarife2014}, \cite{SinghGangulyPadmanabhan2013}, and \cite{SinghModakPadmanabhan2013}. Our viewpoint is somewhat different here; we don't know what the state of the field is prior to inflation but, regardless, it should be reasonable to ask what the evolution of that state is after inflation begins. Then we can ask to what extent the BD behavior is generic at late times during inflation.
In the next section we set up the initial value problem for the
Schr\"odinger wave functional with the flat space initial conditions
described above. We then use that wave functional to compute two-point
functions in our state. Since we have a free field theory, that state
will be a Gaussian, and thus fully described by the three correlation functions: $\langle
\Phi_{\vec{k}}\Phi_{-\vec{k}}\rangle$, $\langle
\Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle$, and $\langle \Pi_{\vec{k}}
\Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle$. Additionally,
we study observables such as the
expectation value of the stress-energy tensor of this state. Section
\ref{sec:numerics} is devoted to numerical results and the analysis of
ratios of two-point functions and stress-energy tensors evaluated in both our state and the BD state. We end with a discussion of our results as well as some further directions for addressing the issues dealt with here. Overall, our vacuum state approaches the BD state when considering coarse-grained collections of modes well within the horizon.
\section{\label{sec:schrodingerFT} Wave Functional, Mode Equation, and Correlation Functions}
\subsection{Finding the Schr\"odinger wave functional}
As discussed in the introduction, we consider a test scalar
field embedded in an FRW space-time that transitions from a
constant scale factor for conformal times $\eta\leq \eta_0$ to a de
Sitter scale factor for $\eta>\eta_0$. We assume that such a
space-time is generated by an appropriate stress-energy tensor via
the Einstein equations, but do not concern ourselves further with how
this background geometry is obtained.
If $\Phi(\vec{x}, \eta)$ denotes the scalar field in question, the action we use is
\begin{equation}
\label{eq:action}
S = \frac{1}{2}\int d^4 x a^4(\eta)\left[\frac{1}{a^2(\eta)} \left(\Phi^{\prime 2}-(\nabla \Phi)^2\right) -m^2 \Phi^2\right],
\end{equation}
\\
where a prime denotes a derivative with respect to $\eta$, and $m^2 = m^2_{\Phi} + \xi_B R$, with $m_\Phi$ the mass of our field. We will take the scale factor as
\begin{equation}
a(\eta) = \left\{\begin{array}{cc}-\frac{1}{\eta_0 H} & \eta\leq \eta_0 \\-\frac{1}{\eta H} & \eta>\eta_0\end{array}\right.
\end{equation}
\\
The scale factor is continuous though not differentiable at
$\eta=\eta_0$. A more reasonable assumption would be that the
transition is smoother than this (for an example see
\cite{VilenkinFord1982}), but this will suffice for our purposes.
Instead of quantizing this theory in the usual way (i.e., by defining
creation and annihilation operators acting on the Fock space of
states) we will use a Schr\"odinger picture quantization
\cite{BoyanovskyVegaHolman1994}. Here we use eigenstates of
the Schr\"odinger picture field operator $\hat{\Phi}(\vec{x})$,
$|\Phi(\cdot)\rangle$ such that
\begin{equation}
\label{eq:spicquant}
\hat{\Phi}(\vec{x}) |\Phi(\cdot)\rangle=\Phi(\vec{x}) |\Phi(\cdot)\rangle.
\end{equation}
\\
The state of the field is then represented by a wave functional $\Psi\left[\Phi(\cdot); \eta\right]$ (or more generally by a density matrix element $\rho[\Phi(\cdot),\tilde{\Phi}(\cdot);\eta]$) satisfying the Schr\"odinger (Liouville) equation
\begin{equation}
\label{eq:schreqn}
i\frac{\partial \Psi[\Phi(\cdot); \eta]}{\partial \eta} = \hat{H}\left[-i\frac{\delta}{\delta \Phi(\cdot)}, \Phi(\cdot)\right] \Psi[\Phi(\cdot); \eta]\quad \left(\text{or }i\frac{\partial\rho}{\partial \eta}=\left[ \hat{H}, \rho\right]\right),
\end{equation}
\\
where $\hat{H}$ is the Hamiltonian operator (again in the Schr\"odinger picture) obtained from the action in Eq. (\ref{eq:action}). For our case this reads
\begin{equation}
\label{eq:hamiltonian}
\hat{H} = \int d^3 x\left\{ \frac{\Pi^2}{2 a^2 (\eta)} +\frac{1}{2} a^2 (\eta) \left(\nabla \Phi(\vec{x})\right)^2 + \frac{1}{2} m^2 a^4(\eta)\Phi(\vec{x})^2 \right\},
\end{equation}
\\
with $\Pi = a^2 (\eta) \Phi^{\prime}$ being the canonically conjugate momentum to $\Phi$, represented in the usual way as $\Pi\rightarrow -i\delta\slash \delta \Phi(\cdot)$ in the Schr\"odinger picture.
We note that the Schr\"odinger equation in
Eq. (\ref{eq:schreqn}) should be written using the proper time of the observer
measuring the wave function. For an FRW space-time this would be the
cosmic time $t$. However, the use of conformal time corresponds to a
canonical transformation and thus gives rise to the same
physics \cite{BoyanovskyVegaHolman1994}, as would be expected of a
coordinate transformation. It will be important in our later analysis to keep in mind that
$t\rightarrow \infty$ corresponds to $\eta \rightarrow 0^-$.
We will take our spatial geometry to be flat so we can expand the field in terms of Fourier components. Furthermore, we will quantize our field in a box of comoving spatial volume $V$ so that the Schr\"odinger picture field and conjugate momenta can be written as
\begin{eqnarray}
\label{eq:expansion}
& & \Phi(\vec{x}) = \frac{1}{\sqrt{V}} \sum_{\vec{k}} \Phi_{\vec{k}} e^{-i \vec{k}\cdot \vec{x}}\nonumber\\
& & \Pi(\vec{x}) = \frac{1}{\sqrt{V}} \sum_{\vec{k}} \Pi_{\vec{k}} e^{-i \vec{k}\cdot \vec{x}},
\end{eqnarray}
\\
where the equal time commutation relations
\begin{equation}
\label{eq:commrel}
\left[ \Phi_S(\vec{x}),\Pi_S(\vec{y})\right]= i\delta^3(\vec{x}-\vec{y})
\end{equation}
\\
imply $\left[\Phi_{\vec{k}}, \Pi_{\vec{q}}\right] = i \delta_{\vec{k}, -\vec{q}}$ and thus, in the Schr\"{o}dinger picture, $\Pi_{\vec{q}}$ can be represented as $-i\frac{\delta}{\delta \Phi_{-\vec{q}}}$. Hence, the Hamiltonian breaks up into the sum of Hamiltonians for each mode, and we can also write the wave function as the product of wave functions for each mode:
\begin{eqnarray}
\label{eq:momentumspace}
&& H = \sum_{\vec{k}} H_{\vec{k}}\quad \text{with } H_{\vec{k}}= \frac{\Pi_{\vec{k}} \Pi_{-\vec{k}}}{2 a^2(\eta)} +\frac{1}{2} a^2(\eta) \Omega_{\vec{k}}^2(\eta)\ \Phi_{\vec{k}} \Phi_{-\vec{k}},\nonumber\\
&& \Psi[\{\Phi_{\vec{k}}\}, \eta] = \prod_{\vec{k}} \psi_{\vec{k}} (\Phi_{\vec{k}}, \eta),\nonumber\\
&& \Omega_k^2(\eta) \equiv k^2 + m^2 a^2(\eta).
\end{eqnarray}
\\
Since we have a free field theory, our ansatz for the ground-state wave functional for each mode should be Gaussian as in
\begin{equation}
\label{eq:gaussianansatz}
\psi_{\vec{k}} (\Phi_{\vec{k}}, \eta)=N_{\vec{k}} (\eta) \exp\left(-\frac{1}{2} A_k (\eta) \Phi_{\vec{k}} \Phi_{-\vec{k}}\right),
\end{equation}
\\
where we have made use of rotational invariance to write the kernel $A_k(\eta)$ as a function of the magnitude $k$ of $\vec{k}$. By matching powers of $\Phi_{\vec{k}}$ on either side of the Schr\"odinger equation for each mode we find:
\begin{eqnarray}
\label{eq:seqn}
&& i\frac{N_{\vec{k}}^{\prime} (\eta)}{N_{\vec{k}} (\eta)} = \frac{A_k(\eta)}{2 a^2(\eta)} \nonumber\\
&& i A_k^{\prime}(\eta) = \frac{A_k^2(\eta)}{a^2(\eta)}-a^2(\eta)\Omega_{k}^2(\eta)\quad A_k(\eta_0) = \Omega_k(\eta_0) a^2(\eta_0)
\end{eqnarray}
\\
where the primes represent conformal time derivatives, and the initial condition is found by considering the ground state wave function of a quantum mechanical harmonic oscillator with mass $a^2(\eta_0)$ and frequency $\Omega_k (\eta_0)$.
\subsection{Solving the mode equations}
Eq. \eqref{eq:seqn} is of the Riccati form and can be converted into a second order equation of Schr\"odinger type via the substitution
\begin{equation}
A_k(\eta) = -i a^2(\eta)\left(\frac{\phi_k^{\prime}(\eta)}{\phi_k(\eta)}-\frac{a^{\prime}(\eta)}{a(\eta)}\right).
\end{equation}
\\
Doing this we find
\begin{equation}
\label{eq:mode}
\phi_k^{\prime \prime}(\eta) + \left(\Omega_k^2(\eta)-\frac{a^{\prime \prime}(\eta)}{a(\eta)}\right) \phi_k(\eta)=0,\quad \phi_k^{\prime}(\eta_0) =\left( i \Omega_k(\eta_0)+\frac{a^{\prime}(\eta_0)}{a(\eta_0)}\right)\phi_k(\eta_0).
\end{equation}
\\
The equation we start with for $A_k(\eta)$ is a first order equation and we have one initial condition for it so that there is a unique solution for $A_k(\eta)$. On the other hand, the equation for $\phi_k(\eta)$ is a second order one, requiring two initial conditions for a unique solution. The resolution of this dilemma can be found by noting that $A_k$ is related to the ratio of $\phi_k^{\prime}$ and $\phi_k$. This means that in any linear combination of the two independent solutions to Eq. (\ref{eq:mode}), we can factor out an overall constant leaving only one constant to be determined. We can use this freedom to fix the (constant) Wronskian of $\phi_k(\eta)$ and $\phi_k^*(\eta)$ to equal $-i$. Imposing this condition then implies that $\phi_k(\eta_0) = \frac{1}{\sqrt{2 \Omega_k(\eta_0)}}$.
Eq. (\ref{eq:mode}) is nothing but the mode equation for a massive, minimally coupled scalar field in de Sitter space. The solutions are well known \cite{SchomblondSpindel1976} and we can write
\begin{eqnarray}
\label{eq:modesoln}
&& \phi_k(\eta) = \alpha_k {\cal U}_k(\eta)+\beta_k {\cal U}_k^*(\eta),\quad {\cal U}_k(\eta) = \frac{\sqrt{-\pi \eta}}{2}H_{\nu}^{(2)}(-k\eta),\nonumber\\
&& \alpha_k = \frac{i}{\sqrt{2 \Omega_k(\eta_0)}}\left[{\cal U}_k^{* \prime}(\eta_0)+ \left(-i \Omega_k(\eta_0)+\frac{1}{\eta_0}\right){\cal U}_k^{*}(\eta_0)\right], \\
&&\beta_k = -\frac{i}{\sqrt{2 \Omega_k(\eta_0)}}\left[{\cal U}_k^{ \prime}(\eta_0)+\left(-i \Omega_k(\eta_0)+\frac{1}{\eta_0}\right){\cal U}_k(\eta_0)\right],\nonumber
\end{eqnarray}
\\
where $\nu =
\sqrt{\frac{9}{4}-\frac{m^2}{H^2}}$ and ${\cal U}_k(\eta)$ is commonly referred to as the $k^{th}$ Bunch-Davies mode. It is easy to check that the
Wronskian condition implies that $|\alpha_k|^2-|\beta_k|^2=1$; had we
been doing Heisenberg field theory, we would infer that the modes
$\phi_k(\eta)$ are just the Bogoliubov transforms of the BD modes. Moreover, as $\eta_0 \rightarrow -\infty$, the
form of ${\cal U}_k(\eta)$ allows us to conclude:
\begin{eqnarray}
\label{eq:limeta0}
&& \Omega_k(\eta_0) \rightarrow k \nonumber \\
&&{\cal U}_k(\eta_0) \rightarrow \frac{1}{\sqrt{2k}}, \\
&&{\cal U'}_k(\eta_0) \rightarrow i \sqrt{\frac{k}{2}} \nonumber,
\end{eqnarray}
\\
from which we can infer $\alpha_k \rightarrow 1$ and $\beta_k \rightarrow 0$, i.e. in this limit, we go back to an eternal inflationary patch of de Sitter space with the field state being the BD state.
The full wave function for the mode $\Phi_{\vec{k}}$ is thus given by
\begin{equation}
\label{eq:kwavefcn}
\psi_{\vec{k}}(\eta) = \left(\frac{a^2(\eta)}{\pi \left|\phi_k(\eta)\right|^2}\right)^{\frac{1}{4}} \exp\left[\frac{i}{2} a^2(\eta) \left(\frac{\phi_k^{\prime}(\eta)}{\phi_k(\eta)}-\frac{a^{\prime}(\eta)}{a(\eta)}\right) \Phi_{\vec{k}} \Phi_{-\vec{k}}\right],
\end{equation}
\\
where we should note that when computing any expectation values for
quantities involving the mode $\Phi_{\vec{k}}$, we also need to
include the contribution of the wave function $\psi_{-\vec{k}}(\eta)$,
since $\Phi_{-\vec{k}} = \Phi_{\vec{k}}^*$, and $\Phi$ is a real
field. This is equivalent to using the square of
$\psi_{\vec{k}}(\eta)$ in any such calculation.
Eq. \eqref{eq:kwavefcn} coupled with the mode equations (\ref{eq:mode}) gives the full specification of the quantum state with the given initial conditions. We can now use this wave function to compute observables that might help us answer the question asked in the introduction: to what extent does this state ``feel'' de Sitter space?
\subsection{Calculating relevant correlation functions}
What are the useful diagnostic tools to evaluate the behavior of this state? Since the state is Gaussian, it can be fully specified by the following correlators: $\langle \Phi_{\vec{k}}\Phi_{-\vec{k}}\rangle,\ \langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle,\ \langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle$, computed below.
From (\ref{eq:kwavefcn}) computing $\langle \Phi_{\vec{k}} \Phi_{-\vec{k}}\rangle(\eta)$ results in
\begin{eqnarray}
\label{eq:2ptphi}
\langle \Phi_{\vec{k}} \Phi_{-\vec{k}}\rangle(\eta) &&= \int {\cal D}\Phi_{\vec{k}} \left|\psi_{\vec{k}}(\eta)\right|^2 \left|\psi_{-\vec{k}}(\eta)\right|^2 \Phi_{\vec{k}} \Phi_{-\vec{k}} \nonumber \\
&& = \frac{1}{2A_{kR}} \\
&& = \frac{|\phi_k(\eta)|^2}{a^2(\eta)}. \nonumber
\end{eqnarray}
\\
where $A_{kR}$ denotes the real part of the kernel $A_k(\eta)$.
The other correlators are also easy enough to compute. For $\langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle$ we have
\begin{eqnarray}
\label{eq:2ptpi}
\langle \Pi_{\vec{k}} \Pi_{-\vec{k}}\rangle(\eta) &&= \int {\cal D}\Phi_{\vec{k}}\ \psi_{\vec{k}}(\eta)^{* 2} \left(-\frac{\delta^2}{\delta \Phi_{\vec{k}} \delta \Phi_{-\vec{k}}}\right) \psi_{-\vec{k}}(\eta)^2 \nonumber \\
&& =\frac{\left|A_k\right|^2}{2 A_{k R}} \\
&& = a^4(\eta) \left| \frac{d}{d\eta}\left(\frac{\phi_k(\eta)}{a}\right) \right|^2. \nonumber
\end{eqnarray}
\\
Finally, we find $\langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle$ to be given by
\begin{eqnarray}
\label{eq:2ptmixed}
\langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle&& =\int {\cal D}\Phi_{\vec{k}}\ \psi_{\vec{k}}(\eta)^{* 2} \left(-i\frac{\delta}{\delta \Phi_{-\vec{k}}}\Phi_{\vec{k}} -\Phi_{-\vec{k}}i\frac{\delta}{\delta \Phi_{\vec{k}}} \right) \psi_{-\vec{k}}(\eta)^2 \nonumber \\
&& = -\frac{A_{k I}}{A_{k R}}\\
&&=a^2(\eta) \frac{d}{d\eta}\left(\frac{\left|\phi_k(\eta)\right|^2}{a^2(\eta)}\right) \nonumber.
\end{eqnarray}
\\
We can check to see what happens to our two-point functions as a
function of time. In particular, we might expect that, if de Sitter
space really did act as a heat bath and an ``equilibration'' process truly
was in effect over time, then we should see these correlators approach
the standard de Sitter two-point functions. We can check this by
noticing that at late times, it is only the imaginary part of the
Hankel function that becomes relevant (since it is singular as
$\eta\rightarrow 0^-$). Hence, the late-time expression of the Hankel
function is
\begin{equation*} H_{\nu}^{(2)}(-k\eta) \rightarrow i\frac{\Gamma(\nu)}{\pi}\left(\frac{2}{-k\eta}\right)^{\nu} \quad \text{as } \eta \to 0^-, \end{equation*}
\\
so that, as $k\eta$ approaches $0^-$ for finite $k$, we can use this form in our two-point functions to find:
\begin{align}
\label{eq:2ptfunctionsAsymptotic}
&\left\langle \Phi_{\vec{k}}\Phi_{-\vec{k}} \right\rangle \rightarrow 4^{\nu -1}\frac{H^2 \Gamma^2(\nu)}{\pi}\left|\alpha_k - \beta_k \right|^2(-\eta)^{3} (-k\eta)^{-2\nu} , \nonumber \\
& \left\langle\Pi_{\vec{k}}\Pi_{-\vec{k}} \right\rangle \rightarrow 4^{\nu - 3}\frac{\left|\alpha_k - \beta_k \right|^2}{\pi H^2} (-\eta)^{-3}(-k\eta)^{-2\nu}\left[(k\eta)^2\Gamma(\nu-1)+(6-4 \nu)\Gamma(\nu)\right]^2, \\
& \left\langle \Phi_{\vec{k}}\Pi_{-\vec{k}} +\Pi_{-\vec{k}}\Phi_{\vec{k}} \right\rangle \rightarrow -4^{\nu-\frac{3}{2}} \frac{\left|\alpha_k - \beta_k \right|^2}{\pi} (-k\eta)^{-2\nu}\Gamma(\nu)\left[(k\eta)^2\Gamma(\nu-1)+(6-4\nu)\Gamma(\nu)\right]. \nonumber
\end{align}
\\
For $\nu = \frac{3}{2}$, the first line of Eq. \eqref{eq:2ptfunctionsAsymptotic} reduces to $\left\langle \Phi_{\vec{k}}\Phi_{-\vec{k}} \right\rangle \rightarrow \frac{H^2}{2k^3}\left|\alpha_k - \beta_k \right|^2$, the familiar frozen superhorizon amplitude, modulated by the Bogoliubov factor. From \eqref{eq:modesoln}, we can compute $ \left|\alpha_k-\beta_k\right|^2$ as
\begin{equation}
\left|\alpha_k-\beta_k\right|^2=\frac{2}{\Omega_k(\eta_0)}\left[\left(\Re\left({\cal
U}_k^{\prime}(\eta_0)+\frac{1}{\eta_0}{\cal
U}_k(\eta_0)\right)\right)^2+\Omega_k^2(\eta_0)
\left(\Re\left({\cal U}_k(\eta_0)\right)\right)^2\right].
\label{EqnCoeff}
\end{equation}
\\
At first glance, focusing our attention on $\left\langle
\Phi_{\vec{k}}\Phi_{-\vec{k}} \right\rangle$, Eq.
(\ref{eq:2ptfunctionsAsymptotic}) tells us that, even at late times,
information about the initial state as encoded in the coefficients
$\alpha_k$ and $ \beta_k$ is not lost, at least not in the two-point
function. This should not be surprising since unitary evolution
always preserves information about the initial state as long as the
state is viewed in a sufficiently fine-grained manner. It is only
through coarse-graining that a process of equilibration (should it
occur) will be revealed. In this paper we will consider coarse-graining that is expressed by looking at quantities averaged over a
range of $k$ modes.
We can be more explicit about Eq. \eqref{EqnCoeff} in the massless, minimally
coupled case where $\nu= \frac{3}{2}$ (this was also treated
in \cite{AndersonHabibMottolaParis2000}). In this case,
\begin{equation}
{\cal U}_k(\eta) = -\frac{e^{ik\eta}}{\sqrt{2 k}}\left(1+\frac{i}{k \eta}\right),\quad \Omega_k(\eta_0) = k,
\end{equation}
\\
and
\begin{equation}
\label{eq:bogomass}
\left|\alpha_k-\beta_k\right|^2 = 1-\frac{\sin 2 k \eta_0}{k \eta_0} + \frac{\sin^2 k \eta_0}{\left(k \eta_0\right)^2}.
\end{equation}
\\
For $-k\eta_0\gg 1$, this modulating factor tends towards $1$.
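As a quick numerical cross-check of Eq. \eqref{eq:bogomass} and of the Bogoliubov condition $|\alpha_k|^2 - |\beta_k|^2 = 1$, the coefficients can be evaluated directly from Eq. \eqref{eq:modesoln} using the closed-form massless mode. The short Python sketch below does this; the parameter values are purely illustrative.
\begin{verbatim}
import numpy as np

def U(eta, k):   # massless (nu = 3/2) Bunch-Davies mode
    return -np.exp(1j*k*eta)/np.sqrt(2*k)*(1 + 1j/(k*eta))

def dU(eta, k):  # its conformal-time derivative
    return -np.exp(1j*k*eta)/np.sqrt(2*k)*(1j*k - 1/eta - 1j/(k*eta**2))

k, eta0 = 1.0, -10.0                    # illustrative values only
c = -1j*k + 1/eta0                      # Omega_k(eta0) = k when m = 0
alpha = 1j/np.sqrt(2*k)*(np.conj(dU(eta0, k)) + c*np.conj(U(eta0, k)))
beta = -1j/np.sqrt(2*k)*(dU(eta0, k) + c*U(eta0, k))

assert abs(abs(alpha)**2 - abs(beta)**2 - 1) < 1e-12  # Bogoliubov condition
closed = 1 - np.sin(2*k*eta0)/(k*eta0) + (np.sin(k*eta0)/(k*eta0))**2
assert abs(abs(alpha - beta)**2 - closed) < 1e-12     # Eq. (bogomass)
\end{verbatim}
The first assertion reflects the conserved Wronskian, while the second checks Eq. \eqref{eq:bogomass} directly.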
The two-point functions are studied in further detail in section~\ref{sec:numerics}, after substituting $q = -k\eta_0$ in the mode equation, and performing proper rescalings of our quantities by appropriate powers of $-\eta_0$.
\subsection{The stress-energy tensor}
With the previous two-point functions in hand, we can study the equilibration of our mode with respect to the BD mode. Moreover, we can calculate relevant quantities such as the stress-energy tensor and in particular the energy density $\left\langle T^0_{\phantom{b}0} \right\rangle$.
We need to compute the expectation value of the stress-energy tensor in a particular state corresponding to the density matrix $\rho(\eta)$. The stress-energy tensor in operator form is
\begin{align*} T_{\mu\nu} = &\left(1-2\xi_B\right)\nabla_\mu\Phi\nabla_\nu\Phi + \left(2\xi_B-\frac{1}{2}\right)g_{\mu\nu}g^{\alpha\beta}\nabla_\alpha\Phi\nabla_\beta\Phi + g_{\mu\nu} V(\Phi) \\
&-\xi_B\Phi^2\left(R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R\right) + 2\xi_B\Phi(g_{\mu\nu}\Box - \nabla_\mu\nabla_\nu)\Phi.
\end{align*}
\\
Thus, in a de Sitter background,
\begin{align}
\label{eq:T00}
\left\langle T^0_{\phantom{b}0} \right\rangle = & \left\langle \frac{\Phi'^2}{2a^2} + \frac{1}{2a^2}(1-4\xi_B)(\nabla \Phi)^2 + V(\Phi) - \xi_B G^0_{\phantom{b}0}\Phi^2 + 2 \xi_B \Phi \left[3\frac{a'}{a^3}\Phi' - \frac{1}{a^2} \nabla^2\Phi \right] \right\rangle,
\end{align}
\\
where $G^{\mu}_{\phantom{b}\nu} = R^{\mu}_{\phantom{b}\nu} - \frac{1}{2} \delta^{\mu}_{\phantom{b}\nu} R$ is the Einstein tensor. Since $\rho(\eta)$ is written in terms of the Fourier components of the fluctuation fields, $T^0_{\phantom{b}0}$ also needs to be expanded in terms of such fluctuations. Additionally, recalling the conjugate momentum $\tilde{\pi} = a^2 \Phi'$, we need to hermitianize $\Phi\Phi'$, so that $\Phi\Phi'$ goes to $\frac{1}{2a^2}(\Phi\tilde{\pi} + \tilde{\pi}\Phi)$. Therefore, combining the previous results with Eq. \eqref{eq:T00}, one obtains
\begin{align}
\label{eq:T00Fourier}
\left\langle T^0_{\phantom{b}0} \right\rangle = \int \frac{d^3k}{(2\pi)^3} & \Bigg[ \frac{1}{2a^6} \left\langle\Pi_{\vec{k}}\Pi_{-\vec{k}}\right\rangle + \left(\frac{1}{2a^2}(k^2 + a^2 V''(\Phi)) - \xi_BG^0_{\phantom{b}0} \right) \left\langle \Phi_{\vec{k}}\Phi_{-\vec{k}}\right\rangle \nonumber \\
&+3\xi_B\frac{a'}{a^5}\left\langle \Phi_{\vec{k}}\Pi_{-\vec{k}} + \Pi_{\vec{k}}\Phi_{-\vec{k}}\right\rangle \Bigg],
\end{align}
\\
where in our case $a = -\frac{1}{\eta H}$, $V''(\Phi)= m^2$, $\frac{m^2}{H^2} = \frac{9}{4} - \nu^2$, and $G^0_{\phantom{b}0} = -3H^2$. Because of the divergences, notably those appearing in $\left\langle\Pi_{\vec{k}}\Pi_{-\vec{k}}\right\rangle$ and $\left\langle \Phi_{\vec{k}}\Pi_{-\vec{k}} + \Pi_{\vec{k}}\Phi_{-\vec{k}}\right\rangle$, the above integration is not straightforward, even in the massless, minimally coupled case. A more in-depth analysis of Eq. \eqref{eq:T00Fourier} will be presented in the next section.
\section{Numerical Work}\label{sec:numerics}
\subsection{Numerical approach}
Now that we have calculated all the relevant correlation functions involving our state and used those to compute the pertinent observables, we turn to a numerical analysis of these quantities.
Before doing this, however, a rescaling of our mode
equations, mode functions $\phi_k$, and corresponding momenta $\Pi_k$ should be
performed. Time will be measured in units of $\eta_0$, $\langle
\Phi_{\vec{k}}\Phi_{-\vec{k}}\rangle$ in units of $(\eta_0H)^2$ and
$\langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle$ in units of
$(\eta_0H)^{-2}$. We rescale the momenta by $\eta_0$ and the modes by $\sqrt{-\eta_0}$, and define $q = -k\eta_0$ along with the new ``time'' variable $x = 1 - \frac{\eta}{\eta_0}$. Since we are only interested in conformal times $\eta \in [\eta_0,0)$, we have $x \in [0,1)$. A given mode labeled by the comoving wavenumber $k$ crosses the de Sitter horizon when $k\eta= -1$, which corresponds to $x = 1 - \frac{1}{q}$ (for instance, the $q = 10$ mode crosses at $x = 0.9$).
Then the BD mode and mode equations \eqref{eq:mode} respectively become
\begin{equation}
\label{eq:BDresc}
{\cal U}_q(x) = \frac{\sqrt{\pi(1-x)}}{2}H^{(2)}_\nu\left(q(1-x)\right),
\end{equation}
\\
and
\begin{equation}
\label{eq:moderesc}
\phi_q^{\prime \prime}(x) + \left(q^2+\frac{\frac{1}{4}-\nu^2}{(1-x)^2}\right) \phi_q(x)=0,
\end{equation}
\\
where a prime now denotes a derivative with respect to $x$. Notice that, going from Eq. \eqref{eq:mode} to Eq. \eqref{eq:moderesc}, we made the substitution $\frac{m^2}{H^2} = \frac{9}{4} - \nu^2$. The initial conditions previously defined now give
\begin{equation}
\label{eq:ICresc}
\phi_q(0) = \frac{1}{\sqrt{2}\left(q^2+\frac{9}{4}-\nu^2\right)^{\frac{1}{4}}} \text{\hspace{10 pt} and \hspace{10 pt}} \phi_q^{\prime}(0) = \left[i\left(q^2 + \frac{9}{4} - \nu^2\right)^{\frac{1}{2}}+1\right]\phi_q(0).
\end{equation}
\\
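With these initial data, Eq. \eqref{eq:moderesc} is straightforward to integrate numerically. The following Python sketch is one possible implementation (the routine name, tolerances, and the use of SciPy's \texttt{solve\_ivp} are our choices, not a prescription); we refer back to it in later snippets.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def solve_mode(q, nu=1.5, x_max=0.999):
    """Integrate Eq. (moderesc) with initial data from Eq. (ICresc)."""
    w0 = np.sqrt(q**2 + 2.25 - nu**2)   # Omega_q at x = 0
    phi0 = 1/np.sqrt(2*w0)
    y0 = np.array([phi0, (1j*w0 + 1)*phi0], dtype=complex)
    rhs = lambda x, y: [y[1], -(q**2 + (0.25 - nu**2)/(1 - x)**2)*y[0]]
    # The default RK45 method integrates complex-valued systems directly;
    # sol.sol(x)[0] returns phi_q(x) and sol.sol(x)[1] returns phi_q'(x).
    return solve_ivp(rhs, (0.0, x_max), y0, rtol=1e-10, atol=1e-12,
                     dense_output=True)
\end{verbatim}
The conserved Wronskian, $\phi_q \partial_x \phi_q^* - \phi_q^* \partial_x \phi_q = -i$, can be monitored along the integration as a consistency check.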
The measure of the equilibration of our state to the BD state will be quantified by the approach of the corresponding correlators to the standard BD ones. We will examine this approach both mode by mode as well as in terms of momentum integrated quantities. For simplicity, we focus on the massless, minimally coupled case below.
\subsection{Correlators}\label{subsec:correlators}
\subsubsection{Single mode case}
We consider ratios of the form
$\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}}$,
$\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}
\right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}
\right\rangle^{(BD)}}$, and $\frac{\left\langle
\Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}}
+\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$, where $(Mode)$
stands for a correlation function evaluated in our ansatz and $(BD)$
for the same quantity examined in the Bunch-Davies state. Below are
plots of all such ratios.
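Such curves can be generated directly from the mode solutions. A minimal sketch (building on the hypothetical \texttt{solve\_mode} routine above, with the massless BD mode taken in closed form; note that the scale factors cancel in this first ratio):
\begin{verbatim}
def U_BD(x, q):  # rescaled massless BD mode, Eq. (BDresc) with nu = 3/2
    s = q*(1 - x)
    return -np.exp(-1j*s)/np.sqrt(2*q)*(1 - 1j/s)

xs = np.linspace(0.0, 0.99, 2000)
sol = solve_mode(10.0)
phi_ratio = np.abs(sol.sol(xs)[0])**2/np.abs(U_BD(xs, 10.0))**2
\end{verbatim}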
Fig. \ref{phiRatio3q} shows
$\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}}$ for $q = 1, 10$, and 100. For $q = 1$,
corresponding to a mode that is crossing the horizon at $\eta = \eta_0$, the ratio increases
monotonically and then plateaus well below unity at larger $x$ values, meaning that no
equilibrium between $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}$ and $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}$ is reached. This is not
surprising. Indeed, since all modes for which $q \in [0,1]$ are
essentially frozen, we should not expect anything
dynamical to happen to their matching correlation functions. Thus
it seems clear that for modes crossing the horizon or outside of it,
information about the initial state is never lost.
As $q$ increases to 10 or even 100,
$\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}}$ is characterized by an undamped oscillatory
behavior about $1$ with higher $q$'s having smaller amplitudes. The absence of damping is due to the fact that
taking a ratio of $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle$ in different states erases the contributions of the
scale factors, as can be seen from Eq. \eqref{eq:2ptphi}, hence the
red-shifting of the modes due to the expansion of the
universe is removed. Additionally, since our state can be viewed as a Bogoliubov
transform of the BD state, $|\phi_q(x)|^2$ just oscillates about
$|{\cal U}_q(x)|^2$ with constant amplitude. Given the form of
$\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle$ the same
should occur between $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}$ and $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{phiRatio3q.pdf}
\caption{The ratio $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}}$ for $q = 1, 10,$ and $100$. For $q
= 1$ (dotted line) the ratio clearly does not asymptote to
$1$, while for $q = 10$ and $q = 100$ (dashed and solid
lines, respectively) it oscillates about $1$ without any
damping, but with amplitude decreasing with increasing $q$
values. These fine-grained curves show no sign of equilibration.}
\label{phiRatio3q}
\end{figure}
The ratio $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}
\right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}
\right\rangle^{(BD)}}$ in Fig. \ref{piRatio3q}, presents
similarities with Fig. \ref{phiRatio3q} for $q = 10$ and $q = 100$,
namely the ratio corresponding to such modes is oscillatory about an
equilibrium value of $1$ and undamped. Moreover for $q = 1$, no approach to unity is observed.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{piRatio3q.pdf}
\caption{The ratio $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ for $q = 1, 10$, and 100. Similarly to what was observed in Fig. \ref{phiRatio3q}, it appears the ratio evaluated at $q = 1$ does not approach $1$, while $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ taken for $q = 10$ or $q = 100$ oscillates without damping about $1$, with an amplitude that decreases as $q$ becomes higher. As in Fig. \ref{phiRatio3q} no signs of equilibration are found.}
\label{piRatio3q}
\end{figure}
The last ratio of correlators, $\frac{\left\langle
\Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}}
+\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$, is shown
in Fig. \ref{mixRatio3q}. When $q = 1$, this
quantity appears to grow monotonically for all $x$, corroborating the
absence of equilibrium for such corresponding modes. The striking
feature of Fig. \ref{mixRatio3q}, manifesting itself when compared
to Fig. \ref{phiRatio3q} and \ref{piRatio3q}, is that the ratios
tend to oscillate about $1$ for $q > 1$, but now with an amplitude
that diminishes as a function of time. The size of the oscillations is
now damped linearly, while being constant in the first two plots. Such
a disparity is due to the presence of scale factors in
\eqref{eq:2ptmixed} that do not cancel upon taking a quotient of
two-point functions evaluated in different states.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{mixRatio3q.pdf}
\caption{The ratio $\frac{\left\langle \Pi_{\vec{q}}\Phi_{-\vec{q}} + \Phi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{ \left\langle \Pi_{\vec{q}}\Phi_{-\vec{q}} + \Phi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ for $q = 1, 10$, and $100$. Similar conclusions to the ones in Fig. \ref{phiRatio3q} and \ref{piRatio3q} can be made, namely, the higher the $q$-mode the smaller the amplitude of oscillations. However, contrary to the previous two figures, the ratios corresponding to $q = 10$ and $100$ oscillate about $1$ with amplitude decreasing linearly.}
\label{mixRatio3q}
\end{figure}
Despite the described differences found when comparing Fig.
\ref{phiRatio3q}, \ref{piRatio3q}, and \ref{mixRatio3q}, we argue that
one can draw similar conclusions regarding the lack of
equilibration. It is not surprising that
correlation functions for modes crossing, near crossing, or outside
the horizon do not exhibit equilibration, as such modes freeze
out. Moreover the oscillatory behavior shown for higher $q$ modes in
Fig. \ref{phiRatio3q} and \ref{piRatio3q} also does not reflect
equilibration. At first glance, the curves in Fig.~\ref{mixRatio3q}
seem to indicate an approach to the BD mode, since all curves approach unity over
time. However, $x \rightarrow 1$ corresponds to $t \rightarrow
+\infty$ for cosmic time $t$. We feel the slowness of the approach to
unity of the curves in Fig.~\ref{mixRatio3q} leaves us unconvinced
that this quantity should be regarded as equilibrating. The $q=1$ and
$q=10$ curves clearly do not actually reach unity as $x \rightarrow 1$, and
the same is true of the $q=100$ curve, although this is harder to see
from the plot.
The lack of equilibration of the two-point functions for single modes is
hardly the final word. After all, one cannot learn about the
equilibration of a box of gas by following a single energy eigenstate
of the microscopic system,
no matter how strongly the equilibration is realized overall. We next
consider correlation functions averaged over a range
of $q$'s, as a way to represent coarse-graining. Although our setup
is rather formal, we believe these averaged quantities bring us closer
to representing realistic observables.
\subsubsection{Quantities averaged over modes}
We integrated all our two-point functions over finite
ranges of $q$: $[1,3]$, $[3,9]$, and $[10, 30]$. These domains in $q$
have been chosen to contrast modes that sit inside the horizon (with
wavelengths just inside the horizon for $q \in [3, 9]$, or an order of
magnitude inside it for $q \in [10, 30]$) with those that are crossing
or near the horizon.
We have found that
the general behaviors can be identified without including even
higher values of $q$. Our ratios then become
$\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}_{[q_{min},
q_{max}]}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$,
$\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}
\right\rangle^{(Mode)}_{[q_{min},
q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}
\right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$, and $\frac{\left\langle
\Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{ \left\langle
\Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}_{[q_{min}, q_{max}]}},$ where $q_{min}$ is our
lower limit of integration and $q_{max}$ our upper limit.
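Schematically, the banded ratios can be assembled as below (again a sketch building on the hypothetical \texttt{solve\_mode} and \texttt{U\_BD} helpers above; we take a flat measure in $q$ purely for illustration, since a common weight in numerator and denominator should not change the qualitative behavior of the ratio):
\begin{verbatim}
def band_ratio(q_min, q_max, xs, n_q=64):
    """<Phi Phi> averaged over [q_min, q_max] in our state vs. BD."""
    qs = np.linspace(q_min, q_max, n_q)
    num = np.zeros_like(xs)
    den = np.zeros_like(xs)
    for q in qs:
        num += np.abs(solve_mode(q).sol(xs)[0])**2
        den += np.abs(U_BD(xs, q))**2
    return num/den
\end{verbatim}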
As shown in Fig. \ref{phiRatioInt}, integrating
$\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ and
$\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}_{[q_{min}, q_{max}]}$ over $q$ introduces damping
in the ratio of the two for mode ranges well within the horizon, while
averaging over the near-horizon-crossing range ($q \in [ 1, 3]$)
does not. For the former modes, the ratios oscillate about $1$ but are damped over
time, such that they eventually asymptote to $1$.
For $q \in [10, 30]$, the ratio clearly becomes $1$, i.e.,
equilibration of $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ with
$\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}
\right\rangle^{(BD)}_{[q_{min}, q_{max}]}$ is reached. Looking at
higher $q$-modes, we observed that the higher the $q$-domain the
earlier the equilibration, since such modes have smaller
amplitudes. Comparing with Fig. \ref{phiRatio3q}, we can infer that
the damping is due to the integration over $q$-modes. Hence we may
conclude that, from the perspective of the field correlation
functions, equilibrium is attained for sets of modes that start well
inside the horizon, with $q$ of order $10$ and beyond.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{phiRatioInt.pdf}
\caption{The ratio $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ integrated for $q \in [1, 3]$, $q \in [3, 9]$, and $q \in [10, 30]$. For $q_{min} = 1$ and $q_{max} = 3$ (dotted line) the ratio clearly deviates from $1$, corroborating the fact that modes near horizon-exit and beyond do not equilibrate. For other domains $[q_{min}, q_{max}]$ (dashed and solid lines) $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ oscillates about $1$ with a clear damping over time.}
\label{phiRatioInt}
\end{figure}
We may draw similar conclusions from Fig. \ref{piRatioInt} as in Fig. \ref{phiRatioInt}. For $q \in [3, 9]$ and $q \in [10,30]$, $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ appears oscillatory about the equilibrium position and damped. For the former domain, the ratio does not exactly achieve equilibrium but approaches it. It clearly hits $1$ for $q \in [10, 30]$. For $q_{min} = 1$, no equilibration occurs. Therefore, we may conclude that modes such that $q$ is of order $10$ and beyond equilibrate, from the point of view of the momentum correlator.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{piRatioInt.pdf}
\caption{The ratio $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ integrated for $q \in [1, 3]$, $q \in [3, 9]$, and $q \in [10, 30]$. For $q_{min} = 1$ and $q_{max} = 3$ (dotted line) the ratio oscillates about $1$ without damping, while for other ranges $[q_{min}, q_{max}]$, the ratio damps out close to $1$. For $q \in [10, 30]$ (solid line), it clearly achieves $1$ for higher $x$-values. In other words coarse-graining $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ over such modes results in equilibration of the numerator and denominator.}
\label{piRatioInt}
\end{figure}
In Fig. \ref{mixRatioInt}, we observe a slight difference in the behavior of the lowest $q$-range modes. For $q \in [1, 3]$, $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ appears to plateau for $x \geq 0.7$. Nevertheless, no approach to unity is found. This again indicates that modes which are crossing or near-crossing the horizon do not equilibrate. Plots generated after integrating over $q \in [3, 9]$ and $q \in [10, 30]$ show the same trends as in Fig. \ref{phiRatioInt} and \ref{piRatioInt}. Such modes approach equilibrium (for $q_{min} = 3$) or equilibrate ($q_{min} = 10$ and higher).
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{mixRatioInt.pdf}
\caption{The ratio $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ integrated for $q \in [1, 3]$, $q \in [3, 9]$, and $q \in [10, 30]$. For the first domain of $q$ (dotted line), $\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ never equilibrates to $\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}$. For $q_{min} = 3$ or $10$, the ratio is damped over time, and equilibrium is reached for $q$-modes of order and greater than $10$.}
\label{mixRatioInt}
\end{figure}
In summary, when we ask whether the correlation functions of our state and the BD state approach one another, the answer seems to be that it depends on which modes are being considered. For those that remain well inside the horizon, we see the tendency of our state to approach the BD one, while for low $q$-modes, this does not occur.
\subsection{Stress-energy tensor}
In terms of our variables $x$ and $q$, Eq. \eqref{eq:T00Fourier} in the massless minimally coupled case becomes
\begin{equation}
\label{eq:T00q}
\left\langle T^0_{\phantom{b}0} \right\rangle = \int \frac{d^3q}{(2\pi)^3} \left[ \frac{(1-x)^6}{2} \left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle + \frac{(1-x)^2}{2}q^2 \left\langle \Phi_{\vec{q}}\Phi_{-\vec{q}}\right\rangle\right].
\end{equation}
\\
Let $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ be the integrand of Eq. \eqref{eq:T00q}. Fig. \ref{t00Ratio3q} represents $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}}{\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(BD)}}$ for $q = 1, 10$ and 100. As seen when analyzing two-point functions, the ratio settles away from $1$ when $q = 1$ and exhibits an oscillatory behavior about 1 for the other $q$-modes. However, contrary to our previous observations, the oscillations are characterized by an amplitude that increases as a function of $x$. This is rather puzzling. Indeed, as discussed in section \ref{subsec:correlators}, our state should be fully described by the two-point functions. $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ itself is a function of two of them, in the massless and minimally coupled case. Thus we should expect to draw the same conclusions as in \ref{subsec:correlators}. Note, however, that our conclusions about equilibration as perceived from the correlators originated after integrating them over $q$. This suggests that we should adopt the same approach here.
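In code, the single-mode integrand of Eq. \eqref{eq:T00q} follows from Eqs. \eqref{eq:2ptphi} and \eqref{eq:2ptpi} with the rescaled scale factor $a(x) = (1-x)^{-1}$; the sketch below (consistent with the conventions above, but again only one possible implementation) is what the subsequent sums are built from.
\begin{verbatim}
def T00_q(x, q, sol):
    """Integrand <T^0_0>_q of Eq. (T00q), massless minimal coupling."""
    phi, dphi = sol.sol(x)
    PhiPhi = (1 - x)**2*np.abs(phi)**2                # Eq. (2ptphi)
    PiPi = np.abs((1 - x)*dphi - phi)**2/(1 - x)**4   # Eq. (2ptpi)
    return 0.5*(1 - x)**6*PiPi + 0.5*(1 - x)**2*q**2*PhiPhi
\end{verbatim}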
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{t00Ratio3q.pdf}
\caption{The ratio $\frac{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}}{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(BD)}}$ for $q = 1$, $q = 10$, and $q = 100$. For $q = 1$, the ratio settles down well below the equilibrium position, while for $q = 10$ and $100$, it oscillates with increasing amplitude about $1$. None of the curves present equilibration, similar to what was observed for other fine-grained quantities.}
\label{t00Ratio3q}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=4 in, keepaspectratio]{t00Int.pdf}
\caption{The stress-energy tensor $\left\langle T^0_{\hspace{5 pt} 0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ corresponding to $\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}$ integrated between $q = 1$ and $q = 50$. The expectation value monotonically approaches $0$ as $x$ increases.}
\label{t00Int}
\end{figure}
In Fig. \ref{t00Int} one can observe $\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}$ integrated between $q = 1$ and $q = 50$. Let us label it $\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$. The distinctive feature of the plot is that the integrated expectation value of the stress-energy tensor decreases monotonically as a function of $x$, eventually reaching $0$. Given that $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}\right\rangle$ falls off as a function of $x$, and so does $1-x$ (the inverse of the scale factor in our rescaled equations) for $x \in [0,1)$, such a behavior makes sense from the point of view of those quantities. The correlator $\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle$, however, rises as a function of $x$. Since it does so as $(1-x)^{-2}$, the factor of $(1-x)^{6}$ multiplying $\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle$ in Eq. \eqref{eq:T00q} is responsible for an overall decrease. In other words, the expansion of the universe takes care of any diverging behavior in $\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle$. Looking at $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ mode by mode, the same declining trend was found, regardless of the chosen vacuum state. Equilibrium, however, has so far been considered from the point of view of ratios of quantities evaluated in our state to those evaluated in the Bunch-Davies state.
\begin{figure}[htbp]
\centering
\includegraphics[width=4 in, keepaspectratio]{t00RatioInt.pdf}
\caption{The ratio $\frac{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ obtained using $q_{min} = 1$ and $q_{max} = 50$ as our limits of integration. The ratio seems to oscillate about $1.0004$ with an amplitude that decreases up to $x = 0.5$ but keeps increasing afterward.}
\label{t00RatioInt}
\end{figure}
The quantity $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ is plotted in Fig. \ref{t00RatioInt}. Again, the limits of integration were chosen to be $q_{min} = 1$ and $q_{max} = 50$. The ratio appears to oscillate with damping until about $x = 0.5$, but the amplitude of oscillations keeps increasing afterwards until very late times. This is quite unexpected since, from the point of view of the two-point functions, the coarse-grained ratios appeared to damp monotonically with increasing $x$.
One could now ask why we used such low limits of integration. Using higher limits resulted in jaggedness in the plots, coming from the higher frequency modes in the integral, which are difficult to integrate numerically. Since we are dealing with Hankel functions, themselves highly oscillatory, it is not surprising that numerical integrators have difficulty handling them. The fact that a lower step size in $x$ modified the observed jaggedness supports this hypothesis.
Smaller steps in $x$, however, mean greater computing time. Analyzing plots with $q_{max} > 50$ revealed that the equilibrium position of oscillations in $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ would decrease to become closer to $1$. Given the problems encountered with numerical integration, we chose to use Riemann sums of $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ instead of integrals. A step size in $q$ of order unity proved appropriate and sufficient to draw our conclusions.
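Concretely, the Riemann-sum version of the ratio can be formed as follows (a sketch; the BD-mode derivative is written out by hand for the massless case, and the unit step in $q$ matches the choice described above):
\begin{verbatim}
def dU_BD(x, q):  # d/dx of U_BD above
    s = q*(1 - x)
    return q*np.exp(-1j*s)/np.sqrt(2*q)*(-1j - 1/s + 1j/s**2)

def T00_q_BD(x, q):
    PhiPhi = (1 - x)**2*np.abs(U_BD(x, q))**2
    PiPi = np.abs((1 - x)*dU_BD(x, q) - U_BD(x, q))**2/(1 - x)**4
    return 0.5*(1 - x)**6*PiPi + 0.5*(1 - x)**2*q**2*PhiPhi

def T00_ratio_sum(xs, q_min, q_max):
    """Unit-step Riemann sums of <T^0_0>_q in both states, then their ratio."""
    num = sum(T00_q(xs, q, solve_mode(float(q)))
              for q in range(q_min, q_max + 1))
    den = sum(T00_q_BD(xs, float(q)) for q in range(q_min, q_max + 1))
    return num/den
\end{verbatim}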
Fig. \ref{t00SumMaxVar} shows $\frac{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(BD)}}$ for $q_{max} = 50$, $250$, and $500$, focusing on late times ($x \in [0.900, 0.999]$). From the plots one can infer that the higher the $q_{max}$, the lower the amplitude of oscillations of our ratio. Additionally, the equilibrium position of the latter gets arbitrarily near $1$ as $q_{max}$ increases. For $q_{max} = 500$, the ratio remains at $1$ to within four decimal places. This corroborates the fact that some equilibration process occurs as long as $q$-modes of order $100$ and more are included. One last important feature of the figure is that the quotient does not diverge at very late times but flattens out.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{t00RiemSumRatioMin1MaxVarVLate.pdf}
\caption{The quantity $\frac{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(BD)}}$ for $q_{max} = 50$, 250, and 500, for $x \in [0.900, 0.999]$. Higher $q_{max}$ values correspond to lower amplitude of oscillations of the quotient, and an equilibrium position closer to 1. For $q_{max} = 500$, the ratio is indistinguishable from 1 up to four decimal places.}
\label{t00SumMaxVar}
\end{figure}
Another path to consider is changing our lower limit $q_{min}$ in the integration, $q_{min} = 1$ corresponding to modes exiting the horizon. As seen in section \ref{subsec:correlators}, correlators characterizing such modes behave very differently from those for higher $q$ values. Fig. \ref{t00SumMinVar} shows $\frac{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(BD)}}$ for $q_{min} = 1$, $50$, and $250$. Similar conclusions to the ones obtained in Fig. \ref{t00SumMaxVar} can be drawn, now from the perspective of $q_{min}$: the higher the value of $q_{min}$, the closer the central value about which oscillations occur is to unity. Also, the amplitude decreases with higher $q_{min}$'s.
\begin{figure}[htbp]
\centering
\includegraphics[width=5 in, keepaspectratio]{t00RiemSumRatioMinVarMax500VLate.pdf}
\caption{The ratio $\frac{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(BD)}}$ for $q_{min} = 1$, 50, and 250, for $x \in [0.900, 0.999]$. The equilibrium position of the quotients and their amplitude of oscillations go down as $q_{min}$ rises.}
\label{t00SumMinVar}
\end{figure}
The stress-energy tensor seemed at first to be telling us a slightly different story about the equilibration of our state. However, focusing on the late time behavior of $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$, and/or changing the domain of $q$-values to sum over, allows us to reconcile the conclusions coming from our different quantities. Indeed, lower $q$-modes (of order unity) cross the horizon at earlier values of $x$ and have a much larger amplitude of oscillations as compared to the modes with $q$-values that are one or more orders of magnitude higher.
Our relatively low values of $q_{max}$ mean that any disparity from equilibrium would be mostly washed out by increasing $q_{max}$ by a factor of $10$ and using proper integration techniques. Nevertheless, one should still expect the late-time increase to be observed with increasing precision, even though it would decrease in amplitude. A reasonable explanation lies in the horizon-exit of the modes at different times. Such modes are still accounted for in our sums at late times and are responsible for the seemingly anomalous rise of $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$. Hence, the effects observed in Fig. \ref{t00RatioInt} should be attributed to the limited precision, the bounds in $q$ of our numerical calculation, and horizon-crossing effects.
\section{Conclusions}
To check for the approach to equilibrium for any given system, said system has to be disturbed from the putative equilibrium state. Then its relaxation, or lack thereof, back to the original state can be studied. This is what we did here: we disturbed de Sitter space by attaching to it a flat space segment for conformal times $\eta \leq \eta_0$, and considered what happened to the quantum state of a test scalar field in this geometry. The claim being tested is that this state should relax to the ``thermal'' Bunch-Davies state.
The simplicity of the system allowed us to fully characterize the state by its various two-point functions and we used the ratios of these two-point functions in the disturbed state to their values in the BD state as our diagnostics of equilibration. We also used the stress-energy tensor as a check on whether our state evolved to the BD one.
What we found was that if we considered these quantities mode by mode there was no evidence of equilibration. Coarse-grained quantities (integrated over a range of $q$-modes) did show evidence of equilibration in cases where the modes were well within the horizon. It seems that the importance of coarse-graining in our analysis here is no different than it is in more familiar equilibrating systems. Integrating our quantities over momentum modes that remained inside a horizon defined by $\eta_0$, we saw that they did equilibrate, and the smaller the wavelengths of our modes, the earlier the equilibration. Particular attention was given to the analysis of the momentum-energy tensor which, initially, presented disparities when compared to the other observables. The range of horizon-exit times of our modes, the necessity to restrict our domain in $q$-space, and a finite precision in our numerical integrations accounted for such differences.
While our results corroborate the attractor behavior of the Bunch-Davies state, no notion of thermality was discussed. The Bunch-Davies state, however, is considered thermal \cite{BirrellDavies1982}, based on the response of an Unruh-DeWitt detector when the field is placed in such a state. This ambiguity will be investigated and hopefully resolved in future work \cite{AlbrechtRichardHolman2014}, in which we address the thermality of our state, after developing a different way to calculate the response rate of an Unruh-DeWitt detector.
\begin{acknowledgments}
We thank McCullen Sandora for useful discussions. R.~H. was supported in part by the Department of Energy under grant DE-FG03-91-ER40682 as well as the John Templeton Foundation. He also thanks the Department of Physics at UC Davis for hospitality while this work was in progress. A.~A. and B.~R. were supported in part by DOE Grants DE-FG02-91ER40674 and DE-FG03- 91ER40674 and the National Science Foundation under Grant No. PHY11-25915.
\end{acknowledgments}
\section{Equivalence Rules}\label{apx:equivalence}
\begin{mathpar}
\inferrule*
{\mtyequiv s t T}
{\mtyequiv t s T}
\inferrule*
{\mtyequiv s t T \\ \mtyequiv t u T}
{\mtyequiv s u T}
\inferrule*
{x : T \in \Gamma}
{\mtyequiv[\vect \Gamma; \Gamma] x x T}
\inferrule*
{\mtyequiv[\vect \Gamma; \cdot]{t}{t'}T}
{\mtyequiv{\boxit t}{\boxit t'}{\square T}}
\inferrule*
{\mtyequiv{t}{t'}{\square T} \\
|\vect \Delta| = n}
{\mtyequiv[\vect \Gamma; \vect \Delta]{\unbox n t}{\unbox n t'}{T}}
\inferrule*
{\mtyequiv[\vect \Gamma;(\Gamma, x : S)]{t}{t'}T}
{\mtyequiv[\vect \Gamma; \Gamma]{\lambda x. t}{\lambda x. t'}{S \longrightarrow T}}
\inferrule*
{\mtyequiv{t}{t'}{S \longrightarrow T} \\
\mtyequiv{s}{s'}S}
{\mtyequiv{t\ s}{t'\ s'}T}
\inferrule*
{\mtyping[\vect \Gamma; \cdot]{t}{T} \\
|\vect \Delta| = n}
{\mtyequiv[\vect \Gamma; \vect \Delta]{\unbox{n}{(\boxit t)}}{t\{n / 0 \}}{T}}
\inferrule*
{\mtyping[\vect \Gamma;(\Gamma, x : S)]t T \\ \mtyping[\vect \Gamma; \Gamma] s S}
{\mtyequiv[\vect \Gamma; \Gamma]{(\lambda x. t) s}{t[s/x]}{T}}
\inferrule*
{\mtyping{t}{\square T}}
{\mtyequiv{t}{\boxit{\unbox 1 t}}{\square T}}
\inferrule*
{\mtyping{t}{S \longrightarrow T}}
{\mtyequiv t {\lambda x. (t\ x)}{S \longrightarrow T}}
\end{mathpar}
\section{Introduction}\label{sec:intro}
The Curry-Howard correspondence fundamentally connects formulas and
proofs to types and programs. This view not only
provides logical explanations for computational
phenomena, but also serves as a guiding principle in designing type
theories and programming languages.
Extending the Curry-Howard correspondence to modal logic has been fraught with
challenges. Some of the first such calculi for the modal logic S4 were proposed by
Bierman and de Paiva~\cite{Bierman:96,DBLP:journals/sLogica/BiermanP00}
and subsequently by Pfenning and Davies~\cite{pfenning_judgmental_2001}. A key
characteristic of this work is to separate the assumptions that are valid in every
world from the assumptions that presently hold in the current world. This leads to a
dual-context style formulation that satisfies substitution properties (see for example
\cite{DBLP:conf/icalp/GhaniPR98}).
In recent years, modal type systems based on this
dual-context style have received renewed attention and provided
insights into a wide range of seemingly unconnected areas: from
reasoning about universes in homotopy type
theory~\cite{licata_internal_2018,shulman_brouwers_2018} to
mechanizing meta-theory~\cite{Pientka:POPL08,Pientka:LICS19}, to reasoning about
effects~\cite{Zyuzin:ICFP21}, and meta-programming
\cite{Jang:POPL22}. This line of work builds on the dual-context
formulation of Pfenning and Davies. However, due to the permutation
conversions it is also challenging to extend to dependent type
theories and directly prove normalization via logical relations.
An alternative to dual-context-style modal calculi is pursued by Clouston,
Birkedal and collaborators (see
\cite{clouston_fitch-style_2018,gratzer_implementing_2019}).
This line of work is inspired
by the Fitch-style proof representation given by Borghuis~\cite{borghuis_coming_1994}. Following Borghuis they have
been calling their representation the Fitch style.
Fitch-style systems model Kripke semantics~\cite{kripke_semantical_1963} and use
locks to manage assumptions in a context. To date, existing
formulations of $S4$ in Fitch style~\cite{clouston_fitch-style_2018,gratzer_implementing_2019}
mainly consider the idempotent case, where $\square T$ is isomorphic to
$\square\square T$. However, this distinction is important from
a computational view. For example, in multi-staged programming (see
\cite{pfenning_judgmental_2001,davies_modal_2001}) $\square T$
describes code generated in one stage, while $\square\square T$
denotes code generated in two stages. It is also fruitful to keep the
distinction from a theoretical point of view, as it allows for a fine
grained study of different, related modal logics.
In this paper, we take \ensuremath{\lambda^{\to\square}}\xspace, an intuitionistic version of modal logic $S4$, from
Pfenning, Wong and Davies~\cite{Pfenning95mfps,davies_modal_2001} as a starting
point.
Historically, \ensuremath{\lambda^{\to\square}}\xspace is also motivated by Kripke semantics~\cite{kripke_semantical_1963} (see \cite[Section 3]{Davies:POPL96} and
\cite[Section 4]{davies_modal_2001}) and is hence referred to as the Kripke style.
Unlike Fitch-style systems where
worlds are represented by segments between two adjacent ``lock''
symbols, in $\ensuremath{\lambda^{\to\square}}\xspace$, each world is represented by a context in a
context stack. Nevertheless, the conversion between Kripke
and Fitch styles is largely straightforward\footnote{However, we note
that \ensuremath{\lambda^{\to\square}}\xspace has never been identified as or called a Fitch-style system.}. Here, we will often use ``context'' and ``world''
interchangeably.
In \ensuremath{\lambda^{\to\square}}\xspace, a term $t$ is typed in a context stack $\vect \Gamma$ where
initially, the context stack consists of a single local context
which is itself empty (i.e. $\epsilon;\cdot$).
\begin{mathpar}
\mtyping[\epsilon; \Gamma_1; \ldots; \Gamma_n] t T
\text{or}
\mtyping t T
\end{mathpar}
The rightmost (or topmost) context represents the
current world. In the $\square$ introduction rule, we extend the
context stack with a new world (i.e. new context). In the elimination
rule, if $\square T$ is true in a context stack
$\vect \Gamma$, then $T$ is true in any world
$\vect \Gamma; \Delta_1;\ldots; \Delta_n$ reachable from
$\vect \Gamma$.
The choice of the level
$n$ corresponds to reflexivity and transitivity of the accessibility
relation between worlds in the Kripke semantics.
\begin{mathpar}
\inferrule*
{\mtyping[\vect \Gamma; \cdot] t T}
{\mtyping{\boxit t}{\square T}}
\inferrule*
{\mtyping t {\square T}}
{\mtyping[\vect \Gamma; \Delta_1; \ldots; \Delta_n]{\unbox n t}{T}}
\end{mathpar}
There are two key advantages of this $\ensuremath{\texttt{unbox}}\xspace$ formulation in \ensuremath{\lambda^{\to\square}}\xspace.
First, it introduces a syntactic convenience to use natural numbers to describe levels
and therefore allows us to elegantly capture various modal logics
differing only in one parameter of the $\ensuremath{\texttt{unbox}}\xspace$ rule. By introducing $\ensuremath{\texttt{unbox}}\xspace$
levels, $\square$ is naturally non-idempotent, and we can study the relations among various
sublogics of $S4$ and treat them uniformly and compactly, as illustrated below.
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Axiom \textbackslash\ System & $K$ & $T$ & $K4$ & $S4$ \\ \hline
$K$: $\square (S \longrightarrow T) \to \square S \to \square T$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$T$: $\square T \to T$ & & $\checkmark$ & & $\checkmark$ \\
$4$: $\square T \to \square \square T$ & & & $\checkmark$ & $\checkmark$ \\
\hline
$\ensuremath{\texttt{unbox}}\xspace$ level (UL) $n$ & \{ 1 \} & \{ 0, 1 \} & $\mathbb{N}^+$ & $\mathbb{N}$ \\
\hline
\end{tabular}
\end{center}
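For instance, the following well-typed terms witness axioms $T$, $4$ and $K$, respectively; the $\ensuremath{\texttt{unbox}}\xspace$ levels they use ($0$, $2$ and $1$) match the table above. These derivations are standard, and we sketch them here only for illustration:
\begin{mathpar}
\lambda x.\ \unbox 0 x : \square T \longrightarrow T
\and
\lambda x.\ \boxit{(\boxit{(\unbox 2 x)})} : \square T \longrightarrow \square \square T
\and
\lambda x.\ \lambda y.\ \boxit{((\unbox 1 x)\ (\unbox 1 y))} : \square (S \longrightarrow T) \longrightarrow \square S \longrightarrow \square T
\end{mathpar}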
Second, compared to dual-context formulation, it directly
corresponds to computational idioms quote ($\ensuremath{\texttt{box}}\xspace$) and unquote
($\ensuremath{\texttt{unbox}}\xspace$) in practice, thereby giving a logical foundation to
multi-staged meta-programming \cite{davies_modal_2001}. In particular,
allowing $n=0$ gives us the power to
not only generate code, but also to run and evaluate code.
A major stumbling block in reasoning about \ensuremath{\lambda^{\to\square}}\xspace (see also
\cite{goubaultlarrecq:96}) is the fact that it is not obvious how to
define substitution properties for context stacks. This
prevents us from formulating an explicit substitution calculus for
\ensuremath{\lambda^{\to\square}}\xspace which may serve as an efficient implementation. More importantly,
it also seems to be the bottleneck in developing normalization proofs
for \ensuremath{\lambda^{\to\square}}\xspace that can be easily adapted to the various subsystems of S4.
In this paper, we make the following contributions:
\begin{enumerate}
\item We introduce the concept of \emph{Kripke-style substitutions} on
context stacks (\Cref{sec:usubst}) which combines two previously separate
concepts: modal transformations on context stacks (such as modal
weakening and fusion) and substitution properties for individual assumptions within
a given context.
\item We extend the standard presheaf model~\cite{altenkirch_categorical_1995} for
simply typed $\lambda$-calculus and obtain a normalization by evaluation (NbE)
algorithm for \ensuremath{\lambda^{\to\square}}\xspace in \Cref{sec:presheaf}. One critical feature of our development is that the algorithm
and the proof accommodate all four subsystems of S4 \emph{without change}.
\item As opposed to Nanevski et al.~\cite{nanevski_contextual_2008}, we provide a
contextual type formulation in $\ensuremath{\lambda^{\to\square}}\xspace$ inspired by our notion of Kripke-style
substitutions in \Cref{sec:contextual} which can serve as a construct for describing open code in a
meta-programming setting.
\end{enumerate}
This work opens the door to a substitution calculus and normalization of a dependently
typed modal type theory. There is a partial formalization~\cite{artifact} of this work in
Agda~\cite{agda,norell_towards_2007} and an accompanying technical
report~\cite{hu_investigation_2022}.
\section{Related Work and Conclusion}\label{sec:related}
\subsection{Modal Type Theories}
There are many early attempts to give a constructive formulation of
modal logic, especially the modal logic S4 starting back in the 1990's~\cite{bierman_intuitionistic_1996,bierman_intuitionistic_2000,bellin_extended_2001,alechina_categorical_2001,borghuis_coming_1994,martini_computational_1996}.
Pfenning and Davies~\cite{Davies:POPL96,pfenning_judgmental_2001} give the first formulation of
$S4$ in the dual-context style where we separate the assumptions that are
valid in every world from assumptions that are true in the current
world. The resulting formulation
satisfies substitution properties and has found many applications
from staged computation to homotopy type theory (HoTT). For example,
Shulman~\cite{shulman_brouwers_2018} extends idempotent $S4$ with dependent types, called spatial type
theory and Licata et al.~\cite{licata_internal_2018} define crisp type theory,
which removes the idempotency from spatial type theory. However, neither paper gives
a rigorous justification of its type theory.
Most recently Kavvos~\cite{kavvos_dual-context_2017} investigates modal
systems based on this dual-context formulation for Systems $K$, $T$, $K4$ and $S4$ as well
as the L\"ob induction principle. Kavvos also gives categorical semantics for
these systems.
However, it has been difficult to develop direct normalization proofs
for these dual-context formulations, since we must handle extensional properties like commuting
conversions (cf.~\cite{kavvos_dual-context_2017,girard_proofs_1989}). Further, our four target systems have very different formulations in
the dual-context style as shown by Kavvos~\cite{kavvos_dual-context_2017}. As a
consequence, it is challenging to have one single normalization
algorithm for all our four target systems.
An alternative to the dual-context style is the Fitch-style
approach pursued by Clouston,
Birkedal and collaborators (see
\cite{clouston_fitch-style_2018,gratzer_implementing_2019,birkedal_modal_2020}). At the
high-level, Fitch-style systems also model the Kripke semantics, but instead of using one context for each world,
the Fitch style uses a special symbol (usually $\text{\faLock}$) to segment one context
into multiple sections, each of them representing one world. Variables to the left of
the rightmost $\text{\faLock}$ are not accessible.
Our normalization proof and the generalization of $\ensuremath{\lambda^{\to\square}}\xspace$ to contextual types
also can likely be adapted to those systems.
Clouston~\cite{clouston_fitch-style_2018} gives Systems $K$ and idempotent $S4$ in
the Fitch style and discusses their categorical
semantics. Gratzer et al.~\cite{gratzer_implementing_2019} describe idempotent $S4$ with
dependent types.
Birkedal et al.~\cite{birkedal_modal_2020} give $K$ with dependent types and formulate dependent
right adjoints, an important categorical concept of
modalities. Gratzer et al.~\cite{gratzer_multimodal_2020,gratzer_multimodal_2021,gratzer_multimodal_2022}
propose MTT, a multimode type theory, which describes interactions between multiple
modalities. Though MTT uses $\text{\faLock}$ to segment contexts, we believe
that MTT is better understood as a generalization of the dual-context
style, as is apparent in the let-based formulation of the box
elimination rule. This different treatment of the box elimination also
makes it less obvious how to understand $\ensuremath{\lambda^{\to\square}}\xspace$ as a subsystem
of MTT.
Currently, existing Fitch-style systems mostly consider idempotent $S4$ where $\square T$ is isomorphic to
$\square\square T$. However, we consider this distinction to be
important from a computational view. For example, in multi-staged programming (see
\cite{pfenning_judgmental_2001,davies_modal_2001}) $\square T$
and $\square\square T$ describe code generated in one stage and two stages,
respectively. Moreover, $\unbox 0 t$ is
interpreted as evaluating and running the code generated by $t$.
It is nevertheless possible to develop a non-idempotent $S4$ system using $\ensuremath{\texttt{unbox}}\xspace$
levels $n$ in the Fitch style by defining a function which truncates a context until its
$n$'th $\text{\faLock}$. This is however more elegantly handled in \ensuremath{\lambda^{\to\square}}\xspace, because
worlds are separated syntactically. For this reason, we consider \ensuremath{\lambda^{\to\square}}\xspace
as a more versatile and more suitable foundation for developing a
dependently typed meta-programming system. In particular, our
extension to contextual types shows how we can elegantly accommodate
reasoning about open code which is important in practice.
Though context stacks in \ensuremath{\lambda^{\to\square}}\xspace are taken from Pfenning, Wong and Davies'
development~\cite{pfenning_judgmental_2001,davies_modal_2001},
Borghuis~\cite{borghuis_coming_1994} also uses context stacks in his
development of modal pure type systems. His elimination rules use explicit weakening
and several ``transfer'' rules, while \ensuremath{\lambda^{\to\square}}\xspace incorporates both using $\ensuremath{\texttt{unbox}}\xspace$ levels,
which we consider more convenient and more practical from a programmer's
point of view. Martini and Masini~\cite{martini_computational_1996} also use context stacks. Their system
annotates all terms with a level, which we consider too verbose to be practical.
\subsection{Normalization}
For the dual-context style, Nanevski et al.~\cite{nanevski_contextual_2008} give
contextual types and prove normalization by
reduction to another logical system with permutation
conversions~\cite{de_groote_strong_1999}. This means that the proof is indirect and
does not directly yield an algorithm for normalizing
terms. Kavvos~\cite{kavvos_dual-context_2017} gives a rewriting-based normalization
proof for dual-context style systems with L\"ob induction. Most
recently, Gratzer~\cite{gratzer_multimodal_2022} proves the normalization for MTT. It is not
clear to us whether techniques in~\cite{gratzer_multimodal_2022} scale
to dependently typed Kripke-style systems, as these systems have a
different treatment of the box elimination.
There are two recent papers closely related to our work: Valliappan et al.~\cite{VRC}
and Gratzer et al.~\cite{gratzer_implementing_2019}. %
\cite{VRC} gives different simply typed formulations in the Fitch style for all four
subsystems of $S4$; as a result, a different normalization proof must be given for
each subsystem individually. %
Gratzer et al.~\cite{gratzer_implementing_2019} follow
Abel~\cite{abel_normalization_2013} and give an NbE proof for dependently typed
idempotent $S4$. Since the proof in~\cite{gratzer_implementing_2019} is parameterized
by an extra layer of poset to model the Kripke world structure introduced by
$\square$, as pointed out in~\cite{gratzer_multimodal_2020}, this proof cannot even be
easily adapted to dependently typed $K$ (see Birkedal et
al.~\cite{birkedal_modal_2020}). %
Compared to these two papers, our model is a moderate extension to the standard
presheaf model, requiring no such extra layer and adapting to multiple logics
automatically, and we are confident that it will generalize more easily to the
dependently typed setting. The ultimate reason why we only need one proof to handle
all four subsystems of $S4$ is that we \emph{internalize} the Kripke structure of
context stacks in the presheaf model. The internalization happens in the base
category, where MoTs are encoded as part of K-weakenings. The internalization
captures the peculiar behaviours of different systems and conflates the extra Kripke
structure from context stacks with the standard model construction, so that the proofs
become much simpler and closer to the typical construction.
\subsection{Conclusion and Future Work}
In this paper, we present a normalization-by-evaluation (NbE) algorithm
for the simply-typed modal $\lambda$-calculus (\ensuremath{\lambda^{\to\square}}\xspace) which covers
all four subsystems of S4. The key to achieving this result is our
notion of K-substitutions which
provides a unifying account for modal
transformations and term substitutions and allows us to formulate
a substitution calculus for modal logic S4 and its various
subsystems. Such a calculus is not only important from a
practical point of view, but also plays a central role in our theoretical
analysis. Using insights gained from K-substitutions, we organize a presheaf model, from which we extract a normalization
algorithm. The algorithm can be implemented in conventional programming languages
and directly accounts for the normalization of \ensuremath{\lambda^{\to\square}}\xspace. Deriving from
K-substitutions, we are also able to give a formulation for contextual types with
$\ensuremath{\texttt{unbox}}\xspace$ and context stacks, which had
been challenging prior to our observation of K-substitutions and is
important for representing open code in a meta-programming setting.
This work serves as a basis for further investigations into
coproducts~\cite{altenkirch_normalization_2001} and categorical structure of context
stacks.
We also see this
work as a step towards a Martin-L\"of-style modal type theory in which open code has
an internal shallow representation. With a dependently typed extension and contextual types,
it would allow us to
develop a \emph{homogeneous} meta-programming system with dependent
types which has been challenging to achieve.
\section{Contextual Types}\label{sec:contextual}
In $S4$, in a meta-programming setting, $\square$ is interpreted in terms of stages, where a term of
type $\square T$ is considered as a term of type $T$ but available only in the next
stage. However, as pointed out in~\cite{davies_modal_2001,nanevski_contextual_2008},
$\square$ only characterizes closed code. Nanevski et al.~\cite{nanevski_contextual_2008}
propose contextual types which relativize the surrounding context of a term, so that
representing open code becomes possible. However, this notion of contextual types
is in the dual-context style and how contextual types can be formulated with $\ensuremath{\texttt{unbox}}\xspace$ and
context stacks remains open. In this section, we answer this question
by utilizing our notion of K-substitutions.
\subsection{Typing Judgments and Semi-K-substitutions}
With contextual types, we augment the syntax as follows:
\begin{align*}
S, T &:= \cdots \;|\; \cbox{\vect \Delta}{T} &
s,t,u &:= \cdots \;|\; \cbox{\vect \Delta}{t} \;|\; \cunbox{t}{\svect \sigma}
\end{align*}
$\cbox{\vect \Delta}{T}$ is a contextual type. It captures a list of contexts $\vect \Delta$ which a
term of type $T$ can be open in. Note that $\vect \Delta$ here can be
empty. This notion of contextual types is very general and captures a term open in
multiple stages. $\cbox{\vect \Delta}{t}$ is the constructor of a contextual type, where the contexts
that it captures are specified. $\cunbox{t}{\svect \sigma}$ is the eliminator. Instead of
an $\ensuremath{\texttt{unbox}}\xspace$ level, we now require a different argument $\svect \sigma$, which is a
\emph{semi-K-substitution} storing $\ensuremath{\texttt{unbox}}\xspace$ offsets and terms. We will discuss it
in more detail very shortly.
The introduction rule for contextual types is straightforward:
\begin{mathpar}
\inferrule
{\mtyping[\vect \Gamma;\vect \Delta]{t}{T}}
{\mtyping{\cbox{\vect \Delta}{t}}{\cbox{\vect \Delta}{T}}}
\end{mathpar}
If we let $\vect \Delta = \epsilon; \cdot$, then we recover $\square$. If we let $\vect \Delta =
\epsilon; \Delta$ for some $\Delta$, then we have an open term $t$ which uses only
assumptions in the same stage. If $\vect \Delta$ has more contexts, then $t$ is an open
term which uses assumptions from previous stages. We can also let $\vect \Delta =
\epsilon$. In this case, $\cbox{\epsilon}{T}$ is isomorphic to $T$ and thus not particularly
meaningful, but allowing it makes our formulation mathematically cleaner.
The elimination rule, on the other hand, becomes significantly more complex:
\begin{mathpar}
\inferrule
{\mtyping[\trunc\vect \Gamma{\sLtotal{\svect \sigma}}]{t}{\cbox{\vect \Delta}T} \\
\svect \sigma : \vect \Gamma \Rightarrow_s \vect \Delta}
{\mtyping{\cunbox{t}{\svect \sigma}}{T}}
\end{mathpar}
It is no longer enough to eliminate with just an $\ensuremath{\texttt{unbox}}\xspace$ level because the
eliminator must specify how to replace all variables in $\vect \Delta$ and how
contexts in $\vect \Gamma$ and $\vect \Delta$ relate. This information is collectively stored in a
\emph{semi-K-substitution} $\svect \sigma$ (notice the semi-arrow), which intuitively is not yet a valid
K-substitution, but close:
\begin{definition}
A semi-K-substitution $\svect \sigma$ is defined as follows:
\begin{align*}
\svect \sigma, \svect \delta := \varepsilon \;|\; \sext \svect \sigma n \sigma
\tag*{Semi-K-substitutions, $\textsf{SSubsts}$}
\end{align*}
\begin{mathpar}
\inferrule
{ }
{\varepsilon : \vect \Gamma \Rightarrow_s \epsilon}
\inferrule
{\svect \sigma : \vect \Gamma \Rightarrow_s \vect \Delta \\ |\vect \Gamma'| = n \\ \sigma : \vect \Gamma;\vect \Gamma' \Rightarrow \Delta}
{\sext \svect \sigma n \sigma : \vect \Gamma;\vect \Gamma' \Rightarrow_s \vect \Delta;\Delta}
\end{mathpar}
\end{definition}
Compared to K-substitutions, semi-K-substitutions differ in the base case, where
empty $\varepsilon$ is permitted, so they are not valid K-substitutions. However, if a
semi-K-substitution is prepended by an identity K-substitution, then the result is a
valid K-substitution. Also, $\sLtotal{\svect \sigma}$ computes the sum of all offsets in $\svect \sigma$:
\[
\begin{array}{llp{2cm}ll}
\vect \id; &: \textsf{SSubsts} \to \textsf{Substs} & & \sLtotal{\_} &: \textsf{SSubsts} \to \mathbb{N} \\
\vect \id; \varepsilon &:= \vect\id & & \sLtotal \varepsilon &:= 0\\
\vect \id; (\sext \svect \sigma n \sigma) &:= \sext{(\vect \id; \svect \sigma)}n\sigma
& & \sLtotal{\sext \svect \sigma n \sigma} &:=
\sLtotal
\svect \sigma
+ n
\end{array}
\]
We can prove the following lemma:
\begin{lemma}\label{lem:ssubsts-id}
If $\svect \sigma : \vect \Gamma \Rightarrow_s \vect \Delta$, then $\vect\id; \svect \sigma : \vect \Gamma \Rightarrow
\trunc\vect \Gamma{\sLtotal{\svect \sigma}}; \vect \Delta$.
\end{lemma}
This lemma is needed to justify the $\beta$ equivalence rule which we are about to
discuss.
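Before doing so, we note that these definitions render directly as code. The following is a minimal Haskell sketch of semi-K-substitutions, $\sLtotal{\_}$ and $\vect\id;\_$, with local substitutions left opaque; all names are hypothetical, and a concrete representation of K-substitutions accompanies \Cref{sec:usubst}.
\begin{verbatim}
-- Semi-K-substitutions differ from K-substitutions only in the base
-- case, which may be empty; 'Local' stands in for local substitutions.
data Local = Local
data KSub  = Base Local | Ext KSub Int Local
data SemiK = SEmpty | SExt SemiK Int Local

sTotal :: SemiK -> Int              -- the sum of all offsets
sTotal SEmpty       = 0
sTotal (SExt s n _) = sTotal s + n

prependId :: KSub -> SemiK -> KSub  -- id ; s, yielding a K-substitution
prependId idK SEmpty       = idK
prependId idK (SExt s n l) = Ext (prependId idK s) n l
\end{verbatim}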
\subsection{Equivalence of Contextual Types}
Having defined the introduction and elimination rules, we are ready to describe how
they interact. Note that the congruence rules are standard so we omit them here and
only describe the $\beta$ and $\eta$ rules:
\begin{mathpar}
\inferrule
{\mtyping[\trunc \vect \Gamma {\sLtotal{\svect \sigma}};\vect \Delta]{t}{T} \\
\svect \sigma : \vect \Gamma \Rightarrow_s \vect \Delta}
{\mtyequiv{\cunbox{\cbox{\vect \Delta}{t}}{\svect \sigma}}{t[\vect\id; \svect \sigma]}{T}}
\inferrule
{\mtyping{t}{\cbox{\vect \Delta}{T}}}
{\mtyequiv{t}{\cbox{\vect \Delta}{\cunbox{t}{\svect \id}}}{\cbox{\vect \Delta}{T}}}
\end{mathpar}
In the $\eta$ rule, $\svect \id$ denotes the identity semi-K-substitution, which is
defined as
\begin{align*}
\svect \id_{\vect \Delta} &: \vect \Gamma; \vect \Delta \Rightarrow_s \vect \Delta \\
\svect \id_{\vect \Delta} &:= \varepsilon; \underbrace{\id; \cdots ; \id}_{|\vect \Delta|}
\end{align*}
We omit the subscript whenever possible. Both rules are easily justified. In the $\beta$ rule, since $t$ is typed in the
context stack $\trunc \vect \Gamma {\sLtotal{\svect \sigma}};\vect \Delta$, we obtain a term in
$\vect \Gamma$ by applying $\vect\id; \svect \sigma$ due to Lemma \ref{lem:ssubsts-id}. In the
$\eta$ rule, by definition, we know
$\trunc{(\vect \Gamma; \vect \Delta)}{\sLtotal{\svect \id}} = \vect \Gamma$ and therefore
$\mtyping[\vect \Gamma; \vect \Delta]{\cunbox{t}{\svect \id}}{T}$.
In an extensional setting, where equivalence is congruent with respect to the constructor and the eliminator of modalities,
as in this paper, we can show that the contextual type $\cbox{\epsilon;
\Delta_1; \cdots; \Delta_n}{T}$ is isomorphic to $\square(\Delta_1 \to \cdots \square
(\Delta_n \to T))$ if we view the contexts $\Delta_i$ as iterated products. This implies that
introducing contextual types does not increase the logical strength of the system and that
the system remains normalizing. Nevertheless, the contextual types given here seem to have
a natural adaptation to dependent types and set a stepping stone towards representing
open code with dependent types and therefore a homogeneous,
dependently typed meta-programming system.
\section{Normalization: A Presheaf Model}\label{sec:presheaf}
In this section, we present our NbE algorithm based on a presheaf model. Once we determine
the base category, the rest of the construction is largely
standard following Altenkirch et al.~\cite{altenkirch_categorical_1995} with minor
differences, which we will highlight.
To construct the presheaf model, we first determine the base category. Then we
interpret types, contexts and context stacks to presheaves and terms to natural
transformations. After that, we define two operations, reification and reflection,
and use them to define the NbE algorithm. Last, we briefly discuss the completeness and
soundness proof.
The algorithm is implemented in Agda~\cite{artifact}.
\subsection{Kripke-style Weakenings}
In the simply typed $\lambda$-calculus (STLC), the base category is the category
of weakenings. In \ensuremath{\lambda^{\to\square}}\xspace, we must consider the effects of MoTs and we
will use the more general notion of \emph{Kripke-style
weakenings} or K-weakenings which characterizes how a well-typed term in \ensuremath{\lambda^{\to\square}}\xspace moves from one context
stack to another.
\begin{definition}
A K-weakening $\vect \gamma : \vect \Gamma \Rightarrow_w \vect \Delta$ is:
\[
\vect \gamma := \varepsilon \;|\; q(\vect \gamma) \;|\; p(\vect \gamma) \;|\; \sext
\vect \gamma n {} \qquad\hfill \mbox{(K-weakenings)}
\]
\begin{mathpar}
\inferrule
{ }
{\varepsilon: \epsilon ; \cdot \Rightarrow_w \epsilon ; \cdot}
\inferrule
{\vect \gamma : \vect \Gamma; \Gamma \Rightarrow_w \vect \Delta; \Delta}
{q(\vect \gamma) : \vect \Gamma; (\Gamma, x : T) \Rightarrow_w \vect \Delta; (\Delta, x : T)}
\inferrule
{\vect \gamma : \vect \Gamma; \Gamma \Rightarrow_w \vect \Delta; \Delta}
{p(\vect \gamma) : \vect \Gamma; (\Gamma, x : T) \Rightarrow_w \vect \Delta; \Delta}
\inferrule
{\vect \gamma : \vect \Gamma \Rightarrow_w \vect \Delta \\ |\vect \Gamma'| = n}
{\sext \vect \gamma n {~} : \vect \Gamma; \vect \Gamma' \Rightarrow_w \vect \Delta; \cdot}
(\text{the offset $n$ depends on UL\xspace})
\end{mathpar}
\end{definition}
The $q$ constructor is the identity extension of the K-weakening
$\vect \gamma$, while $p$ accommodates weakening of an individual
context. These constructors are typical in the category of
weakenings~\cite[Definition 2]{altenkirch_categorical_1995}. To accommodate MoTs, we add to the category of
weakenings the last rule which transforms a context stack.
In the last rule, the offset $n$ is again parametric, subject to the \emph{same} UL\xspace
as the syntactic system, and its choice
determines which modal logic the system corresponds to.
Note that we also write $\vect \id$ for the identity K-weakening.
Following our
truncation and truncation offset operations for K-substitutions
in~\Cref{sec:usubst}, we can easily define these operations
together with composition also for K-weakenings. We omit these
definitions for brevity and we simply note that a truncated
K-weakening remains a K-weakening. Now we obtain the base category:
\begin{lemma}
K-weakenings form a category $\WC$.
\end{lemma}
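This grammar is immediate to transcribe as a first-order data structure. In the Haskell sketch below, the truncation operations are our own plausible reconstruction of the definitions omitted above (only $\Uparrow^n$ ends a world, so $q$ and $p$ are passed through), and should be read with that caveat.
\begin{verbatim}
-- K-weakenings as first-order data, following the grammar above.
data KWk = Eps | Q KWk | P KWk | Up KWk Int

-- Plausible truncation and truncation offset (omitted in the text):
-- 'Q' and 'P' only edit the topmost context, so dropping a world
-- skips them until the 'Up' that opened it. Both functions are
-- partial at 'Eps', reflecting the side condition on n.
truncWk :: KWk -> Int -> KWk
truncWk g        0 = g
truncWk (Q g)    n = truncWk g n
truncWk (P g)    n = truncWk g n
truncWk (Up g _) n = truncWk g (n - 1)

offsetWk :: KWk -> Int -> Int
offsetWk _        0 = 0
offsetWk (Q g)    n = offsetWk g n
offsetWk (P g)    n = offsetWk g n
offsetWk (Up g m) n = m + offsetWk g (n - 1)
\end{verbatim}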
\subsection{Presheaves}
The NbE proof is built on the presheaf category $\widehat \WC$ over $\WC$. $\widehat\WC$ has presheaves
$\WC^{op} \Rightarrow \SetC$ as objects and natural transformations as morphisms. We know from
the Yoneda lemma that two presheaves $F$ and $G$ can form a presheaf exponential $F
\hfunc G$:
\[
\begin{array}{ll}
F \hfunc G &: \WC^{op} \Rightarrow \SetC \\
(F \hfunc G)_{\;\vect \Gamma} &:= \forall \vect \Delta \Rightarrow_w \vect \Gamma. F_{\;\vect \Delta} \to G_{\;\vect \Delta}
\end{array}
\]
It is natural in $\vect \Delta$. As a convention, we use subscripts for both functorial applications and
natural transformation components. As in~\cite{altenkirch_categorical_1995}, presheaf exponentials model functions.
To model $\square$, we define
\[
\begin{array}{rl}
\hsquare F & : \WC^{op} \Rightarrow \SetC \\
(\hsquare F)_{\;\vect \Gamma} & := F_{\;\vect \Gamma; \cdot}
\end{array}
\]
where $F$ is a presheaf. In Kripke semantics, $\hsquare$ takes $F$ to
the next world. Unlike presheaf exponentials which always exist regardless of the base
category, $\hsquare$ requires the base category to have the notion of ``the next
world''. This dependency in turn allows us to embed the Kripke structure of context
stacks into the base category, so that our presheaf model can stay a moderate
extension of the standard construction~\cite{altenkirch_categorical_1995}.
With this setup, we give the interpretations of types, contexts, and
context stacks in \Cref{fig:intp-presheaf}.
The interpretation of the base type $B$ is the presheaf from context stacks to neutral
terms of type $B$. We write $\textsf{Ne}\ T\ \vect \Gamma$ for the set of
neutral terms of type $T$ in stack $\vect \Gamma$. $\textsf{Ne}\ T$ then is the presheaf
$\vect \Gamma \mapsto \textsf{Ne}\ T\
\vect \Gamma$. $\textsf{Nf}\ T\ \vect \Gamma$ and $\textsf{Nf}\ T$ are defined similarly.
The case $\intp{\square T} := \hsquare \intp{T}$ states that
semantically, a value of $\intp{\square T}$ is just a value of $\intp{T}$ in the next
world, which implicitly relies on
K-weakenings' capability of expressing MoTs.
$\hat\top$ and $\hat\times$ are a chosen terminal object and products in $\widehat\WC$,
and $*$ is \emph{the only element} in the chosen singleton set.
The interpretation of context stacks is more interesting. In the step case, $\vect \Gamma;
\Gamma$ is interpreted as a product. To extract both parts of $\vect \rho
\in \intp{\vect \Gamma}_{\vect \Delta}$, we write $(\pi, \rho) := \vect \rho$ where $(n,\vect \rho') := \pi$.
The first component, namely $\pi$, again consists of two parts:
1) the level $n$ satisfying $n < |\vect \Delta|$ which corresponds to the
MoTs that we support. We note that our definitions again apply to any
of the combinations of Axioms $K$, $T$ and $4$ depending on the choice of
$n$. 2) the recursive interpretation of $\vect \Gamma$ in the truncated stack $\trunc
\vect \Delta n$ described by $\vect \rho'$. This stack truncation is necessary to interpret $\ensuremath{\texttt{unbox}}\xspace$.
The second component, namely $\rho$, describes the interpretation of
the top-most context $\Gamma$.
The fact that our interpretation of context stacks stores the level $n$
ultimately justifies the offsets stored in K-substitutions.
\begin{figure*}
\begin{minipage}[t]{.2\textwidth}
\begin{align*}
\intp{\_} &: \textsf{Typ} \to \WC^{op} \Rightarrow \SetC \\
\intp{B} &:= \textsf{Ne}\ B \\
\intp{\square T} &:= \hsquare \intp{T} \\
\intp{S \longrightarrow T} &:= \intp{S} \hfunc \intp{T}
\end{align*}
\end{minipage}
\begin{minipage}[t]{.3\textwidth}
\begin{align*}
\intp{\_} &: \textsf{Ctx} \to \WC^{op} \Rightarrow \SetC \\
\intp{\cdot} &:= \hat\top \\
\intp{\Gamma, x : T} &:= \intp{\Gamma} \hat\times \intp{T}
\end{align*}
\end{minipage}
\begin{minipage}[t]{.4\textwidth}
\begin{align*}
\intp{\_} &: \vect\textsf{Ctx} \to \WC^{op} \Rightarrow \SetC \\
\intp{\epsilon; \Gamma} &:= \hat\top \hat \times \intp{\Gamma} \\
\intp{\vect \Gamma; \Gamma}_{\;\vect \Delta} &:= (\Sigma n <
|\vect \Delta|. \intp{\vect \Gamma}_{\;\trunc \vect \Delta n})
\;\times\; \intp{\Gamma}_{\;\vect \Delta}
\tag{where the offset $n$ depends on UL\xspace }
\end{align*}
\end{minipage}
\caption{Interpretations of types, contexts and context stacks to presheaves}\label{fig:intp-presheaf}
\end{figure*}
\begin{lemma}[Functoriality]
$\intp{T}$, $\intp{\Gamma}$ and $\intp{\vect \Gamma}$ are presheaves.
\end{lemma}
Functoriality means the interpretations also act on morphisms in $\WC$. Given $\vect \gamma
: \vect \Gamma \Rightarrow_w \vect \Delta$ and $a \in \intp{T}_{\;\vect \Delta}$, we write $a[\vect \gamma] \in
\intp{T}_{\;\vect \Gamma}$. We intentionally overload the notation for applying K-substitutions
to draw a connection. This notation also applies for morphism actions of
$\intp{\Gamma}$ and $\intp{\vect \Gamma}$.
\begin{figure*}
\vspace{-10px}
\[
\begin{array}{l@{\;}l}
\multicolumn{2}{l}{\mbox{Evaluation}\qquad \intp{\_}\!\quad: \mtyping t T \to \intp{\vect \Gamma} \Rightarrow \intp{T}}\\
\multicolumn{2}{l}{\mbox{Expanded form}~ \intp{t}_{\;\vect \Delta} : \intp{\vect \Gamma}_{\;\vect \Delta} {\;{\to}\;} \intp{T}_{\;\vect \Delta}}\\
\intp{x}_{\;\vect \Delta} ((\_, \rho))
& := \rho(x) \hfill\mbox{lookup $x$ in $\rho$} \\
\intp{\boxit t}_{\;\vect \Delta}(\vect \rho)
&:= \intp{t}_{\;\vect \Delta; \cdot} (((1, \vect \rho), *)) \\
\intp{\unbox n t}_{\vect \Delta}(\vect \rho)
&:= \intp{t}_{\;\trunc \vect \Delta m}(\trunc \vect \rho n)[\vect \id; \Uparrow^m]
\qquad \mbox{where $m := \Ltotal \vect \rho n$ and $\vect
\id;\Uparrow^m: \vect \Delta \Rightarrow_w \trunc \vect \Delta m ; \cdot$} \\
\intp{\lambda x. t}_{\;\vect \Delta}(\vect \rho)
&:= (\vect \gamma : \vect \Delta' \Rightarrow_w \vect \Delta)(a) \mapsto
\intp{t}_{\vect \Delta'} ((\pi, (\rho, a)))
\hfill \text{where $(\pi, \rho) := \vect \rho[\vect \gamma]$} \\[0.2em]
\intp{t\ s}_{\;\vect \Delta}(\vect \rho) & := \intp{t}_{\;\vect \Delta} (\vect \rho)~(\vect{\id}_{\;\vect \Delta}~,~
\intp{s}_{\;\vect \Delta}(\vect \rho)) \\[0.5em]
\multicolumn{2}{l}{\mbox{Reification}\quad \downarrow^T : \intp{T} \Rightarrow \textsf{Nf}\ T}\\
\downarrow^B_{\vect \Gamma}(a) &:= a \\[0.75em]
\downarrow^{\square T}_{\vect \Gamma}(a)
&:= \boxit \downarrow^T_{\vect \Gamma; \cdot}(a)
\hfill \mbox{notice $a \in (\hsquare\intp{T})_{\vect \Gamma} = \intp{T}_{\vect \Gamma; \cdot}$}\\[0.75em]
\downarrow^{S \longrightarrow T}_{\vect \Gamma; \Gamma}(a)
&:= \lambda x. \downarrow^T_{\vect \Gamma; (\Gamma, x : S)}(a~(p(\vect \id)~,~{\uparrow^S_{\vect \Gamma; (\Gamma, x : S)}\!(x)}))
\hfill \mbox{where $p(\vect \id) : \vect \Gamma; \Gamma, x{:} S \Rightarrow_w \vect \Gamma; \Gamma$}\\[0.5em]
\multicolumn{2}{l}{\mbox{Reflection} \quad \uparrow^T : \textsf{Ne}\ T \Rightarrow \intp{T}}\\
\uparrow^B_{\vect \Gamma}(v) &:= v \\[0.75em]
\uparrow^{\square T}_{\vect \Gamma}(v)
&:= \uparrow^T_{\vect \Gamma; \cdot}({\unbox 1 v}) \\[0.75em]
\uparrow^{S \longrightarrow T}_{\vect \Gamma; \Gamma}(v)
&:= (\vect \gamma : \vect \Delta \Rightarrow_w \vect \Gamma;\Gamma)(a)
\mapsto \uparrow^T_{\vect \Delta}(v[\vect \gamma]\ \downarrow^S_{\vect \Delta}(a))
\end{array}
\]
\vspace{-15px}
\caption{Evaluation, reification and reflection functions}\label{fig:intp-nat}
\end{figure*}
\subsection{Evaluation}
The interpretation of well-typed terms to natural transformations, or
\emph{evaluation} (see \Cref{fig:intp-nat}),
relies on truncation and the truncation offset.
These operations are defined below and follow the same principles that lie behind the corresponding operations for syntactic K-substitutions.
\[
\begin{array}{llp{.5cm}ll}
\multicolumn{2}{l}{\mbox{Truncation Offset}\; \Ltotal{\_}{\_} :
\intp{\vect \Gamma}_{\;\vect \Delta} \to \mathbb{N} \to \mathbb{N} }
& & \multicolumn{2}{l}{\mbox{Truncation}\; \trunc {\_} {\_} : (\vect \rho :
\intp{\vect \Gamma}_{\;\vect \Delta})\; (n:\mathbb{N}) \to \intp{\trunc\vect \Gamma n}_{\trunc \vect \Delta
{\Ltotal \vect \rho n}}} \\
\Ltotal \vect \rho 0 &:= 0 & & \trunc \vect \rho 0 &:= \vect \rho \\
\Ltotal {((n, \vect \rho),\rho)} {1 + m}&:= n + \Ltotal \vect \rho m
& & \trunc {((n, \vect \rho), \rho)}{1 + m} &:= \trunc \vect \rho m
\end{array}
\]
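These operations also render directly as code. The following Haskell sketch, polymorphic in the type of semantic values, mirrors the interpretation of context stacks as nested (offset, local environment) pairs; the representation is ours and not that of our Agda artifact.
\begin{verbatim}
-- Environments mirror the interpretation of context stacks: the base
-- world carries only a local environment, and every later world adds
-- an offset (the Sigma-bound level n) plus its own local environment.
data Env a = Top [a] | Cons (Env a) Int [a]

offsetEnv :: Env a -> Int -> Int   -- the truncation offset L(rho, n)
offsetEnv _            0 = 0
offsetEnv (Cons r n _) m = n + offsetEnv r (m - 1)
-- (partial at 'Top': n must stay below the height of the stack)

truncEnv :: Env a -> Int -> Env a  -- the truncation rho | n
truncEnv r            0 = r
truncEnv (Cons r _ _) n = truncEnv r (n - 1)
\end{verbatim}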
Most cases in evaluation are straightforward. In the $\ensuremath{\texttt{box}}\xspace$ case the recursion
continues with an extended environment and $t$ in the next world. In the $\ensuremath{\texttt{unbox}}\xspace$
case, we first recursively interpret $t$ with a truncated environment and then the
result is K-weakened. This is because from the well-typedness of $t$, we know
$\mtyping[\trunc\vect \Gamma n]t{\square T}$, so
$\intp{t}_{\;\trunc \vect \Delta m}(\trunc \vect \rho n)$ gives an element in set
$\intp{\square T}_{\;\trunc \vect \Delta m} = \intp{T}_{\;\trunc \vect \Delta m; \cdot}$. To
obtain our goal $\intp{T}_{\;\vect \Delta}$, we can apply monotonicity of $\intp{T}$ using a
morphism $\vect \Delta \Rightarrow_w \trunc\vect \Delta m; \cdot$, which is given by
$\vect \id; \Uparrow^m$.
The cases related to functions are identical to~\cite{altenkirch_categorical_1995}.
In the $\lambda$ case, since we need to return a set function due to presheaf
exponentials, we use $\mapsto$ to construct this function. We first K-weaken the
environment $\vect \rho$ and then extend it with the input value $a$.
In the application case, since $\intp{t}$ gives us a presheaf exponential, we just
need to apply it to $\intp{s}$. We simply supply $\vect\id_{\vect \Delta}$ for the
K-weakening argument because no extra weakening is needed.
The following lemma proves that $\intp{t}$ is a natural transformation in $\vect \Delta$:
\begin{lemma}[Naturality]\label{lem:presh:nat}
If $\mtyping t T$ and $\vect \rho \in \intp{\vect \Gamma}_{\;\vect \Delta}$, then for all
K-weakenings $\vect \gamma : \vect \Delta' \Rightarrow_w \vect \Delta$, we have $\intp{t}_{\;\vect \Delta'}(\vect \rho[\vect \gamma]) =
\intp{t}_{\;\vect \Delta}(\vect \rho)[\vect \gamma]$.
\end{lemma}
The lemma states that the result of evaluation in a K-weakened environment is the same
as K-weakening the result evaluated in the original environment.
In STLC, naturality
holds but is not used anywhere in the proof. In \ensuremath{\lambda^{\to\square}}\xspace,
since K-weakenings encode MoTs, naturality is necessary
in the completeness proof.
After evaluation, we obtain a semantic value of the semantic type $\intp{T}$. In the last
step, we use a \emph{reification} function to convert the semantic value back to a
normal form. Reification is defined mutually with \emph{reflection} in
\Cref{fig:intp-nat}. As suggested by their signatures, they are both natural
transformations, but our proof does not rely on this fact. Both reification and
reflection are type-directed, so after reification we obtain $\beta\eta$ normal
forms. We reify a semantic value $a$ of box type $\square T$ in a context stack
$\vect \Gamma$ by recursively reifying it in the extended context stack $\vect \Gamma;\cdot$. Note that $a$
has the semantic type $(\hsquare\intp{T})_{\vect \Gamma}$ which is defined as
$\intp{T}_{\vect \Gamma; \cdot}$. In the case of function type
$S \longrightarrow T$, since $a$ is a presheaf exponential, we supply a
K-weakening and a value, the result of which is then recursively reified.
Reflection turns neutral terms into semantic values. We reflect neutral terms of type
$\square T$ recursively, incrementally extending the context stack by one context
at a time.
In the function case, to construct a presheaf exponential, we first take two
arguments $\vect \gamma$ and $a$. Since $v$ is a neutral term, $v[\vect \gamma]$ is also neutral but now
well-typed in $\vect \Delta$. Both recursive calls to reification and reflection then go
down to $\vect \Delta$ instead.
Normalization by evaluation (NbE) takes a well-typed term $t$ in a context stack $\vect \Gamma$ as input, interprets $t$ to its semantic counterpart in the initial environment, and reifies it back. Before defining NbE more formally, we define the identity environment $\intp{\vect \Gamma}_{\vect \Gamma}$ that is used as the initial environment:
\[
\begin{array}{l@{}l}
\uparrow &: (\vect \Gamma : \vect\textsf{Ctx}) \to \intp{\vect \Gamma}_{\vect \Gamma} \\
\uparrow^{\epsilon; \cdot} &:= (*, *) \\
\uparrow^{\vect \Gamma; \cdot} &:= ((1, \uparrow^{\vect \Gamma}), *) \\
\uparrow^{\vect \Gamma; \Gamma, x : T} &:= (\pi, (\rho, \uparrow^{T}_{\vect \Gamma; \Gamma, x : T}\!\!(x)))\hfill
\;\;\mbox{where $(\pi, \rho) := \uparrow^{\vect \Gamma; \Gamma} [p(\vect \id)]$}
\end{array}
\]
Finally we define the NbE algorithm:
\begin{definition}[Normalization by Evaluation]
If $\mtyping t T$, then
\[
\textsf{nbe}^T_{\vect \Gamma}(t) := \downarrow^T_{\vect \Gamma} (\intp{t}_{\vect \Gamma}(\uparrow^{\vect \Gamma}))
\]
\end{definition}
\subsection{Completeness and Soundness}
The algorithm given above is sound and complete:
\begin{theorem}
[Completeness] If $\mtyequiv{t}{t'}{T}$, then $\textsf{nbe}^T_{\vect \Gamma}(t) = \textsf{nbe}^T_{\vect \Gamma}(t')$.
\end{theorem}
\begin{theorem}
[Soundness] If $\mtyping t T$, then $\mtyequiv{t}{\textsf{nbe}^T_{\vect \Gamma}(t)}T$.
\end{theorem}
Due to space limitations, we are not able to present the whole proof. Fortunately, the
proof is very standard~\cite{altenkirch_categorical_1995}. To prove completeness, we simply need to prove that equivalent
terms always evaluate to the same natural transformation:
\begin{lemma}
If $\mtyequiv{t}{t'}{T}$, then for any $\vect \rho \in \intp{\vect \Gamma}_{\vect \Delta}$,
$\intp{t}_{\vect \Delta}(\vect \rho) = \intp{t'}_{\vect \Delta}(\vect \rho)$.
\end{lemma}
\begin{proof}
We induct on $\mtyequiv{t}{t'}{T}$ and apply naturality in most of the cases concerning $\square$.
\end{proof}
The soundness proof is established by a \emph{Kripke gluing model}. The gluing model
$t \sim a \in \glu{T}_{\;\vect \Gamma}$ relates a syntactic term $t$ and a natural
transformation $a$, so that after reifying $a$, the resulting normal form is
equivalent to $t$:
\begin{align*}
\glu{T}_{\;\vect \Gamma} &\subseteq \textsf{Exp} \times \intp{T}_{\;\vect \Gamma} \\
t \sim a \in \glu{B}_{\;\vect \Gamma} &:= \mtyequiv{t}{a}{B} \\
t \sim a \in \glu{\square T}_{\;\vect \Gamma} &:= \mtyping{t}{\square T} \tand \forall
\vect \Delta. \unbox{|\vect \Delta|}{t} \sim a[\vect \id; \Uparrow^{|\vect \Delta|}] \in \glu{T}_{\vect \Gamma; \vect \Delta} \\
t \sim a \in \glu{S \longrightarrow T}_{\;\vect \Gamma} &:= \mtyping{t}{S \longrightarrow T} \tand \forall
\vect \gamma : \vect \Delta \Rightarrow_w \vect \Gamma, s \sim b \in
\glu{S}_{\;\vect \Delta}. t[\vect \gamma]\ s \sim a(\vect \gamma, b) \in
\glu{T}_{\;\vect \Delta}
\end{align*}
The gluing model should be monotonic in $\vect \Gamma$, hence Kripke. Again the gluing
model is very standard~\cite{altenkirch_categorical_1995}. It
is worth mentioning that in the $\square T$ case, the Kripke predicate effectively requires that $t$
and $a$ are related only when their results of \emph{any} $\ensuremath{\texttt{unbox}}\xspace$ing remain related.
We can then move on to prove some properties of the gluing model and define its
generalization to substitutions, which eventually allows us to conclude the soundness
theorem. More details can be found in our technical report~\cite{DBLP:journals/corr/abs-2206-07823}.
\subsection{Adaptiveness}
We emphasize that our construction is stable regardless of our choice of
UL\xspace. Hence, it applies to all four modal systems, $K$, $T$,
$K4$ and $S4$ that we introduced in \Cref{sec:intro} \emph{without change}. The key insight
that allows us to keep our construction and model generic is the fact
that K-substitutions,
K-weakenings, and $\intp{\vect \Gamma}$ are instances of the algebra
formed by truncation and truncation offsets and
satisfy all the properties, in particular identity and
distributivity, listed at the end of \Cref{sec:usubst}.
More importantly, all the truncation and truncation offset functions
are defined for all choices of UL\xspace thereby accommodating all four
modal systems with their varying level of $\ensuremath{\texttt{unbox}}\xspace$ing.
\section{Definition of \ensuremath{\lambda^{\to\square}}\xspace}\label{sec:syntax}
In this section, we introduce the simply typed modal $\lambda$-calculus, \ensuremath{\lambda^{\to\square}}\xspace, by Pfenning, Wong and Davies~\cite{Pfenning95mfps,davies_modal_2001} more
formally. We concentrate here on the fragment containing function
types $S \longrightarrow T$, the necessity modality $\square
T$, and a base type $B$.
\[
\begin{array}{l@{~}l@{~}l}
S, T & := & \ B \;|\; \square T \;|\; S \longrightarrow T \hfill \mbox{Types, \textsf{Typ}} \\
l, m, n & \multicolumn{2}{r}{\mbox{$\ensuremath{\texttt{unbox}}\xspace$ levels or offsets, $\mathbb{N}$}} \\
x, y & \multicolumn{2}{r}{\mbox{Variables, $\textsf{Var}$}} \\
s, t, u &:=& x \;|\; \boxit t \;|\; \unbox n t \;|\; \lambda x. t \;|\; s\ t \hfill\qquad \mbox{Terms, $\textsf{Exp}$} \\
\Gamma, \Delta, \Phi &:= & \cdot \;|\; \Gamma, x : T \hfill \mbox{Contexts, \textsf{Ctx}}\\
\vect \Gamma, \vect \Delta &:= &\ \epsilon \;|\; \vect \Gamma; \Gamma \hfill \mbox{Context stack, $\vect{\textsf{Ctx}}$} \\
w &:= & v \;|\; \boxit w \;|\; \lambda x. w \hfill \mbox{Normal form, $\textsf{Nf}$} \\
v &:= & x \;|\; v\ w \;|\; \unbox n v \hfill \mbox{Neutral form, $\textsf{Ne}$}
\end{array}
\]
Following standard practice, we consider variables, applications, and $\ensuremath{\texttt{unbox}}\xspace$ neutral.
Functions, boxed terms and neutral terms are normal. Note that we
allow reductions under binders and inside boxed terms. As a
consequence, a function or a boxed term is normal if its body is
normal.
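This grammar transcribes directly into datatypes. The following Haskell sketch uses string-named variables; it is an illustration only, not the representation of our Agda artifact.
\begin{verbatim}
-- Types, terms, and (beta-eta) normal and neutral forms.
data Ty = B | Box Ty | Arr Ty Ty
data Tm = Var String | BoxI Tm | Unbox Int Tm
        | Lam String Tm | App Tm Tm

data Nf = NfNe Ne | NfBox Nf | NfLam String Nf
data Ne = NeVar String | NeApp Ne Nf | NeUnbox Int Ne
\end{verbatim}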
We define typing rules and type-directed equivalence between terms in
\Cref{fig:typing}.
We only show the rules for $\beta$ and $\eta$
equivalence of terms, but the full set of rules can be found
in~\Cref{apx:equivalence}.
We use Barendregt's variable convention and
$\alpha$-renaming to ensure that variables are unique with respect to
context stacks. The variable rule asserts that one can only refer to
a variable in the current world (the topmost context). In a typing judgment, we require all context
stacks to be non-empty, so the topmost context must exist.
From Kripke semantics' point of view, the introduction rule for $\square$ says a term
of $\square T$ is just a term of $T$ in the next world. The elimination rule brings
$\square T$ from some previous world to the current world. This previous world is
determined by the level $n$. As mentioned earlier, the choice of $n$ determines which
logic the system corresponds to.
$|\vect \Delta|$ counts the number of contexts in $\vect \Delta$.
To illustrate, we recap how the axioms in \Cref{sec:intro} can be described in
\ensuremath{\lambda^{\to\square}}\xspace. $K$ is defined by choosing $n=1$. Axiom $T$ requires that $n=0$ and Axiom $4$
is only possible when $\ensuremath{\texttt{unbox}}\xspace$ levels (UL\xspace{s}) can be $>1$.
\[
\begin{array}{llp{1cm}llp{1cm}ll}
K &: \square (S \longrightarrow T) \to \square S \to \square T & & T &: \square T \to T & & A4 &: \square T \to \square \square T \\% \tag{Axiom $4$} \\
K\ f\ x &:= \boxit{((\unbox 1 f) (\unbox 1 x))} & & T\ x &:= \unbox 0 x & & A4\ x &:= \boxit{(\boxit{(\unbox 2 x)})}
\end{array}
\]
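Using the datatypes sketched above, these three inhabitants transcribe directly; their $\ensuremath{\texttt{unbox}}\xspace$ levels are exactly the levels each system must admit.
\begin{verbatim}
axK, axT, ax4 :: Tm
axK = Lam "f" (Lam "x"
        (BoxI (App (Unbox 1 (Var "f")) (Unbox 1 (Var "x")))))
axT = Lam "x" (Unbox 0 (Var "x"))
ax4 = Lam "x" (BoxI (BoxI (Unbox 2 (Var "x"))))
\end{verbatim}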
The term equivalence rules are largely standard. In the $\eta$ rule
for $\square$, we restrict $\ensuremath{\texttt{unbox}}\xspace$ to level $1$. In the $\beta$ rule for $\square$, we rely on the
modal transformation operation~\cite{davies_modal_2001}, written as $\{n/0\}$, which
allows us to transform the term $t$ which is well-typed in the context
stack $\vect \Gamma; \cdot$ to the context stack $\vect \Gamma; \vect \Delta$. We slightly abuse notation and use $;$ for both extending a
context stack with a context and appending two context stacks.
We will discuss modal transformations more later in this section.
\subsection{Term Substitutions}
A term substitution replaces a variable $x$ with a term $s$
in a term $t$. It pushes the substitution inside the subterms of
$t$, avoiding capture by renaming. Below, we restate the
ordinary term substitution lemma:
\begin{lemma}[Term Substitution]$\;$\\
If $\mtyping[\vect \Gamma; (\Gamma, x{:}S, \Gamma'); \vect \Delta]t T$ and
$\mtyping[\vect \Gamma;(\Gamma, \Gamma')]s S$,
then $\mtyping[\vect \Gamma; (\Gamma, \Gamma'); \vect \Delta]{t[s/x]}T$.
\end{lemma}
\subsection{Modal Transformations (MoTs)}
In addition to the usual structural properties (weakening and
contraction) of individual contexts, \ensuremath{\lambda^{\to\square}}\xspace also relies on structural
properties of context stacks, e.g. in the $\beta$ rule for
$\square$. In particular, we need to be able to weaken a
context stack $\vect \Gamma; \vect \Gamma'$ to $\vect \Gamma; \vect \Delta; \vect \Gamma'$ by
splicing in additional contexts $\vect \Delta$ (\emph{modal
weakening}). \emph{Modal fusion} allows us to combine two adjacent
contexts in a context stack transforming a context stack
$\vect \Gamma;\Gamma_0;\Gamma_1;\vect \Delta$ to a context stack $\vect \Gamma;
(\Gamma_0,\Gamma_1); \vect \Delta$.
These modal transformations (\emph{MoTs}) require us to relabel the level $n$
associated with the $\ensuremath{\texttt{unbox}}\xspace$ eliminator. This is accomplished by the
operation $t\{n/l\}$. Assume that $t$ is well-typed
in a context stack $\vect \Gamma$. If $n > 0$, then at position $l$ in the
stack (i.e. $\vect \Gamma = \vect \Gamma';\vect \Delta$ and $|\vect \Delta| = l$), we splice
in $n - 1$ additional contexts. If $n = 0$, then
this can be interpreted as fusing the two adjacent contexts at position
$l$ in the stack $\vect \Gamma$.
\[
\begin{array}{ll}
x\{n/l\} &:= x \\
\boxit t\{n/l\} &:= \boxit{(t\{n/l+1\})} \\[0.2em]
\unbox m t \{n/l\} &:=
\begin{cases}
\unbox m {(t\{n/l-m\})} & \text{if $m \le l$} \\
\unbox{n + m - 1}t &\text{if $m > l$}
\end{cases} \\[0.2em]
\lambda x. t \{n/l\} &:= \lambda x. (t\{n/l\}) \\
s\ t\{n/l\} &:= (s\{n/l\})\ (t\{n/l\})
\end{array}
\]
In the $\ensuremath{\texttt{box}}\xspace$ case,
$l$ increases by one, as we extend the context stack by a new
world. In the $\ensuremath{\texttt{unbox}}\xspace$ case, we distinguish cases based on the $\ensuremath{\texttt{unbox}}\xspace$
level $m$. If $m \le l$, then we simply rearrange the UL\xspace{s} recursively in $t$. If $m > l$, we only
need to adjust the UL\xspace and do not recurse on $t$.
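The operation transcribes into a short recursive function over the term datatype sketched above; the following Haskell rendering mirrors the equations case by case.
\begin{verbatim}
-- t{n/l} as a function.
mot :: Int -> Int -> Tm -> Tm
mot n l t = case t of
  Var x      -> Var x
  BoxI u     -> BoxI (mot n (l + 1) u)
  Unbox m u
    | m <= l    -> Unbox m (mot n (l - m) u)
    | otherwise -> Unbox (n + m - 1) u
  Lam x u    -> Lam x (mot n l u)
  App u v    -> App (mot n l u) (mot n l v)

-- For example, mot 2 0 (Unbox 1 (Var "x")) == Unbox 2 (Var "x"),
-- matching the n = 2, l = 0 instance of the lemma below.
\end{verbatim}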
MoTs satisfy the following lemma:
\begin{lemma}[Structural Property of Context Stacks]\label{lem:mtran-typ}$\;$\\
If $\mtyping[\vect \Gamma;\Gamma_0;\Delta_0;\cdots;\Delta_l]{t}{T}$,
then $\mtyping[\vect \Gamma;\Gamma_0; \cdots; (\Gamma_n,
\Delta_0);\cdots;\Delta_l]{t\{n/l\}}{T}$.
\end{lemma}
We call the case where $n = 0$ \emph{modal fusion} or just
\emph{fusion}.
Other cases are \emph{modal weakening}. These names can be made sense of from the
following examples:
\begin{itemize}
\item When $n = l = 0$, the lemma states that if $\mtyping[\vect \Gamma;\Gamma_0;\Delta_0]{t}{T}$, then
$\mtyping[\vect \Gamma;(\Gamma_0, \Delta_0)]{t\{0/0\}}{T}$. Notice that $\Gamma_0$ and $\Delta_0$
in the premise are fused into one in the conclusion, hence ``modal fusion''.
\item When $n = 2$ and $l = 0$, the lemma states that if $\mtyping[\vect \Gamma;\Gamma_0;\Delta_0]{t}{T}$, then
$\mtyping[\vect \Gamma;\Gamma_0; \Gamma_1; (\Gamma_2, \Delta_0)]{t\{2/0\}}{T}$. A new
context $\Gamma_1$ is inserted into the stack and the topmost context is (locally)
weakened by $\Gamma_2$, hence ``modal weakening''.
\item In the $\beta$ rule for $\square$, a MoT $\{n/0\}$ is used to transform $t$ in
context stack $\vect \Gamma; \cdot$ to $\vect \Gamma; \vect \Delta$.
\item When $l > 0$, the leading $l$ contexts are skipped. If $n = 2$, $l = 1$ and
$\mtyping[\vect \Gamma;\Gamma_0;\Delta_0; \Delta_1]{t}{T}$, then
$\mtyping[\vect \Gamma;\Gamma_0; \Gamma_1; (\Gamma_2, \Delta_0);
\Delta_1]{t\{2/1\}}{T}$. Here $\Delta_1$ is kept as is.
\end{itemize}
\section{Kripke-style Substitutions}\label{sec:usubst}
Traditionally, we have viewed term substitutions and modal
transformations as two separate operations~\cite{davies_modal_2001}. This makes reasoning about
\ensuremath{\lambda^{\to\square}}\xspace complex. For example, a composition of $n$ MoTs leads to up to $2^n$ cases in the
$\ensuremath{\texttt{unbox}}\xspace$ case. This quickly becomes
unwieldy.
How can we avoid such case analyses
by \emph{unifying} MoTs and term substitutions as one operation that transforms context stacks? --
We will view context stacks as a category and a special \emph{unifying} group of simultaneous
substitutions as morphisms (denoted by $\Rightarrow$). MoTs are then simply a special case of
these morphisms. Lemma \ref{lem:mtran-typ} suggests to view a MoT as a morphism:
\[
\begin{array}{c}
\{n/l\} : \vect \Gamma;\Gamma_0; \cdots; (\Gamma_n,
\Delta_0);\cdots;\Delta_l \Rightarrow \vect \Gamma;\Gamma_0;\Delta_0;\cdots;\Delta_l
\end{array}
\]
because $\{n/l\}$ moves $t$ from the codomain context stack to the domain
context stack.
If this group of substitutions
is closed under composition, then context
stacks can be organized into a category.
\subsection{Composing MoTs}
If MoTs are just special substitutions, then the composition of substitutions must also
compose MoTs. The following diagram is a composition of multiple MoTs, forming a morphism $\vect \Gamma \Rightarrow \vect \Delta$:
\begin{center}
\vspace{-5px}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=10pt]
{
\vect \Gamma := \epsilon; & \Gamma_0; & \Delta_0; & \Delta_1; & (\Gamma_1, \Gamma_2, \Gamma_3); &
\Delta_2; & \Gamma_4
\\
\vect \Delta := \epsilon; & \Gamma_0; & & & \Gamma_1; \Gamma_2; \Gamma_3; &
& \Gamma_4 \\
};
\draw[stealth-, double] ($(m-2-1) + (-0.35, 0.3)$) -- ($(m-1-1) + (-0.35, -0.3)$);
\draw[<-] (m-2-2) -- (m-1-2);
\draw[<-] ($(m-2-5) + (-0.6, 0.3)$) -- ($(m-1-5) + (-0.6, -0.3)$);
\draw[<-] ($(m-2-5) + (-0.1, 0.3)$) -- ($(m-1-5) + (-0.1, -0.3)$);
\draw[<-] ($(m-2-5) + (0.4, 0.3)$) -- ($(m-1-5) + (0.4, -0.3)$);
\draw[<-] (m-2-7) -- (m-1-7);
\end{tikzpicture}
\vspace{-5px}
\end{center}
This composition contains both fusion ($\Gamma_1$, $\Gamma_2$, $\Gamma_3$) and
modal weakening ($\Delta_0$, $\Delta_1$, $\Delta_2$). The thin arrows
correspond to contexts in both stacks. Some of these arrows are local identities
($\Gamma_0$ and $\Gamma_4$); they occur between contexts that are affected by modal
weakenings. The rest are local weakenings ($\Gamma_1$, $\Gamma_2$ and $\Gamma_3$); in
this case, they are affected by modal fusions. The first observation is that the size of gaps
between adjacent thin arrows varies, because it is determined by different MoTs. Another
observation is that thin arrows do not have to be just local weakenings; if they
contain general terms, then we obtain a general
simultaneous substitution. Moreover, thanks to the thin arrows, we know exactly which context stack each term
should be well-typed in. Combining all information, we arrive at
the definition of \emph{Kripke-style
substitutions}:
\begin{definition}
A Kripke-style substitution $\vect \sigma$, or just \textbf{K-substitution}, between context stacks is defined as
\[
\begin{array}{ll}
\sigma, \delta &:= () \;|\; \sigma, t/x
\hfill\mbox{(Local) substitutions, $\textsf{Subst}$}\\
\vect \sigma, \vect \delta &:= \snil \sigma \;|\; \sext \vect \sigma n \sigma\qquad
\hfill\mbox{K-substitutions, $\textsf{Substs}$}
\end{array}
\]
\begin{mathpar}
\inferrule
{\sigma : \vect \Gamma \Rightarrow \Gamma}
{{\snil { \sigma}} : \vect \Gamma \Rightarrow \epsilon;\Gamma}
\inferrule
{\vect \sigma : \vect \Gamma \Rightarrow \vect \Delta \\ |\vect \Gamma'| = n \\ \sigma : \vect \Gamma;\vect \Gamma' \Rightarrow \Delta}
{\sext \vect \sigma n \sigma : \vect \Gamma;\vect \Gamma' \Rightarrow \vect \Delta;\Delta}
\end{mathpar}
where a local substitution $\sigma : \vect \Gamma \Rightarrow \Gamma$ is defined as a list of
well-typed terms in $\vect \Gamma$ for all bindings in $\Gamma$.
\end{definition}
Just as context stacks must consist of at least one
context, a K-substitution must have a topmost local substitution,
written as $\snil \sigma$ in the base case. It provides a mapping for the context
stack $\epsilon;\Gamma$. We extend a K-substitution $\vect \sigma$ with
$\Uparrow^n\!\!\!\sigma$ where $n$ captures the offset due to a MoT and $\sigma$ is
the local substitution. %
To illustrate, the morphism in the previous diagram can be represented as
$\varepsilon; \sext {\sext {\sext {\sext \id 3 \textsf{wk}_1} 0 \textsf{wk}_2} 0
\textsf{wk}_3} 2 \id$ where $\id$ is the local
identity substitution and
$\textsf{wk}_i : \Gamma_0;\Delta_0;\Delta_1;(\Gamma_1,\Gamma_2,\Gamma_3) \Rightarrow {\Gamma_i}$
are appropriate local weakenings. We break down this representation:
\begin{enumerate}
\item We start with $\snil\id : \epsilon ; \Gamma_0 \Rightarrow \epsilon; \Gamma_0$.
\item We add an offset $3$ and a local weakening $\textsf{wk}_1$, forming
$\sext {\snil \id} 3 \textsf{wk}_1 :
\epsilon ; \Gamma_0; \Delta_0;\Delta_1;(\Gamma_1,\Gamma_2,\Gamma_3) \Rightarrow
\epsilon; \Gamma_0; \Gamma_1$. The offset $3$ adds three contexts to
the domain stack ($\Delta_0$, $\Delta_1$ and $\Gamma_1,\Gamma_2,\Gamma_3$). Local weakening
$\textsf{wk}_1$ extracts $\Gamma_1$ from $\Gamma_1,\Gamma_2,\Gamma_3$.
\item We extend the K-substitution to
$\sext
{\sext {\snil \id} 3 \textsf{wk}_1}
0 \textsf{wk}_2 : \epsilon ; \Gamma_0; \Delta_0;\Delta_1;(\Gamma_1,\Gamma_2,\Gamma_3) \Rightarrow
\epsilon; \Gamma_0; \Gamma_1; \Gamma_2$. Since the offset associated
with $\textsf{wk}_2$ is $0$, no context is added to the domain stack. This effectively represents fusion. $\textsf{wk}_2$ is
similar to $\textsf{wk}_1$.
\item The rest of the K-substitution proceeds similarly.
\end{enumerate}
Subsequently, we may simply write
$\vect \sigma; \sigma$ instead of $\sext \vect \sigma 1 \sigma$. In particular,
we will write $\vect \sigma ; \textsf{wk}$ instead of $\sext \vect \sigma 1 \textsf{wk}$ and
$\vect \sigma ; \id$ instead of $\sext \vect \sigma 1 \id$. We often omit offsets
that are $1$ for readability.
\subsection{Representing MoTs}
Now we show that MoTs are a special case of K-substitutions. Let $l := |\vect \Delta|$. We
define modal weakenings as
\[
\begin{array}{ll}
\{n+1/l\} &: \vect \Gamma;\Gamma_1; \cdots; (\Gamma_{n + 1}, \Delta_0);\vect \Delta \Rightarrow
\vect \Gamma;\Delta_0;\vect \Delta \\
\{n+1/l\} &= \varepsilon;
\underbrace{\id; \cdots;\id}_{|\vect \Gamma|}; \Uparrow^{n + 1} \textsf{wk};
\underbrace{\id; \cdots; \id}_{|\vect \Delta|}
\end{array}
\]
where the offset $n + 1$ on the right adds $\Gamma_1; \cdots; (\Gamma_{n + 1},
\Delta_0)$ to $\vect \Gamma$ in the domain
stack and $\textsf{wk} : \vect \Gamma;\Gamma_1; \cdots; (\Gamma_{n + 1}, \Delta_0) \Rightarrow {\Delta_0}$.
Fusion is also easily defined:
\[
\begin{array}{ll}
\{0/l\} &: \vect \Gamma;(\Gamma_1, \Gamma_2); \vect \Delta \Rightarrow \vect \Gamma;\Gamma_1; \Gamma_2; \vect \Delta \\
\{0/l\} &= \varepsilon; \underbrace{\id; \cdots; \id}_{|\vect \Gamma|}; \textsf{wk}_1; \Uparrow^0\!
\textsf{wk}_2; \underbrace{\id; \cdots; \id}_{|\vect \Delta|}
\end{array}
\]
where the offset $0$ associated with $\textsf{wk}_2$ allows us to fuse $\Gamma_1$ and $\Gamma_2$, and
$\textsf{wk}_i : \vect \Gamma;(\Gamma_1, \Gamma_2) \Rightarrow {\Gamma_i}$ for $i \in \{1, 2\}$.
\subsection{Operations on K-Substitutions}
We now show that K-substitutions are morphisms in a category of context stacks. In order to define
composition, we describe two essential operations: 1) \emph{truncation}
($\trunc \vect \sigma n$) drops $n$ topmost substitutions from a
K-substitution $\vect \sigma$ and 2)
\emph{truncation offset} ($\Ltotal \vect \sigma n$) computes the \emph{total} number of contexts that need to be dropped from the domain context stack, given that we truncate $\vect \sigma$ by $n$.
It computes the sum of $n$ leading
offsets. Let
$\vect \sigma := \sext{\sext{\vect \sigma'}{m_n}{\sigma_n} ; \ldots}{m_1}{\sigma_1}
$, then $\Ltotal {\vect \sigma}{n} = m_n + \ldots + m_1$ and $\trunc \vect \sigma n = \vect \sigma'$. For the operation to be meaningful, $n$ must be less than $|\vect \Delta|$.
\[
\begin{array}{ll}
\multicolumn{2}{l}{\text{Truncation Offset}\quad
\Ltotal {\_} {\_} : (\vect \Gamma \Rightarrow \vect \Delta) \to \mathbb{N} \to \mathbb{N} }\\
\Ltotal \vect \sigma 0 &:= 0 \\
\Ltotal {\sext \vect \sigma n \sigma} {1 + m} &:= n + \Ltotal \vect \sigma m
\end{array}
\]
Truncation simply drops $n$ local
substitutions regardless of the offset that is associated with
each local substitution.
\[
\begin{array}{ll}
\multicolumn{2}{l}{\text{Truncation}\quad
\trunc {\_} {\_} : (\vect \sigma : \vect \Gamma \Rightarrow \vect \Delta) \to (n:\mathbb{N}) \to \trunc{\vect \Gamma} {\Ltotal {\vect \sigma} n} \Rightarrow
\trunc {\vect \Delta} n}\\
\trunc \vect \sigma 0 &:= \vect \sigma \\
\trunc {(\sext \vect \sigma m \sigma)} {1 + n} &:= \trunc {\vect \sigma} n
\end{array}
\]
Similar to truncation of K-substitutions, we rely on truncation of contexts, written as $\trunc{\vect \Gamma}{n}$ which simply drops $n$ contexts from the context stack $\vect \Gamma$, i.e. if $\vect \Gamma = \vect \Gamma'; \Gamma_1; \ldots;\Gamma_n$, then $\trunc \vect \Gamma n = \vect \Gamma'$.
Note that $n$ must satisfy $n < |\vect \Gamma|$, otherwise the operation
would not be meaningful.
We emphasize that no further restrictions are placed on $n$ and hence
our definitions apply to any of the combinations of Axioms $K$, $T$ and $4$
described in the introduction.
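Concretely, reusing the term datatype sketched in \Cref{sec:syntax}, K-substitutions and these two operations admit the following Haskell rendering (re-declared here, with concrete local substitutions, so the fragment stands alone). The representation is ours; the partial pattern matches reflect exactly the side conditions on $n$.
\begin{verbatim}
-- Local substitutions as association lists of terms; a K-substitution
-- is a non-empty stack of (offset, local substitution) pairs.
type Local = [(String, Tm)]
data KSub  = Base Local | Ext KSub Int Local

offsetK :: KSub -> Int -> Int   -- the truncation offset L(sigma, n)
offsetK _           0 = 0
offsetK (Ext s n _) m = n + offsetK s (m - 1)

truncK :: KSub -> Int -> KSub   -- the truncation sigma | n
truncK s           0 = s
truncK (Ext s _ _) n = truncK s (n - 1)
\end{verbatim}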
\subsection{K-Substitution Operation}
We now can define the K-substitution operation as follows:
\[
\begin{array}{ll}
x[\snil \sigma] &:= \sigma(x)\qquad\hfill
\mbox{lookup $x$ in $\sigma$}\\
x[\sext \vect \sigma k \sigma] &:= \sigma(x)\\
(\boxit t)[\vect \sigma] &:= \boxit{t[\sext \vect \sigma 1 {()}]} \\
(\unbox n t)[\vect \sigma] &:= \unbox{\Ltotal {\vect \sigma}{n}}{(t[\trunc \vect \sigma n])} \\
(\lambda x. t) [\snil \sigma] &:= \lambda x. t[\snil{(\sigma, x/x)}] \\
(\lambda x. t) [\sext \vect \sigma k \sigma] &:= \lambda x. t[\sext \vect \sigma k {(\sigma, x/x)}] \\
(s\ t)[\vect \sigma] &:= s[\vect \sigma]\ t[\vect \sigma]
\end{array}
\]
In the $\ensuremath{\texttt{box}}\xspace$ case, the recursive call adds to $\vect \sigma$ an empty local
substitution. Note that the offset must be $1$, since the
box-introduction rule extends our context stack with a new empty context.
The $\ensuremath{\texttt{unbox}}\xspace$ case for K-substitutions incorporates MoTs.
Instead of distinguishing cases based on the unbox
level $n$, we use the truncation offset operation to re-compute the
UL\xspace and
the recursive call $t[\trunc \vect \sigma n]$ continues with $\vect \sigma$ truncated, because
$t$ is typed in a shorter stack.
Due to the typing invariants, we know that $\Ltotal \vect \sigma n$ is indeed defined for
all valid UL\xspace $n$ in all our target systems. This fact can be checked easily. In
System $K$, since $n = 1$ and all MoT offsets in $\vect \sigma$ are $1$, we have that
$\Ltotal \vect \sigma n = 1$. In System $T$ where $n \in \{0, 1\}$ and $\vect \sigma$ only
contains MoT offsets $0$ and $1$, we have $\Ltotal \vect \sigma n \in \{0, 1\}$. In System
$K4$ where $n \ge 1$, $\Ltotal \vect \sigma n$ cannot be $0$ and thus
$\Ltotal \vect \sigma n \ge 1$. In System $S4$, since $n \in \mathbb{N}$,
$\Ltotal \vect \sigma n \in \mathbb{N}$ naturally holds.
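The operation itself then reads off the equations above. The following Haskell sketch deliberately ignores capture-avoiding renaming (handled in the formal development by Barendregt's convention) and defaults to the variable itself when a binding is absent, which cannot happen for well-typed inputs.
\begin{verbatim}
-- t[sigma]; the unbox case recomputes the level via the truncation
-- offset and continues with a truncated K-substitution.
applyK :: KSub -> Tm -> Tm
applyK sg t = case t of
    Var x     -> lookVar x sg
    BoxI u    -> BoxI (applyK (Ext sg 1 []) u)
    Unbox n u -> Unbox (offsetK sg n) (applyK (truncK sg n) u)
    Lam x u   -> Lam x (applyK (extVar x sg) u)
    App u v   -> App (applyK sg u) (applyK sg v)
  where
    lookVar x (Base l)    = maybe (Var x) id (lookup x l)
    lookVar x (Ext _ _ l) = maybe (Var x) id (lookup x l)
    extVar x (Base l)     = Base ((x, Var x) : l)
    extVar x (Ext s k l)  = Ext s k ((x, Var x) : l)
\end{verbatim}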
The following lemma shows that K-substitutions are indeed the proper notion we seek:
\begin{lemma}
If $\mtyping t T$ and $\vect \sigma : \vect \Delta \Rightarrow \vect \Gamma$, then
$\mtyping[\vect \Delta]{t[\vect \sigma]}T$.
\end{lemma}
\subsection{Categorical Structure}
We are now ready to organize
K-substitutions into a category. First we define the identity K-substitution:
\[
\begin{array}{l@{\;}l}
\vect{\id}_{\vect \Gamma} &: \vect \Gamma \Rightarrow \vect \Gamma \\
\vect{\id}_{\vect \Gamma} &:= \snil \id; \id; \cdots; \id
\end{array}
\]
where $\id$'s are appropriate local identities. We again omit the offsets
when we extend K-substitutions with $\id$, since they are $1$.
We also omit the subscript $\vect \Gamma$ on $\vect{\id}$ for readability.
Composition is defined in terms of the K-substitution operation:
\[
\begin{array}{r@{\;\;}l}
\_ \;\circ \_ \;&: \vect \Gamma' \Rightarrow \vect \Gamma'' \to \vect \Gamma \Rightarrow \vect \Gamma' \to \vect \Gamma
\Rightarrow \vect \Gamma'' \\
(\snil \sigma) \circ \vect \delta &:= \snil{(\sigma[\vect \delta])} \\
(\sext \vect \sigma n \sigma) \circ \vect \delta &:=
\sext {(\vect \sigma \circ {(\trunc \vect \delta n)})}{\Ltotal \vect \delta n}{(\sigma[\vect \delta])}
\end{array}
\]
where $\sigma[\vect \delta]$ iteratively applies $\vect \delta$ to all terms in
$\sigma$. In the recursive case, we continue with a truncated
K-substitution $\trunc \vect \delta n$ and recompute the offset.
Verification of the categorical laws is then routine:
\begin{theorem}
Context stacks and K-substitutions form a category with identities and composition
defined as above.
\end{theorem}
\subsection{Properties of Truncation and Truncation Offset}
Finally, we summarize some critical properties of truncation and truncation offset. Let $\vect \sigma : \vect \Gamma' \Rightarrow \vect \Gamma$ and $\vect \delta : \vect \Gamma'' \Rightarrow \vect \Gamma'$:
\begin{lemma}
If $n < |\vect \Gamma|$, then $\Ltotal \vect \sigma n < |\vect \Gamma'|$.
\end{lemma}
\begin{lemma}[Distributivity of Addition]
If $n + m < |\vect \Gamma|$, then
$\trunc \vect \sigma {(n + m)} = \trunc {(\trunc \vect \sigma n)} m$
and $\Ltotal \vect \sigma {n + m} = \Ltotal {\vect \sigma} {n} + \Ltotal {\trunc \vect \sigma n} m$.
\end{lemma}
\begin{lemma}[Distributivity of Composition]
If $n < |\vect \Gamma|$,
then $\Ltotal {\vect \sigma \circ \vect \delta} n = \Ltotal {\vect \delta}{\Ltotal \vect \sigma n}$
and $\trunc {(\vect \sigma \circ \vect \delta)} n = (\trunc \vect \sigma n) \circ (\trunc {\vect \delta} {\Ltotal \vect \sigma n })$.
\end{lemma}
These properties will be used in \Cref{sec:presheaf}.
Later we will define other instances of truncation and truncation offset, but all
these instances satisfy the properties listed here. Therefore, these properties
sufficiently characterize an algebra of truncation and truncation offset.
|
2,869,038,156,673 | arxiv | \section{Introduction}
While language is often mistakenly perceived as a stable, unchanging structure, it is in fact constantly evolving and adapting to the needs of its users. It is a well-researched fact that some words and phrases can change their meaning completely over a long period of time. The word \textit{gay}, which was a synonym for cheerful until the 2\textsuperscript{nd} half of the 20\textsuperscript{th} century, is just one of the examples found in the literature. On the other hand, we have only recently begun to research and measure more subtle semantic changes that occur over much shorter time periods. These changes reflect a sudden change in language use due to shifts in the political and cultural sphere or due to the localization of language use in somewhat closed communities.
The study of how word meanings change in time has a long tradition \cite{bloomfield1933language}, but it has only recently seen a surge in popularity and quantity of research due to recent advances in modelling semantic relations with word embeddings \cite{mikolov2013efficient} and the increased availability of textual resources. The current state-of-the-art in modelling semantic relations are contextual embeddings \cite{devlin2018bert,peters2018deep}, where the idea is to generate a different vector for each context a word appears in, i.e., for each specific word occurrence. This solves the problems with word polysemy, and employing this type of embeddings has improved the state-of-the-art on a number of natural language understanding tasks. However, contextual embeddings have not yet been widely employed in the discovery of diachronic semantic shifts.
In this study, we present a novel method that relies on contextual embeddings to generate time specific word representations that can be leveraged for the purpose of diachronic semantic shift detection\footnote{Code is available at \url{https://gitlab.com/matej.martinc/semantic_shift_detection}.}. We also show that the proposed approach has the following advantages over existing state-of-the-art methods:
\begin{itemize}
\item It shows performance comparable to the previous state-of-the-art in detecting short-term semantic shifts, without requiring the time-consuming domain adaptation on a very large corpus that was employed in previous studies.
\item It enables the detection and comparison of semantic shifts in a multilingual setting, which has, to our knowledge, never been done automatically before, and will facilitate the research of differences and similarities in how word meanings change in different languages and cultures.
\end{itemize}
The paper is structured as follows. We address the related work on diachronic semantic shift detection in Section \ref{sec:relatedWork}. We describe the methodology and corpora used in our research in Section \ref{sec:methodology}. The conducted experiments and results are presented in Section \ref{sec:experiments}. Conclusions and directions for further work are presented in Section~\ref{sec:conclusion}.
\section{Related Work}
\label{sec:relatedWork}
If we take a look at research on diachronic semantic shift, we can identify two distinct trends: (1) a shift from raw word frequency methods to methods that leverage dense word representations, and (2) a shift from long-term semantic shifts (spanning decades or even centuries) to short-term shifts spanning years at most.
Earlier studies in detecting semantic shift and linguistic change \cite{juola2003time,hilpert2008assessing} used raw word frequency methods; these are being replaced by methods that leverage dense word representations. The study by \newcite{kim2014temporal} was arguably the first to employ word embeddings, or more specifically, the Continuous Skipgram model proposed by \newcite{mikolov2013efficient}, while the first research to show that these methods can outperform frequency-based methods by a large margin was conducted by \newcite{kulkarni2015statistically}.
In the latter method, separate word embedding models are trained for each of the time intervals. Since embedding algorithms are inherently stochastic and the resulting embedding sets are invariant under rotation, vectors from these models are not directly comparable and need to be aligned in a common space \cite{kutuzov2018diachronic}.
To solve this problem, \newcite{kulkarni2015statistically} first suggested a simple linear transformation for projecting embeddings into a common space. \newcite{zhang2016past} improved this approach by proposing the use of an additional set of nearest neighbour words from different models that could be used as anchors for alignment. Another approach was devised by \newcite{eger2017linearity}, who proposed second-order embeddings (i.e., embeddings of word similarities) for model alignment, and it was \newcite{hamilton2016cultural} who showed that these two methods can complement each other.
Since imperfect aligning can negatively affect semantic shift detection, the newest methods try to avoid it altogether. \newcite{rosenfeld2018deep} presented an approach, where the embedding model is trained on word and time representations, treating the same words in different time periods as different tokens. Another solution to avoid alignment is the incremental model fine-tuning, where the model is first trained on the first time period and saved. The weights of this initial model are used for the initialization of the model trained on the next successive time period. The described step of incremental weight initialization is repeated until the models for all time periods are trained. This procedure was first proposed by \newcite{kim2014temporal} and made more efficient by \newcite{peng2017incrementally}, who suggested to replace the softmax function for the Continuous bag-of-word and Continuous skipgram models with a more efficient hierarchical softmax, and by \newcite{kaji2017incremental}, who proposed an incremental extension for negative sampling.
Recently, a new type of embeddings called contextual embeddings has been introduced. ELMo (Embeddings from Language Models) by \newcite{peters2018deep} and BERT (Bidirectional Encoder Representations from Transformers) by \newcite{devlin2018bert} are the most prominent representatives of this type. With contextual embeddings, a different vector is generated for each context a word appears in. These embeddings solve the problems with word polysemy but have not been widely used in studies concerning temporal semantic shifts. The only two temporal semantic shift studies we are aware of that used contextual BERT embeddings are reported in \newcite{hu-etal-2019-diachronic} and \newcite{giulianellilexical}.
In the study by \newcite{hu-etal-2019-diachronic}, contextualised BERT embeddings were leveraged to learn a representation for each word sense in a set of polysemic words. Initially, BERT is applied to a diachronic corpus to extract embeddings for tokens that closely match the predefined senses of a specific word. After that, a word sense distribution is computed at each successive time slice. By comparing these distributions, one is able to inspect the evolution of senses for each target word.
In the study by \newcite{giulianellilexical}, word meaning is considered as ``inherently under determined and contingently modulated in situated language use'', meaning that each appearance of a word represents a different word usage. The main idea of the study is to determine how word usages vary through time. First, they fine-tune the BERT model on the entire corpus for domain adaptation and after that they perform diachronic fine-tuning, using the incremental training approach proposed by \newcite{kim2014temporal}. After that, the word usages for each time period are clustered with the K-means clustering algorithm and the resulting clusters of different word usages are compared in order to determine how much the word usage changes through time.
The second trend in diachronic semantic change research is a slow shift of focus from researching long-term semantic shifts spanning decades or even centuries to short-term shifts spanning years at most \cite{kutuzov2018diachronic}. For example, an earlier study by \newcite{sagi2011tracing} investigated differences in the use of English spanning centuries by using the Helsinki corpus \cite{rissanen1993helsinki}. The trend of researching long-term shifts continued with \newcite{eger2017linearity} and \newcite{hamilton2016diachronic}, who both used the Corpus of Historical American English (COHA)\footnote{\url{http://corpus.byu.edu/coha}}. In order to test whether existing methods could be applied to detect short-term semantic changes in language, newer research focuses more on tracing short-term socio-cultural semantic shifts. \newcite{kim2014temporal} analyzed yearly changes of words in the Google Books Ngram corpus, and \newcite{kulkarni2015statistically} analyzed Amazon Movie Reviews, where spans were one year long, and Tweets, where change was measured in months. The most recent exploration of meaning shift over short periods of time that we are aware of was conducted by \newcite{del2019short}, who measured changes of word meaning in online Reddit communities by employing the incremental fine-tuning approach proposed by \newcite{kim2014temporal}.
\section{Methodology}
\label{sec:methodology}
In this section, we present the methodology of the proposed approach by explaining how we obtain time period specific word representations, on which corpora the experiments are conducted, and how we evaluate the approach.
\subsection{Time specific word representations}
\label{sec:time_rep}
Given a set of corpora containing documents from different time periods, we develop a method for locating words with different meaning in different time periods and for quantifying these meaning changes. Our methodology is similar to the approach proposed by \newcite{rosenfeld2018deep} since we both construct a time period specific word representation that represents a semantic meaning of a word in a distinct time period.
In the first step, we fine-tune a pretrained BERT language model for domain adaptation on each corpus presented in Section \ref{sec:corpora}. Note that we do not conduct any diachronic fine-tuning; therefore, our fine-tuning approach differs from the approach presented in \newcite{giulianellilexical}, where BERT contextual embeddings were also used, and also from other approaches from the related work that employ incremental fine-tuning \cite{kim2014temporal,del2019short}. The reason behind this lies in the contextual nature of the embeddings generated by the BERT model, which are by definition dependent on the time-specific context and therefore, in our opinion, do not require diachronic (time-specific) fine-tuning. We use the English BERT-base-uncased model with 12 attention layers and a hidden layer size of 768 for experiments on the English corpora, and the multilingual BERT-base-cased model for multilingual experiments\footnote{Although recently a variety of novel transformer language models emerged, some of them outperforming BERT \cite{yang2019xlnet,sun2019ernie}, BERT was chosen in this research due to the availability of the pretrained multilingual model which among other languages also supports Slovenian.}. In the multilingual setting, only one model is used for generating the time period specific word representations, rather than one for each of the two languages in our experiments, English and Slovenian. We opted for this method in order to generate word representations for both languages that do not need to be aligned in a common vector space and are directly comparable. We only conduct light text preprocessing on the LiverpoolFC corpus, where we remove URLs.
In the next step, we generate time specific representations of words. Each corpus is split into predefined time periods and a set of time specific subcorpora is created for each corpus. The documents from each of the time specific subcorpora are split into sequences of byte-pair encoding tokens \cite{kudo2018sentencepiece} of a maximum length of 256 tokens and fed into the fine-tuned BERT model. For each of these sequences of length $n$, we create a sequence embedding by summing the last four encoder output layers. The resulting sequence embedding of size $n$ times \textit{embeddings size} represents a concatenation of contextual embeddings for the $n$ tokens in the input sequence. By chopping it into $n$ pieces, we acquire a representation, i.e., a contextual token embedding, for each word usage in the corpus. Note that these representations vary according to the context in which the token appears, meaning that the same word has a different representation in each specific context (sequence). Finally, the resulting embeddings are aggregated on the token level (i.e., for every token in the corpus vocabulary, we create a list of all their contextual embeddings) and averaged, in order to get a time specific representation for each token in each time period.
Last, we quantitatively estimate the semantic shift of each target word between two time periods by measuring the cosine distance between the two time specific representations of the same token. This differs from the approach proposed by \newcite{giulianellilexical}, where clustering was used as an aggregation method and then the Jensen-Shannon divergence, a measure of similarity between probability distributions, was measured to quantify changes between word usages in different time periods.
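To make the pipeline concrete, the following minimal sketch (not our released implementation, for which see the repository linked above) illustrates the extraction, averaging and distance steps, assuming the HuggingFace \textit{transformers} and \textit{PyTorch} libraries and a recent \textit{transformers} version; \texttt{sequences\_2013} and \texttt{sequences\_2017} stand for hypothetical lists of raw text sequences from two time periods.
\begin{verbatim}
import torch
from collections import defaultdict
from scipy.spatial.distance import cosine
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased',
                                  output_hidden_states=True)
model.eval()

def time_specific_embeddings(sequences):
    # average the contextual embeddings of each token over a time slice
    sums, counts = defaultdict(lambda: 0), defaultdict(int)
    for seq in sequences:
        enc = tokenizer(seq, return_tensors='pt',
                        truncation=True, max_length=256)
        with torch.no_grad():
            hidden = model(**enc).hidden_states
        # sum the last four encoder layers: one vector per byte-pair token
        vecs = torch.stack(hidden[-4:]).sum(0).squeeze(0)
        toks = tokenizer.convert_ids_to_tokens(enc['input_ids'][0])
        for tok, vec in zip(toks, vecs):
            sums[tok] = sums[tok] + vec
            counts[tok] += 1
    return {t: sums[t] / counts[t] for t in sums}

emb_2013 = time_specific_embeddings(sequences_2013)
emb_2017 = time_specific_embeddings(sequences_2017)
# semantic shift estimate for one token
shift = cosine(emb_2013['roast'].numpy(), emb_2017['roast'].numpy())
\end{verbatim}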
Another thing to note is that for the experiments on the Brexit news corpus (see Section \ref{sec:corpora}), we conduct the same averaging procedure on the entire corpus (not just on the time specific subcorpus) in order to get a general (not just time specific) representation for each token in the corpus. These general representations of words are used to find the 50 most similar words to the word \textit{Brexit} (see Section \ref{sec:brexit} for further details).
Since the byte-pair input encoding scheme \cite{kudo2018sentencepiece} employed by the BERT model does not necessarily generate tokens that correspond to words but rather generate tokens that can sometimes correspond to subparts of words, we also propose the following \textit{on the fly} reconstruction mechanism that allows us to get word representations from byte pair tokens. If a word is split into more than one byte pair tokens, we take an embedding for each byte pair token constituting a word and build a word embedding by averaging these byte pair tokens. The resulting average is used as a context specific word representation.
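A sketch of this reconstruction step is given below; it assumes the WordPiece convention of the uncased BERT vocabulary, where continuation pieces are prefixed with \texttt{\#\#}, and takes the per-token vectors produced as in the previous sketch.
\begin{verbatim}
def merge_byte_pairs(tokens, vecs):
    # e.g. ('high', '##light', '##er') -> 'highlighter'
    words, word_vecs, word, acc = [], [], None, []
    for tok, vec in zip(tokens, vecs):
        if tok.startswith('##') and word is not None:
            word += tok[2:]       # continuation piece: extend current word
            acc.append(vec)
        else:                     # a new word starts here
            if word is not None:
                words.append(word)
                word_vecs.append(sum(acc) / len(acc))
            word, acc = tok, [vec]
    if word is not None:          # flush the last word
        words.append(word)
        word_vecs.append(sum(acc) / len(acc))
    return words, word_vecs
\end{verbatim}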
\subsection{Corpora}
\label{sec:corpora}
We used three corpora in our experiments, all of them covering short time periods of eight years or less. The statistics about the datasets are presented in Table \ref{tbl:datasets}.
\subsubsection{LiverpoolFC}
\label{sec:liverpool}
The LiverpoolFC corpus is used to compare our approach to the recent state-of-the-art approach proposed by \newcite{del2019short}. It contains 8 years of Reddit posts, more specifically posts from the LiverpoolFC subreddit for fans of the English football club. It was created for the task of short-term meaning shift analysis in online communities. The language use in the corpus is specific to a somewhat closed community, which means linguistic innovations are common and non-standard word interpretations are constantly evolving. This makes the corpus very appropriate for testing models for abrupt semantic shift detection.
We adopt the same procedure as the original authors and split the corpus into two time spans, the first one covering texts ranging from 2011 until 2013 and the second one containing texts from 2017.
\subsubsection{Brexit news}
We compiled the Brexit news corpus to test the ability of our model to detect relative semantic changes (i.e., how a specific word, in this case \textit{Brexit}, semantically correlates to other words in different time periods) and to test the method on consecutive yearly periods. The subject of Brexit was chosen due to its extensive news coverage over a longer period of time, which allows us to detect possible correlations between the actual events that occurred in relation to this topic and the semantic changes detected by the model. The corpus contains about 36.6 million tokens and consists of news articles (more specifically, their titles and content) about Brexit\footnote{Only articles that contain the word \textit{Brexit} in the title were used in the corpus creation.} from the RSS feeds of the following news media outlets: Daily Mail, BBC, Mirror, Telegraph, Independent, Guardian, Express, Metro, Times, Standard, Daily Star and the Sun. The corpus is divided into 5 time periods, the first one covering articles about Brexit before the referendum that occurred on June 23, 2016. The articles published after the referendum are split into 4 yearly periods. The yearly splits are made on June 24 each year, and the most recent time period contains only articles from June 24, 2019 until August 23, 2019. The corpus is unbalanced, with the time periods of 2016 and 2018 containing many more articles than the other splits due to more intensive news reporting. See Table \ref{tbl:datasets} for details.
\subsubsection{Immigration news}
The Immigration news corpus was compiled to test the ability of the model to detect relative semantic changes in a multilingual setting, something that has to our knowledge never been tried before. The main idea is to detect similarities and differences in semantic changes related to immigration in two distinct countries with different attitudes and historical experiences about this subject.
The topic of immigration was chosen due to the relevance of this topic for media outlets in both of the countries covered, England and Slovenia. The corpus consists of 6,247 English articles and 10,089 Slovenian news articles (more specifically, their titles and content) about immigration\footnote{The corpus contains English articles that contain the words \textit{immigration}, \textit{immigrant} or \textit{immigrants} in the title and Slovenian articles that contain Slovenian translations of these words in either title or content.}; it is balanced in terms of the number of tokens for each language and altogether contains about 12 million tokens. The English and Slovenian documents are combined and shuffled\footnote{Shuffling is performed to avoid the scenario where all English documents would be at the beginning of the corpus and all Slovenian documents at the end, which would negatively affect the language model fine-tuning.}, and after that the corpus is divided into 5 yearly periods (split on December 31). The English news articles were gathered from the RSS feeds of the same news media outlets as the news about Brexit, while the Slovenian news articles were gathered from the RSS feeds of the following Slovenian news media outlets: Slovenske novice, 24ur, Dnevnik, Zurnal24, Vecer, Finance and Delo.
\begin{table}[!h]
\begin{center}
\begin{tabularx}{\columnwidth}{|l|l|X|}
\hline
Corpus & Time period & Num. tokens (in millions)\\
\hline
LiverpoolFC & 2013 & 8.5 \\
LiverpoolFC & 2017 & 11.9 \\
LiverpoolFC & Entire corpus & 20.4 \\
\hline
Brexit news & 2011 - 23.6.2016 & 2.6 \\
Brexit news & 24.6.2016 - 23.6.2017 & 10.3\\
Brexit news & 24.6.2017 - 23.6.2018 & 6.2\\
Brexit news & 24.6.2018 - 23.6.2019 & 12.7\\
Brexit news & 24.6.2019 - 23.8.2019 & 2.4\\
Brexit news & Entire corpus & 36.6\\
\hline
Immigration news & 2015 & 2.2\\
Immigration news & 2016 & 2.6\\
Immigration news & 2017 & 2.6\\
Immigration news & 2018 & 2.6\\
Immigration news & 2019 & 1.9\\
Immigration news & Entire corpus & 11.9\\
\hline
\end{tabularx}
\caption{Corpora statistics.}
\label{tbl:datasets}
\end{center}
\end{table}
\subsection{Evaluation}
\label{sec:eval}
We evaluate the performance of the proposed approach for semantic shift detection by conducting quantitative and qualitative evaluation.
\subsubsection{Quantitative evaluation}
In order to get a quantitative assessment of the performance of the proposed approach, we leverage a publicly available evaluation set for semantic shift detection on the LiverpoolFC corpus \cite{del2019short}. The evaluation set contains 97 words from the corpus manually annotated with semantic shift labels by the members of the LiverpoolFC subreddit. 26 community members with domain knowledge but no linguistic background were asked to make a binary decision whether the meaning of the word changed between the two time spans (marked as 1) or not (marked as 0) for each of the words in the evaluation set. Each word received on average 8.8 judgements and the average of these judgements is used as a gold standard semantic shift index.
Positive examples of meaning shift in this evaluation set can be grouped into three classes according to the type of meaning shift. First are metonymic shifts, which are figures of speech in which a thing or concept is referred to by the name of something associated with it (e.g., the word \textit{F5}, which is initially used as a shortcut for refreshing a page and starts to denote any act of refreshing). Second are metaphorical shifts, where the original meaning of a word is widened through analogy (e.g., the word \textit{pharaoh}, which is the nickname of an Egyptian football player). Lastly, memes are semantic shifts that occur when a word first used in a humorous or sarcastic way prompts a notable change in the word's usage on a community scale (e.g., the first part of the player's surname \textit{Van Dijk} being used in jokes related to the shoes' brand \textit{Vans}).
We measure Pearson correlation between the semantic shift index and the model's semantic shift assessment for each of the words in the evaluation set in order to be able to directly compare our approach to the one presented in \newcite{del2019short}, where the same evaluation procedure was employed. As explained in Section \ref{sec:time_rep}, we obtain semantic shift assessments by measuring the cosine distance between two time specific representations of the same token.
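As an illustration, with the time specific representations from the earlier sketch this evaluation amounts to a few lines; here \texttt{gold} is a hypothetical dictionary mapping each annotated word to its gold standard semantic shift index.
\begin{verbatim}
from scipy.spatial.distance import cosine
from scipy.stats import pearsonr

words = sorted(gold)
dists = [cosine(emb_2013[w].numpy(), emb_2017[w].numpy())
         for w in words]
r, p = pearsonr([gold[w] for w in words], dists)
\end{verbatim}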
\subsubsection{Qualitative evaluation}
For the Brexit and Immigration news corpora, manually labeled evaluation sets are not available; therefore, we were not able to quantitatively assess the approach's performance on these two corpora. For this reason, the performance of the model on these two corpora is evaluated indirectly, by measuring how a specific word of interest semantically correlates to other seed words in a specific time period and how this correlation varies through time. The cosine distance between the time specific representation of a word of interest and that of a specific seed word is used as a measure of semantic relatedness. We can evaluate the performance of the model in a qualitative way by exploring whether the detected differences in semantic relatedness (i.e., relative semantic shifts) are in line with the occurrences of relevant events which affected the news reporting about Brexit and immigration, as well as with the findings of academic studies on these topics. This is possible because the topics of Brexit and immigration have been extensively covered in the news and several qualitative analyses on these subjects have been conducted.
The hypothesis that justifies this type of evaluation comes from structural linguistics and states that word meaning is a relational concept and that words obtain meaning only in relation to their neighbours \cite{matthews2001short}. According to this hypothesis, the change in the word's meaning is therefore expressed by the change in semantic relatedness to other neighbouring words. Neighbouring seed words to which we compare the word of interest for the Brexit news corpus are selected automatically (see Section \ref{sec:brexit} for details) while for the Immigration news corpus, the chosen seed words are concepts representing most common aspects of the discourse about immigration (see Section \ref{sec:immigration} for details).
\section{Experiments}
\label{sec:experiments}
In this section we present details about conducted experiments and results on the LiverpoolFC, Brexit and Immigration corpora.
\subsection{LiverpoolFC}
\begin{figure*}[!h]
\begin{center}
\includegraphics[scale=0.29]{liverpool_047_lrec_new.png}
\caption{Semantic shift index vs. cosine distance in the LiverpoolFC evaluation dataset.}
\label{fig.1}
\end{center}
\end{figure*}
In this first experiment, we offer a direct comparison of the proposed method to the state-of-the-art approach proposed by \newcite{del2019short}. In their study, they use the Continuous Skipgram model proposed by \newcite{mikolov2013efficient} and employ the incremental model fine-tuning approach first proposed by \newcite{kim2014temporal}. In the first step, they create a large Reddit corpus (with about 900 million tokens) containing Reddit posts from the year 2013 and use it for training the domain specific word embeddings. The embeddings of this initial model are used for the initialization of the model trained on the next successive time period, the LiverpoolFC 2013 posts, and finally, the embeddings of the LiverpoolFC 2013 model are used for the initialization of the model trained on the LiverpoolFC 2017 posts. We, on the other hand, do not conduct any additional domain adaptation on a large Reddit corpus and only fine-tune the BERT model on the LiverpoolFC corpus, as already explained in Section \ref{sec:methodology}.
First, we report the results of diachronic semantic shift detection for the 97 words from the LiverpoolFC corpus that were manually annotated with semantic shift labels by members of the LiverpoolFC subreddit (see Section \ref{sec:liverpool} for more details on the annotation and evaluation procedures). Overall, our proposed approach yields an almost identical positive correlation between the cosine distance of the 2013 and 2017 word representations and the semantic shift index as the research conducted by \newcite{del2019short}. We observe a Pearson correlation of 0.47 (p $<$ 0.001), while the original study reports a Pearson correlation of 0.49.
On the other hand, there are also some important differences between the two methods. Our approach (see Figure \ref{fig.1}) proves to be more conservative when it comes to measuring the semantic shift in terms of cosine distance. In the original approach, a cosine distance of up to 0.6 is measured for some of the words in the corpus, while we only observe differences in cosine distance of up to 0.3 (for the word \textit{roast}). This conservatism of the model results in fewer false positive examples (i.e., detected semantic shifts that were not observed by human annotators) compared to the original study, but also results in more false negative examples (i.e., unrecognised semantic shifts that were recognized by human annotators)\footnote{The expressions \textit{false positive} and \textit{false negative} are used here to improve the readability of the paper and should not be interpreted in the narrow context of binary classification.}. An example of a false negative detection by the system proposed by \newcite{del2019short} is the word \textit{lean}. An example of a false positive detection by the system proposed by \newcite{del2019short} that was correctly identified by our system as a word with unchanged semantic context is the word \textit{stubborn}. On the other hand, our system also manages to correctly identify some of the words that changed the most but were misclassified by the system proposed by \newcite{del2019short}. An example of this is the word \textit{Pharaoh}.
There are also some similarities between the two systems. For example, the word \textit{highlighter} is correctly identified as a word that changed meaning by both systems. With the exception of \textit{Pharaoh}, we also notice similar tendencies of both systems to misclassify as false negatives words that fit into the category of so-called metaphorical shifts (i.e., widening of the original meaning of a word through analogy). Examples of these words are \textit{snake}, \textit{thunder} and \textit{shovel}. One explanation for this misclassification, offered by \newcite{del2019short}, is the fact that many times the metaphoric usage is very similar to the literal one, therefore preventing the model from noticing the difference in meaning\footnote{For example, \textit{shovel} is used in a context where the team is seen as a train running through the season, and the fans' job is to contribute in a figurative way by shovelling coal into the train's boiler. Therefore, the word \textit{shovel} is used in sentences like \textit{You boys know how to shovel coal.}}.
\subsection{Brexit news}
\label{sec:brexit}
\begin{figure*}[!h]
\begin{center}
\includegraphics[scale=0.6]{brexit_lrec.png}
\caption{The relative diachronic semantic shift of the word \textit{Brexit} in relation to ten words that changed the most out of 50 closest words to \textit{Brexit} according to the cosine similarity.}
\label{fig.2}
\end{center}
\end{figure*}
Here we assess the performance of the proposed approach for detecting sequential semantic shifts of words in short-term yearly periods. More specifically, we explore how time specific seed word representations in different time periods change their semantic relatedness to the time specific word representation of the word \textit{Brexit}. The following procedure is conducted. First, we find the 50 words whose general, non-time specific representations are most semantically related to the general representation of \textit{Brexit}. Since initial experiments showed that many of the 50 most similar words are in fact derivatives of the word \textit{Brexit} (e.g., \textit{brexitday}, \textit{brexiters}...) and therefore not that relevant for the purpose of this study (as their meaning is fully dependent on the concept from which they derived), we first conduct an additional filtering step according to the normalized Levenshtein distance, defined as:
\[\mathit{normLD} = 1 - \mathit{LD} / \max(\mathit{len}(w1), \mathit{len}(w2)),\]
where $\mathit{normLD}$ stands for the normalized Levenshtein distance, $\mathit{LD}$ for the Levenshtein distance, $w1$ is \textit{Brexit} and $w2$ ranges over other words in the corpus. Words for which the normalized Levenshtein distance is bigger than 0.5 are discarded, and out of the remaining words we extract the 50 words most semantically related to \textit{Brexit} according to the cosine similarity.\footnote{The normalized Levenshtein distance threshold of 0.5 and the number of most semantically similar words were chosen empirically.}
Out of these 50 words, we find ten words that changed the most in relation to the time specific representation of the word \textit{Brexit} with the following equation:
\[\mathit{MC} = \left|\mathit{CS}(w1_{2015}, w2_{2015}) - \mathit{CS}(w1_{2019}, w2_{2019})\right|,\]
where \textit{MC} stands for meaning change, \textit{CS} stands for cosine similarity, $w1_{year}$ is a year specific representation of the word \textit{Brexit} and $w2_{year}$ are year specific representations of words related to \textit{Brexit}.
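The selection procedure can be sketched as follows, assuming the \texttt{python-Levenshtein} package, a hypothetical \texttt{vocab} list holding the corpus vocabulary, and a hypothetical helper \texttt{sim(w1, w2, period)} that returns the cosine similarity of the corresponding representations (with \texttt{period='all'} denoting the general, non-time specific representations):
\begin{verbatim}
from Levenshtein import distance as LD

def norm_LD(w1, w2):
    return 1 - LD(w1, w2) / max(len(w1), len(w2))

# drop derivatives of 'brexit', keep the 50 closest remaining words
candidates = [w for w in vocab if norm_LD('brexit', w) <= 0.5]
seeds = sorted(candidates, key=lambda w: sim('brexit', w, 'all'),
               reverse=True)[:50]
# the ten seed words whose relation to 'brexit' changed the most
changed = sorted(seeds, key=lambda w: abs(sim('brexit', w, 2015)
                                          - sim('brexit', w, 2019)),
                 reverse=True)[:10]
\end{verbatim}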
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.6]{immigrants_lrec.png}
\caption{The relative diachronic semantic shift of the word \textit{immigration} in relation to English-Slovenian word pairs crime-kriminal, economy-gospodarstvo, integration-integracija and politics-politika.}
\label{fig.3}
\end{center}
\end{figure*}
The resulting 10 seed words are used to determine the relative diachronic semantic shift of the word \textit{Brexit}, as explained in Section \ref{sec:eval}. Figure \ref{fig.2} shows the results of the experiments. We can see that the word \textit{deal} is becoming ever more related to \textit{Brexit}, from a cosine similarity to the word \textit{Brexit} of about 0.67 in 2015 to a cosine similarity of about 0.77 in 2019. This is consistent with our expectations. The biggest overall difference of about 0.14 in semantic relatedness can be observed for the word \textit{globalisation}, which was not very related to \textit{Brexit} before the referendum in 2016 (with a cosine similarity of about 0.52) and then became very related to the word \textit{Brexit} in the year after the referendum (with a cosine similarity of about 0.72). After that, we can observe another drop in similarity in the following two years and then once again a rise in similarity in 2019. This movement could be at least partially explained by the post-referendum debate on whether the UK's \textit{Leave} vote could be seen as a vote against globalisation \cite{coyle2016brexit}.
A sudden rise in semantic relatedness between the words \textit{Brexit} and \textit{devolution} in the years 2016 and 2017 could be explained by the still quite relevant question of how the UK's withdrawal from the EU will affect its structures of power and administration \cite{hazell2016brexit}. We can also observe a quite sudden drop in semantic relatedness between the words \textit{Brexit} and \textit{austerity} in 2017, one year after the referendum. It is possible that the debate on whether the UK's \textit{Leave} vote was caused by austerity-induced welfare reforms proposed by the UK government in 2010 \cite{fetzer2019did} has been calming down. Another interesting thing to note is the enormous drop of about 0.25 in cosine similarity for the word \textit{debacle} after June 23, 2019, a word which had gained the most in terms of semantic relatedness to the word \textit{Brexit} in 2018. It is possible that this gain is related to the constant delays in the UK's attempts to leave the EU.
Some findings of the model are harder to explain. For example, according to the model, talk about \textit{renegotiating} in the context of \textit{Brexit} was not very common in the years 2015 and 2016, and then we can see a major rise of about 0.15 in cosine similarity in 2017. On the other hand, the almost identical word \textit{renegotiation} has kept a very steady cosine similarity of about 0.72 throughout the entire five-year period. We also do not have an explanation for the large drop in semantic relatedness in 2019 between the words \textit{chequers} and \textit{Brexit}, and \textit{climate} and \textit{Brexit}.
\subsection{Immigration news}
\label{sec:immigration}
Here we assess the performance of the proposed approach in a multilingual English-Slovenian setting. Since the main point of these experiments is to detect differences and similarities in relative semantic shifts in two distinct languages, we first define English-Slovenian word pairs that arguably represent some of the most common aspects of the discourse about immigration \cite{martinez2000immigration,borjas1995economic,heckmann2016integration,cornelius2005immigration}. These English-Slovenian matching translations are \textit{crime-kriminal}, \textit{economy-gospodarstvo}, \textit{integration-integracija} and \textit{politics-politika}. We measure the cosine similarity between the time specific vector representations of each word in the word pair and the time specific vector representation of the word \textit{immigration}.
The results of the experiments are presented in Figure \ref{fig.3}. The first thing one can note is that in most cases and in most years the English and Slovenian parts of a word pair have a very similar semantic correlation to the word \textit{immigration}, which suggests that the discourse about immigration is quite similar in both countries. The similarity is most apparent for the word pair \textit{crime-kriminal} and, to a slightly lesser extent, for the word pair \textit{politics-politika}. On the other hand, not much similarity in relation to the word \textit{immigration} can be observed for the English and Slovenian words for economy. This could be partially explained by the fact that Slovenia is usually not a final destination for modern-day immigrants (who therefore do not have any economic impact on the country) and serves more as a transit country \cite{garb2018coping}; therefore, immigration is less likely to be discussed from the economic perspective.
Figure \ref{fig.3} also shows some interesting language specific yearly meaning shifts. The first one is the rise in semantic relatedness between the word \textit{immigration} and the English word \textit{politics} in 2016. This could perhaps be related to the Brexit referendum, which occurred in the middle of 2016; the topic of \textit{immigration} was discussed extensively by politicians from both sides of the political spectrum during the referendum campaign.
Another interesting yet currently unexplainable yearly shift concerns the Slovenian and English words for \textit{integration} in 2019. While there is a distinct fall in semantic relatedness between the words \textit{integration} and \textit{immigration}, we can, on the other hand, observe a distinct rise in semantic relatedness between the words \textit{integracija} and \textit{immigration}.
\section{Conclusion}
\label{sec:conclusion}
We presented research on how contextual embeddings can be leveraged for the task of diachronic semantic shift detection. A new method that uses BERT embeddings for creating time specific word representations was proposed, and we showcased the performance of the new approach on three distinct corpora: LiverpoolFC, Brexit news and Immigration news.
The proposed method shows comparable performance to the state-of-the-art on the LiverpoolFC corpus, even though domain adaptation was performed only on the corpus itself and no additional resources (as was the case in the study by \newcite{del2019short}) were required. This shows that the semantic knowledge that the BERT model acquired during its pretraining phase can be successfully transferred to domain specific corpora. This is welcome from the standpoint of reduced time complexity (since training BERT or most other embedding models from scratch is very time consuming), and it also makes our proposed method appropriate for detecting meaning shifts in domains for which large corpora are not available.
Experiments on the Brexit news corpus are also encouraging, since the detected relative semantic shifts are somewhat in line with the occurrence of different events which affected the news reporting about Brexit in different time periods. The same could be said for the multilingual experiments conducted on the English-Slovenian Immigration news corpus, which are, to our knowledge, the first attempt to compare parallel meaning shifts in two different languages and which open new paths for multilingual news analysis.
On the other hand, a lot of further work still needs to be done. While the results on the Brexit and Immigration news corpora are encouraging, a more thorough evaluation of the approach is needed. This could be done either in comparison to a qualitative discourse analysis or by a quantitative manual evaluation, in which changes detected by the proposed method would be compared to changes identified by human experts with domain knowledge, similarly to \newcite{del2019short}.
The method itself could also be refined or improved in some aspects. While we demonstrated that averaging the embeddings of word occurrences in order to get time specific word representations works, we did not experiment with other aggregation techniques, such as taking the median word representation instead of the average or using weighted averages. Another option would be to further develop clustering aggregation techniques, similar to those in \newcite{giulianellilexical}. While these methods are far more computationally demanding and less scalable than averaging, they do have the advantage of better interpretability, since the clustering of word usages into a set of distinct clusters resembles the manual approach of choosing a word's contextual meaning from a set of predefined meanings.
\section{Acknowledgements}
This paper is supported by European Union’s Horizon 2020 research and innovation programme under grant agreement No. 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The authors acknowledge also the financial support from the Slovenian Research Agency for research core funding for the programme Knowledge Technologies (No. P2-0103) and the project TermFrame - Terminology and Knowledge Frames across Languages (No. J6-9372). We would also like to thank Lucija Luetic for useful discussions.
\section{Bibliographical References}
\label{reference}
\bibliographystyle{lrec}
|
2,869,038,156,674 | arxiv | \section{Introduction} \label{sec:intro}
A quandle~\cite{DJoyce, Matveev} is a non-associative algebraic structure whose definition is motivated by knot theory, and it provides an axiomatization of the Reidemeister moves for link diagrams. The fundamental quandle of a link is a quandle that can be computed from any diagram of the link.
Joyce~\cite{DJoyce} and Matveev~\cite{Matveev} proved that the fundamental quandle is a stronger invariant of knots and links than the knot group. Quandles have been used to define other useful invariants of oriented links (see~\cite{CEGS, CJKLS, ChoNelson, ElhamdadiNelson, GN} and others). For example, the quandle counting invariant~\cite{ElhamdadiNelson} is the size of the set of all homomorphisms from the fundamental quandle of an oriented link to a finite quandle. More recently, Cho and Nelson~\cite{ChoNelson} introduced the notion of a quandle coloring quiver of an oriented link $K$ with respect to a finite quandle $\mathcal{X}$, which is a directed graph that allows multiple edges and loops, and enhances the quandle counting invariant of $K$ by $\mathcal{X}$.
In this paper, we study quandle coloring quivers of $(p,2)$-torus links by dihedral quandles. The paper is organized as follows. In Sec.~\ref{sec:prelims} we review some basic concepts about torus links and quandles. In Sec.~\ref{ssec:2.2} we also prove a couple of statements about the coloring space of $(p, 2)$-torus links by dihedral quandles, as they will be needed later in the paper. Sec.~\ref{sec:quivers} is the heart of the paper; here we review the definition of quandle coloring quivers of oriented links with respect to finite quandles and study the quivers of $(p, 2)$-torus links with respect to dihedral quandles.
\section{Preliminaries} \label{sec:prelims}
In order to have a self-contained paper, we begin by reviewing relevant definitions and concepts.
\subsection{Basic definitions} \label{sses:2.1}
A knot or link $K$ is called a \textit{$(p, q)$-torus knot or link} if it lies, without any points of intersection, on the surface of a trivial torus in $\mathbb{R}^3$. The integers $p$ and $q$ represent the number of times $K$ wraps around the meridian and, respectively, the longitude of the torus. We denote a $(p, q)$-torus knot or link by $\mathcal{T}(p,q)$. It is known that $\mathcal{T}(p,q)$ is a knot if and only if $\gcd(p, q) = 1$ and $p \not = 1 \not = q$, and that it is a link with $d$ components if and only if $\gcd(p, q) = d$. Moreover, $\mathcal{T}(p,q)$ is the trivial knot if and only if either $p$ or $q$ is equal to $1$ or $-1$. It is also known that the $(p, q)$-torus link is isotopic to the $(q, p)$-torus link. The $(-p, -q)$-torus link is the $(p, q)$-torus link with reverse orientation, and thus torus links are invertible. For simplicity, we will assume that $p, q >0$. The mirror image of $\mathcal{T}(p,q)$ is $\mathcal{T}(p, - q)$ and a non-trivial torus-knot is chiral.
We use the term link to refer generically to both knots and links. It is known that the crossing number of a torus link with $p, q >0$ is given by $c = \text{min}((p-1)q, (q-1)p)$. The link $\mathcal{T}(p,q)$ can also be represented as the closure of the braid $(\sigma_1 \sigma_2 \dots \sigma_{p-1})^q$. Since $\mathcal{T}(p,q)$ is isotopic to $\mathcal{T}(q, p)$, this torus link also has a diagram which is the closure of the braid $(\sigma_1 \sigma_2 \dots \sigma_{q-1})^p$.
The simplest nontrivial torus knot is the trefoil knot, $\mathcal{T}(3,2)$, depicted in Fig.~\ref{Trefoil}. For more details on torus knots and links, we refer the reader to the books~\cite{Kawauchi, Lickorish, Livingston, Murasugi}.
\begin{figure}[ht]
\includegraphics[height=1in]{TrefoilQuandleLabelled.pdf}
\caption{Oriented torus knot $\mathcal{T}(3,2)$}
\label{Trefoil}
\end{figure}
The term quandle is attributed to Joyce in his 1982 work~\cite{DJoyce} as an algebraic structure whose axioms are motivated by the Reidemeister moves.
A \textit{quandle}~\cite{DJoyce, Kamada, Matveev} is a non-empty set $\mathcal{X}$ together with a binary operation $\triangleright: \mathcal{X} \times \mathcal{X} \to \mathcal{X}$ satisfying the following axioms:
\begin{enumerate}
\item For all $x \in \mathcal{X}$, $x \triangleright x = x$.
\item For all $y \in \mathcal{X}$, the map $\beta_y: \mathcal{X} \to \mathcal{X}$ defined by $\beta_y(x) = x \triangleright y$ is invertible.
\item For all $x,y,z \in \mathcal{X}$, $(x \triangleright y) \triangleright z = (x \triangleright z ) \triangleright (y \triangleright z)$.
\end{enumerate}
In this case, we use the notation $(\mathcal{X},\triangleright)$ or $(\mathcal{X},\triangleright_{\mathcal{X}})$, if we need to be specific about the quandle operation.
A \emph{quandle homomorphism}~\cite{Inoue} between two quandles $(\mathcal{X},\triangleright_{\mathcal{X}})$ and $(\mathcal{Y},\triangleright_{\mathcal{Y}})$ is a map $f \co \mathcal{X} \to \mathcal{Y}$ such that $f(a\triangleright_{\mathcal{X}} b) = f(a) \triangleright_{\mathcal{Y}} f(b)$, for any $a,b \in \mathcal{X}$.
It is not hard to see that the composition of quandle homomorphisms is again a quandle homomorphism. We note that the third axiom of a quandle implies that for all $y \in \mathcal{X}$, the map $\beta_y$ is an automorphism of $\mathcal{X}$. Moreover, by the first axiom of a quandle, the map $\beta_y$ fixes $y$.
\begin{example} \label{dihedral quandle2}
Let $n \in \mathbb{Z}, n \ge2, \mathcal{X} = \mathbb{Z}_n = \{0, 1, \dots, n-1\}$ and define $x \triangleright y = (2y - x) \mod n$, for all $x,y \in \mathbb{Z}_n$. That is, $x \triangleright y$ is the remainder of $2y-x$ upon division by $n$. Then $\mathbb{Z}_n$ together with this operation $\triangleright$ is a quandle, called the \textit{dihedral quandle} (see~\cite{ElhamdadiNelson, MTakasaki}).
\end{example}
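As a quick computational sanity check (a Python sketch, not part of the mathematical development), the three quandle axioms can be verified directly for a small dihedral quandle, here $\mathbb{Z}_5$:
\begin{verbatim}
n = 5
op = lambda x, y: (2 * y - x) % n          # x |> y in Z_n
X = range(n)
assert all(op(x, x) == x for x in X)                     # axiom (1)
assert all(len({op(x, y) for x in X}) == n for y in X)   # axiom (2)
assert all(op(op(x, y), z) == op(op(x, z), op(y, z))     # axiom (3)
           for x in X for y in X for z in X)
\end{verbatim}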
\begin{example}\label{fundamental quandle}
To every oriented link diagram $K$, we can naturally associate its \textit{fundamental quandle} (also referred to as the \textit{knot quandle}), denoted by $Q(K)$. Given a diagram $D$ of $K$, its crossings divide $D$ into arcs and we use a set of labels to label all of these arcs. The fundamental quandle $Q(D)$ is the quandle generated by the set of arc-labels together with the crossing relations given by the operation $\triangleright$ explained in Fig.~\ref{QuandleCrossing}.
\begin{figure}[ht]
\includegraphics[height = 1.2in]{OrientedKnotQuandle.pdf}
\caption{Labelling a knot crossing in a fundamental quandle}
\label{QuandleCrossing}
\end{figure}
\end{example}
The axioms of a quandle encode the Reidemeister moves, and thus the fundamental quandle is independent of the diagram $D$ representing the oriented link $K$. Hence, $Q(K): = Q(D)$ is a link invariant. It was shown independently by Joyce~\cite{DJoyce} and Matveev~\cite{Matveev} that the fundamental quandle is a stronger invariant than the fundamental group of a knot or link. In fact, the fundamental quandle is a complete invariant, in the sense that if $Q(K_1)$ and $Q(K_2)$ are isomorphic quandles, then the (unoriented) links $K_1$ and $K_2$ are equivalent (up to chirality). However, determining whether or not two quandles are isomorphic is not an easy task.
\begin{remark}\label{fundamental quandle of torus knot}
We consider the fundamental quandle of a $(p,2)$-torus link. By arranging the quandle relations of the fundamental quandle in a particular way, we can present $Q(\mathcal{T}(p,2))$ in a recursive fashion.
We know that a $(p,2)$-torus link can be represented as the closure of the braid $\sigma_1^p$, which is also a minimal diagram for the $(p,2)$-torus link.
There are $p$ crossings and $p$ arcs labeled $\{x_1, x_2, \dots, x_p\}$ in such a diagram of $\mathcal{T}(p,2)$. Considering the braid $\sigma_1^p$, we label its arcs starting from the bottom right with $x_1$, then moving to the bottom left arc to label it $x_2$, and continuing up along the braid using labels $x_3, x_4, \dots, x_p$, as shown in Fig.~\ref{FundQuandle torus knot}. We obtain the following presentation for the fundamental quandle of the $(p,2)$-torus link:
\begin{center}
$Q(\mathcal{T}(p,2)) =\langle x_1,x_2,x_3,\dots,x_p \, | \, x_2\triangleright x_1 = x_p, x_1\triangleright x_p = x_{p-1},$ and \\
$ \hspace{6cm} x_i \triangleright x_{i-1} = x_{i-2}, \text{ for all } 3 \leq i \leq p\rangle$.
\end{center}
\end{remark}
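For example, for $p = 3$ the diagram in Fig.~\ref{FundQuandle torus knot} is the trefoil knot of Fig.~\ref{Trefoil}, and the presentation above becomes
\begin{center}
$Q(\mathcal{T}(3,2)) =\langle x_1,x_2,x_3 \, | \, x_2\triangleright x_1 = x_3, \ x_1\triangleright x_3 = x_2, \ x_3 \triangleright x_2 = x_1\rangle$.
\end{center}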
\begin{figure}[ht]
\includegraphics[height = 2in]{p_2TorusKnot.pdf}
\put(-20, 10){\fontsize{10}{10}$x_1$}
\put(-58, 10){\fontsize{10}{10}$x_2$}
\put(-20, 27){\fontsize{10}{10}$x_2$}
\put(-20, 45){\fontsize{10}{10}$x_3$}
\put(-20, 63){\fontsize{10}{10}$x_4$}
\put(-25, 90){\fontsize{10}{10}$x_{p-2}$}
\put(-25, 107){\fontsize{10}{10}$x_{p-1}$}
\put(-25, 125){\fontsize{10}{10}$x_{p}$}
\put(-28, 146){\fontsize{10}{10}$x_{1}$}
\put(-58, 146){\fontsize{10}{10}$x_{2}$}
\caption{A diagram of a $(p, 2)$-torus link}
\label{FundQuandle torus knot}
\end{figure}
When classifying links, it is useful to count the number of quandle homomorphisms from the fundamental quandle of a link to a fixed quandle. For example, Inoue~\cite{Inoue} showed that the number of all quandle homomorphisms from the fundamental quandle of a link to an Alexander quandle~\cite{ElhamdadiNelson, Inoue} has a module structure and that it is determined by the series of the Alexander polynomials of the link.
A \emph{quandle coloring} of an oriented link $K$ with respect to a finite quandle $\mathcal{X}$ (also called an \emph{$\mathcal{X}$-quandle coloring}) is given by a quandle homomorphism $\psi: Q(K) \to \mathcal{X}$. In this case, we refer to $\mathcal{X}$ as the coloring quandle. We let $Hom(Q(K), \mathcal{X})$ be the collection of all $\mathcal{X}$-quandle colorings and we call it the \textit{coloring space} of $K$ with respect to $\mathcal{X}$. Finally, the cardinality $|Hom(Q(K), \mathcal{X})|$ of the coloring space is called the \textit{quandle counting invariant with respect to $\mathcal{X}$} (see~\cite{ChoNelson}).
When a quandle coloring consists of a single color, we use the term \emph{trivial coloring}, and make use of the term \emph{nontrivial coloring} otherwise.
\subsection{The coloring space of $(p, 2)$-torus links with respect to dihedral quandles} \label{ssec:2.2}
In this paper we are concerned with $(p, 2)$-torus links, and we need to understand the coloring space of these links with respect to dihedral quandles. The following proposition and theorem are our first results.
\begin{proposition}\label{coloring lemma}
Let $\mathcal{T}(p,2)$ be a torus link and $\mathbb{Z}_n$ a dihedral quandle. Consider any two fixed and consecutive arcs of $\mathcal{T}(p,2)$ with labels $x_1,x_2\in Q(\mathcal{T}(p,2))$. If $\psi \in Hom(Q(\mathcal{T}(p,2)),\mathbb{Z}_n)$, then $p\psi(x_2)\equiv p\psi(x_1) \text{ (mod }n)$.
\end{proposition}
\begin{proof}
By Remark \ref{fundamental quandle of torus knot}, the fundamental quandle of the torus link $\mathcal{T}(p,2)$ has the following presentation:
\begin{center}
$Q(\mathcal{T}(p,2)) =\langle x_1,x_2,x_3,\dots,x_p \, | \, x_2\triangleright x_1 = x_p, x_1\triangleright x_p = x_{p-1},$ and \\
$ \hspace{6cm} x_i \triangleright x_{i-1} = x_{i-2}, \text{ for all } 3 \leq i \leq p\rangle$.
\end{center}
For any $\mathbb{Z}_n$-quandle coloring $\psi \in Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ and $x_i,x_j \in Q(\mathcal{T}(p,2))$, we must have that
$\psi(x_i\triangleright x_j) = \psi(x_i)\triangleright' \psi(x_j)$, where $\triangleright$ and $\triangleright'$ are the quandle operations for $Q(\mathcal{T}(p,2))$ and $\mathbb{Z}_n$, respectively.
In particular,
\begin{align*}
\psi(x_p) &= \psi(x_2\triangleright x_1) = \psi(x_2)\triangleright'\psi(x_1) = (2\psi(x_1)-\psi(x_2))\text{ mod }n, \\
\psi(x_{p-1}) &= \psi(x_1\triangleright x_p) = \psi(x_1)\triangleright'\psi(x_p) = (2\psi(x_p)-\psi(x_1))\text{ mod }n.
\end{align*}
Moreover, for all $3 \leq i \leq p$, the following holds:
\[ \psi(x_{i-2}) = \psi(x_i\triangleright x_{i-1}) = \psi(x_i)\triangleright'\psi(x_{i-1}) = (2\psi(x_{i-1})-\psi(x_i))\text{ mod }n .\]
Using these statements inductively, we obtain:
\begin{align*}
\psi(x_2) &=(2\psi(x_3)-\psi(x_4)) \text{ mod }n \\
&=(2(2\psi(x_4)-\psi(x_5))-\psi(x_4)) \text{ mod }n \\
&= (3\psi(x_4)-2\psi(x_5))\text{ mod }n \\
&=(3(2\psi(x_5)-\psi(x_6))-2\psi(x_5)) \text{ mod }n \\
&= (4\psi(x_5)-3\psi(x_6))\text{ mod }n \\
&\hspace{1.5in}\vdots \\
&= ((p-1)\psi(x_p)-(p-2)\psi(x_1))\text{ mod } n \\
&= ((p-1)(2\psi(x_1)-\psi(x_2))-(p-2)\psi(x_1))\text{ mod } n \\
&= (p\psi(x_1)-(p-1)\psi(x_2))\text{ mod } n.
\end{align*}
It follows that $p\psi(x_2)\equiv p\psi(x_1) \text{ (mod }n).$
\end{proof}
We are now ready to prove the following theorem about the coloring space of $(p,2)$-torus links with respect to dihedral quandles, which will be used extensively in Sec.~\ref{sec:quivers}.
\begin{theorem}\label{torus coloring theorem}
For a given torus link $\mathcal{T}(p,2)$ and $\mathbb{Z}_n$ a dihedral quandle, the following holds:
(i) If $\gcd(p,n) = 1$, then $|Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)| = n$ and the coloring space $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ is the collection of all trivial $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$.
(ii) If $\gcd(p,n) = c \neq1$, then $|Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)| = nc$, and the coloring space $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ is the union of all $n$ trivial quandle colorings together with a collection of $n(c-1)$ nontrivial $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$.
\end{theorem}
\begin{proof}
Let $\psi\in Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$. By Proposition~\ref{coloring lemma},
\[p\psi(x_2)\equiv p\psi(x_1) \text{ (mod }n).\]
(i) If $\gcd(p,n) = 1$, then the above congruence simplifies to
\[\psi(x_2)\equiv \psi(x_1) \text{ (mod }n).\]
The latter can only occur when $\psi$ maps $x_1$ and $x_2$ to the same color. Using the relations of $Q(\mathcal{T}(p,2))$, as described in Remark~\ref{fundamental quandle of torus knot}, this forces the quandle homomorphism $\psi$ to color every arc in the diagram of $\mathcal{T}(p,2)$ the same. Since there are $n$ quandle colorings to choose from, $|Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)| = n$, and $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ is the collection of all trivial quandle colorings of $\mathcal{T}(p,2)$ with respect to $\mathbb{Z}_n$.
(ii) If $\gcd(p,n)=c\neq 1$, then the congruence $p\psi(x_2)\equiv p\psi(x_1) \text{ (mod }n)$ simplifies to the following:
\[ \psi(x_2)\equiv \psi(x_1) \text{ }\left(\text{mod }\frac{n}{c}\right).\]
Equivalently, $\psi(x_2) \equiv \psi(x_1)+ k\frac{n}{c}\text{ }(\text{mod }n)$ for some integer $k$, where $0 \leq k \leq c-1$. Hence, for each fixed color $\psi(x_1) \in \mathbb{Z}_n$, there are $c$ possibilities for $\psi(x_2)$ in $\mathbb{Z}_n$ such that $p\psi(x_2)\equiv p\psi(x_1) \text{ (mod }n)$. Since there are $n$ choices for $\psi(x_1)\in\mathbb{Z}_n$, we conclude that there are $nc$ $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$. Moreover, since there are $n$ trivial quandle colorings with respect to $\mathbb{Z}_n$, the remaining $n(c-1)$ $\mathbb{Z}_n$-quandle colorings must be nontrivial quandle colorings of the link $\mathcal{T}(p,2)$ with respect to $\mathbb{Z}_n$.
\end{proof}
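As a quick sanity check on these counts, the same enumeration can be used to tally trivial (constant) and nontrivial colorings; the following sketch (ours, with the same conventions as the previous one) reproduces the sizes $n$ and $n(c-1)$ from Theorem~\ref{torus coloring theorem}:
\begin{verbatim}
from math import gcd

def count_colorings(p, n):
    trivial = nontrivial = 0
    for c1 in range(n):
        for c2 in range(n):
            c = [c1, c2]
            for _ in range(p - 2):
                c.append((2 * c[-1] - c[-2]) % n)
            if c[-1] == (2 * c1 - c2) % n and c[-2] == (2 * c[-1] - c1) % n:
                if len(set(c)) == 1:
                    trivial += 1
                else:
                    nontrivial += 1
    return trivial, nontrivial

for p in range(3, 10):
    for n in range(3, 10):
        t, nt = count_colorings(p, n)
        c = gcd(p, n)
        assert t == n and nt == (0 if c == 1 else n * (c - 1))
\end{verbatim}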
\section{Quandle coloring quivers of $(p,2)$-torus links with respect to dihedral quandles} \label{sec:quivers}
A \textit{quandle enhancement} is an invariant for oriented links that captures the quandle counting invariant and in general is a stronger invariant. For some examples of quandle enhancements, we refer the reader to~\cite{CEGS, CJKLS, ElhamdadiNelson, GN}. More recently, by representing quandle colorings as vertices in a graph, Cho and Nelson~\cite{ChoNelson} imposed a combinatorial structure on the coloring space of a link that led to new quandle enhancements. For example, one of these enhancements, the quandle coloring quiver, can distinguish links that have the same quandle counting invariant (see~\cite[Example 6]{ChoNelson}).
In what follows, we work with graphs that are allowed to have multiple edges and loops. To capture multiple edges in a graph $G$, we use a weight function $c \co \{(x, y) \, | \, x, y \in V(G) \} \to \mathbb{N} \cup \{0\}$, where $V(G)$ is the vertex set of $G$ and $(x, y)$ is any edge connecting vertices $x, y \in V(G)$. We denote the corresponding graph with a weight function $c$ by $(G, c)$. When $c$ is a constant function $c(x, y) = k$, for all $x, y \in V(G)$ and some $k \in \mathbb{N}$, then we use the notation $c = \hat{k}$ for the weight function and $(G, \hat{k})$ for the corresponding graph.
We say that a graph is \textit{complete}, provided that there is an edge between every pair of distinct vertices and a loop at each vertex; we denote a complete graph with $n$ vertices by $K_n$. The graphs we work with in this paper are all directed, and we use the notation $\overleftrightarrow{G}$ to denote a graph $G$ with all edges directed both ways.
Given two graphs $G_1$ and $G_2$ with disjoint vertex sets and edge sets, the \textit{join of $G_1$ and $G_2$}, denoted $G_1 \nabla G_2$, is the graph with vertex set $V(G_1) \cup V(G_2)$ and edge set containing all edges in $G_1$ and $G_2$, as well as all edges that connect vertices of $G_1$ with vertices of $G_2$. Finally, if there is the same number of directed edges from each vertex of $G_2$ to each vertex of $G_1$, with constant weight function $\hat{k}$, and no directed edges from vertices of $G_1$ to vertices of $G_2$, we use the notation $G_1 \overleftarrow{\nabla}_{\hat{k}} G_2$ to denote such a join of graphs.
Let $\mathcal{X}$ be a finite quandle and $K$ an oriented link. For any set of quandle homomorphisms $S \subset Hom(\mathcal{X}, \mathcal{X})$, the associated \textit{quandle coloring quiver}~\cite{ChoNelson}, denoted $\mathcal{Q}^S_{\mathcal{X}}(K)$, is the directed graph with a vertex $v_f$ for every element $f \in Hom(Q(K), \mathcal{X})$ and an edge directed from the vertex $v_f$ to the vertex $v_g$, whenever $g = \phi \circ f$, for some $\phi \in S$.
When $S = Hom(\mathcal{X}, \mathcal{X})$, we denote the associated quiver by $\mathcal{Q}_{\mathcal{X}}(K)$, and call it the \emph{full $\mathcal{X}$-quandle coloring quiver} of $K$.
It was explained in~\cite{ChoNelson} that $\mathcal{Q}^S_{\mathcal{X}}(K)$ is a link invariant, in the sense that the quandle coloring quivers associated to diagrams that are related by Reidemeister moves are isomorphic quivers.
\textbf{Notation.} We establish some notation for mappings. Consider a function $g \co \mathcal{X} \to \mathcal{Y}$, where $\mathcal{X}=\{x_1,x_2,\dots,x_n\}$ and $\mathcal{Y}=\{y_1,y_2,\dots,y_m\}$. If $g(x_i) = y_{j_i}$, for each $x_i \in \mathcal{X}$, we will write $g$ as $g_{y_{j_1}, y_{j_2}, y_{j_3},\dots,y_{j_n}}$ (omitting the commas when possible). This notation will simplify the formulas for various quandle homomorphisms considered in this paper.
Our goal is to study quandle coloring quivers of $(p,2)$-torus links with respect to dihedral quandles $\mathbb{Z}_n$. Before we proceed, we need to prove the following result about the space $Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ of quandle automorphisms of the dihedral quandle $\mathbb{Z}_n$, where $n \geq 3$.
\begin{proposition}\label{Hom(X,X)}
Consider the dihedral quandle $\mathbb{Z}_n$, when $n \geq 3$. Then $Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ is a set of size $n^2$ of quandle homomorphisms $\phi_{\gamma} \co \mathbb{Z}_n \to \mathbb{Z}_n$ that are defined recursively, where $\gamma$ is any $n$-tuple $(\gamma_1,\gamma_2,\gamma_3,\dots, \gamma_n) \in (\mathbb{Z}_n)^n$, such that
\[
\phi_{\gamma}:= \left\{\begin{array}{lr}
\phi_{\gamma}(0)=\gamma_1, &\\
\phi_{\gamma}(1)=\gamma_2, &\\
\phi_{\gamma}(k) = \phi_{\gamma}(k-2)\triangleright \phi_{\gamma}(k-1), &\text{for }2\leq k\leq n-1.\end{array}\right.
\]
\end{proposition}
\begin{proof}
The operation of the dihedral quandle $(\mathbb{Z}_n, \triangleright)$ is given by $x \triangleright y = (2y-x)\text{ mod }n$, for any $x,y\in \mathbb{Z}_n$. Then, $k\triangleright k+1 = k+2$, for all $0 \leq k \leq n-3$, $n-2 \triangleright n-1 = 0$ and $n-1\triangleright 0 = 1$.
Since any quandle homomorphism $\phi \in Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ must satisfy $\phi(x\triangleright y) = \phi(x)\triangleright\phi(y)$, for any $x,y \in \mathbb{Z}_n$ (using the same $\triangleright$), then $\phi$ must now satisfy $\phi(k)\triangleright\phi(k+1) = \phi(k\triangleright (k+1)) = \phi(k+2)$, for all $0 \leq k \leq n-3$, $\phi(n-2) \triangleright \phi(n-1) = \phi(0)$ and $\phi(n-1)\triangleright\phi(0) = \phi(1)$.
Hence, $\phi(0)$ and $\phi(1)$ completely determine $\phi$, and we have the following recursive relation for the image of $(0,1,2,\dots,n-1)$ under $\phi$:
\[
(0,1,2,\dots,n-1) \stackrel{\phi}{\mapsto} (\phi(0),\phi(1),\phi(0)\triangleright\phi(1),\phi(1)\triangleright\phi(2),\dots,\phi(n-3)\triangleright\phi(n-2)).
\]
Since there are $n$ ways to assign each of $\phi(0)$ and $\phi(1)$ an element in $\mathbb{Z}_n$, we see that $|Hom(\mathbb{Z}_n, \mathbb{Z}_n)| = n^2$.
\end{proof}
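The recursion in Proposition~\ref{Hom(X,X)} is easy to implement. The sketch below (ours) generates all candidates from the two free values $\phi(0),\phi(1)$, double-checks that each really is a quandle homomorphism, and confirms the count $n^2$; it also exhibits the closed form $\phi(k) = (\phi(0) + k(\phi(1)-\phi(0))) \bmod n$, i.e., the endomorphisms are exactly the affine maps of $\mathbb{Z}_n$:
\begin{verbatim}
def endomorphisms(n):
    """All phi: Z_n -> Z_n with phi(x > y) = phi(x) > phi(y)."""
    maps = []
    for g0 in range(n):
        for g1 in range(n):
            phi = [g0, g1]
            for k in range(2, n):       # phi(k) = phi(k-2) > phi(k-1)
                phi.append((2 * phi[k - 1] - phi[k - 2]) % n)
            # sanity check against the full operation table
            assert all(phi[(2 * y - x) % n] == (2 * phi[y] - phi[x]) % n
                       for x in range(n) for y in range(n))
            # closed form: affine maps of Z_n
            assert all(phi[k] == (g0 + k * (g1 - g0)) % n for k in range(n))
            maps.append(tuple(phi))
    return maps

for n in range(3, 12):
    assert len(endomorphisms(n)) == n * n
\end{verbatim}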
\begin{remark} \label{Hom(X,X)bis}
From the cyclic behavior of the dihedral quandle $\mathbb{Z}_n$ and the proof of Proposition~\ref{Hom(X,X)}, we see that any homomorphism $\phi_{\gamma} \in Hom(\mathbb{Z}_n, \mathbb{Z}_n)$, where $\gamma$ is an $n$-tuple $(\gamma_1,\gamma_2,\dots, \gamma_n) \in (\mathbb{Z}_n)^n$, is completely determined by the images $\phi_{\gamma}(k) = \gamma_{k+1}$ and $\phi_{\gamma}(k+1) = \gamma_{k+2}$ for any two consecutive arguments, where the arguments of $\phi_{\gamma}$ are read modulo $n$ and the indices of $\gamma$ are read cyclically (so that $\gamma_{n+1}$ means $\gamma_1$).
We also remark that using our notation, we can write $\phi_\gamma = \phi_{\gamma_1 \gamma_2 \dots \gamma_n}$.
\end{remark}
Using Theorem~\ref{torus coloring theorem} together with Proposition~\ref{Hom(X,X)} and Remark~\ref{Hom(X,X)bis}, we are ready to state and prove our main results regarding quivers.
\begin{theorem}\label{torus quiver theorem1}
Given a torus link $\mathcal{T}(p,2)$ and $\mathbb{Z}_n$ a dihedral quandle, where $\gcd(p,n) = 1$, the full $\mathbb{Z}_n$-quandle coloring quiver of $\mathcal{T}(p,2)$ is the complete directed graph:
$$\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2)) = \left(\overleftrightarrow{K_{n}},\hat{n}\right).$$
\end{theorem}
\begin{proof}
By Theorem~\ref{torus coloring theorem}, we know that $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ is the collection of all $n$ trivial $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$. Hence, the quandle coloring quiver $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2))$ has $n$ vertices, one for each of the elements in the coloring space $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$.
Let $\phi =\phi_{\gamma_1\gamma_2\dots\gamma_n} \in Hom(\mathbb{Z}_n , \mathbb{Z}_n)$ be a quandle homomorphism given by: $0 \stackrel{\phi}{\mapsto} \gamma_1, 1 \stackrel{\phi}{\mapsto} \gamma_2, \dots,n-1 \stackrel{\phi}{\mapsto} \gamma_n$.
In Proposition~\ref{Hom(X,X)} we proved that $|Hom(\mathbb{Z}_n, \mathbb{Z}_n)| = n^2$, which was due to the fact that a quandle homomorphism $\phi_{\gamma_1\gamma_2\dots\gamma_n}$ is completely determined by the choices for $\gamma_1$ and $\gamma_2$. In fact, by Remark~\ref{Hom(X,X)bis}, we know that $\phi_{\gamma_1\gamma_2\dots\gamma_n}$ is determined by the choices for any two consecutive values $\gamma_k$ and $\gamma_{k+1}$, where if $k = n$ then $k+1 = 1$.
Let $\psi_{i} \in Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ denote the trivial $\mathbb{Z}_n$-quandle coloring of $\mathcal{T}(p,2)$ associated to some fixed $i \in \mathbb{Z}_n$ and let $v_{\psi_i}$ be the corresponding vertex in the quandle coloring quiver $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2))$. Without loss of generality, by the above discussion, we can use that any map $\phi_{\gamma_1\gamma_2\dots\gamma_n}$ is completely determined by the choices for $\gamma_i$ and $\gamma_{i+1}$.
Consider $\phi_{\gamma_1 \dots \gamma_{i} i \gamma_{i+2 }\dots\gamma_n} \in Hom(\mathbb{Z}_n , \mathbb{Z}_n)$. That is, $\gamma_{i+1} = i$, which means that $i \stackrel{\phi}{\mapsto} i$ under this map. There are $n$ maps of the form $\phi_{\gamma_1 \dots \gamma_{i} i \gamma_{i+2 }\dots\gamma_n}$ (determined by the value of $\gamma_{i} \in \mathbb{Z}_n)$ and they all satisfy that
\[ \phi_{\gamma_1 \dots \gamma_{i} i \gamma_{i+2 }\dots\gamma_n} \circ \psi_i = \psi_i.\]
In addition, there are no other maps $\phi_{\gamma_1\gamma_2\dots\gamma_n}$ satisfying $\phi_{\gamma_1\gamma_2\dots\gamma_n} \circ \psi_i = \psi_i$. It follows that the quiver vertex $v_{\psi_i}$ has $n$ loops that represent $\psi_{i}$ being fixed by those $n$ maps.
Let $\psi_{j} \in Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$, where $j\in \mathbb{Z}_n$ is distinct from the fixed value of $i$ above. For similar reasons as above, there are $n$ quandle homomorphisms in $Hom(\mathbb{Z}_n , \mathbb{Z}_n)$ of the form $\phi_{\gamma_1 \dots \gamma_{i} j \gamma_{i+2 }\dots\gamma_n}$ that send $i \stackrel{\phi}{\mapsto} j$. Moreover,
\[ \phi_{\gamma_1 \dots \gamma_{i} j \gamma_{i+2 }\dots\gamma_n} \circ \psi_i = \psi_j,\]
and there are no other maps $\phi_{\gamma_1\gamma_2\dots\gamma_n}$ satisfying $\phi_{\gamma_1\gamma_2\dots\gamma_n} \circ \psi_i = \psi_j$.
Thus $\psi_{i}$ is also mapped $n$ times to each of the colorings $\psi_{j}$. This implies that there are $n$ directed edges from the vertex $v_{\psi_i}$ to each of the vertices $v_{\psi_j}$, for all $j \not = i$.
Hence, the full quandle coloring quiver of $\mathcal{T}(p,2)$ with respect to $\mathbb{Z}_n$ is a graph with $n$ vertices in which every vertex has $n$ directed edges to every vertex, including itself. Thus, $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2))= \left(\overleftrightarrow{K_n},\hat{n}\right)$, whenever $\gcd(p,n) = 1$.
\end{proof}
\begin{theorem}\label{torus quiver theorem2}
Let $\mathcal{T}(p,2)$ be a torus link and $\mathbb{Z}_n$ a dihedral quandle. If $\gcd(p,n) =c$ where $c$ is prime, then the full quandle coloring quiver of $\mathcal{T}(p,2)$ with respect to $\mathbb{Z}_n$ is the join of two complete directed graphs:
\[ \mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2)) = \left(\overleftrightarrow{K_{n}},\hat{n}\right) \overleftarrow{\nabla}_{\hat{d}}\left(\overleftrightarrow{K_{n(c-1)}},\hat{d}\right),\]
where $d=n/c$, and the two complete subgraphs correspond to the trivial and nontrivial, respectively, $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$.
\end{theorem}
\begin{proof}
Using Theorem \ref{torus coloring theorem}, the quandle coloring space $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ is the union of all $n$ trivial $\mathbb{Z}_n$-quandle colorings, together with $n(c-1)$ nontrivial $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$. Therefore, the quandle coloring quiver $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2))$ contains $n$ vertices corresponding to the trivial $\mathbb{Z}_n$-quandle colorings and $n(c-1)$ vertices corresponding to the nontrivial $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$.
Let $\psi_{\sigma}=\psi_{\sigma_1\sigma_2\sigma_3\dots\sigma_p}$ and $\psi_{\omega}=\psi_{\omega_1\omega_2\omega_3\dots\omega_p}$ be any two quandle colorings in $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$. Our notation means that $\psi_{\sigma}(x_i)=\sigma_i$ and $\psi_{\omega}(x_i)=\omega_i$, where $\sigma_i, \omega_i \in\mathbb{Z}_n$ and $x_i$ is a generator of the fundamental quandle $Q(\mathcal{T}(p,2))$.
If both $\psi_{\sigma}$ and $\psi_{\omega}$ are trivial $\mathbb{Z}_n$-quandle colorings (possibly the same coloring), we know from the proof of Theorem~\ref{torus quiver theorem1} that there are $n$ directed edges from the quiver vertex $v_{\psi_{\sigma}}$ to the vertex $v_{\psi_{\omega}}$ (including $n$ directed loops from the vertex $v_{\psi_{\sigma}}$ to itself, if $\psi_{\sigma}=\psi_{\omega}$). It follows that there is a complete directed graph $\left(\overleftrightarrow{K_n},\hat{n}\right)$ associated with the $n$ trivial $\mathbb{Z}_n$-quandle colorings of $\mathcal{T}(p,2)$, as a subgraph of the full quandle coloring quiver $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2))$.
If $\psi_{\sigma}$ is a nontrivial quandle coloring, then $\psi_{\sigma}(x_1) = \sigma_1\neq \sigma_2 = \psi_{\sigma}(x_2)$, because equivalent consecutive colors in the subscript of $\psi_{\sigma}$ would impose a trivial $\mathbb{Z}_n$-quandle coloring. By Proposition~\ref{coloring lemma}, for any $x_1,x_2\in Q(\mathcal{T}(p,2))$, a quandle coloring $\psi\in Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$ must satisfy the congruence $p\psi(x_2)\equiv p\psi(x_1) \text{ (mod }n)$, which reduces to $\psi(x_2)\equiv \psi(x_1) \text{ (mod }d)$, where $d= \frac{n}{c}$ and $c = \gcd(p, n), c \not = 1$. Since $\psi(x_1), \psi(x_2) \in \mathbb{Z}_n$, we have that for each fixed $\psi(x_1)$, there are $c$ possibilities for $\psi(x_2)$ in $\mathbb{Z}_n$ satisfying the above two congruences; specifically, $\psi(x_2) = \psi(x_1) + kd$, where $k$ is an integer such that $0 \leq k \leq c-1$. Moreover, since $\psi_{\sigma}(x_1) \neq \psi_{\sigma}(x_2)$ for a nontrivial quandle coloring $\psi_{\sigma}$, then for each fixed $\psi_{\sigma}(x_1)$ there are $c-1$ possibilities for $\psi_{\sigma}(x_2)$ in $\mathbb{Z}_n$ satisfying $p\psi_{\sigma}(x_2)\equiv p\psi_{\sigma}(x_1) \text{ (mod }n)$.
Let $\psi_{\sigma}$ be a fixed nontrivial quandle coloring in $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$. Then $\psi_{\sigma}=\psi_{\sigma_1(\sigma_1+kd)\sigma_3\dots\sigma_p}$, for some $1\leq k \leq c-1$ and $\sigma_1, \sigma_3, \dots, \sigma_p \in \mathbb{Z}_n$. Let $\psi_{\omega}=\psi_{\omega_1(\omega_1+hd)\omega_3\dots\omega_p}$ be any fixed coloring in $Hom(Q(\mathcal{T}(p,2)), \mathbb{Z}_n)$; that is, $h$ is an integer such that $0\leq h \leq c-1$, where $\omega_1, \omega_3, \dots, \omega_p \in \mathbb{Z}_n$ and $h \not = 0$ if $\psi_{\omega}$ is a multi-coloring.
Now suppose $\phi_{\gamma_1\gamma_2\dots\gamma_n}\in Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ satisfies $\phi_{\gamma_1\gamma_2\dots\gamma_n} \circ \psi_{\sigma} = \psi_{\omega}$.
In particular, this means that
\begin{eqnarray}
(\phi_{\gamma_1\gamma_2\dots\gamma_n} \circ \psi_{\sigma}) (x_1) &=& \psi_{\omega}(x_1) = \omega_1 \label{eq1}\\
(\phi_{\gamma_1\gamma_2\dots\gamma_n} \circ \psi_{\sigma}) (x_2) &=&\psi_{\omega}(x_2) = \omega_1 + hd. \label{eq2}
\end{eqnarray}
Since $\psi_{\sigma}(x_1) = \sigma_1$ and $\psi_{\sigma}(x_2) = \sigma_1 + kd$, equations~\eqref{eq1} and~\eqref{eq2} imply that the following must hold:
\[
\phi_{\gamma_1\gamma_2\dots\gamma_n}(\sigma_1) = \omega_1 \,\, \text{and} \,\, \phi_{\gamma_1\gamma_2\dots\gamma_n}(\sigma_1 + kd) = \omega_1 +hd.
\]
In Remark~\ref{Hom(X,X)bis} we showed that a homomorphism $\phi_{\gamma_1\gamma_2\dots\gamma_n}\in Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ is determined by two consecutive values in its subscript. Moreover, by fixing the value of $\omega_1$, a homomorphism $\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}$, with some $\tau\in\mathbb{Z}_n$, satisfying equation~\eqref{eq1} yields:
\begin{eqnarray*}
\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}(\sigma_1) &=& \omega_1\\
\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}(\sigma_1 +1) &=& \tau \\
\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}(\sigma_1 +2) &=& (2 \tau - \omega_1) \text{ mod} \ n\\
\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}(\sigma_1 +3) &=& (3 \tau - 2\omega_1) \text{ mod} \ n \\
\vdots \\
\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}(\sigma_1 + kd) &=& (kd \tau - (kd-1)\omega_1) \text{ mod} \ n
\end{eqnarray*}
Hence, a homomorphism $\phi_{\gamma_1\gamma_2\dots \omega_1 \tau \dots \gamma_n}$ satisfies both of the equations~\eqref{eq1} and~\eqref{eq2} if and only if
\begin{eqnarray} \label{eq3}
kd \tau - (kd-1)\omega_1 \equiv \omega_1 + hd \text{ (mod }n).
\end{eqnarray}
The latter congruence is equivalent to $kd\tau \equiv kd\omega_1+hd \text{ (mod }n)$, and since $\gcd(d, n) = d$, this congruence reduces further to:
\begin{eqnarray} \label{eq4}
k\tau \equiv k\omega_1+h\text{ (mod }\frac{n}{d}).
\end{eqnarray}
Note that $k$, $\omega_1$, and $h$ are considered fixed in this congruence. Since $\frac{n}{d}$ is equal to the prime $c$ and $1 \leq k \leq c-1$, we have that $\gcd(k, \frac{n}{d}) = 1$ and thus the congruence in~\eqref{eq4} has solutions $\tau$. In particular, $\tau$ has a unique solution modulo $\frac{n}{d}$. It follows that for the congruence~\eqref{eq3}, there are $d$ incongruent solutions modulo $n$ for $\tau$.
Therefore, for a given nontrivial quandle coloring $\psi_{\sigma}$, there are $d$ homomorphisms $\phi_{\gamma_1\gamma_2 \dots \gamma_n}$ such that $\phi_{\gamma_1\gamma_2 \dots \gamma_n} \circ \psi_{\sigma} = \psi_{\omega}$, for all (trivial and nontrivial) $\mathbb{Z}_n$-quandle colorings $\psi_{\omega}$ of $\mathcal{T}(p, 2)$.
The above reasoning shows that for each vertex $v_{\psi_{\sigma}}$ in the quiver corresponding to a nontrivial coloring, there are $d$ directed edges to all of the vertices in the quiver, including to itself. It follows that the quiver $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2))$ contains, as a subgraph, a complete directed graph $\left(\overleftrightarrow{K_{n(c-1)}},\hat{d}\right)$ associated with the nontrivial quandle colorings that is joined, using the weight function $\hat{d}$, to the subgraph associated with trivial colorings:
\[ \left(\overleftrightarrow{K_n},\hat{n}\right) \overleftarrow{\nabla}_{\hat{d}} \left(\overleftrightarrow{K_{n(c-1)}},\hat{d}\right). \]
Moreover, there is no map in $Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ that sends a fixed value in $\mathbb{Z}_n$ to two distinct values in $\mathbb{Z}_n$. That is, there is no homomorphism $\phi_{\gamma_1\gamma_2 \dots \gamma_n}$ in $Hom(\mathbb{Z}_n, \mathbb{Z}_n)$ that sends a trivial $\mathbb{Z}_n$-quandle coloring of $\mathcal{T}(p,2)$ to a nontrivial $\mathbb{Z}_n$-quandle coloring of $\mathcal{T}(p,2)$. Hence, the joining behavior cannot be reciprocated from $\left(\overleftrightarrow{K_n},\hat{n}\right)$ to $\left(\overleftrightarrow{K_{n(c-1)}},\hat{d}\right)$.
This completes the proof of $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2)) = \left(\overleftrightarrow{K_n},\hat{n}\right) \overleftarrow{\nabla}_{\hat{d}} \left(\overleftrightarrow{K_{n(c-1)}},\hat{d}\right)$, when $\gcd(p, n) = c $ and $c$ is prime.
\end{proof}
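The quiver structure just established can also be assembled by brute force. The following sketch (our own script, combining the enumerations from the previous sketches; nothing here is part of the proof) builds the full $\mathbb{Z}_n$-quandle coloring quiver of $\mathcal{T}(p,2)$ by composing every endomorphism with every coloring, and checks the predicted edge weights for the trefoil $\mathcal{T}(3,2)$ with $n=3$, where $c = 3$ and $d = n/c = 1$:
\begin{verbatim}
from collections import Counter

def colorings(p, n):
    out = []
    for c1 in range(n):
        for c2 in range(n):
            c = [c1, c2]
            for _ in range(p - 2):
                c.append((2 * c[-1] - c[-2]) % n)
            if c[-1] == (2 * c1 - c2) % n and c[-2] == (2 * c[-1] - c1) % n:
                out.append(tuple(c))
    return out

def endomorphisms(n):   # the affine maps k -> a + k*b (mod n), see above
    return [tuple((a + k * b) % n for k in range(n))
            for a in range(n) for b in range(n)]

def quiver_weights(p, n):
    cols, w = colorings(p, n), Counter()
    for phi in endomorphisms(n):
        for f in cols:
            w[(f, tuple(phi[x] for x in f))] += 1
    return cols, w

cols, w = quiver_weights(3, 3)
trivial = {f for f in cols if len(set(f)) == 1}
for f in cols:
    for g in cols:
        expected = 3 if f in trivial and g in trivial else \
                   (0 if f in trivial else 1)
        assert w[(f, g)] == expected
\end{verbatim}
The asserted weights are exactly those of $\left(\overleftrightarrow{K_{3}},\hat{3}\right) \overleftarrow{\nabla}_{\hat{1}}\left(\overleftrightarrow{K_{6}},\hat{1}\right)$, anticipating the example given later in this section.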
\begin{corollary} \label{quiver-cor}
For a given torus link $\mathcal{T}(p,2)$ and a prime $n$, we have:
\begin{itemize}
\item If $n \nmid p$, then $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2)) = \left(\overleftrightarrow{K_{n}},\hat{n}\right)$.
\item If $n \mid p$, then $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p,2)) = \left(\overleftrightarrow{K_{n}},\hat{n}\right) \overleftarrow{\nabla}_{\hat{1}}\left(\overleftrightarrow{K_{n(n-1)}},\hat{1}\right)$.
\end{itemize}
\end{corollary}
\begin{proof}
The statements follow as particular cases from Theorems~\ref{torus quiver theorem1} and~\ref{torus quiver theorem2}. Since $n$ is prime, if $n \nmid p$, then $\text{gcd}(n, p) = 1$, and Theorem~\ref{torus quiver theorem1} implies the first statement of this corollary. If $n \mid p$, then $\text{gcd}(n, p) = n$ where $n$ is prime, and the application of Theorem~\ref{torus quiver theorem2} yields the second statement above.
\end{proof}
The following statement is a direct consequence of Corollary~\ref{quiver-cor}.
\begin{corollary}
Let $n$ be a prime.
\begin{itemize}
\item If $n \nmid p_1$ and $n \nmid p_2$, then the quandle coloring quivers $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p_1,2))$ and $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p_2,2))$ are isomorphic.
\item If $n \mid p_1$ and $n \mid p_2$, then the quandle coloring quivers $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p_1,2))$ and $\mathcal{Q}_{\mathbb{Z}_n}(\mathcal{T}(p_2,2))$ are isomorphic.
\end{itemize}
\end{corollary}
\begin{example}
As an example, we consider the full $\mathbb{Z}_3$-quandle coloring quiver of the trefoil knot $\mathcal{T}(3, 2)$. In this case, $n = p = 3$, and according to Theorem~\ref{torus quiver theorem2}, the quiver $\mathcal{Q}_{\mathbb{Z}_3}(\mathcal{T}(3,2))$ has the following form:
\[ \mathcal{Q}_{\mathbb{Z}_3}(\mathcal{T}(3,2)) = \left(\overleftrightarrow{K_{3}},\hat{3}\right) \overleftarrow{\nabla}_{\hat{1}}\left(\overleftrightarrow{K_{6}},\hat{1}\right). \]
In Fig.~\ref{quiver example-part 1} on the left, we show the graph $\left(\overleftrightarrow{K_{3}},\hat{3}\right)$ associated with the trivial $\mathbb{Z}_3$-quandle colorings of $\mathcal{T}(3,2)$ and on the right of the figure, there is the graph $\left(\overleftrightarrow{K_{6}},\hat{1}\right)$ corresponding to the $n(c-1) = 3\cdot 2$ nontrivial $\mathbb{Z}_3$-quandle colorings of the trefoil knot. Both of these graphs are subgraphs of the full $\mathbb{Z}_3$-quandle coloring quiver of the trefoil knot. In Fig.~\ref{quiver example-part 2} we present the quandle coloring quiver $\mathcal{Q}_{\mathbb{Z}_3}(\mathcal{T}(3,2))$ as the join of these subgraphs by the weight function $\hat{d} = \hat{1}$: $\left(\overleftrightarrow{K_{3}},\hat{3}\right) \overleftarrow{\nabla}_{\hat{1}}\left(\overleftrightarrow{K_{6}},\hat{1}\right)$.
\begin{figure}[hb]
\includegraphics[height = 2in]{TorusKnotQuiver1.pdf}
\put(-320, -10){\fontsize{10}{10}$\left(\overleftrightarrow{K_{3}},\hat{3}\right)$}
\put(-100, -10){\fontsize{10}{10}$\left(\overleftrightarrow{K_{6}},\hat{1}\right)$}
\caption{The subgraph components of the quiver $\mathcal{Q}_{\mathbb{Z}_3}(\mathcal{T}(3,2))$}
\label{quiver example-part 1}
\end{figure}
\begin{figure}[hb]
\includegraphics[height = 2.6in]{TorusKnotQuiver2.pdf}
\caption{The quandle coloring quiver $\mathcal{Q}_{\mathbb{Z}_3}(\mathcal{T}(3,2))$}
\label{quiver example-part 2}
\end{figure}
\end{example}
\begin{remark}
Taniguchi~\cite{Taniguchi} studied quandle coloring quivers of links with respect to dihedral quandles, where the focus was on isomorphic quandle coloring quivers. Among other things, it was proved in~\cite[Theorem 3.3]{Taniguchi} that when the dihedral quandle $\mathbb{Z}_n$ is of prime order, then the $\mathbb{Z}_n$-quandle coloring quivers of two links $L_1$ and $L_2$ are isomorphic if and only if the quandle coloring spaces of $L_1$ and $L_2$ with respect to the dihedral quandle $\mathbb{Z}_n$ have the same size. That is, it was proved that if $n$ is prime, then $\mathcal{Q}_{\mathbb{Z}_n}(L_1) \cong \mathcal{Q}_{\mathbb{Z}_n}(L_2) $ as quivers if and only if $|Hom(Q(L_1), \mathbb{Z}_n)| = |Hom(Q(L_2), \mathbb{Z}_n)|$. Hence, for $n$ prime, we can use Taniguchi's result in combination with our results in Theorem~\ref{torus coloring theorem} and Corollary~\ref{quiver-cor}, to say something about the full $\mathbb{Z}_n$-quandle coloring quiver of a certain link $L$. For this, we would need to have that $|Hom(Q(L), \mathbb{Z}_n)| = |Hom(Q(\mathcal{T}(p, 2)), \mathbb{Z}_n)|$, for a particular link $L$, a torus link $\mathcal{T}(p, 2)$, and a prime $n$. Specifically, if $n$ is prime and $|Hom(Q(L), \mathbb{Z}_n)| = n$ for a certain link $L$, then $\mathcal{Q}_{\mathbb{Z}_n}(L) \cong \left(\overleftrightarrow{K_n},\hat{n}\right)$. Moreover, if $n$ is prime and $|Hom(Q(L), \mathbb{Z}_n)| = n^2$, then $\mathcal{Q}_{\mathbb{Z}_n}(L) \cong \left(\overleftrightarrow{K_n},\hat{n}\right) \overleftarrow{\nabla}_{\hat{1}} \left(\overleftrightarrow{K_{n(n-1)}},\hat{1}\right)$.
\end{remark}
|
2,869,038,156,675 | arxiv | \section{Introduction}
\thispagestyle{empty}
\IEEEPARstart{T}{here} are multiple applications for the room-temperature semi-conductor Cadmium Zinc Telluride (CZT), ranging from medical imaging and homeland security to astroparticle physics experiments. The high efficiency and good spectral and spatial resolution of CZT make it an attractive material for detecting and measuring photons in the energy range from a few keV to a few MeV. As the fractional yield of high-quality crystals increases (and the cost is reduced), CZT will become even more prolific in radiation detection systems.
Limits on the performance of CZT detector systems depend on characteristics of both the detector
and the readout electronics. State-of-the-art CZT detectors combine excellent homogeneity over
typical volumes between 0.5$\times$2$\times$2 cm$^3$ and 1.5$\times$2$\times$2 cm$^3$ with high
electron $\mu\tau$-products on the order of $10^{-2}$ cm$^2$ V$^{-1}$.
As the best thick CZT detectors now achieve 662 keV energy resolutions
better than 1\% FWHM (full width half maximum), low-noise readout becomes
increasingly important. In the following, we will present leakage current and capacitance
measurements performed on CZT detectors from the company Orbotech Medical Solutions \cite{Orb}.
Orbotech uses the Modified Horizontal Bridgman process to grow the CZT substrates. The process gives
substrates with excellent homogeneity, but a somewhat low bulk resistivity of 10$^9$ $\Omega$ cm.
In earlier work, several groups including ourselves have shown that pixel-cathode dark currents can
be suppressed efficiently by contacting the substrates with high-work function cathodes \cite{Ira,Jaesub}.
We are currently testing Orbotech detectors with a wide range of thicknesses and with
a range of pixel pitches (see Fig. 1, and Qiang et al., 2007).
In this contribution, we present measurements of the dark currents and capacitances
of an Orbotech CZT detector (0.5$\times$2$\times$2 cm$^3$, 8$\times$8 pixels, 2.4 mm pitch, 1.6 mm pixel side-length),
and discuss the resulting readout noise. In Sect. 2, the ASIC used as a benchmark for
noise calculations is described, and the noise model parameters are given.
The results of dark current and capacitance measurements are described in Sect. 3.
In Sect. 4 the resulting noise is estimated, and in Sect. 5 pixel-pixel
and pixel-cathode noise cross-coupling is discussed.
In Sect. 6, the results are summarized.
\section{Noise Model}
As a reference for our noise calculations, we use the ``NCI ASIC'' developed by Brookhaven
National Laboratory and the Naval Research Laboratory for the readout of Si strip
detectors (De Geronimo et al., 2007).
The self-triggering ASIC comprises 32 channels. Each front-end channel provides
low-noise charge amplification for pulses of selectable polarity, shaping with
a stabilized baseline, adjustable discrimination, and peak detection with
an analog memory. The channels can process events simultaneously, and the
readout is sparsified. The ASIC requires 5 mW of power per channel.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{CZTs}
\caption{Orbotech Cadmium Zinc Telluride (CZT) detectors. From left to right, the detectors
have volumes of 1$\times$2$\times$2 cm$^3$, 0.75$\times$2$\times$2 cm$^3$,
0.5$\times$2$\times$2 cm$^3$, and 0.2$\times$2$\times$2 cm$^3$.}
\label{CZTs}
\end{figure}
We use the following noise model to calculate the equivalent noise charge (ENC):
\[ ENC^{2} = \left[A_{1}\frac{1}{\tau_{P}}\frac{4kT}{g_{m}}+A_{3}\frac{K_{f}}{C_{G}}\right](C_{G}+C_{D})^{2} \]
\begin{equation}
+A_{2}\tau_{P}2q(I_{L}+I_{RST})
\end{equation}
where $A_{1}$, $A_{2}$, and $A_{3}$ characterize the pulse shaping filter,
$\tau_{p}$ is the pulse peaking time, $C_{D}$ and $C_{G}$ are the detector
and MOSFET capacitances, respectively, $g_{m}$ is the MOSFET transconductance,
$K_{f}$ is the 1/f noise coefficient, $I_{L}$ is the detector leakage current,
and $I_{RST}$ is the parallel noise of the reset system (De Geronimo et al., 2002).
For a given detector ($C_{D},I_{L}$) and ASIC ($A_{1-3},C_{G},g_{m},K_{f}$)
$\tau_{p}$ can be optimized to reduce the ENC.
For the NCI ASIC, we use \cite{Ger,Gianluigi-Detailed-Noise-Discussion}: $A_{1}$=0.89, $A_{2}$=$A_{3}$=0.52, $K_{f}$=$10^{-24}$, $C_{G}$=6pF, $g_{m}$=8mS, and $I_{RST}$=50pA.
\section{Dark Current and Capacitance in Orbotech CZT Detectors}
\begin{figure}[!t]
\centering
\includegraphics[width=4.0in]{IV2}
\vspace*{-1cm}
\caption{Current-voltage measurements of 0.5cm-thick CZT detectors with various cathode contact materials.
For all materials, the leakage current is $<$0.2 nA/pixel at biases up to -1500 Volts.}
\label{IV}
\end{figure}
Figure \ref{IV} shows the IV curves for one 2$\times$2$\times$0.5 cm$^3$ Orbotech CZT detector,
for different cathode contact materials. The preferred cathode material is Au, as Au cathodes
give leakage currents $<$0.2 nA/pixel at a cathode bias voltage of -1500 Volts,
and give slightly better spectroscopic performance than other cathode materials.
We used a commercial capacitance meter to measure the capacitance between all pixels and the cathode.
The measurement set-up is shown in Fig. \ref{Cap2}. High voltage blocking capacitors were used to protect
the LCR meter from the detector bias voltage. A low pass filter
was used to isolate the LCR-meter-detector circuit from the high voltage supply at the kHz frequencies
used by the LCR meter. Largely independent of bias voltage, we measure a capacitance of 9 pF for
all 64 pixels, corresponding to a pixel-cathode capacitance of 0.14 pF per pixel.
The measured result agrees well with a simple estimate of the anode to cathode
capacitance: using a dielectric constant of $\epsilon=$ 10 \cite{Spp}, a parallel plate
capacitor with the same dimensions as our detector has a capacitance of 8 pF.
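For orientation, this estimate is reproduced by a two-line computation (our numbers; we neglect fringing fields, which is presumably why the simple result comes out slightly below the value quoted above):
\begin{verbatim}
eps0 = 8.854e-12                        # F/m
C = 10 * eps0 * (0.02 * 0.02) / 0.005   # eps_r * eps0 * A / d
print("C ~ %.1f pF" % (C * 1e12))       # ~7 pF for 2x2 cm^2, 0.5 cm gap
\end{verbatim}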
The measurements of the pixel-pixel capacitances resulted in upper limits of $<$1~pF.
For inner pixels (non-border pixels) we estimated the pixel-pixel capacitances with
the same 3-D Laplace solver that we are using to model the response of CZT detectors
from our fabrication \cite{Ira}. The code determines the potential inside a
large grounded box that houses the detector. The capacitance between one pixel and
its neighbors is determined by setting the voltage of the one pixel to
$\Delta V=$ 1 V while keeping the other pixels and the cathode at ground potential.
The charge $\Delta Q$ on the biased pixel and on the neighboring pixels is
determined with the help of Gauss' law and appropriate Gaussian surfaces.
The procedure gives the capacitances $C=\Delta Q/\Delta V$. We obtain a next-neighbor capacitance
of 0.06 pF, and a diagonal-neighbor capacitance of 0.02 pF.
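To illustrate the method (not to reproduce the production numbers, which come from a much finer solver), the following simplified Python script solves Laplace's equation on a coarse grid inside a grounded box, with one pixel of an 8$\times$8 anode array biased to 1 V, and extracts electrode charges with the discrete Gauss law. All grid choices are ours, the dielectric step at the crystal surface is ignored, and plain Jacobi relaxation on a coarse mesh means the printed capacitances are indicative only:
\begin{verbatim}
import numpy as np

eps = 10 * 8.854e-12                # CZT permittivity (eps_r = 10), F/m
h = 4e-4                            # 0.4 mm grid step
nx = ny = 64; nz = 24
V = np.zeros((nx, ny, nz))
fixed = np.zeros((nx, ny, nz), bool)

zc, za = 5, 17                      # cathode/anode planes, ~5 mm apart
fixed[7:57, 7:57, zc] = True        # 2x2 cm^2 cathode held at 0 V

def pixel(i, j):                    # 1.6 mm pixels on a 2.4 mm pitch
    return (slice(8 + 6*i, 12 + 6*i), slice(8 + 6*j, 12 + 6*j), za)

for i in range(8):
    for j in range(8):
        fixed[pixel(i, j)] = True
V[pixel(3, 3)] = 1.0                # bias one inner pixel

for _ in range(3000):               # Jacobi relaxation
    Vn = sum(np.roll(V, s, a) for a in (0, 1, 2) for s in (1, -1)) / 6.0
    Vn[fixed] = V[fixed]
    for a in (0, 1, 2):             # grounded box walls
        Vn[(slice(None),) * a + (0,)] = 0.0
        Vn[(slice(None),) * a + (-1,)] = 0.0
    V = Vn

def charge(region):                 # Gauss's law around an electrode
    mask = np.zeros((nx, ny, nz), bool)
    mask[region] = True
    return eps * h * sum(np.sum((V - np.roll(V, s, a))[mask])
                         for a in (0, 1, 2) for s in (1, -1))

print("C(nearest)  ~ %.3f pF" % (abs(charge(pixel(3, 4))) * 1e12))
print("C(diagonal) ~ %.3f pF" % (abs(charge(pixel(4, 4))) * 1e12))
\end{verbatim}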
\begin{figure}[!t]
\vspace*{1cm}
\centering
\includegraphics[width=2.0in]{Cap2}
\vspace*{0.8cm}
\caption{Circuit diagram for the pixel-cathode capacitance measurements. Blocking capacitors are used to
protect the LCR meter from the detector bias voltage. A low pass filter is used to decouple the
detector from the high voltage supply at the frequencies used by the LCR meter.
The set-up makes it possible to measure the capacitance as function of bias voltage.
We used cathode biases down to -2000 V.}
\label{Cap2}
\end{figure}
\section{Equivalent Noise Charge of CZT ASIC Readout Electronics}
With the previous results, we can now evaluate Equation (1). In the context of 662 keV energy
depositions (assuming 4.64 eV per electron-hole pair generation, \cite{Spp}),
Fig.\ \ref{results} plots the FWHM contribution (red line) of the readout electronics's ENC
as a function of dark current $I_{L}$ (top) and pixel capacitance $C_{D}$ (bottom).
For the upper plot, $C_{D}$ is held constant at 1.0 pF and for the
lower plot $I_{L}$ was fixed at 1 nA. In both panels, the green line marks a readout noise contribution
of 0.25\% FWHM to the 662 keV energy resolution. At the $^{\sim}_<$ 0.25\%-level, the contribution of the
readout noise to the detector energy resolution is negligible. For the specific ASIC considered here,
we see that leakage currents up to $\sim$3 nA and pixel capacitances $\sim$10 pF are acceptable.
The leads between the readout ASIC and the detector should be sufficiently short
to stay below 10 pF.
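The curves in Fig.~\ref{results} can be reproduced directly from Equation (1). A minimal evaluation script (ours; parameters as listed in Sect. 2, with the peaking time optimized numerically over 0.1--10 $\mu$s) reads:
\begin{verbatim}
import numpy as np

kB, T, q = 1.381e-23, 300.0, 1.602e-19
A1, A2, A3 = 0.89, 0.52, 0.52
Kf, CG, gm, IRST = 1e-24, 6e-12, 8e-3, 50e-12

def enc(CD, IL, tau):                     # rms electrons, Eq. (1)
    series = (A1 / tau) * (4 * kB * T / gm) + A3 * Kf / CG
    parallel = A2 * tau * 2 * q * (IL + IRST)
    return np.sqrt(series * (CG + CD)**2 + parallel) / q

tau = np.logspace(-7, -5, 200)            # peaking times to scan
e = enc(CD=1e-12, IL=0.2e-9, tau=tau).min()
fwhm = 2.355 * e * 4.64 / 662e3           # 4.64 eV per e-h pair
print("ENC ~ %.0f e- rms, %.2f%% FWHM at 662 keV" % (e, 100 * fwhm))
\end{verbatim}
With the measured $I_{L}<0.2$ nA and $C_{D}\approx 1$ pF this returns an electronic-noise contribution of about 0.1\% FWHM, in line with the top panel of Fig.~\ref{results}.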
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{resultsii}
\caption{The two panels show the electronic readout noise contribution to the total FWHM energy resolution as a function of detector dark current (top,red line) and detector capacitance (bottom, red line). The calculations are made for the NCI ASIC. A detector capacitance of 1 pF was assumed for the top plot. The electronic noise ($\sim$0.1$\%$ FWHM) is independent of the leakage current $<$0.2 nA/pixel. The bottom figure assumes a leakage current of 1 nA and the resulting electronic noise is constant ($\sim$0.15$\%$ FWHM) for detector capacitances smaller than 2 pF/pixel.}
\label{results}
\end{figure}
\section{Pixel-Pixel and Pixel-Cathode Noise Cross-Coupling}
In this section we consider possible additional noise contributions arising from the capacitive
coupling between adjacent pixels and between a pixel and the cathode. The capacitive coupling can
result in amplifier noise from one channel being injected into the other channel. In the following
we use the terminology used by Spieler (2005), and assume that all pixels and the cathode are read
out by identical preamplifiers. We first consider pixel-pixel noise cross coupling (compare Fig. 5)
and consider the coupling between a pixel, its four nearest neighbors and the cathode.
The output noise voltage ($\nu_{n0}$) of an amplifier creates a noise current, $i_{n}$.
\begin{equation}
i_{n} = \frac{\nu_{n0}}{\frac{1}{\omega C_{f}}+\frac{1}{\omega \left(4C_{SS}+C_{B}\right)}}
\end{equation}
Here, $C_{SS}$, $C_{B}$, and $C_{f}$ is the pixel-pixel capacitance, the pixel-cathode capacitance,
and the amplifier capacitance, respectively. The current is divided among the capacitively coupled
channels in proportion to the coupling capacitance. The fraction of $i_{n}$ going to a nearest neighbor is
\begin{equation}
\eta_{nn} = \left(4 + \frac{C_{B}}{C_{SS}}\right)^{-1} \approx \frac{1}{6}.
\end{equation}
Adding the additional noise from the four nearest neighbors in quadrature, we find
that the pixel-pixel crosstalk will increase the electronic noise by:
\begin{equation}
\sqrt{4}\,\nu_{nn} = 2\frac{\eta_{nn}i_{n}}{\omega C_{f}} \approx 8\%\,\nu_{n0}.
\end{equation}
In most applications, one reads out the pixels {\it and} the cathode. For single-pixel events,
the pixel-to-cathode signal ratio can be used to correct the anode signal for the depth of
the interaction. For multiple-pixel events, the time offset between the cathode signal and the
pixel signals can be used to perform a proper depth of interaction correction for each individual pixel.
The pixel-cathode noise cross-coupling can be more significant. The equivalent noise charge from
cathode noise being injected into pixels, $Q_{CP}$, depends on the number of pixels ($n_{pix}$)
and the ratio of the feedback capacitance to the detector capacitance \cite{Spp}:
\begin{equation}
Q_{CP} = \frac{Q_{n0}}{1 + n_{pix}\frac{C_{f}}{C_{d}}}.
\end{equation}
Here $n_{pix}$=64 is the number of pixels, $C_{d}$= 7 pF is the capacitance between the
cathode and all the pixels, and $C_{f}$=50 fF is the preamplifier feedback capacitance.
With these values, the cathode noise can increase the readout noise of the anode channels
by 68\%.
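Both estimates follow from a few lines of arithmetic; the snippet below (ours) uses the capacitance values measured and computed above:
\begin{verbatim}
C_SS, C_B = 0.06e-12, 0.14e-12          # pixel-pixel, pixel-cathode (F)
C_f, C_d, n_pix = 50e-15, 7e-12, 64

eta_nn = 1.0 / (4 + C_B / C_SS)         # Eq. (3)
print("eta_nn    = %.3f (about 1/6)" % eta_nn)

Q_ratio = 1.0 / (1 + n_pix * C_f / C_d) # Eq. (5)
print("Q_CP/Q_n0 = %.2f (about 68%%)" % Q_ratio)
\end{verbatim}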
\section{Summary}
We measured pixel-cathode dark currents and the pixel-cathode capacitances, both as function of detector bias voltage.
The measurements give dark currents well below a nA per pixel, and a pixel-cathode capacitance of 0.14 pF per pixel.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{circuit2}
\caption{The diagram illustrates the cross-coupling between readout channels of an ASIC. The noise voltage, $\nu_{n0}$, at the output of the measurement pixel's amplifier (center channel) sees an infinite resistance at the amplifier input. The resulting noise current, $i_{n}$, is divided into the cathode and nearest neighbor pixel channels in proportion to their capacitances. }
\label{Spieler}
\end{figure}
The pixel-pixel capacitances were smaller than the accuracy of our measurements, and we determined them with the help
of a 3D Laplace solver. We obtain the result that the pixel-pixel capacitance is 0.06 pF for direct neighbors and
0.02 pF for diagonal neighbors. For a state-of-the-art ASIC such as the NCI ASIC used as a benchmark here,
the noise model predicts a very low level of readout noise. With these nominal capacitance values,
pixel-pixel noise cross-coupling is a minor effect, but cathode-pixel noise cross coupling can be significant.
In practice, the readout noise will be higher owing to additional stray capacitances from the detector mounting
and the readout leads, and from pick-up noise. For the design of a readout system, short leads and a proper
choice of the detector mounting board substrate are thus of utmost importance.
\section*{Acknowledgments}
We gratefully acknowledge Gianluigi De Geronimo and Paul O'Connor for information concerning the NCI ASIC.
This work is supported by NASA under contract NNX07AH37G, and by the DHS under contract 2007DN077ER0002.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
|
2,869,038,156,676 | arxiv | \section{Presentation of Dirac equation including the Pauli-type term}
\qquad Obtaining the observed spectra of Fermions from higher-dimensional theories is a long-standing problem. The introduction of a Higgs scalar is the conventional Standard Model approach to generating masses of Fermi fields. However, mass-like terms in Dirac equations in higher-dimensional theories may also appear because of the interaction of Fermions with gauge fields (see e.g. the Review \cite{Quiros}) or with $n$-form fields (\cite{Trip} and references therein). Thus it is an interesting task to study the influence of the extra-dimensional Pauli-type terms in the bulk Fermi field Lagrangian on the properties of mass spectra of Fermi excitations on different supergravity backgrounds.
In the present paper we explore the spectrum of the D10 Dirac equation with the flux-generated bulk "mass term" in Type IIB supergravity \cite{Arutyun}
\begin{equation}
\label{1}
\left(\Gamma^{M}D_{M}- \frac{i}{2 \cdot 5!} \Gamma^{M_{1}\ldots M_{5}} F_{M_{1}\ldots M_{5}}\right) \hat \lambda = 0,
\end{equation}
on the $AdS_{5}\times S^{5}$ (+ self-dual 5-form) background:
\begin{equation}
\label{2}
ds_{10}^{2}=e^{-2z/L}\eta_{\mu\nu}dx^{\mu}dx^{\nu} + dz^{2} + L^{2} d\Omega_{5}^{2},
\end{equation}
\begin{equation}
\label{3}
F_{0123z}=e^{-4z/L} \bar{Q}/L, \qquad F_{56789}=L^{4} \bar{Q}, \qquad \bar{Q}=1.
\end{equation}
The value of the 5-form charge $\bar{Q}=1$ follows from the Einstein equations in 10 dimensions for the choice of normalization of the 5-form taken in the Type IIB supergravity action in \cite{Arutyun}:
\begin{equation}
\label{4}
S=\frac{1}{2k^{2}}\int\,d^{10}x\,\sqrt{-g}\left(R-\frac{4}{5!}F_{M_{1}\ldots M_{5}}F^{M_{1}\ldots M_{5}} + \dots \right),
\end{equation}
$k$ is gravity constant in 10 dimensions: $k=l_{s}^{4}$, $l_{s}$ is fundamental string length. We follow here the notations of \cite{Arutyun}: $M, N = 0,1\ldots 9$, $x^{M}=(x^{a}, y^{\alpha})$, $x^{a}=(x^{\mu},z)$ ($\mu=0,1,2,3$) and $y^{\alpha}$ are five angles of $S^{5}$ ($\alpha = 5,6,7,8,9$); hatted symbols, $\hat M$ etc., below are the corresponding flat indices. $\eta_{\mu\nu}$ in (\ref{2}) is metric of Minkowski space-time with signature {($-,+,+,+$)}.
As usual, the D10 space-time is orbifolded at the $UV$ and $IR$ boundaries given by the corresponding values of the proper coordinate $z$:
\begin{equation}
\label{5}
z_{UV}=0 < z < \pi R = z_{IR},
\end{equation}
The $AdS_{5}\times S^{5}$ space-time consists of two pasted copies with $Z_{2}$ symmetry imposed at its UV and IR ends. In this paper only the bulk equations are explored; it is assumed that there are no additional surface terms of the Action which may influence the dynamics of the Fermions.
The low-energy effective action (\ref{4}) makes sense if the curvature scale of the space-time (\ref{2}) is essentially below the fundamental scale, i.e. if {$L\gg l_{s}$}. Standard dimensional reduction of the Einstein term in (\ref{4}) with use of the background metric (\ref{2}) gives the following expression for the Planck mass in 4 dimensions through the length parameters $L$, $l_{s}$ (cf. e.g. \cite{Wolfe}):
\begin{equation}
\label{6}
M_{Pl}=\sqrt{\frac{\pi^{3}}{2}}\cdot \left(\frac{L}{l_{s}}\right)^{4}\frac{1}{L},
\end{equation}
here the exponentially small contribution from the $z_{IR}$ limit of integration over $z$ in (\ref{4}) is omitted, and the value $\Omega_{5}=\pi^{3}$ of the volume of the unit 5-sphere is used.
$\hat \lambda$ in (\ref{1}) is a 32-component spinor, $D_{M}=i\partial_{M}+ (1/4) \omega_{M}^{\hat A\hat B}\Gamma_{\hat A}\Gamma_{\hat B}$ is the derivative including the spin-connection, and the often-used \cite{Mets}, \cite{Arutyun} representation of the $32\otimes 32$ gamma-matrices $\Gamma ^{\hat M}$ is assumed:
\begin{eqnarray}
\label{7}
&&\Gamma^{\hat a}=\gamma^{a}\otimes \sigma^{1} \otimes I_{4}, \qquad \gamma^{a}=(\gamma^{\mu}, \, \gamma_{5}); \nonumber
\\
\\
&&\Gamma^{\hat\alpha}=-I_{4}\otimes \sigma^{2}\otimes \tau^{\alpha}, \qquad \tau^{\alpha}=(\tau^{i}, \, \tau_{5}), \nonumber
\end{eqnarray}
here $\gamma^{\mu}$, $\tau^{i}$ ($i=1,2,3,4$) are ordinary gamma-matrices in flat 4D Minkowski and Euclidian spaces correspondingly; $\gamma_{5}=-i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$, $\tau_{5}=\tau^{1}\tau^{2}\tau^{3}\tau^{4}$. Chiral operator in 10 dimensions $\Gamma_{11}=\prod\nolimits_{0}^{9}\Gamma^{\hat M}= I_{4}\otimes \sigma^{3}\otimes I_{4}$; $\sigma^{1,2,3}$ are Pauli matrices.
As was shown in \cite{Arutyun}, the flux term in (\ref{1}) drops out for one of the chiral components of $\hat \lambda$, which is easily seen by direct calculation from (\ref{3}), (\ref{7}) (here the world gamma-matrices $\Gamma^{\mu}= e^{z/L}\Gamma^{\hat \mu}$, $\Gamma^{z}=\Gamma^{\hat z}$, $\Gamma^{\alpha}=L^{-1}\Gamma^{\hat \alpha}$ are used):
\begin{equation}
\label{8}
\frac{i}{2\cdot 5!}\Gamma^{M_{1}\ldots M_{5}}F_{M_{1}\ldots M_{5}}= -\frac{\bar{Q}}{L}\sigma^{+}\otimes I_{16},
\end{equation}
$\sigma^{+}=(\sigma^{1}+i\sigma^{2})/2$ is a projection $2\otimes 2$ operator. Thus writing down Dirac equation (\ref{1}) for every of two 16-component chiral spinors $\lambda_{Q}$ we must take $Q=\bar{Q}=1$ for the right-handed chiral spinor \cite{Arutyun} and $Q=0$ for the left-handed one.
Again following \cite{Arutyun}, we expand the chiral spinor in a set of spherical harmonics of $S^{5}$:
\begin{eqnarray}
\label{9}
&& \lambda_{Q}(x^{\mu},z, y^{\alpha})= \sum_{n\ge 0}[\lambda_{Qn}^{+}(x^{\mu}, z)\otimes \chi_{n}^{+}(y^{\alpha})+ \lambda_{Qn}^{-}(x^{\mu}, z)\otimes \chi_{n}^{-}(y^{\alpha})], \nonumber
\\
\\
&& \qquad \tau^{\alpha}D_{\alpha}\chi_{n}^{\pm}(y^{\alpha})= \mp \frac{i}{L}\left(n+\frac{5}{2}\right)\chi_{n}^{\pm}(y^{\alpha}). \nonumber
\end{eqnarray}
Summing up these preliminaries, the following Dirac equations in 5 dimensions $(x^{\mu},\, z)$ are obtained from (\ref{1}) for the 4-component spinors $\lambda_{Qn}^{\pm}$:
\begin{eqnarray}
\label{10}
&& \left[ e^{z/L}\gamma^{\mu}\partial_{\mu}+ \gamma_{5}\partial_{z}-\frac{2}{L}\gamma_{5} + \frac{1}{L}\left(n+\frac{5}{2}+ Q\right) \right]\lambda_{Qn}^{+}(x^{\mu},z)=0, \nonumber
\\
\\
&& \left[ e^{z/L}\gamma^{\mu}\partial_{\mu}+ \gamma_{5}\partial_{z}-\frac{2}{L}\gamma_{5} - \frac{1}{L}\left(n+\frac{5}{2}-Q \right) \right]\lambda_{Qn}^{-}(x^{\mu},z)=0, \nonumber
\end{eqnarray}
$n=0,1,2\ldots$, $Q=1,\,0$ (we recall that $Q=1$ for the right-handed 16-component chiral spinor and $Q=0$ for the left-handed one). The term $(2/L) \gamma_{5}$ appears in (\ref{10}) from the spin-connection in $D_{\mu}$ in (\ref{1}) and reflects the $z$-dependence of the warp factor in the metric (\ref{2}).
We shall see that the most interesting case (because of the smaller value of the coefficient in round brackets) is the second equation in (\ref{10}); thus, in what follows, the Dirac equation for the 4-component spinors $\lambda_{Qn}^{-}$ will be considered. Further separation of variables:
\begin{equation}
\label{11}
\lambda_{Qn}^{-}(x^{\mu},z)=(\lambda_{QnL}^{-}, \lambda_{QnR}^{-})= (\psi_{L}(x^{\mu})\, f_{L}(z), \,\psi_{R}(x^{\mu})\, f_{R}(z)),
\end{equation}
(indices $Q$, $n$, $-$ are omitted in the RHS of (\ref{11}); $\psi_{L}$, $\psi_{R}$ are the left and right components of Dirac spinor $\psi (x^{\mu})= (\psi_{L},\, \psi_{R})$ of mass $m$ governed by the ordinary Dirac equation in 4 dimensions $(\gamma^{\mu}\partial_{\mu}-m) \psi =0$) reduces equation (\ref{10}) for $\lambda^{-}_{Qn}$ to the following system for profiles $f_{L,R}(z)$ (for every $Q$, $n$):
\begin{eqnarray}
\label{12}
&& \left[\frac{d}{dz} - \frac{2}{L} - \frac{1}{L}\left(\nu+1/2\right)\right] f_{L} + m \, e^{z/L}f_{R}=0, \nonumber
\\
\\
&& \left[\frac{d}{dz} - \frac{2}{L} + \frac{1}{L}\left(\nu+1/2\right)\right] f_{R} - m \, e^{z/L}f_{L}=0, \nonumber
\end{eqnarray}
the parameter $\nu = n+2-Q$ essentially determines the sought-for spectra of $m$; for $Q=1$ (i.e. for Fermions which "feel" the flux) $\nu = 1, 2,\ldots$, and in the case $Q=0$ we have $\nu = 2, 3,\ldots$
Equations (\ref{12}) are typical in the Randall-Sundrum type models when a bulk Dirac mass term is included in the Fermi field Lagrangian \cite{Neubert}, \cite{Gher1}, \cite{Gher2}, \cite{Gher3}. However, contrary to these papers, where the value of the bulk Dirac mass, which determines the physically important parameter $\nu$ in (\ref{12}), was taken "by hand", here we rely upon the well-grounded supergravity approach which gives definite values of $\nu$.
\section{Boundary conditions, two towers of spectra of Fermi excitations and "seesaw" scale without seesaw mechanism}
The solution of system (\ref{12}) is a linear combination of Bessel and Neumann functions \cite{Gher1}-\cite{Gher3}:
\begin{eqnarray}
\label{13}
&& f_{L}(z)= e^{5z/2L}\left[AJ_{\nu}(\tau) + B N_{\nu}(\tau) \right], \nonumber
\\
\\
&& f_{R}(z)= e^{5z/2L}\left[AJ_{\nu+1}(\tau) + B N_{\nu+1}(\tau) \right], \nonumber
\end{eqnarray}
where $\tau = mLe^{z/L}$; the slice of the $AdS_{5}\times S^{5}$ space-time is given by the interval $\tau$ (see (\ref{5})):
\begin{equation}
\label{14}
\tau_{UV}=mL < \tau < mL e^{\pi R} = \tau_{IR},
\end{equation}
$A,\,B$ in (\ref{13}) are constants determined from the boundary and normalization conditions. The boundary conditions also give the spectra of $m$.
Under the reflection of the coordinate $z$, $P_{z}$, the D10 spinor $\hat \lambda$ transforms as (see e.g. \cite{Horava}):
\begin{equation}
\label{15}
P_{z}\hat \lambda (z)= \Gamma^{\hat z} \hat \lambda (-z).
\end{equation}
According to (\ref{7}), $\Gamma^{\hat z}=\gamma_{5}\otimes \sigma^{1}\otimes I_{4}$; hence the $z$-reflection interchanges the right-handed and left-handed chiral components of the D10 spinor. It would be a mistake, however, to think that this reflection interchanges the chiral component interacting with the flux (the right-handed one according to (\ref{8})) and the non-interacting one. The point is that the electric and magnetic parts of the 5-form behave oppositely under $z$-reflection: the electric flux in (\ref{3}) is odd under reflection whereas the magnetic one is even. Because of this, the reflection changes $\sigma^{+}$ to $-\sigma^{-}$ in (\ref{8}) ($\sigma^{-}=(\sigma^{1}-i\sigma^{2})/2$), and it is the left-handed 16-component chiral spinor which "feels" the flux on the $Z_{2}$-symmetric pasted half of the $AdS_{5}\times S^{5}$ space-time (\ref{2}).
Thus the $Z_{2}$-symmetry adjustment at the orbifold points $z=z_{*}$ ($z_{*}=0, \pi R$) of the right-handed (left-handed) chiral component of the D10 spinor $\hat \lambda$ "living" on one of the pasted copies of the slice of $AdS_{5}\times S^{5}$ space-time with the left-handed (right-handed) component on the other pasted copy gives the boundary conditions for the left and right profiles (\ref{13}) that are standard in the RS-type models:
\begin{equation}
\label{16}
f_{R}(z_{*}) = \eta f_{R}(z_{*}), \qquad f_{L}(z_{*})= - \eta f_{L}(z_{*}), \qquad \eta = \pm 1.
\end{equation}
We take for definiteness $\eta =1$ at the UV end and, following \cite{Gher1}, \cite{Gher2}, \cite{Gher4}, consider two types of boundary conditions: the usual "untwisted" one (when $\eta =1$ also at the IR end) and the "twisted" one ($\eta =1$ at the UV end, $\eta = -1$ at the IR end). These conditions determine two essentially different towers of eigenvalues of the Dirac equation (\ref{1}). The "twisted" boundary condition corresponds to breaking supersymmetry by the Scherk-Schwarz mechanism \cite{SchSch}, \cite{Quiros}, and its application for obtaining a small gravitino mass in the warped models was first proposed, to our knowledge, in \cite{Gher2}. Thus let us consider:
\vspace{0,3cm}
{\it{"Untwisted" boundary conditions}}:
$f_{R}(z_{*})=f_{R}(z_{*})$, $f_{L}(z_{*})=-f_{L}(z_{*})$ at both UV and IR boundaries. This means $f_{L}(0)=f_{L}(\pi R)=0$ which according to (\ref{13}) gives the spectral condition:
\begin{equation}
\label{17}
\frac{J_{\nu}(\tau_{UV}) }{N_{\nu}(\tau_{UV})}= \frac{J_{\nu}(\tau_{IR})}{N_{\nu}(\tau_{IR})},
\end{equation}
$\tau_{UV}$, $\tau_{IR}$ are given in (\ref{14}). For $mL \ll 1$, $R/L \gg 1$ (i.e. when $\tau_{UV} \ll 1$ and $\tau_{IR}$ is of order 1) the solution of (\ref{17}) is given by the simple formula \cite{Gher1}:
\begin{eqnarray}
\label{18}
&&m_{q,n}^{\it{untw}}\simeq \left(q+ \frac{\nu}{2}-\frac{1}{4}\right)\,\frac{\pi}{L}\, e^{-\pi R/L}= \nonumber
\\
\\
&&\left(q+\frac{n}{2}+\frac{3}{4}-\frac{Q}{2}\right)\,\sqrt{\frac{2}{\pi}}M_{Pl}\left(\frac{l_{s}}{L}\right)^{4}e^{-\pi R/L}\cong M_{EW}, \nonumber
\end{eqnarray}
where $q=1,2,3\ldots$, $n=0,1,2\ldots$, formula (\ref{6}) was used to express $L^{-1}$ through $M_{Pl}$.
Physically, the mass scale in the RHS of (\ref{18}) must be of the order of the electro-weak scale $M_{EW}$; its relation to the Planck scale ("first mass hierarchy") is basically given by the small Randall-Sundrum exponent $e^{-\pi R/L}$ in (\ref{18}), although in the model under consideration it also depends on the relation of the fundamental string length $l_{s}$ to the scale $L$ of the Type IIB supergravity solution (\ref{2}).
Since $\nu=n+2-Q >0$, the profiles of the eigenfunctions (\ref{13}) of massive modes are essentially concentrated in the vicinity of the IR end of the slice of the $AdS_{5}\times S^{5}$ space-time. The "untwisted" boundary conditions also permit zero-mode ($m=0$) solutions of system (\ref{12}): $f_{L}\equiv 0$, $f_{R}\sim \exp [(-\nu +3/2)z/L]$.
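The approximation (\ref{18}) is easy to test numerically. The short script below (our own check, using SciPy's Bessel functions and a toy hierarchy $\pi R/L = 10$ in units $L=1$; the physical value is much larger, but the structure of the tower is the same) scans condition (\ref{17}) for sign changes and shows that the roots sit at $\tau_{IR}=mLe^{\pi R/L}$ close to the zeros of $J_{\nu}$, i.e., the $\pi$-spaced tower of (\ref{18}) suppressed by the Randall-Sundrum exponent:
\begin{verbatim}
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

A, nu = 10.0, 1                        # toy pi*R/L, and nu = n + 2 - Q

def F(m):                              # F(m) = 0  <=>  condition (17)
    return (jv(nu, m) * yv(nu, m * np.exp(A))
            - jv(nu, m * np.exp(A)) * yv(nu, m))

m = np.linspace(1.0, 14.0, 4000) * np.exp(-A)  # scan tau_IR in (1, 14)
idx = np.nonzero(np.diff(np.sign(F(m))))[0]
roots = [brentq(F, m[i], m[i + 1]) for i in idx]
print([round(r * np.exp(A), 2) for r in roots])  # ~ zeros of J_1
\end{verbatim}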
\vspace{0,3cm}
{\it{"Twisted" boundary conditions}}:
$f_{L}(z_{UV})=-f_{L}(z_{UV})$, $f_{R}(z_{IR})=-f_{R}(z_{IR})$, i.e. $f_{L}(0)=f_{R}(\pi R)=0$.
In this case the spectral condition for the solutions (\ref{13}) reads \cite{Gher2}:
\begin{equation}
\label{19}
\frac{J_{\nu}(\tau_{UV}) }{N_{\nu}(\tau_{UV})}= \frac{J_{\nu+1}(\tau_{IR}) }{N_{\nu+1}(\tau_{IR})},
\end{equation}
There are no zero modes in this case. The spectral equation (\ref{19}) possesses the "first hierarchy" massive modes with eigenvalues of the type (\ref{18}), but it also gives an "inverse tower" of extremely small values of $m_{Q,n}^{\it{tw}}$, decreasing exponentially with the growth of the spectral number $n$:
\begin{equation}
\label{20}
m_{Q,n}^{\it{tw}}=\frac{2\sqrt{(n+2-Q)(n+3-Q)}}{L}\, e^{-(n+3-Q)\pi R/L},
\end{equation}
which is obtained from (\ref{14}), (\ref{19}) taking into account that in this case both arguments in (\ref{19}) are much less than one ($\tau_{UV}=mL \ll 1$ and $\tau_{IR}=mLe^{\pi R/L} \ll 1$). We also inserted $\nu = n+2-Q$ in (\ref{20}).
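The same numerical setup checks this "inverse tower". The sketch below (ours; again a toy hierarchy $\pi R/L = 10$) locates the lowest root of condition (\ref{19}) for several $\nu$ and compares it with the leading-order estimate (\ref{20}); the agreement confirms, in particular, the $e^{-(\nu+1)\pi R/L}$ scaling:
\begin{verbatim}
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

A = 10.0                               # toy pi*R/L

def F(m, nu):                          # F(m) = 0  <=>  condition (19)
    return (jv(nu, m) * yv(nu + 1, m * np.exp(A))
            - jv(nu + 1, m * np.exp(A)) * yv(nu, m))

for nu in (1, 2, 3):                   # nu = n + 2 - Q
    est = 2 * np.sqrt(nu * (nu + 1)) * np.exp(-(nu + 1) * A)  # Eq. (20)
    root = brentq(F, 0.1 * est, 10 * est, args=(nu,))
    print("nu=%d: root/estimate = %.4f" % (nu, root / est))   # -> ~1
\end{verbatim}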
The highest value of $m_{Q,n}^{\it{tw}}$ is achieved when the Fermion interacts with the flux, i.e. at $Q=1$, $n=0$ in (\ref{20}):
\begin{equation}
\label{21}
m_{1,0}^{\it{tw}}=\frac{2\sqrt{2}}{L}\, e^{-2\pi R/L}=\left(\frac{L}{l_{s}}\right)^{4}\frac{M_{EW}^{2}}{M_{Pl}}.
\end{equation}
In deriving the RHS of (\ref{21}) we expressed $L^{-1}$ and $e^{-\pi R/L}$ through $M_{Pl}$ and $M_{EW}$ from (\ref{6}), (\ref{18}) and omitted a coefficient of order one. For the choice $M_{EW}=1\,$TeV, $(L/l_{s})^{4}= 10^{3}$, (\ref{21}) gives a mass scale of the order of the electron neutrino mass.
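The corresponding arithmetic takes only a few lines (our numbers, with $M_{Pl}=1.2\cdot 10^{19}$ GeV and the order-one coefficients dropped, as above):
\begin{verbatim}
M_Pl, M_EW, r4 = 1.2e19, 1.0e3, 1.0e3   # GeV, GeV, (L/l_s)^4
m21 = r4 * M_EW**2 / M_Pl               # Eq. (21), GeV
m22 = r4**2 * M_EW**3 / M_Pl**2         # Eq. (22) below, GeV
print("m(21) ~ %.2f eV,  m(22) ~ %.1e eV" % (m21 * 1e9, m22 * 1e9))
\end{verbatim}
which gives $m\sim 0.1$ eV for (\ref{21}), indeed the electron-neutrino mass scale.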
The theory surely must be elaborated further. The goal of the paper is to demonstrate interesting potential possibilities of the supergravity models, and to demonstrate the importance of the presence of Pauli-type terms in the bulk Dirac equations. In fact, let us calculate the next spectral value of the tower (\ref{20}), which is obtained for $Q=0$, $n=0$ (or equivalently for $Q=1$, $n=1$):
\begin{equation}
\label{22}
m_{0,0}^{\it{tw}}=m_{1,1}^{\it{tw}}=\frac{2\sqrt{6}}{L}\, e^{-3\pi R/L}=\left(\frac{L}{l_{s}}\right)^{8}\frac{M_{EW}^{3}}{M_{Pl}^{2}}.
\end{equation}
In the absence of the 5-form term in (\ref{1}) this would be the highest value of the spectrum (\ref{20}), and the physically promising "seesaw" combination $M_{EW}^{2}/M_{Pl}$, like in the RHS of (\ref{21}), would not appear in the spectrum of the Dirac equation. It must be noted that the mass scale $M_{EW}^{2}/M_{Pl}$ is obtained here without any reference to a large right neutrino mass and the standard seesaw mechanism (cf. \cite{Neubert}).
\section{Profiles of wave functions and light right neutrino as a candidate for Dark Matter}
Let us look at the profiles of the eigenfunctions (\ref{13}) of the "twisted" modes. With use of the boundary conditions $f_{L}(0)= f_{R}(\pi R)=0$, inserting the expression for $m$ given in (\ref{20}), and taking into account that in this case the argument of the cylinder functions in (\ref{13}) is small ($\tau \ll 1$) in the whole region (\ref{14}), it is easy to obtain the simple approximate expressions for the "twisted" eigenfunctions (\ref{13}):
\begin{eqnarray}
\label{23}
&& f_{L}^{\it{tw}}(z)= N_{\nu}\,e^{5z/2L}\sinh\left(\frac{\nu z}{L}\right), \nonumber
\\
\\
&& f_{R}^{\it{tw}}(z)= - N_{\nu}\sqrt{\frac{\nu}{\nu+1}}\,e^{5z/2L}\sinh\left[(\nu +1)\frac{(\pi R -z)}{L}\right], \nonumber
\end{eqnarray}
where $N_{\nu}$ is the normalization factor, $\nu = n+2-Q$.
From (\ref{23}) it is immediately seen that the "twisted" profile $f_{L}(z)$ of the left component of the 4-spinor $\psi (x^{\mu})$, see (\ref{11}), is concentrated near the IR end of the warped space-time (\ref{2}), whereas the profile $f_{R}(z)$ of the right component is located near the UV end. This must result in an essential difference in the interactions of the left and right spinors with massive modes of other fields whose profiles in the extra space are concentrated near the IR end of the higher-dimensional space-time.
In extra-dimensional theories the strength of the interaction of modes of different fields depends on the overlap of their wave functions in the extra space. That is why the universality of the electric charge is achieved in these theories only if the zero-mode of the electromagnetic field is constant in the extra space. The same is of course true for the interaction of matter with the gravitational field; the constancy of its zero-mode was assumed in the derivation of expression (\ref{6}) for the Planck mass in 4 dimensions.
If the Fermions considered above are neutral, then the exponential suppression of the overlap in the extra space of the wave function of the right spinor with the wave functions of the ordinary matter modes trapped on the IR brane or "living" in the bulk in the vicinity of the IR end will make the right neutral spinor unobservable even if its mass is small. Thus in this approach there is no need to postulate an extra large mass of the right neutrino as an explanation of its unobservability in experiments; the "twisted" right neutrino (\ref{11}) joins the plethora of candidates for Dark Matter because of its extremely weak interaction with all "ordinary" matter but gravity.
\section{Discussion}
\qquad In \cite{Altsh08} it was shown that a background flux "living" in the extra sub-space makes the Kaluza-Klein gauge field corresponding to the isometry group of this sub-space massive, in direct analogy with the standard Higgs mechanism. It is expected that the background flux generating the Pauli-type terms in the extra-dimensional Fermi field equations may also substitute for the Higgs scalar in forming the observed spectra of Fermions. The advantage of the supergravity models is the unambiguity of these terms in the Lagrangian of Fermi fields in higher dimensions.
The preliminary results of this paper surely may be generalized to the consideration of influence of fluxes on spectra of Fermions e.g. in the model of Klebanov-Strassler throat \cite{KS} in the Type IIB supergravity.
But perhaps the backgrounds of throat-like solutions in the Type IIA supergravity will prove to be even more promising for this direction of thought. In particular it would be interesting to explore whether the 2-form background flux naturally appearing in these models, which decreases quickly upward from the IR end of the throat (cf. \cite{Altsh08}, \cite{Altsh07}), may provide the intermediate mass scales of the Standard Model generations in the spectra of neutral and charged Fermions calculated on this background.
The ambitious goal is of course to obtain the observed spectra of particles of the Standard Model as a solution of classical equations in higher dimensions. The goal does not look too crazy in view of the miraculous successes of the dual-holography business in calculating spectra of bound states in QCD in its strong coupling limit from solutions of classical higher dimensional supergravity equations.
\section*{Acknowledgements} Author is grateful to R.R. Metsaev for valuable consultations. This work was supported by the Program for Supporting Leading Scientific Schools (Grant LSS-1615.2008.2).
\section{Introduction}
\noindent The aim of this article is to prove a version of the classical result due to N.~Wiener, characterising doubly shift-invariant subspaces (of the Hilbert space of square integrable functions on the circle with respect to a finite positive Borel measure), for the Besicovitch Hilbert space. We give the pertinent definitions below.
First we recall the aforementioned classical result. See e.g. \cite[Thm.11, \S14, Chap.II]{W} or \cite[Thm. 1.2.1, p.8]{Nik}. Let $\mu$ be a finite, nonnegative Borel measure on the unit circle $\mathbb{T}:=\{z\in \mathbb{C}: |z|=1\}$, and let $L^2_\mu(\mathbb{T})$ be the Hilbert space of all functions $f:\mathbb{T}\rightarrow \mathbb{C}$ such that
$$
\|f\|_2^2:=\int_{\;\!\mathbb{T}} |f(\xi)|^2 \;\!d\mu(\xi)<\infty,
$$
with pointwise operations, and the inner product
$$
\langle f,g\rangle=\int_{\;\!\mathbb{T}} f(\xi)\;\! \overline{g(\xi)}\;\!d\mu(\xi)
$$
for $f,g\in L^2_\mu(\mathbb{T})$.
Here $\overline{\cdot}$ denotes complex conjugation.
\goodbreak
\noindent For a $\mu$-measurable set $\sigma$, $\mathbf{1}_\sigma$ is the indicator/characteristic function of $\sigma$, i.e.,
$$
\mathbf{1}_\sigma(w)=\left\{\begin{array}{ll} 1& \textrm{if } w\in \sigma,\\
0 & \textrm{if } w\in \mathbb{T}\;\! \setminus \sigma.
\end{array}\right.
$$
The multiplication operator $M_z:L^2_\mu(\mathbb{T}) \rightarrow L^2_\mu(\mathbb{T})$ is given by
$$
(M_zf)(w)=wf(w) \;\textrm{ for all }w\in \mathbb{T},\; f\in L^2_\mu(\mathbb{T}),
$$
and is called the {\em shift-operator}. A closed subspace $E\subset L^2_\mu(\mathbb{T})$ is called {\em doubly invariant} if
$M_z E\subset E$ and $(M_z)^* E\subset E$. A closed subspace is doubly invariant if and only if $M_z E=E$. The following result gives a characterisation of doubly invariant subspaces of $L^2_\mu(\mathbb{T})$:
\begin{theorem}[N. Wiener]$\;$
\noindent
Let $E\subset L^2_\mu(\mathbb{T})$ be a closed subspace of $L^2_\mu(\mathbb{T})$. Then $M_z E=E$ if and only if there exists a unique measurable set $\sigma \subset \mathbb{T}$ such that
$$
E=\mathbf{1}_\sigma L^2_\mu(\mathbb{T})=\{f\in L^2_\mu(\mathbb{T}): f=0\;\mu\textrm{-a.e. on }\mathbb{T}\;\!\setminus \sigma\}.
$$
\end{theorem}
\noindent We will prove a similar result when $L^2_\mu(\mathbb{T})$ is replaced by $AP^2$, the Besicovitch Hilbert space.
We recall this space and a few of its properties in the following section, before stating and proving our main result in the final section.
\section{Preliminaries on the Besicovitch space $AP^2$}
\noindent For $\lambda \in \mathbb{R}$, let $e_\lambda:=e^{i\lambda \cdot }\in L^\infty(\mathbb{R})$.
Let $\mathcal{T}$ be the space of trigonometric polynomials, i.e.,
$\mathcal{T}$ is the linear span of $\{e_\lambda: \lambda \in \mathbb{R}\}$.
The {\em Besicovitch space} $AP^2$ is the completion of $\mathcal{T}$ with respect to the inner product
$$
\langle p,q\rangle =\lim_{R\rightarrow \infty} \frac{1}{2R}\int_{-R}^R p(x)\;\! \overline{q(x)} \;\!dx,
$$
for $p,q\in \mathcal{T}$, and where $\overline{\cdot}$ denotes complex conjugation.
We remark that elements of $AP^2$ are not to be thought of as functions on $\mathbb{R}$:
For example, consider
the sequence $(q_n)$ in $\mathcal{T}$, where
$$
q_n(x):=\sum_{k=1}^n \frac{1}{k}e^{i \frac{1}{k} x } \quad (x\in \mathbb{R}).
$$
Then $(q_n)$ converges to an element of $AP^2$, but $(q_n(x))_{n\in \mathbb{N}}$ diverges for all
$x\in \mathbb{R}$ (see \cite[Remark~5.1.2, p.91]{Par}). Although elements of $AP^2$ may not be functions on $\mathbb{R}$, they can be identified
as functions on the Bohr compactification $\mathbb{R}_B$ of $\mathbb{R}$, and we elaborate on this below.
We refer the reader to \cite[\S7.1]{BKS} and the references therein for further details.
\noindent For a locally compact Abelian group $G$ written additively, the dual group $G^*$ is the set
all continuous characters of $G$. Recall that a {\em character of $G$} is a map $\chi: G\rightarrow \mathbb{T}$ such that
$$
\chi(g+h)=\chi(g)\;\! \chi(h) \quad (g,h\in G).
$$
Then $G^*$ becomes an Abelian group with pointwise multiplication, but we continue to write the group operation in $G^*$ also additively, motivated by the special characters
$$
G=\mathbb{R}\owns x\stackrel{\theta}{\mapsto}e^{i\theta x}\in \mathbb{T},
$$
when $G=\mathbb{R}$. So
the inverse of $\chi\in G^*$ is denoted by $-\chi$. Then $G^*$ is a locally compact Abelian group with the topology
given by the basis formed by the sets
$$
U_{g_1,\cdots, g_n;\epsilon} (\chi)
:=\{ \eta \in G^*: |\eta(g_i)-\chi(g_i)|<\epsilon \textrm{ for all } 1\leq i\leq n\},
$$
where $\epsilon>0$, $n\in \mathbb{N}:=\{1,2,3,\cdots\}$, $g_1,\cdots, g_n\in G$.
Let $G^*_d$ denote the group $G^*$ with the discrete topology. The dual group $(G^*_d)^*$ of $G^*_d$ is called the {\em Bohr compactification of $G$}. By the Pontryagin duality theorem\footnote{See e.g. \cite[p.189]{Kat}.}, $G$ is the set of all continuous characters of $G^*$, and since $G_B$ is the set of all (continuous or not) characters of $G^*$, $G$ can be considered to be contained in $G_B$. It can be shown that $G$ is dense in $G_B$.
Let $\mu$ be the normalised Haar measure in $G_B$, that is, $\mu$ is a positive regular Borel measure such that
\vspace{0.1cm}
\noindent $\;\bullet\;$ (invariance) $\mu(U)=\mu(U+\xi)$ for all Borel sets $U\subset G_B$, and
all $\xi\in G_B$,
\vspace{0.05cm}
\noindent $\;\bullet\;$ (normalisation) $\mu(G_B)=1$.
\vspace{0.1cm}
\noindent
Let $\mathbb{R}_B=(\mathbb{R}^*_d)^*$ denote the Bohr compactification of $\mathbb{R}$.
Let $\mu$ be the normalised Haar measure on $\mathbb{R}_B$.
Let $L^2_{\mu}(\mathbb{R}_B)$ be the Hilbert space of all functions $f:\mathbb{R}_B\rightarrow \mathbb{C}$ such that
$$
\|f\|_2^2:=\int_{\mathbb{R}_B} |f(\xi)|^2 \;\!d\mu(\xi)<\infty,
$$
with pointwise operations and the inner product
$$
\langle f,g\rangle=\int_{\mathbb{R}_B} f(\xi)\;\! \overline{g(\xi)}\;\!d\mu(\xi)
$$
for all $f,g\in L^2_{\mu}(\mathbb{R}_B)$.
The Besicovitch space $AP^2$ can be identified as a Hilbert space with $L^2_{\mu}(\mathbb{R}_B)$, and let $\iota: AP^2\rightarrow L^2_{\mu}(\mathbb{R}_B)$ be the Hilbert space isomorphism.
Let $L^\infty_{\mu}(\mathbb{R}_B)$ be the space of $\mu$-measurable functions that are essentially bounded (that is, bounded on $\mathbb{R}_B$ up to a set of measure $0$)
with the essential supremum norm
$$
\|f\|_\infty:= \inf\{M\geq 0: |f(\xi)|\leq M \textrm{ a.e.}\}.
$$
For an element $f\in L^\infty_{\mu}(\mathbb{R}_B)$, let $M_f :L^2_{\mu}(\mathbb{R}_B)\rightarrow L^2_{\mu}(\mathbb{R}_B)$ be the multiplication map
$\varphi\mapsto f\varphi$, where $f\varphi$ is the pointwise multiplication of $f$ and $\varphi$ as functions on $\mathbb{R}_B$.
Let $AP\subset L^\infty(\mathbb{R})$ be the $C^*$-algebra of
almost periodic functions, namely the closure in $L^\infty(\mathbb{R})$ of the space $\mathcal{T}$ of trigonometric polynomials.
Then it can be shown that $AP\subset AP^2$, and $\iota(AP)=C(\mathbb{R}_B)\subset L^\infty_{\mu}(\mathbb{R}_B)$. Also, for $f\in AP$,
$$
\|f\|_2= \|\iota f\|_2\leq \|\iota f\|_\infty=\|f\|_\infty.
$$
For $f,g\in AP$, and $\lambda \in \mathbb{R}$,
$$
\iota(fg)=(\iota f)(\iota g), \quad \iota(e_0)=\mathbf{1}_{\mathbb{R}_B},
\quad \overline{\iota(e_\lambda)}=
\iota(\overline{e_\lambda})=\iota(e_{-\lambda}).
$$
Every element $f\in AP$ gives a multiplication map, $M_{\iota(f)}$ on $ L^2_{\mu}(\mathbb{R}_B)$.
For $f\in AP^2$, the {\em mean value}
$$
{\mathbf{m}}(f):=\int_{\mathbb{R}_B} (\iota(f))(\xi) \;\!d\mu(\xi)=\langle \iota f, \iota \;\!e_0\rangle= \langle f, e_0\rangle
$$
exists. The set
$$
\Sigma(f):=\{\lambda \in \mathbb{R}: {\mathbf{m}}(e_{-\lambda} f)\neq 0\}
$$
is called the {\em Bohr spectrum of $f$}, and can be shown to be at most countable. We have a Hilbert space isomorphism, via the Fourier transform, between $L^2(\mathbb{T})$ and $\ell^2(\mathbb{Z})$:
$$
L^2(\mathbb{T})\owns f\mapsto (\widehat{f}(n):=\langle f, e^{int}\rangle)_{n\in \mathbb{Z}}\in \ell^2(\mathbb{Z}) .
$$
Analogously, we have a representation of $AP^2$ via the Bohr transform. We elaborate on this below.
Let $\ell^2(\mathbb{R})$ be the set of all $f:\mathbb{R}\rightarrow \mathbb{C}$ for which the set $\{\lambda \in \mathbb{R}: f(\lambda)\neq 0\}$ is countable and
$$
\|f\|_2^2:=\sum_{\lambda\;\!\in\;\! \mathbb{R}} |f(\lambda)|^2 <\infty.
$$
Then $\ell^2(\mathbb{R})$ is a Hilbert space with pointwise operations and the inner product
$$
\langle f,g\rangle =\sum_{\lambda\;\! \in \;\!\mathbb{R}} f(\lambda)\;\! \overline{g(\lambda)} .
$$
For $\lambda \in \mathbb{R}$, define the {\em shift-operator} $S_\lambda:\ell^2(\mathbb{R})\rightarrow \ell^2(\mathbb{R})$ by
$$
(S_\lambda f)(\cdot)=f(\cdot-\lambda).
$$
Let $c_{00}(\mathbb{R})\subset \ell^2(\mathbb{R})$ be the set of finitely supported functions. Define the map
$\mathcal{F}: c_{00}(\mathbb{R})\rightarrow AP^2$ as follows: For $f\in c_{00}(\mathbb{R})$,
$$
(\mathcal{F} f)(x)=\sum_{\lambda \;\!\in\;\! \mathbb{R}} f(\lambda) \;\!e^{i\lambda x}\quad (x\in \mathbb{R}).
$$
By continuity, $\mathcal{F}: c_{00}(\mathbb{R})\rightarrow AP^2$ can be extended to a map (denoted by the same symbol)
$\mathcal{F}:\ell^2(\mathbb{R})\rightarrow AP^2$ defined on all of $\ell^2(\mathbb{R})$, and is called the {\em Bohr transform}. The map $\mathcal{F} :\ell^2(\mathbb{R})\rightarrow AP^2$ is a Hilbert space isomorphism.
The inverse Bohr transform $\mathcal{F}^{-1}: AP^2\rightarrow \ell^2(\mathbb{R})$ is given by
$$
(\mathcal{F}^{-1} f)(\lambda)={\mathbf{m}}(f e_{-\lambda}) \quad (\lambda \in \mathbb{R}).
$$
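For a concrete illustration, if $f=2e_{1}+(1+i)\;\!e_{\sqrt{2}}\in \mathcal{T}$, then ${\mathbf{m}}(f e_{-\lambda})$ picks out the coefficient of $e_{\lambda}$, so that
$$
(\mathcal{F}^{-1} f)(1)=2,\quad (\mathcal{F}^{-1} f)(\sqrt{2})=1+i,\quad (\mathcal{F}^{-1} f)(\lambda)=0 \textrm{ otherwise},
$$
and $\|f\|_2^2=\|\mathcal{F}^{-1} f\|_2^2=|2|^2+|1+i|^2=6$, in accordance with $\mathcal{F}$ being a Hilbert space isomorphism.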
For $\lambda \in \mathbb{R}$ and $f\in L^2_{\mu}(\mathbb{R}_B)$, we have
the following equality in $\ell^2(\mathbb{R})$:
$$
\mathcal{F}^{-1} \iota^{-1}(M_{\iota(e_\lambda)} f)
=
(\mathcal{F}^{-1}\iota^{-1}f)(\cdot-\lambda)
=
S_\lambda (\mathcal{F}^{-1}\iota^{-1}f).
$$
We also note that by the Cauchy-Schwarz inequality in $L^2_{\mu}(\mathbb{R}_B)$,
for all functions $f,g\in L^2_\mu(\mathbb{R}_B)$, we have
\begin{eqnarray*}
\Big|\sum_{\lambda\;\! \in \;\!\mathbb{R}} (\mathcal{F}^{-1} \iota^{-1} f)(\lambda)\;\!\overline{(\mathcal{F}^{-1} \iota^{-1} g)(\lambda)}\Big|^2
\!\!\!\!&\leq&\!\!\!\! \sum_{\alpha\;\! \in \;\!\mathbb{R}} |(\mathcal{F}^{-1} \iota^{-1} f)(\alpha)|^2
\sum_{\beta\;\!\in \;\!\mathbb{R}} |(\mathcal{F}^{-1} \iota^{-1} g)(\beta)|^2
\\
\!\!\!\!&=&\!\!\!\!\|f\|_2^2\;\!\|g\|_2^2.
\end{eqnarray*}
We will need the following approximation result (see e.g. \cite{Cor} or \cite{BKS}):
\begin{proposition}
Let $f\in AP$ and $\Sigma(f)$ be its Bohr spectrum. Then there exists a sequence $(p_n)_{n\in \mathbb{N}}$ in $\mathcal{T}$ such that
\begin{itemize}
\item for all $n\in \mathbb{N}$, $\Sigma(p_n)\subset \Sigma(f)$, and
\item $ (p_n)_{n\in \mathbb{N}}$ converges uniformly to $f$ on $\mathbb{R}$.
\end{itemize}
\end{proposition}
\noindent Analogous to the classical Fourier theory, where the Fourier coefficients of the pointwise product of sufficiently regular functions are given by the convolution of their Fourier coefficients, we have the following.
\begin{lemma}
\label{lemma_8_may_2021_15:17}
Let $f\in L^\infty_{\mu}(\mathbb{R}_B)$ and $g\in L^2_{\mu}(\mathbb{R}_B)$. Then for all $\lambda\in \mathbb{R}$,
$$
(\mathcal{F}^{-1}\iota^{-1}(fg))(\lambda)=\sum_{\alpha\in \mathbb{R}}(\mathcal{F}^{-1}\iota^{-1}f)(\alpha) \;\!
(\mathcal{F}^{-1} \iota^{-1}g)(\lambda-\alpha).
$$
\end{lemma}
\begin{proof} We first show this for $f,g\in \iota \mathcal{T}$, and then extend by a continuity argument,
using the density of $\mathcal{T}$ in $AP^2$. For $f,g\in \iota \mathcal{T}$, the sets $\Sigma(\iota^{-1}f),\Sigma(\iota^{-1}g)$ are finite subsets of $\mathbb{R}$, and
\begin{eqnarray*}
\iota^{-1} f =\sum_{\alpha \;\!\in\;\! \mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} f)(\alpha) \;\!e_\alpha,
&\quad &
\iota^{-1} g =\sum_{\beta\;\! \in\;\! \mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} g)(\beta) \;\! e_\beta.
\end{eqnarray*}
So
\begin{eqnarray*}
\iota^{-1} (fg)
&=& (\iota^{-1} f)\;\! \iota^{-1} g
= \Big( \sum_{\alpha\;\!\in\;\! \mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} f)(\alpha)\;\! e_\alpha\Big)
\sum_{\beta \;\!\in \;\!\mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} g)(\beta) \;\!e_\beta\\
&=&
\sum_{\alpha \;\!\in\;\! \mathbb{R}}
\sum_{\beta \;\!\in\;\! \mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} f)(\alpha)
\;\!
(\mathcal{F}^{-1} \iota^{-1} g)(\beta)
\;\!e_{\alpha+\beta}.
\end{eqnarray*}
We have
$$
{\mathbf{m}}(e_a)=\left\{ \begin{array}{ll}
\langle \iota e_0,\iota e_0\rangle=1 & \textrm{if }a=0,\\
\langle \iota e_a,\iota e_0\rangle=0 & \textrm{if }a\neq 0.
\end{array}\right.
$$
Thus
\begin{eqnarray*}
(\mathcal{F}^{-1} \iota^{-1} (fg))(\lambda)
&=& {\mathbf{m}}(\iota^{-1}(fg)\;\!e_{-\lambda} )
\phantom{ \sum_{\alpha \in \mathbb{R}}}
\\
&=& \sum_{\alpha\;\! \in \;\!\mathbb{R}}
\sum_{\beta\;\! \in \;\!\mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} f)(\alpha)
\;\!
(\mathcal{F}^{-1} \iota^{-1} g)(\beta)\;\!
{\mathbf{m}}(e_{\alpha+\beta-\lambda})
\\
&=&
\sum_{\alpha\;\! \in\;\! \mathbb{R}}
(\mathcal{F}^{-1} \iota^{-1} f)(\alpha)
\;\!
(\mathcal{F}^{-1} \iota^{-1} g)(\lambda-\alpha) .
\end{eqnarray*}
Now consider the general case when $f\in L^\infty_{\mu}(\mathbb{R}_B)$ and $g\in L^2_{\mu}(\mathbb{R}_B)$.
Then we can find sequences $(f_n),(g_n)$ in $\iota \mathcal{T}$ such that
\begin{itemize}
\item $( f_n)_{n\in \mathbb{N}}$ converges uniformly to $f$,
\item $( g_n)_{n\in \mathbb{N}}$ converges to $g$ in $AP^2$, and
\item for all $n\in \mathbb{N}$, $\Sigma(\iota^{-1} f_n)\subset \Sigma(\iota^{-1}f)$, and $\Sigma(\iota^{-1}g_n)\subset \Sigma(\iota^{-1}g)$.
\end{itemize}
We remark that the $g_n$ can be constructed by simply `truncating' the `Bohr series' of $\iota^{-1}g$, since
$$
\sup_{\substack{F\;\!\subset\;\! \Sigma(\iota^{-1} g)\\ F\textrm{ finite}} }\;
\sum_{\beta\;\!\in\;\! F} |(\mathcal{F}^{-1} \iota^{-1} g)(\beta)|^2=\|g\|_2^2.
$$
Then, with $\widehat{\cdot}:= \mathcal{F}^{-1} \iota^{-1}$, we have
\begin{eqnarray*}
&&\Big| \sum_{\alpha\;\!\in\;\! \mathbb{R}} (\widehat{f_n})(\alpha) \;\!(\widehat{g_n})(\lambda-\alpha)
-\widehat{f}(\alpha)\;\!\widehat{g}(\lambda-\alpha)\Big|\\
&\leq & \sum_{\alpha\;\!\in\;\! \mathbb{R}}
|\widehat{f_n}(\alpha)| |\widehat{g_n}(\lambda -\alpha)-\widehat{g}(\lambda-\alpha)|
+
\sum_{\alpha\;\!\in \;\!\mathbb{R}}|\widehat{f_n}(\alpha)-\widehat{f}(\alpha)||\widehat{g}(\lambda-\alpha)| \\
&\leq &
\|f_n\|_2 \|g_n-g\|_2+\|f_n-f\|_2 \|g\|_2\phantom{\sum_{\alpha\in \mathbb{R}}}\textrm{(Cauchy-Schwarz)}
\\
&\leq&
\|f_n\|_\infty \|g_n-g\|_2+\|f_n-f\|_\infty \|g\|_2
\stackrel{n\rightarrow\infty}{\longrightarrow}
\|f\|_\infty \cdot 0 +0\cdot \|g\|_2=0.
\end{eqnarray*}
This completes the proof.
\end{proof}
\section{Characterisation of doubly invariant subspaces}
\noindent In this section, we state and prove our main results, namely Theorem~\ref{thm_23_april_2021_18:38} and Corollary~\ref{corollary_20_april_2021_18:15}. Theorem~\ref{thm_23_april_2021_18:38} is a straightforward adaptation\footnote{It is clear that there is but little novelty in the proof of our Theorem~\ref{thm_23_april_2021_18:38}. It may be argued that all this is implicit in the considerable literature on the subject of doubly invariant subspaces in quite general settings; see notably \cite{Sri}, \cite{HasSri}. Let us then make it explicit!} of the proof of the classical
version of the theorem given in \cite[Theorem ~1.2.1, p.8]{Nik}. On the other hand, the main result of the article is Corollary~\ref{corollary_20_april_2021_18:15}, which follows from Theorem~\ref{thm_23_april_2021_18:38}
by an application of
Lemma~\ref{lemma_8_may_2021_15:17}.
For a measurable set $\sigma\subset \mathbb{R}_B$, let
$\mathbf{1}_\sigma\in L^\infty_{\mu}(\mathbb{R}_B)$ denote the characteristic function of $\sigma$, i.e.,
$$
\mathbf{1}_\sigma(\xi)=\left\{\begin{array}{ll} 1& \textrm{if }\; \xi \in \sigma,\\
0 & \textrm{if }\; \xi\in \mathbb{R}_B \setminus \sigma.
\end{array}\right.
$$
\goodbreak
\begin{theorem}
\label{thm_23_april_2021_18:38}
Let $E\subset L^2_{\mu}(\mathbb{R}_B)$ be a closed subspace of $L^2_{\mu}(\mathbb{R}_B)$.
\noindent Then the following are equivalent:
\begin{enumerate}
\item $M_{\iota(e_\lambda)} E=E$ for all $\lambda \in \mathbb{R}$.
\item There exists a unique measurable set $\sigma \subset \mathbb{R}_B$ such that
$$
E=M_{\mathbf{1}_\sigma} L^2_{\mu}(\mathbb{R}_B)\\
=\{ f\in L^2_{\mu}(\mathbb{R}_B): f=0 \;\mu\textrm{\em-a.e. on }\mathbb{R}_B\setminus \sigma\}.
$$
\end{enumerate}
\end{theorem}
\begin{proof} (2)$\Rightarrow$(1): Let $f\in E$. Then there exists an element $\varphi\in L^2_{\mu}(\mathbb{R}_B)$ such that $f= M_{\mathbf{1}_\sigma}\varphi=\mathbf{1}_\sigma \varphi$. For all $\lambda \in \mathbb{R}$, $\iota (e_\lambda)\in L^\infty_{\mu}(\mathbb{R}_B)$, and
so
$$
M_{\iota(e_\lambda)} f
=\iota(e_\lambda)(\mathbf{1}_\sigma \;\!\varphi)
=(\iota(e_\lambda)\mathbf{1}_\sigma ) \;\!\varphi
=\mathbf{1}_\sigma(\iota(e_\lambda)\;\!\varphi)
=M_{\mathbf{1}_\sigma}\psi,
$$
where $\psi:=\iota(e_\lambda)\varphi\in L^2_{\mu}(\mathbb{R}_B)$, and so $M_{\iota(e_\lambda)} f\in E$. Thus $M_{\iota(e_\lambda)} E\subset E$. Moreover,
\begin{eqnarray*}
f&=&\mathbf{1}_\sigma \;\!\varphi=(\mathbf{1}_{\mathbb{R}_B}\mathbf{1}_\sigma) \;\!\varphi
= (\iota(e_0)\mathbf{1}_\sigma)\;\!\varphi=(\iota(e_{\lambda-\lambda})\mathbf{1}_\sigma)\;\!\varphi
\\
&=&\iota(e_\lambda)(\mathbf{1}_\sigma\;\!\iota(e_{-\lambda})\;\!\varphi)=M_{\iota(e_\lambda)} g,
\end{eqnarray*}
where $g:=\mathbf{1}_\sigma (\iota(e_{-\lambda})\;\!\varphi)\in M_{\mathbf{1}_\sigma}L^2_{\mu}(\mathbb{R}_B)$. Hence $f\in M_{\iota(e_\lambda)} E$. Consequently, $E\subset M_{\iota(e_\lambda)} E$ too.
\medskip
\noindent (1)$\Rightarrow$(2): Let $P_E:L^2_{\mu}(\mathbb{R}_B)\rightarrow L^2_{\mu}(\mathbb{R}_B)$ be the orthogonal projection onto the closed subspace $E$. Set $f=P_E \mathbf{1}_{\mathbb{R}_B}$. Let $I$ be the identity map on $L^2_{\mu}(\mathbb{R}_B)$. We claim that
$$
\phantom{AAAAAA}\mathbf{1}_{\mathbb{R}_B}-f\perp E. \phantom{AAAAAA}\quad \quad \quad (\star)
$$
Indeed, for all $g\in E$,
\begin{eqnarray*}
\langle \mathbf{1}_{\mathbb{R}_B}-f, g\rangle
& =&
\langle (I-P_E) \mathbf{1}_{\mathbb{R}_B}, P_E g\rangle
=
\langle (P_E)^*(I-P_E) \mathbf{1}_{\mathbb{R}_B}, g\rangle
\\
& =&
\langle P_E(I-P_E) \mathbf{1}_{\mathbb{R}_B}, g\rangle
=
\langle 0,g\rangle=0.
\end{eqnarray*}
As $f=P_E \mathbf{1}_{\mathbb{R}_B}\in E$ and $M_{\iota(e_\lambda)} E=E$ for all $\lambda \in \mathbb{R}$,
we have $\mathbf{1}_{\mathbb{R}_B}-f \perp M_{\iota(e_\lambda)} f$ for all $\lambda \in \mathbb{R}$. So for all
$p\in \mathcal{T}$,
$$
\int_{\mathbb{R}_B} \iota(p) f (\mathbf{1}_{\mathbb{R}_B}-\overline{f}) \;\!d\mu =0.
$$
But $\mathcal{T}$ is dense in $AP^2$, and $\mu$ is a finite positive Borel measure. So
$$
f(\mathbf{1}_{\mathbb{R}_B}-\overline{f})=0\;\; \mu{\textrm{-a.e.}}
$$
Thus $f=|f|^2$ $\mu$-a.e., so that $f(\xi)\in\{0,1\}$ for $\mu$-a.e. $\xi \in \mathbb{R}_B$. Set
$$
\sigma=\{\xi\in \mathbb{R}_B: f(\xi)=1\}.
$$
Then $f=\mathbf{1}_{\sigma}$ $\mu$-a.e. As $\mathbf{1}_{\sigma}=f=P_E \mathbf{1}_{\mathbb{R}_B}\in E$, and
as $M_{\iota(e_\lambda)} E=E$ for all $\lambda \in \mathbb{R}$, it follows that
$\mathbf{1}_{\sigma} \;\!\iota(\mathcal{T})\subset E$. But $E$ is closed, and thus
$$
\textrm{closure}(\mathbf{1}_{\sigma} \;\!\iota(\mathcal{T}))\subset E.
$$
Since $ \textrm{closure}(\mathcal{T})=AP^2$, we conclude that $ \mathbf{1}_{\sigma}L^2_{\mu}(\mathbb{R}_B)\subset E$.
Next we want to show that $E\subset \mathbf{1}_{\sigma}L^2_{\mu}(\mathbb{R}_B)$.
To this end, let $g\in E$ be orthogonal to $\mathbf{1}_{\sigma}\;\!L^2_{\mu}(\mathbb{R}_B)$. In particular, for all $\lambda \in \mathbb{R}$,
$$
\phantom{AAAAAA}
\int_{\mathbb{R}_B} g \;\! \mathbf{1}_{\sigma} \;\!\iota(e_\lambda) \;\!d\mu=0.
\phantom{AAAAAA} \quad \quad \quad \quad \quad(\ast)
$$
We want to show that $g=0$. Since $g\in E$, $M_{\iota(e_{\lambda})} g\in E$ for all $\lambda$.
So by ($\star$) above, $\mathbf{1}_{\mathbb{R}_B}-\mathbf{1}_{\sigma} \perp M_{\iota(e_{\lambda})} g$,
and noting that $\mathbf{1}_{\mathbb{R}_B},\mathbf{1}_{\sigma}$ are real-valued,
$$
\phantom{AAAAAa}
\int_{\mathbb{R}_B} g\;\!\iota(e_\lambda) (\mathbf{1}_{\mathbb{R}_B}-\mathbf{1}_{\sigma}) \;\!d\mu =0.
\phantom{AAAAA}\quad \quad \quad (\ast\ast)
$$
Hence, using the density of $\iota(\mathcal{T})$ in $L^2_{\mu}(\mathbb{R}_B)$, we obtain from ($\ast$) and ($\ast \ast$) that
\begin{eqnarray*}
g \;\! \mathbf{1}_{\sigma}\!\!&=&\!\!0 \;\;\quad\mu\textrm{-a.e.} \\
g \;\!( \mathbf{1}_{\mathbb{R}_B}-\mathbf{1}_{\sigma})\!\!&=&\!\!0 \;\;\quad \mu\textrm{-a.e.}
\end{eqnarray*}
Thus $g=g\mathbf{1}_{\mathbb{R}_B}=0$ $\mu$-a.e., as wanted.
The uniqueness of $\sigma$ up to a set of $\mu$-measure $0$ can be seen as follows: If $E=M_{\mathbf{1}_\sigma} L^2_{\mu}(\mathbb{R}_B)=M_{\mathbf{1}_{\sigma'}} L^2_{\mu}(\mathbb{R}_B)$, then
taking $\mathbf{1}_{\mathbb{R}_B}\in L^2_\mu(\mathbb{R}_B)$, we must have $\mathbf{1}_\sigma=\mathbf{1}_{\sigma'} \varphi$ for some $\varphi \in L^2_\mu(\mathbb{R}_B)$. So $\sigma\subset \sigma'$ up to a set of $\mu$-measure $0$. Similarly, $\sigma'\subset \sigma$ as well.
\end{proof}
\noindent We now interpret the above characterisation result for doubly invariant subspaces of $AP^2$ in terms of the Bohr coefficients of elements of $E$. Given a measurable set $\sigma \subset \mathbb{R}_B$, define
$\widehat{\sigma}\in \ell^2(\mathbb{R})$ by
$$
\widehat{\sigma}(\lambda)=\int_{\mathbb{R}_B} \mathbf{1}_\sigma \;\!\iota(e_{-\lambda})\;\!d\mu
=\int_\sigma \iota (e^{-i\lambda \cdot}) \;\!d\mu.
$$
\goodbreak
\begin{corollary}
\label{corollary_20_april_2021_18:15}
Let $E\subset \ell^2(\mathbb{R})$ be a closed subspace of $\ell^2(\mathbb{R})$.
\noindent Then the following are equivalent:
\begin{enumerate}
\item $S_\lambda E=E$ for all $\lambda \in \mathbb{R}$.
\item There exists a unique measurable set $\sigma \subset \mathbb{R}_B$ such that
\begin{eqnarray*}
\phantom{Aai}
E\!\!\!\!&=&\!\!\!\!(\mathcal{F}^{-1} \iota^{-1} M_{\mathbf{1}_\sigma} \iota\;\!\mathcal{F}) \;\! \ell^2(\mathbb{R})
\\
\!\!\!\! &=&\!\!\!\! \Big\{ f:\mathbb{R}\rightarrow \mathbb{C}\;\! \Big|\;\! f(\lambda)=\sum_{\alpha \in \mathbb{R}} \widehat{\sigma}(\lambda-\alpha) \;\!\varphi(\alpha), \;\varphi \in \ell^2(\mathbb{R})\Big\}.
\end{eqnarray*}
\end{enumerate}
\end{corollary}
\section{Experiments}
\subsection{Rotated MNIST}
We construct a $g$-separable (\Eqs{gconv_g_sep}{pointwise_g}) and $gc$-separable (\Eqs{gconv_gi_sep}{pointwise_gi}) version of the P4CNN architecture~\cite{cohen2016group} and evaluate on Rotated MNIST~\cite{larochelle2007}. Rotated MNIST has 10 classes of randomly rotated handwritten digits with 12k train and 50k test samples. We set the width $w$ of the $g$-P4CNN and $gc$-P4CNN networks such that the number of parameters is as close as possible to our Z2CNN and P4CNN baselines of 20 and 10 channels, respectively. We follow the training procedure of~\cite{cohen2016group} and successfully reproduced the results.
\tab{rotmnist} shows the test error averaged over 5 runs. Both $g$- and $gc$-P4CNN significantly outperform the regular P4CNN architecture and perform comparably or better than other architectures with a similar parameter count. Both $g$- and $gc$-P4CNN also outperform a depthwise separable version of Z2CNN ($c$-Z2CNN), validating that GConvs are more efficiently decomposable than regular convolutions. Additionally, we evaluate data-efficiency in a reduced data setting. As \fig{rotmnist_data} shows, both $g$- and $gc$-P4CNN consistently outperform P4CNN. Sharing the same 2D kernel in a GConv filter bank is thus a strong inductive bias and improves the model's sample efficiency. The test error as a function of number of parameters is also shown in \fig{rotmnist_param}. Separable GConvs do better for all model capacities.
\begin{table}[t]
\centering
\caption{Test error on Rotated MNIST - comparison with $z2$ baseline and other $p4$-equivariant methods. $w$ denotes network width. Separable GConv architectures perform better compared to regular GConvs (upper part) and comparable to other equivariant methods (lower part).}
\label{tab:rotmnist}
\begin{tabularx}{\linewidth}{@{}lcccc@{}}
\toprule
\textbf{Network} & \textbf{Test error} & $w$ & \textbf{Param.} & \textbf{MACs} \\ \midrule
Z2CNN~\cite{cohen2016group} & 5.20 $\pm$ 0.110 & 20 & 25.21 k & 2.98 M\\
$c$-Z2CNN & $4.64 \pm 0.126$ & 57 & 25.60 k & 4.14 M \\
P4CNN~\cite{cohen2016group} & 2.23 $\pm$ 0.061 & 10 & 24.81 k & 11.67 M\\
$g$-P4CNN [ours] & 2.60 $\pm$ 0.098 & 10 & 8.91 k & 4.37 M \\
$gc$-P4CNN[ours] & 2.88 $\pm$ 0.169 & 10 & 3.42 k & 1.80 M \\
$g$-P4CNN [ours] & 1.97 $\pm$ 0.044 & 17 & 25.26 k & 12.34 M \\
$gc$-P4CNN [ours] & \textbf{1.74} $\pm$ \textbf{0.070} & 30 & 24.64 k & 13.01 M \\ \midrule
SFCNN~\cite{Weiler_2018_CVPR} & \textbf{0.71} $\pm$ \textbf{0.022} & - & - & - \\
DREN~\cite{li2018deep} & 1.56 & - & 25 k & - \\
H-Net~\cite{Worrall_2017_CVPR} & 1.69 & - & 33 k & - \\
$\alpha$-P4CNN~\cite{romero2020attentive} & $1.70 \pm 0.021$ & 10 & 73.13 k & - \\
$a$-P4CNN~\cite{Romero2020CoAttentive} & 2.06 $\pm$ 0.043 & - & 20.76 k & - \\ \bottomrule
\end{tabularx}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.50\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/rotmnist-samples.pdf}
\caption{}
\label{fig:rotmnist_data}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/rotmnist-param.pdf}
\caption{}
\label{fig:rotmnist_param}
\end{subfigure}
\caption{Test error on Rotated MNIST for varying training set (a) and model sizes (b). Architectures with separable GConvs perform consistently better. }
\label{fig:rotmnist}
\end{figure}
\subsection{CIFAR 10}
Similarly, we perform a benchmark on the CIFAR10 dataset~\cite{cifar10} using a $p4m$ equivariant version of ResNet44 as detailed in~\cite{cohen2016group}. CIFAR 10+ denotes moderate data augmentation including random horizontal flips and random translations of up to 4 pixels. Our $gc$-$p4m$-ResNet44 outperforms all other methods using less parameters, as shown in~\tab{cifar10}. Also in a low data regime using only 20\% of the training samples our $gc$-$p4m$ architecture outperforms the regular $p4m$ network with an error rate of 13.43\% vs. 14.20\%.
\begin{table}[t]
\caption{Test error on CIFAR10 - comparison with other $p4m$-equivariant methods. $gc$-$p4m$-ResNet44 performs best.}
\label{tab:cifar10}
\begin{tabularx}{\linewidth}{@{}lYYc@{}}
\toprule
\textbf{Network} & \textbf{CIFAR10} & \textbf{CIFAR10+} & \textbf{Param.} \\ \midrule
ResNet44$^\dagger$~\cite{cohen2016group} & 13.10 & 7.66 & 2.64M \\
$p4m$-ResNet44$^\ddag$~\cite{cohen2016group} & 8.06 & 5.78 & 2.62M \\
$\alpha_F$-$p4m$-ResNet44~\cite{romero2020attentive} & 10.82 & 10.12 & 2.70M \\
$a$-$p4m$-ResNet44~\cite{Romero2020CoAttentive} & 9.12 & - & 2.63M \\
$g$-$p4m$-ResNet44 [ours] & 7.60 & 6.09 & 1.78M \\
$gc$-$p4m$-ResNet44 [ours] & \textbf{6.72} & \textbf{5.43} & 1.88M \\ \bottomrule
\multicolumn{4}{@{}l@{}}{\small $^{\dagger\ddag}$ Unable to reproduce results from~\cite{cohen2016group}: 9.45 / 5.61$^\dagger$, 6.46 / 4.94$^\ddag$.}\\
\end{tabularx}
\end{table}
\section{Discussion}
Our method exploits naturally occurring symmetries in GConvs by explicitly sharing the same filter kernel along the group and input channel dimensions using a pointwise and depthwise decomposition. Experiments show that imposing such a restriction on the architecture only causes a minor performance drop while significantly reducing the number of network parameters. This in turn (i) improves data efficiency and (ii) allows the network width to be increased for the same parameter budget, resulting in better overall performance. Sharing the spatial kernel over only the group dimension ($g$) proves less effective than additionally sharing over input channels ($gc$), as the latter also efficiently exploits inter-channel correlations in the network. This allows the network width, and thereby its representation power, to be increased further.
\section{Introduction}
\label{sec:intro}
Adding convolution to neural networks (CNNs) yields translation equivariance~\cite{kayhan2020translation}: first translating an image $x$ and then convolving is the same as first convolving $x$ and then translating.
Group Equivariant Convolutions~\cite{cohen2016group} (GConvs) enable equivariance to a larger group of transformations $G$, including translations, rotations of multiples of 90 degrees ($p4$ group), and horizontal and vertical flips ($p4m$ group). Equivariance to a group of transformations $G$ is guaranteed by sharing parameters between filter copies for each transformation in the group $G$. Adding such geometric symmetries as prior knowledge offers a hard generalization guarantee to all transformations in the group, reducing the need for large annotated datasets and extensive data augmentation.
In practice, however, GConvs occasionally learn filters that are near-invariant to transformations in $G$. An invariant filter is independent of the transformation and will for GConvs yield identical copies of the transformed filters in the consecutive layer, as shown in~\fig{toyexp}. This implies parameter redundancy, as these filters could be represented by a single spatial kernel. We propose an equivariant pointwise and a depthwise decomposition of GConvs with increased parameter sharing and thus improved data efficiency. Motivated by the observed inter-channel correlations in learned filters in~\cite{Haase_2020_CVPR} we explore additionally sharing the same spatial kernel over all input channels of a GConv filter bank. Our contributions are: (i) we show that near-invariant filters in GConvs yield highly correlated spatial filters; (ii) we derive two decomposed GConv variants; and (iii) improve accuracy compared to GConvs on RotMNIST and CIFAR10.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/fig1.pdf}
\caption{Filters and feature maps of a GConv architecture trained on Rotated MNIST. Rotation invariant filters in Layer 2 result in identical feature maps FM2 (green) and cause Layer 3 to learn identical weights along the group dimension $g$ (blue). In contrast, non-symmetric filters in Layer 2 (red) result in non-identical filters in Layer 3 (brown).}
\label{fig:toyexp}
\end{figure}
\section{Related Work}
\textbf{Equivariance in deep learning.}
Equivariance is a promising research direction for improving data efficiency~\cite{Rath2020BoostingDN}. A variety of methods have extended the Group Equivariant Convolution for the $p4$ and $p4m$ groups introduced in~\cite{cohen2016group} to larger symmetry groups including translations and discrete 2D rotations~\cite{bekkers2018,Weiler_2018_CVPR}, 3D~rotations~\cite{winkels2019,worrall2018,weiler2018a}, and scale~\cite{worrall2019,Sosnovik2020Scale-Equivariant}. Here, we investigate learned invariances in the initial GConv framework~\cite{cohen2016group} for the $p4$ and $p4m$ groups, yet our analysis extends to other groups where invariant filters exist.
\noindent \textbf{Depthwise separable decomposition~\cite{Sifre14}.} These decompose a multi-channel convolution into spatial convolutions applied on each individual input channel separately, followed by a pointwise (1x1) convolution. Depthwise separable convolutions significantly reduce parameter count and computation cost at the expense of a slight loss in representation power and therefore generally form the basis of network architectures optimized for efficiency~\cite{Howard2019SearchingFM, tan2019, mohamed2020data}. The effectiveness of depthwise separable convolutions is motivated~\cite{Haase_2020_CVPR} by the observed inter-channel correlations occurring in the learned filter banks of a CNN, which is quantified using a PCA decomposition. We do a similar analysis to motivate and derive our separable implementation of GConvs.
\section{Method}
\subsection{Group Equivariant Convolutions}
Equivariance to a group of transformations $G$ is defined as
\begin{align}
\Phi(T_gx) = T'_g\Phi(x), \quad \forall g \in G,
\end{align}
where $\Phi$ denotes a network layer and $T_g$ and $T'_g$ a transformation $g$ on the input and feature map, respectively. Note that in the case of translation equivariance $T$ and $T'$ are the same, but in general they need not be. To simplify the explanation, we first focus on the group $p4$ of translations and 90-degree rotations, but extend to larger groups later.
Let us denote a regular convolution as
\begin{align}
\label{eq:conv}
X_{n,:,:}^{l+1} = \sum_c^{C^{l}} F^l_{n,c,:,:} * X_{c,:,:}^l,
\end{align}
with $X$ the input and output tensors of size $[C^l,H,W]$, where $C^l$ is the number of channels in layer $l$, H is height and W is width, and $F$ the filter bank of size $[C^{l+1},C^{l},k,k]$, with $k$ the spatial extent of the filter.
In addition to spatial location, GConvs encode the added transformation group $G$ in an extra tensor dimension such that $X$ becomes of size $[C^l,G^l,H,W]$, where $G^{l}$ denotes the size of the transformation group $G$ at layer $l$, i.e. 4 for the $p4$ group. Likewise, GConv filters acting on these feature maps contain an additional group dimension, yielding a filter bank $F^l$ of size $[C^{l+1},C^{l},G^{l},k,k]$. As such, filter banks in GConvs contain $G^l$ times more trainable parameters compared to regular convolutions. A GConv is then performed by convolving over both the input channel and input group dimensions $C^l$ and $G^l$ and summing up the outputs:
\begin{align}
\label{eq:gconv}
X_{n,h,:,:}^{l+1} &= \sum_c^{C^{l}} \sum_g^{G^l} \Tilde{F}^l_{n,h,c,g,:,:} * X_{c,g,:,:}^l.
\end{align}
Here $\Tilde{F}^l$ denotes the full GConv filter of size $[C^{l+1}, G^{l+1},\allowbreak C^{l},G^{l},k,k]$ containing an additional dimension for the output group $G^{l+1}$. $\Tilde{F}^l$ is constructed from $F^l$ during each forward pass, where the output group dimension $G^{l+1}$ contains rotated and cyclically permuted versions of $F^l$ (see~\cite{cohen2016group} for details). Note that input images do not have a group dimension, so the input layer has $G^l{=}1$ and $X^1_{c,g,:,:}$ reduces to $X^1_{c,:,:}$, whereas for all following layers $G^l{=}4$ for the $p4$ group (and $G^l{=}8$ for $p4m$).
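To make the construction of $\Tilde{F}^l$ concrete, the following is a minimal sketch of the rotate-and-permute step for the $p4$ group. It is not the implementation of~\cite{cohen2016group}; all names are illustrative, and the sign of the cyclic shift depends on how the group elements are indexed.
\begin{verbatim}
# Sketch: building the full p4 filter bank F_tilde from the learnable F.
import torch

def build_p4_filter_bank(F):
    # F: [C_out, C_in, G=4, k, k] learnable GConv parameters
    copies = []
    for h in range(4):  # output group channel h <-> rotation by h*90 deg
        Fh = torch.rot90(F, k=h, dims=(-2, -1))  # rotate spatial kernels
        Fh = torch.roll(Fh, shifts=h, dims=2)    # permute input group axis
        copies.append(Fh)
    # result: [C_out, G_out=4, C_in, G_in=4, k, k]
    return torch.stack(copies, dim=1)
\end{verbatim}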
\subsection{Filter redundancies in GConvs}
A rotational symmetric filter is invariant to the relative orientation between the filter and its input. Thus, if the filter kernels in the group dimension of a $p4$ GConv filter bank $F^l$ are rotational symmetric and identical, the resulting feature maps will also be identical along the group dimension due to the rotation and cyclic permutation performed in constructing the full filter bank $\Tilde{F}^l$. As a result, the filters in the subsequent layer acting on these feature maps receive identical gradients and, given same initialization, learn identical filters. This is illustrated in \fig{toyexp} where a $p4$ equivariant CNN is trained on Rotated MNIST. The first layer contains a single fixed rotation invariant filter. All layers have equal initialization along the group dimension and linear activation functions. The filters in layer 2 converge to be identical along the group dimension. Furthermore, the filter kernels in the second layer belonging to the first output channel (green) are also rotational symmetric, resulting in identical feature maps in FM2 (green) and consequently the filters learned in the first input channel of layer 3 (blue) become highly similar. This is in contrast to the non-symmetric filters in layer 2 (red), resulting in non-identical filters in layer 3 (brown).
Even non-rotational symmetric filters can induce filter correlations in the subsequent layer. For instance, an edge detector will result in inverse feature maps along the group dimension, i.e. $g^0{\approx}-g^2$ and $g^1{\approx}-g^3$ and the filters acting on these feature maps will receive inverse gradients and consequently converge to be inversely correlated. Inversely correlated filters can be decomposed into the same spatial kernel multiplied by a positive and negative scalar.
Upon visual inspection of the learned filter parameters of a regular $p4$ equivariant CNN we observe that, even without any fixed symmetries or initialization and with ReLU activation functions, the filter kernels tend to be correlated along the group axis. To quantify this correlation we perform a PCA decomposition similar to that in \cite{Haase_2020_CVPR}. We reshape the filter bank $F$ to size $[C^{l+1}\times C^{l},G^{l},k^2]$ and perform PCA on each set of filters $F_{n,:,:}$ for all $n \in [1,C^{l+1}\times C^{l}]$, where for each $n$ the $G^l$ kernels are treated as samples with $k^2$ features. This results in $G^l$ principal components of size $k^2$, with PC1 being the filter kernel explaining the most variance within the decomposed set.
We perform this decomposition for all layers in a $p4$ equivariant network. \fig{hist} shows the ratio of the variance explained by PC1 for each layer (after the input layer), before and after training. In many cases a substantial part of the variance is explained by a single component, demonstrating a significant redundancy in filter parameters.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/pca_hist.pdf}
\caption{Ratio of variance explained by the first principal component when decomposing a filter kernel along the group dimension, before (blue) and after (red) training on Rotated MNIST. Redundancy in filter parameters increases as the network converges.}
\label{fig:hist}
\end{figure*}
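The decomposition above can be reproduced in a few lines of code. The sketch below assumes scikit-learn's PCA and uses illustrative names; the exact procedure behind \fig{hist} may differ in details.
\begin{verbatim}
# Sketch: ratio of variance explained by PC1, per (output, input) pair.
import numpy as np
from sklearn.decomposition import PCA

def pc1_variance_ratios(F):
    # F: [C_out, C_in, G, k, k] trained GConv filter bank (numpy array)
    C_out, C_in, G, k, _ = F.shape
    flat = F.reshape(C_out * C_in, G, k * k)  # G samples of k^2 features
    ratios = []
    for n in range(C_out * C_in):
        pca = PCA(n_components=1).fit(flat[n])
        ratios.append(pca.explained_variance_ratio_[0])
    return np.array(ratios)  # histogram these to obtain a plot as above
\end{verbatim}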
\subsection{Separable Group Equivariant Convolutions}
To exploit the correlations in GConvs we decompose the filter bank $F^l$ into a 2D kernel $K$ that is shared along the group dimension, and a pointwise component $w$ which encodes the inter-group correlations:
\begin{align}
F^l_{n,c,g,:,:} &= K^l_{n,c,:,:} \cdot w^l_{n,c,g}.
\end{align}
The full GConv filter bank is then constructed as
\begin{align}
\label{eq:filter_decomp}
\Tilde{F}^l_{n,h,c,g,:,:} &= T_h(K^l_{n,c,:,:}) \cdot \Tilde{w}^l_{n,h,c,g},
\end{align}
where $T_h$ denotes the 2D transformation corresponding to output group channel $h$ and $\Tilde{w}^l$ contains copies of $w^l$ that are cyclically permuted along the input group dimension. A naive implementation would be to precompute $\Tilde{F}$ and perform a regular GConv as in \eq{gconv}. Alternatively, for better computational efficiency we can substitute the filter decomposition in \eq{filter_decomp} into the GConv in \eq{gconv} and rearrange as follows:
\begin{align}
X_{n,h,:,:}^{l+1} &= \sum_c^{C^{l}} \sum_g^{G^l} X^l_{c,g,:,:} * \left( T_h(K^l_{n,c,:,:}) \cdot \Tilde{w}^l_{n,h,c,g} \right) \\
&= \sum_c^{C^{l}} \sum_g^{G^l} \left(X^l_{c,g,:,:} \cdot \Tilde{w}^l_{n,h,c,g} \right) * T_h(K^l_{n,c,:,:}) \\
\label{eq:gconv_g_sep}
&= \sum_c^{C^{l}} \Tilde{X}^l_{n,h,c,:,:} * T_h(K^l_{n,c,:,:})
\end{align}
with
\begin{align}
\label{eq:pointwise_g}
\Tilde{X}^l_{n,h,c,:,:} = \sum_g^{G^l} \left(X_{c,g,:,:}^l \cdot \Tilde{w}^l_{n,h,c,g} \right).
\end{align}
Expanding the dimensions of $\Tilde{w}^l$ to $[C^{l+1},G^{l+1},C^{l},G^{l},1,1]$ we can implement \eq{pointwise_g} as a grouped $1\times1$ convolution with $C^l$ groups, followed by a grouped spatial convolution with $C^{l+1}\times G^{l+1}$ groups, as given in \eq{gconv_g_sep}. We refer to this separable GConv variant as $g$-GConv, denoting the summation variable in \eq{pointwise_g}.
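For concreteness, one possible realization of \Eqs{gconv_g_sep}{pointwise_g} in PyTorch is sketched below. It is not necessarily the authors' implementation (it groups the spatial step per $(n,c)$ pair rather than per $(n,h)$, which is equivalent), and all names are illustrative; 'same' padding requires PyTorch 1.9 or later.
\begin{verbatim}
# Sketch: forward pass of a g-separable GConv for the p4 group.
import torch
import torch.nn.functional as tF

def rot_p4(K, h):
    # T_h for p4: rotate the 2D kernels by h*90 degrees
    return torch.rot90(K, k=h, dims=(-2, -1))

def g_separable_gconv(X, K, w):
    # X: [B, C_in, G_in, H, W] input; K: [C_out, C_in, k, k] shared kernels
    # w: [C_out, G_out, C_in, G_in] pointwise component (w tilde)
    B, C_in, G_in, H, W = X.shape
    C_out, G_out = w.shape[:2]
    # pointwise step: X_tilde[b,n,h,c] = sum_g w[n,h,c,g] * X[b,c,g]
    X_tilde = torch.einsum('nhcg,bcgxy->bnhcxy', w, X)
    out = X.new_zeros(B, C_out, G_out, H, W)
    for h in range(G_out):
        Kh = rot_p4(K, h).reshape(C_out * C_in, 1, *K.shape[-2:])
        y = tF.conv2d(X_tilde[:, :, h].reshape(B, C_out * C_in, H, W),
                      Kh, padding='same', groups=C_out * C_in)
        out[:, :, h] = y.reshape(B, C_out, C_in, H, W).sum(dim=2)
    return out
\end{verbatim}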
Alternatively, we share the spatial kernel $K$ along both the group and input channel dimension by decomposing $F^l$ as:
\begin{align}
F^l_{n,c,g,:,:} &= K^l_{n,:,:} \cdot w^l_{n,c,g},\\
\Tilde{F}^l_{n,h,c,g,:,:} &= T_h(K^l_{n,:,:}) \cdot \Tilde{w}^l_{n,h,c,g}.
\end{align}
Substituting $\Tilde{F}^l$ in \eq{gconv} and rearranging yields
\begin{align}
X_{n,h,:,:}^{l+1} &= \sum_c^{C^{l}} \sum_g^{G^l} X_{c,g,:,:}^l * \left( T_h(K^l_{n,:,:}) \cdot \Tilde{w}^l_{n,h,c,g} \right) \\
&= \sum_c^{C^{l}} \sum_g^{G^l} \left(X_{c,g,:,:}^l \cdot \Tilde{w}^l_{n,h,c,g} \right) * T_h(K^l_{n,:,:}) \\
\label{eq:gconv_gi_sep}
&= \Tilde{X}^l_{n,h,:,:} * T_h(K^l_{n,:,:})
\end{align}
with
\begin{align}
\label{eq:pointwise_gi}
\Tilde{X}^l_{n,h,:,:} = \sum_c^{C^{l}}\sum_g^{G^l} \left(X_{c,g,:,:}^l \cdot \Tilde{w}^l_{n,h,c,g} \right).
\end{align}
This way the GConv essentially reduces to an inverse depthwise separable convolution with \eq{pointwise_gi} being the pointwise and \eq{gconv_gi_sep} being the depthwise component. This variant is named $gc$-GConv after the summation variables in \eq{pointwise_gi}.
While the $g$ and $gc$ decompositions may impose too stringent restrictions on the hypothesis space of the model, the improved parameter efficiency, as detailed in section \ref{sec:compute}, allows us to increase the network width given the same parameter budget resulting in better overall performance.
\subsection{Computation efficiency}
\label{sec:compute}
The decomposition of GConvs allows for a theoretically more efficient implementation, both in terms of the number of stored parameters and multiply-accumulate operations (MACs). As opposed to the $[C^l\times G^l \times k^2 \times C^{l+1}]$ parameters in a GConv filter bank, $g$- and $gc$-GConvs require only $[C^l \times C^{l+1} \times (G^l + k^2)]$ and $[C^{l+1} \times (C^l \times G^l + k^2)]$, respectively. Similarly, a regular GConv layer performs $[C^l \times G^l \times k^2 \times W \times H \times C^{l+1} \times G^{l+1}]$ MACs, whereas $g$- and $gc$-GConvs do only $[C^l \times C^{l+1} \times G^{l+1} \times W \times H \times (G^l + k^2)]$ and $[C^{l+1} \times G^{l+1} \times W \times H \times (C^l \times G^l + k^2)]$, assuming 'same' padding. This translates to a reduction by a factor of $\frac{1}{k^2}+\frac{1}{G^l}$ and $\frac{1}{k^2}+\frac{1}{C^l \times G^l}$, both in terms of parameters and MAC operations. The decrease in MACs comes at the cost of a larger GPU memory footprint due to the need to store intermediate feature maps, as is generally the case for separable convolutions. Separable GConvs are therefore especially suitable for applications where the available processing power is the bottleneck as opposed to memory.
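As a worked example of these counts, consider a $p4$ layer with $C^l = C^{l+1} = 32$, $G^l = 4$ and $k = 3$ (arbitrary but representative values):
\begin{verbatim}
# Parameter counts for C_in = C_out = 32, G = 4, k = 3.
Cin, Cout, G, k = 32, 32, 4, 3
gconv  = Cin * G * k**2 * Cout      # 36,864
g_sep  = Cin * Cout * (G + k**2)    # 13,312
gc_sep = Cout * (Cin * G + k**2)    #  4,384
print(g_sep / gconv, 1/k**2 + 1/G)           # both ~0.361
print(gc_sep / gconv, 1/k**2 + 1/(Cin * G))  # both ~0.119
\end{verbatim}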
\section*{COFFEE Methodology}
COFFEE is a probabilistic model that forecasts daily reported cases and deaths of COVID-19. COFFEE is fit to geographic regions independently, facilitating parallelization for fast computations.
\subsection*{Notation}
\begin{itemize}
\item $t$ indexes time, where $t$ is the number of days from a reference starting date (index)
\item $T$ is the day of the last observation (index)
\item $K$ is the forecast window size, in days (index)
\item $y_{c,t}$ is the number of reported cases of COVID-19 on day $t$ as reported on the COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU) (observable)
\item $\ddot{y}_{c,t} = \sum_{j=1}^t y_{c,j}$ is the cumulative number of reported cases of COVID-19 through day $t$ as reported by CSSE at JHU (observable)
\item $y_{d,t}$ is the number of reported deaths of COVID-19 on day $t$ as reported by CSSE at JHU (observable)
\item $\ddot{y}_{d,t} = \sum_{j=1}^t y_{d,j}$ is the cumulative number of reported deaths of COVID-19 through day $t$ as reported by CSSE at JHU (observable)
\item $\delta_{c,t}$ is the underlying number of reported cases on day $t$ (unobservable)
\item $\ddot{\delta}_{c,t} = \sum_{j=1}^t \delta_{c,j}$ is the underlying number of cumulative reported cases through day $t$ (unobservable)
\item $\delta_{d,t}$ is the underlying number of reported deaths on day $t$ (unobservable)
\item $\ddot{\delta}_{d,t} = \sum_{j=1}^t \delta_{d,j}$ is the underlying number of cumulative reported deaths through day $t$ (unobservable)
\item $\delta_{s,0}$ is the underlying number of susceptible individuals at the start of the pandemic (unobservable)
\item $\delta_{s,t} = \delta_{s,0} - \ddot{\delta}_{c,t}$ is the underlying number of susceptible individuals on day $t$ (unobservable)
\end{itemize}
We use the convention that bolded quantities are vectors and unbolded quantities are scalars. For concreteness, $y_{c,t}$ is a scalar while $\boldsymbol{y}_{c,1:t} = (y_{c,1},y_{c,2},\ldots, y_{c,t})'$ is a $t \times 1$ vector.
\subsection*{Cases Model}
Let
\begin{align}
y_{c,t}|\delta_{c,t}, \alpha &\sim \text{NB}\Bigg(\delta_{c,t}, \frac{\delta_{c,t}}{\alpha}\Bigg)
\end{align}
\noindent where NB($a$,$b$) is a Negative-Binomial model with mean parameter $a > 0$ and size parameter $b > 0$ where
\begin{align}
\text{E}(y_{c,t}|\delta_{c,t},\alpha) &= \delta_{c,t} \\
\text{Var}(y_{c,t}|\delta_{c,t},\alpha) &= \delta_{c,t}(1 + \alpha).
\end{align}
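This mean--size parameterization maps directly onto standard software. The sketch below uses illustrative names; NumPy's generator takes the $(n,p)$ parameterization with $n$ the size and $p = b/(a+b)$, which recovers the stated mean and variance:
\begin{verbatim}
# Sketch: sampling y ~ NB(mean = delta, size = delta / alpha) with NumPy.
import numpy as np

def rnb(mean, size_param, rng):
    p = size_param / (size_param + mean)
    return rng.negative_binomial(size_param, p)

rng = np.random.default_rng(0)
delta, alpha = 200.0, 3.0
draws = np.array([rnb(delta, delta / alpha, rng) for _ in range(100_000)])
print(draws.mean(), draws.var())  # approx. 200 and 200 * (1 + 3) = 800
\end{verbatim}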
Figure \ref{fig:cases} shows the daily reported cases for New Mexico, the United States (US), and France. All three regions have gone through rising and declining periods of cases with various levels of noise in the reported cases.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{cases.png}
\caption{The daily reported cases of COVID-19 for New Mexico, the US, and France.}
\label{fig:cases}
\end{figure}
In what follows, we outline the steps COFFEE takes to produce forecasts of reported cases.
\subsubsection*{Step 1: Identify and Adjust Outliers}
COFFEE automatically identifies and adjusts outliers \cite{tsoutliers}. It runs five different outlier detection algorithms on the reported data, taking into account possible day-of-week (DOW) effects. A datum is declared an outlier if three or more of the five detection algorithms identify that datum as an outlier. The outliers are not removed, but rather adjusted to ensure all values are non-negative. Figure \ref{fig:outlier_adjusted_cases} shows the result of this process on daily cases for New Mexico, the US, and France. All subsequent modeling steps are conducted with outlier adjusted data.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{outlier_removed_cases.png}
\caption{The reported daily cases of COVID-19 (top) and the outlier adjusted daily cases (bottom). (Top) Magenta points were identified as outliers. (Bottom) Magenta points are the adjusted outliers.}
\label{fig:outlier_adjusted_cases}
\end{figure}
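The text does not enumerate the five detection algorithms (see \cite{tsoutliers} for candidates), so the sketch below only illustrates the majority-vote logic, with hypothetical detector callables:
\begin{verbatim}
# Sketch: a datum is an outlier if >= 3 of 5 detectors flag it.
import numpy as np

def consensus_outliers(y, detectors, votes_needed=3):
    # detectors: callables mapping the series y to a boolean mask
    votes = np.sum([d(y) for d in detectors], axis=0)
    return votes >= votes_needed
\end{verbatim}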
\subsubsection*{Step 2: Compute the Empirical Growth Rate, $\hat{\kappa}_t$}
The model for the underlying number of reported daily cases, $\delta_{c,t}$, is a dynamic susceptible-infectious (SI) model \cite{jacquez1993stochastic}, where
\begin{align}
\label{eq:theta_s}
\delta_{s,t} &= \delta_{s,t-1} - \delta_{c,t}\\
\label{eq:theta_c}
\ddot{\delta}_{c,t} &= \ddot{\delta}_{c,t-1} + \delta_{c,t},
\end{align}
\noindent and
\begin{align}
\label{eq:delta_c}
\delta_{c,t} = \kappa_t \frac{\delta_{s,t-1}}{\delta_{s,0}} \ddot{\delta}_{c,t-1}.
\end{align}
\noindent The quantity $\ddot{\delta}_{c,t-1}$ is the cumulative number of underlying cases on day $t-1$, $\frac{\delta_{s,t-1}}{\delta_{s,0}}$ is the proportion of the population still susceptible at time $t-1$, and $\kappa_t$ is the growth rate on day $t$.
When $\frac{\delta_{s,t-1}}{\delta_{s,0}} \approx 1$ (when most of the susceptible population is still susceptible), we can rearrange Equations \ref{eq:theta_s}, \ref{eq:theta_c}, and \ref{eq:delta_c} to identify a crude estimator of $\kappa_t$:
\begin{align}
\hat{\kappa}_t &\approx \Bigg(\frac{\ddot{y}_{c,t}}{\ddot{y}_{c,t-1}} - 1\Bigg).
\end{align}
\noindent Estimates for $\kappa_t$ are shown in Figure \ref{fig:kappa_hat}. It is clear that $\hat{\kappa}_t$ is dynamic and changes over time. This is what makes forecasting COVID-19 so challenging; parameters of epidemiologically-motivated models are dynamic and \emph{forecasting with them requires anticipating how these dynamic parameters will change in the future, not just tracking where they have been in the past}. In what follows, we describe how we forecast $\hat{\kappa}_t$.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{kappa_hat.png}
\caption{$\hat{\kappa}_t$ starting in May for New Mexico, the US, and France.}
\label{fig:kappa_hat}
\end{figure}
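The estimator $\hat{\kappa}_t$ is a one-liner on the cumulative counts; the following sketch (illustrative names) transcribes it directly:
\begin{verbatim}
# Sketch: kappa_hat[t] = y_cum[t] / y_cum[t-1] - 1.
import numpy as np

def kappa_hat(y_cum):
    y = np.asarray(y_cum, dtype=float)
    return y[1:] / y[:-1] - 1.0
\end{verbatim}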
\newpage
\subsubsection*{Step 3: Compute $\hat{\kappa}^*_t$ = logit$(\hat{\kappa}_t)$}
After the initial portion of the outbreak, $\hat{\kappa}_t$ is almost always between 0 and 1. COFFEE logit transforms $\hat{\kappa}_t$, where logit($p$) = log($p/(1-p)$) for $p \in (0,1)$. For all days with no reported cases, $\hat{\kappa}_t = 0$, a value incompatible with the logit transform. Thus, we compute $\hat{\kappa}^*_t = $ logit($\hat{\kappa}_t$) as follows:
\begin{align}
\hat{\kappa}^*_t &=
\begin{cases}
\text{logit}(\hat{\kappa}_t) & \text{ if } \hat{\kappa}_t > \tau_c \text{ and } \hat{\kappa}_t < 1 - \tau_c\\
\text{logit}(\tau_c) & \text{ if } \hat{\kappa}_t \leq \tau_c\\
\text{logit}(1-\tau_c) & \text{ if } \hat{\kappa}_t \geq 1-\tau_c,\\
\end{cases}
\end{align}
\noindent where $\tau_c = 0.95*\text{min}(\{\hat{\kappa}_t | \hat{\kappa}_t > 0\})$, ensuring that if $\hat{\kappa}_t \geq \hat{\kappa}_{t'}$, then $\hat{\kappa}^*_t \geq \hat{\kappa}^*_{t'}$. The logit transformed $\hat{\kappa}_t$ are shown in Figure \ref{fig:logit_kappa_hat}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{logit_kappa_hat.png}
\caption{The quantities $\hat{\kappa}^*_t$ starting in May for New Mexico, the US, and France.}
\label{fig:logit_kappa_hat}
\end{figure}
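A direct transcription of the thresholded transform is given below (illustrative names; clipping to $[\tau_c, 1-\tau_c]$ reproduces the three cases above):
\begin{verbatim}
# Sketch: the thresholded logit transform kappa_star.
import numpy as np

def kappa_star(kappa):
    kappa = np.asarray(kappa, dtype=float)
    tau = 0.95 * kappa[kappa > 0].min()
    clipped = np.clip(kappa, tau, 1.0 - tau)
    return np.log(clipped / (1.0 - clipped))
\end{verbatim}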
\subsubsection*{Step 4: Split Data into Training and Testing Sets}
Let $T$ be the last observed day. We only consider the last 42 days of data when fitting a model for $\hat{\kappa}^*_t$ and split those days into training and testing data \cite{picard1990data}. Days $T-41$ through $T-14$ constitute the training data, while days $T-13$ through $T$ constitute the testing data. We will denote the last day of the training data by $T^{\text{train}} = T-14$. The splits are shown for New Mexico, the US, and France in Figure \ref{fig:splits}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{logit_kappa_hat_current.png}
\caption{The quantities $\hat{\kappa}^*_t$ for New Mexico, the US, and France. Circles are training data, left of the vertical dashed line, while testing data are the triangles to the right of the dashed vertical line.}
\label{fig:splits}
\end{figure}
\subsubsection*{Step 5: Compute $\hat{\kappa}_t^{\text{trend}}$}
We fit a weighted regression to the training data, downweighting points with an outsized influence on the fit using the inverse of Cook's distance. The regression has a linear trend over time and a DOW effect:
\begin{align}
\label{eq:reg}
\hat{\kappa}^*_t &= \beta_0 + \beta_1 t + \beta_2 \text{I}(t = \text{Monday}) + \beta_3 \text{I}(t = \text{Tuesday}) + \ldots + \beta_7 \text{I}(t = \text{Saturday}).
\end{align}
\noindent Variable selection is performed, potentially resulting in a subset of the model parameters in Equation \ref{eq:reg}. We refer to the fits and predictions from this linear model as $\hat{\kappa}_t^{\text{trend}}$ where
\begin{align}
\label{eq:dow}
\hat{\kappa}_t^{\text{trend}} = \hat{\beta}_0 + \hat{\beta}_1 t + \hat{\beta}_2 \text{I}(t = \text{Monday}) + \hat{\beta}_3 \text{I}(t = \text{Tuesday}) + \ldots + \hat{\beta}_7 \text{I}(t = \text{Saturday})
\end{align}
\noindent where, if a variable was removed during the variable selection phase, then the corresponding $\hat{\beta}$ is set equal to 0. Figure \ref{fig:kappa_reg} shows $\hat{\kappa}^{\text{trend}}_t$ for the training and testing windows.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{logit_kappa_hat_regression.png}
\caption{The quantities $\hat{\kappa}^*_t$ for New Mexico, the US, and France. Circles are training data, left of the vertical dashed line, while testing data are the triangles to the right of the dashed vertical line. Solid line represents the fits (train) and predictions (test) of $\hat{\kappa}_t^{\text{trend}}$ based on the regression.}
\label{fig:kappa_reg}
\end{figure}
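A sketch of the weighted trend + day-of-week fit using statsmodels is given below. The variable-selection rule is not specified in the text and is omitted; all names, the reference day dropped by the dummy encoding, and the small floor on Cook's distance are illustrative:
\begin{verbatim}
# Sketch: downweight influential points by inverse Cook's distance.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_kappa_trend(kappa_star, dates):
    # dates: pandas datetime series aligned with kappa_star
    X = pd.get_dummies(pd.Series(dates).dt.day_name(), drop_first=True)
    X.insert(0, 't', np.arange(len(X)))
    X = sm.add_constant(X.astype(float))
    ols = sm.OLS(kappa_star, X).fit()
    cooks = ols.get_influence().cooks_distance[0]
    weights = 1.0 / np.maximum(cooks, 1e-8)
    return sm.WLS(kappa_star, X, weights=weights).fit()
\end{verbatim}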
\subsubsection*{Step 6: Compute $\hat{\kappa}_t^{\text{constant}}$}
There is a $\hat{\kappa}_t$ trajectory corresponding to a constant number of new cases day over day. Let
\begin{align}
\label{eq:avg_cases}
\bar{y}_{c,T^{\text{train}}} &= \frac{1}{7}\sum_{t=T^{\text{train}}-6}^{T^{\text{train}}} y_{c,t}
\end{align}
\noindent be the average number of daily reported cases over the last week of the training window.
Then
\begin{align}
\label{eq:kappa_constant}
\hat{\kappa}_{t}^{\text{constant}} &= \text{logit}\Bigg(\bar{y}_{c,T^{\text{train}}} \Bigg[ \Bigg( \frac{\delta_{s,0} - \ddot{y}_{c,t-1} }{\delta_{s,0}} \Bigg) \ddot{y}_{c,t-1} \Bigg]^{-1}\Bigg)
\end{align}
\noindent where we set $\delta_{s,0} = 0.55 N$, where $N$ is the population of the forecasted region and 0.55 is a nominal attack rate for COVID-19. Figure \ref{fig:kappa_constant} shows $\hat{\kappa}_{t}^{\text{constant}}$ for $t = T^{\text{train}} + k$ and $k \in 1,2,\ldots, 14$.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{logit_kappa_hat_constant.png}
\caption{The quantities $\hat{\kappa}^*_t$ for New Mexico, the US, and France. Circles are training data, left of the vertical dashed line, while testing data are the triangles to the right of the dashed vertical line. Solid line represents the values of $\hat{\kappa}_t^{\text{constant}}$, the trajectory corresponding to a constant number of new reported cases, equal to $\bar{y}_{c,T^{\text{train}}}$.}
\label{fig:kappa_constant}
\end{figure}
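The constant-cases trajectory is likewise a direct transcription of Equation \ref{eq:kappa_constant} (illustrative names):
\begin{verbatim}
# Sketch: kappa_constant on the logit scale.
import numpy as np

def kappa_constant(ybar, y_cum_prev, N, attack_rate=0.55):
    s0 = attack_rate * N               # delta_{s,0} = 0.55 * population
    kappa = ybar / (((s0 - y_cum_prev) / s0) * y_cum_prev)
    return np.log(kappa / (1.0 - kappa))
\end{verbatim}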
\subsubsection*{Step 7: Compute a Joint Probability Distribution over Tuning Parameters}
The form of the forecasting model for $\hat{\kappa}_t^{\text{forecast}}$ is
\begin{align}
\label{eq:fcstform}
\hat{\kappa}_t^{\text{forecast}}(\eta,\omega,\phi) &= \lambda_t(\phi) [w_t \text{min}(\eta^*,\hat{\kappa}_t^{\text{trend}}) + (1-w_t)\hat{\kappa}_t^{\text{constant+DOW}}]
\end{align}
\noindent for $t = T^{\text{train}} + k$ and $k \in 1,2,\ldots, 14$. The objective is to produce forecasts for $\hat{\kappa}_t$. The way COFFEE does this is by creating a blended combination of $\hat{\kappa}_t^{\text{trend}}$ and $\hat{\kappa}_t^{\text{constant+DOW}}$ where
\begin{align}
\hat{\kappa}_t^{\text{constant+DOW}} =& \hat{\kappa}_t^{\text{constant}} + \nonumber \\
&\hat{\beta}_2 \text{I}(t = \text{Monday}) + \hat{\beta}_3 \text{I}(t = \text{Tuesday}) + \ldots + \hat{\beta}_7 \text{I}(t = \text{Saturday}),
\end{align}
\noindent which is $\hat{\kappa}_t^{\text{constant}}$ with the DOW effects estimated in Equation \ref{eq:dow} added.
There are three tuning parameters in Equation \ref{eq:fcstform}, each playing a role in controlling the form of $\hat{\kappa}_t^{\text{forecast}}$.
The first tuning parameter $\eta$ puts a cap on how large $\hat{\kappa}_t^{\text{trend}}$ can get. A forecast can blow up if $\hat{\kappa}_t^{\text{trend}}$ is growing in an unmitigated fashion. The parameter $\eta$ is a safeguard against this unmitigated growth. We set
\begin{align}
\eta^* &= \text{median}(\{\hat{\kappa}^*_{T^{\text{train}}-6}, \hat{\kappa}^*_{T^{\text{train}}-5},\ldots,\hat{\kappa}^*_{T^{\text{train}}}\})\eta
\end{align}
\noindent for $\eta \in [0,1]$.
The second tuning parameter is $\omega$. The basic form of the forecasting model is to transition from a forecast that relies on the current trend $\hat{\kappa}^{\text{trend}}_t$ to a forecast that relies on $\hat{\kappa}^{\text{constant+DOW}}_t$. If $\hat{\kappa}_t^{\text{trend}}$ is trending up, this transition keeps the forecasts from blowing up. If $\hat{\kappa}_t^{\text{trend}}$ is trending down, this transition keeps the forecasts from flat-lining at 0 new cases. The assumption behind this modeling choice is that, as cases are going up, people will take action to curb the growth of the pandemic, either through independent choices of personal responsibility or governmental policies. As cases are going down, however, we assume policies will be relaxed or people will become more comfortable engaging in activities that will increase transmission pathways. The tuning parameter $\omega \geq 1$ determines how quickly the forecast transitions from $\hat{\kappa}_t^{\text{trend}}$ to $\hat{\kappa}_t^{\text{constant+DOW}}$. The smaller $\omega$ is, the quicker the transition occurs.
\begin{align}
w_{T^{\text{train}}+k} &=
\begin{cases}
1-\Bigg(\frac{k-1}{\omega} \Bigg)^2 & \text{ if } k \leq \omega + 1\\
0 & \text{ otherwise }
\end{cases}
\end{align}
\noindent where $k$ is a positive integer. Figure \ref{fig:omega} shows weight trajectories $w_t$ for various choices of $\omega$. When $w_{T^{\text{train}}+k} = 1$, all weight is on $\hat{\kappa}_t^{\text{trend}}$; when $w_{T^{\text{train}}+k}=0$, all weight is on $\hat{\kappa}_t^{\text{constant+DOW}}$.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{omega.png}
\caption{Weight trajectories $w_{T^{\text{train}}+k}$ for different values of $\omega$. The smaller $\omega$ is, the quicker $w_{T^{\text{train}}+k}$ transitions from 1 where all weight is assigned to $\hat{\kappa}^{\text{trend}}$ to 0 where all weight is assigned to $\hat{\kappa}_t^{\text{constant+DOW}}$.}
\label{fig:omega}
\end{figure}
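The weight trajectory is simple to compute. The following minimal Python sketch (the function name \texttt{blend\_weight} is ours, purely for illustration) evaluates $w_{T^{\text{train}}+k}$:
\begin{verbatim}
def blend_weight(k, omega):
    # Weight on kappa_trend at forecast step k = 1, 2, ...;
    # decays quadratically from 1 and reaches 0 at k = omega + 1.
    if k <= omega + 1:
        return 1.0 - ((k - 1) / omega) ** 2
    return 0.0
\end{verbatim}
For example, $\omega = 1$ gives weights $1, 0, 0, \ldots$, so the forecast abandons the trend after a single day, while a large $\omega$ keeps the trend in play for most of the 14-day horizon.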
The third tuning parameter is $\phi$. The trajectory $\lambda_t$ is defined as
\begin{align}
\lambda_{T^{\text{train}}+k} &= 1 + k\frac{\phi - 1}{30},
\end{align}
\noindent a linear trend starting at 1 when $k=0$. The tuning parameter $\phi > 0$ determines whether $\lambda_{T^{\text{train}}+k}$ trends up ($\phi > 1$) or down ($\phi < 1$). Examples of $\lambda_{T^{\text{train}}+k}$ are shown in Figure \ref{fig:lambda}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{lambda.png}
\caption{The trajectories $\lambda_{T^{\text{train}}+k}$ for different values of $\phi$. For $\phi$ less than 1, $\lambda_{T^{\text{train}}+k}$ tilts the blended combination of $\hat{\kappa}^{\text{trend}}_t$ and $\hat{\kappa}^{\text{constant+DOW}}_t$ up. When $\phi$ is greater than 1, $\lambda_{T^{\text{train}}+k}$ tilts it down.}
\label{fig:lambda}
\end{figure}
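Putting the pieces of Equation \ref{eq:fcstform} together is equally direct. The sketch below (our own illustration, not released COFFEE code) vectorizes the weight above and assumes arrays holding the trend and constant+DOW trajectories over the forecast horizon:
\begin{verbatim}
import numpy as np

def tilt(k, phi):
    # lambda_{T+k}: linear trend equal to 1 at k = 0
    return 1.0 + k * (phi - 1.0) / 30.0

def kappa_forecast(kappa_trend, kappa_const_dow, eta_star, omega, phi):
    # Blended forecast of Equation (fcstform) for k = 1, ..., horizon
    k = np.arange(1, len(kappa_trend) + 1)
    w = np.where(k <= omega + 1, 1.0 - ((k - 1) / omega) ** 2, 0.0)
    capped_trend = np.minimum(eta_star, kappa_trend)
    return tilt(k, phi) * (w * capped_trend
                           + (1.0 - w) * kappa_const_dow)
\end{verbatim}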
For a combination of $\eta$, $\omega$, and $\phi$, we compute $\hat{\kappa}_t^{\text{forecast}}(\eta,\omega,\phi)$ and then the inverse-distance between the inverse-logit of $\hat{\kappa}_t^{\text{forecast}}(\eta,\omega,\phi)$ and $\hat{\kappa}_t$ over the test period:
\begin{align}
\label{eq:invdist}
d^{-1}(\eta,\omega,\phi) &= \Bigg(\sum_{t=T^{\text{train}}+1}^{T^{\text{train}}+14} \Bigg[ \text{logit}^{-1}\Bigg(\hat{\kappa}_t^{\text{forecast}}(\eta,\omega,\phi)\Bigg) - \hat{\kappa}_t \Bigg]^2\Bigg)^{-1}.
\end{align}
Finally we compute a joint probability distribution over $\eta$, $\omega$, and $\phi$ as the normalized inverse-distance. The probability distributions for New Mexico, the US, and France are shown in Figure \ref{fig:train}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{train.png}
\caption{(Top) $\hat{\kappa}^*_t$ for the training period (circles) and testing period (triangles). Lines in the testing period are $\hat{\kappa}_t^{\text{forecast}}$. Each line corresponds to a combination of $\eta$, $\omega$, and $\phi$. The color of the line is proportional to $d^{-1}(\eta,\omega,\phi)$ with darker lines corresponding to better agreement between $\hat{\kappa}^{\text{forecast}}_t$ and $\hat{\kappa}^*_t$ in the test set. (Bottom) The normalized inverse-distance values for $\omega$ (x-axis), $\phi$ (y-axis), and $\eta$ (panels). Darker tiles correspond to larger inverse-distances.}
\label{fig:train}
\end{figure}
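The grid evaluation behind Figure \ref{fig:train} can be written compactly. A hedged sketch, reusing \texttt{kappa\_forecast} from above (the argument names and grids are placeholders, not the values used operationally):
\begin{verbatim}
import itertools
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def tuning_distribution(kappa_hat_test, kappa_trend, kappa_const_dow,
                        kappa_star_last7, eta_grid, omega_grid, phi_grid):
    # Normalized inverse-distances (Equation invdist) over the grid.
    combos, weights = [], []
    for eta, omega, phi in itertools.product(eta_grid, omega_grid,
                                             phi_grid):
        eta_star = np.median(kappa_star_last7) * eta
        fcst = kappa_forecast(kappa_trend, kappa_const_dow,
                              eta_star, omega, phi)
        d2 = np.sum((inv_logit(fcst) - kappa_hat_test) ** 2)
        combos.append((eta, omega, phi))
        weights.append(1.0 / d2)
    w = np.asarray(weights)
    return combos, w / w.sum()   # joint probability over tuning params
\end{verbatim}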
\subsubsection*{Step 8: Produce the Reported Cases Forecast}
The final step is to simulate reported cases. The purpose of the previous steps was to get a joint probability distribution over the tuning parameters from which we can sample.
If more than 14 of the last 28 days had zero reported cases, we take independent and identically distributed (iid) samples of future reported cases from the empirical distribution of outlier adjusted reported cases over the last 28 days.
If no cases were reported over the last 28 days, we sample future reported cases as iid Bernoulli draws with success probability equal to 1/29.
If 14 or more of the last 28 days observed at least 1 reported case, we simulate $\hat{\kappa}^{\text{forecast}}_{T+k}$ for $k \in 1,2,\ldots,K$ by doing the following (a Python sketch of this loop follows the list):
\begin{enumerate}
\item Do Step 5, treating the training data as days $T-27$ to $T$, resulting in a fitted linear model in the form of Equation \ref{eq:reg}. Use this to compute $\hat{\kappa}^{\text{trend}}_{T+k}$.
\item Do Step 6 to compute $\hat{\kappa}^{\text{constant+DOW}}_{T+k}$, replacing $T^{\text{train}}$ with $T$ in Equations \ref{eq:avg_cases} and \ref{eq:kappa_constant}.
\item Draw a vector of $(\omega, \phi, \eta)$ from the joint distribution computed in Step 7.
\item Compute $\hat{\kappa}_{T+k}^{\text{forecast}}$ following Equation \ref{eq:fcstform}.
\item Compute logit$^{-1}(\hat{\kappa}_{T+k}^{\text{forecast}})$.
\item Draw an attack rate $p \sim \text{Uniform}(0.4,0.7)$ and set $\delta^{\text{forecast}}_{s,0} = pN$.
\item Set $\ddot{\delta}^{\text{forecast}}_{c,T} = \ddot{y}_{c,T}$ and $\delta^{\text{forecast}}_{s,T} = \delta^{\text{forecast}}_{s,0} - \ddot{y}_{c,T}$.
\item For $k=1,2,\ldots,K$, compute
\begin{enumerate}
\item $\delta^{\text{forecast}}_{c,T+k} = \text{logit}^{-1}(\hat{\kappa}_{T+k}^{\text{forecast}})\Bigg(\frac{\delta^{\text{forecast}}_{s,T+k-1}}{\delta^{\text{forecast}}_{s,0}} \Bigg) \ddot{\delta}^{\text{forecast}}_{c,T+k-1}$
\item $\ddot{\delta}^{\text{forecast}}_{c,T+k} = \ddot{\delta}^{\text{forecast}}_{c,T+k-1} + \delta^{\text{forecast}}_{c,T+k}$
\item $\delta^{\text{forecast}}_{s,T+k} = \delta^{\text{forecast}}_{s,T+k-1} - \delta^{\text{forecast}}_{c,T+k}$
\end{enumerate}
\item Draw $y^{\text{forecast}}_{c,T+k}|\delta^{\text{forecast}}_{c,T+k},\hat{\alpha} \sim \text{NB}\Bigg(\delta^{\text{forecast}}_{c,T+k}, \frac{\delta^{\text{forecast}}_{c,T+k}}{\hat{\alpha}}\Bigg)$ where $\hat{\alpha}$ is the maximum likelihood estimate.
\end{enumerate}
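A minimal Python sketch of one simulated trajectory (steps 6--9 above) follows. The negative binomial draw is converted to numpy's $(n, p)$ form under the assumption that $\text{NB}(m, r)$ in our notation means mean $m$ and dispersion $r$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def simulate_cases(kappa_fcst, y_cum_T, N, alpha_hat):
    # kappa_fcst: logit-scale forecasts for k = 1..K (step 4 above)
    p = rng.uniform(0.4, 0.7)        # attack rate (step 6)
    s0 = p * N                       # delta_{s,0}
    cum = y_cum_T                    # cumulative cases, ddot(delta)_{c,T}
    s = s0 - y_cum_T                 # susceptibles, delta_{s,T} (step 7)
    draws = []
    for kappa in kappa_fcst:         # steps 8(a)-(c)
        growth = 1.0 / (1.0 + np.exp(-kappa))      # logit^{-1}
        new = growth * (s / s0) * cum              # delta_{c,T+k}
        cum += new
        s -= new
        r = new / alpha_hat                        # step 9: NB draw
        draws.append(rng.negative_binomial(r, r / (r + new)))
    return np.array(draws)
\end{verbatim}
Repeating this across many draws of $(\omega,\phi,\eta)$ yields the prediction intervals shown in Figure \ref{fig:cases_fcsts}.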
Figure \ref{fig:cases_fcsts} shows the forecasts for New Mexico, the US, and France.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{cases_fcsts.png}
\caption{The median (black line) and 50\% and 80\% prediction intervals (ribbons) for New Mexico, the US, and France for daily reported cases.}
\label{fig:cases_fcsts}
\end{figure}
\newpage
\subsection*{Deaths Model}
Figure \ref{fig:deaths} shows the daily deaths for New Mexico, the US, and France.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{deaths.png}
\caption{Daily reported deaths for New Mexico, the US, and France.}
\label{fig:deaths}
\end{figure}
The COFFEE deaths model is
\begin{align}
\label{eq:gamma}
\delta_{d,t} &= \gamma_t f(\boldsymbol{\delta}_{c,1:t},\nu)
\end{align}
\noindent where $\gamma_t$ is the case fatality ratio and $f(\boldsymbol{\delta}_{c,1:t},\nu)$ is a moving average of $\boldsymbol{\delta}_{c,1:t}$ with window size equal to $\nu$:
\begin{align}
f(\boldsymbol{\delta}_{c,1:t},\nu) &= \frac{1}{\nu}\sum_{j=t-\nu+1}^t \delta_{c,j}.
\end{align}
\noindent The deaths model proceeds with the following steps.
\subsubsection*{Step 1: Identify and Adjust Outliers}
COFFEE uses the same outlier identification and adjustment routine as with cases, resulting in outlier adjusted deaths which are used for all subsequent forecasting steps. The outlier adjusted deaths are shown in Figure \ref{fig:deaths_outliers}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{outlier_removed_deaths.png}
\caption{The originally reported daily deaths of COVID-19 (top) and the outlier adjusted daily deaths (bottom). (Top) Purple points were identified as outliers. (Bottom) Purple points are the adjusted outliers.}
\label{fig:deaths_outliers}
\end{figure}
\subsubsection*{Step 2: Compute the Case Fatality Ratio, $\hat{\gamma}_t$}
We estimate $\gamma_t$ by rearranging Equation \ref{eq:gamma} and replacing $\delta_{d,t}$ with $y_{d,t}$ and $\boldsymbol{\delta}_{c,1:t}$ with $\boldsymbol{y}_{c,1:t}$ for $t \leq T$:
\begin{align}
\hat{\gamma}_t &= y_{d,t}/f(\boldsymbol{y}_{c,1:t},\nu).
\end{align}
\noindent Figure \ref{fig:gamma_hat} displays $\hat{\gamma}_t$ for $\nu \in \{7, 14, 21, 28, 35\}$.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{gamma_hat.png}
\caption{The values $\hat{\gamma}_t$ for New Mexico, the US, and France (rows) for different moving average window sizes of $\nu$ (columns).}
\label{fig:gamma_hat}
\end{figure}
\subsubsection*{Step 3: Compute $\hat{\gamma}^*_t = \text{logit}(\hat{\gamma}_t)$}
COFFEE logit transforms $\hat{\gamma}_t$, setting all values of $\hat{\gamma}_t < \tau_d$ equal to $\tau_d$ and all values of $\hat{\gamma}_t > 1-\tau_d$ equal to $1-\tau_d$, where $\tau_d = 0.95 \min(\{\hat{\gamma}_t \mid \hat{\gamma}_t > 0\})$. The logit transformed $\hat{\gamma}_t$ are shown in Figure \ref{fig:logit_gamma_hat}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{logit_gamma_hat.png}
\caption{The values of $\hat{\gamma}^*_t$ for New Mexico, the US, and France (rows) for different values of $\nu$ (columns).}
\label{fig:logit_gamma_hat}
\end{figure}
\subsubsection*{Step 4: Split Data into Training and Testing Sets}
Split $\hat{\gamma}^*_t$ into training and testing data sets, as with the cases model.
\subsubsection*{Step 5: Compute $\hat{\gamma}_t^{\text{trend}}$}
Fit a regression model with a linear date term and a DOW effect to $\hat{\gamma}^*_t$, analogous to Equation \ref{eq:reg}. Variable selection is then performed. The fitted regression and predictions ($\hat{\gamma}^{\text{trend}}_t$) are shown in Figure \ref{fig:gamma_trend}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{gamma_trend.png}
\caption{The quantities $\hat{\gamma}^*_t$ for New Mexico, the US, and France. Circles are training data, left of the vertical dashed line, while testing data are the triangles to the right of the dashed vertical line. Solid line represents the fits (train) and predictions (test) of $\hat{\gamma}_t^{\text{trend}}$ based on the regression. The US and France have a DOW effect, while New Mexico had the DOW effect removed in the variable selection phase.}
\label{fig:gamma_trend}
\end{figure}
\newpage
\subsubsection*{Step 6: Compute a Joint Probability Distribution over Tuning Parameters}
The form of the forecasting model for $\hat{\gamma}_t^{\text{forecast}}$ is
\begin{align}
\label{eq:fcstform_gamma}
\hat{\gamma}_t^{\text{forecast}}(\nu,\theta_{\text{lower}},\theta_{\text{upper}}) &=
\begin{cases}
\theta_{\text{lower}} & \text{ if $\hat{\gamma}^{\text{trend}}_t < \theta_{\text{lower}}$}\\
\theta_{\text{upper}} & \text{ if $\hat{\gamma}^{\text{trend}}_t > \theta_{\text{upper}}$}\\
\hat{\gamma}^{\text{trend}}_t & \text{ otherwise}
\end{cases}
\end{align}
\noindent for $t = T^{\text{train}} + k$ and $k \in 1,2,\ldots, 14$. The parameters $\theta_{\text{lower}}$ and $\theta_{\text{upper}}$ act as a floor and a ceiling to $\hat{\gamma}^{\text{forecast}}_t$, keeping it from getting too large or too small. We evaluate $\hat{\gamma}^{\text{forecast}}_t$ on a grid over $\nu$, $\theta_{\text{lower}}$, $\theta_{\text{upper}}$ and compute the joint distribution as proportional to the inverse-distance between $\text{logit}^{-1}(\hat{\gamma}^{\text{forecast}}_t)$ and $\hat{\gamma}_t$, similar to Equation \ref{eq:invdist}. The estimated joint probability distribution over tuning parameters is shown in Figure \ref{fig:deaths_joint_prob}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{deaths_tuning_params.png}
\caption{Joint probability distributions over tuning parameters $\nu$ (columns), $\theta_{\text{upper}}$, and $\theta_{\text{lower}}$ for New Mexico, the US, and France (rows). }
\label{fig:deaths_joint_prob}
\end{figure}
\subsubsection*{Step 7: Produce the Reported Deaths Forecast}
The final step is to simulate reported deaths. The purpose of the previous steps was to get a joint probability distribution over the tuning parameters from which we can sample.
If more than 14 of the last 28 days had zero reported deaths, we take iid samples of future reported deaths from the empirical distribution of outlier adjusted reported deaths over the last 28 days.
If no deaths were reported over the last 28 days, we sample future reported deaths as iid Bernoulli draws with success probability equal to 1/29.
If 14 or more of the last 28 days observed at least 1 reported death, we simulate $\hat{\gamma}^{\text{forecast}}_{T+k}$ for $k \in 1,2,\ldots,K$ by doing the following (a sketch follows the list):
\begin{enumerate}
\item Fit the regression outlined in Step 5 to days $T-27$ to $T$. Use this to compute $\hat{\gamma}^{\text{trend}}_{T+k}$.
\item Draw a vector of $(\nu, \theta_{\text{lower}}, \theta_{\text{upper}})$ from the joint distribution computed in Step 6.
\item Compute $\hat{\gamma}_{T+k}^{\text{forecast}}$ following Equation \ref{eq:fcstform_gamma}.
\item Compute logit$^{-1}(\hat{\gamma}_{T+k}^{\text{forecast}})$.
\item For $k=1,2,\ldots,K$, compute
\begin{enumerate}
\item $y^{\text{forecast}}_{d,T+k} = \text{logit}^{-1}(\hat{\gamma}^{\text{forecast}}_{T+k}) f(\boldsymbol{\delta}^{\text{forecast}}_{c,1:t},\nu)$, where $\delta^{\text{forecast}}_{c,t} = y_{c,t}$ for observed days $t \leq T$ and $\delta^{\text{forecast}}_{c,t} = y^{\text{forecast}}_{c,t}$ for forecast days $t > T$.
\end{enumerate}
\end{enumerate}
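A sketch of this loop (our illustration; \texttt{cases\_hist} holds the outlier adjusted daily cases through day $T$ and \texttt{cases\_fcst} one simulated trajectory from the cases model):
\begin{verbatim}
import numpy as np

def simulate_deaths(gamma_fcst, cases_hist, cases_fcst, nu):
    # gamma_fcst: logit-scale CFR forecasts for k = 1..K
    series = np.concatenate([cases_hist, cases_fcst])
    T, K = len(cases_hist), len(cases_fcst)
    deaths = np.empty(K)
    for k in range(1, K + 1):
        window = series[T + k - nu : T + k]   # days T+k-nu+1 .. T+k
        cfr = 1.0 / (1.0 + np.exp(-gamma_fcst[k - 1]))
        deaths[k - 1] = cfr * window.mean()   # step 5(a)
    return deaths
\end{verbatim}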
Figure \ref{fig:deaths_fcsts} shows the daily deaths forecasts for New Mexico, the US, and France.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{deaths_fcsts.png}
\caption{The median (black line) and 50\% and 80\% prediction intervals (ribbons) for New Mexico, the US, and France for daily reported deaths.}
\label{fig:deaths_fcsts}
\end{figure}
\newpage
\bibliographystyle{plain}
\section{Introduction}
It is a big theoretical challenge in deep learning studies to understand why networks trained via stochastic gradient descent (SGD) and its variants generalize so well in the overparameterized regime, in which the number of network parameters greatly exceeds that of the training data samples~\citep{Zhang2017}.
This fundamental problem has been tackled from different points of view~\citep{Dziugaite2017, Nagarajan2017, Neyshabur2017, Neyshabur2019, Arora2018, Perez2019, Jacot2018, Arora2019, dAscoli2020}.
Among them, some recent studies have pointed out the importance of an implicit regularization effect of SGD~\citep{Zhu2019, Wu2019, Smith2020}.
Indeed, it is empirically known that the SGD noise strength is strongly correlated with generalization of the trained network~\citep{Li2017, Jastrzebski2017, Goyal2017, Smith-Le2018, Hoffer2017, Hoffer2019}.
It has also been argued that the SGD noise prefers wide flat minima, which are considered to indicate good generalization~\citep{Keskar2017, Hoffer2017, Wu2018}.
From this viewpoint, not only its strength, but also the structure of the SGD noise is considered to be important since it is theoretically shown that the network can efficiently escape from bad local minima with the help of the SGD noise but not of an isotropic Gaussian noise with the same strength~\citep{Zhu2019, Wu2019}.
The covariance of the SGD noise is proportional to $\eta^2/B$, where $\eta$ and $B$ denote the learning rate and the minibatch size, respectively, and hence, the SGD noise strength can be controlled by changing $\eta$ and/or $B$.
To realize good generalization, we want to increase the SGD noise strength by increasing $\eta$ and/or decreasing $B$.
However, when $\eta$ becomes too large, the training dynamics often becomes unstable and the training fails.
On the other hand, decreasing $B$ prevents an efficient parallelization using multiple GPUs or TPUs\footnote{However, it is not at all trivial whether the large-batch training is really efficient even with an ideal parallelization. See \citet{Golmant2018, Hoffer2019} for scalability of large-batch training.}.
It is therefore desirable to control the SGD noise without changing these hyperparameters.
The main contribution of the present paper is to show that the SGD noise can be controlled without changing $\eta$ and $B$ by a simple yet efficient method that we call \textit{noise enhancement}.
In this method, the gradient of the loss function is evaluated by using two independent minibatches.
We will explain our theoretical idea in Sec.~\ref{sec:NE}.
We will also demonstrate that the noise enhancement improves generalization in Sec.~\ref{sec:experiment}.
In particular, it is empirically shown that the large-batch training using the noise enhancement even outperforms the small-batch training.
This result gives us some insights into the relation between the SGD noise and generalization, which is discussed in Sec.~\ref{sec:discussion}.
Because of its simplicity in implementation, this method would also be useful in practice.
\section{Noise enhancement}
\label{sec:NE}
We shall consider a classification problem.
The training dataset $\mathcal{D}=\{(x^{(\mu)},y^{(\mu)})\}_{\mu=1,2,\dots,N}$ consists of pairs of the input data vector $x^{(\mu)}$ and its label $y^{(\mu)}$.
The set of all the network parameters is simply denoted by $w$.
Then the output of the network for a given input $x$ is denoted by $f(x;w)$.
The loss function is defined as
\begin{equation}
L(w)=\frac{1}{N}\sum_{\mu=1}^N\ell\left(f(x^{(\mu)};w),y^{(\mu)}\right)\equiv\frac{1}{N}\sum_{\mu=1}^N\ell_\mu(w),
\end{equation}
where the function $\ell(\cdot,\cdot)$ specifies the loss (in this paper we employ the cross-entropy loss).
In the SGD, the training data is divided into minibatches of size $B$, and the parameter update is done by using one of them.
Let $\mathcal{B}_t\subset\{1,2,\dots,N\}$ with $|\mathcal{B}_t|=B$ be a random minibatch chosen at the $t$-th step; the network parameter $w_t$ is then updated as
\begin{equation}
w_{t+1}=w_t-\eta\nabla_wL_{\mathcal{B}_t}(w_t), \quad L_{\mathcal{B}_t}(w)=\frac{1}{B}\sum_{\mu\in\mathcal{B}_t}\ell_\mu(w)
\end{equation}
in vanilla SGD, where $\eta>0$ is the learning rate.
It is also expressed as
\begin{equation}
w_{t+1}=w_t-\eta\nabla_wL(w_t)-\eta\left[\nabla_wL_{\mathcal{B}_t}(w_t)-\nabla_wL(w_t)\right]
\equiv w_t-\eta\nabla_wL(w_t)-\xi_t(w_t).
\label{eq:SGD}
\end{equation}
Here, $\xi_t$ corresponds to the SGD noise since its average over samplings of random minibatches is zero: $\mathbb{E}_{\mathcal{B}_t}[\xi_t]=0$.
Its covariance is also calculated straightforwardly~\citep{Zhu2019}:
\begin{align}
\mathbb{E}_{\mathcal{B}_t}\left[\xi_t\xi_t^\mathrm{T}\right]&=\frac{\eta^2}{B}\frac{N-B}{N-1}\left(\frac{1}{N}\sum_{\mu=1}^N\nabla_w\ell_\mu\nabla_w\ell_\mu^\mathrm{T}-\nabla_wL\nabla_wL^\mathrm{T}\right)
\nonumber \\
&\approx\frac{\eta^2}{B}\left(\frac{1}{N}\sum_{\mu=1}^N\nabla_w\ell_\mu\nabla_w\ell_\mu^\mathrm{T}-\nabla_wL\nabla_wL^\mathrm{T}\right),
\label{eq:noise_var}
\end{align}
where we assume $N\gg B$ in obtaining the last expression.
This expression\footnote{From Eq.~(\ref{eq:noise_var}), some authors~\citep{Krizhevsky2014, Hoffer2017} argue that the SGD noise strength is proportional to $\eta/\sqrt{B}$, while others~\citep{Li2017, Jastrzebski2017, Smith2018} argue that it is rather proportional to $\sqrt{\eta/B}$ on the basis of the stochastic differential equation obtained for an infinitesimal $\eta\to +0$.
Thus the learning-rate dependence of the noise strength is rather complicated.}
shows that the SGD noise strength is controlled by $\eta$ and $B$.
We want to enhance the SGD noise without changing $\eta$ and $B$.
Naively, it is possible just by replacing $\xi_t$ by $\alpha\xi_t$ with a new parameter $\alpha>1$.
Equation~(\ref{eq:SGD}) is then written as
\begin{align}
w_{t+1}&=w_t-\eta\nabla_wL(w_t)-\alpha\xi_t(w_t)
\nonumber \\
&=w_t-\eta\left[\alpha\nabla_wL_{\mathcal{B}_t}(w_t)+(1-\alpha)\nabla_wL(w_t)\right].
\label{eq:naive_NE}
\end{align}
Practically, Eq.~(\ref{eq:naive_NE}) would be useless because the computation of $\nabla_wL(w_t)$, i.e. the gradient of the loss function over the entire training data, is required for each iteration\footnote{If we have computational resources large enough to realize ideal parallelization for full training dataset, this naive noise enhancement would work. However, with limited computational resources, it is not desirable that we have to evaluate $\nabla_wL(w_t)$ for each iteration.}.
Instead, we propose replacing $\nabla_wL(w_t)$ in Eq.~(\ref{eq:naive_NE}) by $\nabla_wL_{\mathcal{B}_t'}(w_t)$, where $\mathcal{B}_t'$ is another minibatch of the same size $B$ that is independent of $\mathcal{B}_t$.
We thus obtain the following update rule of the \textit{noise-enhanced SGD}:
\begin{equation}
w_{t+1}=w_t-\eta\left[\alpha\nabla_wL_{\mathcal{B}_t}(w_t)+(1-\alpha)\nabla_wL_{\mathcal{B}_t'}(w_t)\right].
\label{eq:NE}
\end{equation}
By defining the SGD noise $\xi_t'$ associated with $\mathcal{B}_t'$ as
\begin{equation}
\xi_t'(w_t)=\eta\left[\nabla_wL_{\mathcal{B}_t'}(w_t)-\nabla_wL(w_t)\right],
\end{equation}
Eq.~(\ref{eq:NE}) is rewritten as
\begin{equation}
w_{t+1}=w_t-\eta\nabla_wL(w_t)-\xi_t^\mathrm{NE}(w_t),
\end{equation}
where the noise $\xi_t^\mathrm{NE}$ in the noise-enhanced SGD is given by
\begin{equation}
\xi_t^\mathrm{NE}=\alpha\xi_t+(1-\alpha)\xi_t'.
\end{equation}
Its mean is obviously zero, i.e. $\mathbb{E}_{\mathcal{B}_t,\mathcal{B}_t'}[\xi_t^\mathrm{NE}]=0$, and its covariance is given by
\begin{align}
\mathbb{E}_{\mathcal{B}_t,\mathcal{B}_t'}\left[\xi_t^\mathrm{NE}\left(\xi_t^\mathrm{NE}\right)^\mathrm{T}\right]
&=\alpha^2\mathbb{E}_{\mathcal{B}_t}\left[\xi_t\xi_t^\mathrm{T}\right]+(1-\alpha)^2\mathbb{E}_{\mathcal{B}_t'}\left[\xi_t'(\xi_t')^\mathrm{T}\right]
\nonumber \\
&=\left[\alpha^2+(1-\alpha)^2\right]\mathbb{E}_{\mathcal{B}_t}\left[\xi_t\xi_t^\mathrm{T}\right],
\label{eq:NE_var}
\end{align}
where we have used the fact that two noises $\xi_t$ and $\xi_t'$ are i.i.d. random variables.
In this way, the SGD-noise covariance is enhanced by a factor of $\alpha^2+(1-\alpha)^2>1$ for $\alpha>1$.
Since the size of the new minibatch $\mathcal{B}_t'$ is same as that of the original minibatch $\mathcal{B}_t$, the noise enhancement does not suffer from any serious computational cost.
If we assume $N\gg B$, Eq.~(\ref{eq:NE_var}) is equivalent to Eq.~(\ref{eq:noise_var}) with an effective minibatch size
\begin{equation}
B_\mathrm{eff}=\frac{B}{\alpha^2+(1-\alpha)^2}.
\label{eq:B_eff}
\end{equation}
If the SGD noise were Gaussian, it would mean that the noise-enhanced SGD is equivalent to the normal SGD with the effective minibatch size $B_\mathrm{eff}$.
However, the SGD noise is actually far from Gaussian during training~\citep{Panigrahi2019}, at least for not too large minibatch size.
The noise enhancement is therefore not equivalent to reducing the minibatch size unless $B_\mathrm{eff}$ is too large.
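The enhancement factor in Eq.~(\ref{eq:NE_var}) holds for any pair of i.i.d.\ zero-mean noises and is easy to check numerically. A minimal sketch (Gaussian stand-ins are used only for the check; the factor itself does not depend on the noise distribution):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0
xi1 = rng.standard_normal((100000, 3))  # stand-in for xi_t
xi2 = rng.standard_normal((100000, 3))  # independent copy, xi_t'
xi_ne = alpha * xi1 + (1.0 - alpha) * xi2
# ratio of variances ~ alpha^2 + (1 - alpha)^2 = 5.0 for alpha = 2
print(xi_ne.var(axis=0) / xi1.var(axis=0))
\end{verbatim}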
The procedure of the noise enhancement is summarized as follows: (i) prepare two independent minibatches $\mathcal{B}_t$ and $\mathcal{B}_t'$, and (ii) replace the minibatch gradient $\nabla_wL_{\mathcal{B}_t}(w_t)$ by $\alpha\nabla_wL_{\mathcal{B}_t}(w_t)+(1-\alpha)\nabla_wL_{\mathcal{B}_t'}(w_t)$.
The numerical implementation is quite simple.
It should be noted that the noise enhancement is also applicable to other variants of SGD like Adam.
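For concreteness, here is a minimal PyTorch-style sketch of one noise-enhanced update (our illustration of Eq.~(\ref{eq:NE}); by linearity of the gradient, backpropagating the blended loss yields exactly $\alpha\nabla_wL_{\mathcal{B}_t}+(1-\alpha)\nabla_wL_{\mathcal{B}_t'}$):
\begin{verbatim}
import torch

def ne_step(model, loss_fn, batch_a, batch_b, optimizer, alpha):
    # batch_a, batch_b: two independently sampled minibatches
    xa, ya = batch_a
    xb, yb = batch_b
    optimizer.zero_grad()
    loss = (alpha * loss_fn(model(xa), ya)
            + (1.0 - alpha) * loss_fn(model(xb), yb))
    loss.backward()   # gradient = alpha*g_B + (1-alpha)*g_B'
    optimizer.step()  # any optimizer: SGD, Adam, ...
    return loss.item()
\end{verbatim}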
\section{Experiment}
\label{sec:experiment}
\begin{table}[t]
\caption{Network configurations.}
\vspace{11pt}
\centering
\begin{tabular}{lllll}
Name & Network type & Dataset & $L^*$ & $L^{**}$ \\
\hline
F1 & Fully connected & Fashion-MNIST & 0.01 & 0.001 \\
C1 & Convolutional & Cifar-10 & 0.01 & 0.001 \\
C2 & Convolutional & Cifar-100 & 0.02 & 0.001\\
\hline
\end{tabular}
\label{t:config}
\end{table}
We shall demonstrate the efficiency of the method of the noise enhancement (NE) for several network configurations with real datasets as listed in Table~\ref{t:config}.
We describe the details of the network architecture below:
\begin{itemize}
\item\textbf{F1}: A fully-connected feed-forward network with 7 hidden layers, each of which has 500 neurons with the ReLU activation. The output layer consists of 10 neurons with the softmax activation.
\item\textbf{C1}: A modified version of the VGG configuration~\citep{Simonyan2014}.
Following \citet{Keskar2017}, let us denote a stack of $n$ convolutional layers of $a$ filters and a kernel size of $b\times c$ with the stride length of $d$ by $n\times[a,b,c,d]$.
The C1 network uses the configuration: $3\times[64,3,3,1]$, $3\times[128,3,3,1]$, $3\times[256,3,3,1]$, where a MaxPool(2) is applied after each stack.
To all layers, the ghost-batch normalization of size 100 and the ReLU activation are applied.
Finally, an output layer consists of 10 neurons with the softmax activation.
\item\textbf{C2}: It is similar to but larger than C1. The C2 network uses the configuration: $3\times[64,3,3,1]$, $3\times[128,3,3,1]$, $3\times[256,3,3,1]$, $2\times[512,3,3,1]$, where a MaxPool(2) is applied after each stack.
To all layers above, the ghost-batch normalization of size 100 and the ReLU activation are applied.
The last stack above is followed by a 1024-dimensional dense layer with the ReLU activation, and finally, an output layer consists of 10 neurons with the softmax activation.
\end{itemize}
For all experiments, we used the cross-entropy loss and the Adam optimizer with the default hyperparameters.
Neither data augmentation nor weight decay is applied in our experiment.
To aid the convergence, we halve the learning rate when the training loss reaches the value $L^*$.
Training finishes when the training loss becomes smaller than the value $L^{**}$.
Our choices of $L^*$ and $L^{**}$ are also described in Table~\ref{t:config}.
The convergence time is defined as the number of iteration steps until the training finishes.
Training is repeated 10 times starting from different random initializations (the Glorot initialization is used), and we measure the mean test accuracy and the mean convergence time as well as their standard deviations.
\subsection{Effect of the noise enhancement}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\linewidth]{data_cifar10_batch_dep.eps}&
\includegraphics[width=0.5\linewidth]{data_cifar10_batch_dep_conv_time.eps}
\end{tabular}
\caption{Minibatch-size dependence of the test accuracy (left) and the convergence time (right) for each fixed value of $\alpha$ in C1.}
\label{fig:batch_dep}
\end{figure}
\begin{table}[t]
\caption{Best test accuracy for each value of $\alpha$.}
\vspace{11pt}
\centering
\begin{tabular}{l|llll}
Name & $\alpha$ & $B_\mathrm{opt}$ & test accuracy (\%) & convergence time \\
\hline\hline
C1
&1 & 100 & $88.05\pm 0.18$ & $24500\pm 2775$ \\
&1.5 & 200 & $88.69\pm 0.11$ & $20825\pm 2113$ \\
&2.0 & 300 & $\bm{88.77\pm 0.30}$ & $15932\pm 1870$ \\
&2.5 & 500 & $88.66\pm 0.22$ & $\bm{10040\pm 1153}$ \\
\hline
F1
&1 & 900 & $90.17\pm 0.14$ & $10934\pm 816$ \\
&1.5 & 2000 & $\bm{90.39\pm 0.17}$ & $\bm{7914\pm 528}$ \\
\hline
C2
&1 & 600 & $61.40\pm 0.54$ & $5292\pm 935$ \\
& 1.5 & 1000 & $\bm{61.75\pm 0.48}$ & $\bm{5175\pm 748}$ \\
\hline
\end{tabular}
\label{t:batch_dep}
\end{table}
First we demonstrate how the noise enhancement affects the generalization and the convergence time for C1 (similar results are obtained for F1 and C2 as we show later).
For each fixed value of $\alpha=1, 1.5, 2.0, 2.5$ ($\alpha=1$ means no NE applied) we calculated the mean test accuracy and the mean convergence time for varying minibatch sizes $B$.
The result is presented in Fig.~\ref{fig:batch_dep}.
We can see that the NE improves generalization for not too large $\alpha$.
It is also observed that the generalization gap between small-batch training and large-batch training diminishes by increasing $\alpha$.
The NE with large $\alpha$ is therefore efficient for large-batch training.
On the other hand, the convergence time increases with $\alpha$ for a fixed $B$.
For each fixed $\alpha$, there is an optimal minibatch size $B_\mathrm{opt}$, which increases with $\alpha$.
In Table~\ref{t:batch_dep}, we list $B_\mathrm{opt}\in\{100,200,300,400,500,600,700,800,900,1000,2000,3000,5000\}$ as well as the test accuracy and the convergence time at $B=B_\mathrm{opt}$.
We see that the test accuracy at $B_\mathrm{opt}$ is improved by the NE.
Moreover, the NE shortens the convergence time at $B_\mathrm{opt}$ without hurting generalization performance\footnote{The NE for a fixed $B$ increases the convergence time, but $B_\mathrm{opt}$ also increases, which decreases the convergence time.}.
This experimental observation shows practical efficiency of the method of the NE.
Although we have focused on C1, other configurations F1 and C2 also show similar results.
For F1 and C2, we compare the result for $\alpha=1$ with that for $\alpha=1.5$.
In Fig.~\ref{fig:F1C2}, the minibatch-size dependences of the test accuracy and the convergence time are shown for F1 and C2.
In Table~\ref{t:batch_dep}, we also show the test accuracy and the convergence time at $B=B_\mathrm{opt}$ for each $\alpha$ in F1 and C2.
These results are qualitatively same as those in C1 (Fig.~\ref{fig:batch_dep} and Table~\ref{t:batch_dep}).
\begin{figure}[t]
\centering
\begin{tabular}{cc}
(a) test accuracy for F1 & (b) convergence time for F1 \\
\includegraphics[width=0.5\linewidth]{data_fashionMNIST_batch_dep.eps}&
\includegraphics[width=0.5\linewidth]{data_fashionMNIST_batch_dep_conv_time.eps}\\
(c) test accuracy for C2 & (d) convergence time for C2 \\
\includegraphics[width=0.5\linewidth]{data_cifar100_batch_dep.eps}&
\includegraphics[width=0.5\linewidth]{data_cifar100_batch_dep_conv_time.eps}
\end{tabular}
\caption{Minibatch-size dependence of the test accuracy and the convergence time for each fixed value of $\alpha$ in F1 and C2.}
\label{fig:F1C2}
\end{figure}
\subsection{Comparison between the noise enhancement and reducing the minibatch size}
It is pointed out that reducing the minibatch size $B$ with $\alpha=1$ has a similar effect as the NE with a fixed $B$; it results in better generalization but a longer convergence time\footnote{As was already mentioned, under the Gaussian noise approximation, increasing $\alpha$ is indeed equivalent to reducing $B$ to $B_\mathrm{eff}$ given by Eq.~(\ref{eq:B_eff}).}.
We shall compare the large-batch training with the NE to the small-batch training without the NE.
First we calculate the test accuracy and the convergence time for varying $B$ and a fixed $\alpha=1$ (no NE).
We then calculate the test accuracy for varying $\alpha> 1$ and a fixed $B=5000$, which corresponds to large minibatch training.
In other words, we compare the effect of the NE with that of reducing $B$.
The comparison between reducing $B$ with $\alpha=1$ and increasing $\alpha$ with $B=5000$ is given in Fig.~\ref{fig:data}.
We see that both give similar curves: the convergence time increases and the test accuracy shows a peak.
However, in every case of F1, C1, and C2, the NE (increasing $\alpha$) results in better accuracy compared with reducing $B$ if $\alpha$ is properly chosen.
In Table~\ref{t:best}, we compare the best test accuracies between varying $B$ with $\alpha=1$ (without the NE) and increasing $\alpha$ with $B=5000$ (with the NE).
In all cases, the large-batch training with the NE outperforms the small-batch training without the NE.
\begin{figure}[t]
\centering
\begin{tabular}{ccc}
\hspace{-2cm}
\includegraphics[width=0.5\linewidth]{data_fashionMNIST.eps}&
\hspace{-3cm}
\includegraphics[width=0.5\linewidth]{data_cifar10.eps}&
\hspace{-3cm}
\includegraphics[width=0.5\linewidth]{data_cifar100.eps}
\end{tabular}
\caption{Comparison between the effects of reducing the minibatch size $B$ with $\alpha=1$ and of increasing $\alpha$ with $B=5000$.
The longitudinal axis and the horizontal axis represent the test accuracy and the convergence time, respectively.
Circle data points (reducing $B$ with $\alpha=1$) correspond to $B=5000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100$ from left to right.
Triangle data points (increasing $\alpha$ with $B=5000$) correspond to $\alpha=1,2,\dots, 11$ for F1 and C1, and $\alpha=1,2,\dots,7$ for C2, from left to right.}
\label{fig:data}
\end{figure}
\begin{table}[t]
\caption{Comparison of best test accuracies for varying $B$ with $\alpha=1$ (without the noise enhancement) and for varying $\alpha$ with $B=5000$ (with the noise enhancement).
The range of varying $B$ and $\alpha$ is the same as in Fig.~\ref{fig:data}.}
\vspace{11pt}
\centering
\begin{tabular}{lllll}
\hline
Name& & $B$ & $\alpha$ & Best test accuracy (\%) \\
\hline\hline
F1
&without NE & 900 & 1 & $90.17\pm 0.14$ \\
&with NE & 5000 & 3 & $\bm{90.35\pm 0.05}$\\
\hline
C1
&without NE & 100 & 1 & $88.05\pm 0.18$ \\
&with NE & 5000 & 10 & $\bm{88.26\pm 0.23}$ \\
\hline
C2
&without NE & 600 & 1 & $61.40\pm 0.54$ \\
&with NE & 5000 & 5 & $\bm{61.53\pm 0.35}$ \\
\hline
\end{tabular}
\label{t:best}
\end{table}
\section{Discussion}
\label{sec:discussion}
We have shown that the method of the NE for gradient-based optimization algorithms improves generalization.
In particular, large-batch training with the NE even outperforms small-batch training without the NE, which clearly shows that the NE is not equivalent to reducing the minibatch size $B$.
In this section, we shall discuss two fundamental questions raised here:
\begin{description}
\item[(i)] \textit{Why does a stronger SGD noise result in a better generalization?}
\item[(ii)] \textit{How is the inequivalence between the NE and reducing $B$ theoretically understood?}
\end{description}
We first consider (i).
When the SGD noise strength is inhomogeneous in the parameter space, network parameters will likely evolve toward a minimum of the loss landscape with a weaker SGD noise\footnote{In physics, similar phenomena are known; Brownian particles in a medium with inhomogeneous temperature tend to gather in a colder region (Soret effect)~\citep{Duhr2006,Sancho2015}.}.
That is, if the SGD noise is strong enough near a minimum, the network parameters will easily escape from it with the help of the SGD noise.
As a result, only minima around which the SGD noise is weak enough survive.
Since the covariance of the SGD noise is given by Eq.~(\ref{eq:noise_var}), or Eq.~(\ref{eq:NE_var}) for the NE, the strong SGD noise is considered to have an implicit regularization effect toward minima with a small variance of $\{\nabla_w\ell_\mu\}$.
Some previous studies have introduced various measures which express an implicit regularization effect of SGD~\citep{Keskar2017, Yin2018, Wu2018}.
Among them, the ``gradient diversity'' introduced by \citet{Yin2018} is closely related to the above argument.
A small variance of the sample-dependent gradients $\{\nabla_w\ell_\mu\}$ around a minimum of the loss function implies that the loss landscape $L_\mathcal{B}(w)$ for a minibatch $\mathcal{B}$ does not largely depend on $\mathcal{B}$.
Such a minimum would contain information on common features among training data samples, which would be relevant for a given classification, but not contain information on sample-specific features which lead to overfitting.
This is our intuitive picture that explains why the strong SGD noise results in good generalization performance.
The above consideration is solely based on Eq.~(\ref{eq:noise_var}), i.e., the covariance structure of the SGD noise, and the effect of non-Gaussian noise has been ignored.
However, when the SGD noise is strengthened by reducing $B$, the SGD noise deviates from Gaussian and the above argument should be somehow modified.
As we have already mentioned, the inequivalence between the NE and reducing $B$ results from the non-Gaussian nature of the SGD noise, which is therefore a key ingredient to answer the question (ii).
The method of the NE can increase the noise strength without changing $B$, and hence it is considered to suppress the non-Gaussianity compared with the case of just reducing $B$.
The experimental result presented in Sec.~\ref{sec:experiment} then indicates that the non-Gaussian nature of the SGD noise has a negative impact on generalization.
A possible interpretation is that sample-specific features show up and are overestimated, which results in overfitting, when the central limit theorem is strongly violated\footnote{At a certain stage of training, some training data samples have been confidently classified correctly but others have not. This fact suggests that the distributions of $\ell_\mu(w)$ and $\nabla_w\ell_\mu(w)$ have a long tail and that the variance of $\{\nabla_w\ell_\mu\}$ is not small enough to justify the central limit theorem unless $B$ is sufficiently large. Indeed, \citet{Panigrahi2019} have demonstrated that the SGD noise looks Gaussian only in an early stage of training for a not too large $B$.}.
However, the relation between the non-Gaussianity of the SGD noise and generalization remains unclear~\citep{Wu2019}, and it would be an important future problem to make this point clear.
In this way, we now have intuitive arguments which might be relevant to answer the questions (i) and (ii), but theoretical solid explanations are still lacking.
Our results will not only be useful in practice, but also give theoretical insights into those fundamental questions, which merit further study.
\section*{Introduction}
Let $(M, \eta, g)$ be a Sasakian manifold of dimension $2n+1$. A minimal Legendrian submanifold is an $n$-dimensional submanifold $i: L \rightarrow M$ on which the contact form vanishes, $i^*\eta = 0$, and which is minimal in the sense of Riemannian geometry with respect to the metric induced from $g$.
In the case where the minimal Legendrian $L$ is embedded in the standard Sasakian round $(2n+1)$-sphere, L\^e and Wang \cite{lewang} constructed a family of functions on $L$ which are eigenfunctions of the Laplacian on $L$ of the induced metric. They also give a lower bound for the dimension of the corresponding eigenspace and show that if it is attained then the submanifold is totally geodesic. Conversely they prove that a minimal submanifold of the standard sphere admitting that certain family of functions as Laplacian eigenfunctions is necessarily Legendrian.
Although their techniques make a heavy use of the particular situation, namely the theory of minimal immersion in spheres and the presence of an ambient Euclidean space, we prove that some of their ideas can be generalized to any Sasaki-Einstein manifold.
Let $L$ be a minimal Legendrian submanifold of a Sasaki-Einstein $M$. The aim of this paper is to prove that two families of functions on $L$, one of which is constructed in terms of the contact moment map of the action of the Sasaki automorphism group, are eigenfunctions of the Laplacian of $L$, and to give a lower bound for the dimension of the eigenspace.
\begin{thmintro}
Let $L^n$ be a minimal Legendrian submanifold of an $\eta$-Sasaki-Einstein manifold $(M^{2n+1}, \eta, \xi, g, \Phi)$ with algebra of infinitesimal Sasaki automorphisms $\mathfrak g$. Then, for each $X \in \mathfrak g$, the function
\[
\eta(X) - \frac 1 {\vol(L)} \int_L \eta(X)dv,
\]
where $dv$ is the volume form of $L$ of the induced metric, is an eigenfunction of the Laplacian $\Delta_L$ with eigenvalue $2n+2$. Moreover the dimension of the $(2n+2)$-eigenspace is at least $\dim \mathfrak g - \frac 1 2 n( n+1) -1$.
\end{thmintro}
Moreover we prove, like in the sphere case although with totally different techniques, that if the lower bound is attained then the submanifold is totally geodesic together with a rigidity result about the ambient $M$, in the case of a \emph{regular} Sasaki-Einstein manifold over a base K\"ahler manifold.
\begin{thmintro} If $M$ is a regular Sasaki-Einstein manifold and the multiplicity of the eigenvalue $2n+2$ of $\Delta_L$ is exactly $\dim \mathfrak g - \frac 12 n(n+1)-1$ then $M$ is a Sasaki-Einstein circle bundle over the complex projective space endowed with the Fubini-Study metric. In particular if $M$ is simply connected then $M = S^{2n+1}$.
\end{thmintro}
Among the techniques we use we mention the theory of deformations of minimal Legendrian submanifolds, for which we refer to \cite{ono, ohnita} and, in the case of regular manifolds, the correspondence between Legendrian submanifolds of Sasakian manifolds and Lagrangian submanifolds of K\"ahler manifolds, see \cite{reckziegel}.
This result makes use of the geometry of Lagrangian submanifolds of the K\"ahler-Einstein base, which exists by the regularity assumption. It would be interesting to drop this assumption and prove the result for quasi-regular or irregular Sasaki-Einstein manifolds.
Then in Theorem \ref{thmeigenfunction} we provide a generalization of the family of eigenfunctions by making use of the immersion of the Sasaki-Einstein manifold $M$ into its Ricci-flat K\"ahler cone $C(M)$. This family is parameterized by the Lie algebra of the infinitesimal K\"ahler automorphisms of $C(M)$, which is in general bigger than the algebra of infinitesimal Sasaki automorphisms of $M$.
The family is defined by means of the Nomizu operator on $C(M)$. This time our arguments are similar to the ones of L\^e and Wang for the sphere and they rely on the Ricci-flatness of $C(M)$ and properties of the Nomizu operator.
It would be interesting to provide sufficient conditions for the Legendrianity of a minimal submanifold by means of any of these families of functions.
\begin{problem}
Let $M^{2n+1}$ be a Sasaki-Einstein manifold with big enough automorphism group $G$, let $L^n$ be a minimal submanifold such that for each $X \in \mathfrak g$, the functions of the families \eqref{eqetaX} or \eqref{equa:defnefK} are eigenfunctions of $\Delta_L$ with eigenvalue $2n+2$. Can we conclude that $L$ is Legendrian?
\end{problem}
Also, it would be interesting to relate the second family with the moment map of the symplectic action on $C(M)$ of its K\"ahler automorphism group.
The paper is organized as follows. In section \ref{sec:prelim} we recall some notions from Sasakian geometry, minimal Legendrian deformations and the contact moment map. In section \ref{sec:etaX} we introduce the first family of eigenfunctions and prove the main theorems. Finally in section \ref{sec:nomizu} we construct the functions via the Nomizu operator.
\subsection*{Acknowledgements}
The authors would like to thank Xiuxiong Chen for constant support and Fabio Podest\`a
for suggesting the problem and his help and advice. We finally thank Anna Gori for useful discussions.
\section{Preliminaries} \label{sec:prelim}
We recall some notions from Sasakian geometry, minimal Legendrian submanifolds and their deformations.
\subsection{Sasakian manifolds}
In this paper we focus on the case where the contact manifold is a Sasakian manifold, i.e. there is a contact form $\eta$, its Reeb field $\xi$, a Riemannian metric $g$ and a $(1,1)$-tensor field $\Phi$ such that
\begin{equation*}
\begin{split}
\eta(\xi) = 1, \iota_\xi d\eta = 0 \\
\Phi^2 = - \id + \xi \otimes \eta \\
g(\Phi \cdot, \Phi \cdot) = g + \eta \otimes \eta \\
d\eta = g( \Phi \cdot, \cdot) \\
N_\Phi + \xi \otimes d\eta = 0
\end{split}
\end{equation*}
where $N_\Phi$ is the torsion of $\Phi$.
An equivalent formulation is to say that a Riemannian manifold $(M, g)$ is Sasakian if and only if its symplectization $C(M)=M \times \mathbb R^+$ with metric $\bar g = r^2 g + dr^2$ is a K\"ahler manifold, where $r$ is the coordinate on $\mathbb R^+ = (0, +\infty)$.
A Sasakian structure defines a \emph{transverse K\"ahler structure} on $M$, that it is defines a K\"ahler structure $(d\eta, \Phi|_D)$ on the contact subbundle $D = \ker \eta$. This K\"ahler metric is known as the \emph{transverse metric}.
The Reeb field $\xi$ is unitary and Killing and defines a foliation called \emph{characteristic foliation}. We call $M$ \emph{regular} Sasakian if the circle action defined by the characteristic foliation is free. It is known, see e.g. \cite{monoBG, blair} that every compact regular Sasakian manifold is a Riemannian submersion $\pi:M \rightarrow B$ over a compact K\"ahler manifold.
In the \emph{quasi-regular} case, i.e. when the circle action is locally free, we have an orbifold Riemannian submersion.
A Sasaki-Einstein metric is a Sasakian metric which is Einstein, i.e. its Ricci tensor is a multiple of the metric. By curvature properties of Sasakian metrics, see e.g. \cite{monoBG}, it follows that this constant is $2n$, where $2n+1$ is the dimension of $M$.
An $\eta$-Sasaki-Einstein metric is a Sasakian metric $g$ such that its Ricci tensor satisfies $\Ric = A g + (2n-A) \eta \otimes \eta$ for some constant $A$. The following proposition is well known.
\begin{prop}[\cite{sparks_survey}]
Let $(M, g)$ be a Sasakian manifold. Then $g$ is Sasaki-Einstein if and only if the transverse metric is K\"ahler-Einstein with constant $2n+2$, if and only if the K\"ahler cone $C(M)$ is Ricci-flat.
\end{prop}
\subsection{Legendrian immersions and their deformations}
We will consider some special submanifolds of Sasakian manifolds, known as \emph{Legendrian} (or \emph{horizontal}), see \cite{reckziegel}.
A Legendrian submanifold of a $(2n+1)$-dimensional contact manifold $(M, \eta)$ is an $n$-dimensional submanifold $i: L \rightarrow M $ such that for all $p \in L$ we have $i_*(T_p L) \subseteq \ker \eta_{i(p)}$.
We will consider Legendrian submanifolds which are also \emph{minimal} in the sense of Riemannian geometry, i.e. their mean curvature field vanishes.
If we have a Legendrian submanifold $L$ in a Sasakian manifold we can identify the space of sections of the normal bundle $NL$ with $C^\infty(L) \oplus \Omega^1(L)$ via the isomorphism
\begin{align*}
\chi : \Gamma(NL) &\longrightarrow C^\infty(L) \oplus \Omega^1(L)\\
V &\longmapsto \biggl ( \eta(V), -\frac 1 2 i^* (\iota_V d\eta) \biggr )
\end{align*}
see \cite{ono}.
In the case of a compact regular Sasakian manifold $M$ with contact structure $\eta$ that fibers over a compact K\"ahler manifold $(B, \omega)$ we can take the projection $\pi(L) \subseteq B$ of a Legendrian $L$.
Following Reckziegel \cite{reckziegel} we have that $\pi(L)$ is a Lagrangian submanifold of $B$, i.e. $(\pi \circ i)^* \omega = 0$ and is finitely covered by $L$.
Conversely, given a Lagrangian submanifold $j:N \rightarrow B$, a point $q \in N$, for any choice of $p$ in the fiber of $q$ there exists a neighborhood $U$ of $q$ and a Legendrian immersion $i: U \rightarrow M$ such that $\pi \circ i = j|_U$.
Moreover, Riemannian properties of $L$ hold as well for $\pi(L)$ and conversely. Namely we have the following.
\begin{prop} [\cite{reckziegel}] \label{prop:reckziegel}
The Legendrian $L$ is minimal, or totally geodesic, if and only if the Lagrangian $\pi(L)$ is.
\end{prop}
A smooth family of minimal Legendrian immersions $i_t: L \rightarrow M$ is a family of maps $F: [0,1] \times L \rightarrow M$ such that for each $t$ the map $i_t= F(t, \cdot) : L \rightarrow M$ is a minimal Legendrian immersion.
Every smooth family points out a vector field $W_t$ on $L$ given at $p$ by
\[
W_t |_p = F_* \biggl( \frac \partial {\partial t} \biggr |_{(t, p)} \biggr ).
\]
It is known, e.g. \cite{ohnita,ono}, that a family of immersions is Legendrian if and only if the normal component $V_t$ of $W_t$ satisfies
\begin{equation} \label{infleg}
V_t = \chi^{-1} \biggl (\eta(V_t), \frac 1 2 d (\eta(V_t)) \biggr ),
\end{equation}
i.e. $d(\eta(V_t)) = - i^* (\iota_{V_t} d\eta)$. Normal fields satisfying \eqref{infleg} are called \emph{infinitesimal Legendrian deformations}.
We are interested in minimal Legendrian deformations of a Legendrian $i: L \rightarrow M$, that are smooth families $i_t: L \rightarrow M$ of minimal Legendrian immersions such that $i_0=i$.
A trivial family of deformations of a minimal Legendrian submanifold is given by one-parameter families of ambient transformations. We will denote by $\Aut(M)$ the group of such transformations, i.e. diffeomorphisms $M \rightarrow M$ which are isometric contactomorphisms.
Indeed, let $\phi_t \in \Aut(M)$ be one such family. Then $i_t = \phi_t |_{i(L)} : i(L) \rightarrow M$ is a minimal Legendrian deformation, see \cite{ohnita}.
In particular, the normal component of every field in the Lie algebra $\mathfrak{aut}(M)$ of $\Aut(M)$ defines an infinitesimal Legendrian deformation. This is also minimal as we are taking the normal component of a Killing vector field, see \cite[Sec. 3]{simons}.
When we restrict ourselves to $\eta$-Sasaki-Einstein manifolds with constant $A$, we have a characterization of the space of infinitesimal minimal Legendrian deformations.
\begin{prop}[\cite{ohnita}]
Let $i:L \rightarrow M$ be a minimal Legendrian submanifold of an $\eta$-Sasaki-Einstein manifold with constant $A$. Then the vector space of infinitesimal minimal Legendrian deformations is identified with
\[
\Defo(L) = \mathbb R \oplus \{ f \in C^\infty(L): \Delta_L f = (A+2)f \}
\]
where $\Delta_L$ denotes the Laplacian of $L$ with the induced metric.
\end{prop}
This result is obtained by combining the copy of the space of infinitesimal Legendrian deformations of $L$ given by
\[
\biggl \{ \biggl (f, \frac 1 2 df \biggr ): f \in C^\infty(L) \biggr \}
\]
and the space of minimal deformation given by the kernel of the Jacobi operator $\mathcal J$, for which we refer to \cite{simons}.
\subsection{Contact moment maps} \label{subsec:moment}
We finally recall the notion of contact moment map, we follow \cite[Sec.~8.4.2]{monoBG}.
In our setting the group $G = \Aut(M)$ is a compact group acting on $M$. We can extend this action to the symplectic cone $(C(M), d(r^2 \eta))$ by requiring that it leaves the $\{r = \const \}$ levels unchanged, i.e. the action is given by $g (p, r) = (gp, r)$.
Since $G$ is a group of contactomorphisms, it is easy to see that the action on $C(M)$ is by symplectomorphisms and, since the symplectic form on the cone is exact, this action is Hamiltonian.
So there exists a map $\phi: C(M) \rightarrow \mathfrak g^*$, such that
\[
d (\phi(X)) = - \iota_X d (r^2 \eta) = d (r^2 \eta(X)).
\]
Hence, up to a constant, one can take the map $\phi(p, r) (X) = r^2 \eta_p(X)$.
Seeing $M$ as the $\{r=1 \}$ level set, we consider the restriction $\mu: M \rightarrow \mathfrak g^*$ of $\phi$ which we call the \emph{contact moment map} for the $G$-action on $M$.
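For completeness, the computation behind this choice of $\phi$ is the usual Cartan-formula argument: each $X \in \mathfrak g$ preserves the contact form and the levels of $r$, hence the one-form $r^2\eta$, so
\[
0 = \mathcal L_X (r^2\eta) = \iota_X d(r^2\eta) + d\bigl(\iota_X (r^2\eta)\bigr),
\qquad \text{hence} \qquad
- \iota_X d(r^2\eta) = d\bigl(r^2\eta(X)\bigr),
\]
which shows that $r^2\eta(X)$ is indeed a Hamiltonian function for $X$, determined up to an additive constant.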
\section{Eigenfunctions using the contact moment map} \label{sec:etaX}
In this section we construct one possible generalization of the functions given by L\^e-Wang \cite{lewang}.
We briefly recall their setting. They consider the standard Sasakian sphere $S^{2n+1}$ immersed in its K\"ahler cone $\mathbb C^{n+1} \backslash \{0 \}$, endowed respectively with the round metric $g$ and the Euclidean metric $\langle \cdot, \cdot \rangle$.
It is known that both the Sasaki transformation group of the sphere and the K\"ahler automorphism group of the cone coincide with $G=\U(n+1)$.
Let $M \in \mathfrak u(n+1)$. Then the moment map for the $G$-action on the cone is given, up to a constant, by
\[
\phi(p, r)(M) = r^2 \eta_p(M_p) = r^2 g(\xi_p, M_p) = \langle \xi_p, M_p \rangle
\]
We see an infinitesimal Sasaki automorphism $M \in \mathfrak u(n+1)$ as a linear vector field whose value at $x \in S^{2n+1}$ is $Mx$. Then, using that $\xi$ at $x$ is $Jx$, where $J$ is the standard complex structure, the contact moment map $\mu: S^{2n+1} \rightarrow \mathfrak u(n+1)^*$ is given by
\[
\mu(x)(M) = \langle Mx, Jx \rangle
\]
which is exactly the function of L\^e-Wang.
Back to the general setting of the Sasaki group $G= \Aut(M)$ with Lie algebra $\mathfrak g$ acting on the $\eta$-Sasaki-Einstein $M$, we have the contact moment map that is given by $\mu(p)(X) = \eta_p(X_p)$.
We then consider, for each $X \in \mathfrak g$, the function $p \mapsto \eta_p(X_p)$ restricted to a minimal Legendrian submanifold, normalized by subtracting a constant.
We prove the generalization of one of the implications of \cite[Thm. ~1.1]{lewang}.
\begin{thm} \label{thm:eigenf_etaX}
Let $(M, g, \eta, \xi)$ be a $(2n+1)$-dimensional $\eta$-Sasaki-Einstein manifold with $\Ric = A g + (2n-A) \eta \otimes \eta$ and let $L^n \subset M$ be
a minimal Legendrian submanifold.
Then for all $X \in \mathfrak{aut}(M)$ the function on $L$ given by
\begin{equation} \label{eqetaX}
f_X = \eta(X) - \frac 1 {\vol(L)} \int_L \eta(X)dv,
\end{equation}
where $dv$ is the volume form on $L$ of the induced metric, is an eigenfunction of the Laplacian $\Delta_L$ on $L$ with eigenvalue $A+2$. Moreover this eigenspace has dimension $\geq \dim \mathfrak{aut}(M) - \frac 1 2 n(n+1)-1$.
\end{thm}
\begin{proof}
We recalled above that the map $\chi: \Gamma(NL) \rightarrow C^\infty(L) \oplus \Omega^1(L)$ given by $\chi(V) = (\eta(V), -\frac 1 2 \iota_V d\eta)$
is an isomorphism if $L$ is Legendrian and that the space of infinitesimal deformations of a minimal Legendrian $L$ is
\[
\Defo(L) = \mathbb R \oplus \{ f \in C^\infty(L): \Delta_L f = (A+2) f \}.
\]
Let $X \in \mathfrak g = \mathfrak{aut}(M)$ and let $X|_L = X_1 + X_2 \in \Gamma(TL) \oplus \Gamma(NL)$ be its decomposition.
From \cite{ohnita} it follows that $X_2$ defines a Legendrian deformation of $L$ and it is known, e.g. \cite{simons},
that the normal part of a Killing field defines an infinitesimal minimal deformation. Hence $\chi(X_2) \in \chi(\ker \mathcal J)$, where $\mathcal J$
denotes the Jacobi operator, and so, following Ohnita \cite{ohnita}, we have
\[
\Delta_L f - (A+2) f = \const = C,
\]
where $f = \eta(X_2) = \eta(X)|_L$, since $X_1$ is tangent to the Legendrian $L$ and so $\eta(X_1) = 0$. Integrating this identity over $L$ gives $C = -(A+2)\bar f$, where $\bar f =\frac 1 {\vol(L)} \int_L \eta(X)dv$, hence $\Delta_L (f - \bar f) = (A+2)(f - \bar f)$ and the pair $(C, f- \bar f) \in \mathbb R \oplus \{ f \in C^\infty(L): \Delta_L f = (A+2) f \}$.
So the first claim follows.
Every $X \in \mathfrak g = \mathfrak{aut}(M)$ defines a trivial deformation of $L$, hence there is a linear map $\alpha: \mathfrak g \rightarrow \Defo(L)$
given by $\alpha(X) = \chi(X_2)$.
Its kernel is $\ker \alpha = \{ X \in \mathfrak g: X|_L \in \Gamma(TL) \} \subseteq \mathfrak{iso}(L)$. So we have
\begin{align} \label{so}
1 + \dim E_{A+2} & \geq \dim \alpha(\mathfrak g) \\ \nonumber
& = \dim \mathfrak g - \dim \ker \alpha \\ \nonumber
& \geq \dim \mathfrak g - \dim \mathfrak{so}(n+1) \\
&= \dim \mathfrak g - \frac{n(n+1)}{2}. \nonumber
\end{align}
So we have the second claim in the statement.
\end{proof}
Let us specialize to Sasaki-Einstein manifolds and assume that $M$ is regular, so it is a principal circle bundle $\pi: M \rightarrow B$ over a K\"ahler-Einstein
base manifold $B$ and consider the case when the equality holds in the previous theorem. We prove the following, generalizing \cite[Thm.~1.2]{lewang} together with a rigidity result.
\begin{thm}\label{prop:lowerbound}
If $M$ is regular and the eigenvalue $2n+2$ of $\Delta_L$ has multiplicity exactly $ \dim \mathfrak{aut}(M) - \frac 1 2 n(n+1)-1$ then $L$ is totally geodesic in $M$ and $M$ is a principal circle bundle over the complex projective space.
\end{thm}
\begin{proof}
The projection $\tilde L = \pi(L) \subseteq B$ is a Lagrangian submanifold of a K\"ahler-Einstein manifold and it is known that $\tilde L$ is covered by $L$ \cite{reckziegel}.
To have equality one needs equality in \eqref{so}, so we conclude that the isometry group of $\tilde L$ is the largest possible, i.e. its Lie algebra is $\mathfrak{so}(n+1)$. Let this isometry group be denoted by $K$.
The group $K$, being a subgroup of the Sasaki transformation group of $M$, sends leaves into leaves and thus acts on $B$. We claim that the action has cohomogeneity one.
Indeed it is known, see \cite{transfgroups}, that if $\dim K = \dim \mathfrak{so}(n+1)$ then $\tilde L^n$ is either an $n$-sphere or $\mathbb{RP}^n$, written as $\SO(n+1)/H$, where $H= \SO(n)$ or $H= \mathbb Z_2 \cdot \SO(n)$ is the stabilizer of a $q \in \tilde L$.
In any case the isotropy representation of $H$ acts transitively on the unit sphere of $T_q \tilde L$. Since $\tilde L$ is Lagrangian, the action is transitive also on the unit sphere in the normal space at $q$, hence the action of $\SO(n+1)$ on $B$ has cohomogeneity one.
Let $p \in \tilde L$. Since $\tilde L$ is homogeneous under $K$, it is also known from \cite{bedgor} that the orbit $\Omega = K^\mathbb C \cdot p$ is open dense in $B$ and Stein, hence in particular affine, and that $B \backslash \Omega$ has complex codimension $1$.
Let $x \in B$ be a principal point. Since $\Omega$ is open and dense, the $K^\mathbb C$-orbit through $x$ is open as well and intersects $\Omega$, hence they coincide. So $B$ is a two-orbit K\"ahler manifold, i.e. is acted on by a complex algebraic group admitting exactly two orbits $\Omega$ and $A$.
They were classified, as complex manifolds, by Ahiezer \cite[Table~2]{ahiezer} in the case of $\Omega$ affine and $A$ of codimension $1$. The occurrences of a group $K$ with Lie algebra $\mathfrak{so}(n+1)$ can be one of the following:
\begin{enumerate}
\item $\tilde L = \SO(n+1)/ \SO(n) = S^n$ and $B = Q_{n}$,
\item $\tilde L = \frac{\SO(n+1)}{\textup{center}}/ S(\O(1) \times \O(n)) = \mathbb{RP}^n$ and $B = \mathbb{CP}^n$,
\item $\tilde L = \Spin(7) / \G_2 = S^7$ and $B = Q_7$,
\item $\tilde L = \SO(7) / \G_2 = \mathbb{RP}^7$ and $B = \mathbb{CP}^7$;
\end{enumerate}
where the projective spaces and the complex hyperquadrics are endowed with the unique K\"ahler-Einstein metric of Einstein constant $2n+2$.
This proves that the possible bases $B$ are only complex hyperquadrics or complex projective spaces, and that $M$ is a Sasaki-Einstein principal circle bundle over $B$.
Since the pairs in this list are symmetric subspaces of $B$, $\tilde L$ is totally geodesic in $B$. By Proposition \ref{prop:reckziegel} of Reckziegel, this is equivalent to saying that $L$ is totally geodesic in $M$.
We want now to exclude the case $B = Q_n$. So far we have the following diagram of immersions and submersions.
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=6em, column sep=5em,text height=1.5ex, text depth=0.25ex]
{ (S^n,g) & (M , g_\textup{SE}) & (S^{2n+3}, g_c) & (\mathbb C^{n+2}, \frac 4 c \langle \cdot, \cdot \rangle ) \\
(S^n, g) & (Q_n, g_Q) & (\mathbb{CP}^{n+1}, g^\textup{FS}_c) &\\ };
\path[right hook->] (m-1-1) edge node[auto] {$\tilde \imath$} (m-1-2);
\path[->](m-1-2) edge node[auto] {$\pi$} (m-2-2);
\path[->] (m-1-1) edge node[anchor=east] {$=$} (m-2-1);
\path[right hook->] (m-2-1) edge node[auto]{$i$} (m-2-2);
\path[right hook->] (m-2-2) edge node[auto]{$j$} (m-2-3);
\path[->] (m-1-3) edge node[auto] {$p$} (m-2-3);
\path[right hook->] (m-1-3) edge node[auto] {}(m-1-4);
\end{tikzpicture}
\]
From the metric point of view, we consider on $\mathbb{CP}^{n+1}$ the Fubini-Study metric $g^\textup{FS}_c$
of constant holomorphic curvature $c$, namely the metric induced via $p$ by $\frac 4 c$ times the round metric on $S^{2n+3}$, which we denote by $g_c$ \cite[vol.~II, p.~273]{kn}.
The choice of $c$ in $g^\textup{FS}_c$ is such that $g_Q = j^* g^\textup{FS}_c$
is K\"ahler-Einstein with Einstein constant $2n+2$, and by \cite{smyth} this happens exactly for $c= \frac{4n+4}{n}$.
By \cite{ChenNagano} the only totally geodesic spheres in the quadric are the immersions $i:x \mapsto [x]$ for $x \in S^n \subset \mathbb R^{n+1}$.
The restriction of the quadric metric to this sphere is $\frac{n}{2n+2}$ times the round metric. Since $S^n$ is simply connected for $n>1$, the Legendrian $L$ is isometric to its projection in $Q_n$.
Let $\Delta$ be the Laplacian on $S^n$ associated to the metric $\frac{n}{2n+2} g_\textup{round}$. An eigenfunction of $\Delta$ with eigenvalue $2n+2$ is an eigenfunction of the round Laplacian with eigenvalue $n$.
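Indeed, under a constant rescaling $\tilde g = \lambda g$ of a metric the Laplacian rescales as $\Delta_{\tilde g} = \frac 1 \lambda \Delta_g$, so with $\lambda = \frac{n}{2n+2}$ we have
\[
\Delta f = (2n+2) f \quad \Longleftrightarrow \quad \Delta_{g_\textup{round}} f = \frac{n}{2n+2}\,(2n+2)\, f = n f.
\]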
It is known from \cite{spectre} that the round sphere admits the eigenvalue $n$ with multiplicity $n(n+1)$.
To compute the lower bound, we observe that, since every Sasaki automorphism induces by projection a K\"ahler automorphism of the base, $\dim \mathfrak{aut}(M) \leq \dim \mathfrak{aut}(B) + 1 = \frac 1 2 (n+2)(n+1) + 1$, the automorphism group of the hyperquadric being $\SO(n+2)$.
In order not to attain the lower bound we need to have
\[
\dim \mathfrak{aut}(M) < \frac 3 2 n(n+1) + 1
\]
and this is always true since for $n>1$ we have $ \frac 1 2 (n+2)(n+1) + 1 < \frac 3 2 n(n+1) + 1$.
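Indeed,
\[
\frac 1 2 (n+2)(n+1) + 1 \ <\ \frac 3 2 n(n+1) + 1 \ \Longleftrightarrow\ (n+2)(n+1) < 3n(n+1) \ \Longleftrightarrow\ n+2 < 3n,
\]
which holds precisely when $n>1$.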
In the case $n=1$ the quadric $Q_1 = \mathbb{CP}^1$ is a complex projective space, so we are left with the only case $B = \mathbb{CP}^n$.
\end{proof}
\section{Eigenfunctions using the Nomizu operator} \label{sec:nomizu}
In this section we define another family of eigenfunctions on a Legendrian $L$ of $M$ by making use of the geometry of the K\"ahler cone and its group of K\"ahler automorphisms.
Let $(M, g)$ be a Sasakian manifold of dimension $2n+1$ and let $(C(M), \bar g)$ be its K\"ahler cone.
We let $e_A$ for $A \in \{1, \ldots, 2n+1 \}$ be a local orthonormal frame at some point of $M$ and let $\theta_A$ be its dual.
Then the set $\{ \frac 1 r e_1, \ldots, \frac 1 r e_{2n+1}, \partial_r \}$ is an orthonormal frame for the cone metric
$\bar g = r^2 g + dr^2$ and its dual is $\{ r \theta_1, \ldots, r \theta_{2n+1}, dr \}$.
Let $\bar \nabla$ be the Levi-Civita connection of the cone metric. From the well known relations \cite{sparks_survey} we have
\begin{align*}
\bar \nabla \partial_r &= \frac 1 r e_A \otimes \theta_A \\
\bar \nabla e_B &= \frac 1 r e_B \otimes dr + \theta_{BC} \otimes e_C - r \partial_r \otimes \theta_B.
\end{align*}
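In particular, for vector fields $X, Y$ tangent to $M$ these relations take the form we will use repeatedly below (cf. \cite[(1.1)]{sparks_survey}):
\[
\bar \nabla_X \partial_r = \frac 1 r X, \qquad \bar \nabla_X Y = \nabla_X Y - r\, g(X,Y)\, \partial_r ,
\]
where $\nabla$ denotes the Levi-Civita connection of $g$.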
\begin{lemma}\label{lemma:extrinsicandintrinsic}
Let $L^n \rightarrow M$ be an immersion
and let $e_1, \ldots, e_n$ be an orthonormal frame of $L$.
Let $\nabla$ be the Levi-Civita connection on $M$.
Then, for any smooth function $f: M \rightarrow \mathbb R$, we have
\begin{equation}\label{equa:extrinsicandintrinsic}
\Delta_L f|_L = - \sum_{i=1}^n \nabla d f (e_i , e_i )|_L - H \cdot f |_L ,
\end{equation}
where $\Delta_L$ is the Hodge Laplacian and $H$ is the mean curvature field of the immersion.
In particular, when the immersion is minimal, we have
\begin{equation}\label{equa:minimalextrinsicandintrinsic}
\Delta_L f |_L = - \sum_{i=1}^n \nabla d f (e_i , e_i ) |_L .
\end{equation}
\end{lemma}
\begin{proof}
Label as $\nabla^L$ the induced connection on $L$; by definition we have
\begin{align*}
\sum_{i=1}^n \nabla d f (e_i , e_i ) |_L &= \sum_{i=1}^n e_i e_i f |_L - \sum_{i=1}^n \nabla_{e_i} e_i f |_L \\
&= \sum_{i=1}^n e_i e_i f |_L - \sum_{i=1}^n \nabla^L_{e_i} e_i f |_L - \sum_{i=1}^n (\nabla_{e_i} e_i f |_L - \nabla^L_{e_i} e_i f |_L )\\
&= -\Delta_L f |_L - H \cdot f |_L ,
\end{align*}
which is precisely the claimed \eqref{equa:extrinsicandintrinsic}.
Since the assumption on minimality corresponds to the vanishing of $H$,
we also get the claimed \eqref{equa:minimalextrinsicandintrinsic}.
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{lemma:laplacianofefdoesnotdependonr}
Let $L^n \rightarrow M$ be a \emph{minimal} immersion in a Sasaki manifold.
Let $f$ be a function on the K\"ahler cone $C(M)$ which does not depend on $r$ and let $\Delta_L$ be the Hodge Laplacian on $L$;
finally, let $e_1 , \cdots , e_n$ be an orthonormal frame of $L$.
Then we have
\[
\Delta_L f |_L= - \sum_{i=1}^n \bar \nabla df (e_i, e_i) |_L.
\]
\end{lemma}
\begin{proof}
In view of Lemma \ref{lemma:extrinsicandintrinsic},
it suffices to show that, for any $i,j \in \{1,\cdots ,n\}$,
\begin{align}
\bar \nabla d f (e_i , e_j ) |_L = \nabla d f (e_i , e_j) |_L ,
\end{align}
where as usual $\nabla$ is the Levi-Civita connection of the Sasaki metric $g$,
while $\bar \nabla$ is the Levi Civita connection of the metric $\bar g = r^2 g + dr^2$.
By the very definition we have
\begin{align*}
\bar \nabla d f (e_i , e_j ) |_L & = e_i e_j f |_L - \bar \nabla_{e_i} e_j \cdot f |_L \\
& = e_i e_j f |_L - \bigl ( \nabla_{e_i} e_j \cdot f |_L - \delta_{ij}r\partial_r f |_L \bigr ) \\
& = \nabla d f (e_i , e_j) |_L ,
\end{align*}
where at the second equality we applied \cite[(1.1)]{sparks_survey}, and at the last equality we used that $f$ does not depend on $r$, so that the term $\delta_{ij}r\partial_r f |_L$ vanishes.
This completes the proof of the lemma.
\end{proof}
Let us now construct a family of operators.
For an infinitesimal K\"ahler automorphism $K$ on the cone, i.e. Killing and holomorphic, we define the operator on sections of $TC(M)$ given by
\begin{equation}
M_K = \bar \nabla K + \frac 1 {2n+2} \div (JK) J.
\end{equation}
\begin{lemma}\label{lemmapropMK}
Let $C(M)$ be the K\"ahler cone over a Sasaki-Einstein manifold and let $K$ as above. Then
\begin{enumerate}[(i)]
\item \label{divJKconst} $\div(JK) = \const$;
\item \label{MJJM} $M_K$ is skew-symmetric and $M_K J = J M_K$;
\item \label{traceJMK} $\tr(JM_K) = 0$;
\item \label{nablaM} $\bar \nabla M_K = \bar \Rm (\cdot, K)$ where $\bar \Rm$ is the Riemann $(3,1)$-tensor of $\bar g$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $A_K$ be the associated Nomizu operator, i.e. $A_K = \bar \nabla K$. Then, since $K$ is Killing, the covariant derivative of $A_K$ is known to be $\bar \nabla A_K = \bar \Rm(\cdot, K)$.
\begin{enumerate}[(i)]
\item Fix $p \in C(M)$ and let $v_i$ be a geodesic frame at $p$ and let $Y$ be any vector field on $C(M)$. Then
\begin{align*}
Y \cdot \div(JK)|_p &= \bar g (\bar \nabla_Y \bar \nabla_{v_i} JK, v_i) \\
&= - \bar g ( \bar \nabla_Y \bar \nabla_{v_i} K, J v_i) \\
&= -\bar g ( (\bar \nabla_Y A_K) v_i, J v_i)\\
&= \bar \Rm(Y, K, Jv_i, v_i) \\
&= 2 \bar \Ric (Y, K)\\
&= 0
\end{align*}
since $C(M)$ is Ricci-flat (see \cite{sparks_survey}); here we have used the well-known fact that $\Ric(X, Y) = \frac 1 2 \tr (\Rm(X, Y) \circ J)$.
\item Since $K$ is holomorphic we have $\bar \nabla_{J \cdot} K = J \bar \nabla_\cdot K$, so $M_K J = J M_K$.
Since $K$ is Killing, $\bar \nabla K$ is skew-symmetric, and so is $J$; hence \eqref{MJJM} follows.
\item Let $v_i$ be an orthonormal frame of $C(M)$. Then
\begin{align*}
\tr(JM_K) &= \bar g(JM_K v_i, v_i)\\
&= \bar g \biggl (\bar \nabla_{v_i} JK - \frac 1 {2n+2} (\div(JK)) v_i, v_i \biggr ) \\
&= \div(JK) - \div(JK) \\
&= 0.
\end{align*}
\item By \eqref{divJKconst} and the fact that $J$ is parallel, \eqref{nablaM} follows. \qedhere
\end{enumerate}
\end{proof}
We will use the following lemma.
\begin{lemma}\label{lemmaRm}
Let $X$ be any field on $M$. Then $\bar \Rm (r \partial_r, J r \partial_r) K$ and $\bar \Rm (r \partial_r, J X) K$ vanish.
\end{lemma}
\begin{proof}
We notice that $\bar \nabla_{r \partial_r} K$ is holomorphic. Indeed, using that $r \partial_r$ is holomorphic, we have
\begin{align*}
\bar \nabla_{r \partial_r} K &= [r \partial_r, K] + \bar \nabla_K r\partial_r \\
&= [r \partial_r, K] + K
\end{align*}
using that $\bar \nabla r \partial_r = \id$. Hence $\bar \nabla_{r \partial_r}K$ is holomorphic, being the sum of two holomorphic fields.
Then we compute
\begin{align*}
\bar \Rm(r \partial_r, J r\partial_r) K &= \bar \nabla_{r \partial_r} \bar \nabla_{J r \partial_r} K - \bar \nabla_{J r \partial_r} \bar \nabla_{r \partial_r} K - \bar \nabla_{[r \partial_r, J r \partial_r]} K \\
&= J \bar \nabla_{r \partial_r} \bar \nabla_{r \partial_r} K - J \bar \nabla_{r \partial_r} \bar \nabla_{r \partial_r} K - \bar \nabla_{J [r \partial_r, r \partial_r]} K \\
&= 0.
\end{align*}
Similarly $\bar \Rm (r \partial_r, J X) K = 0$.
\end{proof}
Now consider the family of functions $f_K : C(M) \rightarrow \mathbb{R}$ defined as
\begin{equation} \label{equa:defnefK}
f_K = \bar g (M_K \partial_r, J \partial_r).
\end{equation}
We exploit the fact that $\tr (JM_K) = 0$ in the following lemma, which also uses that $L$ is Legendrian.
\begin{lemma} \label{lemma210}
Let $e_i$ be a frame of the Legendrian $L$. Then
\[
\sum_{i=1}^n \bar g(M_K e_i, Je_i) = - r^2 f_K.
\]
\end{lemma}
\begin{proof}
Since $L$ is Legendrian, we can extend $\{ e_i\}$ to an \emph{orthonormal} frame $\{ \frac 1 r e_i, J \frac 1 r e_i, \partial_r, \frac 1 r \xi = J \partial_r \}$ of $C(M)$. Then
\begin{align*}
0 = \tr (J M_K) &= \frac 1 {r^2} \sum_{i=1}^n \biggl [ \bar g( JM_K e_i, e_i) + \bar g( JM_K J e_i, Je_i) \biggr ] \\
&\phantom{---------}+ \bar g (JM_K J \partial_r, J \partial_r) + \bar g(JM_K \partial_r, \partial_r)
\end{align*}
and from Lemma \ref{lemmapropMK}.\eqref{MJJM} we infer
\[
2 r^2 f_K + \sum_{i=1}^n 2 \bar g(M_K e_i, J e_i) = 0. \qedhere
\]
\end{proof}
\begin{lemma}\label{lemma:efconstantonar}
For any Killing and holomorphic vector field $K\in \Gamma(TC(M))$, the function $f_K$ is constant along the direction $\partial_r$.
\end{lemma}
\begin{proof}
Since $\bar \nabla_{\partial_r} \partial_r = 0$, we have
\begin{align*}
\partial_r f_K &= \bar g( (\bar \nabla_{\partial_r} M_K) \partial_r, J \partial_r) \\
&= \frac 1 {r^3} \bar \Rm(r \partial_r, K, r \partial_r, J r \partial_r)\\
&= - \frac 1 {r^3} \bar \Rm (r \partial_r, J r \partial_r, K, r \partial_r)\\
&= 0
\end{align*}
by Lemma \ref{lemmaRm}.
\end{proof}
We prove the following.
\begin{thm} \label{thmeigenfunction}
For any Legendrian minimal immersion $L^n \rightarrow M$ in a Sasaki-Einstein manifold
and for any holomorphic and Killing vector field $K\in \Gamma (TC(M))$ on the K\"ahler cone,
the functions $f_K$ defined by \eqref{equa:defnefK} are eigenfunctions of $\Delta_L$ with eigenvalue $2n+2$.
\end{thm}
\begin{proof}
We fix a vector field $K$ as in the statement and we set $f= f_K$.
In order to compute $\Delta_L f$, we notice that
Lemma \ref{lemma:efconstantonar} allows us
to apply Lemma \ref{lemma:laplacianofefdoesnotdependonr}.
Thus, let $\{ e_1 , \cdots , e_n \}$ be a local frame of $L$.
We begin by observing that, for any such vector field $e_i$,
there holds
\begin{align}
\label{equa:derivativeofef}
e_i f = \frac{2}{r} \bar g (M_K \partial_r , J e_i).
\end{align}
In fact,
\begin{align*}
e_i f & = \bar g ((\bar \nabla_{e_i} M_K) \partial_r , J \partial_r)
+ \bar g ( M_K \bar \nabla_{e_i} \partial_r , J \partial_r)
+ \bar g ( M_K \partial_r , J \bar \nabla_{e_i} \partial_r )\\
& = \bar g ( \bar \Rm (e_i, K)\partial_r , J\partial_r)
+2 \bar g (M_K \partial_r , J \bar \nabla_{e_i} \partial_r) \\
& = \frac{2}{r} \bar g (M_K \partial_r , J e_i),
\end{align*}
where at the second equality we applied Lemma \ref{lemmapropMK}.\eqref{MJJM} and \eqref{nablaM},
at the third equality we applied Lemma \ref{lemmaRm} and \cite[(1.1)]{sparks_survey}.
Similarly as for \eqref{equa:derivativeofef}, we also get
\begin{align}
\label{equa:derivativeofef2}
\nabla_{e_i} e_i f = \frac{2}{r} \bar g (M_K \partial_r , J \nabla_{e_i} e_i).
\end{align}
Now we compute
\begin{align*}
e_i e_i f &= e_i \biggl ( \frac{2}{r} \bar g (M_K \partial_r , J e_i) \biggr ) \\
&= \frac{2}{r} \biggl (
\bar g ((\bar \nabla_{e_i} M_K) \partial_r , J e_i)
+ \bar g ( M_K \bar \nabla_{e_i} \partial_r , J e_i)
+ \bar g ( M_K \partial_r , J \bar \nabla_{e_i} e_i )
\biggr ) \\
&= \frac{2}{r} \biggl (
\bar g (\bar \Rm (e_i, K)\partial_r , J e_i)
+ \frac{1}{r}\bar g ( M_K e_i , J e_i) \\
&\phantom{-------} + \bar g ( M_K \partial_r , J \nabla_{e_i} e_i )
- \bar g ( M_K \partial_r , J r\partial_r)
\biggr ) \\
&= \frac{2}{r^2} \bar g ( M_K e_i , J e_i)
+ \frac{2}{r} \bar g ( M_K \partial_r , J \nabla_{e_i} e_i )
- 2\bar g ( M_K \partial_r , J \partial_r),
\end{align*}
where at the third equality we applied Lemma \ref{lemmapropMK}.\eqref{nablaM} and \cite[(1.1)]{sparks_survey},
and at the fourth equality we applied Lemma \ref{lemmaRm}.
Finally we compute
\begin{align*}
\Delta_L f |_L &= -\sum_{i=1}^n \bar \nabla d f (e_i , e_i)|_L\\
&= -\sum_{i=1}^n ( e_i e_i f - \bar \nabla_{e_i}e_i f )|_L\\
&= -\sum_{i=1}^n ( e_i e_i f - \nabla_{e_i}e_i f + r\partial_r f )|_L \\
&= -\sum_{i=1}^n \biggl ( \frac{2}{r^2} \bar g ( M_K e_i , J e_i)
+ \frac{2}{r} \bar g ( M_K \partial_r , J \nabla_{e_i} e_i )\\
&\phantom{------}
- 2\bar g ( M_K \partial_r , J \partial_r) - \frac{2}{r} \bar g (M_K \partial_r , J \nabla_{e_i} e_i) \biggr ) \biggr |_L\\
&= (2n+2)f|_L,
\end{align*}
where at the third equality we applied \cite[(1.1)]{sparks_survey},
at the fourth equality we applied Lemma \ref{lemma:efconstantonar} and \eqref{equa:derivativeofef2},
at the fifth equality we applied Lemma \ref{lemma210}.
This completes the proof of the theorem.
\end{proof}
\begin{oss}
Let us see how to recover the functions of L\^e-Wang in this setting.
Let $M \in \mathfrak{su}(n+1)$ and consider it as a real $(2n+2) \times (2n+2)$ matrix. It is skew-symmetric and such that $\tr(JM) = 0$.
Consider the vector field on $\mathbb C^{n+1}$ given at $x$ by $K_x = Mx$, which is Killing and holomorphic.
We claim that the function $f_K$ is exactly the function $\langle Mx, Jx \rangle$.
Indeed, if $\bar \nabla$ is the flat connection on $\mathbb C^{n+1}$, it is $\bar \nabla_y K = My$ for $y \in \mathbb C^{n+1}$.
Moreover $\div(JK) = \tr(JM) = 0$. So $f_K = \langle Mx, Jx \rangle$ after identifying $x$ with $\partial_r|_{(x, 1)}$.
\end{oss}
Let us now see the connection between our two different generalizations.
It is known that there is an inclusion $\mathfrak{aut}(M) \subseteq \mathfrak{aut}(C(M))$ of the algebra of infinitesimal Sasaki automorphisms of $M$ into the algebra of infinitesimal K\"ahler automorphisms of the cone $C(M)$. It consists in viewing a field $V \in \mathfrak{aut}(M)$ as trivially extended to the cone, where it turns out to be holomorphic and Killing with respect to the cone metric.
We proved in Theorem \ref{thm:eigenf_etaX} that for $X \in \mathfrak{aut}(M)$ the functions on $L$ given by $\eta(X) - \frac 1 {\vol(L)} \int_L \eta(X)dv$ are eigenfunctions of $\Delta_L$ with eigenvalue $2n+2$, in the Sasaki-Einstein assumption.
By seeing $X$ as an infinitesimal K\"ahler automorphism of $C(M)$ we compute
\[
M_X \partial_r = \frac 1 r \biggl [ \bar \nabla_{r \partial_r} X + \frac 1 {2n+2} \div(JX) \xi \biggr ]
\]
and $J \partial_r = \frac 1 r \xi$.
Taking their inner product we have
\begin{equation} \label{eq:etaXfX}
f_X = \bar g (M_X \partial_r, J \partial_r) = \frac 1 {r^2} \bar g(X, \xi) + \frac 1 {2n+2} \div(JX) = \eta(X) + \frac 1 {2n+2} \div(JX).
\end{equation}
Using Theorem \ref{thm:eigenf_etaX} together with Theorem \ref{thmeigenfunction} we have, after applying the Laplacian to \eqref{eq:etaXfX}, that
\begin{equation}
f_X = \eta(X) - \frac 1 {\vol(L)} \int_L \eta(X)dv.
\end{equation}
Hence our second generalization extends the first.
In the L\^e-Wang setting, we reobtain the fact that $\int_L \eta(X)dv = 0$, which is a fortiori true since $\eta(X)$ is an eigenfunction of $\Delta_L$ with non-zero eigenvalue.
Masses of free fermions arise from local fermion bilinear terms in the action. If symmetries of the theory prevent such terms, fermions remain massless perturbatively. However, these symmetries can break spontaneously and generate non-zero fermion bilinear condensates that can make fermions massive. This traditional mechanism of fermion mass generation is well known and is used in the standard model of particle physics to give quarks and leptons their masses. In QCD, along with confinement, this mechanism also helps explain the existence of light pions while making nucleons heavy. In this work we explore a different mechanism of fermion mass generation where fermions acquire their mass through four-fermion condensates, while fermion bilinear condensates vanish. This alternate mechanism has been the focus of many recent studies in 3D lattice models \cite{Slagle:2014vma,He:2015bda,Ayyar:2014eua,Catterall:2015zua,Ayyar:2015lrd,He:2016sbs}. Here we explore if these results extend to 4D. The 3D studies also show that no spontaneous symmetry breaking of any lattice symmetries is necessary for fermions to become massive. The presence of a new second order critical point makes the mechanism interesting even in the continuum.\footnote{The fermion mass generation mechanism we explore in this work is different from the one proposed in \cite{Stern:1998dy,PhysRevD.59.016001,Kanazawa:2015kca} where chiral symmetry is spontaneously broken due to four-fermion condensates instead of fermion bilinear condensates and massless bosons are present. In the massive fermion phase we explore here, all particles, including bosons, are massive.}
We believe that the alternate mechanism of mass generation can be understood qualitatively if we view the four-fermion condensate as a fermion bilinear condensate between a composite fermion (consisting of three fundamental fermions) and a fundamental fermion. When three fundamental fermions bind to form a composite fermion state, the four-fermion condensate can begin to act like a conventional mass term. However, since such composite states can only form at sufficiently strong couplings, a non-perturbative approach is required to uncover it. At weak couplings, when composite states do not form, four-fermion condensates cannot act like mass terms although they are still non-zero. Since there are no local order parameters that signal the formation of the composite fermion bound states, the massive phase does not require spontaneous symmetry breaking. All these arguments are consistent with the results in 3D lattice models mentioned above.
Generating fermion masses through interactions but without spontaneous symmetry breaking is a subtle problem from the perspective of 4D continuum quantum field theories due to anomaly matching arguments \cite{tHooft:1979bh,PhysRevLett.45.100,Banks:1991sh,Banks:1992af}, but 4D lattice models that display such a mechanism of mass generation are well known and have been studied extensively in the context of lattice Yukawa models with both staggered fermions \cite{Hasenfratz:1988vc,Hasenfratz:1989jr,Lee:1989mi} and Wilson fermions \cite{Bock:1990tv,Bock:1990cx,Golterman:1990yb,Gerhold:2007gx,Bulava:2011jp}. These models contain a massless fermion phase at weak couplings (referred to as the paramagnetic weak or PMW phase), and a non-traditional symmetric massive fermion phase at strong couplings (referred to as the paramagnetic strong or PMS phase). The fermion mass in the PMS phase can be argued to be generated by four-fermion condensates, since fermion bilinear condensates vanish in that phase. A review of the early work can be found in \cite{Shigemitsu:1991tc}.
In order for the PMS phase found in previous lattice calculations to become interesting from the point of view of continuum quantum field theory, it must be possible to tune the fermion mass to zero in lattice units. In units where the fermion mass remains fixed, this would imply that the lattice spacing vanishes. This can be accomplished in the presence of a direct second order transition between the PMW and the PMS phase. Such a transition was proposed as an important ingredient for realizing chiral fermions on the lattice \cite{Eichten:1985ft,Golterman:1992yha}. Unfortunately, all previous studies found that there was always an intermediate phase (referred to as the ferromagnetic or FM phase) where the symmetry that protected the fermions from becoming massive at weak couplings was broken spontaneously. In the presence of the FM phase, fermions in the PMS phase cannot be made arbitrarily light in lattice units. We believe this was the reason the PMS phase was abandoned as merely a lattice artifact. In fact, earlier studies in 3D also found an intermediate FM phase \cite{Alonso:1999hh}. Hence the recent discovery of a possible direct second order PMW-PMS phase transition in 3D is exciting, and raises the possibility that such transitions may exist even in 4D.
A second order PMW-PMS transition does not fall under the usual Landau-Ginzburg paradigm, since both phases have the same global symmetries and there is no local order parameter that distinguishes them. For this reason it must be different from the usual Gross Neveu universality class. Such transitions are known in the condensed matter literature and are usually driven by a change in the topological properties of the ground state \cite{Senthil:2014ooa}. The PMW-PMS transition could occur for a similar reason, although it does not seem to involve any topological order \cite{He:2016sbs}. From a condensed matter perspective, the PMS phase can be viewed as a trivial insulator where the ground state does not break any lattice symmetries since it is formed by local singlets. In contrast, the traditional massive fermion phase with fermion bilinear condensates is like a gapped semi-metal or a topological insulator. Topological insulators can have chiral zero modes attached to domain walls where the sign of the condensate changes \cite{Callan:1984sa}. Such zero modes are extensively used today in lattice QCD studies through the domain wall formulation introduced by Kaplan \cite{Kaplan:1992bt}. Many interesting properties of such topological insulators in background fields have also been studied by particle physicists many years ago \cite{Golterman:1992ub,Kaplan:1999jn}. Recently the focus has shifted to the classification of topological insulators in the presence of fermion self interactions \cite{Fidkowski:2009dba,PhysRevB.83.075103,1367-2630-15-6-065002,Fidkowski:2013jua,PhysRevB.92.125104,PhysRevB.89.195124}. These studies suggest that when the fermion content of the theory is chosen appropriately, such interactions can smoothly deform a topological insulator into a trivial insulator. During such a change massless chiral fermions on the edges acquire non-traditional masses due to the formation of four-fermion condensates, since fermion bilinear condensates are forbidden. The associated phase transition on the edge need not involve any spontaneous symmetry breaking \cite{Slagle:2014vma}. This has prompted many applications of the alternate mass generation mechanism to particle physics \cite{Wen:2013ppa,You:2014vea,You:2014oaa,BenTov:2015gra}.
A direct second order PMW-PMS phase transition has remained elusive in 4D so far. Given the discovery of such a transition in 3D \cite{Ayyar:2015lrd}, we believe it is worth searching for it even in 4D. The first step obviously would be to explore if the same lattice model that showed its presence in 3D also contains it in 4D. Interestingly, this model contains sixteen Weyl fermions, which has been argued to be the right number necessary for the possible existence of the transition in 4D \cite{You:2014vea}. However, this model was already studied long ago in the context of Higgs-Yukawa models and a {\em wide} intermediate FM phase was found \cite{Lee:1989mi}, implying that one needs to explore extensions to it. It may be possible to add new couplings to the model that have the effect of narrowing the width of the FM phase. Unfortunately, the conclusions of the earlier work were mostly drawn from mean field theory and crude Monte Carlo calculations. Hence, in this work we focus on accurately determining the phase boundaries of the model so as to get a sense of how far away the possible critical point is in the extended parameter space. By working in the limit where the Higgs field can be integrated out explicitly, we can accurately study the model in the chiral limit with Monte Carlo methods on lattices up to $12^4$ using the fermion bag approach \cite{PhysRevD.82.025007,Chandraepja13}. In contrast to the earlier work, our results show a surprisingly {\em narrow} intermediate FM phase, assuming it exists.
Our paper is organized as follows. In section \ref{sec2} we provide a new view point for our lattice model and discuss its symmetries. We also discuss observables that shed light on the phase structure of the model. In section \ref{sec3} we present the fermion bag approach and show that fermion bags have interesting topological properties. In particular we discuss an index theorem very similar to the one in non-Abelian gauge theories with massless fermions. In section \ref{sec4} we explain how the fermion bag approach provides a new theoretical perspective on the physics of the PMS phase and the alternate mass generation mechanism. In particular we explain how all fermion bilinear mass order parameters in the model must vanish at sufficiently strong couplings, although fermions are massive. In section \ref{sec5} we present our Monte Carlo results and in section \ref{sec6} we present our conclusions.
\section{The Model}
\label{sec2}
Our model was originally studied within the context of lattice Higgs-Yukawa models \cite{Lee:1989mi}. However, it can also be obtained directly by discretizing naively the continuum four-fermion action containing a single Dirac fermion field $\psi^a(x)$ and ${\overline{\psi}}^a(x)$ where $a$ labels the four spinor indices. We believe this alternate view point sheds more light on the mechanism of mass generation with four-fermion condensates (or equivalently the PMS phase) at strong couplings. Consider the continuum Euclidean action given by
\begin{equation}
S_{\rm cont} = \int d^4x \ \Big\{\ {\overline{\psi}}(x)\gamma_\alpha\partial_\alpha \psi(x) \ -\ U\Big(\psi^4(x) \psi^3(x) \psi^2(x) \psi^1(x) + {\overline{\psi}}^4(x) {\overline{\psi}}^3(x) {\overline{\psi}}^2(x) {\overline{\psi}}^1(x)
\Big) \Big\}.
\label{contact}
\end{equation}
where $\gamma_\alpha$ are the usual $4 \times 4$ Hermitian Dirac matrices. Note that the continuum model breaks the $U(1)$ fermion number symmetry
\begin{equation}
\psi(x) \rightarrow \exp(i\theta)\psi(x), \ \ {\overline{\psi}}(x) \rightarrow \exp(-i\theta){\overline{\psi}}(x)
\label{u1fs}
\end{equation}
explicitly, but it is invariant under Euclidean rotations and the $U(1)$ chiral symmetry
\begin{equation}
\psi(x) \rightarrow \exp(i\theta \gamma_5)\psi(x), \ \ {\overline{\psi}}(x) \rightarrow
{\overline{\psi}}(x)\exp(i\theta \gamma_5).
\label{u1cs}
\end{equation}
Perturbatively, no fermion bilinear mass term can be generated through radiative corrections since all such terms break either the $U(1)$ chiral symmetry or the rotational symmetry. Thus, the model must contain a massless fermion phase (or the PMW phase) at weak couplings. At strong couplings, assuming we can perform a perturbative (strong coupling) expansion in the kinetic term (similar to the hopping parameter expansion on the lattice), we can see the presence of a symmetric massive fermion phase (or the PMS phase).
In particular the leading order theory is trivial since all fermion fields are bound into local space-time singlets under the symmetries of the action. Introduction of the kinetic term can create excitations that transform non-trivially under both chiral and rotational symmetries, but all of these must be massive since energetically favored singlets need to be broken to create them. But, can the strong coupling expansion as described above be justified after the subtleties of UV divergences are taken into account? Although we cannot answer this question for a single Dirac field, ignoring the fermion doubling problem, we can easily discretize the continuum action (\ref{contact}) naively on the lattice and ask the same question in a controlled setting in the lattice theory. In particular we can even explore if the fermion mass of the lattice theory at strong couplings (i.e., in the PMS phase) can be made light as compared to the cutoff.
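To make the leading order statement concrete: at infinite coupling every site is saturated by a local singlet and, with the conventions of the fermion bag representation of section \ref{sec3}, each site contributes the saturated Grassmann integral
\[
\int [d\psi^1\, d\psi^2\, d\psi^3\, d\psi^4]\ U\, \psi^4 \psi^3 \psi^2 \psi^1 \ =\ U,
\]
so that on a lattice with $V$ sites the partition function reduces to $Z = U^V$; any excitation above this product of singlets requires insertions of the kinetic term and is therefore gapped.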
Discretizing (\ref{contact}) naively on a space-time lattice we obtain
\begin{equation}
S_{\rm naive} = \sum_{x,y} \ \Big\{\ {\overline{\psi}}_x \gamma_\alpha \frac{1}{2}(\delta_{x+\hat{\alpha},y} - \delta_{x-\hat{\alpha},y})\psi_y\ -\ U \Big(\psi^4_x \psi^3_x \psi^2_x \psi^1_x + {\overline{\psi}}^4_x {\overline{\psi}}^3_x {\overline{\psi}}^2_x {\overline{\psi}}^1_x\Big) \Big\},
\end{equation}
where we use the notation $\psi^a_x$ to denote the lattice Grassmann fields. Using the well known spin diagonalization transformation
\begin{equation}
\psi_x \rightarrow (\gamma_1)^{x_1}(\gamma_2)^{x_2}(\gamma_3)^{x_3}(\gamma_4)^{x_4} \psi_x,\ \
{\overline{\psi}}_x \rightarrow {\overline{\psi}}_x (\gamma_4)^{x_4}(\gamma_3)^{x_3}(\gamma_2)^{x_2}(\gamma_1)^{x_1},
\end{equation}
used to define staggered fermions \cite{Sharatchandra:1981si}, we obtain the lattice action,
\begin{equation}
S_{\rm naive} = \sum_{x,y} \ \Big\{\ {\overline{\psi}}_x M_{x,y}\psi_y\ -\ U\Big(\psi^4_x \psi^3_x \psi^2_x \psi^1_x + {\overline{\psi}}^4_x {\overline{\psi}}^3_x {\overline{\psi}}^2_x {\overline{\psi}}^1_x\Big) \Big\}.
\end{equation}
where $M_{x,y}$ is the free staggered fermion matrix
\begin{equation}
M_{x,y} \ =\ \frac{1}{2}\ \sum_{\alpha} \ \eta_{\alpha,x}\ \big(\delta_{x+\hat{\alpha},y} - \delta_{x-\hat{\alpha},y}\big).
\end{equation}
The phase factors $\eta_{1,x}=1$, $\eta_{2,x}=(-1)^{x_1}$, $\eta_{3,x}=(-1)^{x_1+x_2}$, $\eta_{4,x}=(-1)^{x_1+x_2+x_3}$ are well known. Since $\psi^a_x$ on even sites only connect with ${\overline{\psi}}^a_x$ on odd sites and vice versa, we can eliminate half the degrees of freedom by defining $\psi^a_x$ only on even sites and ${\overline{\psi}}^a_x$ only on odd sites. We can go a step further and stop distinguishing between $\psi^a_x$ and ${\overline{\psi}}^a_x$ since every site has an identical single four component Grassmann variable. This finally leads to the Euclidean action,
\begin{equation}
S \ =\ \frac{1}{2}\sum_{x,y,a}\ \psi^a_x \ M_{x,y} \ \psi^a_y \ - \ U\ \sum_x \ \psi^4_x \psi^3_x \psi^2_x \psi^1_x.
\label{act}
\end{equation}
We can also view the above action as being constructed directly with four reduced flavors of staggered fermions with an onsite four-fermion interaction. In this interpretation, the spinor indices $a=1,2,3,4$ are viewed as labels of the four reduced staggered flavors. When $U=0$ the above model describes eight flavors of Dirac fermions (or equivalently sixteen flavors of Weyl fermions) in the continuum. This matches the required number of fermions that allows for a non-traditional massive phase according to recent insights \cite{You:2014vea}.
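As a concrete illustration of the structure of (\ref{act}), the following sketch (ours, purely illustrative and not taken from any production code; it assumes a tiny $L^4$ lattice with anti-periodic boundary conditions) builds the free matrix $M_{x,y}$ and checks that it is antisymmetric and couples only even to odd sites:
\begin{verbatim}
import itertools
import numpy as np

L = 4                                # tiny linear size, illustration only
sites = list(itertools.product(range(L), repeat=4))
index = {x: i for i, x in enumerate(sites)}
parity = np.array([sum(x) % 2 for x in sites])   # 0 = even site, 1 = odd site

def eta(alpha, x):                   # alpha = 0,...,3; eta_1=1, eta_2=(-1)^{x_1}, ...
    return 1 - 2 * (sum(x[:alpha]) % 2)

M = np.zeros((len(sites), len(sites)))
for x in sites:
    for alpha in range(4):
        for sgn in (+1, -1):
            y = list(x)
            y[alpha] += sgn
            bc = -1.0 if y[alpha] in (-1, L) else 1.0   # anti-periodic wrap
            y[alpha] %= L
            M[index[x], index[tuple(y)]] += 0.5 * sgn * eta(alpha, x) * bc

assert np.allclose(M, -M.T)                  # M is antisymmetric
i, j = np.nonzero(M)
assert np.all(parity[i] != parity[j])        # couples only even to odd sites
\end{verbatim}
The objects \texttt{M} and \texttt{parity} defined here are reused in the sketches below.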
Lattice symmetries of staggered fermions are well known \cite{Golterman:1984cy}. These include: \\
\noindent (1) {\em Shift Symmetry:}
\begin{equation}
\psi^a_x \rightarrow \xi_{\rho,x} \psi^a_{x+\rho},
\end{equation}
where $\xi_{1,x}=(-1)^{x_2+x_3+x_4}$, $\xi_{2,x}=(-1)^{x_3+x_4}$, $\xi_{3,x}=(-1)^{x_4}$, $\xi_{4,x}=1$. This symmetry is based on the identity $\xi_{\rho,x} \eta_{\alpha,x}\xi_{\rho,x+\hat{\alpha}} = \xi_{\rho,x} \eta_{\alpha,x}\xi_{\rho,x-\hat{\alpha}} = \eta_{\alpha,x+\hat{\rho}}$, which can also be checked numerically (see the snippet after this list of symmetries).\\
\noindent (2) {\em Rotational Symmetry:}
\begin{equation}
\psi^a_x \rightarrow S_R(R^{-1}\ x)\psi^a_{R^{-1}\ x},
\end{equation}
where $R \equiv R^{(\rho\sigma)}$ is the rotation $x_\rho \rightarrow x_\sigma$, $x_\sigma \rightarrow -x_\rho$, $x_\tau \rightarrow x_\tau$ for $\tau \neq \rho,\sigma$ and
\begin{equation}
S_R(x) = \frac{1}{2}[1\pm \eta_{\rho,x} \eta_{\sigma,x} \mp \xi_{\rho,x}\xi_{\sigma,x} +
\eta_{\rho,x} \eta_{\sigma,x}\xi_{\rho,x}\xi_{\sigma,x}].
\end{equation}
This symmetry follows from the relation
$S_R(R^{-1}\ x) \eta_{\alpha,x} S_R(R^{-1}\ x + R^{-1} \hat{\alpha}) = R_{\alpha\nu} \eta_{\nu,R^{-1}\ x}$. \\
\noindent (3) {\em Axis Reversal Symmetry:}
\begin{equation}
\psi^a_x \rightarrow (-1)^{x_\rho}\psi^a_{I\ x},
\end{equation}
where $I \equiv I^{(\rho)}$ is the axis reversal $x_\rho \rightarrow -x_\rho$, $x_\tau \rightarrow x_\tau$ for $\tau \neq \rho$.\\
\noindent (4) {\em Global Chiral Symmetry:}
\begin{equation}
\psi^a_x \rightarrow (V)^{ab}\psi^b_x, \ x\ \in \ \mbox{even},\ \
\psi^a_x \rightarrow (V^*)^{ab}\psi^b_x,\ x\ \in \mbox{odd}.\ \
\end{equation}
where $V$ is an $SU(4)$ matrix in the fundamental representation. Note that the fields at even and odd sites transform differently.
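The identity underlying the shift symmetry is easy to verify by machine; the following self-contained snippet (ours, purely illustrative, with $\rho, \alpha = 1,\ldots,4$ as in the text) checks it for all sites and all pairs $(\rho,\alpha)$:
\begin{verbatim}
import itertools

def sgn(k):                          # (-1)^k for any integer k
    return 1 - 2 * (k % 2)

def eta(alpha, x):                   # alpha, rho = 1,...,4 as in the text
    return sgn(sum(x[:alpha - 1]))

def xi(rho, x):
    return sgn(sum(x[rho:]))

def shift(x, mu, s=1):
    y = list(x); y[mu - 1] += s
    return tuple(y)

for x in itertools.product(range(3), repeat=4):
    for rho in range(1, 5):
        for alpha in range(1, 5):
            lhs1 = xi(rho, x) * eta(alpha, x) * xi(rho, shift(x, alpha, +1))
            lhs2 = xi(rho, x) * eta(alpha, x) * xi(rho, shift(x, alpha, -1))
            assert lhs1 == lhs2 == eta(alpha, shift(x, rho))
\end{verbatim}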
As in the continuum, the above symmetries forbid fermion bilinear mass terms to be generated through radiative corrections. The corresponding mass order parameters were constructed long ago \cite{vandenDoel:1983mf,Golterman:1984cy} and were studied recently in \cite{Catterall:2015zua}. They are given by
\begin{subequations}
\begin{eqnarray}
O^0_{ab}(x) \ &=&\ \psi^a_x\psi^b_x \\
O^1_{\mu,a}(x) \ &=&\ \epsilon_x \xi_{\mu,x}\psi^a_x\ S_\mu\psi^a_{x} \\
O^{2A}_{\mu\nu,a}(x) \ &=&\ \xi_{\mu,x} \xi_{\nu,x+\hat{\mu}} \psi^a_x S_\mu S_\nu\psi^a_{x} \\
O^{2B}_{\mu\nu,a}(x) \ &=&\ \ \epsilon_x\xi_{\mu,x} \xi_{\nu,x+\hat{\mu}} \psi^a_x S_\mu S_\nu\psi^a_{x} \\
O^{3}_{\mu\nu\lambda,a}(x) \ &=&\ \xi_{\mu,x} \xi_{\nu,x+\hat{\mu}} \xi_{\lambda,x+\hat{\mu}+\hat{\nu}} \psi^a_x S_\mu S_\nu S_\lambda\psi^a_{x},
\end{eqnarray}
\label{orderparam}
\end{subequations}
where $x$ is a lattice site, $\epsilon_x = (-1)^{x_1+x_2+x_3+x_4}$ and $S_\mu \psi^a_x = \psi^a_{x+\hat{\mu}} + \psi^a_{x-\hat{\mu}}$. Further we assume $\mu\neq\nu\neq \lambda$ in the above expressions. These order parameters naturally vanish in the PMW phase, where fermions are massless. However, we will argue in section \ref{sec4} that they also vanish in the PMS phase, where fermions become massive.
\section{Fermion Bags, Topology and an Index Theorem}
\label{sec3}
The partition function of our lattice model, whose action is given in (\ref{act}), can be written in the fermion bag approach \cite{PhysRevD.82.025007,Chandraepja13}. In addition to providing an alternate Monte Carlo method to solve lattice fermion field theories, this approach also gives new theoretical insight into the fermion mass generation mechanism involving four-fermion condensates \cite{Chandrasekharan:2014fea}. While the details of this approach were already discussed in \cite{Ayyar:2014eua}, here we repeat the steps for reduced staggered fermions instead of regular staggered fermions. Although the final expression is identical, here it is written in terms of Pfaffians instead of determinants. We first write
\begin{equation}
Z \ =\ \sum_{[n]}\ \int \prod_x \ [d\psi_x^1 d\psi_x^2 d\psi_x^3 d\psi_x^4]\
\ \prod_a \exp\Big(-\frac{1}{2}\ \sum_{x,y} \psi_x^a \ M_{x,y}\ \psi_y^a\Big)\
\prod_x\ \Big(U \psi^4_x \psi^3_x \psi^2_x \psi^1_x \Big)^{n_x}.
\label{pf}
\end{equation}
where $[n]$ is a configuration of monomers defined by the binary field $n_x=0,1$ that represents the absence ($n_x=0$) or presence ($n_x=1$) of a monomer (see Fig.~\ref{fig:1} for an illustration). We can perform the Grassmann integral at the sites that contain the monomer first to obtain
\begin{equation}
Z \ =\ \sum_{[n]}\ U^{N_m}\ \prod_a \int [d\psi_{x_1}^a d\psi_{x_2}^a...]
\ \exp\Big(-\frac{1}{2}\ \sum_{x,y} \psi_x^a \ W_{x,y}\ \psi_y^a\Big),
\end{equation}
where $N_m$ is the total number of monomers in the configuration $[n]$, the sum in the exponent is only over free sites (i.e., sites without monomers), ordered in a convenient way, say $x_1,x_2,...$, and $W_{x,y}$ is the reduced staggered Dirac matrix $M_{x,y}$ connecting only the free sites. Performing the remaining Grassmann integration over the free sites we obtain
\begin{equation}
Z \ =\ \sum_{[n]} U^{N_m} \big(\mathrm{Pf}(W)\big)^4
\label{fbpf}
\end{equation}
where $\mathrm{Pf}(W)$ refers to the Pfaffian of the matrix $W$. Note that the sign of $\mathrm{Pf}(W)$ is ambiguous and depends on the order in which the free sites are chosen in the definition of the matrix $W$. However, this ambiguity cancels in the full partition function and all physical correlation functions. Since $W$ is an anti-symmetric matrix connecting only even and odd sites, the matrix $W$ can be expressed as a block matrix by separating even and odd sites into separate blocks with non-zero entries only in the off-diagonal block. Hence $\mathrm{Pf}(W)$ is, up to a sign, the determinant of this off-diagonal block. When the numbers of odd and even sites are different, $\mathrm{Pf}(W) = 0$.
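In practice (\ref{fbpf}) can be evaluated through this block structure: ordering the free sites as (even, odd), $\mathrm{Pf}(W) = \pm\det(D)$, where $D$ is the even-odd block, so that the configuration weight $U^{N_m}\big(\mathrm{Pf}(W)\big)^4 = U^{N_m}\big(\det(D)\big)^4$ is free of the sign ambiguity. A minimal sketch (ours; \texttt{M} and \texttt{parity} as in the earlier snippet, \texttt{monomers} a set of monomer site indices):
\begin{verbatim}
import numpy as np

def config_weight(M, parity, monomers, U):
    # free sites = sites not covered by a monomer
    free = [i for i in range(len(parity)) if i not in monomers]
    even = [i for i in free if parity[i] == 0]
    odd  = [i for i in free if parity[i] == 1]
    if len(even) != len(odd):
        return 0.0                   # Pf(W) = 0 for unequal even/odd counts
    D = M[np.ix_(even, odd)]         # off-diagonal block of W
    return U ** len(monomers) * np.linalg.det(D) ** 4
\end{verbatim}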
Free fermion bags refer to connected sets of free sites, i.e., sites which do not belong to a monomer. A monomer configuration can in principle contain several disconnected fermion bags, which implies that
\begin{equation}
\mathrm{Pf}(W)\ =\ \prod_{\cal B}\ \mathrm{Pf}(W_{\cal B})
\end{equation}
where the matrix $W_{\cal B}$ refers to the reduced staggered Dirac matrix $W$ connecting only the sites within the bag ${\cal B}$ and the product is over all bags. We will refer to $W_{\cal B}$ as the {\em fermion bag matrix}. If $S_{\cal B}$ is the total number of sites of the fermion bag, then $W_{\cal B}$ is also an $S_{\cal B} \times S_{\cal B}$ anti-symmetric matrix with non-zero entries only between even and odd sites. It is easy to see that $\mathrm{Pf}(W_{\cal B})=0$ for a bag with an unequal number of even and odd sites.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig1.pdf}
\end{center}
\caption{\label{fig:1} An illustration of a fermion bag configuration. The sites with monomers are marked with filled circles. Connected sites without monomers form fermion bags.}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.5\textwidth]{fig2.pdf}
\end{center}
\caption{\label{fig:2} An illustration of a $\nu=1$ topological fermion bag configuration. The Dirac matrix $W_{\cal B}$ associated with this bag will contain at least one zero mode. This connection between topology and zero modes is analogous to the index theorem of the massless Dirac operator in non-Abelian gauge theories.}
\end{figure}
Let us now discuss a curious connection between the zero modes of the fermion bag matrix $W_{\cal B}$ and the topology of the fermion bag ${\cal B}$. This connection is analogous to the well-known index theorem of the massless Dirac operator in non-Abelian gauge theories \cite{Atiyah:1971rm,Smit:1986fn,Adams:2009eb}. Note that when a bag does not contain an equal number of even and odd sites $W_{\cal B}$ is a matrix with zero modes. If we introduce the concept of a topological charge for the fermion bag through the integer $\nu = n_e - n_o$ (where $n_e$ ($n_o$) refers to the number of even (odd) sites of the bag), then it is easy to argue that the fermion bag matrix $W_{\cal B}$ will have at least $|\nu|$ zero modes, since its rank can be at most twice the smaller of $n_e$ and $n_o$; this is similar to the index of the massless Dirac operator in non-Abelian gauge theories. An example of a $\nu = 1$ topological fermion bag is shown in Fig.~\ref{fig:2}. The analogy with non-Abelian gauge theories extends even further. For example, in certain massless four-fermion models where a chiral symmetry forbids the presence of a chiral condensate, the condensate can still acquire a non-zero expectation value due to the presence of fermion bags with topological charge $\nu = \pm 1$ \cite{Chandrasekharan:2014fea}. This is similar to the fact that chiral condensates obtain a non-zero contribution in QCD with a single massless quark flavor due to the presence of gauge field configurations with topological charge $\nu = \pm 1$ \cite{Leutwyler:1992yt}.
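The index argument is easy to probe numerically as well; the sketch below (ours; \texttt{M} and \texttt{parity} as before, \texttt{bag} a list of the free-site indices of a single connected bag) computes $\nu$ and counts the zero modes of $W_{\cal B}$ through its singular values:
\begin{verbatim}
import numpy as np

def index_check(M, parity, bag):
    W = M[np.ix_(bag, bag)]          # fermion bag matrix W_B
    nu = int(np.sum(parity[bag] == 0)) - int(np.sum(parity[bag] == 1))
    svals = np.linalg.svd(W, compute_uv=False)
    zero_modes = int(np.sum(svals < 1e-10))
    assert zero_modes >= abs(nu)     # at least |nu| zero modes
    return nu, zero_modes
\end{verbatim}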
\section{Absence of SSB at Strong Couplings}
\label{sec4}
The conventional wisdom is that when fermions become massive, one or more of the fermion bilinear mass order parameters given in (\ref{orderparam}) acquire a non-zero expectation value due to spontaneous breaking of some of the lattice symmetries. However, it was discovered long ago that the usual single site order parameter vanishes at sufficiently strong couplings even though fermions are massive \cite{Hasenfratz:1988vc,Hasenfratz:1989jr,Lee:1989mi,Bock:1990tv,Bock:1990cx}. More recently the vanishing of all bilinear mass order parameters at strong couplings was studied in \cite{Catterall:2015zua}. In this section we give analytic arguments for this result within the fermion bag approach. Our aim is to illustrate the importance of topology and zero modes of the fermion bag matrix in some of these arguments. A simple extension of these arguments allow us to also conclude the absence of any spontaneous symmetry breaking.
All bilinear mass order parameters given in (\ref{orderparam}) can be written compactly in the form
\begin{equation}
O^\alpha(x) = \sum_y f^\alpha_{a,b}(x,y)\psi^a_x\psi^b_y,
\end{equation}
where $\alpha = 0,1,2A,2B,3$ and $f^\alpha_{a,b}(x,y)$ is appropriately defined with non-zero values only when $x$ and $y$ lie within a hypercube. On a finite lattice we expect $\langle O^\alpha(x)\rangle = 0$ purely from symmetry transformations on Grassmann fields, assuming boundary conditions do not break the symmetries\footnote{Our choice of anti-periodic boundary conditions falls in this class.}. In the fermion bag approach this vanishing of the symmetry order parameter can be understood through the following three facts:
\begin{enumerate}
\item $\langle O^\alpha(x)\rangle$ can get non-zero contributions from a fermion bag configuration only when both the Grassmann fields in $O^\alpha(x)$ are present within the same fermion bag. To show this let us prove that the weight of a fermion bag vanishes due to the insertion of a single $\psi^a_x$. First note that inserting $\psi^a_x$ in the path integral means that $x$ must be a free site within a fermion bag which we refer to as ${\cal B}_x$. Inserting $\psi^a_x$ and performing the Grassmann integration within the bag gives
\begin{equation}
\mathrm{Pf}(W_{{\cal B}_x}([x]))\ =\ \int\ \prod_{x'\in {\cal B}_x}\ [d\psi^a_{x'}]\ \psi^a_x\ \exp\Big(-\frac{1}{2}\sum_{x',y'\in {\cal B}_x} \psi^a_{x'} W_{x',y'} \psi^a_{y'}\Big).
\end{equation}
Note that this is equivalent to removing the site $x$ from the bag and the matrix $W_{{\cal B}_x}([x])$ refers to the fermion bag matrix without the site $x$. Note this matrix has one row and one column less than $W_{{\cal B}_x}$ which contains the site $x$. Without $\psi^a_x$, the above Grassmann integral would give $\mathrm{Pf}(W_{{\cal B}_x})$. Since $x$ will either be an even or an odd site, removing it changes the topology of the fermion bag as defined in the previous section. Thus, if $\mathrm{Pf}(W_{{\cal B}_x}) \neq 0$, then $\mathrm{Pf}(W_{{\cal B}_x}([x])) = 0$ and vice versa. Since the weight of the fermion bag involves a product of four Pfaffians, one for each flavor, the fermion bag weight always vanishes in the presence of a single $\psi^a_x$ source term inside it.
\item When $\alpha=0,2A,2B$, the contribution to $\langle O^\alpha(x)\rangle$ from every single fermion bag configuration vanishes, because the weight of the fermion bag that contains the fermion source terms vanishes. In the fermion bag approach these expectation values are given by
\begin{eqnarray}
\langle O^0(x)\rangle &=& \frac{1}{Z}
\Big\{\sum_{[n]} U^{N_m} \Big(\mathrm{Pf}(W_{{\cal B}_{x}}[x])\Big)^2\Big(\mathrm{Pf}(W_{{\cal B}_{x}})\Big)^2
\prod_{{\cal B} \neq {\cal B}_{x}} \Big(\mathrm{Pf}(W_{\cal B})\Big)^4\Big\},
\\
\langle O^{2A,2B}(x)\rangle &=& \frac{1}{Z}
\Big\{\sum_{y} f^{2A,2B}_{a,a}(x,y)\sum_{[n]} U^{N_m}
\Big(\mathrm{Pf}(W_{{\cal B}_{x,y}}[x,y])\Big)\Big(\mathrm{Pf}(W_{{\cal B}_{x,y}})\Big)^3
\prod_{{\cal B} \neq {\cal B}_{x}} \Big(\mathrm{Pf}(W_{\cal B})\Big)^4\Big\},
\nonumber \\
\end{eqnarray}
where ${\cal B}_{x}$, $\mathrm{Pf}(W_{{\cal B}_x})$ and $\mathrm{Pf}(W_{{\cal B}_x}([x]))$ were already defined above. We now define ${\cal B}_{x,y}$ as the free fermion bag containing both the sites $x,y$, $\mathrm{Pf}(W_{{\cal B}_{x,y}})$ is the Pfaffian of that fermion bag matrix, and $\mathrm{Pf}(W_{{\cal B}_{x,y}}([x,y]))$ is the Pfaffian of the fermion bag matrix where $x$ and $y$ are also dropped from the bag ${\cal B}_{x,y}$.
Mathematically,
\begin{equation}
\mathrm{Pf}(W_{{\cal B}_{x,y}}([x,y]))\ =\ \int\ \prod_{x'\in {\cal B}_{x,y}}\ [d\psi^a_{x'}]\ \psi^a_x \ \psi^a_y\ \exp\Big(-\frac{1}{2}\sum_{x',y' \in {\cal B}_{x,y}} \psi^a_{x'} W_{x',y'} \psi^a_{y'}\Big),
\end{equation}
and
\begin{equation}
\mathrm{Pf}(W_{{\cal B}_{x,y}})\ =\ \int\ \prod_{x'\in {\cal B}_{x,y}}\ [d\psi^a_{x'}]\ \exp\Big(-\frac{1}{2}\sum_{x',y' \in {\cal B}_{x,y}} \psi^a_{x'} W_{x',y'} \psi^a_{y'}\Big).
\end{equation}
Since $O^0(x)$ contains $\psi^a_x\psi^b_x$ where $a\neq b$, out of the four Pfaffians contributing to the weight of the fermion bag ${\cal B}_x$, two involve matrices that contain the site $x$ and two involve matrices that do not contain it. Their product vanishes for topological reasons as before, since one of the two matrices will have an extra even or odd site and its Pfaffian will vanish. In the case of $O^{2A,2B}(x)$ the weight function $f^{2A,2B}_{a,b}(x,y) \neq 0$ only when $a=b$ and when $x$ and $y$ are both even sites or both odd sites. Thus, either $W_{{\cal B}_{x,y}}[x,y]$ or $W_{{\cal B}_{x,y}}$ has two extra even or odd sites. Again for topological reasons as before, the Pfaffian of one of these two matrices will vanish.
\item The contribution to $\langle O^\alpha(x)\rangle$ for $\alpha = 1,3$ from a given fermion bag configuration may be non-zero, since in these two cases the fermion bag that contains the fermion source terms can have a non-zero weight. For these bilinears, since $a=b$, the expression for the expectation value is given by
\begin{equation}
\langle O^{1,3}(x)\rangle = \frac{1}{Z} \sum_y \ f^\alpha_{a,a}(x,y)
\Big\{\sum_{[n]} U^{N_m}
\Big(\mathrm{Pf}(W_{{\cal B}_{x,y}}[x,y])\Big)\Big(\mathrm{Pf}(W_{{\cal B}_{x,y}})\Big)^3
\prod_{{\cal B} \neq {\cal B}_{x,y}} \Big(\mathrm{Pf}(W_{\cal B})\Big)^4\Big\}
\end{equation}
where ${\cal B}_{x,y}$, $\mathrm{Pf}(W_{{\cal B}_{x,y}})$ and $\mathrm{Pf}(W_{{\cal B}_{x,y}}([x,y]))$ were defined above. Note that now both $\mathrm{Pf}(W_{{\cal B}_{x,y}})$ and $\mathrm{Pf}(W_{{\cal B}_{x,y}}([x,y]))$ can be non-zero.
\end{enumerate}
Using the above three facts we can understand why $\langle O^\alpha(x)\rangle = 0$ for all values of $\alpha$. When $\alpha = 0,2A,2B$, the contribution from each fermion bag configuration vanishes. However, when $\alpha =1,3$ a further sum over contributions from all symmetry transformations of the fermion bag configuration is necessary to show that the expectation value vanishes. Under these transformations fermion bags transform as classical extended objects in space-time. For example under a shift in some direction, all monomers and fermion bags get shifted by one lattice unit in that direction. Similarly under a rotation by $90^\circ$ about some axis, the full fermion bag configuration rotates by the same amount. All such configurations obtained by symmetry transformations will have the same weight in the absence of source insertions. This means $\mathrm{Pf}(W_{\cal B})$ remains the same for all fermion bags ${\cal B} \neq {\cal B}_{x,y}$. On the other hand $\mathrm{Pf}(W_{{\cal B}_{x,y}}([x,y]))$ transforms like $\psi^a_x\psi^a_y$ because of the source insertions, and hence the contributions cancel in the sum over symmetry transformations.
Interestingly, if fermion bags are sufficiently far apart, symmetry operations can be performed on a single fermion bag without affecting other bags. Such symmetry fluctuations of the single fermion bag that contains the fermion source terms then naturally lead to $\langle O^\alpha(x)\rangle = 0$. Let us illustrate this by considering the calculation of $\langle O^1_{\mu,a}(x)\rangle$, which is given by
\begin{equation}
\langle O^1_{\mu,a}(x)\rangle \ =\ \epsilon_x \xi_{\mu,x} \ \Big(
\langle \psi^a_x \psi^a_{x+\hat{\mu}}\rangle - \langle \psi^a_{x-\hat{\mu}} \psi^a_x\rangle \Big).
\end{equation}
Note that under the shift symmetry we expect
\begin{equation}
\langle \psi^a_x \psi^a_{x+\hat{\mu}}\rangle = \langle \psi^a_{x-\hat{\mu}} \psi^a_x\rangle.
\end{equation}
which is the reason for $\langle O^1_{\mu,a}(x) \rangle$ to vanish. In the fermion bag approach we have
\begin{equation}
\langle \psi^a_x\psi^a_{x+\hat{\mu}}\rangle = \frac{1}{Z}
\Big\{\sum_{[n]} U^{N_m} \
\Big[\mathrm{Pf}(W_{{\cal B}_{x,x+\hat{\mu}}}[x,x+\hat{\mu}]) \Big(\mathrm{Pf}(W_{{\cal B}_{x,x+\hat{\mu}}})\Big)^3\Big] \ \prod_{{\cal B} \neq {\cal B}_{x,x+\hat{\mu}}} \Big(\mathrm{Pf}(W_{\cal B})\Big)^4
\Big\}.
\end{equation}
If the fermion bags are sufficiently far apart, then we can map the bag ${\cal B}_{x-\hat{\mu},x}$ that contributes to $\langle\psi^a_{x-\hat{\mu}}\psi^a_x\rangle$ to another unique bag ${\cal B}^\mu_{x,x+\hat{\mu}}$ obtained by translating ${\cal B}_{x-\hat{\mu},x}$ by one lattice spacing in the $\hat{\mu}$ direction without disturbing any of the other bags. An illustration of such a translation is shown in Fig.~\ref{fig:fluct}. Due to translational invariance we must have
\begin{equation}
\mathrm{Pf}(W_{{\cal B}_{x-\hat{\mu},x}}([x-\hat{\mu},x]))\ =\
\mathrm{Pf}(W_{{\cal B}^\mu_{x,x+\hat{\mu}}}([x,x+\hat{\mu}])),
\ \
\mathrm{Pf}(W_{{\cal B}_{x-\hat{\mu},x}})\ =\
\mathrm{Pf}(W_{{\cal B}^\mu_{x,x+\hat{\mu}}}).
\end{equation}
Since none of the other bags are disturbed, we see that $\langle O^1_\mu(x)\rangle = 0$ simply due to the sum over all symmetry fluctuations of the single fermion bag containing the fermion source terms.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=.4\textwidth]{fig1.pdf}
\hspace{0.5in}
\includegraphics[width=.4\textwidth]{fig3.pdf}
\end{center}
\caption{\label{fig:fluct} An illustration of a symmetry fluctuation of a fermion bag when other fermion bags are sufficiently far apart. The fermion bag in the center of the figure on the left has been translated by one unit to the right and shown in the figure on the right. Such a change in a fermion bag is referred to as a symmetry fluctuation and the sites affected during the fluctuation are shown with a different color in the right figure.}
\end{figure*}
The above discussion sheds light on why $\langle O^\alpha(x)\rangle=0$ in a finite system due to symmetry transformations. However, the more important question is to address whether some of these symmetries can break spontaneously. For this one must compute the two point correlation function of local order parameters,
\begin{equation}
C^{\alpha}(x,x') \ =\ \langle O^\alpha(x) O^\alpha(x')\rangle.
\end{equation}
If $C^{\alpha}(x,x') \neq 0$ in the limit when $|x-x'| \rightarrow\infty$, then we say the order parameter is non-zero and the corresponding symmetry is spontaneously broken. Let us now argue that $C^{\alpha}(x,x') = 0$ for all fermion bilinear mass order parameters when $|x-x'| \rightarrow\infty$ at sufficiently strong couplings. We will assume that fermion bags of large size are exponentially suppressed and that fermion bags are well separated from each other, so that when fermion bags fluctuate due to symmetry transformations they rarely touch each other. Empirical evidence shows that this assumption is quite reasonable. Hence, contributions from configurations where $x$ and $x'$ lie within the same fermion bag should also be exponentially suppressed in the limit where $|x-x'|$ is large. Thus, a non-zero order parameter requires a non-vanishing contribution from configurations where $x$ and $x'$ are in two different fermion bags. However, we have already argued that $\langle O^\alpha(x)\rangle=0$ within each fermion bag once fluctuations of these two bags are taken into account. Thus, all fermion bilinear mass order parameters must vanish at sufficiently strong coupling (i.e., in the PMS phase), even though fermions are massive. The fact that fermion masses and fermion bilinear condensates need not be related to each other was first presented in \cite{Chandrasekharan:2014fea}.
A straightforward generalization of the above arguments shows that any local symmetry order parameter that vanishes within a fermion bag (after taking into account symmetry fluctuations of the fermion bag) cannot develop long range order, as long as fermion bags are well separated from each other and large fermion bags are exponentially suppressed. Since distinct fermion bags are always separated by local singlets (monomers), correlations between them are screened. The only way for long range correlations to arise is due to topology, which requires the presence of another fermion bag far away whose weight vanishes due to zero modes that arise through an index theorem. In such cases, while the order parameter vanishes on a finite lattice, two point correlations can develop long range order. Examples of lattice models that contain such topological correlations are easy to construct \cite{Chandrasekharan:2014fea}. However, as we have discussed above, our model is different and such topological correlations in symmetry order parameters are absent. Hence there can be no spontaneous breaking of any lattice symmetries at sufficiently large couplings.
\section{Width of the Intermediate Phase}
\label{sec5}
The arguments of the previous section no longer apply when free fermion bags become large and are not well separated from each other. This occurs in the intermediate coupling region where fermion bilinear condensates can in principle form and lattice symmetries can break spontaneously. As explained in the introduction, it would be exciting to find a 4D lattice model without such an intermediate FM phase, but with a direct PMW-PMS second order phase transition. Unfortunately, earlier studies suggest that our lattice model (\ref{act}) has a {\em wide} intermediate FM phase \cite{Lee:1989mi}, although the phase boundaries were not accurately determined. Using the fermion bag approach, in this section we determine them. Since our algorithms scale badly with system size (especially in 4D), we have been able to perform Monte Carlo calculations only up to $L=12$ (we have one result at $L=14$ at $U=1.75$). Assuming the presence of an intermediate phase and using finite size scaling, we are still able to determine the phase boundaries accurately. In contrast to earlier work, our results point to a surprisingly {\em narrow} intermediate FM phase.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{fig4.pdf}
\end{center}
\caption{\label{fig:rmno} The monomer density (left) and the condensate susceptibility $\chi_1$ (right) plotted as a function of $U$ in the intermediate coupling region for various lattice sizes. There is no sign of a first order transition, but the rapid growth of the susceptibility suggests an intermediate phase with spontaneous breaking of the $SU(4)$ symmetry.}
\end{figure}
We first show results for the four-fermion condensate defined through the monomer density $\rho_m$ in the fermion bag approach using the relation
\begin{equation}
\rho_m = \frac{U}{L^4}\ \sum_x \ \langle \ \psi^4_x\psi^3_x\psi^2_x\psi^1_x \ \rangle.
\end{equation}
Note that with our normalization $\rho_m=0$ at $U=0$ and $\rho_m=1$ at $U=\infty$. In Fig.~\ref{fig:rmno} (on the left side) we plot the behavior of $\rho_m$ as a function of $U$ for various lattice sizes. The condensate increases rapidly but smoothly between $U=1.5$ and $1.9$, suggesting the absence of any large first order transitions. However, with this data alone it is unclear if there is a single transition due to the absence of an intermediate phase, or two transitions due to its presence. For this purpose we compute the two independent susceptibilities
\begin{equation}
\chi_1 \ =\ \frac{1}{2}\ \sum_{x} \ \langle \psi^1_0\psi^2_0\ \psi^1_x\psi^2_x \rangle,\ \
\chi_2 \ =\ \frac{1}{2}\ \sum_{x} \ \langle \psi^1_0\psi^2_0\ \psi^3_x\psi^4_x \rangle,
\end{equation}
that can help in determining whether the bilinear condensate $\Phi = \langle O^0_{ab}(x) \rangle$ is nonzero. In general $\chi_1 \neq \chi_2$, as can easily be verified for small values of $U$, but for large values of $U$ they become nearly equal. Assuming a fermion bilinear condensate forms, the leading behavior at large volumes is expected to scale as $\chi_1 \sim \chi_2 \sim \Phi^2 L^4/4$, since only half the lattice volume contributes to the sum. In other words, a clear signature of condensate formation is the $L^4$ volume scaling of both susceptibilities, together with the two susceptibilities becoming identical at large $L$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{fig5.pdf}
\end{center}
\caption{\label{fig:susvsL} The plots on the left show $2\chi_1/L^4$ and $2\chi_2/L^4$ as a function of $L$ at $U=1.67$ (squares) and $1.75$ (circles); note that $\chi_1$ lies above $\chi_2$. The plot on the right shows the condensate $\Phi = \langle O^0_{ab}(x) \rangle$ as a function of $U$. We see that the intermediate FM phase extends roughly over $1.60 \leq U \leq 1.81$.}
\end{figure}
\begin{figure}[t]
\begin{center}
\vbox{
\includegraphics[width=\textwidth]{fig6.pdf}
\includegraphics[width=\textwidth]{fig7.pdf}
}
\end{center}
\caption{\label{fig:suscrit} Plots of $\chi_1/L^2$ (top row) and $\chi_2/L^2$ (bottom row) as a function of $U$ for various lattice sizes near the two transitions. The value of $U$ where the curves cross is shown as the dotted line and indicates the rough location of the critical point.}
\end{figure}
In Fig.~\ref{fig:rmno} (on the right side) we show the behavior of $\chi_1$ as a function of $U$ for various lattice sizes. For these couplings we find $\chi_2$ to be qualitatively similar. In Fig.~\ref{fig:susvsL} (on the left side) we plot both $2\chi_1/L^4$ and $2\chi_2/L^4$ as a function of $L$ at $U=1.67$ and $1.75$. We take the apparent saturation of the data as a sign that a condensate is forming. Further, we observe that $\chi_1 \sim \chi_2$ for the two largest lattices, which provides further evidence for this viewpoint. In contrast, in three dimensions we never found evidence that $\chi_i/L^3$ saturates \cite{Ayyar:2015lrd}. Assuming that the bilinear condensate does form, we fit our data to the form
\begin{equation}
\chi = \frac{1}{4} \Phi^2 L^4 + b L^2,
\end{equation}
which we found empirically to describe well the behavior of the susceptibilities in the intermediate region, to extract the condensate $\Phi = \langle O^0_{ab}(x) \rangle$. This is plotted in Fig.~\ref{fig:susvsL} (on the right side).
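As an illustration, the extraction of $\Phi$ from this fit form can be carried out with standard least-squares tooling. The following Python sketch uses placeholder data arrays (not our measured values) to show the procedure:
\begin{verbatim}
# A minimal sketch of the finite-size fit chi = Phi^2 L^4/4 + b L^2;
# the data arrays below are placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

L = np.array([6., 8., 10., 12.])           # lattice sizes (hypothetical)
chi = np.array([55., 170., 410., 840.])    # chi_1 at fixed U (hypothetical)
chi_err = np.array([2., 5., 12., 25.])     # statistical errors (hypothetical)

def fit_form(L, Phi, b):
    # leading condensate term plus a subleading L^2 correction
    return 0.25 * Phi**2 * L**4 + b * L**2

popt, pcov = curve_fit(fit_form, L, chi, sigma=chi_err, p0=[0.1, 1.0])
Phi, b = popt
print("Phi =", Phi, "+/-", np.sqrt(pcov[0, 0]))
\end{verbatim}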
The fact that $\Phi = \langle O^0_{ab}(x) \rangle \neq 0$ implies that the $SU(4)$ symmetry is broken in the range $1.60 \leq U \leq 1.81$. However, note that this region is much narrower than what was computed in the earlier work. It also means we should have two transitions in our model in quick succession (the PMW-FM transition and the FM-PMS transition). Here we assume that both transitions are second order, since we have seen no reason to believe that either of them is first order, but with our small lattice results we cannot rule out the possibility of weak first order transitions. This is especially true for the FM-PMS transition, where the condensate seems to drop rapidly. Assuming they are second order, the PMW-FM transition would follow the Gross-Neveu universality while the FM-PMS transition could follow the $SU(4) \sim SO(6)$ spin model universality, both of which would show mean field exponents up to logarithmic corrections. This means
\begin{equation}
\chi_i /L^{2-\eta} \sim f_i((U-U_c) L^{1/\nu})
\end{equation}
where $\eta = 0$ and $\nu = 1/2$ (up to log corrections). In Fig.~\ref{fig:suscrit} we plot $\chi_i /L^2$ versus $U$ for different values of $L$. As the figure shows, all these curves (for large $L$) appear to intersect at $U_c$, as expected. We see that $U_c$ for the PMW-FM transition is at roughly $1.60$, and for the FM-PMS transition is at around $1.81$, in agreement with our previous conclusion based on computing $\Phi$.
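The crossing points themselves can be located numerically by interpolating $\chi_i/L^2$ in $U$ for successive lattice sizes. A minimal Python sketch with placeholder data:
\begin{verbatim}
# A minimal sketch locating the crossing of chi/L^2 curves for two
# lattice sizes via linear interpolation; data values are placeholders.
import numpy as np

U = np.array([1.50, 1.55, 1.60, 1.65, 1.70])
small = np.array([0.8, 1.1, 1.5, 2.0, 2.6])   # chi/L^2 at, e.g., L = 8
large = np.array([0.5, 0.9, 1.4, 2.4, 3.5])   # chi/L^2 at, e.g., L = 12

diff = large - small
i = np.where(np.diff(np.sign(diff)) != 0)[0][0]   # bracket the sign change
# linear interpolation of the root of diff(U) within the bracket
U_c = U[i] - diff[i] * (U[i + 1] - U[i]) / (diff[i + 1] - diff[i])
print(U_c)
\end{verbatim}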
\section{Conclusions}
\label{sec6}
In this work we have studied a lattice field theory model where fermions are massless at weak couplings, but become massive at sufficiently strong couplings even though all fermion bilinear condensates vanish. Fermions seem to acquire their mass through four-fermion condensates. The presence of an intermediate FM phase leaves open the possibility that this alternate mechanism of mass generation is only a lattice artifact in 4D. On the other hand, since the intermediate phase is quite narrow in bare coupling constant space, extending only over $1.60 \leq U \leq 1.81$, it is likely that an extension of the model may eliminate the intermediate phase and may even show the presence of a direct second order PMW-PMS phase transition as in 3D. Such a transition would make the mass generation mechanism through four-fermion condensates interesting even in the continuum.
We can use the continuum model (\ref{contact}) to understand this alternate mechanism of mass generation better. We view four-fermion condensates as a fermion bilinear condensate between a fundamental fermion field and a composite fermion field. For example, if $\psi_a(x), a=1,2,3,4$ represents the four components of a Dirac field in four dimensions, then we can view the composite field
\begin{equation}
\bar{\chi}_a(x) = \varepsilon_{abcd} \psi^b(x) \psi^c(x) \psi^d(x),
\end{equation}
as an independent Dirac field such that $\bar{\chi}\psi$ acts as the chirally invariant mass term for a theory that contains both $\psi(x)$ and $\bar{\chi}(x)$. Note that the $U(1)$ fermion number symmetry (\ref{u1fs}) acts as the chiral symmetry for this fermion mass term, while the $U(1)$ chiral symmetry (\ref{u1cs}) acts as the fermion number symmetry of this mass term. Since the continuum model (\ref{contact}) breaks the $U(1)$ fermion number symmetry explicitly, the new type of mass term is always allowed by interactions. However, at weak couplings composite states do not form and the mass term continues to behave as an irrelevant four-fermion coupling. At sufficiently strong couplings, when the composite states form the four-fermion coupling begins to behave like a mass term and becomes relevant.
This fermion mass generation mechanism where fundamental fermions pair with composite fermions is an old idea \cite{Eichten:1985ft,Golterman:1992yha}. The fact that such a mass generation mechanism can occur without any spontaneous symmetry breaking within a phase (PMS phase) of a regulated microscopic field theory that also contains a phase (PMW phase) with massless fermions was also known before but not emphasized. We find the existence of both these phases within the same regulated microscopic theory exciting, since it means that fermion mass generation can be a dynamical phenomenon purely related to renormalization group arguments rather than symmetry breaking. For all this to be of interest in continuum quantum field theory, there must be a direct second order PMW-PMS transition in the regulated theory. Search for it in 4D would be an interesting research direction for the future.
\acknowledgments
We would like to thank M.~Golterman for pointing us to the lattice literature on the subject. We also thank S.~Catterall, U.-J.~Wiese and C.~Xu for helpful discussions at various stages of this work. SC would like to thank the Center for High Energy Physics at the Indian Institute of Science for hospitality, where part of this work was done. The material presented here is based upon work supported by the U.S. Department of Energy, Office of Science, Nuclear Physics program under Award Number DE-FG02-05ER41368. An important part of the computations performed in this research was done using resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science \cite{Pordes2008,Sfiligoi2009}.
\section*{Acknowledgements} \label{sec:acknowledgements}
This work was funded in part by a Privacy Enhancing Technologies Award\footnote{\url{https://research.facebook.com/research-award-recipients/year/2021/?s=korolova}} from Facebook and NSF awards \#1916153, \#1956435, and \#1943584.
\section{Additional Related Works} \label{sec:many-queries-related-works}
In this section, in addition to the prior works on large-scale query answering previously discussed (Section~\ref{sec:many-queries-prior-work}), we discuss other important works related to differentially private query answering.
We begin by discussing some works (concurrent with and subsequent to our work) related to answering large sets of prespecified queries.
For the mechanisms defined in these works, a prime direction for future research would be to evaluate them analogously to our evaluation of \texttt{RAP}\ in this work; for instance, evaluating their scalability to larger query spaces as well as their generalizability for answering queries posed in the future, perhaps in a manner similar to Tao et al.~\cite{tao2021benchmarking}.
We then briefly discuss some lines of research related to the general problem and settings explored in this work.
\subsubsection*{Answering Many Queries}
One work closely related to the goals of this work is that of Liu et al.~\cite{liu2021iterative}, which studies the problem of constructing an algorithmic framework for privately answering a prespecified set of statistical queries --- our first setting of interest.
Concretely, the framework they construct unifies several DP mechanisms which specifically answer queries by building a synthetic dataset through iterative, adaptive updates.
These mechanisms include the previous practical state-of-the-art mechanisms~\cite{gaboardi2014dual, vietri2020new}, as well as a modified variant of a \textit{preliminary} version of the \texttt{RAP}\ mechanism (where a softmax transformation~\cite{bridle1990training} is applied to each row of the synthetic dataset $D$ after each iteration of \texttt{RAP}'s optimization procedure).
Liu et al. then leverage their framework to design two new mechanisms for answering prespecified sets of queries, and empirically show that both achieve high utility.
However, in their empirical evaluations, Liu et al. find that the modified \texttt{RAP}\ mechanism's utility is on par with the utility of their two newly proposed mechanisms, and that \texttt{RAP}\ is computationally cheaper to execute.
Thus, we do not additionally evaluate their two new mechanisms in this work.
Moreover, Aydore et al.\ have subsequently updated the \texttt{RAP}\ mechanism to incorporate a similar modification (applying the Sparsemax transformation~\cite{martins2016softmax}, and optionally finishing with randomized rounding) and showed that it further improves utility --- in turn, further justifying our focus on the \texttt{RAP}\ mechanism.
Along similar lines, another closely related work is the recently introduced \textit{Adaptive and Iterative Mechanism} (AIM) by McKenna et al.~\cite{mckenna2022aim}.
AIM is a mechanism for DP synthetic data generation to specifically answer workloads of marginal queries.
The high-level idea of their approach is similar to that of \texttt{RAP}\ and Liu et al.'s work~\cite{liu2021iterative}, adaptively selecting marginals to use to optimize the synthetic dataset.
However, their work takes this a step further by designing a method to more intelligently perform the selection.
Moreover, they develop new techniques to quantify the uncertainty of answers derived from the generated synthetic data.
Empirically evaluating AIM, they show that it generally outperforms prior state-of-the-art mechanisms, including \texttt{RAP}.
However, their evaluation setting was somewhat different; specifically, they reduced the domain size of the datasets by discretizing numerical features into 32 equal-width bins.
This makes the optimization problem significantly easier for all mechanisms they evaluate, which is highly useful when running a wide range of experiments across many random trials.
However, it leaves AIM's utility unclear when the data is unbinned and sparse (e.g., for a numerical attribute with 100 possible values).
Moreover, since the source code of AIM's implementation was never released, we consider a ground-up reimplementation of AIM amenable to large-scale evaluations on large and sparse data spaces to be out of the scope of this work.
Performing such evaluations, especially in connection to the computational resources required by each method (AIM, \texttt{RAP}, and others), is a prime direction for future work.
Another closely related work is the concurrent theoretical work of Nikolov~\cite{nikolov2022private}, which proposes and analyzes a new mechanism for answering sets of prespecified queries with differential privacy.
Their new mechanism is based on randomly projecting the queries to a lower dimensional space, answering the projected queries with a simple DP additive-noise mechanism, then lifting the answers back into their original dimension.
The primary focus and contribution of their work is the thorough mathematical analysis of the mechanism's utility, showing that it achieves optimal worst case sample complexity under an average error metric.
Such results are less directly relevant to our work though, as our focus is on different error metrics for fixed real-world datasets (rather than in the worst case across all possible datasets).
However, conceptually, Nikolov's newly proposed mechanism could be used to tackle the same problem as our work.
Practically though, the runtime of Nikolov's mechanism (although polynomial) would prevent it from being used to answer the large number of queries that we answer with \texttt{RAP}\ in this work.
An intriguing direction for future work would be adapting Nikolov's new mechanism for practical query answering, and determining ways to scale it up to accurately answer queries on a truly large scale.
A final line of closely related work is the subsequent work of Vietri et al.~\cite{vietri2022private}.
The focus of their work is explicitly on enhancing the \texttt{RAP}\ mechanism, creating a new mechanism they call \texttt{RAP}++.
Their goal is orthogonal to the goal of this work, in that they seek to extend the original \texttt{RAP}\ mechanism so that it is able to support numerical features natively.
Prior to their work, \texttt{RAP}\ required one-hot discretization of any numerical features in the dataset.
For features with wide numerical ranges, one-hot discretization greatly increases the dimensionality of \texttt{RAP}'s optimization problem, which in turn increases the computational burden and simultaneously decreases the mechanism's overall utility.
In \texttt{RAP}++, Vietri et al.\ incorporate tempered sigmoid annealing and random linear projection queries into \texttt{RAP}\ in order to handle a mixture of categorical and numerical features without any discretization.
They perform several empirical evaluations on \texttt{RAP}++, finding that it achieves state-of-the-art utility and runtime.
Despite their goal being orthogonal to the goal of this work, our findings from this work could be used to further improve the \texttt{RAP}++ mechanism and its evaluation.
\subsubsection*{Related Lines of Research}
One related (but disjoint) line of research is on the \textit{public/private} model of differential privacy, where some data must be protected with differential privacy while the remaining ``public'' data requires no privacy protections \cite{beimel2013private, ji2013differential, hamm2016learning, papernot2016semi, bassily2020private, alon2019limits, liu2021leveraging, tao2021prior}.
These works have shown that mechanisms can be designed which make use of a small amount of public data in order to significantly boost utility.
Our work differs from this model in that it does not directly make use of any public data.
In our newly defined partial knowledge setting, we instead assume that the entire set of user data $D$ is private, but that there exist publicly known historically posed queries $Q_H$ which are not privacy sensitive.
Assuming that $Q_H$ was generated from a random distribution $\mathcal{T}_H$, we seek to understand the extent to which the \texttt{RAP}\ mechanism is able to take advantage of $Q_H$ using $D$ in order to accurately answer future queries generated from a distribution $\mathcal{T}_F$ related to $\mathcal{T}_H$.
The final related line of work is on reconstruction attacks, which studies how accurately sets of queries can be answered before private information in the dataset can be recovered.
The high level results of this research can be summarized through the Fundamental Law of Information Recovery~\cite{dwork2014algorithmic}: ``overly accurate answers to too many questions will destroy privacy in a spectacular way.''
Initial work on reconstruction attacks~\cite{dinur2003revealing} inspired the conception of DP, and subsequent works have improved the computational efficiency of attacks, improved the theoretical analyses of attacks, or crafted highly effective attacks to specific cases~\cite{dwork2007price, dwork2008new, muthukrishnan2012optimal, dwork2017exposed, garfinkel2019understanding}.
Although somewhat related, the focus of this line of work significantly differs from the focus of our work.
In research on reconstruction attacks, the basic goal is to find worst-case sets of queries (or the minimal sizes thereof) such that it is impossible to answer them all accurately while simultaneously maintaining privacy.
In this work, our focus is not on generic worst-case queries, but rather on efficiently and accurately answering practical sets of prespecified or randomly sampled queries with privacy.
Thus, the works on reconstruction attacks are not directly relevant to our problem in either of the two settings we consider.
\section{Appendix} \label{app:many-queries-appendix}
\subsection*{Deferred Regression Analysis Details}
In this portion, we present the details of the setup and results for the regression analysis on the utility impact of filtering ``large'' marginals out of \texttt{RAP}'s evaluation.
\subsubsection*{Present Error vs.\ Workload Size}
For this regression analysis on each dataset, we define the following regression variables:
\begin{itemize}
\item $x_1, x_2$: dummy variable encodings for the three levels of $\epsilon$ evaluated. I.e.,
\begin{itemize}[label=$\circ$]
\item $x_1 = x_2 = 0$ represents $\epsilon = 0.01$.
\item $x_1 = 1, x_2 = 0$ represents $\epsilon = 0.1$.
\item $x_1 = 0, x_2 = 1$ represents $\epsilon = 1$.
\end{itemize}
\item $x_3$: logarithm of the workload size.
\item $x_4$: indicator variable representing whether thresholding was applied. I.e., $x_4=0$ if thresholding was not applied, $x_4=1$ if it was.
\item $\zeta$: stochasticity in the process (e.g., from randomness in the \texttt{RAP}\ mechanism due to privacy, from randomness in the marginal selection process across independent trials, etc.).
\end{itemize}
With these variables defined, we state the full regression model with interactions as
\[ \textstyle\err_P = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + (\beta_3 + \beta_4 x_1 + \beta_5 x_2)x_3 + (\beta_6 + \beta_7 x_1 + \beta_8 x_2 + (\beta_9 + \beta_{10} x_1 + \beta_{11} x_2)x_3)x_4 + \zeta, \]
and the restricted regression model as
\[ \textstyle\err_P = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + (\beta_3 + \beta_4 x_1 + \beta_5 x_2)x_3 + \zeta. \]
We then fit both the full and restricted regression models to the results of the \texttt{RAP}\ evaluations for the ADULT and LOANS datasets (separately).
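Such fits can be carried out with standard OLS tooling; summary tables of the style shown below are produced, e.g., by Python's statsmodels package. A minimal sketch, assuming the evaluation results are stored in a DataFrame with hypothetical column names matching the variables defined above:
\begin{verbatim}
# A minimal sketch of fitting the full and restricted regression models;
# the file name and column names (present_err, x1, ..., x4) are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("rap_results.csv")

# The formula (x1 + x2) * x3 * x4 expands to exactly the twelve
# terms beta_0 through beta_11 of the full model with interactions.
full = smf.ols("present_err ~ (x1 + x2) * x3 * x4", data=df).fit()
restricted = smf.ols("present_err ~ (x1 + x2) * x3", data=df).fit()

print(full.summary())
print(sm.stats.anova_lm(restricted, full))  # F-test of the thresholding terms
\end{verbatim}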
Regression results for the full models (ADULT on left and LOANS on right) are stated below.
\begin{table}[!htb]
\begin{minipage}{.5\linewidth}
\begin{singlespace}
\scriptsize
\begin{tabular}{lclc}
\toprule
\textbf{Dep. Variable:} & present\_err & \textbf{ R-squared: } & 0.963 \\
\textbf{Model:} & OLS & \textbf{ Adj. R-squared: } & 0.959 \\
\textbf{Method:} & Least Squares & \textbf{ F-statistic: } & 266.6 \\
\textbf{Covariance Type:} & nonrobust & \textbf{ Prob (F-statistic):} & 7.40e-76 \\
\textbf{No. Observations:} & 126 & \textbf{ Log-Likelihood: } & 295.45 \\
\textbf{Df Residuals:} & 114 & \textbf{ AIC: } & -566.9 \\
\textbf{Df Model:} & 11 & \textbf{ BIC: } & -532.9 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lcccccc}
& \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]} \\
\midrule
$\beta_{0}$ & 0.0320 & 0.009 & 3.415 & 0.001 & 0.013 & 0.051 \\
$\beta_{1}$ & -0.0066 & 0.013 & -0.495 & 0.621 & -0.033 & 0.020 \\
$\beta_{2}$ & -0.0248 & 0.013 & -1.869 & 0.064 & -0.051 & 0.001 \\
$\beta_{3}$ & 0.0650 & 0.003 & 24.536 & 0.000 & 0.060 & 0.070 \\
$\beta_{4}$ & -0.0528 & 0.004 & -14.075 & 0.000 & -0.060 & -0.045 \\
$\beta_{5}$ & -0.0600 & 0.004 & -16.015 & 0.000 & -0.067 & -0.053 \\
$\beta_{6}$ & 0.0280 & 0.013 & 2.120 & 0.036 & 0.002 & 0.054 \\
$\beta_{7}$ & -0.0309 & 0.019 & -1.649 & 0.102 & -0.068 & 0.006 \\
$\beta_{8}$ & -0.0277 & 0.019 & -1.482 & 0.141 & -0.065 & 0.009 \\
$\beta_{9}$ & -0.0036 & 0.004 & -0.952 & 0.343 & -0.011 & 0.004 \\
$\beta_{10}$ & 0.0052 & 0.005 & 0.988 & 0.325 & -0.005 & 0.016 \\
$\beta_{11}$ & 0.0040 & 0.005 & 0.762 & 0.448 & -0.006 & 0.014 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lclc}
\textbf{Omnibus:} & 24.270 & \textbf{ Durbin-Watson: } & 1.693 \\
\textbf{Prob(Omnibus):} & 0.000 & \textbf{ Jarque-Bera (JB): } & 114.122 \\
\textbf{Skew:} & 0.434 & \textbf{ Prob(JB): } & 1.65e-25 \\
\textbf{Kurtosis:} & 7.581 & \textbf{ Cond. No. } & 64.4 \\
\bottomrule
\end{tabular}\\
\normalsize
\end{singlespace}
\end{minipage}%
\begin{minipage}{.05\linewidth}
\phantom{.}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{singlespace}
\scriptsize
\begin{tabular}{lclc}
\toprule
\textbf{Dep. Variable:} & present\_err & \textbf{ R-squared: } & 0.942 \\
\textbf{Model:} & OLS & \textbf{ Adj. R-squared: } & 0.937 \\
\textbf{Method:} & Least Squares & \textbf{ F-statistic: } & 193.4 \\
\textbf{Covariance Type:} & nonrobust & \textbf{ Prob (F-statistic):} & 1.05e-75 \\
\textbf{No. Observations:} & 144 & \textbf{ Log-Likelihood: } & 228.17 \\
\textbf{Df Residuals:} & 132 & \textbf{ AIC: } & -432.3 \\
\textbf{Df Model:} & 11 & \textbf{ BIC: } & -396.7 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lcccccc} & \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]} \\
\midrule
$\beta_{0}$ & 0.0372 & 0.019 & 1.982 & 0.050 & 7.32e-05 & 0.074 \\
$\beta_{1}$ & -0.0113 & 0.027 & -0.425 & 0.671 & -0.064 & 0.041 \\
$\beta_{2}$ & -0.0282 & 0.027 & -1.062 & 0.290 & -0.081 & 0.024 \\
$\beta_{3}$ & 0.0966 & 0.004 & 21.626 & 0.000 & 0.088 & 0.105 \\
$\beta_{4}$ & -0.0767 & 0.006 & -12.134 & 0.000 & -0.089 & -0.064 \\
$\beta_{5}$ & -0.0882 & 0.006 & -13.953 & 0.000 & -0.101 & -0.076 \\
$\beta_{6}$ & 0.0215 & 0.027 & 0.812 & 0.418 & -0.031 & 0.074 \\
$\beta_{7}$ & -0.0273 & 0.038 & -0.729 & 0.467 & -0.102 & 0.047 \\
$\beta_{8}$ & -0.0275 & 0.038 & -0.733 & 0.465 & -0.102 & 0.047 \\
$\beta_{9}$ & -0.0039 & 0.006 & -0.619 & 0.537 & -0.016 & 0.009 \\
$\beta_{10}$ & 0.0039 & 0.009 & 0.437 & 0.663 & -0.014 & 0.022 \\
$\beta_{11}$ & 0.0051 & 0.009 & 0.574 & 0.567 & -0.013 & 0.023 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lclc}
\textbf{Omnibus:} & 29.738 & \textbf{ Durbin-Watson: } & 2.677 \\
\textbf{Prob(Omnibus):} & 0.000 & \textbf{ Jarque-Bera (JB): } & 208.504 \\
\textbf{Skew:} & 0.355 & \textbf{ Prob(JB): } & 5.29e-46 \\
\textbf{Kurtosis:} & 8.852 & \textbf{ Cond. No. } & 75.6 \\
\bottomrule
\end{tabular}\\
\normalsize
\end{singlespace}
\end{minipage}%
\end{table}
\subsubsection*{Present Error vs.\ Number of Queries}
For this regression analysis on each dataset, we define the same variables as before, with the only change being that $x_3$ now represents the logarithm of the total number of consistent queries that \texttt{RAP}\ evaluates (rather than the size of the workload that \texttt{RAP}\ evaluates).
With these variables, we define the same full and restricted regression models as before, and we fit both to the results of the \texttt{RAP}\ evaluations.
Regression results for the full models (ADULT on left and LOANS on right) are stated below.
\begin{table}[!htb]
\begin{minipage}{.5\linewidth}
\begin{singlespace}
\scriptsize
\begin{tabular}{lclc}
\toprule
\textbf{Dep. Variable:} & present\_err & \textbf{ R-squared: } & 0.889 \\
\textbf{Model:} & OLS & \textbf{ Adj. R-squared: } & 0.879 \\
\textbf{Method:} & Least Squares & \textbf{ F-statistic: } & 83.19 \\
\textbf{Covariance Type:} & nonrobust & \textbf{ Prob (F-statistic):} & 3.83e-49 \\
\textbf{No. Observations:} & 126 & \textbf{ Log-Likelihood: } & 227.07 \\
\textbf{Df Residuals:} & 114 & \textbf{ AIC: } & -430.1 \\
\textbf{Df Model:} & 11 & \textbf{ BIC: } & -396.1 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lcccccc}
& \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]} \\
\midrule
$\beta_{0}$ & -0.3210 & 0.043 & -7.438 & 0.000 & -0.406 & -0.235 \\
$\beta_{1}$ & 0.2882 & 0.061 & 4.722 & 0.000 & 0.167 & 0.409 \\
$\beta_{2}$ & 0.3014 & 0.061 & 4.939 & 0.000 & 0.181 & 0.422 \\
$\beta_{3}$ & 0.0472 & 0.004 & 12.856 & 0.000 & 0.040 & 0.054 \\
$\beta_{4}$ & -0.0390 & 0.005 & -7.516 & 0.000 & -0.049 & -0.029 \\
$\beta_{5}$ & -0.0436 & 0.005 & -8.398 & 0.000 & -0.054 & -0.033 \\
$\beta_{6}$ & 0.1198 & 0.057 & 2.110 & 0.037 & 0.007 & 0.232 \\
$\beta_{7}$ & -0.1237 & 0.080 & -1.540 & 0.126 & -0.283 & 0.035 \\
$\beta_{8}$ & -0.1189 & 0.080 & -1.480 & 0.142 & -0.278 & 0.040 \\
$\beta_{9}$ & -0.0127 & 0.005 & -2.742 & 0.007 & -0.022 & -0.004 \\
$\beta_{10}$ & 0.0123 & 0.007 & 1.886 & 0.062 & -0.001 & 0.025 \\
$\beta_{11}$ & 0.0124 & 0.007 & 1.894 & 0.061 & -0.001 & 0.025 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lclc}
\textbf{Omnibus:} & 53.796 & \textbf{ Durbin-Watson: } & 1.512 \\
\textbf{Prob(Omnibus):} & 0.000 & \textbf{ Jarque-Bera (JB): } & 189.737 \\
\textbf{Skew:} & -1.528 & \textbf{ Prob(JB): } & 6.30e-42 \\
\textbf{Kurtosis:} & 8.177 & \textbf{ Cond. No. } & 572. \\
\bottomrule
\end{tabular}\\
\normalsize
\end{singlespace}
\end{minipage}%
\begin{minipage}{.05\linewidth}
\phantom{.}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{singlespace}
\scriptsize
\begin{tabular}{lclc}
\toprule
\textbf{Dep. Variable:} & present\_err & \textbf{ R-squared: } & 0.887 \\
\textbf{Model:} & OLS & \textbf{ Adj. R-squared: } & 0.877 \\
\textbf{Method:} & Least Squares & \textbf{ F-statistic: } & 93.96 \\
\textbf{Covariance Type:} & nonrobust & \textbf{ Prob (F-statistic):} & 7.68e-57 \\
\textbf{No. Observations:} & 144 & \textbf{ Log-Likelihood: } & 180.50 \\
\textbf{Df Residuals:} & 132 & \textbf{ AIC: } & -337.0 \\
\textbf{Df Model:} & 11 & \textbf{ BIC: } & -301.4 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lcccccc}
& \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]} \\
\midrule
$\beta_{0}$ & -0.6398 & 0.070 & -9.171 & 0.000 & -0.778 & -0.502 \\
$\beta_{1}$ & 0.5254 & 0.099 & 5.326 & 0.000 & 0.330 & 0.721 \\
$\beta_{2}$ & 0.5873 & 0.099 & 5.952 & 0.000 & 0.392 & 0.782 \\
$\beta_{3}$ & 0.0779 & 0.005 & 14.839 & 0.000 & 0.068 & 0.088 \\
$\beta_{4}$ & -0.0618 & 0.007 & -8.321 & 0.000 & -0.076 & -0.047 \\
$\beta_{5}$ & -0.0709 & 0.007 & -9.550 & 0.000 & -0.086 & -0.056 \\
$\beta_{6}$ & 0.1453 & 0.096 & 1.509 & 0.134 & -0.045 & 0.336 \\
$\beta_{7}$ & -0.1293 & 0.136 & -0.949 & 0.344 & -0.399 & 0.140 \\
$\beta_{8}$ & -0.1474 & 0.136 & -1.082 & 0.281 & -0.417 & 0.122 \\
$\beta_{9}$ & -0.0240 & 0.007 & -3.647 & 0.000 & -0.037 & -0.011 \\
$\beta_{10}$ & 0.0195 & 0.009 & 2.088 & 0.039 & 0.001 & 0.038 \\
$\beta_{11}$ & 0.0227 & 0.009 & 2.431 & 0.016 & 0.004 & 0.041 \\
\bottomrule
\end{tabular}\\
\begin{tabular}{lclc}
\textbf{Omnibus:} & 18.588 & \textbf{ Durbin-Watson: } & 1.775 \\
\textbf{Prob(Omnibus):} & 0.000 & \textbf{ Jarque-Bera (JB): } & 78.893 \\
\textbf{Skew:} & -0.142 & \textbf{ Prob(JB): } & 7.39e-18 \\
\textbf{Kurtosis:} & 6.615 & \textbf{ Cond. No. } & 726. \\
\bottomrule
\end{tabular}\\
\normalsize
\end{singlespace}
\end{minipage}%
\end{table}
\section{Conclusions} \label{sec:conclusions}
In this work, we address the high-level research question: \textit{to what extent are differentially private mechanisms able to answer a large number of statistical queries efficiently and with low error?}
We analyze this problem in two settings, the classic prespecified queries setting, and a new setting that we introduced where only partial knowledge of the queries is available to the DP mechanism in advance.
In both settings, our contributions are grounded in the state-of-the-art DP mechanism for answering large numbers of queries, the \texttt{RAP}\ mechanism.
In the prespecified queries setting, we perform a focused but thorough reproducibility study on Aydore et al.'s original evaluation of \texttt{RAP}\ in order to clarify its value and strengthen its adoptability for practical uses.
We also expand the class of queries that \texttt{RAP}\ is capable of evaluating, thus extending \texttt{RAP}'s applicability in practice.
Aside from the prespecified queries setting, we concretely specify a new partial knowledge setting where a mechanism is provided with a set of historically posed queries which are similar to queries that will be posed in the future.
In this setting, we define a machine learning inspired utility measure to quantify a mechanism's ability to answer such future queries.
Then, utilizing this utility measure, we evaluate \texttt{RAP}'s suitability for generating synthetic datasets to answer queries posed in the future, finding that it is both efficient and effective.
Our findings in this work further the state of the art in differentially private large-scale query answering, and additionally open new directions for future work on other problems in differential privacy within our newly defined partial knowledge setting.
\section{Extending \texttt{RAP}'s Applicability} \label{sec:extending-applicability}
In this section, we address our third contribution for the setting where queries are prespecified: extending \texttt{RAP}'s applicability by expanding the class of queries that it is able to evaluate.
We begin by discussing the motivation behind this contribution.
We then describe what we expand the query class to ($r$-of-$k$ thresholds) and how we accomplish it.
Finally, we detail the empirical evaluations we perform on \texttt{RAP}\ within this expanded query class to quantify its utility and feasibility, finding that \texttt{RAP}\ efficiently evaluates $r$-of-$k$ thresholds with high utility.
\subsection{Motivation}
We contextualize the motivation for this contribution by considering the contributions of prior works.
Prior work on answering statistical queries in practical settings has been focused on relatively simple classes of statistical queries --- most popularly, $k$-way marginals (Definition~\ref{def:kw}), as these are a useful query class which is evaluable within a reasonable computational budget~\cite{barak2007privacy, thaler2012faster, gupta2013privately, chandrasekaran2014faster}.
Aydore et al.'s claim is that their gradient-based \texttt{RAP}\ mechanism~\cite{aydore2021differentially} is able to answer queries from richer classes.
In addition to evaluating $k$-way marginals, they demonstrated this claim by briefly evaluating a new class of queries, 1-of-$k$ thresholds (Definition~\ref{def:1k}).
However, 1-of-$k$ thresholds are essentially a negation of $k$-way marginals.
As such, Aydore et al.\ were able to evaluate \texttt{RAP}\ on 1-of-$k$ thresholds by reusing virtually the same class of EEDQs and the same underlying implementation as they used for $k$-way marginals.
Thus, although their evaluation demonstrated that \texttt{RAP}\ attains high utility on both query classes, these choices of query classes were not fully convincing in demonstrating that \texttt{RAP}\ is effective for answering truly richer classes of queries.
Therefore, it remained an open question whether \texttt{RAP}\ is able to answer richer, more general query classes.
\subsection{Expanding the Query Class} \label{sec:expanding-queries}
To extend \texttt{RAP}'s applicability, we develop the mathematical and computational machinery necessary for \texttt{RAP}\ to evaluate a class of queries which generalizes both $k$-way marginals and 1-of-$k$ thresholds: $r$-of-$k$ thresholds (Definition~\ref{def:rk}).
We first describe this query class in detail, then derive its corresponding EEDQs.
Finally, we show how we optimize the derived EEDQs to be more efficiently evaluable, greatly reducing \texttt{RAP}'s per-query evaluation time.
\subsubsection{Generalizing to $r$-of-$k$ Thresholds}
Informally, an $r$-of-$k$ threshold query counts the fraction of datapoints in the dataset that match at least $r$ of the $k$ specified attribute values.
Thus, it strictly generalizes both $k$-way marginals (when $r=k$) and 1-of-$k$ thresholds (when $r=1$).
$r$-of-$k$ thresholds are a useful generalization because they allow for more expressive, dynamic queries beyond the rigid ``everything'' ($r=k$) or ``anything'' ($r=1$) queries that were previously studied.
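To make the semantics concrete, the following Python sketch evaluates a single $r$-of-$k$ threshold query directly from its definition on a hypothetical integer-encoded dataset (all names and values here are illustrative):
\begin{verbatim}
# A minimal sketch evaluating an r-of-k threshold query on a hypothetical
# integer-encoded dataset; the features, targets, and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D = rng.integers(0, 5, size=(1000, 8))   # n=1000 datapoints, d=8 features

S = [1, 3, 4, 6]   # the k=4 specified features
y = [2, 0, 4, 1]   # the corresponding target values
r = 2              # require at least r matching features

matches = (D[:, S] == np.array(y)).sum(axis=1)   # matches per datapoint
answer = np.mean(matches >= r)                   # fraction with >= r of k
print(answer)
\end{verbatim}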
The challenge when expanding \texttt{RAP}'s evaluation to $r$-of-$k$ thresholds is deriving corresponding EEDQs.
$r$-of-$k$ thresholds cannot trivially reuse the EEDQs relied upon by Aydore et al.\ to evaluate $k$-way marginals and 1-of-$k$ thresholds.
Thus, we must derive new EEDQs for $r$-of-$k$ thresholds, and we accomplish this by generalizing the EEDQs of $k$-way marginals and 1-of-$k$ thresholds.
Towards this, we first reframe the standard definition of $r$-of-$k$ thresholds to enable explicit accounting of all possible combinations of matching and non-matching terms.
\begin{definition}[$r$-of-$k$ thresholds, Alternative] \label{def:rk-alt}
An $r$-of-$k$ threshold query $q_{\phi_{S,y,r}}$ is a statistical query whose predicate is specified by a positive integer $r \le k$, a set $S$ of $k$ features $f_1 \neq \dots \neq f_k \in [d]$, and a target $y \in (\mathcal{X}_{f_1} \times \dots \times \mathcal{X}_{f_k})$.
Let $\mathcal{R}$ denote the set of all partitions $(R_+, R_-)$ of the $k$ features in $S$, such that each $|R_+| \ge r$ and each corresponding $R_- = S - R_+$.
The predicate $\phi_{S,y,r}$ is then given by
\[ \phi_{S,y,r}(x)=
\begin{cases}
1 & \text{if }\ \bigvee_{(R_+, R_-) \in \mathcal{R}} \left(\bigwedge_{i \in R_+} (x_{f_i} = y_i) \bigwedge_{i \in R_-} (x_{f_i} \neq y_i)\right)\\
0 & \text{otherwise.}
\end{cases}
\]
Note that at most one partition in $\mathcal{R}$ will satisfy the predicate.
\end{definition}
We now use this equivalent definition of $r$-of-$k$ thresholds queries to design corresponding EEDQs.
For $k$-way marginals, Aydore et al.\ used \textit{product queries} (Definition~\ref{def:pq}) as EEDQs, which simply compute the product of a datapoint's values at the $k$ specified indices.
For $r$-of-$k$ threshold queries, we generalize product queries in the following ways.
First, we expand the product queries to explicitly include both positive and negated terms, which we refer to as \textit{generalized product queries}.
\begin{definition}[Generalized Product Query] \label{def:gpq}
Given two disjoint subsets of features $T_+, T_- \subseteq [d']$, the generalized product query $\hat{q}_{\hat{\phi}_{T_+,T_-}}$ is a surrogate query parameterized by $\hat{\phi}_{T_+,T_-}$ which is defined as
\[ \hat{\phi}_{T_+,T_-}(x) = \prod_{i \in T_+} x_i \prod_{i \in T_-} (1-x_i). \]
\end{definition}
\noindent Informally, a generalized product query effectively serves as a ``sub''-EEDQ for the conjunction portion of a single partition of $\phi_{S,y,r}(x)$ in Definition~\ref{def:rk-alt}.
Then, leveraging this alternative definition of $r$-of-$k$ thresholds together with generalized product queries, we define a new class of EEDQs in Definition~\ref{def:ptq}: \textit{polynomial threshold queries}.
\begin{definition}[Polynomial Threshold Query] \label{def:ptq}
Given a subset of features $T \subseteq [d']$ and integer $r$, let $\Upsilon$ denote the set of all partitions $(T_+, T_-)$ of $T$ such that each $|T_+| \ge r$ and each corresponding $T_- = T - T_+$.
The \textit{polynomial threshold query} $\hat{q}_{\hat{\phi}_{T,r}}$ is a surrogate query parameterized by $\hat{\phi}_{T,r}$ which is defined in terms of the generalized product query predicates as
\[ \hat{\phi}_{T,r}(x) = \sum_{(T_+, T_-) \in \Upsilon} \hat{\phi}_{T_+,T_-}(x). \]
\end{definition}
\noindent Informally, a polynomial threshold query computes the sum of generalized product queries across all $\sum_{t=r}^k {k \choose t}$ partitions of $T$, where $T$ is constructed exactly as in Lemma~\ref{lem:pq}; i.e., for every $i \in S$, we include in $T$ the coordinate corresponding to $y_i \in \mathcal{X}_{f_i}$.
\subsubsection{Optimizing the Evaluation of Polynomial Threshold Queries} \label{sec:extending-optimizing}
Evaluating polynomial threshold queries can be computationally expensive due to their combinatorial expansion and summation of generalized product query predicates.
Therefore, optimizing their definition to be efficiently evaluable is of utmost importance for enabling \texttt{RAP}\ to evaluate large sets of $r$-of-$k$ thresholds.
Towards this, we present two optimizations that can be used together, which significantly improve the practical runtime of \texttt{RAP}.
The first optimization is inspired by Aydore et al.'s implicit reduction of 1-of-$k$ threshold queries to $k$-way marginal queries.
They accomplished this by recognizing that a 1-of-$k$ threshold predicate is the negation of a $k$-way marginal predicate on a negated datapoint; i.e., $\phi_{S,y,1}(x) = 1-\phi_{S,y,k}(1-x)$.
This equivalence enabled them to efficiently reuse the $k$-way marginals' EEDQs (product queries) in \texttt{RAP}'s evaluation.
Applying this concept more generally to computing an $r$-of-$k$ threshold predicate $\phi_{S,y,r}(x)$, the idea is that when $r \le k/2$, it is logically equivalent to compute the negation of a corresponding predicate (with $r' = k-r+1$) on the negated datapoint; i.e., $\phi_{S,y,r}(x) = 1-\phi_{S,y,r'}(1-x)$.
The benefit of utilizing this equivalence when using a polynomial threshold query as the EEDQ to evaluate $\phi_{S,y,r}(x)$ is that \textit{at most} $\lceil k/2 \rceil$ different partition sizes now need to be computed over, compared to at most $k$ when not utilizing this equivalence.
The computational savings from utilizing the equivalence are especially apparent when $r$ is small, as it leads to an exponential (in $k$) reduction in the required number of predicate evaluations; e.g., for $k=4$ and $r=1$, the direct evaluation sums over all $15$ partitions, whereas the equivalent $r'=4$ computation requires only the single full product.
For the second optimization, the goal is to eliminate the need to explicitly account for the negated terms in our alternative definition of $r$-of-$k$ thresholds (Definition~\ref{def:rk-alt}), as this in turn necessitates the computation of the product of negated values in generalized product queries (Definition~\ref{def:gpq}).
Removing the conjunction over negated terms from Definition~\ref{def:rk-alt} yields a logically equivalent predicate; i.e.,
\[ \phi_{S,y,r}(x)=
\begin{cases}
1 & \text{if }\ \bigvee_{(R_+, R_-) \in \mathcal{R}} \bigwedge_{i \in R_+} (x_{f_i} = y_i)\\
0 & \text{otherwise.}
\end{cases}
\]
However, more than one partition in $\mathcal{R}$ may now satisfy the predicate.
As a result, analogously eliminating the product of negated values from the generalized product query definition (reducing it to a standard product query) would cause the summation in the polynomial threshold query's definition (Def~\ref{def:ptq}) to overcount.
To eliminate computing the product of negated values while simultaneously remedying this overcount, we utilize the principle of inclusion-exclusion to equivalently redefine polynomial threshold queries purely in terms of standard product queries (Definition~\ref{def:pq}).
\begin{definition}[Polynomial Threshold Query, Inclusion-Exclusion] \label{def:ptq-eff}
Given a subset of features $T \subseteq [d']$ and integer $r$, let $\Upsilon(i)$ denote the set of all $i$-size combinations of features in $T$ for $i=r \dots k$; i.e., each $T_i \in \Upsilon(i)$ is such that $|T_i|=i$ and $T_i \subseteq T$.
The polynomial threshold query $\hat{q}_{\hat{\phi}_{T,r}}$ parameterized by $\hat{\phi}_{T,r}$ can be defined in terms of product query predicates $\hat{\phi}_{T_\cdot}$ as
\[ \hat{\phi}_{T,r}(x) = \sum_{i=r}^k (-1)^{i-r} \binom{i-1}{i-r} \sum_{T_i \in \Upsilon(i)} \hat{\phi}_{T_i}(x). \]
\end{definition}
\noindent Utilizing this redefinition of polynomial threshold queries reduces the number of arithmetic operations by nearly half relative to the original definition (when $r > k/2$, which we assume without loss of generality by simultaneously utilizing the first optimization in this section).
In our subsequent experiments with $r$-of-$4$ thresholds (Section~\ref{sec:evaluating-r-of-k}), this reduction in operations results in a maximal runtime improvement of approximately 40\% for evaluating the polynomial threshold queries.
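To illustrate Definition~\ref{def:ptq-eff} and its correctness, the following Python sketch (not our optimized implementation) evaluates a polynomial threshold query using only standard product queries, and checks it against the direct threshold count on one-hot data:
\begin{verbatim}
# A minimal sketch of the inclusion-exclusion form of a polynomial
# threshold query; names and data here are illustrative.
import numpy as np
from itertools import combinations
from math import comb

def product_query(D, T):
    # standard product query: mean over rows of the product of columns in T
    return np.mean(np.prod(D[:, list(T)], axis=1))

def poly_threshold_query(D, T, r):
    k = len(T)
    total = 0.0
    for i in range(r, k + 1):
        coeff = (-1) ** (i - r) * comb(i - 1, i - r)
        total += coeff * sum(product_query(D, Ti)
                             for Ti in combinations(T, i))
    return total

# Sanity check on binary data: the inclusion-exclusion form must agree
# with directly counting rows having >= r ones among the columns in T.
rng = np.random.default_rng(0)
D = rng.integers(0, 2, size=(500, 10)).astype(float)
T, r = (0, 2, 5, 7), 2
direct = np.mean(D[:, list(T)].sum(axis=1) >= r)
assert np.isclose(poly_threshold_query(D, T, r), direct)
\end{verbatim}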
\subsection{Evaluating \texttt{RAP}\ on $r$-of-$k$ Thresholds} \label{sec:evaluating-r-of-k}
With the class of EEDQs derived, the only question that remains is how well \texttt{RAP}\ is able to utilize the EEDQs to answer prespecified sets of $r$-of-$k$ thresholds.
We investigate this question by evaluating how the various inputs to \texttt{RAP}\ affect its present utility and runtime.
\begin{table}
\centering
\begin{tabular}{ |c|c| }
\hline
Primary Mechanism & \texttt{RAP} \\
\hline
Baseline Mechanisms & \texttt{All-0}, \texttt{GM} \\
\hline
Utility Measure & $\err_P$ \\
\hline
$D$ & ADULT, LOANS \\
\hline
$\epsilon$ & $0.1, 1$ \\
\hline
$\delta$ & $1/|D|^2$ \\
\hline
$|W|$ & $1, 4, 16, 64, 256$ \\
\hline
$n^\prime$ & $500, 1000, 2000$ \\
\hline
$T$ & $1, 4, 16, 64$ \\
\hline
$K$ & $4, 16, 64, 256$ \\
\hline
$r$ & $1,2,3,4$ \\
\hline
$k$ & $4$ \\
\hline
\end{tabular}
\caption{Experimental reference table for evaluating $r$-of-$k$ thresholds with \texttt{RAP}.}
\label{tab:thresholds-experiments}
\end{table}
\subsubsection{Utility on $r$-of-$k$ Thresholds}
To begin, we evaluate the present utility of \texttt{RAP}\ on $r$-of-$k$ thresholds, with $k$ fixed at 4.
As in our prior experiments in Section~\ref{sec:improving-evaluation}, we contextualize \texttt{RAP}'s utility by comparing against the utilities of the \texttt{All-0}\ and \texttt{GM}\ baseline mechanisms.
We then evaluate the utility of each mechanism across a range of $r$ values, $\epsilon$ values, datasets $D$, workload sizes $|W|$, and synthetic dataset sizes $n'$, and across the same $T,K$ values for \texttt{RAP}\ as before.
Table~\ref{tab:thresholds-experiments} contains a summary of the precise parameter values.
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=1\linewidth]{figs/extending/adult-lines.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=1\linewidth]{figs/extending/loans-lines.pdf}
\end{tabular}
\caption{\texttt{RAP}'s minimal present error across all $T, K$ values considered alongside present error of the baseline mechanisms.}
\label{fig:extending-lines}
\end{figure}
Figure~\ref{fig:extending-lines} displays the results of this experiment for $n'=1000$, showing the minimal present error of \texttt{RAP}\ across all $T,K$ values considered alongside the present error of the baseline mechanisms.
The present errors of both baseline mechanisms are as expected, with the \texttt{All-0}\ mechanism's present error having a clear and straightforward dependence on $r$, whereas the \texttt{GM}\ mechanism's present error is independent of $r$.
Immediately, we see that \texttt{RAP}\ significantly outperforms the baseline mechanisms in all settings.
Across the $r$ values, we find that \texttt{RAP}\ achieves its minimal present error at $r=4$ (i.e., 4-way marginals).
Although \texttt{RAP}'s present error for $r < 4$ is not much greater than for $r=4$, we find no further obvious relationship between \texttt{RAP}'s present error and $r$.
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=\linewidth]{figs/extending/adult-adaptivity-r.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=\linewidth]{figs/extending/loans-adaptivity-r.pdf}
\end{tabular}
\caption{\texttt{RAP}'s present error at each $T, K$ value considered on a workload of 64 $r$-of-$k$ thresholds with $\epsilon = 0.1$.}
\label{fig:extending-adaptivity-r}
\end{figure}
To understand the role that \texttt{RAP}'s adaptivity plays in this experiment, in Figure~\ref{fig:extending-adaptivity-r} we visualize \texttt{RAP}'s present error for each individual combination of $T,K$ settings considered.
Just as with 3-way marginals in Section~\ref{sec:reevaluating-role-of-adaptivity}, we find that the same adaptivity behavior emerges with 4-way marginals ($r=4$); i.e., \texttt{RAP}\ primarily needs to evaluate a specific number of queries to achieve low present error, regardless of whether those queries are evaluated jointly in a small number of adaptive rounds or individually across a large number of adaptive rounds.
However, we find that this behavior no longer holds for $r<4$.
Instead, the only consistent pattern that we find for $r<4$ in this figure (which holds across other workload sizes and $\epsilon$ values as well) is that \texttt{RAP}\ achieves its minimal present error when the number of adaptive rounds is relatively large but the number of selected queries per round of adaptivity is relatively small.
Since executing \texttt{RAP}\ for a large number of adaptive rounds is computationally expensive, this finding motivates future work on reducing the necessary number of rounds of adaptivity.
This could potentially be done by more strategically selecting the set of queries in each round --- for instance, by considering their expected joint impact on \texttt{RAP}'s present error in the next optimization step, rather than selecting the individual queries with highest present error independently.
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=.7\linewidth]{figs/extending/adult-synth-data-lines.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=.7\linewidth]{figs/extending/loans-synth-data-lines.pdf}
\end{tabular}
\caption{\texttt{RAP}'s present error and runtime as a function of the synthetic dataset size on a workload of 64 $r$-of-$k$ thresholds with $\epsilon = 0.1$.}
\label{fig:extending-synth-data}
\end{figure}
\subsubsection{Effect of Synthetic Dataset Size}
Lastly, we investigate how \texttt{RAP}'s synthetic dataset size $n'$ affects its present error and runtime.
Conceptually, $n'$ controls \texttt{RAP}'s learning capacity --- the larger $n'$, the better the answers to the queries should be.
However, since optimizing large synthetic datasets is computationally expensive, $n'$ cannot be taken arbitrarily large.
Similarly, when the synthetic dataset size is too small, the optimization problem becomes underparameterized, which also results in a computationally expensive optimization process.
Aydore et al.\ empirically confirmed this utility--computation trade-off for \texttt{RAP}\ with $k$-way marginals, where they found that setting $n'=1000$ served as a good balance between utility and runtime for (filtered) 3-way and 5-way marginals.
We evaluate this trade-off on (unfiltered) $r$-of-4 thresholds, with the results shown in Figure~\ref{fig:extending-synth-data}.
For each setting of $r$, we find that increasing $n'$ generally results in a mild reduction of \texttt{RAP}'s present error, but that at $n'=1000$ \texttt{RAP}\ often attains minimal or near-minimal runtime.
This mirrors Aydore et al.'s results, and thus supports their findings regarding \texttt{RAP}'s utility--computation trade-off.
However, one interesting new finding is the effect that $r$ has on \texttt{RAP}'s runtime.
A priori, we expected that \texttt{RAP}\ would have the shortest runtime when evaluating $r$-of-$4$ thresholds with $r\in\{1,4\}$, and that their runtimes would be comparable.
This is because at $r\in\{1,4\}$, \texttt{RAP}\ has the fewest arithmetic operations to perform in order to evaluate each predicate (compared to $r\in\{2,3\}$; refer to Section~\ref{sec:extending-optimizing} for details on predicate evaluation).
Although we confirm that \texttt{RAP}\ achieves minimal runtime at $r=4$, we find that nearly the opposite holds true for $r=1$, which induces up to a 20x longer runtime.
This increase in runtime is primarily explained by our prior observation that for $r < 4$, \texttt{RAP}\ achieves its maximal utility via a larger number of adaptive rounds (where \texttt{RAP}'s runtime increases approximately linearly with the number of rounds).
However, even with this jump in runtime taken into consideration, we find that \texttt{RAP}\ is a highly performant mechanism for evaluating large sets of queries.
For instance, consider the worst-case runtime at $n'=1000$ in Figure~\ref{fig:extending-synth-data}, which occurs where \texttt{RAP}\ answered a workload of 64 1-of-4 thresholds on the LOANS dataset.
Here, \texttt{RAP}\ answered approximately \num{3.5e7} individual consistent queries in 1,240 seconds --- a rate of over 28,000 queries per second.
Based on these findings, we conclude that \texttt{RAP}\ is highly efficient for answering large sets of $r$-of-$k$ thresholds.
\section{Enhancing \texttt{RAP}'s Evaluation} \label{sec:improving-evaluation}
In this section, we address our first two contributions in the setting where all queries are prespecified: we strengthen and clarify our understanding of \texttt{RAP}'s utility by performing a thorough reproducibility study on two important aspects of Aydore et al.'s evaluation of \texttt{RAP}.
These two aspects are:
\begin{enumerate}
\item The benefit of \texttt{RAP}'s adaptive component relative to its non-adaptive component was unclear in its initial evaluation. We conclusively determine and quantify this component's utility benefit, finding that it is crucial for enabling \texttt{RAP}\ to achieve high utility.
\item \texttt{RAP}\ was initially only evaluated on highly reduced portions of the query space. We instead evaluate \texttt{RAP}'s utility across the entire query space, answering up to 50x more queries than in its initial evaluation.
\end{enumerate}
The first aspect is significant because it improves our understanding of how \texttt{RAP}'s adaptivity parameters affect its utility and establishes whether \texttt{RAP}'s adaptive component is necessary in order to achieve high utility.
The second aspect is important because \texttt{RAP}'s initial evaluation on highly reduced portions of the query space yielded potentially biased utility results.
By instead evaluating \texttt{RAP}\ across the entire query space, we establish \texttt{RAP}'s unbiased utility and determine what impact reducing the query space has on \texttt{RAP}'s utility.
To evaluate both aspects, we must reimplement \texttt{RAP}\ from the ground up to improve its efficiency for evaluating large sets of prespecified queries.
We then use the new implementation to evaluate both aspects, clarifying the value of the \texttt{RAP}\ mechanism and thus improving its adoptability for practical uses.
To make the description of our improved evaluation precise, in Section~\ref{sec:present-err} we define the utility metric used by Aydore et al.\ and by the prior state-of-the-art mechanisms for answering prespecified queries, which we also use in our evaluations.
We then discuss in Section~\ref{sec:reevaluation-focus} the details and implications of the two aspects of Aydore et al.'s initial evaluation of \texttt{RAP}\ that we are improving upon.
In Section~\ref{sec:reimplementing}, we detail the particular obstacle in \texttt{RAP}'s initial implementation which prevents its use for our improved evaluation.
To overcome this obstacle, we reimplement \texttt{RAP}\ from the ground up and make its implementation publicly available\footnote{\href{https://github.com/bavent/large-scale-query-answering}{{https://github.com/bavent/large-scale-query-answering}}.}.
Finally, in Section~\ref{sec:reevaluation-experiments}, we describe how we use our improved implementation to perform our enhanced evaluation of \texttt{RAP}.
With regards to the role of adaptivity in \texttt{RAP}, we not only find that it is crucial to achieving high utility, we also quantitatively and definitively measure how \texttt{RAP}'s adaptivity parameters ($T$ and $K$) affect its utility.
This motivates new, more efficient search strategies to find optimal $T$ and $K$ values, thus reducing \texttt{RAP}'s computational burden and privacy cost in practice.
With regards to evaluating \texttt{RAP}\ on the full query space, we find that Aydore et al.'s initial evaluation of \texttt{RAP}\ on a reduced portion of the query space likely \textit{underestimated} \texttt{RAP}'s utility.
This was due to their reduced query space having less ``sparsity'' in the query answers (i.e., a larger portion of the queries they evaluated had non-zero answers).
This finding motivates a new line of research on mechanisms for the separate cases of when query answers are and are not sparse.
Together, the improved \texttt{RAP}\ implementation combined with the enhanced evaluation clarifies the value of the \texttt{RAP}\ mechanism, and thus improves \texttt{RAP}'s adoptability and usability in practice.
\subsection{Measuring Utility of Prespecified Queries} \label{sec:present-err}
We define the concrete utility measure used in prior works to evaluate DP mechanisms that answer prespecified sets of statistical queries.
Prior works in this setting measured the utility of DP mechanisms in terms of a mechanism's maximum error over the answers to all queries in the prespecified query set~\cite{mckenna2019graphical, vietri2020new, liu2021iterative, aydore2021differentially}.
We refer to this measure of utility as \textit{present utility}, since it is the error on the set of presently available queries, and measure it in terms of the negative of \textit{present error}; i.e., a mechanism with low present error has high present utility, and vice versa.
This error measure is formally defined as follows.
\begin{definition}[Present error] \label{def:present-err}
Let $a = Q(D) = (a_1,\dots,a_m)$ be the true answers to a given query vector $Q$ on dataset $D$, and let $\tilde{a} = (\tilde{a}_1,\dots,\tilde{a}_m)$ be mechanism $M$'s corresponding answers to the query vector.
Then $\err_P$ is the present error of the mechanism, defined as $\err_P(M,D,Q) = \E_{M(D)} \Vert a - \tilde{a}\Vert_\infty$, where the expectation is over the randomness of the mechanism.
\end{definition}
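In practice, present error is estimated by averaging the maximum per-query error over independent runs of the mechanism. A minimal Python sketch (the function names are illustrative):
\begin{verbatim}
# A minimal sketch estimating present error by Monte Carlo over the
# mechanism's randomness; `mechanism` is a hypothetical callable that
# returns one private run's answers to the query vector.
import numpy as np

def present_error(true_answers, mechanism, trials=20):
    errs = [np.max(np.abs(true_answers - mechanism()))
            for _ in range(trials)]
    return np.mean(errs)   # estimate of E[ || a - a_tilde ||_inf ]
\end{verbatim}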
We choose the $\ell_\infty$ norm as the base metric for present error because of its use in Aydore et al.'s evaluation of \texttt{RAP}\ and because it is the most popular norm utilized in the most closely related literature~\cite{mckenna2019graphical, vietri2020new, liu2021iterative, aydore2021differentially}.
However, other norms (e.g., $\ell_1$ and $\ell_2$) and even definitions of error may be equally valid in the prespecified queries setting depending on the practical use case~\cite{tao2021benchmarking}.
Thus, although we do not empirically evaluate \texttt{RAP}\ on such alternative definitions, investigating how the findings in this work change based on the error definition is an excellent direction for future work.
\subsection{Focus of \texttt{RAP}'s Reevaluation} \label{sec:reevaluation-focus}
We now detail the two primary aspects of Aydore et al.'s evaluation of \texttt{RAP}\ that we enhance in this work, and how their origins trace back to a particular challenge in \texttt{RAP}'s initial implementation.
\paragraph{Adaptivity Evaluation:}
The first aspect that we address in \texttt{RAP}'s reevaluation is how \texttt{RAP}'s adaptive component affects its utility.
To provide context, we briefly describe the non-adaptive form of \texttt{RAP}.
We then describe the adaptive form of \texttt{RAP}\ and the motivation behind its design.
Finally, we detail how Aydore et al.'s evaluation of \texttt{RAP}\ omitted studying the adaptive component's effect on utility, and we describe why that is an issue.
In its non-adaptive form, the \texttt{RAP}\ mechanism essentially reduces to privately answering the full query vector $Q$ with the Gaussian Mechanism, then applying the \texttt{RP}\ mechanism to generate a synthetic dataset.
This non-adaptive form of the \texttt{RAP}\ mechanism is a novel reimagining of the classic \textit{Projection Mechanism}~\cite{nikolov2013geometry}, a near-optimal but computationally intractable mechanism for answering prespecified queries.
By leveraging a relaxation of the query space and utilizing EEDQs, Aydore et al.\ describe how their non-adaptive \texttt{RAP}\ mechanism can use modern tools (e.g., GPU-accelerated optimization) to efficiently generate a relaxed synthetic dataset which can, in principle, answer the prespecified queries with low (albeit non-optimal) error.
Moreover, they prove a theoretical result (Theorem 4.1,~\cite{aydore2021differentially}) which confirms the power of the non-adaptive \texttt{RAP}\ mechanism, achieving a $\sqrt{d'}$ factor of utility improvement over the prior state-of-the-art mechanism.
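To build intuition, the following sketch outlines the non-adaptive form of \texttt{RAP}\ in Python/JAX. All names here are illustrative rather than taken from the actual implementation, the noise scale \texttt{sigma} is treated as an input (its precise calibration to the privacy parameters follows the accounting in the original paper), and \texttt{eval\_queries} is assumed to evaluate the differentiable surrogate queries on a (relaxed) dataset.
\begin{verbatim}
import jax
import jax.numpy as jnp

def nonadaptive_rap(D, eval_queries, sigma, key,
                    n_prime=1000, steps=500, lr=0.05):
    key_noise, key_init = jax.random.split(key)
    # Step 1: answer the full query vector with the Gaussian Mechanism.
    a = eval_queries(D)
    a_tilde = a + sigma * jax.random.normal(key_noise, a.shape)
    # Step 2: relaxed projection -- gradient descent on a synthetic
    # dataset D' in [0,1]^{n' x d} so that its surrogate answers
    # match the noisy answers.
    D_syn = jax.random.uniform(key_init, (n_prime, D.shape[1]))
    loss = lambda Ds: jnp.sum((eval_queries(Ds) - a_tilde) ** 2)
    grad = jax.jit(jax.grad(loss))
    for _ in range(steps):
        D_syn = jnp.clip(D_syn - lr * grad(D_syn), 0.0, 1.0)
    return D_syn
\end{verbatim}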
Aydore et al.\ go on to describe the full adaptive form of \texttt{RAP}\ parameterized by $T$ and $K$.
This adaptive form of \texttt{RAP}\ optimizes the synthetic dataset iteratively over $T$ separate rounds, in each round adaptively selecting $K$ new queries to incorporate into the optimization procedure.
Their stated motivation for introducing adaptivity into \texttt{RAP}\ was to more wisely expend the privacy budget by adaptively optimizing over a small number of ``hard'' queries, and they conjecture (without a result similar to that of their Theorem 4.1) that such adaptivity will result in higher utility than that achieved by the non-adaptive form of \texttt{RAP}.
Aydore et al.\ then perform an empirical evaluation of \texttt{RAP}\ across a range of parameters and datasets, and establish that it achieves state-of-the-art utility --- however, the utility benefits of \texttt{RAP}'s adaptivity are left unanalyzed.
Specifically, in all evaluations they report the best utility of \texttt{RAP}\ across $2 \le T \le 50$ and $5 \le K \le 100$.
There are two issues related to this.
\begin{enumerate}
\item The values of $T$ and $K$ that achieved the maximum utility are not reported, only what that maximum utility was. Thus, it is unclear how these parameters affect utility. This is problematic in practice because not only is evaluating \texttt{RAP}\ on multiple choices of $T$ and $K$ computationally expensive, but because each evaluation consumes a portion of the overall differential privacy budget.
\item The non-adaptive form of \texttt{RAP}\ is not empirically evaluated. Without evaluating the non-adaptive \texttt{RAP}\ mechanism as a baseline, there is no meaningful way to understand or measure the benefit of adaptivity.
\end{enumerate}
Combined, these two issues leave open the question of how valuable the adaptive component of \texttt{RAP}\ is, and to what extent its adaptivity affects utility.
\paragraph{Query Space Evaluation:}
The second aspect that we address in \texttt{RAP}'s reevaluation is how reducing the query space affects \texttt{RAP}'s utility for answering $k$-way marginals.
To begin, we describe the motivation behind evaluating this aspect: that for computational ease, Aydore et al.\ only evaluated \texttt{RAP}\ on a reduced portion of the query space.
We then detail how this reduction may have biased their evaluation's results.
Aydore et al.'s empirical evaluation focuses on \texttt{RAP}'s utility for answering $k$-way marginals, specifically 3-way and 5-way marginals.
Reviewing the code of their published \texttt{RAP}\ implementation, we determined that a heuristic filtering criterion was being applied to the query space to remove any ``large'' marginals from possible evaluation.
Specifically, any marginal which had more consistent queries than the number of records in the dataset ($n$) was not considered for evaluation.
The impact that filtering had on the evaluated workloads varied depending on $k$ and $n$.
For instance, with 3-way marginals on the ADULT dataset, the filtering criterion removed the top $24\%$ largest 3-way marginals which accounted for over $90\%$ of all consistent queries.
With 5-way marginals on the ADULT dataset, this filtering criterion removed the top $92\%$ largest 5-way marginals which accounted for over $99.99\%$ of all consistent queries.
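For concreteness, the following sketch reproduces the effect of this criterion on a toy feature space (the function and values are illustrative, not taken from the actual implementation): a marginal over features with cardinalities $c_1, \dots, c_k$ has $\prod_i c_i$ consistent queries, and the criterion drops the marginal whenever this product exceeds $n$.
\begin{verbatim}
import itertools
import numpy as np

def apply_filter(feature_cards, k, n):
    # Keep a k-way marginal only if its number of consistent queries
    # (the product of its features' cardinalities) is at most n.
    kept, dropped = [], []
    for feats in itertools.combinations(range(len(feature_cards)), k):
        n_queries = int(np.prod([feature_cards[f] for f in feats]))
        (kept if n_queries <= n else dropped).append(feats)
    return kept, dropped

# Toy example: five features with these cardinalities and n = 1000.
kept, dropped = apply_filter([2, 5, 10, 20, 50], k=3, n=1000)
\end{verbatim}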
Discussing this discrepancy directly with the authors\footnote{\href{https://github.com/amazon-research/relaxed-adaptive-projection/issues/2}{https://github.com/amazon-research/relaxed-adaptive-projection/issues/2}} revealed that the filtering criterion was an intentional choice meant to reduce the computational burden during experimentation, and they conjectured that removing this criterion and rerunning all experiments would yield results comparable to those obtained by increasing the workload size.
Since all baseline mechanisms were evaluated on the same query vectors, the filtering criterion does not result in favorable utility for \texttt{RAP}\ relative to the prior state-of-the-art mechanisms that serve as their baselines.
However, for marginals with significantly more consistent queries than $n$, most queries must evaluate to 0 by a pigeonhole argument: each record matches exactly one of a marginal's consistent queries, so at most $n$ of those queries can have nonzero answers.
Thus, the filtering criterion may result in favorable utility for \texttt{RAP}\ relative to the naive baseline mechanism that they consider in their work: \texttt{All-0}, the mechanism which outputs 0 as the answer to every query.
This leaves open the question of \texttt{RAP}'s utility on large, unfiltered query spaces, both in absolute terms and relative to the baseline \texttt{All-0}\ mechanism.
\subsection{Reimplementing \texttt{RAP}} \label{sec:reimplementing}
We now describe why these two aspects cannot be evaluated using Aydore et al.'s initial \texttt{RAP}\ implementation: briefly, the amount of memory required by the implementation is inordinate.
We then detail how we overcome this challenge by reimplementing \texttt{RAP}\ in a way that trades off a significant amount of memory usage for a potential increase in runtime.
Conceptually, both aspects could be evaluated using Aydore et al.'s published code.
However, evaluating either the non-adaptive form of \texttt{RAP}\ or evaluating a larger portion of the query space both lead to the same obstacle: Aydore et al.'s \texttt{RAP}\ implementation requires an inordinate amount of memory to answer the corresponding large number of queries.
We have identified several portions of their code where this memory bottleneck occurs, all of which fail to execute either when the total number of consistent queries is ``too large'' or when any marginal has ``too many'' consistent queries.
Consequently, Aydore et al.\ were unable to evaluate either the non-adaptive form of \texttt{RAP}\ or a significant portion of the $k$-way marginals' consistent query space.
The high-level idea behind our approach for overcoming this implementation challenge is to trade off some of \texttt{RAP}'s required memory for a potential increase in its runtime.
Our motivation for this approach is inspired by recent advances in differentially private deep learning literature.
In particular, the canonical DP-SGD mechanism~\cite{abadi2016deep} for training machine learning models with differential privacy had been plagued by poor computational performance due to several of its underlying operations (e.g., per-example gradient clipping, uniformly random batch sampling without replacement, etc.) not being natively supported by modern machine learning frameworks.
More recently however, several highly performant DP-SGD implementations~\cite{papernot2019machine, opacus, subramani2021enabling} have been deployed which dramatically decrease the mechanism's runtime in exchange for a mild increase in its memory usage.
To our knowledge, our high-level approach is the first in DP literature to make practical use of this trade-off in the opposite direction: decreasing the mechanism's memory requirement by increasing its runtime.
Concretely, we overcome this implementation challenge by reimplementing \texttt{RAP}\ via the following high-level steps.
First, we reduce the peak memory requirement of \texttt{RAP}'s original implementation, which was caused by implicitly evaluating all marginals (or, more generally, all thresholds) in parallel.
We accomplish this by evaluating each marginal (or threshold) sequentially in order to distribute the computational burden.
To further reduce the overall memory requirement, rather than explicitly enumerating and storing every query consistent with each marginal (threshold), we represent the queries implicitly and only convert a query to its explicit representation when it is needed for evaluation.
To evaluate arbitrary sets of such individual queries, we implement the core EEDQ evaluation function from the ground up by designing a simple, direct function to efficiently evaluate arbitrary predicates.
With such a function implemented, we then leverage a combination of powerful language features --- namely vectorizing maps and just-in-time compilation in JAX~\cite{jax2018github} --- to enable efficient evaluation, summation, and differentiation of large sets of predicates without exceeding memory constraints.
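The following sketch illustrates the core of this approach in JAX; the names are ours and are simplified relative to the actual implementation. A conjunctive (marginal) query over one-hot-encoded data is represented implicitly by the indices of the binarized features it selects, and its product form is differentiable, so the same function serves as the query's surrogate on relaxed data. Vectorizing maps and just-in-time compilation then evaluate one marginal's queries at a time, so peak memory scales with a single marginal rather than with the full workload.
\begin{verbatim}
import jax
import jax.numpy as jnp

def eval_query(record, feat_idx):
    # One conjunctive query, represented implicitly by feat_idx; the
    # product is exact on 0/1 records and differentiable on relaxed
    # records in [0, 1].
    return jnp.prod(record[feat_idx])

# vmap over records, then over one marginal's queries; jit the result.
eval_records = jax.vmap(eval_query, in_axes=(0, None))
eval_marginal = jax.jit(jax.vmap(
    lambda D, idx: eval_records(D, idx).mean(), in_axes=(None, 0)))

def answer_workload(D, marginals):
    # Evaluate marginals sequentially rather than in parallel,
    # trading runtime for a much smaller peak memory footprint.
    return jnp.concatenate(
        [eval_marginal(D, q_idx) for q_idx in marginals])
\end{verbatim}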
In addition to these implementation improvements which primarily serve to reduce \texttt{RAP}'s memory requirement, we additionally incorporate an algorithmic improvement based on recent theoretical findings to help offset the increased runtime from our aforementioned deparallelization step.
Specifically, by trivially adapting the \textit{Oneshot Top-$K$ Selection with Gumbel Noise} mechanism~\cite{durfee2019practical, cesar2021bounding} to our setting, we replace \texttt{RAP}'s iterative Adaptive Selection (\texttt{AS}) mechanism with the more efficient Oneshot Adaptive Selection (\texttt{OSAS}) mechanism in Alg.~\ref{alg:osas}.
The results of \cite{durfee2019practical} prove that the \texttt{OSAS}\ mechanism is probabilistically equivalent to \texttt{AS}\ (i.e., both mechanisms have identical output distributions, and thus achieve identical privacy and utility), but \texttt{OSAS}\ requires only 1 pass over a set of values in order to select the top-$K$ instead of the $K$ passes that \texttt{AS}\ requires.
\begin{algorithm}
\caption{Oneshot Adaptive Selection (\texttt{OSAS}) Mechanism}
\vspace{0.4em} \hspace*{\algorithmicindent} \textbf{Input} \vspace*{-0.6em}
\begin{itemize}[leftmargin=1.5em] \setlength\itemsep{-0.2em}
\item $D, D'$: The dataset and synthetic dataset.
\item $Q, \hat{Q}$: A vector of all statistical queries and their corresponding surrogate queries.
\item $Q_s$: A set of already selected queries.
\item $K$: The number of new queries to select.
\item $\rho$: Differential privacy parameter.
\end{itemize}
\hspace*{\algorithmicindent} \textbf{Body}
\begin{algorithmic}[1]
\STATE Let $\Delta = (|\hat{q}_{i}(D) - \hat{q}_{i}(D')| : q_i \in Q \setminus Q_s)$.
\STATE Let $I$ denote the indices of the top-$K$ values of: $\Delta_i + Z_i$, where $Z_i \overset{\text{iid}}{\sim} \texttt{Gumbel}\left(\sqrt{\frac{K}{2\rho |D|^2}}\right)$.
\STATE Let $\tilde{a}_i = \texttt{GM}(D, q_i, \frac{\rho}{2K}) \quad \forall i \in I$.
\STATE Let $Q_s = Q_s \cup \{q_i\}_{i \in I}$.
\STATE {\bfseries Return:} $Q_s$ and $\tilde{a} = (\tilde{a}_i: q_i \in Q_s)$.
\end{algorithmic}
\label{alg:osas}
\end{algorithm}
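A minimal JAX sketch of the selection step of Alg.~\ref{alg:osas} follows (names are illustrative, and the Gaussian Mechanism answering step is omitted):
\begin{verbatim}
import jax
import jax.numpy as jnp

def osas_select(key, deltas, K, rho, n):
    # deltas[i] = |q_hat_i(D) - q_hat_i(D')| for each not-yet-selected
    # query.  One pass: add i.i.d. Gumbel noise at the scale from the
    # algorithm, then take the indices of the K largest noisy values.
    scale = jnp.sqrt(K / (2.0 * rho * n ** 2))
    noisy = deltas + scale * jax.random.gumbel(key, deltas.shape)
    _, top_idx = jax.lax.top_k(noisy, K)
    return top_idx
\end{verbatim}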
Figure~\ref{fig:rap-runtime} compares our new implementation to Aydore et al.'s original implementation without filtering out any large marginals.
Specifically, this figure shows the runtimes of both implementations executing the non-adaptive and adaptive variants of \texttt{RAP}\ given the same amount of GPU memory on two datasets across a range of workload sizes\footnote{The runtimes for both implementations (and all subsequent evaluations in this work) were executed on an Nvidia RTX 3090 consumer GPU with 24 GB VRAM.}.
We find that for the non-adaptive variant of \texttt{RAP}, the original implementation was only able to evaluate tiny workloads, while our new reimplementation was able to evaluate massive workloads (albeit with a very high runtime); this represents a 500x improvement in memory efficiency for our reimplementation.
For the adaptive variant of \texttt{RAP}\ (specifically, with $T$=16 and $K$=4), we find that our reimplementation's runtime is comparable to the original implementation's --- outperforming it slightly on one dataset, while being outperformed slightly on the other.
On the ADULT dataset, both implementations were able to exhaustively evaluate the complete space of marginals.
On the LOANS dataset, the original implementation was able to consistently evaluate marginal workloads of size 256, but was unable to consistently evaluate the largest workload size of 1024; this represents up to a 4x improvement in memory efficiency for our reimplementation.
\begin{figure}
\centering
\begin{tabular}{c|c}
\hspace*{-0.5cm}
\includegraphics[width=.4\linewidth]{figs/reevaluating/nonadaptive-runtime.pdf} & \includegraphics[width=.4\linewidth]{figs/reevaluating/adaptive-runtime.pdf}
\end{tabular}
\caption{Runtime evaluations of non-adaptive and adaptive \texttt{RAP}\ variants on the original implementation and reimplementation, on both ADULT and LOANS datasets.}
\label{fig:rap-runtime}
\end{figure}
\subsection{Reevaluating \texttt{RAP}} \label{sec:reevaluation-experiments}
Using our new implementation, we reevaluate both the adaptivity and query space aspects of \texttt{RAP}, enabling new findings.
We start by simply establishing \texttt{RAP}'s present utility for answering $k$-way marginals on unbiased random samples of the full marginal space (i.e., without filtering out any ``large'' marginals).
This results in \texttt{RAP}\ answering approximately 50x more queries at its peak than in Aydore et al.'s initial evaluation on filtered marginals.
We then use these results to analyze the role that adaptivity plays in \texttt{RAP}'s utility.
Finally, we address the question of whether filtering the large marginals out of \texttt{RAP}'s evaluation significantly impacts its utility in order to determine if the filtering criterion is a reasonable heuristic to apply to reduce \texttt{RAP}'s computational burden in future evaluations.
Taken together, this improved implementation and reevaluation conclusively demonstrate that \texttt{RAP}\ is a feasible and valuable mechanism for practical, real-world use cases.
Furthermore, in conjunction with our improved implementation, our findings enable new capabilities such as more efficient search strategies for optimal $T$ and $K$ parameters.
\subsubsection*{Evaluation Datasets} \label{sec:rap-datasets}
As in prior works on evaluating DP mechanisms that answer statistical queries \cite{aydore2021differentially, vietri2020new, mckenna2019graphical}, all empirical evaluations use the ADULT~\cite{frank2010uci} and LOANS~\cite{vietri2020new} datasets with the same preprocessing.
Table~\ref{tab:rap-datasets} contains a high level description of each dataset.
\begin{table}
\centering
{\begin{tabular}{c|c|c|c}
\bf{Dataset} & \bf{Records} & \bf{Features} & \bf{Binarized Features} \\
\hline
ADULT & 48,842 & 14 & 588 \\
LOANS & 42,535 & 48 & 4,427 \\
\end{tabular}}
\caption{Datasets for empirical evaluations. Binarized features represent the features after a transformation via one-hot encoding.}
\label{tab:rap-datasets}
\end{table}
\subsubsection{$k$-way Marginal Evaluation of \texttt{RAP}}
To begin \texttt{RAP}'s reevaluation, we concretely establish its utility on a larger portion of the query space than previously considered by Aydore et al.
Specifically, we evaluate \texttt{RAP}'s present error for answering uniformly random workloads of 3-way marginals across a range of parameters on both the ADULT and LOANS datasets, and we do so \textit{without any} thresholding criterion to filter out ``large'' marginals.
This results in \texttt{RAP}\ answering approximately 50x as many queries as in its original evaluation by Aydore et al.
Table~\ref{tab:evaluation-experiments} provides a reference for the parameter ranges in this experiment.
For each setting of parameters, we evaluate the adaptive variant of \texttt{RAP}\ across a range of $T$ and $K$ values and report the combinations that achieve minimal present error.
We separately evaluate the non-adaptive ($T=1, K=m$) variant of \texttt{RAP}\ across the same range of parameters in order to answer the question of whether or not there is any benefit to \texttt{RAP}'s adaptivity.
Additionally, as baselines, we evaluate the present utility of the \texttt{All-0}\ and \texttt{GM}\ mechanisms, enabling us to put the utility of \texttt{RAP}\ into context.
The results of this experiment are visualized in Figure~\ref{fig:reevaluating-lines}.
\begin{table}
\centering
\begin{tabular}{ |c|c| }
\hline
Primary Mechanism & \texttt{RAP} \\
\hline
Baseline Mechanisms & $\texttt{All-0}, \texttt{GM}$ \\
\hline
Utility Measure & $\err_P$ \\
\hline
$D$ & ADULT, LOANS \\
\hline
$\epsilon$ & $0.01, 0.1, 1$ \\
\hline
$\delta$ & $1/|D|^{2}$ \\
\hline
$|W|$ & $1, 4, 16, 64, 256$ \\
\hline
$n^\prime$ & $10^3$ \\
\hline
$T$ & $1, 4, 16, 64$ \\
\hline
$K$ & $4, 16, 64, 256, m$ \\
\hline
$k$ & $3$ \\
\hline
\end{tabular}
\caption{Experimental reference table for reevaluating \texttt{RAP}'s utility on $k$-way marginals.}
\label{tab:evaluation-experiments}
\end{table}
\begin{figure}
\centering
\begin{tabular}{c|c}
\hspace*{-0.5cm}
\includegraphics[width=.4\linewidth]{figs/reevaluating/adult-lines.pdf} & \includegraphics[width=.4\linewidth]{figs/reevaluating/loans-lines.pdf}
\end{tabular}
\caption{Present error across a range of parameters and datasets for the adaptive and non-adaptive variants of \texttt{RAP}, the \texttt{GM}\ baseline, and the \texttt{All-0}\ baseline. Present error for the adaptive variant of \texttt{RAP}\ is computed as the minimal error across the range of $T$ and $K$ values (with the specific $(T,K)$ pair that achieved the minima reported at each point).}
\label{fig:reevaluating-lines}
\end{figure}
There are several immediate conclusions that can be drawn from these results.
The first is that while the non-adaptive variant of \texttt{RAP}\ achieves lower error than the \texttt{GM}\ baseline, its utility is nearly identical to the \texttt{All-0}\ baseline for all but the smallest workload sizes.
This result likely stems from the fact that the answers to the large majority of a marginal's consistent queries are 0 or nearly 0, with only a small percentage of answers having larger values.
Since the non-adaptive variant of \texttt{RAP}\ first privatizes the answers to all queries, in the synthetic dataset optimization procedure it is likely unable to distinguish between the few answers that are truly larger than 0 vs.\ the outliers that are only large due to random chance.
The second conclusion is that the adaptive variant of \texttt{RAP}\ achieves significantly lower present error than the non-adaptive \texttt{RAP}\ variant as well as the baselines.
This implies that \texttt{RAP}'s adaptivity is critical for achieving low error, and thus warrants a more thorough investigation into $T$ and $K$'s precise impact on utility.
\subsubsection{Role of Adaptivity} \label{sec:reevaluating-role-of-adaptivity}
In this next experiment, we seek to understand the precise impact that $T$ and $K$ have on \texttt{RAP}'s utility.
From Figure~\ref{fig:reevaluating-lines}, we are only able to glean that \texttt{RAP}\ typically achieves minimal error via smaller values of $T$ in conjunction with relatively larger values of $K$.
However, these values of $T$ and $K$ vary dramatically across parameter settings and datasets.
Moreover, Figure~\ref{fig:reevaluating-lines} provides no information about \texttt{RAP}'s utility for $T$ and $K$ combinations that did not achieve minimal error.
To better understand the role these parameters play in \texttt{RAP}'s utility, we examine the present error of the adaptive variant of \texttt{RAP}\ for every $(T,K)$ pair across the same parameter settings from Table~\ref{tab:evaluation-experiments}.
The results of this experiment are shown in Figures~\ref{fig:reevaluating-adaptivity-ws} and \ref{fig:reevaluating-adaptivity-eps}.
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=1.05\linewidth]{figs/reevaluating/adult-adaptivity-ws.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=1.05\linewidth]{figs/reevaluating/loans-adaptivity-ws.pdf}
\end{tabular}
\caption{Present error across a range of workload sizes with $\epsilon = 0.1$ for the adaptive variant of \texttt{RAP}\ at every combination of $T$ and $K$ value considered.}
\label{fig:reevaluating-adaptivity-ws}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=\linewidth]{figs/reevaluating/adult-adaptivity-eps.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=\linewidth]{figs/reevaluating/loans-adaptivity-eps.pdf}
\end{tabular}
\caption{Present error across a range of $\epsilon$ values with $|W|=256$ for the adaptive variant of \texttt{RAP}\ at every combination of $T$ and $K$ value considered.}
\label{fig:reevaluating-adaptivity-eps}
\end{figure}
The heatmaps in both figures provide interesting insight into \texttt{RAP}'s adaptivity.
In Figure~\ref{fig:reevaluating-adaptivity-ws}, with $\epsilon$ fixed at $0.1$, we see that there is no single $(T,K)$ value or region that consistently achieves minimal error across all workload sizes.
Instead, we notice that at each workload size, there is diagonal banding around a fixed value of $T \cdot K$ that achieves approximately minimal error.
That is, for any particular workload size, let $(T^*, K^*)$ denote the $(T, K)$ pair that induces minimal error for \texttt{RAP}\ across our considered range of values, and let $c^*\coloneqq T^* \cdot K^*$.
We see that for other $(T,K)$ pairs such that $T \cdot K \approx c^*$, the corresponding error is typically comparable to the minimal error.
Moreover, we see that as $T\cdot K$ diverges from $c^*$, \texttt{RAP}'s error increases essentially monotonically.
We hypothesize that for $T \cdot K \ll c^*$, \texttt{RAP}'s error is relatively high because \texttt{RAP}\ had not answered and optimized over a sufficient number of queries.
For $T \cdot K \gg c^*$, we hypothesize that \texttt{RAP}'s error is relatively high because the privacy budget is spread too thin across answering a large number of queries, resulting in \texttt{RAP}\ utilizing overly noisy queries to optimize its underlying synthetic dataset.
These hypotheses are supported by the results in Figure~\ref{fig:reevaluating-adaptivity-eps}.
Specifically, as $\epsilon$ becomes larger, not only does the minimal error of \texttt{RAP}\ decrease, but the $T$ and $K$ values that achieve the minimal error (along with their corresponding diagonal bands) are pushed to increasingly large values.
Taken together, these results imply that in order to achieve low error, \texttt{RAP}\ primarily requires answering and optimizing over a \textit{specific number} of queries --- it is less important whether those queries are answered in small batches over a large number of adaptive rounds or in large batches over a small number of adaptive rounds.
This finding is important to \texttt{RAP}'s usefulness in practice, as it motivates improved search strategies for optimal $(T,K)$ values.
Improved search strategies (beyond the naive $N\times N$ grid search that we performed) are important for two reasons.
\begin{enumerate}
\item Evaluating \texttt{RAP}\ across a range of $T$ and $K$ values can be computationally expensive. Thus, improved search strategies would decrease the computational cost. Alternatively, at a fixed computational cost, improved search strategies would allow \texttt{RAP}\ to be evaluated across a larger set of $T$ and $K$ values.
\item In practice, each evaluation of \texttt{RAP}\ on any $(T,K)$ setting consumes a portion of the privacy budget, even if only the optimal setting is chosen in the end. Thus, reducing the total number of evaluated $(T,K)$ settings enables more efficient use of the overall privacy budget.
\end{enumerate}
We provide one example of an improved search strategy over the naive $N\times N$ grid search strategy as follows.
First, the observed monotonicity of present error about $c^*$ could be leveraged to binary search for a $c \coloneqq T\cdot K$ setting along the positive diagonal that achieves approximately minimal error.
Then, a linear search across all $(T^\prime, K^\prime)$ settings such that $T^\prime \cdot K^\prime = c$ could be performed to compute the setting that achieves minimal error.
Relative to the grid search, this strategy would yield an $\mathcal{O}(N)$ factor improvement both in the portion of the privacy budget consumed as well as in the computational cost.
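The following sketch outlines this strategy, where \texttt{eval\_error} is a hypothetical callback that runs \texttt{RAP}\ once for a given $(T,K)$ setting and returns its present error; in practice each call consumes privacy budget, which is precisely the cost the strategy economizes. Stage 1 searches along the diagonal assuming the observed near-unimodality of error in $c = T \cdot K$; stage 2 scans the factorizations of the selected $c$.
\begin{verbatim}
from functools import lru_cache

def search_tk(eval_error, cs=(4, 16, 64, 256, 1024)):
    @lru_cache(maxsize=None)
    def diag_err(i):
        # Error along the diagonal, T = K = sqrt(c); cached so each
        # (T, K) setting is evaluated (and pays for privacy) once.
        t = int(round(cs[i] ** 0.5))
        return eval_error(t, t)

    # Stage 1: search for c* = T*K over a geometric grid, assuming
    # error is approximately unimodal in c.
    lo, hi = 0, len(cs) - 1
    while hi - lo > 2:
        m1, m2 = lo + (hi - lo) // 3, hi - (hi - lo) // 3
        if diag_err(m1) < diag_err(m2):
            hi = m2
        else:
            lo = m1
    c_star = cs[min(range(lo, hi + 1), key=diag_err)]

    # Stage 2: linear search over factorizations T * K = c*.
    pairs = [(t, c_star // t)
             for t in (2 ** i for i in range(c_star.bit_length()))
             if c_star % t == 0]
    return min(pairs, key=lambda tk: eval_error(*tk))
\end{verbatim}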
\subsubsection{Utility Impact of Filtering Marginals}
In the final experiment, we analyze what impact filtering out marginals with ``too many'' consistent queries has on \texttt{RAP}'s utility.
Recall that in Aydore et al.'s evaluation, as a heuristic to reduce the computational burden of experimentally evaluating \texttt{RAP}, any marginal was removed from consideration if it contained more consistent queries than the number of records in the underlying dataset.
Here, we compare how \texttt{RAP}'s utility is affected by this marginal filtering criterion.
We initiate this comparison by reevaluating \texttt{RAP}\ with and without the filtering criterion.
We do so across the range of parameters in Table~\ref{tab:evaluation-experiments}, and we record the minimal present error of \texttt{RAP}\ at each parameter setting across all $(T,K)$ pairs.
We then perform two analyses on these results, one focusing on how the workload size affects \texttt{RAP}'s present error with and without marginal filtering, and another analyzing how the total number of queries affects \texttt{RAP}'s present error.
We conclusively determine that \texttt{RAP}'s present error is impacted by filtering large marginals.
More specifically, we find that when holding the number of queries that \texttt{RAP}\ evaluates constant, filtering large marginals \textit{increases} \texttt{RAP}'s present error.
\paragraph{Influence of Workload Size on Utility}
Aydore et al.\ hypothesized that removing the marginal filtering criterion would cause \texttt{RAP}'s present error to increase comparably to the error increase induced by increasing the workload size.
To test this hypothesis, we perform a standard nested regression analysis~\cite{gelman2006data} on the \texttt{RAP}\ evaluation results.
For brevity, we state the steps of this analysis and then immediately jump to the results, deferring the regression details to Appendix~\ref{app:many-queries-appendix}.
At the high level, the steps for this analysis are as follows.
For the ADULT and LOANS datasets separately, we define a full regression model to account for the following three variables' (and their interactions') impact on \texttt{RAP}'s present error: the DP level $\epsilon$, the workload size $|W|$, and whether the marginal filtering criterion was applied.
We also define a restricted regression model that accounts for $\epsilon$ and $|W|$, but does not distinguish whether or not a result had the marginal filtering criterion applied.
Following the standard approach for a nested regression analysis, we first determine whether the full regression model is a good fit for the \texttt{RAP}\ evaluation results (based on the fitted model's adjusted $r^2$ value, $F$-statistic $p$-value, and omnibus $p$-value).
We then compare the fit of the full model to the fit of the restricted model by performing a likelihood ratio test, analyzing the $p$-value of the resulting $\chi^2$ statistic.
Since the full model only differs from the restricted model in that it accounts for whether the marginal filtering criterion was applied, we can conclude that if the fit of the full model is both statistically sound and statistically significantly better than that of the restricted model, then the marginal filtering criterion impacts \texttt{RAP}'s present error.
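The following sketch shows the shape of this analysis using the \texttt{statsmodels} library; the column names and model formulas are placeholders, and the full regression specifications appear in Appendix~\ref{app:many-queries-appendix}.
\begin{verbatim}
import statsmodels.formula.api as smf

def filtering_lr_test(df):
    # df: one row per RAP evaluation, with placeholder columns err
    # (present error), eps, ws (workload size), and filtered (whether
    # the marginal filtering criterion was applied).
    full = smf.ols("err ~ eps * ws * filtered", data=df).fit()
    restricted = smf.ols("err ~ eps * ws", data=df).fit()
    # Inspect the full model's fit, then run the likelihood ratio
    # test of the full model against the nested restricted model.
    print(full.rsquared_adj, full.f_pvalue)
    return full.compare_lr_test(restricted)  # (lr_stat, p_value, df)
\end{verbatim}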
\begin{figure}
\centering
\begin{tabular}{cc}
\hspace*{-0.65cm}
\includegraphics[width=.45\linewidth]{figs/reevaluating/adult-ws-regressions.pdf} & \includegraphics[width=.45\linewidth]{figs/reevaluating/loans-ws-regressions.pdf}
\end{tabular}
\caption{Regression models for each dataset of \texttt{RAP}'s present error vs.\ workload size for results from filtered and unfiltered marginals, at $\epsilon=0.1$.}
\label{fig:ws-regressions}
\end{figure}
From this analysis, Figure~\ref{fig:ws-regressions} shows the fitted full regression model on both datasets with $\epsilon$ fixed at $0.1$.
We find that the full regression models for both datasets fit the \texttt{RAP}\ evaluation results well.
Thus, we perform the aforementioned likelihood ratio test against the restricted models for each dataset.
The corresponding $p$-values for the models on the ADULT and LOANS \texttt{RAP}\ evaluations were $0.026$ and $0.623$ respectively.\footnote{We report the individual $p$-values for all statistical hypotheses tested. However, we control the family-wise error rate $\alpha$ (i.e., the probability $\alpha$ that at least one ``false positive'' finding will occur) using the Holm–Bonferroni method~\cite{holm1979simple}. At the $\alpha=0.05$ level, no conclusions based on the individual $p$-values change when the Holm–Bonferroni method is applied.}\enlargethispage{-\baselineskip}
The small $p$-value for the model corresponding to the \texttt{RAP}\ evaluations on the ADULT dataset enables us to conclude that the marginal filtering criterion does have an impact on \texttt{RAP}'s present error.
However, the coefficients (and their corresponding $p$-values) in the full regression model do not indicate any clear, statistically significant trend for how the present error is impacted by the workload size when comparing the filtered vs.\ unfiltered \texttt{RAP}\ evaluations.
Moreover, regardless of the workload size, due to the lack of significance in many of the coefficients' $p$-values, we are unable to use this model to confidently determine the marginal filtering criterion's impact on \texttt{RAP}'s present error.
Thus, although we are able to conclude that incorporating the marginal filtering criterion into \texttt{RAP}'s evaluation does impact its present error, we are unable to confirm Aydore et al.'s hypothesis on the precise nature of this impact.
\paragraph{Influence of Number of Queries on Utility}
We now perform a more direct analysis of the marginal filtering criterion's impact on \texttt{RAP}'s utility.
Our previous regression analysis assessed Aydore et al.'s hypothesis regarding the filtering criterion's influence on \texttt{RAP}'s present error as a function of workload size.
However, the filtering criterion does not affect workload size directly --- it only affects the total number of queries consistent with the marginals in the workload.
As such, we believe that a more informative assessment would be to analyze the marginal filtering criterion's influence on \texttt{RAP}'s present error as a function of the total number of consistent queries that it evaluates.
\begin{figure}
\centering
\begin{tabular}{cc}
\hspace*{-0.65cm}
\includegraphics[width=.45\linewidth]{figs/reevaluating/adult-nq-regressions.pdf} & \includegraphics[width=.45\linewidth]{figs/reevaluating/loans-nq-regressions.pdf}
\end{tabular}
\caption{Regression models for each dataset of \texttt{RAP}'s present error vs.\ number of queries for results from filtered and unfiltered marginals, at $\epsilon=0.1$.}
\label{fig:nq-regressions}
\end{figure}
We perform this assessment using precisely the same statistical analysis and regression models as before, only now having the full and restricted models account for the total number of queries rather than workload size.
Figure~\ref{fig:nq-regressions} shows the fitted full regression models on both datasets with $\epsilon$ fixed at $0.1$.
Again, the full regression models for both datasets fit the \texttt{RAP}\ evaluation results well, allowing us to then test these full models against their corresponding restricted models.
The corresponding $p$-values of the likelihood ratio tests for the models on both the ADULT and LOANS \texttt{RAP}\ evaluations were less than $0.0001$, indicating that the filtering criterion has a statistically significant impact on \texttt{RAP}'s present error (for both datasets this time).
The results from the figure for both datasets visually imply that including the filtering criterion increases \texttt{RAP}'s present error for any given number of queries, and that this increase worsens as the total number of queries grows.
By examining the coefficients (and their corresponding $p$-values) of the full regression models on both datasets, we confirm that this visual trend holds statistically as well.
These results match intuition: in order for a result with the filtering criterion to have approximately the same number of queries as a result without it, the result with filtering would likely have corresponded to a larger workload.
A larger workload with the same number of queries implies a more diverse set of queries, whereas a smaller workload with the same number of queries implies a less diverse set with sparser support (i.e., more of the queries evaluate to 0).
Thus, we conclude that Aydore et al.'s initial evaluation of \texttt{RAP}\ --- especially for the highly filtered 5-way marginals --- likely overestimates \texttt{RAP}'s present error.
Moreover, this finding motivates a new branch of work on large-scale query answering for the separate cases of when the queries have dense support vs.\ sparse.
\section{Overview} \label{sec:many-queries-introduction}
Many data analysis and machine learning algorithms, at their core, involve answering \textit{statistical queries}.
Statistical queries are the class of queries that answer the question: ``What fraction of entries in a given dataset have a particular property $P$?''
Because of their ubiquity, developing differentially private mechanisms to effectively answer statistical queries has been one of the most well studied problems in DP~\cite{dinur2003revealing, blum2005practical, dwork2006calibrating, blum2008learning, dwork2009complexity, dwork2010boosting, roth2010interactive, hardt2010multiplicative, hardt2010simple, gupta2012iterative}.
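For example, answering a statistical query amounts to a normalized count, as in the following minimal illustration with toy records:
\begin{verbatim}
# A statistical query: the fraction of records with property P.
def statistical_query(D, P):
    return sum(1 for x in D if P(x)) / len(D)

# Toy example: fraction of records whose first attribute equals 1.
D = [(1, 0), (0, 1), (1, 1)]
print(statistical_query(D, lambda x: x[0] == 1))  # ~0.667
\end{verbatim}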
Early DP research primarily focused on designing mechanisms to answer specific, individual statistical queries in an interactive setting.
In that setting, queries are posed and answered one at a time with the goal of answering each query with minimal error while ensuring privacy.
However, most practical data-driven algorithms do not pose only a single query.
Instead, they pose a large number of queries, referred to as a \textit{query workload}.
When a query workload is available in advance (i.e., prespecified), it is possible to design DP mechanisms that take advantage of the relationships between the queries to achieve higher utility relative to answering the individual queries independently.
In this work, we address the problem of privately answering a large number of queries by answering the following high-level research question.
\begin{quoting}
\textit{In the two following settings, to what extent are differentially private mechanisms able to answer a large number of statistical queries efficiently and with low error?
\begin{itemize}
\item[] Setting 1: All queries are prespecified; i.e., known in advance.
\item[] Setting 2: Only partial knowledge of the queries is available in advance.\\
\end{itemize}
}
\end{quoting}
\subsubsection*{Motivating Example}
A motivating data analysis example for this work is the American Community Survey (ACS), a demographics survey program conducted by the U.S.\ Census Bureau~\cite{bureau2016american}.
The ACS regularly gathers information such as ancestry, citizenship, educational attainment, income, language proficiency, migration, disability, employment, and housing characteristics.
The Census Bureau aggregates the individual ACS responses (microdata), then generates population estimates which are available to the public via online data tools.
The most popular tool, Public Use Microdata Sample (PUMS), enables researchers to generate custom cross-tabulations of responses to the ACS questions.
To protect the privacy of the ACS respondents, PUMS data are sampled, anonymized, and only available for sufficiently populous geographic regions.
However, studies have found that the ad hoc anonymization techniques used are not entirely sufficient to protect the privacy of individual respondents (e.g., via re-identification attacks)~\cite{abowd2018staring, christ2022differential}.
As a result, the Census Bureau has announced plans to incorporate differential privacy into the American Community Survey, and declared that it is researching ``a new fully-synthetic data product'' with a development period ending in 2025~\cite{rodriguez2021synacs,daily_2022}.
One promising and active direction within DP research is synthetic data generation~\cite{mckenna2019graphical, vietri2020new, liu2021iterative}.
The hope is that once a synthetic dataset is generated via a differentially private mechanism, researchers and analysts can pose an arbitrary number of queries against the synthetic dataset without increasing the privacy risk to those who contributed the original underlying data.
DP synthetic data generation mechanisms seek to strike a balance between distilling the information in the underlying dataset most useful to analysts while simultaneously ensuring privacy of the underlying dataset.
Thus, to maximize the eventual usefulness of the synthetic dataset, synthetic data generation mechanisms must tailor the generated dataset to the specific class of downstream tasks (e.g., a particular class of queries) that analysts are most likely interested in.
This is typically done by providing a set of queries (the query workload) to the DP mechanism, so that the mechanism can tailor the synthetic dataset to answering these queries (and, ideally, to other similar queries).
Much of DP synthetic data research has focused on designing mechanisms to generate synthetic data which can provide accurate answers (under a variety of metrics, most commonly $\ell_\infty$ error) to the subset of statistical queries known as $k$-way marginal queries~\cite{barak2007privacy, thaler2012faster, gupta2013privately, chandrasekaran2014faster, cormode2018marginal, mckenna2019graphical, vietri2020new, nixon2022latent}.
Informally, a $k$-way marginal query is one which answers the question: ``What fraction of people in the private dataset have all of the following $k$ attributes: ...?''
In this work, we focus on a strict generalization of $k$-way marginal queries known as $r$-of-$k$ threshold queries~\cite{kearns1987learnability,littlestone1988learning,hatano2004learning, thaler2012faster,ullman2013privacy,aydore2021differentially} under the $\ell_\infty$ error metric.
Informally, $r$-of-$k$ threshold queries answer the question: ``What fraction of people in the private dataset have at least $r$ of the following $k$ attributes: ...?''
As a simplified example of where such queries can be used, we consider the scenario where a social scientist is interested in using ACS data to determine what portion of a community has a substandard quality of living.
Suppose the scientist wants to examine the four following attributes for each person in the community: is their income level below the poverty line, are they unemployed, are they homeless, do they have a low net worth?
Clearly, a person having any single attribute does not necessarily mean that they have a substandard quality of living.
Similarly, a person does not need to have all four attributes to have a substandard quality of living.
Thus, the social scientist can formulate this as an $r$-of-$k$ threshold query with $r=3$, $k=4$; i.e., a person has a substandard quality of living if they have at least three of the four attributes.
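As a sketch, this query can be evaluated directly on binarized microdata (toy, illustrative values only):
\begin{verbatim}
import numpy as np

# Toy binarized records with columns: below_poverty, unemployed,
# homeless, low_net_worth.
D = np.array([[1, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 1]])

def r_of_k_threshold(D, cols, r):
    # Fraction of records with at least r of the k listed attributes;
    # r = k recovers a k-way marginal, and r = 1 a 1-of-k threshold.
    return np.mean(D[:, cols].sum(axis=1) >= r)

print(r_of_k_threshold(D, cols=[0, 1, 2, 3], r=3))  # 0.75
\end{verbatim}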
This social scientist may have many such queries, and other researchers may have sets of queries of their own that they wish to pose. Thus, a natural algorithm design question is: how should the U.S. Census Bureau answer everyone's queries with low error while still ensuring the ACS respondents’ privacy?
The simplest option is to use a portion of the DP budget to individually answer each query, independent of all other queries.
This would likely be unsatisfactory utility-wise, since it both limits how many queries can be answered and ignores any relationships between queries (which would likely lead to large $\ell_\infty$ error over the set of answers).
However, we posit two potentially superior alternatives whose performance we will investigate.
\begin{enumerate}
\item One alternative is to collect a large group of queries, and then use a state-of-the-art DP query answering mechanism to answer them all simultaneously.
This is an example of answering queries in the ``prespecified queries'' setting (studied in Sections~\ref{sec:improving-evaluation} and \ref{sec:extending-applicability}).
With careful DP mechanism design or selection, this alternative typically leads to lower $\ell_\infty$ error over the set of answers than answering each query independently.
\item A separate alternative is along the lines of synthetic data generation, and is applicable to the Census Bureau if queries which have been posed in the past are in some sense similar to queries which analysts will likely pose in the future.
Concretely, we hypothesize that the Census Bureau can leverage those past queries in conjunction with a state-of-the-art DP synthetic data generation mechanism to privately generate a synthetic dataset.
Researchers can then pose their own queries directly against the synthetic dataset without needing to go through the Census Bureau, and without needing to worry about the original ACS respondents’ privacy.
This is an example of answering queries in the ``partial knowledge'' setting (studied in Section~\ref{sec:understanding-generalizability}), as knowledge from the past is being used to inform the future.
If the queries posed in the past are indeed similar to the queries posed in the future, then a synthetic dataset generated using the past queries has the potential to answer the future queries with low $\ell_\infty$ error.
\end{enumerate}
\subsection{Prior Work on Large-Scale Query Answering} \label{sec:many-queries-prior-work}
To address answering a large number of queries under differential privacy in an improved manner over the naive interactive approach, two separate lines of research previously emerged: synthetic data generation, and workload evaluation.
We describe both lines of research, then briefly introduce the state-of-the-art mechanism which we build upon in this work.
\paragraph{Synthetic Data Generation:}
One line of research studies the problem of answering a large number of queries via private synthetic dataset generation.
In differentially private synthetic dataset generation, a DP mechanism is applied to the original, sensitive data in order to generate a synthetic dataset.
The synthetic dataset's purpose is then to directly answer arbitrary queries posed in the future, without the further need to account for potential privacy leakage or manage differential privacy budgets.
In this setting, aside from knowing the general query class, \textit{no knowledge is typically assumed about which specific queries will be posed in the future}.
The proven advantage of this approach is that DP synthetic datasets are theoretically capable of accurately answering an exponentially larger number of queries relative to the aforementioned interactive approach~\cite{gupta2012iterative, cheraghchi2012submodular, hardt2012private, gupta2013privately}.
However, actually generating a synthetic dataset which accurately answers exponentially many queries has been proven intractable~\cite{dwork2009complexity, ullman2011pcps, ullman2016answering}, even for simple subclasses of statistical queries (e.g., 2-way marginals).
Thus, a significant recent research focus has been on designing efficient mechanisms for privately generating synthetic datasets which accurately answer increasingly large numbers of queries~\cite{gaboardi2014dual, mckenna2019graphical, vietri2020new, liu2021iterative}.
\paragraph{Workload Evaluation:}
A separate line of research focuses on the problem of answering a large number of queries when the concrete query workload is prespecified; i.e., \textit{when all queries are known in advance}.
Prespecifying the query workload enables researchers to design DP mechanisms that take advantage of the workload's structure in order to answer the queries with lower error relative to the interactive approach or the private synthetic dataset approach.
Early research in this setting produced mechanisms with optimal or near-optimal error guarantees, but with impractical (typically exponential) running times for even modestly sized real-world problems~\cite{hardt2010multiplicative, hardt2010simple, gupta2012iterative, li2015matrix}.
As a result, recent research has focused on designing computationally efficient mechanisms to answer prespecified workloads with low error on real-world datasets~\cite{mckenna2018optimizing, snoke2018pmse, aydore2021differentially}, at the cost of losing the strong theoretical utility guarantees of prior works and thus necessitating thorough empirical utility evaluations to demonstrate their value.
\paragraph{\textit{Relaxed Adaptive Projection} Mechanism:}
Our approach for evaluating suitable (i.e., efficient and accurate) mechanisms in both our settings of interest builds on Aydore et al.'s~\cite{aydore2021differentially} recently introduced \textit{Relaxed Adaptive Projection} (\texttt{RAP}) mechanism.
\texttt{RAP}\ is the current state-of-the-art mechanism for answering large sets of statistical queries in the setting where the query workload is prespecified.
At a high-level, \texttt{RAP}\ works by:
\begin{enumerate}
\item Initializing a synthetic dataset $D'$ in a relaxed data space (e.g., by relaxing a binary feature in the original dataset to the interval $[0,1]$ in the synthetic dataset).
\item For each original prespecified query, specifying a surrogate query which is equivalent to the original in the unrelaxed data space, but which is differentiable everywhere in the relaxed space.
\item Iteratively applying an \textit{Adaptive Selection} (\texttt{AS}) step followed by a \textit{Relaxed Projection} (\texttt{RP}) step. In the \texttt{AS}\ step, adaptivity is introduced to allow the subset of queries with the highest error on $D'$ to be privately selected. In the \texttt{RP}\ step, these selected queries' surrogates are used to optimize $D'$ using standard gradient-based optimization techniques.
\item Finally, answering the original set of queries using the optimized synthetic dataset $D'$.
\end{enumerate}
For $k$-way marginals, a canonical subclass of statistical queries~\cite{barak2007privacy, thaler2012faster, gupta2013privately, chandrasekaran2014faster, cormode2018marginal} (formally defined in Section~\ref{sec:preliminaries}), Aydore et al.\ theoretically and empirically demonstrate that \texttt{RAP}\ outperforms prior state-of-the-art mechanisms.
Theoretically, they provide an ``oracle efficient'' (i.e., assuming the optimization procedure achieves a global minima) utility result characterizing \texttt{RAP}'s error, showing that \texttt{RAP}\ achieves strictly lower error than the previous practical state-of-the-art mechanism~\cite{vietri2020new}.
Experimentally, they compare the \texttt{RAP}\ mechanism with prior state-of-the-art mechanisms~\cite{mckenna2019graphical, vietri2020new}, demonstrating that \texttt{RAP}\ answers prespecified sets of queries with lower error.\\
\subsection{Our Contributions} \label{sec:contributions}
To answer this work's high-level research question, we make the following contributions in both settings of interest.
In the classic setting where all queries are known in advance, our contributions are as follows.
\begin{itemize}
\item We overcome memory hurdles in \texttt{RAP}'s initial implementation by reimplementing \texttt{RAP}\ in a memory-efficient way, thus enabling the evaluation of significantly larger query spaces than previously considered.
\item We utilize the new implementation to enhance \texttt{RAP}'s evaluation, evaluating \texttt{RAP}\ on larger query spaces (answering approximately 50x more queries) than in its initial evaluation, and conclusively determining the role that adaptivity from the \texttt{AS}\ step plays in \texttt{RAP}'s utility.
\item We extend \texttt{RAP}'s applicability by expanding the class of queries that it evaluates, finding that it can efficiently and effectively answer more complex query classes than previously considered.
\end{itemize}
\noindent As a realistic intermediate setting that lies between the two classic extremes of no-knowledge vs.\ full-knowledge of which queries will be posed, we propose a new setting where partial knowledge of the future queries is available.
In this new setting, our contributions are as follows.
\begin{itemize}
\item We concretely define this setting as well as how to measure utility within it. Specifically, we assume that a set of historical queries was independently drawn from some unknown distribution $\mathcal{T}_H$, and that the mechanism has access to these historical queries. In the future, the mechanism will be posed an arbitrary number of queries sampled from a distribution $\mathcal{T}_F$, which may be related to $\mathcal{T}_H$. We define the utility of the mechanism in terms of its generalization error; i.e., its expected error across these future queries drawn from $\mathcal{T}_F$ having been given access to the historical queries from $\mathcal{T}_H$.
\item We assess how suitable \texttt{RAP}\ is for this new setting by formulating query distributions according to real-world phenomena, then empirically evaluating \texttt{RAP}'s generalization error on these distributions. When future queries are drawn from the same distribution as the historical queries that \texttt{RAP}\ used to learn its synthetic dataset (i.e., $\mathcal{T}_H=\mathcal{T}_F$), we find that regardless of what the distribution is, \texttt{RAP}\ is able to achieve high utility. When the distribution of future queries diverges from the distribution of historical queries, we find that \texttt{RAP}'s utility slowly and gracefully declines.
\end{itemize}
\noindent These contributions, in both the prespecified queries setting and the partial knowledge setting, definitively demonstrate the practical value of \texttt{RAP}\ and improve \texttt{RAP}'s adoptability for real-world uses.\\
The remainder of this work is structured as follows.
Beginning in Section~\ref{sec:preliminaries}, we provide a comprehensive overview of the relevant technical terminology and definitions, and detail the \texttt{RAP}\ mechanism that we build upon.
In Section~\ref{sec:improving-evaluation}, we perform a focused but thorough reproducibility study on Aydore et al.'s~\cite{aydore2021differentially} evaluation of the \texttt{RAP}\ mechanism.
To accomplish this, we first improve \texttt{RAP}'s implementation from the ground up, and then leverage the new implementation to enhance \texttt{RAP}'s initial evaluation in order to strengthen our comprehension of its utility.
Building on the improved \texttt{RAP}\ implementation, in Section~\ref{sec:extending-applicability} we expand the class of queries that \texttt{RAP}\ is able to accommodate.
We then empirically evaluate \texttt{RAP}\ on this new class of queries, finding that it is able to efficiently answer large numbers of queries from this class while maintaining high utility.
In Section~\ref{sec:understanding-generalizability}, we concretely define our newly proposed setting where a mechanism is given partial knowledge of the queries that will be posed in the future.
We define how we assess \texttt{RAP}'s performance in this setting, and detail the distinct new ways that \texttt{RAP}'s performance may be affected in this new setting.
We then empirically evaluate \texttt{RAP}\ in this setting, finding that even with only partial knowledge of which queries will be posed in the future, \texttt{RAP}\ is able to efficiently and effectively achieve high utility.
Finally, in Section~\ref{sec:many-queries-related-works}, in addition to the related works already discussed in this section, we describe other important relevant works and the future directions they motivate related to this work.
\section{Overview of the Approach} \label{sec:overview}
In this section, we provide high level descriptions of and motivations for the approaches that we will take to accomplish our proposed contributions (Section \ref{sec:prop-contributions}).
\subsection{Expanding the Class of Queries}
We now describe the expanded class of queries that we are proposing to evaluate: $r$-of-$k$ thresholds.
We begin with a discussion of the query classes that have been studied in prior works in order to understand their connection to $r$-of-$k$ thresholds as well as to motivate the need for expanding the class of queries.
Prior work on answering statistical queries in practical settings has focused on relatively simple classes of statistical queries, most popularly $k$-way marginals, as these are an important and useful query class which is evaluable within a reasonable computational budget~\cite{barak2007privacy, thaler2012faster, gupta2013privately, chandrasekaran2014faster}.
In introducing their gradient-based Relaxed Projection (\texttt{RP}) mechanism~\cite{aydore2021differentially}, Aydore et al.\ claim that \texttt{RP}\ is able to answer queries from richer classes under two conditions: (1) that EEDQs can be derived which correspond to the original queries, and (2) that answers to those EEDQs can be efficiently computed.
They demonstrated this claim by evaluating a new class of queries, 1-of-$k$ thresholds, in addition to the standard $k$-way marginals.
Although their evaluation attained high utility results on both query classes, the choice to extend from $k$-way marginals to 1-of-$k$ thresholds enabled them to reuse virtually the same class of surrogate queries, and correspondingly, virtually the same highly-optimized NumPy~\cite{numpy} method (\einsum) within JAX~\cite{bradbury2020jax} to efficiently evaluate them.
Thus, the evaluation was not fully convincing in demonstrating that \texttt{RP}\ is truly effective for richer classes of queries.
Our contribution will be to develop the mathematical and computational machinery necessary to demonstrate the effectiveness of \texttt{RP}\ on a class of queries which generalizes both $k$-way marginals as well as 1-of-$k$ threshold queries: $r$-of-$k$ threshold queries.
$r$-of-$k$ thresholds have previously received some theoretical attention from the DP community \cite{thaler2012faster, neel2019use}, although no practical evaluations have been done, likely due to the computational difficulty.
Aydore et al.'s mechanism may be able to overcome this difficulty, but requires differentiable surrogate queries for the class of $r$-of-$k$ queries.
However, since $r$-of-$k$ thresholds strictly generalize both $k$-way marginals and 1-of-$k$ thresholds, their previous class of surrogate queries is not able to trivially represent $r$-of-$k$ thresholds.
Therefore, we must analogously generalize the class of surrogate queries.
Moreover, Aydore et al. relied on clever use of the highly-optimized \einsum\ method to efficiently evaluate their surrogate queries.
We instead seek more straightforward, approachable methods to evaluate our generalized surrogate queries, relying on powerful modern language primitives (such as vectorization, just-in-time compilation, and static graph optimization) to enable their efficient evaluation.
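As one illustration of the kind of construction we may build on (shown only as a sketch, not the final design), note that on unrelaxed one-hot data the sum $s$ of the $k$ selected binarized features is an integer in $\{0,\dots,k\}$, so a degree-$k$ polynomial $p$ with $p(s)$ equal to the indicator of $s \ge r$ on that grid agrees exactly with the original query there while remaining differentiable on the relaxed domain.
\begin{verbatim}
import numpy as np
import jax.numpy as jnp

def r_of_k_surrogate(cols, r, k):
    # Interpolate the threshold indicator 1[s >= r] on the integer
    # grid {0, ..., k}; the resulting polynomial is an everywhere-
    # differentiable surrogate that is exact on unrelaxed records.
    s_grid = np.arange(k + 1)
    coeffs = jnp.asarray(
        np.polyfit(s_grid, (s_grid >= r).astype(float), k))
    cols = jnp.asarray(cols)
    def surrogate(record):
        return jnp.polyval(coeffs, jnp.sum(record[cols]))
    return surrogate
\end{verbatim}
For larger $k$ this interpolation becomes numerically ill-conditioned, which is one reason the construction we ultimately adopt may differ.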
Once implemented, we will experimentally evaluate the mechanism's utility using the ADULT \cite{frank2010uci} and ACS \cite{ruggles2020ipums} datasets to determine whether or not this approach is effective on the richer class of queries.
\subsection{Extending the Utility Analysis}
We now turn our attention to why and how we intend to extend the utility analysis of the \texttt{RP}\ mechanism.
In statistical modeling, and especially in the subfield of synthetic data generation, the primary goal isn't to generate a model or synthetic dataset that answers a prespecified set of queries well.
Rather, the goal is to generate a model or dataset that \textit{generalizes} well to future queries.
When it comes to differentially private mechanisms for answering statistical queries through a synthetic dataset, prior utility analyses have only focused on either: (a) how well those mechanisms answer the prespecified set of queries, or (b) on theoretically bounding how well the mechanisms can answer any set of queries in the worst-case.
However, in practical settings, it may be more useful to analyze how well the mechanism can answer future queries which are similar to the prespecified ones; i.e., analyzing how well the mechanism generalizes.
Concretely, assuming the prespecified queries come from some query distribution, we are interested in how well the mechanism (whose synthetic data was learned using those prespecified queries) is able to answer future queries coming from the same distribution.
For \textit{a priori} unknown distributions, this is a challenging problem to approach theoretically without resorting to worst-case guarantees.
Experimentally, however, we intend to efficiently approach this as follows: (1) construct query distributions according to real-world phenomena, (2) sample separate sets of prespecified and ``future'' queries from the same constructed distribution, and then (3) use the ADULT and ACS datasets to empirically evaluate how well the
mechanism generalizes from the prespecified queries to the future queries.
This is analogous to standard practice in empirical machine learning research where data is split into ``training'' and ``test'' sets randomly (to ensure distributional similarity); the model is then learned on the training set, and subsequently evaluated on the test set to measure how well it generalizes.
\subsection{Incorporating Historical Queries}
For this final component, we describe our proposed method for incorporating information from the past (via historical queries) by combining the expanded class of queries and extended utility analysis.
We describe its motivation, detail some straightforward methods that are unsatisfactory, then provide intuition for why we expect that our proposed method will attain high utility.
Returning to the 2020 US Census example from Section~\ref{sec:prop-summary}, in addition to the queries posed by the US Census Bureau to release their requisite statistics, researchers have also posed many queries of their own on prior census data.
A method which allows the researchers' historically-posed queries to be incorporated into the synthetic data generation process alongside the Census Bureau's prespecified queries has the potential to ensure that if similar queries are posed in the future on the new Census data, then the corresponding answers would retain high utility.
One straightforward approach to answering both the Census Bureau's and the researchers' queries would be to simply combine the prespecified queries directly with the historical queries.
This approach has the advantage of simplicity, but it has some drawbacks.
Primarily, it implicitly assumes that the prespecified and historical queries are of the same importance.
Often in practice, it's the prespecified queries that need to be answered most accurately, and the ability to answer some future queries accurately is essentially a side benefit.
For example, the Census Bureau is mandated to release specific prespecified statistics to a certain degree of accuracy; once this is ensured, it's a bonus if researchers' future queries can be answered accurately.
However, there may be other scenarios where the analyst only needs a few rough answers to their queries, and intends to release the synthetic dataset with the hopes that it will be highly accurate for others' future queries.
Another straightforward approach, which could remedy the previous approach's drawback, would be to learn a separate synthetic dataset for each of these two tasks: one synthetic dataset for answering the prespecified queries, and one synthetic dataset (learned on the historical queries) for answering future queries.
With a fixed computational budget and privacy budget, this approach necessitates splitting the size of the synthetic dataset between the two groups, as well as splitting the privacy budget between each group's queries (where the curator specifies the splits as a hyperparameter, based on the relative importance of the two groups of queries).
However, results from the field of multi-task learning~\cite{zhang2021survey} have consistently shown that shared information from multiple related tasks can be leveraged to improve the performance of all the tasks.
Thus, we hypothesize that learning a single joint synthetic dataset for both prespecified queries and future queries will yield higher utility results than either of the straightforward approaches.
The challenge here is in balancing the utility of prespecified queries with the utility of future queries.
In machine learning, when one wants to constrain the optimization or optimize more than one objective simultaneously, it is standard to incorporate the constraints and objectives into the loss function and let the optimization procedure implicitly balance them.
We can address our challenge by having our loss function be a weighted combination of the loss on the prespecified and historical queries (with some prespecified weight hyperparameter).
Such a hyperparameter would act as a knob that controls the importance of the prespecified queries vs the historical queries.
We will experimentally investigate the effectiveness of this approach on $r$-of-$k$ queries using the ADULT and ACS datasets, comparing against the two aforementioned straightforward approaches as baselines.
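As a minimal sketch of this weighted-loss idea (the function and argument names are illustrative assumptions, not part of any existing implementation):
\begin{verbatim}
import jax.numpy as jnp

def joint_loss(D_prime, Q_pre, a_pre, Q_hist, a_hist, w):
    # Q_pre / Q_hist: functions mapping a synthetic dataset to vectors of
    # surrogate-query answers for the prespecified / historical queries.
    # a_pre / a_hist: the corresponding privatized target answers.
    # w in [0, 1]: the knob weighting prespecified vs. historical queries.
    loss_pre = jnp.sum((Q_pre(D_prime) - a_pre) ** 2)
    loss_hist = jnp.sum((Q_hist(D_prime) - a_hist) ** 2)
    return w * loss_pre + (1.0 - w) * loss_hist
\end{verbatim}
Setting $w=1$ recovers optimization on the prespecified queries alone, while $w=0$ recovers the historical-queries-only extreme.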
\section{Technical Preliminaries} \label{sec:preliminaries}
In this section, we define the requisite technical terminology.
The fundamental concepts introduced here were primarily presented in prior works~\cite{gaboardi2014dual, vietri2020new, aydore2021differentially}.
We restate them to aid in understanding and contextualizing Aydore et al.'s \texttt{RAP}\ mechanism, which we use to answer this work's research questions.
Towards this, we first define statistical queries and their subclasses that are relevant to this work.
We then define what it means to be a ``surrogate'' query for one of these statistical queries.
Next, we describe what workloads are and how we use them.
Finally, we detail the \texttt{RAP}\ mechanism that we build on in this work.
Because this work is notationally dense, Table~\ref{tab:symbols} serves as a reference for the various symbols that we define.
\begin{table}
\setlength\tabcolsep{5pt}
\begin{tabularx}{\columnwidth}{rc|X}
& \textbf{Symbol} & \textbf{Usage} \\
\midrule
\midrule
& $\epsilon, \delta$ & Differential privacy parameters. \\
\hline
& $\mathcal{X},\ d,\ \mathcal{X}_i$ & Data space $\mathcal{X}$ for any possible record consisting of $d$ features. $\mathcal{X}_i$ is the domain of feature $i$.\\
& $D,\ n$ & Dataset $D$ containing $n$ records from $\mathcal{X}$. \\
\hline
& $q_{\phi}$ & Statistical query $q$ defined by the mean of the predicate $\phi$ over a set of records from $\mathcal{X}$. \\
& $Q,\ m,\ a$ & $Q$ is a vector of $m$ queries, and $a$ represents the answers to the vector of queries over the dataset $D$ such that $Q(D) = a = (a_1,\dots,a_m)$. \\
& $W$ & Threshold workload $W$ which defines the concrete query vector $Q$. \\
& $q_{\phi_{S,y,k}}$ & $k$-way marginal query specified by set $S$ of $k$ features and values $y$ for each feature. \\
& $q_{\phi_{S,y,1}}$ & 1-of-$k$ threshold query specified by set $S$ of $k$ features and values $y$ for each feature. \\
$\star$ & $q_{\phi_{S,y,r}}$ & $r$-of-$k$ threshold query specified by set $S$ of $k$ features and values $y$ for each feature, and threshold $r$. \\
\hline
& $\mathcal{Y},\ d' $ & Data space $\mathcal{Y}$ consisting of $d'$ features, which is a relaxation of the one-hot encoded $\mathcal{X}$ data space. \\
& $D',\ n'$ & Synthetic dataset $D'$ containing $n'$ records from $\mathcal{Y}$. \\
& $\hat{q}_{\hat{\phi}}$ & Surrogate query $\hat{q}$ defined by the mean of the function $\hat{\phi}$ over a set of records from $\mathcal{Y}$. \\
& $\hat{Q}$ & Vector of surrogate queries. \\
& $\hat{q}_{\hat{\phi}_T}$ & Product query, specified by a set of features $T$. \\
$\star$ & $\hat{q}_{\hat{\phi}_{T_+,T_-}}$ & Generalized product query, specified by a set of positive and negated features $T_+$ and $T_-$. \\
$\star$ & $\hat{q}_{\hat{\phi}_{T,r}}$ & Polynomial threshold query, specified by a set of features $T$ and integer $r$. \\
\hline
$\star$ & $\err_P$ & Measure of a mechanism's present error, used when all queries are known in advance.\\
$\star$ & $\err_F$ & Measure of mechanism's future error, used when only partial knowledge of queries is available in advance.\\
$\star$ & $\mathcal{F}, \mathcal{T}$ & Distribution $\mathcal{T}$ from which thresholds in a random workload are sampled i.i.d.\ in order to form a corresponding vector of consistent queries. The threshold distribution may be formed by a distribution over features $\mathcal{F}$. \\
\hline
& \texttt{RAP}, \texttt{AS}, \texttt{RP} & Relaxed Adaptive Projection mechanism, with its primary subcomponents: the Adaptive Selection and Relaxed Projection mechanisms. \\
& \texttt{RNM} & Report Noisy Max mechanism, used by the \texttt{AS}\ mechanism to select high-error queries. \\
& \texttt{GM} & Gaussian noise-addition mechanism, used as both a baseline mechanism as well as a subcomponent of \texttt{RAP}\ to privately answer queries directly. \\
$\star$ & \texttt{OSAS} & Oneshot Adaptive Selection mechanism, introduced as a more efficient drop-in replacement for \texttt{RAP}'s \texttt{AS}\ mechanism. \\
& \texttt{All-0} & Baseline mechanism that returns only 0 for all queries. \\
\end{tabularx}
\caption{Comprehensive list of notation. Lines marked with a $\star$ indicate new concepts not found in~\cite{aydore2021differentially}.} \label{tab:symbols}
\end{table}
\subsection{Statistical Queries and their Subclasses} \label{sec:stat-qs}
The general class of queries that we are interested in (which the \texttt{RAP}\ mechanism can, in theory, be used to answer) are statistical queries.
\begin{definition}[Statistical query]
A \textit{statistical query} $q_{\phi}$ is parameterized by a predicate $\phi: \mathcal{X} \rightarrow \{0,1\}$; i.e., the predicate takes as input a record $x$ of a dataset $D$, and outputs a boolean value.
The statistical query is then defined as the normalized count of the predicate over all $n$ records of the input dataset; i.e.,
$$q_{\phi}(D) = \frac{\sum_{x\in D} \phi(x)}{n}.$$
Given a vector of $m$ statistical queries $Q$, we define $Q(D)=(a_1,\dots,a_m)$ to be the answers to each of the queries on $D$; i.e., $a_i = q_{\phi_i}(D)$ for all $i \in [m]$.
\end{definition}
We now formally define the specific subclasses of statistical queries that we reference throughout this work.
Let the space for each record in the dataset consist of $d$ categorical features $\mathcal{X} = (\mathcal{X}_1 \times \cdots \times \mathcal{X}_d)$, where each $\mathcal{X}_i$ is the discrete domain of feature $i$, and let $x_i \in \mathcal{X}_i$ denote the value of feature $i$ of record $x \in \mathcal{X}$.
Prior works have primarily evaluated the subclass of statistical queries known as $k$-way marginals (also known as $k$-way contingency tables or $k$-way conjunctions)~\cite{barak2007privacy, thaler2012faster, gupta2013privately, chandrasekaran2014faster, cormode2018marginal, mckenna2019graphical, vietri2020new}, and typically focused specifically on 3-way and 5-way marginals.
\begin{definition}[$k$-way marginal] \label{def:kw}
A \textit{$k$-way marginal query} $q_{\phi_{S,y,k}}$ is a statistical query whose predicate $\phi_{S,y,k}$ is specified by a set $S$ of $k$ features $f_1 \neq \cdots \neq f_k \in [d]$ and a target $y \in (\mathcal{X}_{f_1} \times \cdots \times \mathcal{X}_{f_k})$, given by
\[ \phi_{S,y,k}(x)=
\begin{cases}
1 & \text{if }\ x_{f_1} = y_1 \ \wedge \cdots \ \wedge x_{f_k} = y_k \\
0 & \text{otherwise.}
\end{cases}
\]
Informally, a row satisfies the predicate if \textit{all} of its values match the target on the specified features.
A \textit{$k$-way marginal} is then specified by a set $S$ of $k$ features, and consists of all ($\Pi_{i=1}^k |\mathcal{X}_{f_i}|$) $k$-way marginal queries with feature set $S$.
\end{definition}
1-of-$k$ thresholds (also known as $k$-way disjunctions) were briefly evaluated in \cite{aydore2021differentially}, and are defined similarly.
\begin{definition}[1-of-$k$ threshold] \label{def:1k}
A \textit{1-of-$k$ threshold query} $q_{\phi_{S,y,1}}$ is a statistical query whose predicate $\phi_{S,y,1}$ is specified by a set $S$ of $k$ features $f_1 \neq \cdots \neq f_k \in [d]$ and a target $y \in (\mathcal{X}_{f_1} \times \cdots \times \mathcal{X}_{f_k})$, given by
\[ \phi_{S,y,1}(x)=
\begin{cases}
1 & \text{if }\ x_{f_1} = y_1 \ \vee \cdots \ \vee x_{f_k} = y_k \\
0 & \text{otherwise.}
\end{cases}
\]
Informally, a row satisfies the predicate if \textit{any} of its values match the target on the specified features.
A \textit{1-of-$k$ threshold} is then specified by a set $S$ of $k$ features, and consists of all ($\Pi_{i=1}^k |\mathcal{X}_{f_i}|$) 1-of-$k$ threshold queries with feature set $S$.
\end{definition}
Finally, in this work, we evaluate a generalization of both of these subclasses of statistical queries: $r$-of-$k$ thresholds~\cite{kearns1987learnability,littlestone1988learning,hatano2004learning, thaler2012faster,ullman2013privacy,aydore2021differentially}.
\begin{definition}[$r$-of-$k$ threshold] \label{def:rk}
An \textit{$r$-of-$k$ threshold query} $q_{\phi_{S,y,r}}$ is a statistical query whose predicate $\phi_{S,y,r}$ is specified by a positive integer $r \le k$, a set $S$ of $k$ features $f_1 \neq \cdots \neq f_k \in [d]$, and a target $y \in (\mathcal{X}_{f_1} \times \cdots \times \mathcal{X}_{f_k})$.
The predicate is then given by
\[\phi_{S,y,r}(x) = \mathbbm{1}\left[ \sum_{i=1}^k \mathbbm{1}[x_{f_i} = y_i] \ge r \right].\]
Informally, a row satisfies the predicate if \textit{at least} $r$ of its values match the target on the specified features.
An \textit{$r$-of-$k$ threshold} is then specified by positive integer $r \le k$ and a set $S$ of $k$ features, and consists of all ($\Pi_{i=1}^k |\mathcal{X}_{f_i}|$) $r$-of-$k$ threshold queries with feature set $S$.
This class generalizes $k$-way marginals when $r=k$, and generalizes 1-of-$k$ thresholds when $r=1$.
\end{definition}
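To illustrate how Definitions~\ref{def:kw}--\ref{def:rk} relate, the following minimal Python sketch implements the $r$-of-$k$ predicate directly from its definition (the record encoding here is an illustrative assumption):
\begin{verbatim}
def r_of_k_predicate(x, S, y, r):
    # phi_{S,y,r}(x): 1 if at least r of the k specified features of
    # record x match the corresponding target values in y, else 0.
    matches = sum(int(x[f] == v) for f, v in zip(S, y))
    return int(matches >= r)

x = {"age": "30-39", "edu": "HS", "marital": "single"}
S = ("age", "edu", "marital")
y = ("30-39", "BS", "single")
r_of_k_predicate(x, S, y, r=3)  # k-way marginal (r=k): 0, not all match
r_of_k_predicate(x, S, y, r=1)  # 1-of-k threshold (r=1): 1, some match
r_of_k_predicate(x, S, y, r=2)  # 2-of-3 threshold: 1, two values match
\end{verbatim}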
The expressiveness of $r$-of-$k$ thresholds makes them more useful than $k$-way marginals, as they enable more nuanced queries to be easily and intuitively posed.
This is particularly useful when the implications behind categories of distinct features in a dataset have some overlap.
For instance, in the motivating U.S. Census example, there were several features with categories that were indicative of a substandard quality of living.
Requiring someone to belong to \textit{all} of the categories (as a $k$-way marginal requires) is overly restrictive, and $r$-of-$k$ thresholds allow this restrictiveness to be relaxed.
\begin{remark}
We say that any $r$-of-$k$ threshold query (and, by extension, any $k$-way marginal query or 1-of-$k$ threshold query) specified by $r$, $k$, $S$, and $y$ is \textit{consistent} with the $r$-of-$k$ threshold specified by $r$, $k$, and $S$.
That is, we often refer to an $r$-of-$k$ threshold simply as the features it specifies, whereas a query \textit{consistent with} that $r$-of-$k$ threshold is one which specifies concrete target values corresponding to those features.
\end{remark}
\subsection{Surrogate Queries}
Aydore et al.~\cite{aydore2021differentially} introduce surrogate queries to replace the original statistical queries with queries that are similar, but that are amenable to first-order optimization methods.
These first-order optimization methods, thanks to significant recent advances in hardware and software tooling, can enable highly efficient learning of synthetic datasets.
\begin{definition}[Surrogate Query]
A \textit{surrogate query} $\hat{q}_{\hat{\phi}}$ is parameterized by function $\hat{\phi}: \mathcal{Y} \rightarrow \mathbb{R}$; i.e., the function takes as input a record $x \in \mathcal{Y}$ from a dataset $D'$, and outputs a real value.
The surrogate query is then defined as the normalized count of the function over all $n'$ records of the input dataset; i.e.,
\[ \hat{q}_{\hat{\phi}}(D') = \frac{\sum_{x\in D'} \hat{\phi}(x)}{n'}.\]
The only distinctions between the definitions of a surrogate query with $\hat{\phi}$ and a statistical query with $\phi$ are that $\hat{\phi}$'s domain may be different from $\phi$'s, and $\hat{\phi}$'s codomain is the entire real line instead of $\{0,1\}$.
\end{definition}
We are interested in surrogate queries that are \textit{equivalent extended differentiable queries} (EEDQs) as defined in~\cite{aydore2021differentially}.
\begin{definition}[Equivalent Extended Differentiable Query]
Let $q_{\phi}$ be an arbitrary statistical query parameterized by $\phi(x):\mathcal{X} \rightarrow \{0,1\}$, and let $\hat{q}_{\hat{\phi}}$ be a surrogate query parameterized by $\hat{\phi}: \mathcal{Y} \rightarrow \mathbb{R}$.
We say that $\hat{q}_{\hat{\phi}}$ is an \textit{equivalent extended differentiable query} to $q_{\phi}$ if it satisfies the following properties:
\begin{enumerate}
\item $\hat{\phi}$ is differentiable over $\mathcal{Y}$. I.e., for every $x \in \mathcal{Y},\ \nabla \hat{\phi}(x)$ is defined.
\item $\hat{\phi}$ agrees with $\phi$ on every possible database record that results from a one-hot encoding. I.e., for every $x \in \mathcal{X}$ where $h(x)$ represents a one-hot encoding\footnote{A one-hot encoding of a categorical feature $\mathcal{X}_i$ with $t_i$ categories is a mapping from each category to a unique $1 \times t_i$ binary vector that has exactly 1 non-zero coordinate.} of $x$: $\phi(x) = \hat{\phi}(h(x))$.
\end{enumerate}
\end{definition}
\paragraph{Notation of Feature Spaces:}
Recall the original feature space $\mathcal{X} = (\mathcal{X}_1 \times \dots \times \mathcal{X}_d)$, where each $\mathcal{X}_i$ is the discrete domain of feature $i$, and let $t_i$ be the number of distinct values/categories that $\mathcal{X}_i$ can attain.
A one-hot encoding $h(x)$ of any record $x$ results in a binary vector $\{0,1\}^{d'}$, where $d' = \sum_{i=1}^d t_i$.
Just as in~\cite{aydore2021differentially}, we are interested in constructing a synthetic dataset that lies in a continuous relaxation of this binary feature space.
A natural relaxation of $\{0,1\}^{d'}$ is $[0,1]^{d'}$, so we adopt $\mathcal{Y} = [0,1]^{d'}$ as the relaxed space for the remainder of this work.\\
As an illustrative example of an EEDQ, we define the class of EEDQ's used by Aydore et al.\ for $k$-way marginals.
Concretely, \cite{aydore2021differentially} defines the class of surrogate queries known as \textit{product queries}, and shows how to construct an EEDQ product query for any given $k$-way marginal.
\begin{definition}[Product Query] \label{def:pq}
Given a subset of features $T \subseteq [d']$, the \textit{product query} $\hat{q}_{\hat{\phi}_T}$ is a surrogate query parameterized by function $\hat{\phi}_T$ which is defined as $\hat{\phi}_T(x) = \prod_{i \in T} x_i$.
\end{definition}
\begin{lemma}[\cite{aydore2021differentially}, Lemma 3.3] \label{lem:pq}
Every $k$-way marginal query $q_{\phi_{S,y,k}}$ has an EEDQ in the class of product queries. By construction, every $\hat{\phi}_T$ satisfies the requirement that it is defined over the entire relaxed space $\mathcal{Y}$ and is differentiable.
Additionally, for every $q_{\phi_{S,y,k}}$, there is a corresponding product query $\hat{q}_{\hat{\phi}_T}$ with $|T|=k$ such that for every $x \in \mathcal{X}: \phi_{S,y,k}(x) = \hat{\phi}_T(h(x))$. We construct this $T$ in the following straightforward way: for every $i \in S$, we include in $T$ the coordinate corresponding to $y_i \in \mathcal{X}_{f_i}$.
\end{lemma}
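As a minimal numerical sketch of Lemma~\ref{lem:pq}'s construction (a direct per-query evaluation rather than the batched \einsum\ formulation of~\cite{aydore2021differentially}):
\begin{verbatim}
import numpy as np

def product_queries(D_prime, Ts):
    # D_prime: (n', d') array of relaxed records in Y = [0,1]^{d'}.
    # Ts: one coordinate subset T per product query.
    # Returns the vector of surrogate answers (q_hat_T(D') for T in Ts).
    return np.array([D_prime[:, list(T)].prod(axis=1).mean() for T in Ts])

rng = np.random.default_rng(0)
D_prime = rng.uniform(size=(1000, 8))   # a toy relaxed synthetic dataset
product_queries(D_prime, Ts=[(0, 3), (1, 2, 5)])
\end{verbatim}
On a one-hot encoded record, each product $\prod_{i \in T} x_i$ equals 1 exactly when every targeted coordinate is 1, recovering the marginal predicate.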
\subsection{Threshold Workloads}
It was standard in prior works to evaluate \textit{workloads} of $k$-way marginals~\cite{li2015matrix, mckenna2018optimizing, mckenna2019graphical, vietri2020new, liu2021leveraging, liu2021iterative}.
A $k$-way marginal workload $W$ is specified by a set of $k$-way marginals, $W = \{S_1, \dots, S_{|W|}\}$ such that each $S_i \in W$ is a set of $k$ features.
This workload $W$ defines a concrete query vector $Q$ which consists of all queries consistent with each marginal in $W$.
Since $Q$ is defined by the marginal workload, $Q$ itself is commonly referred to as the \textit{query workload}.
For example, a workload may be specified by the following two $3$-way marginals, $W = \{(1,2,5), (2,3,7)\}$, and would therefore define the query vector $Q$ containing all marginal queries consistent with those feature sets.
The number of queries in this query vector would then be $|Q|=|\mathcal{X}_1||\mathcal{X}_2||\mathcal{X}_5| + |\mathcal{X}_2||\mathcal{X}_3||\mathcal{X}_7|$.
Since our work extends the class of queries from marginals to $r$-of-$k$ thresholds, rather than a workload being specified by a set of marginals, we say that a workload $W$ is specified by a set of $r$-of-$k$ thresholds.
$W$ similarly defines the concrete query vector $Q$ which consists of all $r$-of-$k$ threshold queries consistent with each $r$-of-$k$ threshold in $W$.
For example, when $r=1$ and $k=3$, we can specify a similar workload as before $W = \{(1,2,5), (2,3,7)\}$ which defines query workload $Q$ containing the same number of consistent queries as before ($|Q|=|\mathcal{X}_1||\mathcal{X}_2||\mathcal{X}_5| + |\mathcal{X}_2||\mathcal{X}_3||\mathcal{X}_7|$) --- however, here each $q \in Q$ is a 1-of-3 threshold query instead of a 3-way marginal query.
Lastly, we let $\hat{Q}$ denote the corresponding vector of surrogate queries for $Q$.
We use threshold workloads (and their corresponding vector of all consistent queries) for the empirical evaluations of our mechanisms.
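A minimal sketch of how a threshold workload expands into its vector of consistent queries (names are illustrative):
\begin{verbatim}
from itertools import product

def consistent_queries(workload, domains, r):
    # workload: feature sets S, one per threshold in W.
    # domains: map from feature index to its attainable values X_i.
    # Yields the (r, S, y) specification of every consistent query.
    for S in workload:
        for y in product(*(domains[f] for f in S)):
            yield (r, S, y)

domains = {1: [0, 1], 2: [0, 1, 2], 3: [0, 1], 5: [0, 1], 7: [0, 1, 2]}
W = [(1, 2, 5), (2, 3, 7)]
len(list(consistent_queries(W, domains, r=1)))  # 2*3*2 + 3*2*3 = 30
\end{verbatim}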
\subsection{\textit{Relaxed Adaptive Projection} (\texttt{RAP}) Mechanism}
We now describe the details of the \texttt{RAP}\ mechanism, including how it works as well as its DP guarantee.
Algorithm~\ref{alg:rap} formally defines the \texttt{RAP}\ mechanism.
The input to the mechanism is the dataset $D$ of sensitive user data, the desired size of the synthetic dataset $n'$, privacy parameters $(\epsilon, \delta)$, a vector of $m$ statistical queries $Q$ and their corresponding surrogate queries $\hat{Q}$, and adaptiveness parameters $T, K \in [m]$.
The final outputs are (1) an $n'$-row synthetic dataset, and (2) estimates to the original queries $Q$ obtained by evaluating their surrogate queries on the synthetic dataset; i.e., \texttt{RAP}\ outputs (1) $D'$ and (2) $\hat{Q}(D')$.
\begin{algorithm}
\caption{Relaxed Adaptive Projection (\texttt{RAP}) Mechanism}
\vspace{0.4em} \hspace*{\algorithmicindent} \textbf{Input} \vspace*{-0.6em}
\begin{itemize}[leftmargin=1.5em] \setlength\itemsep{-0.2em}
\item $D$: Dataset of $n$ records from space $\mathcal{X}$.
\item $Q, \hat{Q}$: A vector of $m$ statistical queries and their corresponding surrogate queries.
\item $n', \mathcal{Y}$: Desired size of synthetic dataset with records from relaxed space $\mathcal{Y}$.
\item $T$: Number of rounds of adaptiveness.
\item $K$: Number of queries to select per round of adaptiveness.
\item $\epsilon, \delta$: Differential privacy parameters.
\end{itemize}
\hspace*{\algorithmicindent} \textbf{Body}
\begin{algorithmic}[1]
\STATE Let $\rho = \epsilon + 2\left(\log(\frac{1}{\delta}) - \sqrt{\log(\frac{1}{\delta})(\epsilon+\log(\frac{1}{\delta}))}\right)$.
\STATE Independently uniformly randomly initialize $D' \in \mathcal{Y}^{n'}$.
\IF{ $T=1,\ K=m$ }
\FOR{$i=1,2,\dots,m$}
\STATE Let $\tilde{a}_i = \texttt{GM}(D, q_i, \rho / m)$.
\ENDFOR
\STATE Let $D' = \texttt{RP}(D', \hat{Q}, \tilde{a})$.
\ELSE
\STATE Let $Q_s = \emptyset$.
\FOR{$t=1,2,\dots,T$}
\STATE Let $Q_s, \tilde{a} = \texttt{AS}(D, D', Q, \hat{Q}, Q_s, K, \frac{\rho}{T})$.
\STATE Let $\hat{Q}_{s} = (\hat{q}_i: q_i \in Q_s)$.
\STATE Let $D' = \texttt{RP}(D', \hat{Q}_s, \tilde{a})$.
\ENDFOR
\ENDIF
\STATE {\bfseries Return:} Final synthetic dataset $D'$ and answers $\hat{Q}(D')$.
\end{algorithmic}
\label{alg:rap}
\end{algorithm}
\begin{algorithm}
\caption{Adaptive Selection (\texttt{AS}) Mechanism}
\vspace{0.4em} \hspace*{\algorithmicindent} \textbf{Input} \vspace*{-0.6em}
\begin{itemize}[leftmargin=1.5em] \setlength\itemsep{-0.2em}
\item $D, D'$: Dataset of $n$ records from space $\mathcal{X}$, and synthetic dataset of $n'$ records from relaxed space $\mathcal{Y}$.
\item $Q, \hat{Q}$: Vector of all statistical queries and their corresponding surrogate queries.
\item $Q_s$: Set of already selected queries.
\item $K$: Number of new queries to select.
\item $\rho$: Differential privacy parameter.
\end{itemize}
\hspace*{\algorithmicindent} \textbf{Body}
\begin{algorithmic}[1]
\FOR{$j = 1, 2, \dots, K$}
\STATE Let $\Delta = (|\hat{q}_{i}(D) - \hat{q}_{i}(D')| : q_i \in Q \setminus Q_s)$.
\STATE Let $i = \texttt{RNM}(\Delta, \frac{\rho}{2K})$
\STATE Add $q_i$ into $Q_s$.
\STATE Let $\tilde{a}_i = \texttt{GM}(D, q_i, \frac{\rho}{2K})$.
\ENDFOR
\STATE {\bfseries Return:} $Q_s$ and $\tilde{a} = (\tilde{a}_i: q_i \in Q_s)$.
\end{algorithmic}
\label{alg:as}
\end{algorithm}
\begin{algorithm}
\caption{Relaxed Projection (\texttt{RP}) Mechanism}
\vspace{0.4em} \hspace*{\algorithmicindent} \textbf{Input} \vspace*{-0.6em}
\begin{itemize}[leftmargin=1.5em] \setlength\itemsep{-0.2em}
\item $D'$: Synthetic dataset of $n'$ records from relaxed space $\mathcal{Y}$.
\item $\hat{Q}$: Vector of surrogate queries.
\item $\tilde{a}$: Vector of ``true'' privatized answers corresponding to each surrogate query.
\end{itemize}
\hspace*{\algorithmicindent} \textbf{Body}
\begin{algorithmic}[1]
\STATE Use any iterative differentiable optimization technique (SGD, Adam, etc.) to attempt to find:
$$D' = \argmin_{D' \in \mathcal{Y}^{n'}} ||\hat{Q}(D') - \tilde{a}||_2^2,$$
applying the Sparsemax transformation to every feature encoding in each row of $D'$ between each iteration.
\STATE {\bfseries Return:} $D'$.
\end{algorithmic}
\label{alg:rp}
\end{algorithm}
\paragraph{Non-Adaptive Case:}
In its most basic form ($T=1, K=m$), \texttt{RAP}\ employs no adaptivity.
Here, the $m$ queries are first privately answered directly on the sensitive dataset $D$ using the \textit{Gaussian Mechanism} (\texttt{GM}).
These answers, along with the vector of surrogate queries $\hat{Q}$ and a uniformly randomly initialized $n'$-row synthetic dataset $D'$, are passed to the \textit{Relaxed Projection} mechanism (\texttt{RP}, Algorithm~\ref{alg:rp}).
The \texttt{RP}\ subcomponent utilizes an iterative gradient-based optimization procedure (such as SGD) to update $D'$ by minimizing the disparity between the surrogate queries answers on $D'$ and the privatized answers on the sensitive dataset $D$.
After each iterative update, the Sparsemax transformation is applied to every feature encoding in each row of $D'$.
Once the procedure reaches a stopping condition (e.g., $\hat{Q}(D')$ is within a certain tolerance of $\tilde{a}$, or a certain number of iterations have occurred), \texttt{RP}\ returns the final $D'$.
\texttt{RAP}\ then returns $D'$ along with estimated answers to the query workload $\hat{Q}(D')$.
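A minimal sketch of this projection step in JAX is shown below; for simplicity it uses plain gradient descent and clips to $[0,1]$ in place of the Sparsemax transformation, so it illustrates the optimization pattern rather than the exact procedure:
\begin{verbatim}
import jax
import jax.numpy as jnp

def relaxed_projection(D_prime, Q_hat, a_tilde, lr=0.1, iters=1000):
    # Q_hat: differentiable map from a synthetic dataset to the vector
    # of surrogate answers; a_tilde: the privatized target answers.
    loss = lambda D: jnp.sum((Q_hat(D) - a_tilde) ** 2)
    grad = jax.jit(jax.grad(loss))
    for _ in range(iters):
        D_prime = D_prime - lr * grad(D_prime)
        # Stand-in for the per-feature Sparsemax step: keep records in Y.
        D_prime = jnp.clip(D_prime, 0.0, 1.0)
    return D_prime
\end{verbatim}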
\paragraph{Adaptive Case:}
In the more general case, \texttt{RAP}\ proceeds in $T > 1$ rounds.
In each round $t$, \texttt{RAP}\ uses the \textit{Adaptive Selection} (\texttt{AS}) mechanism to select $K$ new queries to add to the set $Q_s$.
\texttt{AS}\ iteratively uses the Gumbel noise \textit{Report Noisy Max} (\texttt{RNM})~\cite{chen2016truthful, durfee2019practical} and \texttt{GM}\ mechanisms together to privately choose the $K$ queries that have the largest disparity between their current answers on the synthetic dataset $D'$ and their answers on the true dataset $D$.
The \texttt{RP}\ mechanism is then applied only to this subset $Q_s$ containing $tK$ queries in each round, rather than applying \texttt{RP}\ in 1 round on the full vector of privately answered queries $Q$ (as in the non-adaptive case).
Aydore et al.\ claim that the aim of incorporating this adaptivity is to expend the privacy budget more wisely by selectively answering only the $TK \ll m$ total worst-performing queries.
\subsubsection*{Concentrated Differential Privacy}
To state and understand \texttt{RAP}'s DP guarantee, we must briefly discuss \textit{zero-concentrated differential privacy} (zCDP)~\cite{bun2016concentrated}.
Although \texttt{RAP}\ is given $\epsilon$ and $\delta$ values as input and in turn guarantees $(\epsilon, \delta)$-DP, its DP sub-mechanisms and corresponding privacy proof are in terms of $\rho$-zCDP.
Zero-concentrated differential privacy is a different definition of DP that provides a weaker guarantee than pure DP but a stronger guarantee than approximate DP.
It is formally defined as follows.
\begin{definition}[\cite{bun2016concentrated}]
A randomized mechanism $\mathcal{M}$ is $\rho$-zCDP if and only if for all neighboring input datasets $D$ and $D'$ that differ in precisely one individual's data and for all $\alpha \in (1,\infty)$, the following inequality is satisfied:
$$\mathbb{D}_\alpha(\mathcal{M}(D) || \mathcal{M}(D')) \le \rho \alpha,$$
where $\mathbb{D}_\alpha(\cdot || \cdot)$ is the $\alpha$-R\'enyi divergence.
\end{definition}
We omit a detailed discussion of zCDP in this work, referring an interested reader to Bun and Steinke's work~\cite{bun2016concentrated} for more details.
However, its value for \texttt{RAP}\ comes from the fact that zCDP has better composition properties than approximate DP, yet \texttt{RAP}'s final composed zCDP guarantee (parameterized by $\rho$) can be converted back into an $(\epsilon, \delta)$-DP guarantee.
This converted $(\epsilon, \delta)$-DP guarantee is better than if standard composition results of approximate DP had been directly applied.
We now informally state these composition and conversion properties.
zCDP's composition property ensures that if two mechanisms satisfy $\rho_1$-zCDP and $\rho_2$-zCDP, then a mechanism that sequentially composes them satisfies $\rho$-zCDP with $\rho = \rho_1 + \rho_2$.
zCDP's conversion property ensures that if a mechanism satisfies $\rho$-zCDP, then for any $\delta > 0$, the mechanism also satisfies $(\epsilon, \delta)$-DP with $\epsilon = \rho + 2\sqrt{\rho\log(1/\delta)}$.
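These two properties can be stated as a short numerical sketch; the second function is exactly the inversion used on line 1 of Algorithm~\ref{alg:rap}:
\begin{verbatim}
import math

def zcdp_to_dp_epsilon(rho, delta):
    # Conversion property: rho-zCDP implies (eps, delta)-DP.
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

def dp_to_zcdp_rho(eps, delta):
    # Solving the conversion for rho (line 1 of Algorithm 1).
    log1d = math.log(1 / delta)
    return eps + 2 * (log1d - math.sqrt(log1d * (eps + log1d)))

rho = dp_to_zcdp_rho(0.1, 1e-9)
zcdp_to_dp_epsilon(rho, 1e-9)  # recovers epsilon = 0.1
\end{verbatim}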
Finally, we define the two fundamental DP mechanisms used in \texttt{RAP}\ --- \texttt{GM}\ and \texttt{RNM}\ --- and state their DP guarantees in terms of zCDP.
The first mechanism is the Gaussian mechanism, which we restate here in terms of zCDP and for the particular use case of answering a single statistical query.
\begin{definition}
The Gaussian mechanism \texttt{GM}$(D, q_i, \rho)$ takes as input a dataset $D \in \mathcal{X}^n$, a statistical query $q_i$, and a zCDP parameter $\rho$.
It outputs $a_i = q_i(D) + Z$, where $Z \sim \textrm{Normal}(0,\sigma^2)$ and $\sigma^2 = \frac{1}{2n^2\rho}$.
\end{definition}
\begin{lemma}[\cite{bun2016concentrated}]
For any query $q_i$ and $\rho > 0$, the \texttt{GM}$(D, q_i, \rho)$ satisfies $\rho$-zCDP.
\end{lemma}
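A minimal sketch of this mechanism (assuming a query $q$ with outputs in $[0,1]$, so that its sensitivity is $1/n$):
\begin{verbatim}
import numpy as np

def gaussian_mechanism(D, q, rho, rng=np.random.default_rng()):
    # q: statistical query mapping a dataset to a value in [0, 1].
    # Noise scale sigma^2 = 1 / (2 n^2 rho), matching the definition.
    n = len(D)
    sigma = np.sqrt(1.0 / (2.0 * n * n * rho))
    return q(D) + rng.normal(0.0, sigma)
\end{verbatim}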
The second fundamental mechanism that \texttt{RAP}\ uses is the Gumbel noise Report Noisy Max (\texttt{RNM}) mechanism.
\begin{definition}
The Report Noisy Max mechanism \texttt{RNM}$(D, \Delta, \rho)$ takes as input a dataset $D \in \mathcal{X}^n$, a vector of real values $\Delta$, and a zCDP parameter $\rho$.
It outputs the index of the highest noisy value in $\Delta$; i.e., $i^* = \argmax_i \Delta_i + Z_i$, where each $Z_i \sim \textrm{Gumbel}\left(\frac{1}{\sqrt{2\rho|D|^2}}\right)$.
\end{definition}
\begin{lemma}[\cite{durfee2019practical}]
For any real vector $\Delta$ and $\rho > 0$, the \texttt{RNM}$(D, \Delta, \rho)$ satisfies $\rho$-zCDP.
\end{lemma}
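Analogously, a minimal sketch of the Gumbel-noise \texttt{RNM}\ mechanism:
\begin{verbatim}
import numpy as np

def report_noisy_max(Delta, rho, n, rng=np.random.default_rng()):
    # Delta: vector of per-query error gaps; n = |D|.
    # Gumbel scale 1 / sqrt(2 * rho * n^2), matching the definition.
    scale = 1.0 / np.sqrt(2.0 * rho * n * n)
    noise = rng.gumbel(0.0, scale, size=len(Delta))
    return int(np.argmax(np.asarray(Delta) + noise))
\end{verbatim}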
With these fundamental mechanisms and their zCDP guarantees defined, we are now able to formally reproduce Aydore et al.'s original theorem and proof of \texttt{RAP}'s DP guarantee.
\begin{theorem}[\cite{aydore2021differentially}]
For any class of queries and surrogate queries $Q$ and $\hat{Q}$, and for any set of parameters $n'$, $T$, and $K$, the \texttt{RAP}\ mechanism satisfies $(\epsilon, \delta)$-DP.
\end{theorem}
\begin{proof}
First, consider the non-adaptive case where $T=1, K=m$.
Here, the sensitive dataset $D$ is only accessed via $m$ invocations of the Gaussian mechanism, each with privacy $\rho/m$.
Therefore, by the composition property of zCDP, \texttt{RAP}\ satisfies $\rho$-zCDP.
Thus, by our choice of $\rho$ in line 1, we conclude that \texttt{RAP}\ satisfies $(\epsilon, \delta)$-DP.
Next, assume $T > 1$.
\texttt{RAP}\ executes $T$ iterations of its loop, only accessing the sensitive dataset $D$ via the Adaptive Selection (\texttt{AS}) mechanism each iteration.
Thus, we seek to prove that the \texttt{AS}\ mechanism satisfies $\rho/T$-zCDP.
Each invocation of the \texttt{AS}\ mechanism receives as input the privacy parameter $\rho'=\rho/T$, and accesses the sensitive dataset via $K$ invocations of \texttt{RNM}\ and $K$ invocations of \texttt{GM}.
Each invocation of either mechanism ensures $\frac{\rho'}{2K}$-zCDP, and therefore by the composition property of zCDP, the total $2K$ mechanism invocations ensure $\rho'$-zCDP.
Thus, the \texttt{AS}\ mechanism satisfies $\rho/T$-zCDP.
Leveraging zCDP's composition property again, because \texttt{RAP}\ invokes \texttt{AS}\ $T$ times, \texttt{RAP}\ therefore satisfies $\rho$-zCDP.
Finally, by our choice of $\rho$ in line 1, we conclude that \texttt{RAP}\ satisfies $(\epsilon, \delta)$-DP.
\end{proof}
\section{Understanding \texttt{RAP}'s Generalizability} \label{sec:understanding-generalizability}
In this final section, we propose a new and realistic intermediate setting that lies between the classic settings of having full knowledge of all queries in advance (i.e., the prespecified queries setting) vs.\ having no knowledge of which queries will be posed.
We begin by concretely defining this new partial knowledge setting along with a generalization-based measure of utility for mechanisms operating within it.
We then address our final contribution by empirically evaluating \texttt{RAP}'s utility to determine its suitability in the new setting.
\subsubsection*{Motivation}
In statistical modeling, and especially in the subfield of synthetic data generation, the primary goal is not to generate a model or a synthetic dataset that answers a prespecified set of queries well.
Rather, the goal is to generate a model or synthetic dataset that \textit{generalizes} well to future queries~\cite{vapnik1999overview, mohri2012foundations}.
When it comes to differentially private mechanisms for answering statistical queries through a synthetic dataset, prior utility analyses have focused on either: (a) how well those mechanisms answer the prespecified set of queries, or (b) theoretically bounding how well the mechanisms can answer any class of queries in the worst-case.
For example, the utility of \texttt{RAP}\ (and the related practical mechanisms which preceded it) had previously been based solely on the answers to the prespecified workload; i.e., present utility.
Experimentally evaluating a mechanism's present utility is straightforward: simply report the error of the highest-error query from the prespecified query set.
However, in some settings, it may be more useful to understand how well the mechanism can answer future queries.
Towards this, theoretical bounds can provide strong guarantees for the mechanism's worst-case utility across an entire query class~\cite{blum2008learning, dwork2009complexity, dwork2010boosting, hardt2010simple, thaler2012faster}.
The drawback to using these theoretical bounds in practical settings is that they may be overly pessimistic, especially if the queries posed in the future are highly similar to the queries that were used to generate the synthetic dataset.
This apparent disparity between the utility suggested by theoretical analyses and the actual utility that may be observed in practice is nearly identical to the disparity that famously exists between utility analyses in theoretical vs.\ empirical machine learning research~\cite{vapnik1998support, bartlett2006empirical, shalev2014understanding, neyshabur2014search, zhang2021understanding}.
However, for answering statistical queries with DP, the theoretical worst-case bounds are currently the best tool available without introducing additional information or assumptions.
\subsection{Defining the Partial Knowledge Setting}
We now motivate the design of this particular partial knowledge setting, then formally define it.
Much like in the machine learning research literature, we motivate a new partial knowledge setting for the context of differential privacy based on the rationale that in some realistic settings, future queries may be similar to queries posed in the past; i.e., historical queries.
For instance, the U.S.\ Census Bureau periodically collects sensitive data for the decennial census, and routinely allows researchers to securely pose queries directly on the collected data.
Because similar data is being collected each decennial census, it is very likely that some of the queries analysts pose on one census dataset will be similar to the queries that analysts pose on the next census dataset.
We formalize this intuition on partial query repeatability for $r$-of-$k$ thresholds in a general manner in Definition~\ref{def:pksetting}.
For ease of exposition, we first introduce the following notation.
Let $\mathcal{T}$ be an arbitrary distribution over thresholds, and let $Q \leftarrow \mathcal{T}$ denote the vector of all consistent queries $Q$ of a threshold randomly drawn from distribution $\mathcal{T}$.
Similarly, we let $Q \xleftarrow{|W|} \mathcal{T}$ denote the vector of all consistent queries $Q$ from a $|W|$ size workload of thresholds sampled i.i.d.\ from $\mathcal{T}$.
\begin{definition}[Partial Knowledge Setting, General] \label{def:pksetting}
Let $\mathcal{T}_H$ and $\mathcal{T}_F$ be arbitrarily related distributions over thresholds.
In this setting, DP mechanisms are expected to answer arbitrary future thresholds drawn i.i.d.\ from $\mathcal{T}_F$.
However, the DP mechanisms are not provided $\mathcal{T}_F$ explicitly.
Instead, DP mechanisms are provided access to partial knowledge of $\mathcal{T}_F$ via a workload $W_H$ of ``historical'' thresholds sampled i.i.d.\ from $\mathcal{T}_H$; i.e., the mechanisms are given access to $Q_H \xleftarrow{|W_H|} \mathcal{T}_H$.
\end{definition}
Intuitively, in this partial knowledge setting, mechanisms can utilize $Q_H$ to learn about the underlying threshold distribution $\mathcal{T}_H$, and if $\mathcal{T}_H$ is similar to $\mathcal{T}_F$, this will, in turn, inform what areas of the threshold space future thresholds are most likely to be sampled from.
The role of $Q_H$ in this setting is analogous to the role that training data plays in machine learning; i.e., it is the concrete sample of data provided to the mechanism that the mechanism can use to attempt to generalize.
In order for the historical queries $Q_H$ to convey useful information about $\mathcal{T}_F$ to the DP mechanism, $\mathcal{T}_H$ and $\mathcal{T}_F$ should be related.
Towards this, in Definition~\ref{def:pkconc} we specify two concrete instantiations of the partial knowledge setting which make the relationship between $\mathcal{T}_H$ and $\mathcal{T}_F$ explicit.
\begin{enumerate}
\item Informally, the first concrete instantiation is the \textit{exact} partial knowledge setting, where historical thresholds are drawn from the same distribution as the future thresholds.
\item The second concrete instantiation is the \textit{drifting} partial knowledge setting, which extends the exact partial knowledge setting.
The drifting partial knowledge setting is inspired by the practical consideration that even if the historical and future thresholds distributions are initially the same, they may gradually drift apart over time.
\end{enumerate}
In both settings, we ground the historical and future thresholds distributions in the observation that in practice, certain features (or combinations of features) are likely to be more relevant to analysts than other features.
For instance, in the ADULT dataset, ``Age'' and ``Years of education'' might be more relevant and useful for analyses than ``Capital loss amount'' and ``Relationship status''.
We model this relevance as a historical probability distribution $\mathcal{F}_H$ over the \textit{features}, such that the probability mass corresponding to any $r$-of-$k$ threshold in $\mathcal{T}_H$ corresponds to the (normalized) product of the $k$ features' probabilities; i.e., $\mathcal{T}_H$ is the sampling distribution of $k$ features from $\mathcal{F}_H$ without replacement.
Our definition of the drifting partial knowledge setting specifically attempts to capture the practical phenomenon that if (for instance) analysts' interests are concentrated primarily in a small subset of features, then even if their interests drift over time, the analysts' new interests may still be concentrated in a small subset of different features.
Based on this, we now formally define both concrete instantiations of the partial knowledge setting.
\begin{definition}[Partial Knowledge Setting, Exact \& Drifting] \label{def:pkconc}
Let $\mathcal{F}_H$ be an arbitrary historical distribution over features with $\mathcal{T}_H$ as its corresponding historical thresholds distribution.
Without loss of generality, assume the features are sorted in descending order of their probability masses under $\mathcal{F}_H$; i.e., for each feature $f_i$ with probability $p_i$, we have that $p_i \ge p_{i+1}$.
Let $\gamma \in [0,1]$ be a drift parameter, which defines the distributional similarity of the future distribution over features $\mathcal{F}_F$ (and correspondingly the future thresholds distribution $\mathcal{T}_F$) as follows.
For each probability $p_i$, associate the corresponding key
\[
k_i =
\underbrace{(1-2\gamma)}_{\substack{\text{ordering} \\ \text{weighting}}} \ \cdot
\underbrace{\frac{d-i}{d-1}}_{\substack{\text{relative order,} \\ \text{normalized}}} + \
\underbrace{(1-|1-2\gamma|)}_{\substack{\text{shuffling} \\ \text{weighting}}} \ \cdot
\underbrace{u_i}_{\substack{\text{random} \\ \text{shuffling} \\ \text{amount}}},
\]
where $u_i \overset{\text{iid}}{\sim} \textrm{Uniform}[0,1]$.
The feature distribution $\mathcal{F}_F$ is defined by leaving the features fixed in their original ordering, but reordering the probability masses in descending order of their keys.
This results in a distribution of the same concentration, but with probability masses re-assigned to potentially different features.
The future thresholds distribution $\mathcal{T}_F$ is therefore the sampling distribution of $k$ features without replacement from $\mathcal{F}_F$.
When $\gamma = 0$, this procedure yields $\mathcal{T}_F = \mathcal{T}_H$, and we refer to this as the \textit{exact} partial knowledge setting.
When $\gamma > 0$, we refer to this as the \textit{drifting} partial knowledge setting.
\end{definition}
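A minimal sketch of this drift procedure (assuming, per the definition, that the input masses are already sorted in descending order):
\begin{verbatim}
import numpy as np

def drift_feature_distribution(p, gamma, rng=np.random.default_rng()):
    # p: feature probabilities sorted descending (p[0] >= p[1] >= ...).
    d = len(p)
    i = np.arange(1, d + 1)
    u = rng.uniform(size=d)
    keys = ((1 - 2 * gamma) * (d - i) / (d - 1)
            + (1 - abs(1 - 2 * gamma)) * u)
    # Keep features in place; reassign the sorted masses so that the
    # feature with the largest key receives the largest mass.
    order = np.argsort(-keys)
    p_future = np.empty(d)
    p_future[order] = np.sort(p)[::-1]
    return p_future

# gamma = 0 returns p unchanged (exact setting); gamma = 1 reverses it.
p_F = drift_feature_distribution(np.array([0.4, 0.3, 0.2, 0.1]), 0.2)
# A threshold from T_F: k features sampled from F_F without replacement.
S = np.random.default_rng(0).choice(4, size=3, replace=False, p=p_F)
\end{verbatim}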
This model of drift is designed to maintain the concentration of the initial feature distribution $\mathcal{F}_H$ while interpolating between the exact partial knowledge setting ($\gamma=0$) and a uniformly random reshuffling of the features' probabilities ($\gamma=1/2$).
For $0 < \gamma < 1/2$, this model induces a weighted amount of random reshuffling of probabilities in conjunction with simultaneously encouraging features' probabilities to remain ``similar'' to what they initially were; e.g., features with large probability masses under $\mathcal{F}_H$ are likely to retain large probability masses under $\mathcal{F}_F$.
On the other end of the spectrum is the $\gamma > 1/2$ setting, where the relative orderings of probabilities become more likely to be reversed; e.g., features with large probability masses under $\mathcal{F}_H$ are likely to be assigned small probability masses under $\mathcal{F}_F$.
At the extreme of this setting is $\gamma=1$, which induces $\mathcal{F}_F$ of maximal total variation distance to $\mathcal{F}_H$ by deterministically reversing the relative ordering of the features' probabilities.
Figures~\ref{fig:generalizing-drift-hists} and \ref{fig:generalizing-drift-tvs} concretely illustrate how the drift amount $\gamma$ affects the distribution of future features.
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=1\linewidth]{figs/generalizing/drift/adult-drift-hists.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=1\linewidth]{figs/generalizing/drift/loans-drift-hists.pdf}
\end{tabular}
\caption{Examples of drifted feature distributions $\mathcal{F}_F$ across a range of drift parameters $\gamma$, with an initial Geometric distribution for $\mathcal{F}_H$ on the ADULT and LOANS datasets. Categorical features are numbered (rather than named) along the $x$-axis.}
\label{fig:generalizing-drift-hists}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figs/generalizing/drift/drift-tvs.pdf}
\caption{Effect of drift parameter $\gamma$ on the total variation distance between the historical features distribution $\mathcal{F}_H$ and the future features distribution $\mathcal{F}_F$, with an initial Geometric distribution for $\mathcal{F}_H$ on the ADULT and LOANS datasets.}
\label{fig:generalizing-drift-tvs}
\end{figure}
\subsection{Measuring and Computing Utility}
Having concretely defined the partial knowledge setting, we formally define a utility measure to quantify how well a mechanism can answer future thresholds based on the historical thresholds it was given access to; i.e., a measure quantifying how well the mechanism generalizes.
We then describe how to empirically evaluate this defined utility measure in an efficient way.
In this setting, we are interested in the mechanism's error across its answers to the consistent queries of $r$-of-$k$ thresholds drawn from $\mathcal{T}_F$.
This new utility measure is based on the classic utility measure used in the prespecified queries setting (Definition \ref{def:present-err}), with the only difference being that the randomness of the future thresholds distribution $\mathcal{T}_F$ is now explicitly taken into account.
We thus define \textit{future utility}, which we measure in terms of the negative of \textit{future error}; i.e., a mechanism with low future error has high future utility, and vice versa.
Specifically, future error is the expected absolute error taken over the randomness of both $M$ and $\mathcal{T}_F$, formally defined as follows.
\begin{definition}[Future error]
Let $a = Q(D)$ be the true answers to all queries in $Q$ on $D$, and let $\tilde{a}$ be mechanism $M$'s corresponding answers. Then $\err_F$ is the future error of mechanism $M$, defined as $\err_F(M,D,\mathcal{T}_F) = \E_{M(D), Q\leftarrow \mathcal{T}_F} \Vert a - \tilde{a} \Vert_\infty$, where the expectation is over the randomness of both the mechanism and future threshold distribution.
\end{definition}
Theoretically evaluating $\err_F$ of a mechanism on \textit{a priori} unknown threshold distributions without resorting to worst-case bounds is a challenging problem.
Experimentally, however, we are able to efficiently and accurately estimate $\err_F$ for the \texttt{RAP}\ mechanism as follows:
\begin{enumerate}[leftmargin=40pt]
\item Construct feature distributions $\mathcal{F}_H$ and $\mathcal{F}_F$ according to real-world phenomena, which in turn define threshold distributions $\mathcal{T}_H$ and $\mathcal{T}_F$.
\item Generate a workload $W_H$ of historical thresholds, yielding query vector $Q_H \xleftarrow{|W_H|} \mathcal{T}_H$. Independently, generate a workload $W_F$ of future thresholds, yielding query vector $Q_F \xleftarrow{|W_F|} \mathcal{T}_F$.
\item Provide $Q_H$ as the input queries to \texttt{RAP}\ in order to generate a synthetic dataset.
\item Use the synthetic dataset to answer $Q_F$, recording the mean error (and optionally, the corresponding confidence intervals to quantify how faithfully $\err_F$ was approximated).
\end{enumerate}
This evaluation approach is analogous to standard practice in empirical machine learning research where data is split into ``training'' and ``test'' sets randomly (to ensure distributional similarity)~\cite{hastie2009elements}.
The model is then learned on the training set, and subsequently evaluated on the test set to measure how well it generalizes.
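A minimal sketch of this estimation procedure (all names illustrative):
\begin{verbatim}
import numpy as np

def estimate_future_error(answer, D, sample_threshold_queries, trials, rng):
    # answer: the mechanism's query-answering interface, e.g., evaluating
    # surrogate queries on RAP's final synthetic dataset.
    # sample_threshold_queries: draws one threshold from T_F and returns
    # the vector of all consistent queries, each a function of a dataset.
    errs = []
    for _ in range(trials):
        Q = sample_threshold_queries(rng)
        a = np.array([q(D) for q in Q])           # true answers on D
        a_tilde = np.array([answer(q) for q in Q])
        errs.append(np.max(np.abs(a - a_tilde)))  # per-threshold L_inf error
    return float(np.mean(errs))                   # Monte Carlo estimate of err_F
\end{verbatim}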
\subsection{Evaluating \texttt{RAP}'s Future Utility}
As our final contribution, we empirically evaluate \texttt{RAP}'s future utility for answering $r$-of-$k$ thresholds.
The experiments that we perform on \texttt{RAP}\ to understand its suitability in this new partial knowledge setting are as follows:
\begin{itemize}[leftmargin=30pt]
\item Evaluating the effects that the threshold distribution concentration and the historical threshold workload size $|W_H|$ have on \texttt{RAP}'s future utility.
\item Evaluating the effect that ``overfitting'' in the synthetic data optimization step has on \texttt{RAP}'s future utility.
\item Evaluating the effect that the distribution drift amount $\gamma$ has on \texttt{RAP}'s future utility.
\end{itemize}
These experiments are designed to assess the distinct new ways (beyond those in the previous prespecified queries setting) in which \texttt{RAP}'s inputs may influence its future utility.
\subsubsection{Effect of Threshold Distribution Concentration \& Historical Workload Size}
To empirically evaluate \texttt{RAP}'s future utility in the exact partial knowledge setting, we must specify the particular threshold distribution $\mathcal{T}_H = \mathcal{T}_F$ from which we generate both the input queries $Q_H$ and future queries $Q_F$ used to evaluate $\err_F$.
As previously discussed, we do so by specifying feature distributions $\mathcal{F}_H$ and $\mathcal{F}_F$ that, in turn, define the threshold distributions.
As a baseline, we choose what is intuitively the most challenging extreme: setting $\mathcal{F}_H$ and $\mathcal{F}_F$ to be the Uniform distribution.
We expect the future utility of this baseline to be the lowest among all possible distributions since it is the least concentrated, implying that it provides the least amount of information possible to the mechanism about any particular region of the threshold space.
In an effort to model the real-world phenomena that certain features are likely to be more relevant to analysts than other features, we utilize the following two feature distributions.
For a highly concentrated distribution, we use the exponentially-tailed Geometric distribution.
For a mildly concentrated distribution, we use the heavy-tailed Zipfian distribution.
Both distributions are commonly used in practice when modeling real-world phenomena; e.g., ~\cite{miller1989modeling, yu2004false, zeng2012topics, okada2018modeling}.
We hypothesize that the highly concentrated Geometric distribution will induce high-utility results, since many of the same features in $Q_H$ will also appear in $Q_F$.
Analogously, we hypothesize that the mildly concentrated Zipfian distribution will induce lower-utility results (although still higher than the Uniform distribution baseline).
With a fixed threshold distribution $\mathcal{T}_H$ defined by the feature distribution $\mathcal{F}_H$, we must specify how many thresholds will be randomly sampled to form the historical threshold workload $W_H$ (and corresponding vector of all consistent queries $Q_H$) that \texttt{RAP}\ takes as input.
Obtaining a clear understanding of what impact the historical workload size $|W_H|$ has on \texttt{RAP}'s future utility is important because there may be a tension between the number of historical $r$-of-$k$ thresholds and \texttt{RAP}'s future utility.
On the one hand, the more sampled thresholds there are, the more information \texttt{RAP}\ has about the underlying distribution $\mathcal{T}_F$ from which future thresholds will be generated.
This suggests that the more historical $r$-of-$k$ thresholds there are, the higher \texttt{RAP}'s future utility should be.
On the other hand, to optimize \texttt{RAP}'s underlying synthetic dataset, its privacy budget is split between all queries consistent with the historical thresholds.
This implies that the more historical thresholds there are, the more noise will be added to each consistent query's answer, which suggests that future utility will be lower.
Thus, we seek to understand whether one of these two possibilities is correct, or whether there is a ``sweet spot'' where a certain number of historical thresholds is just enough for the mechanism to implicitly learn $\mathcal{T}_F$ but does not result in the privacy budget being spread too thin.
\begin{table}
\centering
\begin{tabular}{ |c|c| }
\hline
Primary Mechanism & \texttt{RAP} \\
\hline
Baseline Mechanism & \texttt{All-0} \\
\hline
Utility Measure & $\err_F$ \\
\hline
$D$ & ADULT, LOANS \\
\hline
$\epsilon$ & $0.1$ \\
\hline
$\delta$ & $1/|D|^2$ \\
\hline
$|W_H|$ & $1, 4, 16, 64, 256, 1024$ \\
\hline
$n^\prime$ & $1000$ \\
\hline
$T$ & $1, 4, 16, 64$ \\
\hline
$K$ & $4, 16, 64, 256$ \\
\hline
$r$ & $1$ \\
\hline
$k$ & $3$ \\
\hline
$\mathcal{T}_H, \mathcal{T}_F$ & Uniform, Zipf, Geometric \\
\hline
$\gamma$ & $0, 0.05, 0.1, 0.2, 0.5, 1$ \\
\hline
\end{tabular}
\caption{Experimental reference table for evaluating the future utility of \texttt{RAP}\ on $r$-of-$k$ thresholds.}
\label{tab:generalizability-experiments}
\end{table}
To empirically quantify the effect of both the threshold distribution concentration as well as historical workload size on \texttt{RAP}'s future utility, we evaluate \texttt{RAP}\ across a range of workload sizes using the three specified distributions over features in both the ADULT and LOANS datasets.
To put \texttt{RAP}'s future utility into context, we also evaluate the future utility of the \texttt{All-0}\ baseline mechanism.
Refer to Table~\ref{tab:generalizability-experiments} for a summary of this experiment.
\begin{figure}
\centering
\begin{tabular}{c}
\hspace*{-0.65cm}
\includegraphics[width=\linewidth]{figs/generalizing/exact/adult-dist-lines.pdf} \\
\hline
\hspace*{-0.65cm}
\includegraphics[width=\linewidth]{figs/generalizing/exact/loans-dist-lines.pdf}
\end{tabular}
\caption{\texttt{RAP}'s future error (and 95\% confidence intervals) across all $T,K$ values considered where \texttt{RAP}\ achieves minimal present error, plotted across a range of workload sizes and historical threshold distributions. ``\texttt{RAP}\ (opt)'' represents \texttt{RAP}'s future error across all $T,K$ values considered where \texttt{RAP}\ achieves minimal future error. Future error of \texttt{All-0}\ included as a baseline.}
\label{fig:generalizing-dist-lines}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\hspace*{-0.65cm}
\includegraphics[width=.45\linewidth]{figs/generalizing/exact/adult-dist-overlay.pdf} & \includegraphics[width=.45\linewidth]{figs/generalizing/exact/loans-dist-overlay.pdf}
\end{tabular}
\caption{\texttt{RAP}'s future utility on each threshold distribution across a range of workload sizes.}
\label{fig:generalizing-dist-overlays}
\end{figure}
Figure~\ref{fig:generalizing-dist-lines} shows the results of this experiment.
As in our prior experiments, each point of the \texttt{RAP}\ line is taken to be where \texttt{RAP}\ achieves minimal \textit{present error} across all combinations of $T,K$ evaluated.
The future error at this minimizing $T,K$ pair is then evaluated and plotted, along with a corresponding 95\% confidence interval to account for randomness both between independent repetitions and across sampling future thresholds from the threshold distribution.
For real-world applications, this reflects what a practitioner using \texttt{RAP}\ would be able to do; i.e., choose the best performing instance of \texttt{RAP}\ across $T,K$ values on the present error metric (since they would not be able to evaluate future error), and then use that instance to answer future queries.
Ideally though, the practitioner would have omnisciently been able to choose the best performing instance of \texttt{RAP}\ across $T,K$ values on the future error metric directly, as this approach will never have larger future error than the former (feasible) approach.
To understand whether there is a significant difference in the future error between these two scenarios, we additionally plot the latter as ``\texttt{RAP}\ (opt)''.
For each distribution individually, we find the results are as expected.
Namely, \texttt{RAP}'s future error is always lower than the baseline mechanism \texttt{All-0}'s future error, and \texttt{RAP}'s future error decreases as the number of historical thresholds that it is given increases.
Interestingly, we find no evidence that there is any point in which the number of historical thresholds given to \texttt{RAP}\ becomes ``too large'' and causes \texttt{RAP}'s future error to begin increasing.
Instead, we find that \texttt{RAP}\ benefits from being provided more historical thresholds when the historical workload size is small, and then eventually reaches a point of diminishing returns.
Additionally, we find that the future error corresponding to the \texttt{RAP}\ instance that attains minimal present error across $T,K$ values is nearly identical to the future error corresponding to the \texttt{RAP}\ instance that attains minimal future error across $T,K$ values.
This indicates that in practice, answering future queries using the \texttt{RAP}\ instance that achieved minimal present error across $T,K$ values will likely also yield the minimal future error.
To better visualize the differences across distributions, \texttt{RAP}'s future error lines are overlaid in Figure~\ref{fig:generalizing-dist-overlays} for both the ADULT and LOANS datasets.
From this, we see that the differences between \texttt{RAP}'s future error across all three distributions are not as striking as one may expect. For small historical workload sizes (less than 16 and 64 on the ADULT and LOANS datasets respectively), however, we find that the results roughly align with our intuition: the least concentrated (Uniform) distribution induces the highest future error, while the most concentrated (Geometric) distribution induces the lowest future error.
These findings, taken together with those of Figure~\ref{fig:generalizing-dist-lines}, yield a simple, useful insight into how to achieve low future error with \texttt{RAP}\ in practice.
Specifically, if the size of the historical workload is small, a practitioner can simply augment it by adding uniformly randomly sampled thresholds from the space of all possible thresholds (regardless of what the underlying threshold distribution $\mathcal{T}_H$ is).
In the worst case, \texttt{RAP}'s future error will be essentially unaffected (if $|W_H|$ was already in the region where returns are diminishing); in the best case, \texttt{RAP}'s future error will be reduced significantly.
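A minimal sketch of this augmentation strategy is given below, assuming thresholds are represented as (feature, value) pairs with known finite domains; the names are illustrative rather than taken from our implementation:
\begin{verbatim}
import random

def augment_workload(historical_thresholds, feature_domains,
                     target_size, seed=0):
    """Pad a small historical threshold workload up to target_size with
    uniformly random thresholds, regardless of the unknown T_H.
    Assumes the domains contain at least target_size distinct thresholds."""
    rng = random.Random(seed)
    augmented = list(historical_thresholds)
    existing = set(augmented)
    while len(augmented) < target_size:
        feature = rng.choice(sorted(feature_domains))
        value = rng.choice(feature_domains[feature])  # uniform over domain
        threshold = (feature, value)
        if threshold not in existing:  # avoid duplicate thresholds
            existing.add(threshold)
            augmented.append(threshold)
    return augmented
\end{verbatim}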
\subsubsection{Effect of ``Overfitting'' the Synthetic Dataset}
When answering a prespecified set of queries using \texttt{RAP}, the goal in the relaxed projection step is to achieve as close to a global minimum as possible.
In fact, although such an achievement is unlikely in practice, Aydore et al.'s theoretical utility result relies on a global minimum having been reached.
However, when the goal is to learn a model that generalizes to unseen data, it is well known that optimizing the loss function to a global minimum will lead to an extremely overfit model.
In the exact partial knowledge setting where future utility is the metric of choice, we seek to determine whether a conceptually similar ``overfitting'' phenomenon may be occurring when \texttt{RAP}\ uses the historical threshold workload to generalize to future thresholds.
Towards this, we recall our finding from Figure~\ref{fig:generalizing-dist-lines}.
Specifically, that \texttt{RAP}\ does not seem to noticeably overfit to the historical queries when selecting the adaptivity parameters $T$ and $K$ based on the instance of \texttt{RAP}\ that had minimal present error.
However, this finding does not eliminate the possibility that \texttt{RAP}\ is overfitting to the historical queries during the synthetic dataset optimization procedure itself.
For instance, in Figure~\ref{fig:generalizing-dist-overlays} on the LOANS dataset at a historical workload size of 4, there is a significant difference between \texttt{RAP}'s future errors on the Uniform vs.\ Geometric distributions.
This could be explained either by \texttt{RAP}\ overfitting to the historical workload generated from the Uniform distribution (which is relatively less informative regarding which thresholds are likely to be sampled in the future), or it could simply indicate that the historical workload does not contain enough information about the relevant space of thresholds that \texttt{RAP}\ needs in order to generalize well.
To analyze this possibility, we perform the same experiment as above while simultaneously evaluating \texttt{RAP}'s future utility not just at the end of the optimization procedure, but after each iteration of the optimization procedure.
Figure~\ref{fig:generalizing-training-progress} displays the results, along with \texttt{RAP}'s training loss and present error after each iteration of the optimization procedure.
In classic ML, a canonical symptom of overfitting is observing a point in the training progress where the training error continues decreasing, but where the test error begins steadily increasing.
In our setting, the analogue would be observing a point where the present error continues decreasing, but where the future error begins increasing.
However, we do not observe such behavior in either graph, as the future error steadily decreases throughout the entire training procedure.
The primary difference between the two graphs is that \texttt{RAP}'s decrease in future error under the Uniform distribution is much smaller than under the Geometric distribution.
This simply indicates that, as expected, \texttt{RAP}\ is able to take advantage of the Geometric distribution's historical workload, which is significantly more informative about the portions of threshold space that future thresholds will be drawn from.
Viewed differently, in the case of the Uniform distribution, \texttt{RAP}\ did not ``overfit'' to the historical workload --- rather, the historical workload simply did not provide enough information to \texttt{RAP}\ about the relevant remainder of the query space.
The take-away from these findings is that while \texttt{RAP}\ would have benefited from having a larger historical threshold workload, it would not have benefited from introducing analogues to other classic overfitting remedies.
For example, a practitioner may be tempted to reserve a held-out set of thresholds from the historical workload with the intention of using them between iterations as a proxy to estimate future utility, stopping the training early when the error on the held-out set begins increasing.
Not only do these findings indicate that such a strategy would not be beneficial, but combined with the findings from the previous experiment, we conclude that such a strategy would result in relatively \textit{greater} future error due to the reduced historical workload that \texttt{RAP}\ is given.
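The per-iteration bookkeeping behind Figure~\ref{fig:generalizing-training-progress}, together with the held-out early-stopping rule just described, can be sketched as follows (the optimizer interface and error functions are hypothetical stand-ins; in our runs the early-stopping branch would simply never fire):
\begin{verbatim}
def track_training(optimizer, n_iters, holdout_thresholds, patience=5):
    """Record loss / present error / held-out error each iteration, and
    stop early if held-out error rises for `patience` iterations."""
    history, rising, best = [], 0, float("inf")
    for it in range(n_iters):
        optimizer.step()  # one relaxed projection update (hypothetical API)
        synth = optimizer.synthetic_dataset()
        record = {
            "iter": it,
            "loss": optimizer.training_loss(),
            "present": present_error(synth, optimizer.historical_workload),
            "holdout": future_error(synth, holdout_thresholds),
        }
        history.append(record)
        rising = rising + 1 if record["holdout"] > best else 0
        best = min(best, record["holdout"])
        if rising >= patience:  # classic early stopping
            break
    return history
\end{verbatim}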
\begin{figure}
\centering
\begin{tabular}{cc}
\hspace*{-0.65cm}
\includegraphics[width=.45\linewidth]{figs/generalizing/exact/loans-uniform-4-training.pdf} & \includegraphics[width=.45\linewidth]{figs/generalizing/exact/loans-geometric-4-training.pdf}
\end{tabular}
\caption{Training progress across iterations for \texttt{RAP}\ on Uniform vs.\ Geometric distributions over features in LOANS dataset, both with a small historical workload size of 4.}
\label{fig:generalizing-training-progress}
\end{figure}
\subsubsection{Effect of Threshold Distribution Drift}
In the drifting partial knowledge setting, as the future features distribution $\mathcal{F}_F$ drifts further from the historical features distribution $\mathcal{F}_H$, it is clear that \texttt{RAP}'s future utility should decrease.
However, it is unclear how \textit{sensitive} \texttt{RAP}'s future utility is to such drift.
Thus, we seek to quantify the extent to which \texttt{RAP}\ can tolerate distributional drift while maintaining high future utility.
To achieve this, we evaluate \texttt{RAP}'s future utility in the following experiment.
We first define the historical features distribution $\mathcal{F}_H$ using the aforementioned highly concentrated Geometric distribution over features in both the ADULT and LOANS datasets.
We then measure \texttt{RAP}'s future error across a range of drift amounts.
Because \texttt{RAP}\ achieved low future error in the exact partial knowledge setting on all distributions when the workload size was large enough, we anticipate that distributional drift will similarly not have a significant impact when the historical workload size is large.
Thus, in Figure~\ref{fig:generalizing-drift-effect}, we evaluate the impact of distributional drift specifically with small historical workload sizes of 4 and 16 on the ADULT and LOANS datasets respectively.
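For intuition on how a drift amount maps onto total variation distance (cf.\ Figure~\ref{fig:generalizing-drift-tvs}), the sketch below computes this distance under one simple assumed drift model (a $\gamma$-mixture of the historical Geometric distribution with the Uniform distribution); this mixture is illustrative and not necessarily the drift mechanism used in our experiments:
\begin{verbatim}
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two finite distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def drifted_tvs(n_features, rho=0.5, gammas=(0.0, 0.1, 0.2, 0.4)):
    """TV distance between a (truncated) Geometric(rho) distribution over
    features and its gamma-mixture with Uniform: an assumed drift model."""
    idx = np.arange(n_features)
    geom = rho * (1.0 - rho) ** idx
    geom /= geom.sum()  # normalize the truncated geometric
    unif = np.full(n_features, 1.0 / n_features)
    return {g: tv_distance(geom, (1 - g) * geom + g * unif) for g in gammas}
\end{verbatim}
Under this mixture model the distance grows linearly in $\gamma$, since $\mathrm{TV}(p, (1-\gamma)p + \gamma u) = \gamma\, \mathrm{TV}(p, u)$.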
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{figs/generalizing/drift/gamma-lines.pdf}
\caption{Future error of \texttt{RAP}\ across a range of distributional drift amounts on the ADULT and LOANS datasets, given small historical workload sizes of 4 and 16 respectively.}
\label{fig:generalizing-drift-effect}
\end{figure}
The results of this experiment reveal that on both datasets, \texttt{RAP}\ is fairly impervious to distributional drift.
\texttt{RAP}'s future error only begins to exhibit a significant increase at approximately $\gamma=0.4$ on the ADULT dataset and $\gamma=0.1$ on the LOANS dataset.
Comparing with Figure~\ref{fig:generalizing-drift-tvs}, these points both correspond to an expected total variation distance between the historical and future features distributions of approximately $0.5$ on their respective datasets.
Thus, we are able to conclude that even if the future features distribution drifts from the historical features distribution by a moderate amount, \texttt{RAP}\ can still be expected to maintain high utility. |
\section{Introduction}
Holographic Space Time (HST) is a formalism for generalizing string theory to situations where the asymptotic regions of space-time are not frozen vacuum states.
In particular, it gives us a well defined holographic quantum theory of Big Bang cosmology\cite{holocosm}. In \cite{holoinflation}, two of the authors (TB and WF) introduced a model of HST, which they claimed could reproduce the results of slow-roll inflation. In this paper, we use results from \cite{malda},\cite{mcfaddenskenderis} and \cite{others} to prove and improve that claim. As a consequence we will show that {\it if the two-point functions of inflationary fluctuations coincide with those in a single-field slow-roll model, then they probe only coarse features of the underlying fundamental quantum theory. } Any model which produces small, approximately Gaussian, approximately $SO(1,4)$ covariant fluctuations yields two- and three-point functions determined by two unitary representations of $SO(1,4)$. A generic model has $9$ parameters: the scaling dimension of the scalar operator on the projective light cone (see below); the strength of the scalar and tensor Gaussian fluctuations; the normalizations of the $\langle S^3\rangle$, $\langle ST^2\rangle$, and $\langle S^2 T\rangle$ three-point functions; and the 3 different tensor structures in the $\langle T^3 \rangle$ three-point function. Maldacena's squeezed limit theorem, when combined with $SO(1,4)$, fixes all 3-point functions involving the scalar in terms of the scale dependence of the corresponding two-point functions, reducing the number of parameters to $6$\footnote{We emphasize that the word parameters above actually refers to functions of the background Hubble radius $H(t)$ and its first two time derivatives.}. A general quantum theory with $SO(1,4)$ invariance and localized operators $S$ and $T$ contains many invariant density matrices, and the two- and three-point functions do not determine either the underlying model or the particular invariant density matrix. Finally, we note that the dominance of the scalar over the tensor two-point fluctuations, which is evident in the CMB data, follows from general cosmological perturbation theory along with the assumptions that there was a period of near dS evolution and that the intrinsic $S$ and $T$ fluctuations are of the same order of magnitude, which is the case in both conventional slow-roll models and the HST model of inflation. However, HST predicts a different dependence of the fluctuation normalizations on the time dependent background $H(t)$. For correlation functions involving the scalar, this difference can be masked by choosing different backgrounds to fit the data. HST makes the unambiguous prediction that the tensor tilt vanishes, whereas conventional slow roll predicts it to be $r/8$ (where $r$ is the ratio of amplitudes of tensor and scalar fluctuations) and data already constrains $r < 0.1$, so this may be hard to differentiate from zero even if we observe the tensor fluctuations.
We have noted a set of relations for fluctuations that follow from very general properties of cosmological perturbation theory. For us, the validity of classical cosmological perturbation theory follows from Jacobson's observation that Einstein's equations (up to the cosmological constant, which in HST is a boundary condition for large proper times) are the hydrodynamic equations of a system, like HST, which obeys the Bekenstein-Hawking relation between entropy and area. In \cite{holoinflation} we argued that this meant that there would be a hydrodynamic inflaton field (or fields) even in regimes where HST is not well-approximated by {\it quantum} effective field theory (QUEFT)\footnote{We refer to the classical field equations, which, according to Jacobson, encode the hydrodynamics of space-time as a Thermodynamic Effective Field Theory, or THEFT.}. The fluctuations calculated from an underlying HST model, which we argued to be approximately $SO(1,4)$ covariant and whose magnitude we estimated, are put into the classical hydrodynamical inflaton equations as fluctuating initial conditions. In fact, in the co-moving gauge, we can view the inflaton as part of the metric and this picture follows from Jacobson's original argument\footnote{Even in the FRW part of the metric, we can view the inflaton field as just a generic way to impose the dominant energy condition on a geometry defined by an otherwise unconstrained Hubble radius $H(t)$.}.
We will argue below that certain constraints on the parameters not determined by symmetries follow from quite general arguments based on classical cosmological perturbation theory, while other constraints correspond to a choice of parameters in the underlying discrete HST model. Note that, within the HST framework, the validity of classical cosmological perturbation theory is a statement about the Jacobsonian hydrodynamics of a system, which is NOT well-approximated by QUEFT. The constraints fix the $SO(1,4)$ representation of the tensor fluctuations, and imply the dominance of scalar over tensor fluctuations. When combined with estimates from the HST model, these constraints suggest that the tensor two-point function {\it might} be observed in the Planck data. Non-gaussian fluctuations involving at least one scalar component are small. The most powerful discriminant between models is the tensor 3 point function. Standard slow-roll models produce only one of the three forms allowed by symmetry. A second one can be incorporated by adding higher derivative terms to the bulk effective action, but the validity of the bulk effective field theory expansion requires it to be smaller than the dominant term\footnote{In weakly coupled string theory, we might consider inflation with Hubble radius at the string scale, and get this second term to be sizable, but this is a non-generic situation, based on a hypothetical weakly coupled inflation model, for which we do not have a worldsheet description.}. The third form violates parity and cannot arise in any model based on bulk effective field theory. We argue that it could be present in more general models, if parity is not imposed as a fundamental symmetry.
Unfortunately, all extant models, whether based on effective field theory or HST, predict that the tensor 3 point function is smaller than the, as yet unobserved, tensor two-point function by a power of $\frac{H}{m_P}$ and we cannot expect to detect it in the foreseeable future.
This paper is organized as follows: in the next section we present the mathematical analysis of two- and three-point functions in a generic quantum theory carrying a unitary representation of $SO(1,4)$, written in terms of operators localized on a three sphere.
We emphasize that this theory does not satisfy the axioms of quantum field theory on the three sphere. In particular, it does not satisfy reflection positivity, because the Hamiltonian is a generator of a unitary representation of $SO(1,4)$, and it is not bounded from below. The absence of highest weight generators prevents the usual continuation to a Lorentzian signature space-time. In consequence, a general theory of this type will have a large selection of dS invariant density matrices, rather than the unique pure state of conventional CFT. Nonetheless, two- and three-point functions are completely determined by symmetry, up to a few constants. The work of Maldacena and collaborators\cite{malda} (see also later work of McFadden and Skenderis\cite{mcfaddenskenderis} and others\cite{others}) shows that, to leading order in slow-roll parameters, single field slow-roll inflation is in this category of theories\footnote{When we use the phrase slow-roll model, we mean a model in which the fluctuations are calculated in terms of QUEFT in a slow-roll background. In our HST model, the fluctuations are calculated in an underlying non-field theoretical quantum model, and put into the hydrodynamic equations of that model as initial conditions.}. Thus, the oft-heard claim that objections\footnote{We recall these objections in Appendix A.} to the conceptual basis of slow-roll inflation must be wrong, because the theory fits the data, is ill-founded. Our analysis shows that current data probe only certain approximate symmetries of a theory of primordial fluctuations, and determine a small number of parameters that are undetermined by group theory. Furthermore, the success of the slow-roll fit to these parameters amounts, so far, to the statement that the fluctuations are predicted to be small and approximately Gaussian; that the scaling exponents are within a small range around certain critical values; and that the scalar two-point function is much larger than that of the tensor. Given the central limit theorem, the first part of this prediction does not seem to be such an impressive statement. Note also that if there is any environmental selection going on in the explanation of the initial conditions of the universe, even the statement that the fluctuations are small might be understood as environmental selection. The fact that the scalar two-point function dominates the tensor is a consequence of general properties of classical gravitational perturbation theory around a background which is approximately de Sitter.
The relative sizes of various three-point functions were also derived by Maldacena in this quite general setting.
We should emphasize that our remarks are relevant to data analysis only if future data remains consistent with slow-roll inflation. There is a variety of inflationary models, and many of them produce non-Gaussian fluctuations which are not $SO(1,4)$ covariant. If future observations favor such a model, they would rule out the simple symmetry arguments, and disprove both slow-roll inflation and the holographic model of inflation. Thus, although we set out to prove that our HST model {\it could} fit the data, we have ended up realizing that the current data probe only a few general properties of the underlying theory of primordial fluctuations. At the moment, we do not have enough control over our model to make predictions that go beyond these simple ones.
In section 3 we sketch the holographic inflation (HI) model of \cite{holoinflation} and recall how it leads to a prediction of small, approximately Gaussian, approximately $SO(1,4)$ invariant fluctuations. It also resolves all of the conceptual problems of conventional inflation and gives a completely quantum mechanical and causal solution to the flatness and horizon problems, as well as an explanation of the homogeneity and isotropy of the very early universe --- all of the latter without inflation. Within the HST formalism, the HI model also explains why there is any local physics in the world, despite the strong entropic pressure to fill the universe with a single black hole at all times.
In the Conclusions we discuss ways in which the data might distinguish between different models of fluctuations. Appendix A recalls the conceptual problems of conventional inflation models, while Appendix B recalls the unitary irreducible representations of $SO(1,4)$.
\section{Fluctuations from symmetry}
In early work on holographic cosmology\cite{holocosm} TB and WF postulated that inflation took place at a time when the universe was well described by effective quantum field theory (QUEFT), and that the inflaton was a quantum field. Our attitude to this began to change as a consequence of two considerations. The first was that, although inflationary cosmology and de Sitter space are {\it not} the same thing, it seemed plausible that at least part of the fully quantum version of inflationary cosmology should involve evolution of independent dS horizon volumes in a manner identical to a stable dS space, over many e-foldings. However, over such time scales we expected each horizon volume to be fully thermalized. The black hole entropy formulas in dS space tell us that the fully thermalized state has {\it no local excitations} and therefore is not well modeled by field theory.
In parallel with this realization, we began to appreciate Jacobson's 1995 argument\cite{ted}, indicating that the classical Einstein equations were just hydrodynamics for a system obeying the local connection between area and entropy, for maximally accelerated Rindler observers. The gravitational field should only be quantized in special circumstances where the covariant entropy bound is far from saturated and bulk localized excitations are decoupled from most of the horizon degrees of freedom (DOF). Jacobson's argument does not give a closed system of equations, because it does not provide a model for the stress tensor. We realized that this meant that other fields like the inflaton, which provide the stress tensor model, could also be classical hydrodynamical fields, unrelated to the QUEFT fields that describe particle physics in the later stages of the universe.
In \cite{holoinflation} TB and WF constructed a model which begins with a maximal entropy $p=\rho$ Big Bang, passes through a stage with $e^{3N_e}$ decoupled horizon volumes of dS space and evolves to a model with approximate $SO(1,4)$ invariance. The corrections to $SO(1,4)$ for correlation functions of a small number of operators are of order $e^{-N_e}$. In trying to assess the extent to which the predictions of such a model could fit CMB and large scale structure data, we realized that, to leading order in the slow-roll approximation, {\it the results of many conventional inflationary models amounted to a prediction of approximate $SO(1,4)$ invariance and the choice of a small number of parameters.} Work of \cite{malda},\cite{mcfaddenskenderis} and \cite{others} has shown that even if we can measure all two- and three-point functions of both scalar and tensor fluctuations, there are only $9$ parameters. Some of these parameters are related, by an argument due to Maldacena. Current measurements only determine two of the parameters and bound some of the others
(the tensor spectral index can't be measured until we actually see tensor fluctuations). Our conclusion is that observations and general principles tell us only that the correct theory of the inflationary universe has the following properties
\begin{itemize}
\item It is a quantum theory that is approximately $SO(1,4)$ invariant, and the density matrix of the universe at the end of inflation is approximately invariant. There are many such density matrices in a typical reducible representation of $SO(1,4)$.
\item The tensor and scalar fluctuations are expectation values of operators transforming in two particular representations of $SO(1,4)$. CMB and LSS data determine the normalization of the two-point function and representation
of the scalar operator, and put bounds on the two-point function of the tensor and all 3 point functions. If future measurements detect neither B mode polarization nor indications that the fluctuations are non-Gaussian, then we will learn no more about the correct description of the universe before and during the inflationary era.
\item When combined with Maldacena's ``squeezed limit'' theorem and general features of cosmological perturbation theory around an approximately dS solution, $SO(1,4)$ invariance gives results almost equivalent to a single field slow-roll model. We will discuss the differences below. Thus, it is possible that even measurement of {\it all} the two- and three-point functions will teach us only about the symmetry properties of the underlying model. In fact, measurement of the tensor 3-point function could rule out conventional slow roll, but would not distinguish between more general models obeying the symmetry criteria described above.
\end{itemize}
In particular, the models proposed in \cite{holoinflation}, which resolve all of the conceptual problems of QUEFT based inflation models (see Appendix A),
will fit the current data as well as any conventional model. At our current level of understanding, those models do not permit us to give a detailed prediction for the scalar tilt, apart from the fact that it should be small. The tensor tilt is predicted to vanish. HST models suggest very small non-Gaussianity in correlation functions involving scalar perturbations, as we will see below, and explain why the scalar two-point function is much larger than that of the tensor. Indeed, this follows from $SO(1,4)$ symmetry, Maldacena's long wavelength theorem for scalar fluctuations, and very general properties of classical gravitational fluctuations around a nearly dS FRW model.
Classical cosmological perturbation theory identifies two gauge invariant quantities, which characterize fluctuations, and transform as a scalar and a transverse traceless tensor under $SO(3)$. We will attempt to find $SO(1,4)$ covariant operators, whose expectation values give us the two- and three-point correlation functions of these fluctuations. The form of these fluctuations is determined by group theory, up to a few constants.
In order to handle $SO(1,4)$ transformation properties in a compact manner, we use the description of the 3-sphere as the projective future light cone in $4 + 1$ dimensional Minkowski space. That is, it is the set of $5$ component vectors $X^{\mu}$ satisfying
$X^{\mu} X_{\mu} = 0$, $X^0 > 0$ and identified under $X^{\mu} \rightarrow \lambda X^{\mu}$ with $\lambda > 0$. Fields on the sphere are $SO(1,4)$ tensor functions of $X$, which transform as representations of the group of identifications, isomorphic to $R^+$. These representations are characterized by their tensor transformation properties, covariant constraints, and a single complex number $\Delta$
such that $$ F (\lambda X) = \lambda^{- \Delta} F(X), $$ so that the field is completely determined by its values on the sphere. The allowed values of $\Delta$ are constrained by the unitarity of the representation of $SO(1,4)$ in the Hilbert space of the theory. An expectation value of the products of two or three of these operators, in any dS invariant density matrix, is determined in terms of the $9$ numbers discussed above. For the tensor modes, the determination of the 3 point function has been demonstrated in \cite{malda} and \cite{others}, but there is not yet a compact formula. We hope that the five dimensional formalism will provide one, but we reserve this for future work.
Ordinary QFT in dS space has {\it many} dS invariant states, both pure and impure, as a consequence of the fact that there are no highest weight unitary representations of $SO(1,4)$\footnote{We are not talking here about the (in)-famous $\alpha$ vacua, which are states of Gaussian quantum fields whose two-point function is dS invariant, but has singularities when a point approaches its anti-pode. Rather, in the context of bulk QUEFT, we're speaking about dS invariant excitations of the conventional Bunch-Davies vacuum. These are not represented by Gaussian wave functionals.}. In conventional CFT, the product of two lowest weight representations contains only representations of weight higher than the sum of the individual weights, but this is not true for unitary representations of $SO(1,4)$. Products of non-trivial irreps can have singlet components. We can make even more general $SO(1,4)$ invariant density matrices by taking weighted sums of the projectors on this plethora of pure invariant states. This is in marked contrast to conventional CFT, whose Hilbert space consists of lowest weight unitary representations of $SO(2,3)$ and has a unique invariant state. Nonetheless, because the constraints of dS invariance on $2$ and $3$ point functions are expressed as analytic partial differential equations, in which the cosmological constant appears as an analytic parameter, these functions {\it are} analytic continuations of corresponding expressions in ordinary CFT. While we do not think that QFT in dS space is the correct theory of inflationary fluctuations, nor that the quantum theory of dS space is $SO(1,4)$ invariant; we do think that it is plausible that the quantum theory of a cosmology that has a large number of e-folds of inflation, followed by sub-luminal expansion which allows observers to see all of that space-time, should have an approximate $SO(1,4)$ symmetry realized by unitary operators in a Hilbert space. This was explained in \cite{holoinflation}.
The scaling symmetry $R^+$ plays another useful role, since we are trying to make a quantum model of many horizon volumes of the asymptotic future of a classical dS space. dS space asymptotes to the future light cone $X^2 = 0$, and the rescaling transformation is simply time evolution in either the global or flat slicing. The two times are asymptotically equivalent.
For large time in the flat slicing, we have $$X\cdot Y \sim e^{2t/R} ({\bf x - y})^2 .$$ Thus, the scaling dimension of the operator tells us about its large time behavior in the flat slicing of dS space.
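To see this explicitly, one convenient parametrization of the projective light cone (in signature $(+,-,-,-,-)$; the normalization here is illustrative) is $$ X^{\mu} = e^{t/R}\Bigl(\frac{1 + {\bf x}^2}{2},\ {\bf x},\ \frac{1 - {\bf x}^2}{2}\Bigr), \qquad X^{\mu}X_{\mu} = 0 , $$ for which a short computation gives $$ X \cdot Y = \frac{1}{2}\, e^{(t_X + t_Y)/R}\, ({\bf x} - {\bf y})^2 , $$ reproducing the quoted late-time behavior at equal times, up to an overall constant that can be absorbed into the normalization of the correlators.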
The field corresponding to the scalar fluctuations is a scalar $S (X)$, with $\Delta_S = (3/2 - \sqrt{9/4 - m^2 R^2})$. In this formula $m$ is the mass of a bulk scalar field, which would give rise to this representation by dS/CFT calculations, as in \cite{malda}, when evaluated in the Bunch-Davies vacuum. It parametrizes one of the series (called Class I in Appendix B) of unitary representations of $SO(1,4)$ described in \cite{thomasnewton}. These are the analogs of the complementary and principal series of unitary representations of $SL(2,R)$. In ordinary QFT in dS space, the Wheeler-DeWitt wave function for this bulk field determines the in-in correlator of the corresponding bulk quantum field in the Bunch-Davies vacuum. We note that this is the result of direct calculation of ordinary QFT in dS space and does not invoke any analytic continuation from an AdS calculation. We will discuss the relation to dS/CFT in more detail below. The values for which the square root is real are called the complementary series of representations, while those for which the real part is fixed at $3/2$ and the imaginary part varies are the principal series. At the level of the two-point function we could view the usual scalar fluctuation as determined by the correlator of $\langle \Phi^{\dagger} \Phi \rangle$, which makes sense even for the complex operators of the principal series. However, there is no consistent interpretation of the 3 point function of principal series operators, except to set the various complex 3 point functions to zero. Thus, in order to have an interpretation as fluctuations of the real quantity $\zeta$, we must restrict attention to the complementary series. Thus $\Delta_S$ is bounded between $0$ and $3/2$. Conventional slow-roll models have $\Delta_S = 0$. Note that, strictly speaking, this is not in the list of unitary representations, but is a limit of them. We believe that this is a consequence of the logarithmic behavior of massless minimally coupled propagators in dS space, which may make the definition of the global symmetry generators a bit delicate.
From the point of view of phenomenology, this subtlety is irrelevant. $SO(1,4)$ invariance is only an approximate property of inflationary models and we can certainly consider arbitrarily small values of $\Delta_S$, so the distinction between zero and other values could never be determined from the data.
Our point is that the representation constant of the field fixes its two- and three-point functions up to pre-factors. The two-point function of such a field is fixed, up to a multiplicative constant, to be
$$ {\rm Tr} [\rho \Phi (X) \Phi (Y) ] = C^S_2 (X^{\mu} Y_{\mu} )^{- 2 \Delta_S} .$$ Here $\rho$ can be any $SO(1,4)$ invariant density matrix.
The flat slicing of dS space is
$$ ds^2 = - dt^2 + e^{2t/R}\ d {\bf y}^2 .$$ Using the asymptotic relation between the flat coordinates and the light cone, we find a momentum space correlator (in radial momentum space coordinates)
$$ 4\pi C^S_2 k^{-1}( \frac{e^{t/R}}{k})^{ 4\Delta_S} .$$
Similarly, the 3 point function is determined to be
$${\rm Tr} [ \rho\, \Phi (X_1) \Phi (X_2) \Phi (X_3) ] = C_3 \prod_{i<j} X_{ij}^{-\frac{\Delta_S}{2}} ,$$ where
$X_{ij} = X_i \cdot X_j .$ This form follows from $SO(1,4)$, the scaling symmetry, and symmetry under permutations of the points. The latter symmetry follows from the assumption that the operators commute with each other. In both HST and a conventional slow-roll model, this is a consequence of the fact that the different points are causally disconnected during inflation, and that we are computing an expectation value at fixed time. In the HST model, $SO(1,4)$ invariance sets in via a coupling together of DOF at different points, using a time dependent Hamiltonian, which approaches a generator of $SO(1,4)$ when the number of e-folds is large\footnote{Below, we'll recall the meaning of the bulk concept ``number of e-foldings'' in the HST model.}.
Maldacena and Pimentel\cite{malda}, among others \cite{others}, have shown how all of the $3$ point functions are determined up to 3 normalizations by $SO(1,4)$ group theory. Similar results were obtained by McFadden, Skenderis and collaborators\cite{mcfaddenskenderis}. Our results for correlation functions of tensor modes have to coincide with theirs because the only representation of $SO(1,4)$ that has the right number of components to represent the transverse traceless graviton fluctuation is the Class IV representation (in the notation of Newton\cite{thomasnewton}) with $s=2$. The Casimir operators have the values $Q = -6 , W = -12$. The class $IV_{a,b}$ representations are the two different helicity modes of the graviton. See Appendix B for details of the classification of $SO(1,4)$ representations. Note however that the coefficients in front of these group theoretic predictions are different in slow-roll and HST models. In slow-roll models, both scalar and tensor fluctuations are computed as two-point functions of quantum fields in the background space-time $H(t)$, while in HST, the magnitude of the fluctuations is determined by a fixed Hubble parameter $H$, as we will review below.
Two interesting points about the pure tensor 3-point functions were made by \cite{parityviolating} and by Maldacena and Pimentel\footnote{The explicit forms for these three-point functions are not terribly illuminating.
The most elegant expression we know is in the spinor helicity formalism used by Maldacena and Pimentel, and it would be redundant and pointless to reproduce that here. We hope that the realization of the three sphere as the five dimensional projective light cone will simplify these expressions, but we have not yet succeeded in showing this.}. In bulk field theory computations, the parity violating term allowed by group theory does not appear in correlation functions. The corresponding term in the logarithm of the WD wave function is purely imaginary and does not contribute to correlators of operators that are simply functionals of the fields appearing in the wave functional. Neither the tensor nor scalar operators involve functional derivatives acting on the wave functional, and so their correlators are insensitive to this term. In addition, one of the two parity conserving structures only appears if we allow higher derivative terms in the bulk action. In a more general $SO(1,4)$ invariant theory, the vanishing of the parity violating term might follow from an underlying reflection invariance of the microscopic dynamics, while there is no reason for the two parity conserving terms to have very different normalizations\footnote{Maldacena and Pimentel point out that in a hypothetical model of inflation in perturbative string theory, the derivative expansion can break down even though quantum gravity corrections are negligible, if the Hubble scale is the string scale. They argue that this could produce parity conserving terms, with comparable magnitude. An actual computation of these terms would
require us to find a worldsheet formulation of the hypothetical weakly coupled string model of inflation.}.
In general, knowledge of the parity operation on the fields, plus the fact that the fields commute with each other, {\it does not} imply that the parity operator commutes with the fields. Rather, it is like a permutation operator, which permutes the elements of a complete ortho-normal basis. The properties of parity imply that it squares to a multiple of the unit operator.
In the conventional approach to slow-roll inflation models, the Hilbert space is interpreted as the thermo-field double of field theory in a single causal patch and the state is taken to be the Bunch-Davies state, which reproduces thermal correlation functions in the theory of the causal patch. This state is invariant under a $Z_2$ which reverses both the orientation of the 3-sphere and the time in the causal patch. When combined with the TCP invariance of bulk quantum field theory, this leads to parity invariant correlation functions. Another way of seeing the same result is to note that the WD density matrix for the Bunch-Davies vacuum is diagonal in the same basis as the fields whose expectation values we are computing. The parity operation is defined as complex conjugation of the WD wave function, and leaves the diagonal matrix elements of the density matrix invariant. These are the only matrix elements relevant to calculating these particular expectation values.
In the HI model of inflation, thermal fluctuations in {\it many} initially decoupled dS causal patches are coupled together by a time dependent Hamiltonian, which, in the limit of a large number of e-folds, approaches a generator of $SO(1,4)$. In this limit, one can argue that the density matrix should become approximately $SO(1,4)$ invariant, but we do not see a general argument that it be parity invariant. Similarly, there is no reason for the density matrix to be diagonal in the same basis as the fields $S(X)$ and $T(X)$ on the three sphere. The parity operation acts simply on the fields, but not necessarily on the density matrix. {\it Consequently, there is no argument that the parity violating part of the tensor 3 point function must vanish, or be small compared to the other two terms.} Thus, the tensor bispectrum may be the only clear discriminant between slow-roll inflation and a general class of $SO(1,4)$ invariant models that includes HI.
The group theory analysis does not determine the scaling dimension $\Delta_{S}$ or the coefficients of the various two- and three-point functions. In the next section, we review the Holographic Inflation (HI) model, which makes predictions for some of these unknown constants.
Note, however, that Maldacena, using the bulk effective field theory description of fluctuations, has derived several relations between the nine parameters on quite general grounds. The fundamental gauge invariant measure of scalar fluctuations is the scalar metric perturbation $\zeta$, where
\begin{eqnarray*}
ds^2 &=& - N^2 dt^2 + h_{ij} (dx^i + N^i)(dx^j + N^j)\\
h_{ij} &=& a^2(t) [(1 + 2\zeta)\delta_{ij} + \gamma_{ij} ],
\end{eqnarray*}
with $\gamma_{ij}$ transverse and traceless. When $\zeta (x)$ is constant, this is just a rescaling of the spatial FRW coordinates, so its effect is completely determined. Thus, in a three-point function including $\zeta$, which depends on three momentum vectors satisfying the triangle condition ${\bf k_1 + k_2 + k_3} = 0$, the squeezed limit, in which the momentum of $\zeta$ is taken to zero, is completely determined by the coordinate transformation of the corresponding two-point function. Since $SO(1,4)$ fixes the momentum dependence of all $3$ point functions up to a multiplicative constant, the constants in the $\langle S T T \rangle$, $\langle SST\rangle$ and $\langle S S S \rangle$ three-point functions are determined by those in the $\langle T T \rangle$ and $\langle S S \rangle$ two-point functions. This leads to the prediction of small non-Gaussianity in the slow-roll limit, and reduces our $9$ constants to $6$.
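For reference, the squeezed-limit statement takes the following form (in a common convention, with primes denoting that the factor $(2\pi)^3 \delta^3 ({\bf k_1 + k_2 + k_3})$ has been stripped off): $$ \lim_{k_1 \rightarrow 0} \langle \zeta_{{\bf k}_1} \zeta_{{\bf k}_2} \zeta_{{\bf k}_3} \rangle^{\prime} = - (n_s - 1)\, P_{\zeta}(k_1)\, P_{\zeta}(k_3) , $$ so that the scalar three-point function in this limit is fixed entirely by the tilt and normalization of the scalar two-point function.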
Slow-roll inflation models determine the magnitudes of fluctuations in terms of the quantum fluctuations of canonically normalized free fields in the Bunch-Davies state. In the single field slow-roll models, this leads to an exact relation between the scalar and tensor tilts and the normalizations of the $\langle S^2\rangle$ and $\langle T^2\rangle$ correlators. The HI model does not lead to this relation. However, the relative orders of magnitude of the scalar and tensor two-point functions are determined by very general geometrical considerations. The quantity $\zeta$ is shown in Appendix A of \cite{lyth} to satisfy
$$\zeta = - 3 \bar{H} \delta t ,$$ where $\delta t$ is the proper time displacement between two infinitesimally separated co-moving hypersurfaces, and $\bar{H}$ the homogeneous Hubble radius. This requires only that the metric be locally FRW, that the cosmological fluid have vanishing vorticity, and that fluctuations away from homogeneity and isotropy are treated to first order. On the other hand,
$$\delta t = \frac{\delta H}{\dot{\bar{H}}}, $$ where $\delta H$ is the local fluctuation in the Hubble parameter. If the metric is close to dS then $\dot{\bar{H}}$ is small.
$\delta H$ is the fluctuation in the inverse radius of space-time scalar curvature, while the tensor fluctuations are fluctuations in the spin two part of the curvature, which is defined intrinsically by the fact that the background is spatially flat.
Thus, we can conclude quite generally that the fluctuations in $\delta H$ and in the spin two piece should be of the same order of magnitude. In section 3 we will recall that in the HI model, general statistical arguments indicate that these fluctuations have the magnitude $\frac{1}{R M_P}$, where $R$ is the radius of the approximate dS space.
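Schematically, combining the two relations above with this estimate gives $$ \zeta = -3 \bar{H}\, \frac{\delta H}{\dot{\bar{H}}} \sim \frac{\bar{H}^2}{|\dot{\bar{H}}|}\, \frac{1}{R M_P} , \qquad \gamma \sim \frac{1}{R M_P} , $$ so the ratio of scalar to tensor fluctuations is of order $\bar{H}^2 / |\dot{\bar{H}}| \gg 1$ whenever the background is close to dS.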
We want to emphasize that apart from the last remark, these are purely classical geometrical considerations. Adopting Jacobson's point of view about Einstein's equations, we can say that any quantum theory of gravity whose local hydrodynamics looks like dS space for a sufficiently long period, will give predictions for scalar and tensor fluctuations that are qualitatively similar to those of slow-roll inflation. We will discuss the observational discrimination between different models below.
\subsection{Tilt}
The scalar two-point function is given at large times in the flat slicing by
$$ \langle \zeta (k) \zeta (-k) \rangle = \frac{A}{k^3} \frac{H^4}{\dot{H}^2} \Bigl(\frac{e^{t/R}}{k}\Bigr)^{- 4\Delta_S} .$$
In slow-roll models, the relevant value of $t$ at which to evaluate this formula depends on $k$ via the equation
$$ k = a (t(k)) H(t(k)).$$ Notice that in these formulae we've reverted to the use of $R$ for the constant inflationary dS radius, while $H$ is the varying Hubble parameter. In a general model, $H$ will be decreasing with time and $\dot{H}$ will be increasing, as inflation ends.
Modes with higher $k$ leave the horizon at a later time and so the normalization $\frac{H^4}{\dot{H}^2}$ will be smaller for these modes.
However, there is another effect coming from the fact that $\Delta_S$ is positive. As inflation ends, $a(t)$ is not increasing as rapidly as the exponential, so $\frac{e^{t/R}}{a(t)}$ increases as $t$ increases (we neglect the variation of $H(t)$ in the horizon crossing formula, because it is not in the exponential). Thus, the logarithmic derivative of the correlation function will have a negative contribution from the prefactor and a positive one proportional to $\Delta_S$. Since both effects depend on the slow variation of $H$, the tilt will be small (remember that $\Delta_S$ is bounded by unitarity), but its sign depends on the value of $\Delta_S$. The slow-roll result of red tilt is obtained for small $\Delta_S$, but near the unitarity bound the tilt could be of either sign, depending on the behavior of $H(t)$. The conventional slow-roll model usually assumes $\Delta_S = 0$.
Similar remarks apply to the tensor fluctuations. However, the overall constant in these is not in general fixed in terms of the normalization of the scalar two-point function, as it is in a conventional slow-roll model. If we ever measure the tensor fluctuations, we will be able to see whether the slow-roll consistency condition, relating the magnitudes and tilts of tensor and scalar two-point functions is satisfied.
Thus far, we've compared slow-roll inflation to a general model satisfying only approximate $SO(1,4)$ invariance of the density matrix, and the existence of an approximately de Sitter classical background geometry $H(t)$. If we now specialize to models constructed in HST, we find a different prediction for the scale dependence of the normalization parameter $A$ (and the corresponding normalization of the tensor two-point function). In slow-roll models we find
$$A_{S,T} = C_{S,T} (\frac{H(t)}{M_P})^2 , $$ with fixed numerical coefficients. The HST model, as we will explain below, predicts instead that $$A_{S,T} = D_{S,T}^{\prime} (\frac{1}{R M_P})^2 , $$ with numerical coefficients which are not yet calculable. This has the consequence that {\it the HST model predicts no tensor tilt}. It also suggests that the size of the tensor fluctuations might be large enough to be seen in the Planck data (but the unknown coefficients make it impossible to say this definitively).
For a given function $H(t)$ we have the following predictions for the scalar tilt
$$ n_s^{\rm slow\ roll} = \frac{H}{H^2 + \dot{H}}\frac{d}{dt} (6 {\rm ln}\ H - 2 {\rm ln}\ (\dot{H}) ) .$$
$$ n_s^{HST} = \frac{H}{H^2 + \dot{H}}\bigl[ \frac{d}{dt} (4 {\rm ln}\ H - 2 {\rm ln}\ (\dot{H}) ) - 4\frac{\Delta_S}{R} \bigr] + 4 \Delta_S .$$
Note that in the slow-roll limit, where $H \approx R^{-1}$, the last term in square brackets cancels the term outside the brackets, and also that $\Delta_S$ is bounded from above by $3/2$. The two formulae for the tilt
are different, but both predict that it is small, and that the sign of $n_s - 1$ depends on the time variation of $H(t)$. Note however that $H(t)$ is not measured by anything else than the primordial fluctuations, so we can adjust $H(t)$ and $\Delta_S$ to make a slow-roll model have, within the observational errors, the same predictions as an HST model. It's possible that further study of the consistency conditions on HST models would enable us to make more precise theoretical statements about $H(t)$ and $\Delta_S$, but at the moment it does not appear that the scalar power spectrum can distinguish between them. The absence of tensor tilt {\it is} a clear distinguishing feature.
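To make the cancellation noted above explicit, expand the HST formula near the dS limit, using $H \approx R^{-1}$ and $|\dot{H}| \ll H^2$: $$ \frac{H}{H^2 + \dot{H}} \Bigl( - \frac{4 \Delta_S}{R} \Bigr) + 4 \Delta_S = - 4 \Delta_S \Bigl( 1 - \frac{\dot{H}}{H^2} + \dots \Bigr) + 4 \Delta_S = 4 \Delta_S\, \frac{\dot{H}}{H^2} + O\bigl(\dot{H}^2/H^4\bigr) , $$ so the net $\Delta_S$ contribution to the tilt is slow-roll suppressed, consistent with the statement that both formulae predict a small tilt.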
Measurement of the tensor bispectrum would give us a much finer discrimination between models. {\it In particular, observation of the parity violating part of this function would rule out all models based on conventional effective field theory in the Bunch-Davies vacuum.} It's unfortunate then that HI, like the slow-roll models, predicts that the tensor bispectrum is down by a factor of $\frac{H}{m_P}$, from the tensor two-point function, which is in turn smaller than the already observed scalar fluctuations by a factor of order $\frac{\dot{H}}{H^2}$. It seems unlikely that we will measure it in the near future.
\subsection{Comparison With the Approaches of Maldacena and McFadden-Skenderis}
Maldacena's derivation of the dS/CFT correspondence implies that the quantum theory defined by his equations carries a unitary representation of $SO(1,4)$, {\it within the semi-classical approximation to the bulk physics}. He argues that the analytic continuation, in the cosmological constant, of the generating functional of correlators of a Euclidean CFT with a large radius AdS dual, is the Wheeler-DeWitt (WD) wave functional of the corresponding bulk Lagrangian on the 3-sphere. This argument has been generalized to all orders in the semiclassical expansion of the bulk Lagrangian by Harlow and Stanford\cite{hs}. {\it To all orders in the semi-classical expansion} the WD wave functional defines a positive metric Hilbert space. The correlation functions defined by Maldacena are expectation values of operators localized at points on the sphere, in a given state in this Hilbert space. They are covariant under $SO(1,4)$ and the Hilbert space carries a unitary representation of $SO(1,4)$. In this semi-classical analysis the state in the Hilbert space is the Bunch-Davies vacuum for dS quantum field theory, defined by analytic continuation from a Euclidean functional integral.
It is important to realize that these correlators are {\it not} correlators in the ``non-unitary'' CFT, which defines the coefficients in the exponent of the WD wave function. The complex weights, which seem so mysterious in the CFT, are familiar as the complex parameters labeling the complementary and principal series of {\it unitary} representations of $SO(1,4)$. Furthermore, although our quantum theory contains operators localized at points on the 3-sphere, it is NOT a Euclidean QFT on the 3-sphere. The correlators in such a QFT would be analytic continuations of expectation values of operators in a theory on $2 + 1$ dimensional dS space, and would satisfy reflection positivity on the 3-sphere. This cannot be the case because none of the generators of $SO(1,4)$ are bounded from below. The usual radial quantization of a theory on the sphere describes a Hilbert space composed of unitary highest weight representations of $SO(2,3)$, whose analytic continuations are highest weight non-unitary representations of $SO(1,4)$, not the unitary unbounded representations that one finds by doing bulk quantum field theory.
Many people have been tempted to use the AdS/CFT correspondence to {\it define} a quantum theory in dS space by just using the analytically continued correlators of some exact CFT to define a non-perturbative WD wave function. We are not sure what this would mean. The formal analytic continuation of the path integral gives rise to a wave functional satisfying the exact WD equation. There is no positive definite scalar product on the space of solutions of this hyperbolic equation, and it is not clear how to give a quantum interpretation of the correlation functions that would be defined by this procedure. We propose instead that the correct non-perturbative generalization of Maldacena's observation is that the inflationary correlation functions, in leading order in deviations from dS invariance, be given by expectation values of localized operators in a quantum theory on the 3-sphere, carrying a unitary representation of $SO(1,4)$. As we've noted above, current observations are completely accounted for by this principle, without the need for a detailed model.
The application of Maldacena's version of dS/CFT to inflation works only to leading order in the slow-roll approximation. M-S instead begin from a correspondence between holographic RG flows and full inflationary cosmologies. It has long been known that the equations for gravitational instantons, of which domain walls are a special case, have the form of FRW cosmologies, with positive spatial curvature, if we interpret the AdS radial direction as time. In particular, when the Lorentzian signature potential has negative curvature in AdS space, corresponding to a Breitenlohner-Freedman allowed tachyonic direction, the cosmology asymptotes to dS space. From the AdS point of view, such domain walls represent RG flows under perturbation of the CFT at the AdS maximum, by a relevant operator. M-S show that by performing a careful analytic continuation of fluctuations around the domain wall solution, they can write inflationary fluctuations in terms of analytically continued correlators in the QFT defined implicitly by the domain wall. Furthermore, in another paper, they show that if the relevant operator is nearly marginal, then the analytic continuation of the formulas for correlators in the perturbed CFT, computed by conformal perturbation
theory, produce fluctuations corresponding to a slow-roll model. That is, they obtain slow-roll correlators when these correlators are plugged into the formulae they derived in the domain wall case where the RG flow was tractable in the leading order AdS/CFT approximation. They suggest a holographic theory of inflation, in which their formulae are applied to the correlators in a general QFT.
To leading order in slow-roll parameters, and in the bulk semi-classical approximation, the results of M-S are equivalent to those of Maldacena, though they are derived by a different method.
Thus, we can give them a quantum mechanical interpretation, as above.
We are unsure what to say about the non-perturbative definitions of inflationary correlators, which they propose, since we do not know how to interpret them as quantum expectation values. In Maldacena's case, the attempt to interpret the analytically continued generating function as the WD wave function, no longer produces quantum mechanics if the semi-classical approximation does not apply. We cannot make a similarly definitive negative statement about the non-perturbative proposals of M-S, but we cannot prove that their procedure defines a quantum mechanics. We suspect that the proposals of M-S and Maldacena are in fact equivalent to all orders in the bulk semi-classical approximation, at least for slow-roll models (Maldacena only treats slow-roll models), and that the same objections to the M-S proposal for using exact, analytically continued CFT correlators, would apply.
We would like to opine that the term dS/CFT and the analytic continuation from AdS space are both somewhat misleading. dS is inappropriate because we are not dealing with a theory of eternal dS space, with an entropy proportional to $R^2$. In a stable dS space the correlators that we compute can never be measured by any local observer. Instead, these formulae apply, approximately, to an inflationary model with a large number of e-folds of inflation. In such a model, the entropy accessible after inflation is of order $e^{3 N_e} R^2$, and these correlators are measurable by post-inflationary observers. CFT is inappropriate in general because a CFT has a unique $SO(1,4)$ invariant density matrix. The analytic continuation from AdS space is meaningful only in the semi-classical expansion, and in that expansion it gives the unique Bunch-Davies state of $SO(1,4)$ invariant bulk field theory. We have argued that apart from the precise slow-roll consistency relation, this does not give predictions for two-point functions that are significantly different than those provided by symmetry and general theorems alone.
A number of other authors \cite{others} have invoked ``conformal invariance'' to constrain inflationary correlators. While we agree with many of the equations proposed by these authors, we believe that we have provided the only correct interpretation of these results within the framework of an underlying quantum mechanical theory. One interesting question that we have not resolved is the extent to which there exist ``Ward identities'' relating correlators of different numbers of tensor fluctuations. In standard slow-roll inflation, the normalization of tensor to scalar fluctuations is completely fixed (not just in order of magnitude in slow roll) by the normalization of the bulk Einstein action. In the relationship with AdS/CFT, this normalization is ``dual'' to the fact that the coefficients of the log of the WD wave function are analytically continued correlators of the stress tensor.
In ordinary QFT, there are two ways to derive stress tensor Ward identities. We can analytically continue relations derived from commutators and time ordered products in the Lorentzian continuation of the theory, {\it or} we can interpret stress tensor correlators as the response to variations of the metric of the Euclidean manifold on which the functional integral is defined. We have taken pains to stress that in the proper interpretation of our $SO(1,4)$ invariant quantum theory, no analytic continuation to Lorentzian signature is allowed. In the next section, when we review the construction of the quantum theory from HST, it will be apparent that the round metric on the 3 sphere plays a special role in the construction, and it is not clear how to define the model on a generic 3-geometry. Consequently, we do not see how to define Ward identities beyond the semi-classical approximation to bulk geometry.
In summary, while the results of previous authors on ``dS/CFT" for the computation of inflationary correlators are correct to all orders in the bulk semi-classical expansion, they {\it do not} lead to a new non-perturbative definition of quantum gravity in an inflationary universe. An appropriate non-perturbative generalization of these results is to assume the fluctuations may be calculated as expectation values of a scalar and tensor operator on $S^3$. The density matrix is approximately $SO(1,4)$ invariant. That is, we assume that the quantum theory is approximately a reducible unitary representation of $SO(1,4)$. We emphasize the word {\it approximate}
in these desiderata, because our HST model is finite dimensional, but approaches a representation of $SO(1,4)$ exponentially, as $N_e \rightarrow\infty$. This statement should be taken to refer to convergence of expectation values of a small number of operators.
The theory should also have a Jacobsonian hydrodynamic description in terms of classical fields in a space-time which is close to dS space for a long period, but allows the horizon to expand to encompass many horizon volumes of dS. The purpose of the present paper was primarily to show that this broad framework was sufficient for understanding the observations. Two of the authors, TB and WF, believe the HST model of \cite{holoinflation} is the only genuine model of quantum gravity that has these properties. One need not share this belief to accept the general framework of symmetries and cosmological perturbation theory.
\section{The Holographic Inflation model}
The basic idea of HST is to formulate quantum gravity as an infinite set of independent quantum systems, with consistency relations for ``mutually accessible information". Each individual system describes the universe as seen from a given time-like world line (not always, or even usually, a geodesic), evolving in proper time along that trajectory. The dynamics along each trajectory is constrained by causality: the evolution operator for any proper time interval factorizes as\footnote{We use notation appropriate for a Big Bang cosmology. $0$ is the time of the Big Bang. An analogous treatment of a time symmetric space-time would use an evolution operator $U(T,-T)$.} $$U(T,0) = U_{in} (T,0) \otimes U_{out} (T,0),$$ where $U_{in}$ acts only on ``the Hilbert space ${\cal H} (T, {\bf x})$ of degrees of freedom in the causal diamond determined by the past and future endpoints of the trajectory". $U_{out} (t,0)$ operates in the tensor complement of ${\cal H} (T, {\bf x})$ in ${\cal H} (T_{max}, {\bf x})$ . ${\bf x}$ is a label for the trajectory. The dimension of ${\cal H} (T, {\bf x})$, in the limit that it is large, determines the area of the holographic screen of the causal diamond via the Bekenstein-Hawking relation (generalized beyond black holes by Fischler, Susskind and Bousso), $$A(T, {\bf x}) = 4 L_P^{d-2}\ {\rm ln\ dim}\ [ {\cal H} (T, {\bf x})] .$$ We will take $d = 4$ in this paper. The causal relations between different diamonds are encoded in commutation properties of operators, as in quantum field theory (QFT).
$A(T,{\bf x})$ must not decrease as $T$ increases. For small $T$ it will always increase. It may reach infinity at finite $T$, as in AdS space; remain finite as $T\rightarrow\infty$, as in dS space; or asymptote to infinity with $T$. For trajectories inside black holes, or Big Crunch universes, $T_{max}$ will be finite. It's clear that there must be jumps in $T$, where the dimension of ${\cal H} (T, {\bf x})$, changes, and it's not likely that we need to discuss continuous interpolations between these discrete times. In the models of this paper, the discrete jumps will be of order the Planck time.
For any time, and any pair of trajectories, we introduce a Hilbert space ${\cal O} (T, {\bf x,y})$ whose dimension encodes the information mutually accessible to detectors traveling along the two different trajectories. ${\cal O} (T, {\bf x,y})$ is a tensor factor in both ${\cal H} (T, {\bf x})$ and ${\cal H} (T, {\bf y})$.
We define two trajectories to be nearest neighbors if
\begin{equation*}
{\rm dim}\ {\cal O} (T, {\bf x,y}) = {\rm dim}\ {\cal H} (T - 1, {\bf x}) = {\rm dim}\ {\cal H} (T - 1, {\bf y}).
\end{equation*}
Translated into geometrical terms, this means that the space-like distance between nearest neighbor trajectories, at any time, is the Planck scale. The second equality defines what we call {\it equal area time slicing} for our cosmology. We want the nearest neighbor relation to define a topology on the space of trajectories, which we think of as the topology of a Cauchy surface in space-time. It is probable that it is enough to think of this space as the space of zero simplices of a $d - 1 = 3$ dimensional simplicial complex, but for ease of exposition we use a cubic lattice. We require that ${\rm dim}\ {\cal O} (T, {\bf x,y})$ be a non-increasing function of the number of steps $d({\bf x,y})$ in the minimal lattice walk between the two-points.
The choice of ${\rm dim}\ {\cal O} (T, {\bf x,y})$ for points which are not nearest neighbors is determined by an infinite set of dynamical consistency requirements. Given time evolution operators and initial states in each trajectory Hilbert space, we can determine two time dependent density matrices $\rho (T, {\bf x})$ and $\rho (T, {\bf y})$ in ${\cal O} (T, {\bf x,y})$. We require that
$$\rho (T, {\bf x}) = V(T,{\bf x,y}) \rho (T, {\bf y}) V^{\dagger} (T,{\bf x,y}),$$ with $V(T,{\bf x,y})$ unitary. This constrains the overlap Hilbert spaces, as well as the time evolution operators and initial states.
As TB and WF have emphasized many times, the structure of space-time, both causal and conformal, is completely determined by quantum mechanics in HST, but the space-time metric is not a fluctuating quantum variable. The true variables are quantized versions of the orientation of pixels on the holographic screen. They are sections of the spinor bundle over the screen, but in order to satisfy the Covariant Entropy Bound for a finite area screen, we restrict attention to a finite dimensional subspace of the spinor bundle, defined by an eigenvalue cutoff on the Dirac operator\cite{tbjk}. For the geometries considered in this paper, with only four large space-time dimensions, the screen is a two-sphere with radius $\sim N$ in Planck units times an internal manifold $K$ of fixed size. The variables are a collection of $N\times (N+1)$ complex matrices $\psi_i^A (P)$, one for each independent section of the cutoff spinor bundle on $K$. Their anti-commutation relations are
$$[\psi_i^A (P), {\psi^{\dagger}}^j_B (Q)]_+ = \delta_i^j \delta^A_B Z_{PQ},$$ with appropriate commutation relations with the $Z_{PQ}$ to make this into a super-algebra with a finite dimensional unitary representation whose representation space is generated by the action of the fermionic generators.
We will not have to use much of this formalism in the present paper, because the era of cosmic history that we are discussing is almost featureless. The covariant entropy bound is almost saturated, with the size of deviation from its saturation related to the size of the fluctuations discussed in this paper. We will explain this somewhat oracular remark below.
\subsection{Review of the HI Model}
We now review the model of inflation and fluctuations described in \cite{holoinflation}.
We begin with a holographic space time model of a flat FRW universe with $p=\rho$\cite{holocosmmath}, which we believe is the generic description of the early stages of any Big Bang universe. The Big Bang hypersurface is a topological cubic lattice of observer trajectories. The Hilbert space of {\it any} observer's causal diamond $T$ units of Planck time after the Big Bang has dimension $({\rm dim\ }{\cal P})^{T(T+1)}$, where ${\cal P}$ is the fundamental representation of the compactification superalgebra. At each time the Hamiltonian is chosen from a random distribution of Hermitian matrices in this Hilbert space, with the following provisos:
\begin{itemize}
\item Every observer has the {\it same} Hamiltonian at each instant of time.
\item For large $T$, the Hamiltonian approaches\footnote{The word {\it approaches} means that the CFT can be perturbed by a random irrelevant operator.} that of a non-integrable $1 + 1$ dimensional CFT with central charge $T^2$, living on an interval of length $o(T)$, with a cutoff of order $1/T$, in Planck units. The bulk volume scales like $T^3$, so the bulk energy density scales like $1/T^2$, and the bulk entropy density like $1/T$, which is the Friedmann equation for the $p = \rho$ FRW space-time. The theory has no scale but the Planck scale, so the spatial curvature vanishes, and the model saturates the covariant entropy bound\cite{FSB} at all times.
\end{itemize}
We then modify this model in the following way. Choose two integers, $n$ and $N$ such that $1 \ll n \ll N$, which will determine the Hubble scale during inflation and the value of the Hubble scale corresponding to the observed cosmological constant, respectively. Choose one point on the lattice to represent the origin of ``our" coordinate system. We will treat the tilted hypercube consisting of all points a distance $\leq N/n$ lattice steps from the origin, differently than the points outside. For these points, we stop the growth of the Hilbert space at time $n$, for a while, and allow the Hamiltonian to remain constant. We also use $1 + 1$ dimensional conformal transformations to replace it with the same model on an interval of length $n^3$ with a cutoff of order $1/ n^3$. In \cite{holoinflation} we argued that this was the Hamiltonian of a single horizon volume of dS space, with Hubble radius $n$. The rescaling of the Hamiltonian should be viewed as a change of the trajectory under consideration from that of a geodesic observer in the original FRW, to that of a static observer in dS space. The Jacobsonian effective geometry corresponding to this model up to time $n$ is a $p=\rho$ FRW, which evolves to a dS space with Hubble radius $n$. The Jacobsonian Lagrangian contains the gravitational field and a scalar field, and the dynamics of the underlying model would imply that they were both homogeneous, if we had stopped the growth of the Hilbert space everywhere in the lattice of trajectories.
Outside the tilted hypercube however, we continue to use the $p=\rho$ Hamiltonian. In
\cite{holoinflation}, TB and WF argued that if $n = N$ there was a consistent set of overlap rules, which had the property that points outside the hypercube were forever decoupled from those inside, in the sense that the overlaps between interior and exterior points are always empty. The exterior Jacobsonian effective geometry corresponding to this model is a spherically symmetric black hole of radius $N$ in the $p =\rho$ geometry. The interior geometry is not, however, consistent with this unless $n = N$. The Israel junction condition, if we insisted on a dS geometry in the interior, would require that the boundary of the hypercube be a trapped surface with Hubble radius $N$.
We then proposed to modify the time evolution inside the hypercube to resolve this problem. Our modification is only to the Hamiltonian of a single observer at the center of the hypercube. We do not have a fully consistent HST model, with compatible Hamiltonians for all observers, corresponding to this model. However, since our single observer model behaves approximately like a local field theory at times $\gg n$, and QFT satisfies the HST overlap rules approximately, we expect that a full model can be constructed. We call this single observer model, Holographic Inflation (HI).
According to the rules of HST, the observer at the center of the HI model will be decoupled from the rest of the DOF of the universe forever. Since there exist solutions of Einstein's equations with multiple black holes embedded in a $p =\rho$ universe, we believe that the HI model can be embedded inside a larger model, in which the central observer's finite universe eventually collides with other universes, with different values of the cosmological constant. In \cite{holoinflation} we argued that this is one of many possible ways to solve the ``Boltzmann brain non-problem". Since the collision time can be any time between a few times the current age of the universe, to the unimaginably long recurrence time for the first Boltzmann brain, this embedding is completely irrelevant to any observation we could conceivably make.
\begin{figure}[t!]
\centering
\includegraphics[width=.8\textwidth]{HST_Inflation_1}
\caption{This figure illustrates how the time dependent Hamiltonian of the HI model encompasses more DOF on the fuzzy 3-sphere (explained below), as time goes on. Each band in the figure represents a fuzzy 2-sphere of radius $R(t_k ) = R \sin (\theta_k)$ at time $t_k$. The horizon radius $R(t)$ is a smooth function that approximates this discrete growth of the horizon for a large number of e-folds. It determines an FRW cosmology through
$R(t) = R a(t)\int_{I}^t \frac{ds}{a(s)}$.}
\label{couple}
\end{figure}
The Hilbert space of the Holographic Inflation model has entropy of order $N^2$. Initially, the Hilbert space is broken up into $(N/n)^2 $ tensor factors, each of which
behaves like a single horizon volume of dS space. That is to say, the state of each of these systems is changing rapidly in time in a manner that leads to scrambling of information on a time scale $n\ {\rm ln}\ n$\cite{susskindsekino}. Now we gradually begin to couple these systems together, starting from those that are close to the center of the hypercube as shown in Figure \ref{couple}. The idea behind this is that time evolution up to time $n$ gave us multiple copies of the single $dS_n$ Hilbert space, corresponding to different observers. We now map all of those copies into the Hilbert space of the central observer. We want to get an emergent space-time which looks like multiple horizon volumes of $dS_n$.
Initially, the Hamiltonians of different observers were synchronized and the universe was exactly homogeneous and isotropic. However, when we couple together the copies of these systems in the Hamiltonian of the central observer, the coupling does not occur at synchronized times. Thus, the initial state as each successive horizon volume is coupled in can be thought of as a tensor product, but with a different, randomly chosen, state of the $dS_n$ system in each factor. {\it This is the origin of the local fluctuations, which eventually show up in the microwave sky of the central observer. It is also the origin of LOCALITY itself. } A conformal diagram of this unsynchronized coupling of $dS_n$ horizon volumes can be seen in Figure \ref{conformal}.
Indeed, in \cite{holoinflation} we pointed out that if we take $N = n$ then we can find a completely consistent model of a universe which evolves smoothly from the $p=\rho$ Big Bang to $dS_N$, without ever producing a local fluctuation. It is exactly homogeneous and isotropic at all times, despite the fact that the initial state is random and the Hamiltonian is a fast scrambler. Although it corresponds to a coarse grained effective geometry, the model contains no local excitations around that background. Instead, it saturates the Covariant Entropy Bound at all times and is never well approximated by QUEFT, despite the fact that it is, for much of its history, a low curvature space-time. By taking $1 \ll n \ll N$, we find a model that interpolates between the $p = \rho$ Big Bang and asymptotic $dS_N$, via an era of small localized fluctuations, which, for a long time, remain decoupled from the majority of the horizon DOF in $dS_N$.
\begin{figure}[t!]
\centering
\includegraphics[width=.8\textwidth]{Conformal_Inflation_2}
\caption{A conformal diagram showing initially, separately evolving horizon volumes being coupled together asynchronously. The observer starts in the central horizon volume and the colored regions later in that horizon's history indicate when a nearby horizon volume (discretely separated at the colored points at $t=0$) is coupled to the Hilbert space of the observer. The red regions indicate sections of space-time that are decoupled from the central observer and allowed to evolve freely. Since this evolution is not synchronized with the time dependence of the Hamiltonian of the central horizon volume, the asynchronous coupling of independent horizon volumes gives rise to local fluctuations (indicated by different color opacities in the figure). }
\label{conformal}
\end{figure}
Thus, the role of inflation in the Holographic Inflation model is precisely to generate localized fluctuations, by starting the system off in a state where commuting copies of the same DOF are in different quantum states, from the point of view of the central observer. Below, we will map these commuting copies to different points on a fuzzy 3-sphere, so that the fluctuations in their quantum states become local inhomogeneities of the 3-sphere. These are, in our model, the origin of the CMB fluctuations, and they provide the raison d'{\^ etre} for localized excitations of the ultimate $dS_N$ space. One might say that the most probable path between the $p=\rho$ geometry and $dS_N$ is the homogeneous model described in the previous paragraph. By forcing the universe to go through a state where tensor factors of its Hilbert space are decoupled, the inflation model chooses a less probable, though more interesting, path\footnote{We are using the word probable in a somewhat peculiar way in this sentence. That is, the exactly homogeneous, entropy maximizing, model is a different choice of time dependent evolution operator than the HI model, which contains a period of inflation, and produces localized fluctuations. The latter model exploits the basic postulate of HST that the initial state in any causal diamond whose past tip is on the Big Bang hypersurface, is unentangled with DOF outside that diamond, to construct an evolution operator that exhibits approximate locality for a subset of DOF. As a consequence, the state of this model does not have maximal entropy for the period between the beginning of inflation and the time when all localized excitations decay to the dS vacuum. It's not clear whether we should call the second model "less probable" than the first. They are not part of the same theory. What we mean is that, at intermediate times, a random choice of state would coincide with the actual state determined by the time dependence of the first model, while the states of the second model would look non-random.}.
In the model described in \cite{holoinflation}, we organized all of the DOF which have interacted up to the end of inflation in terms of variables localized on a fuzzy hemi-3-sphere of radius $E$. In order to match with the bulk picture of inflationary geometry, this corresponds to a sphere with $e^{3N_e} n^2$ DOF. The boundary of this sphere is the holographic screen of the central observer's causal diamond at the end of inflation.
Indeed, in the bulk picture of inflation, all DOF encountered by the central observer in the future have been processed during the inflationary period.
Thus $$E^2 = e^{3N_e} n^2 \leq N^2 = 10^{123},$$
and $$ N_e \leq 94.4 - \frac{2}{3} {\rm ln}\ n = 85.4 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U} ,$$ where the ratio in the last term is that of the scale of inflation to the unification scale ($2\times 10^{16}$ GeV).
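Explicitly, the first bound follows by taking logarithms of $e^{3N_e} n^2 \leq 10^{123}$:
$$3 N_e + 2\, {\rm ln}\ n \leq 123\, {\rm ln}\ 10 \quad\Longrightarrow\quad N_e \leq 41\, {\rm ln}\ 10 - \frac{2}{3} {\rm ln}\ n \approx 94.4 - \frac{2}{3} {\rm ln}\ n .$$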
On the other hand, $E$ must be large enough to encompass all of the degrees of freedom that manifest themselves as fluctuations in the CMB.
The entropy of CMB photons in the current universe is
$$ (\frac{T}{M_P} N)^3 \sim 10^{89} .$$ However, the entropy in the {\it fluctuations} is only a fraction of this
$$S_{fluct} = 3\frac{\delta T}{T} \times 10^{89} \sim e^{169} \leq e^{3N_e} \times n^2 .
$$Thus $$ N_e \geq 49 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U}.$$
We can estimate the size of the local fluctuations by the usual rules of statistical mechanics. The local subsystems have entropy of order $n^2$, so that a typical fluctuation of a local quantity is $o(1/n)$. This indicates an inflation scale of order the GUT scale, if we use the CMB data to normalize the two-point function (indeed, $1/n \sim \delta T/T \sim 10^{-5}$ gives $H \sim 10^{-5} M_P$, i.e.\ $M_I \sim \sqrt{H M_P} \sim 4\times 10^{16}$ GeV). The fluctuations are also close to Gaussian, again because they are extensive on the inflationary holoscreen. $k$-point functions scale like $n^{-k}$. Note that, apart from factors which arise from the translation of these quantum amplitudes into the fluctuations used in classical cosmological perturbation theory, this is the scaling of $k$-point functions expected in a conventional slow-roll model. However, the size of these fluctuations is fixed by $n^{-1}$, rather than by the effective $H(t)$ that one would get if one computed QUEFT fluctuations in a slowly evolving cosmology. We pointed out in the previous section that this leads to a prediction of zero tensor tilt in the HI model, but that our ignorance of the correct form of $H(t)$ makes it difficult to differentiate the predictions for scalar tilt of the HI and bulk QUEFT models of fluctuations.
The main burden of the present paper is to explore the consequences of the dS invariance of these fluctuations. Note that there is no meaning to dS invariance in the theory of a single stable dS space. The physics of that system is confined to a single horizon volume, and only an $R \times SU(2)$ subgroup of $SO(1,4)$ leaves the horizon volume invariant. The coset of this subgroup maps the observer's horizon volume into others, and does not act on physical observables. However, in our Holographic Inflation model, the central observer sees $(E/n)^2$ horizon volumes, and if this number is large, we must build a model which closely approximates the properties of the classical $dS_n$ space, which is seen by a single observer. At the end of inflation, this observer's causal diamond contains many horizon volumes of $dS_n$ and so a model {\it approximately} invariant under $SO(1,4)$ is appropriate. We will argue that the corrections to this symmetric model, {\it for the calculation} of correlations between fluctuations at small numbers of points, are suppressed by powers of $e^{ - N_e}$, and it is reasonable to neglect them. The continuous $SO(1,4)$ invariant model overestimates the total number of quantum states in the universe by an infinite factor, but most of those states are not probed by the limited observations we make on the CMB.
At this point it is worth noting that the model presented in \cite{holoinflation} and the present paper does not really describe the CMB. Within the HST formalism, we have not yet understood how to describe conventional radiation and matter dominated universes, where the source of the gravitational field is particles, rather than another effective classical field (the inflaton). Our model ends with a time independent Hamiltonian which is (approximately) a generator, $\mathcal{L}_{04}$, of $SO(1,4)$. It is {\it not} the Hamiltonian we have conjectured to describe particle physics in $dS_N$~\cite{tbds,holounruh}.
Thus our model is not a realistic cosmology. Its hydrodynamic description is that of an FRW geometry coupled to a scalar field, which has small inhomogeneous fluctuations on a 3-hemisphere of radius $e^{N_e} n$. In the Jacobsonian effective field theory description, these are fluctuations in the classical value of the inflaton, which are chosen from an approximately Gaussian, approximately dS invariant distribution described in the previous section. The normalization of the two-point function is determined by $n$, and we've observed that it coincides with the observed strength of CMB fluctuations if $n$ is of order the unification Hubble radius, but the model has no CMB. The Lagrangian for the inflaton must interpolate between the $p = \rho$ geometry, and $dS_N$ via a period of $N_e = \frac{2}{3} {\rm ln}\ (E/n)$ e-folds of inflation, plus sub-luminal expansion for the period when the horizon radius stretches from $E$ to $N$. It is therefore a conventional slow-roll inflation Lagrangian, with parameters chosen to fit the underlying quantum model.
To accommodate hypothetical HI models with blue tilt, we can either tune the inflaton potential so that the slow-roll parameter $\eta > 6\epsilon$, or use a hybrid model as the Jacobsonian THEFT. We emphasize that from the point of view of HST, we are merely searching for the classical model that fits the hydrodynamics of an underlying quantum system. In HI, that quantum system is not even approximately a QUEFT, at least at the beginning of inflation, when the fluctuations are actually generated. The fluctuations calculated from the density matrix of the underlying model are inserted into the classical space-time equations of the THEFT, as fluctuations of the metric, in the co-moving gauge for the inflaton.
In a more realistic model, we would have to make a transition from $\mathcal{L}_{04}$ to the Hamiltonian of a geodesic observer in $dS_N$. The latter Hamiltonian describes particles, and we would have to show how the fluctuations in the inflaton get transmuted into distributions of photons and matter. This is the physics encompassed in the conventional process of {\it reheating}, and the subsequent propagation of photons through an inhomogeneous space-time, including phenomena like the
Sachs-Wolfe effect. We know perfectly well how to build a QUEFT of this era, by coupling the classical inhomogeneous inflaton field to quantum fields describing particles. It's basically the challenge of describing the particle physics in terms of HST that is beyond our reach at present. There are however, a few remarks that we can make. The first is that the conventional matter and radiation dominated eras lead to an increase in the radius of the horizon by an amount $\alpha N$, with $\alpha$ a parameter strictly less than, but of order, $1$. The fact that $\alpha$ is less than one follows from the general properties of asymptotically dS cosmologies which are not exactly dS, while the fact that it's of order one reflects the very recent crossover between matter and radiation domination. Thus, we should take $E \ll N$.
\subsection{The Fuzzy 3-sphere}
In order to construct a model, which is effectively local in a 3 dimensional space, we label the $E^2 = e^{3 N_e} n^2 $ variables in the following manner. The geometry seen by an observer at the end of inflation is a 3-sphere of radius $R_I = e^{N_e} n$.
We have of order $E^2$ degrees of freedom, which can be thought of holographically as living on the holographic screen of a causal diamond, with radius $ E \sim e^{\frac{3 N_e}{2}} n\gg R_I$, when the number of e-foldings is large. We will distribute these ``uniformly" over a fuzzy 3-sphere of radius $R_I$.
\begin{figure}[t!]
\centering
\includegraphics[width= .9\textwidth]{geodesic}
\caption{Tilings of fuzzy two spheres of different radii. The maximally localized spinor wave functions at the centers of the tiles are a basis for the cutoff spinor bundle, with angular momentum cutoff determined by the radius of the sphere in Planck units.}
\label{tile}
\end{figure}
A 3-hemisphere can be thought of as a fiber bundle with two-sphere fibers, over the interval $[0, \frac{\pi}{2}]$ . The two-sphere at angle $\theta$ has a radius $R_I \sin\theta $. The HST version of this geometry is a collection of variables $\psi_i^A (\theta_k ) ,$ where the matrix
at $\theta_k$ is $N_k \times (N_k + 1)$ with $$N_k = R_I \sin\theta_k .$$
We take $\sin\theta_1 = n/R_I$, while $$\sum_k \sin\theta_k (\sin\theta_k + 1/R_I ) = e^{N_e} .$$ The $\theta_k$ are equally spaced in angle along the interval. Since each $N_k \geq n \gg 1$, we can construct, for each two sphere, a basis of spinor spherical harmonics localized on the faces of a truncated-icosahedral, geodesic tiling of the sphere, obtaining an approximately local description of our hemi-3-sphere. This tiling scheme is shown in Figure \ref{tile}. The centers of the faces, combined with the discretized interval parametrized by $\theta_k$ define a lattice on the 3 sphere. Our spinor variables $\psi_i^A$ have a natural action of $SO(4) = SU(2) \times SU(2)$ acting separately on the rows and columns of the matrix. We combine this with the discrete $SO(4)$ rotations which take points of the lattice into each other. As $N_e \rightarrow\infty$ we can construct operators which turn our unitary representation of $SO(3)$ into a unitary representation of $SO(4)$. In addition, we argued in \cite{holoinflation} that, in the limit, we could construct operators $\mathcal{L}_{0M}$ which extend this to a unitary representation of $SO(1,4)$. We thus conjectured that the Hilbert space of the localized variables $\psi_i^A (\theta_k )$ admits an action of $SO(1,4)$, in the limit $N_e \rightarrow\infty$ ,
and that it can be described in terms of field operators $O_A (X)$ transforming covariantly under the action of $SO(1,4)$ on the 3-sphere, as we assumed in the previous section. {\it We have argued that they DO NOT obey the axioms of conventional Euclidean CFT. In particular, the Hilbert space admits an infinite dimensional unitary representation of $SO(1,4)$ which cannot be highest weight (there are no highest weight unitary representations). This also implies that there are generally many $SO(1,4)$ invariant states in the representation. Our results for the correlation functions of inflationary fluctuations depended on the assumption that the state of the system after inflation is invariant, but not on the particular choice of invariant state.} Note that the operators $O(X)$ representing the local fluctuations commute at different points, because they probe properties of the individual, originally non-interacting, horizon volumes. We work in the Schrodinger picture, in which the density matrix, rather than the operators, evolve.
The preceding paragraph described mathematics. We incorporate it into the physics of our inflationary universe in the following way. We have followed the universe using the rules of HST from its inception until a time when the particle horizon had a size $n$. At that time, a very large number of observers have Hilbert spaces of entropy $n^2$ and are described by identical states and Hamiltonians. The individual Hamiltonian is that of a non-integrable, cutoff $1 + 1$ dimensional field theory whose evolution, time averaged over several e-foldings, produces a maximally uncertain density matrix. This description extends out from a central point on the lattice of observers for a distance $N/n$, up to the surface that will eventually be our cosmological horizon. Points on the lattice of observers that are more than $n$ steps apart, have no overlap conditions. We make a coarser sublattice, consisting of centers of tilted cubes on the original lattice, whose Hilbert spaces have no overlaps. We now want to describe the Hamiltonian of the central observer, from the time that individual points on the coarse sublattice thermalize, until the end of inflation. We will {\it not} provide a complete HST description of this era, because it is currently beyond our powers.
To construct this Hamiltonian, we begin with the Hilbert space of entropy $E^2$ described above, and identify points in the coarse lattice of HST observers, with points on the fuzzy 3-sphere described above. The central observer is identified with the point $\theta_1$ on the interval. There is no sense in further localizing it on the fuzzy two sphere at that point, because the state in its Hilbert space is varying randomly over a Hilbert space of entropy $n^2$. There are no localized observables at length scales smaller than $n$. We think of this geometrically as saying that the area of the hexagon centered at this observer's position has area $n^2$. Each point on the 3-spherical lattice has, at the beginning of this era, an identical wave function in a Hilbert space of entropy $n^2$. The time dependent Hamiltonian of the central observer now begins to couple together points on the spherical lattice, in a manner consistent with causality. That is, as the proper time of the central observer increases, we assume that its causal diamond increases in area, and the Hamiltonian couples together points that are closest to it on the 3-sphere, in accordance with the covariant entropy bound. In principle, the rate, in proper time, at which the area of the holographic screen grows, tells us about the FRW background geometry. The Jacobsonian effective field theory of this is a model of gravity coupled to a scalar, with a potential that leads to $N_e$ e-folds of inflation, and a rapid transition to $dS_N$. We are dealing with only a single observer, and do not have overlap constraints to guide us, so we could incorporate any geometry consistent with the entropy bounds.
The rate at which different points on the sphere are coupled together is not connected to the rate of change of the state according to the local Hamiltonian, which is randomizing individual Hilbert spaces of entropy $n^2$. {\it Therefore there will be local fluctuations of the initial quantum state at different points on the 3-sphere}. This is the physical origin of the fluctuations whose form we described in the previous section. Above, we have argued that when $n \gg 1$ they are approximately Gaussian and estimated their magnitude. They should clearly be thought of as statistical fluctuations in the quantum state, rather than quantum fluctuations in a pure state. Of course, since we detect these fluctuations in properties of a macroscopic system, there is no way that one could have ever detected the quantum nature of fluctuations in the conventional inflationary picture, but the point of principle is significant. In a more realistic model, these fluctuations would be the origin of what we observe in the CMB and the clumpy distribution of matter around us.
We construct our model so that, by the time the size of the holographic screen has reached $E$, the Hamiltonian of the DOF in that diamond is the generator $\mathcal{L}_{04}$, which approaches an element of the $SO(1,4)$ Lie Algebra in the (fictitious) limit $N_e \rightarrow\infty$. The system is characterized by a density matrix, because the state of each point on the fuzzy 3-sphere is random, and the times at which different points become coupled together are not locked in unison\footnote{For purists, we should point out that we're not postulating non-unitary evolution, merely noting that the initial conditions of our problem introduce some randomness into the pure state of the universe. We're simply making predictions by averaging over this ensemble of possible random states, since no observation can ever determine what the correct initial state was.}. Note however that the initial time averaged density matrices at each point {\it are} identical, by construction, and are exactly $SO(3)$ invariant. It is extremely plausible that the density matrix is approximately $SO(1,4)$ invariant when $N_e$ is large. This is our principal assumption. The ``lattice spacing" on our 3-sphere is of order $e^{- N_e}$ so corrections to $SO(1,4)$ invariance are, plausibly, exponentially small. Note that we are free to construct a model for which this is true. The only constraint on model building in HST (apart from those we are clearly satisfying) comes from the overlap rules. We are not, of course, implementing the overlap rules in this paper, but we see no reason why they should be incompatible with approximate $SO(1,4)$ invariance of a single observer's density matrix.
It's important to realize that $SO(1,4)$ invariance of the density matrix does not imply exact dS invariance of the universe, as described by its THEFT. The density matrix is a probability distribution for fluctuations and the THEFT is the result of classical evolution starting from typical initial conditions. This is, of course, exactly as in conventional inflation models. Also, the fact that, in the underlying HI model, all degrees of freedom are in interaction, means that inflation is ending, so even the homogeneous background should be moving away from its dS form.
\section{Conclusions and Comparison With Observations}
We have argued that the form of primordial fluctuations, which has been derived to leading order in slow-roll parameters for a slow-roll inflation model with the assumption of the Bunch-Davies vacuum (see Appendix A for an argument that this assumption is a fine tuning of massive proportions), in fact follows from a much less restrictive set of assumptions. These are $SO(1,4)$ invariance and approximate Gaussianity, plus a particular choice for the $SO(1,4)$ representations for the operator representing scalar fluctuations. This choice, plus $8$ normalizations for the different two- and three-point functions, determine the fluctuations uniquely. In slow-roll models, these normalizations depend on parameters in the slow-roll potential, while Gaussianity is a prediction of the model and the leading non-Gaussian amplitude is suppressed by a power of the slow-roll parameters. We have noted that the dominance of scalar over tensor two-point fluctuations is a general consequence of cosmological perturbation theory for near de Sitter backgrounds, and the assumption that the scalar and tensor components of the curvature have similar intrinsic fluctuations (as they do in both slow-roll and HI models). Maldacena's squeezed limit theorem, combined with $SO(1,4)$ invariance, determines all three-point functions involving scalars in terms of the scalar and tensor two-point functions.
We've also reviewed the HST model of inflation presented in \cite{holoinflation}. It predicts approximately Gaussian and $SO(1,4)$ invariant fluctuations, robustly and without assumptions about the initial state. Like all HST cosmologies it is completely finite and quantum mechanical. $SO(1,4)$ invariance follows from the assumption that evolution with the $\mathcal{L}_{04}$ generator of an initially $SO(3)$ invariant density matrix will lead to an $SO(1,4)$ invariant density matrix after a large number of e-foldings.
The number of e-foldings is not a completely independent parameter, but is bounded by the ratio between the inflationary and final values of the Hubble radius. If we require that we have enough entropy in the system at the end of inflation to account for the CMB fluctuations, then
$$ 49 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U} \leq N_e \leq 85.4 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U} .$$ In order to leave room for the subluminal expansion of conventional cosmology, we should not be near the lower bound.
In the slow-roll models, the small deviation from the ``scale invariant" predictions $$n_S = n_T + 1 \sim 1$$ is explained by the slow-roll condition. A similar argument for a general $SO(1,4)$ symmetric model (and in particular the HI model) follows from the fact that the parameter $\Delta_{S}$ labeling the scalar fluctuations is bounded, $\leq 3/2$, by unitarity of the representation of $SO(1,4)$. The construction of the HST model guarantees that the effective bulk geometry, constructed from local thermodynamics following the prescription of Jacobson, goes through a period of inflation, which ends. We do not yet have an HST description of reheating, and the era of cosmology dominated by particle physics. The dominance of the scalar over tensor fluctuations, the smallness of non-Gaussianity involving the scalar, and the fact that the scalar and tensor tilt are both small, all follow from the fact that $\frac{\dot{\bar{H}}}{(\bar{H})^2}$ is small and that $\Delta_{S}$ is bounded. At the level of two-point functions, the only relation that distinguishes conventional slow-roll inflation (including hybrid inflation models) from generic dS invariant quantum theory is the precise relation between the normalizations and tilts of the scalar and tensor fluctuations and the fact that the HI model predicts vanishing tensor tilt. Depending on the precise form of $H(t)$, there may be a critical value of $\Delta_S$ for which the scalar tilt shifts from red to blue. It will be interesting to see whether further investigation of HST models can predict that the scalar tilt is red. At our present level of understanding, the scalar tilt is a competition between a blue tilt induced by choosing a ``massive" representation for $\Delta_S$ and a red tilt induced by the conventional normalization of fluctuations. We do not have an {\it a priori} argument for which of these dominates, or even whether there are different models where either can dominate.
At the level of non-Gaussian fluctuations, things are a bit more interesting. Slow-roll models with Lagrangians containing only the minimal number of derivative terms give rise to only one of the three possible $SO(1,4)$ covariant forms for the triple tensor correlation function. Even if we include higher derivatives, we cannot get the parity violating form. Thus, observation of the purely tensor bispectrum could tell us whether we were seeing conventional slow roll, or merely a generic model with approximate $SO(1,4)$ symmetry. On the other hand, the parity violating amplitude might be forbidden in general by a discrete symmetry of the HI model. At the moment, we do not see an argument that would require such a symmetry.
We also want to emphasize that the inflation literature is replete with models which give the standard predictions for two-point functions, but predict three-point functions which are far from $SO(1,4)$ invariant. In these models, Maldacena's squeezed limit theorem does not imply that the scalar three-point function is small everywhere in momentum space. According to our current understanding, observation of a large scalar three-point function could rule out all models based on $SO(1,4)$ symmetry, and might point to some non-vanilla, QUEFT-based inflation model.
Our considerations imply that so long as observations remain consistent with some slow-roll inflation model, they will not distinguish a particular model among the rather large class we have discussed without also observing tensor fluctuations. The only observations that are likely to validate the idea of a QUEFT with Bunch-Davies fluctuations of quantum fields are a precise validation of the single field slow-roll relation between two-point functions, or a measurement of the tensor three-point function. On the other hand, observations that validate non-standard inflationary models, like DBI inflation, or show evidence for iso-curvature fluctuations, could rule out the general framework discussed in this paper. While it is possible that HST models can be generalized to include iso-curvature fluctuations, this is not in the spirit of those models. The key principle of HST cosmologies is that the very early universe is in a maximally mixed state, which is constantly changing as new DOF enter the horizon. The model of \cite{holoinflation}, was designed to be the minimal deviation from such a maximal entropy cosmology, which allowed for a period in which localized excitations decouple from the bulk of the degrees of freedom on the horizon. A model with more structure during the inflationary era would introduce questions like ``Why was this necessary?", which could at best be justified (though it seems unlikely) by anthropic considerations.
To conclude, we want to reiterate a few basic points. Conventional inflation appears fine tuned because of what is usually called the Trans-Planckian problem (Appendix A). A generic state of the DOF that QUEFT buries in the extreme UV modes of the inflationary patch has no reason to evolve to the Bunch-Davies state. Moreover, if we accept the idea that local patches of dS space become completely thermalized within a few e-foldings, and that the generic state has no localized excitations, then it does not really make sense to treat its dynamics by QUEFT. In \cite{holoinflation} we proposed a model that preserves causality, unitarity, the covariant entropy bound, and which, with no fine tuning of initial conditions, leads to a coarse grained space-time description as a flat FRW model with a large number of e-folds of inflation. The model produces a nearly Gaussian spectrum of almost-dS invariant scalar and tensor metric fluctuations. The model can be matched to a slow-roll QUEFT model (with a different space-time metric) at the level of scalar fluctuations, but predicts no tensor tilt and, in the absence of an explicitly imposed symmetry, would have all three invariant forms of the tensor three-point function with roughly equal weights.
\section{Factual representation of example metabolic network}
\label{sec:appendix}
The factual representation of the metabolic network in Fig.~\ref{gra:toy} is given in Listing~\ref{lst:instance}.
\lstinputlisting[numbers=left,numberblanklines=false,basicstyle=\ttfamily\footnotesize,caption={Example instance of metabolic network},label=lst:instance]{toy_instance.lp}
Note that in lines~33--37 of Listing~\ref{lst:instance},
the values of \texttt{objective} and \texttt{bounds} are set globally,
but they may be arbitrary in general.
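For instance, such a global default might be expressed by rules of roughly the following shape, where the concrete bound values are merely illustrative and need not coincide with those of Listing~\ref{lst:instance}:
\begin{lstlisting}[numbers=none,basicstyle=\ttfamily\footnotesize]
% illustrative global defaults: every reaction gets flux bounds [0,99999],
% and every target reaction contributes to the objective
bounds(R,"0","99999") :- reaction(R,_).
objective(R,t)        :- reaction(R,t).
\end{lstlisting}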
\section{Solving Hybrid Metabolic Network Completion}
\label{sec:approach}
In this section, we present our hybrid approach to metabolic network completion.
We start with a factual representation of problem instances.
A metabolic network $G$ with a typing function $t: M\cup R\rightarrow\{\texttt{d,r,s,t}\}$,
indicating the origin of the respective entities,
is represented as follows:
\begin{align*}
F(G,t) = & \phantom{\cup\;}\;\{\texttt{metabolite($m$,$t(m)$)}\mid m\in M\} \\
& \cup\; \{\texttt{reaction($r$,$t(r)$)}\mid r\in R\} \\
& \cup\; \{\texttt{bounds($r$,$lb_r$,$ub_r$)}\mid r\in R\} \;\cup\; \{\texttt{objective($r$,$t(r)$)}\mid r\in R\}\\
& \cup\; \{\texttt{reversible($r$)}\mid r\in R, \Reactants{r}\cap\Products{r}\neq\emptyset\} \\
& \cup\; \{\texttt{rct($m$,$s(m,r)$,$r$,$t(r)$)}\mid r\in R, m\in\Reactants{r}\} \\
& \cup\; \{\texttt{prd($m$,$s(r,m)$,$r$,$t(r)$)}\mid r\in R, m\in\Products{r}\}
\end{align*}
While most predicates should be self-explanatory,
we mention that \texttt{reversible} identifies bidirectional reactions.
Only one direction is explicitly represented in our fact format.
The four types \texttt{d}, \texttt{r}, \texttt{s}, and \texttt{t} tell us whether an entity stems from the
\textbf{d}raft or \textbf{r}eference network, or belongs to the \textbf{s}eeds or \textbf{t}argets.
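For instance, a hypothetical draft reaction $r_1$ consuming one unit of compound $m_1$ and producing two units of $m_2$ would give rise to facts of the following shape (identifiers and coefficients are made up for illustration; Listing~\ref{lst:instance} in the appendix shows an actual instance):
\begin{lstlisting}[numbers=none,basicstyle=\ttfamily\footnotesize]
reaction(r1,d).    metabolite(m1,d).    metabolite(m2,d).
rct(m1,1,r1,d).    prd(m2,2,r1,d).      bounds(r1,"0","10").
\end{lstlisting}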
In a metabolic network completion problem,
we consider
a draft network $G=(R\cup M,E,s)$,
a set $S$ of seed compounds,
a set $R_{T}$ of target reactions,
and a reference network $G'=(R'\cup M',E',s')$.
An instance of this problem is represented by the set of facts
\(
F(G,t)\cup F(G',t')
\).
In it, a key role is played by the typing functions that differentiate the various components:
\[
t(n) =
\left\{
\begin{array}{ll}
\texttt{d}, & \text{if } n\in (M\setminus (T\cup S))\cup (R\setminus(R_{S_b}\cup R_T)) \\
\texttt{s}, & \text{if } n\in S\cup R_{S_b} \\
\texttt{t}, & \text{if } n\in T\cup R_T
\end{array}
\right.
\quad\text{ and }\quad
t'(n) = \texttt{r},
\]
where
\(
T=\{m\in\Reactants{r}\mid r\in R_T\}
\)
is the set of target compounds and
\(
R_{S_b}=\{r\in R\mid \Products{r}\cap S_b(G)\neq\emptyset\}
\)
is the set of reactions related to boundary seeds.
Our encoding of hybrid metabolic network completion is given in Listing~\ref{lst:encoding}.
\lstinputlisting[float=t,floatplacement=t,numbers=left,numberblanklines=false,basicstyle=\ttfamily\scriptsize,firstline=1,lastline=27,caption={Encoding of hybrid metabolic network completion},label=lst:encoding,belowskip=-2em]{encoding.lp}
Roughly,
the first 10 lines lead to a set of candidate reactions for completing the draft network.
Their topological validity is checked in lines~12--16 with regular ASP,
the stoichiometric one in lines~18--24 in terms of linear constraints.
(Lines~1--16 constitute a revision of the encoding in~\citep{schthi09a}.)
The last two lines pose a hybrid optimization problem,
first minimizing the size of the completion and then
maximizing the flux of the target reactions.%
In more detail,
we begin by defining the auxiliary predicate \texttt{edge}/4 representing directed edges between compounds connected by a reaction.
With it,
we calculate in lines~4 and~5 the scope $\Sigma_{G}(S)$ of the \textbf{d}raft network $G$ from the seed compounds in $S$;
it is captured by all instances of \texttt{scope(M,d)}.
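Schematically, this scope computation is a reachability fixpoint of roughly the following shape, where we ignore reversible reactions and other details handled in Listing~\ref{lst:encoding}:
\begin{lstlisting}[numbers=none,basicstyle=\ttfamily\footnotesize]
% seed compounds are producible from the start
scope(M,d) :- metabolite(M,s).
% a product is producible once all reactants of its reaction are producible
scope(M,d) :- prd(M,_,R,d); scope(M2,d) : rct(M2,_,R,d).
\end{lstlisting}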
This scope is then e\textbf{x}tended in lines~7 and~8 via the reference network $G'$ to delineate all possibly producible compounds.
We draw on this in Line~10 when choosing the reactions $R''$ of the completion (cf.\ Section~\ref{sec:problem})
by restricting their choice to reactions from the reference network whose reactants are producible.
This amounts to a topological search space reduction.
The reactions in $R''$ are then used in lines~12--14 to compute the scope $\Sigma_{G''}(S)$ of the \textbf{c}ompleted network.
And $R''$ constitutes a topologically valid completion if all targets in $T$ are producible by the expanded draft network $G''$:
Line~16 checks whether $T\subseteq\Sigma_{G''}(S)$ holds, which is equivalent to $R_T\subseteq\Activity{t}{G''}{S}$.
Similarly, $R''$ is checked for stoichiometric validity in lines~18--24.
For simplicity, we associate reactions with their rate and let their identifiers take real values.
Accordingly, Line~18 accounts for \eqref{eq:stoichiometric:bounds} by imposing lower and upper bounds on each reaction rate.
The mass-balance equation \eqref{eq:stoichiometric:equation} is enforced for each metabolite \texttt{M} in lines~20--22;
it checks whether the sum of products of stoichiometric coefficients and reaction rates equals zero,
viz.\ \texttt{IS*IR}, \texttt{-OS*OR}, \texttt{IS'*IR'}, and \texttt{-OS'*OR'}.
Reactions \texttt{IR}, \texttt{OR} and \texttt{IR'}, \texttt{OR'} belong to the draft and reference network, respectively,
and correspond to $R\cup R''$.
Finally, by enforcing $r_T>0$ for $r_T\in R_T$ in Line~24,
we make sure that $R_T\subseteq\Activity{s}{G''}{S}$.
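To convey the flavor of these rules, the stoichiometric conditions may be sketched as follows; this is a schematic rendering with simplified predicate and variable names, and Listing~\ref{lst:encoding} remains authoritative:
\begin{lstlisting}[numbers=none,basicstyle=\ttfamily\footnotesize]
% flux bounds (cf. Line 18); reaction identifiers act as real variables
&dom{LB..UB} = R :- bounds(R,LB,UB).
% mass balance (cf. lines 20--22) for each metabolite M
&sum{ IS*IR    : prd(M,IS,IR,d);    -OS*OR    : rct(M,OS,OR,d);
      IS2*IR2  : prd(M,IS2,IR2,r), completion(IR2);
      -OS2*OR2 : rct(M,OS2,OR2,r), completion(OR2) } = 0
  :- metabolite(M,_).
% target activity (cf. Line 24): strictly positive flux
&sum{ 1*R } > 0 :- objective(R,t).
\end{lstlisting}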
In all, our encoding ensures that the set $R''$ of reactions chosen in Line~10 induces an augmented network $G''$
in which all targets are activated both topologically as well as stoichiometrically,
and is optimal wrt the hybrid optimization criteria.
\section{Answer Set Programming with Linear Constraints}\label{sec:background}
For encoding our hybrid problem,
we rely upon the theory reasoning capacities of the ASP system \sysfont{clingo}\ that allows us to extend ASP with linear constraints over reals
(as addressed in Linear Programming).
We confine ourselves below to features relevant to our application and refer the interested reader for details to~\citep{gekakaosscwa16a}.
As usual, a \emph{logic program} consists of \emph{rules} of the form
\begin{lstlisting}[mathescape=true,numbers=none]
a$_0$ :- a$_1$,...,a$_m$,not a$_{m+1}$,...,not a$_n$
\end{lstlisting}
where each \lstinline[mathescape=true]{a$_i$} is either
a \emph{(regular) atom} of form \lstinline[mathescape=true]{p(t$_1$,...,t$_k$)}
where all \lstinline[mathescape=true]{t$_i$} are terms
or
a \emph{linear constraint atom} of form%
\footnote{In \sysfont{clingo}, theory atoms are preceded by `\texttt{\&}'.}
`\lstinline[mathescape=true]@&sum{w$_1$*x$_1$;$\dots$;w$_l$*x$_l$} <= k@'
that stands for the linear constraint
\(
w_1\cdot x_1+\dots+w_l\cdot x_l\leq k
\).
All \lstinline[mathescape=true]{w$_i$} and \lstinline[mathescape=true]{k} are finite sequences of digits with at most one dot%
\footnote{In the input language of \sysfont{clingo}, such sequences must be quoted to avoid clashes.}
and represent real-valued coefficients $w_i$ and $k$.
Similarly all \lstinline[mathescape=true]{x$_i$} stand for the real-valued variables $x_i$.
As usual, \lstinline[mathescape=true]{not} denotes (default) \emph{negation}.
A rule is called a \emph{fact} if $n=0$.
Semantically, a logic program induces a set of \emph{stable models},
being distinguished models of the program determined by stable models semantics~\citep{gellif91a}.
Such a stable model $X$ is an \emph{LC-stable model} of a logic program $P$,%
\footnote{This corresponds to the definition of $T$-stable models using a \emph{strict} interpretation of theory atoms~\citep{gekakaosscwa16a},
and letting $T$ be the theory of linear constraints over reals.}
if there is an assignment of reals to all real-valued variables occurring in $P$ that
(i) satisfies all linear constraints associated with linear constraint atoms in $P$ being in $X$
and
(ii) falsifies all linear constraints associated with linear constraint atoms in $P$ being not in $X$.
For instance, the (non-ground) logic program containing the fact
`\lstinline[mathescape=true]{a("1.5").}'
along with the rule
`\lstinline[mathescape=true]@&sum{R*x} <= 7 :- a(R).@'
has the stable model
\par
\lstinline[mathescape=true]@$\{$a("1.5")$,\;$&sum{"1.5"*x}<=7$\}$@.
\\
This model is LC-stable since there is an assignment,
e.g.\ $\{x\mapsto 4.2\}$,
that satisfies the associated linear constraint `$1.5*x\leq 7$'.
We regard the stable model along with a satisfying real-valued assignment as a solution to a logic program containing linear constraint atoms.
For a more detailed introduction to ASP extended with linear constraints, illustrated with more complex examples, we refer the interested reader to~\citep{jakaosscscwa17a}.
To ease the use of ASP in practice,
several extensions have been developed.
First of all, rules with variables are viewed as shorthands for the set of their ground instances.
Further language constructs include
\emph{conditional literals} and \emph{cardinality constraints} \citep{siniso02a}.
The former are of the form
\lstinline[mathescape=true]{a:b$_1$,...,b$_m$},
the latter can be written as
\lstinline[mathescape=true]+s{d$_1$;...;d$_n$}t+,
where \lstinline{a} and \lstinline[mathescape=true]{b$_i$} are possibly default-negated (regular) literals
and each \lstinline[mathescape=true]{d$_j$} is a conditional literal;
\lstinline{s} and \lstinline{t} provide optional lower and upper bounds on the number of satisfied literals in the cardinality constraint.
We refer to \lstinline[mathescape=true]{b$_1$,...,b$_m$} as a \emph{condition}.
The practical value of both constructs becomes apparent when used with variables.
For instance, a conditional literal like
\lstinline[mathescape=true]{a(X):b(X)}
in a rule's antecedent expands to the conjunction of all instances of \lstinline{a(X)} for which the corresponding instance of \lstinline{b(X)} holds.
Similarly,
\lstinline[mathescape=true]+2{a(X):b(X)}4+
is true whenever at least two and at most four instances of \lstinline{a(X)} (subject to \lstinline{b(X)}) are true.
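For instance, given the facts \lstinline{b(1..3)}, we may write:
\begin{lstlisting}[numbers=none,basicstyle=\ttfamily\footnotesize]
b(1..3).
% the conditional body expands to a(1),a(2),a(3)
c :- a(X) : b(X).
% choose at least two and at most three of a(1), a(2), a(3)
2 { a(X) : b(X) } 3.
\end{lstlisting}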
Finally, objective functions minimizing the sum of weights $w_i$ subject to condition $c_i$ are expressed as
\lstinline[mathescape=true]!#minimize{$w_1$:$c_1$;$\dots$;$w_n$:$c_n$}!.
In the same way,
the syntax of linear constraints offers several convenience features.
As above,
elements in linear constraint atoms can be conditioned,
viz.\par
`\lstinline[mathescape=true]@&sum{w$_1$*x$_1$:c$_1$;...;w$_l$*x$_l$:c$_n$} <= k@'
\\
where each \lstinline[mathescape=true]{c$_i$} is a condition.
Moreover, the theory language for linear constraints offers a domain declaration for real variables,
`\lstinline[mathescape=true]@&dom{lb..ub} = x@'
expressing that all values of \texttt{x} must lie between \texttt{lb} and \texttt{ub}.
And finally the maximization (or minimization) of an objective function can be expressed with
\lstinline[mathescape=true]@&maximize{w$_1$*x$_1$:c$_1$;...;w$_l$*x$_l$:c$_n$}@
(by \texttt{minimize}).
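By way of illustration, these constructs combine into the following self-contained toy program, which is purely illustrative and unrelated to our encoding:
\begin{lstlisting}[numbers=none,basicstyle=\ttfamily\footnotesize]
item(a;b).
% each real variable x(I) ranges over [0,10]
&dom{0..10} = x(I) :- item(I).
% joint capacity: 1.5*x(a) + 1.5*x(b) <= 12
&sum{ "1.5"*x(I) : item(I) } <= 12.
% maximize the total x(a) + x(b)
&maximize{ 1*x(I) : item(I) }.
\end{lstlisting}
An optimal solution assigns, for instance, $x(a)=x(b)=4$, attaining the maximal total of $8$.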
The full theory grammar for linear constraints over reals is available at~\url{https://potassco.org}.
\section{Discussion}\label{sec:discussion}
We presented the first hybrid approach to metabolic network completion
by combining topological and stoichiometric constraints in a uniform setting.
To this end,
we elaborated a formal framework capturing different semantics for the activation of reactions.
Based upon these formal foundations, we developed a hybrid ASP encoding reconciling
disparate approaches to network completion.
The resulting system, \sysfont{fluto}, thus combines the advantages of both approaches and
yields greatly superior results compared to purely quantitative or qualitative existing systems.
Our experiments show that \sysfont{fluto}\ scales to more highly degraded networks
and produces useful solutions in reasonable time. %
In fact, all of \sysfont{fluto}'s solutions passed the biological gold standard.
The exploitation of the network's topology guides the solver to more likely completion candidates,
and furthermore avoids self-activated cycles, as obtained in FBA-based approaches.
Also, unlike other systems, \sysfont{fluto}\ allows for establishing optimality and addresses the strict stoichiometric completion problem without approximation.
\sysfont{fluto}\ takes advantage of the hybrid reasoning capacities of the ASP system \sysfont{clingo}{}
for extending logic programs with linear constraints over reals.
This provides us with a practically relevant application scenario for evaluating this hybrid form of ASP.
To us, the most surprising empirical result was the observation that domain-specific heuristics allow for boosting unsatisfiable-core-based
optimization.
So far, such heuristics have only been known to improve satisfiability-oriented reasoning modes, and usually hampered unsatisfiability-oriented ones
(cf.~\citep{gekakarosc15a}).
\section{System and Experiments}\label{sec:experiments}
\label{sec:sysandexp}
In this section, we introduce \sysfont{fluto}, our new system for hybrid metabolic network completion, and empirically evaluate its performance.
The system relies on the hybrid encoding described in Section~\ref{sec:approach}
along with the hybrid solving capacities of \sysfont{clingo}~\citep{gekakaosscwa16a} for implementing the combination of ASP and LP.
We use \sysfont{clingo}~5.2.0
incorporating as LP solvers either \sysfont{cplex}~12.7.0.0 or \sysfont{lpsolve}~5.5.2.5 via their respective Python\ interfaces.
We describe the details of the underlying solving techniques in a separate paper and focus below on application-specific aspects.
The output of \sysfont{fluto}\ consists of two parts.
First, the completion $R''$, given by instances of predicate \texttt{completion}, and
second, an assignment of floats to (metabolic flux variables $v_r$ for) all $r\in R\cup R''$.
In our example, we get
\begin{align*}
R''=\{\texttt{completion}(r_6), \texttt{completion}(r_8), \texttt{completion}(r_9)\}
\\
\text{ and } \{\ensuremath{r_{s_{1}}}=49999.5, r_9=49999.5, r_3=49999.5, r_2=49999.5, \\
\ensuremath{r_e}=99999.0, r_6=49999.5, r_5=49999.5, r_4=49999.5\}.
\end{align*}
Variables assigned $0$ are omitted.
Note the flux value $r_8=0$ even though $r_8\in R''$.
This is to avoid the self-activation of cycle $C$, $D$ and $E$.
By choosing $r_8$, we ensure that the cycle has been externally initiated at some point
but activation of $r_8$ is not necessary at the current steady state.
We analyze
(i) the impact of different system configurations,
(ii) the quality of \sysfont{fluto}'s approach to metabolic network completion, and
(iii) the quality of \sysfont{fluto}'s solutions in comparison with other approaches.
To have a realistic setting,
we use degradations of a functioning metabolic network of \textit{Escherichia coli}~\citep{Reed2003} comprising 1075 reactions.
The network was randomly degraded by 10, 20, 30 and 40 percent,
creating 10 networks for each degradation
by removing reactions until the target reactions were inactive according to \emph{Flux Variability Analysis}~\citep{Becker2007}.
90 target reactions with varied reactants were randomly chosen for each network, yielding 3600 problem instances in total~\citep{Prigent2017}.
The reference network consists of reactions of the original metabolic network.
We ran each benchmark on a Xeon E5520 2.4 GHz processor under Linux limiting RAM to 20~GB.
At first,
we investigate two alternative optimization strategies for computing completions of minimum size.
The first one, \emph{branch-and-bound}~(\textsc{bb}), iteratively produces solutions of better quality until the optimum is found and the other,
\emph{unsatisfiable core}~(\textsc{usc}), relies on successively identifying and relaxing unsatisfiable cores until an optimal solution is obtained.
Note that we are not only interested in optimal solutions
but, if these are unavailable, also in suboptimal solutions activating target reactions without trivially restoring the whole reference network.
In \sysfont{clingo}, \textsc{bb}\ naturally produces these solutions in contrast to \textsc{usc}.
Therefore, we use \textsc{usc}\ with stratification~\citep{anbole13a}, which provides at least some suboptimal solutions.
\subsection{System configurations}
\label{sec:sysconf}
\newcommand{\shade}[2
{%
\cellcolor{black!\xinttheiexpr 80*#1/#2-30\relax}%
{#1}%
}%
\begin{table}[t]
\begin{center}
\begin{tabular}{r|*{5}{c}}
\input{tables/optbb_heur0}
\end{tabular}
\end{center}
\caption{Comparison of propagation and core minimization heuristics for \textsc{bb}. \label{tab:pchbb}}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{r|*{5}{c}}
\input{tables/optusc_heur0}
\end{tabular}
\end{center}
\caption{Comparison of propagation and core minimization heuristics for \textsc{usc}. \label{tab:pchusc}}
\end{table}
The configuration space of \sysfont{fluto}\ is huge.
In addition to its own parameters, the ones of \sysfont{clingo}\ and the respective LP solver amplify the number of options.
We thus concentrate on distinguished features revealing an impact in our experiments.
The first focus are two system options controlling the hybrid solving nature of \sysfont{fluto}.
First, \prop{$n$} controls the frequency of LP propagation:
the consistency of linear constraints is only checked if $n\%$ of atoms are decided.
Second,
the \sysfont{fluto}\ option \core{$n$} invokes the irreducible inconsistent set algorithm~\citep{ostsch12a} whenever $n\%$ of atoms are decided.
This algorithm extracts a minimal set of conflicting linear constraints for a given conflict.
Note that the second parameter depends on the first one,
since conflict analysis may only be invoked if the LP solver found an inconsistency.
The \textsc{default}{} is to use \core{100} and \prop{0}, with \sysfont{cplex}\ as LP solver\footnote{We do not present results of \sysfont{lpsolve}\ since it produced inferior results.}.
This allows us to detect conflicts among the linear constraints as soon as possible and only perform expensive conflict analysis on the full assignment.
To get an overview, we conducted a preliminary experiment
using \textsc{bb}\ and \textsc{usc}\ with \sysfont{fluto}'s default configuration on the 10, 20, and 30 percent degraded networks,
2700 instances in total,
limiting execution time to 20 minutes.
For our performance experiments,
we selected at random three networks with at least one instance
for which \textsc{bb}\ and \textsc{usc}\ could find the optimum in 100 to 600 seconds.
With the resulting 270 medium to hard instances,
we examined the cross product of values $n\in\{0,25,50,75,100\}$ for \core{n} and \prop{n}, respectively, limiting time to 600 seconds.
Table~\ref{tab:pchbb} and Table~\ref{tab:pchusc} display the results using \textsc{bb}\ and \textsc{usc}\ respectively.
The columns increase the value for \prop{n} and the rows for \core{n} in steps of 25,
i.e., LP propagation becomes less frequent from left to right,
and conflict minimization from top to bottom.
The first value in each cell is the average runtime in seconds and the value in brackets shows the number of timeouts.
The shade of the cells depends on the average runtime,
i.e., the darker the cell, the less performant the combination of propagation and conflict minimization heuristics.
Table~\ref{tab:pchbb} shows that propagation and conflict minimization heuristics have an overall small impact on the performance of \textsc{bb}\ optimization.
Since \textsc{bb}\ relies on iterating solutions and, while optimizing, learns weaker constraints
pertaining only to the best known bound,
the improvement step is less constrained compared to \textsc{usc}.
Due to this, conflicts are more likely to appear later on in the optimization process, allowing for less impact of frequent LP propagation and conflict minimization.
Nevertheless, we see a slight performance improvement of propagating and conflict minimizing for every partial assignment (\prop{0}, \core{0})
compared to only on full assignments (\prop{100}, \core{100}).
To prove the optimum, the solver is still required to cover the whole search space.
For this purpose, early pruning and conflict minimization may be effective.
Furthermore, we see the best average runtime in the area \prop{0-50} at \core{75}.
That indicates a good tradeoff between the better quality conflicts which prune the search effectively
and the overhead of the costly conflict minimization.
There is no clear best configuration,
but \prop{25} with \core{75} shows the best tradeoff between average runtime and number of timeouts.
\textsc{usc}\ on the other hand (Table~\ref{tab:pchusc}), clearly benefits from early propagation and conflict minimization.
The area \prop{0-75} and \core{0-50} has the lowest average runtime and number of timeouts,
best among them \prop{25} and \core{25}
with the fewest timeouts and an average runtime that is not significantly different from the best value.
\textsc{usc}\ aims at quickly identifying unsatisfiable partial assignments
and learning structural constraints building upon each other,
which is enhanced by frequent conflict detection and minimization.
Disabling LP propagation on partial assignments with \textsc{usc}\ leads to the overall worst performance
and we also see deterioration with \core{75} and \core{100} in the interval \prop{0-50}.
Overall, \textsc{usc}\ is more effective than \textsc{bb}\ for the instances and we see a benefit in early LP propagation and conflict minimization as well as in fine-tuning the heuristics at which point both are applied.
\begin{table}[t]
\begin{center}
\begin{tabular}{r|cccccccccccc}
& \multicolumn{2}{c}{\textsc{FR}}& \multicolumn{2}{c}{\textsc{JP}}& \multicolumn{2}{c}{\textsc{TW}}& \multicolumn{2}{c}{\textsc{TR}}& \multicolumn{2}{c}{\textsc{CR}}& \multicolumn{2}{c}{\textsc{HD}}\\
& \textsc{t} & \textsc{to} & \textsc{t} & \textsc{to} & \textsc{t} & \textsc{to} & \textsc{t} & \textsc{to} & \textsc{t} & \textsc{to} & \textsc{t} & \textsc{to}\\\hline
\textsc{bb} & 400.41 & 154 & 389.68 & 147 & \textbf{360.54} & 127 & 409.33 & 141 & 362.74 & \textbf{120} & 434.54 & 160\\
\textsc{usc} & 227.38 & 78 & 293.96 & 100 & 316.54 & 107 & 293.54 & 102 & \textbf{221.84} & \textbf{74} & 297.32 & 104\\
\end{tabular}
\end{center}
\caption{Comparison of \sysfont{clingo}'s portfolio configurations for \textsc{bb}\ and \textsc{usc}. \label{tab:search}}
\end{table}
Now, we focus on the portfolio configurations of \sysfont{clingo}.
Those configurations were crafted by experts to enhance the solving performance of problems with certain attributes.
\review{To examine their impact, we take the best result for \textsc{bb}\ (\prop{25} and \core{75}) and \textsc{usc}\ (\prop{25} and \core{25}), }
and employ the following \sysfont{clingo}\ options:
\begin{description}
\item[\emph{\textsc{FR}}]
Refers to {\sysfont{clingo}}'s configuration \emph{frumpy} that uses more conservative defaults.
\item[\emph{\textsc{JP}}]
Refers to {\sysfont{clingo}}'s configuration \emph{jumpy} that uses more aggressive defaults.
\item[\emph{\textsc{TW}}]
Refers to {\sysfont{clingo}}'s configuration \emph{tweety} that is geared toward typical ASP problems.
\item[\emph{\textsc{TR}}]
Refers to {\sysfont{clingo}}'s configuration \emph{trendy} that is geared toward industrial problems.
\item[\emph{\textsc{CR}}]
Refers to {\sysfont{clingo}}'s configuration \emph{crafty} that is geared towards crafted problems.
\item[\emph{\textsc{HD}}]
Refers to {\sysfont{clingo}}'s configuration \emph{handy} that is geared towards larger problems.
\end{description}
For more information on {\sysfont{clingo}}'s configurations, see~\citep{gekakarosc15a}.
Table~\ref{tab:search} shows the average runtime in seconds (\textsc{t}) and number of timeouts (\textsc{to})
for all six configurations using \textsc{bb}\ and \textsc{usc}\ on the same 270 instances.
Even though \textsc{CR} has slightly higher average runtime for \textsc{bb}\ compared to \textsc{TW},
it is the overall best configuration.
This configuration is geared towards problems with an inherent structure
compared to randomly generated benchmarks
which fits with the metabolic network completion problem at hand
since the data is taken from an existing bacterium.
Interestingly, \textsc{bb}\ performs worse under more specific configurations
and favors moderate ones like \textsc{TW} and \textsc{CR}.
This might be due to the changing nature of improvement steps as the optimization process goes on
from finding any random solutions to an unsatisfiability proof in the end.
\textsc{usc}, on the other hand, benefits from the more structural heuristics in \textsc{CR} and the more conservative defaults in \textsc{FR},
which allow the solver to explore and collect conflicts instead of frequently restarting and forgetting.
\subsection{Solution quality}
\begin{table}[t]%
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\centering
\begin{tabular}{r|rr|rr|rr||r}
& \mc{2}{c|}{\textsc{f}(\textsc{bb})} & \mc{2}{c|}{\textsc{f}(\textsc{usc})} & \mc{2}{c}{\textsc{f}(\textsc{bb}+\textsc{usc})} & \textsc{f}(\textsc{bb}+\textsc{usc}) \\
\textsc{degradation} & \textsc{\#sols} & \textsc{\#opts} & \textsc{\#sols} & \textsc{\#opts} & \textsc{\#sols} & \textsc{\#opts} & \textsc{verified} \\\hline
10\% (900) & \textbf{900} & \textbf{900} & 892 & 892 & 900 & 900 & 900 \\
20\% (900) & \textbf{830} & 669 & 793 & \textbf{769} & 867 & 814 & 867 \\
30\% (900) & \textbf{718} & 88 & 461 & \textbf{344} & 780 & 382 & 780 \\\hline
all (2700) & \textbf{2448} & 1657 & 2146 & \textbf{2005} & 2547 & 2096 & 2547
\end{tabular}
\caption{Comparison of qualitative results.\label{tab:quality}}
\end{table}
\begin{table}[t]%
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\centering
\begin{tabular}{r|rr||r}
& \mc{2}{c||}{\textsc{f}(\textsc{vbs})} & \textsc{verified}\\
\textsc{degradation} & \textsc{\#sols} & \textsc{\#opts} & \textsc{f}(\textsc{vbs})\\\hline
10\% (900) & 900 & 900 & 900\\
20\% (900) & 896 & 855 & 896\\
30\% (900) & 848 & 575 & 848\\
40\% (900) & 681 & 68 & 681\\\hline
all (3600) & 3325 & 2398 & 3325
\end{tabular}
\caption{Results using best system options.\label{tab:final}}
\end{table}
Now, we examine the quality of the solutions provided by \sysfont{fluto}.
Table~\ref{tab:quality} gives the number of solutions~(\textsc{\#sols}) and optima~(\textsc{\#opts}) obtained by \sysfont{fluto}{}~(\textsc{f}) in its default setting
within 20 minutes
for \textsc{bb}, \textsc{usc}\ and the best of both~(\textsc{bb}+\textsc{usc}),
individually for each \textsc{degradation} and over\textbf{all}.
\review{The default setting for \sysfont{fluto}\ includes the default configurations for \sysfont{clingo}\ and \sysfont{cplex}.}
The data was obtained in our preliminary experiment using networks with 10, 20, and 30 percent degradation.
For 94.3\% of the instances \sysfont{fluto}(\textsc{bb}+\textsc{usc}) found a solution within the time limit and 82.3\% of them were optimal.
We observe that \textsc{bb}\ provides overall more useful solutions but \textsc{usc}\ acquires more optima,
which was to be expected by the nature of the optimization techniques.
Additionally, each technique finds solutions to problem instances where the other exceeds the time limit,
underlining the merit of using both in tandem.
Column~\textsc{verified}\ shows the quality of solutions provided by \sysfont{fluto}.
Each obtained best solution was checked with \sysfont{cobrapy}~0.3.2~\citep{Ebrahim2013},
a renowned system implementing an FBA-based gold standard (for verification only).
All solutions found by \sysfont{fluto}\ could be verified by \sysfont{cobrapy}.
In detail, \sysfont{fluto}\ found a smallest set of reactions completing the draft network for 77.6\%,
a suboptimal solution for 16.7\%,
and no solution for 5.6\% of the problem instances.
Finally, we change the system configuration and examine how \sysfont{fluto}\ scales on harder instances.
To this end, we use the best configurations from Section~\ref{sec:sysconf},
\prop{25}, \core{75} and \textsc{CR} for \textsc{bb}, and \prop{25}, \core{25} and \textsc{CR} for \textsc{usc},
and rerun the experiment on all 3600 instances.
The results are shown in Table~\ref{tab:final}.
\textsc{f}(\textsc{vbs}) denotes the virtual best results, meaning that for each problem instance the best known solution among the two configurations was verified.
For 20\% and 30\% degradation, we obtain an additional 29 and 68 solutions and 41 and 193 optima, respectively.
Overall, we find solutions for 92.4\% out of the 3600 instances and 72.1\% of them are optimal.
The number of solutions decreases slightly and the number of optima more drastically with higher degradation.
The results show that \sysfont{fluto}\ is capable of finding correct completions for even highly degraded networks for most of the instances in reasonable time.
\subsection{Comparison to other approaches}
\begin{table}[t]%
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\centering
\begin{tabular}{r|rrr|rrr}
& \mc{3}{c|}{\sysfont{fluto}} & \mc{3}{c}{\sysfont{meneco}} \\
& min & average & max & min & average & max \\ \hline
solutions per instance & 1 & 2.24 & 12 & 1 & 1.88 & 6 \\
reactions per solution & 1 & 6.66 & 9 & 1 & 6.24 & 9 \\ \hline
verified solutions & \mc{3}{r|}{100\%} & \mc{3}{r}{73.39\%} \\
instances with only verified solutions & \mc{3}{r|}{100\%} & \mc{3}{r}{72.94\%} \\
instances without verified solutions & \mc{3}{r|}{0\%} & \mc{3}{r}{26.61\%} \\
instances with some verified solutions & \mc{3}{r|}{0\%} & \mc{3}{r}{0.45\%} \\
\end{tabular}
\caption{Comparison of \sysfont{fluto}\ and \sysfont{meneco}\ solutions for 10 percent degraded networks.\label{tab:enumeration}}
\end{table}
\begin{table}[t]%
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\centering
\begin{tabular}{r|r|r|r}
& \sysfont{fluto} & \sysfont{meneco} & \sysfont{gapfill} \\ \hline
verified union & 100\% & 73.39\% & 6.20\% \\
verified union of verified solutions & 100\% & 72.94\% & NA \\
verified union of unverified solutions & 0\% & 0.00\% & NA\\
verified union of partially verified solutions & 0\% & 0.45\% & NA \\
\end{tabular}
\caption{Comparison of \sysfont{fluto}, \sysfont{meneco}\ and \sysfont{gapfill}\ unions for 10 percent degraded networks.\label{tab:union}}
\end{table}
We compare the quality of \sysfont{fluto}\ with \sysfont{meneco}~1.4.3~\citep{Prigent2017} and \sysfont{gapfill}%
\footnote{Update of 2011-09-23, see \url{http://www.maranasgroup.com/software.htm}.}~\citep{SatishKumar2007}.%
\footnote{The results for \sysfont{meneco}\ and \sysfont{gapfill}\ are taken from previous work~\citep{Prigent2017},
where they were run to completion with \emph{no} time limit.}
Both \sysfont{meneco}\ and \sysfont{gapfill}\ are systems for metabolic network completion.
While \sysfont{meneco}\ pursues the topological approach,
\sysfont{gapfill}\ applies the relaxed stoichiometric variant using Inequation~\eqref{eq:stoichiometric:equation:relaxed}.
We performed an enumeration of all minimal solutions to the completion problem
under the topological (\sysfont{meneco}), the relaxed stoichiometric (\sysfont{gapfill}),
and hybrid (\sysfont{fluto}) activation semantics for the 10 percent degraded networks of the benchmark set (900 instances to be completed).
First, we compare the quality of individual solutions of \sysfont{fluto}\ and \sysfont{meneco}.%
\footnote{There was no data available for the individual solutions of \sysfont{gapfill}.}
Results are displayed in Table~\ref{tab:enumeration}.
The first two rows give the minimum, average and maximum number of solutions per instance,
and reactions per solution, respectively, for \sysfont{fluto}\ and \sysfont{meneco}.
While \sysfont{fluto}\ finds 19\% more solutions on average and up to twice as many solutions per instance compared to \sysfont{meneco},
the numbers of reactions in minimal solutions of both tools are similar.
The next four rows pertain to the solution quality as established by \sysfont{cobrapy}.
First, the percentage of solutions over all instances that could be verified,
second, the percentage of instances with exclusively verified solutions,
third, the percentage of instances without any verified solution,
and finally, the percentage of instances where only a portion of the solutions could be verified.
All of \sysfont{fluto}'s solutions could be verified,
compared to the 72.04\% of \sysfont{meneco}\ across all solutions and 72.94\% of instances that were correctly solved.
Interestingly, \sysfont{meneco}\ achieves hybrid activation in some but not all solutions for 0.45\% (4) of the instances.
\sysfont{fluto}\ does not only improve upon the quality of \sysfont{meneco},
but also provides more solutions per instance without significantly increasing the number of relevant reactions.
To empirically evaluate the properties established in Section~\ref{sec:union},
and be able to compare to \sysfont{gapfill}, for which only the union of reactions was available,
we examine the union of minimal solutions provided by all three systems
and present the results in Table~\ref{tab:union}.
The four rows show,
first, for what percentage of instances the union of solutions could be verified,
second, how many instances had only verified solutions whose union was also verified,
third, the percentage of instances where the union of solutions displayed activation of the target reactions even though none of the individual solutions did,
and fourth, instances where the solutions were only partly verifiable but their union could be verified.
While again 100\% of \sysfont{fluto}'s solutions could be verified, only 73.3\% and 6.2\% are obtained for \sysfont{meneco}\ and \sysfont{gapfill}, respectively, for 10 percent degraded networks.
As reflected by the results, the ignorance of \sysfont{meneco}\ regarding stoichiometry leads to possibly unbalanced networks.
Still, the union of solutions provided a useful set of reactions in almost three quarters of the instances, showing merit in the topological approximation of the metabolic network completion problem.
On the other hand, the simplified view of \sysfont{gapfill}\ in terms of stoichiometry misguides the search for possible completions and eventually leads to unbalanced networks even in the union.
Moreover, \sysfont{gapfill}'s ignorance of network topology results in self-activated cycles.
By exploiting both topology and stoichiometry,
\sysfont{fluto}\ avoids such cycles while still satisfying the stoichiometric activation criteria.
The results support the observations made in Section~\ref{sec:union}.
For both \sysfont{fluto}\ and \sysfont{meneco}, the union is verifiable for all instances for which the complete solution set could be verified,
as are all unions for instances where \sysfont{meneco}\ established hybrid activation for only a fraction of the solutions.
\section{Introduction}\label{sec:introduction}
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\blfootnote{This is an extended version of a paper presented at LPNMR-17, invited as a rapid publication in TPLP. The authors acknowledge the assistance of the conference chairs Tomi Janhunen and Marco Balduccini.}
Among all biological processes occurring in a cell, metabolic networks are in charge of transforming
input nutrients into both energy and output nutrients necessary for the functioning of other cells.
In other words, they capture all chemical reactions occurring in an organism.
In biology,
such networks are crucial from a fundamental and technological point of view
to estimate and control the capability of organisms to produce certain products.
Metabolic networks of high quality exist for many model organisms.
In addition,
recent technological advances enable their semi-automatic generation for many less studied organisms, also described as non-model organisms.
However,
the resulting metabolic networks are usually of poor quality,
due to error-prone, genome-based construction processes and a lack of (human) resources.
As a consequence, they usually suffer from substantial incompleteness.
The common fix is to fill the gaps by completing a draft network by borrowing chemical pathways
from reference networks of well studied organisms until the augmented network provides the measured functionality.
In previous work~\citep{schthi09a}, we introduced a logical approach to \emph{metabolic network completion}
by drawing on the work in~\citep{haebhe05a}.
We formulated the problem as a qualitative combinatorial (optimization) problem and solved it with Answer Set Programming (ASP~\citep{baral02a}).
The basic idea is that reactions apply only if all their reactants are available,
either as nutrients or provided by other metabolic reactions.
Starting from given nutrients, referred to as \emph{seeds},
this allows for extending a metabolic network by successively adding operable
reactions and their products.
The set of compounds in the resulting network is called the \emph{scope} of the
seeds and represents all compounds that can principally be synthesized from the seeds.
In metabolic network completion, we query a database of metabolic reactions
looking for (minimal) sets of reactions that can restore an observed bio-synthetic behavior.
This is usually expressed by requiring that certain \emph{target} compounds are in the scope of some given seeds.
For instance, in the follow-up work in~\citep{coevgeprscsith13a,prcodideetdaevthcabosito14a},
we successfully applied our ASP-based approach to the reconstruction of the metabolic network of the macro-algae \emph{Ectocarpus siliculosus},
using the collection of \review{reference networks MetaCyc~\citep{Caspi2016}.}
We evidenced in~\citep{Prigent2017}
that our ASP-based method partly restores the bio-synthetic capabilities of a large proportion of moderately degraded networks: it fails to restore those of some moderately degraded and of most highly degraded metabolic networks.
The main reason for this is that our purely qualitative approach misses quantitative constraints
accounting for the law of mass conservation,
a major hypothesis about metabolic networks.
This law stipulates that
each internal metabolite of a network must balance its production rate with its consumption rate at the steady state of the system.
Such rates are given by the weighted sums of all reaction rates consuming or producing a metabolite, respectively.
This calculation is captured by the \emph{stoichiometry}\footnote{See also \url{https://en.wikipedia.org/wiki/Stoichiometry}.} of the involved reactions.
Hence,
the qualitative ASP-based approach fails to tell apart solution candidates with correct and incorrect stoichiometry
and therefore reports inaccurate results for some degraded networks.
We address this by proposing a hybrid approach to metabolic network completion that integrates our qualitative ASP approach
with quantitative techniques from
\emph{Flux Balance Analysis} (FBA\footnote{See also \url{https://en.wikipedia.org/wiki/Flux_balance_analysis}.}~\citep{marzom16a}),
the state-of-the-art quantitative approach for capturing reaction rates in metabolic networks.
We accomplish this by taking advantage of recently developed theory reasoning capacities for the ASP system \sysfont{clingo}~\citep{gekakaosscwa16a}.
More precisely,
we use an extension of \sysfont{clingo}\ with linear constraints over reals, as dealt with in Linear Programming (LP~\citep{dantzig63a}).
This extension provides us with an extended ASP modeling language as well as a generic interface to alternative LP solvers, viz.\ \sysfont{cplex}\ and \sysfont{lpsolve},
for dealing with linear constraints.
We empirically evaluate our approach by means of the metabolic network of \emph{Escherichia coli}.
Our analysis shows that our novel approach yields results superior to those obtainable from purely qualitative or quantitative approaches.
Moreover, our hybrid application provides a first evaluation of the theory extensions of the ASP system \sysfont{clingo}\
with linear constraints over reals in a non-trivial setting.
\section{Metabolic Network Completion}\label{sec:problem}
Metabolism is the sum of all chemical reactions occurring within an organism.
As the products of a reaction may be reused as reactants, reactions can be chained to complex chemical pathways.
Such complex pathways are described by a metabolic network.
We represent a \emph{metabolic network} as a labeled directed bipartite graph
\(
G=(R\cup M,E,s),
\)
where $R$ and $M$ are sets of nodes standing for \emph{reactions} and \emph{compounds} (also called metabolites), respectively.
When $(m,r)\in E$ or $(r,m)\in E$ for $m\in M$ and $r\in R$, the metabolite $m$ is called a \emph{reactant} or \emph{product} of reaction~$r$, respectively. \review{Metabolite and reaction nodes can both have multiple ingoing and outgoing edges.}
More formally, for any $r\in R$, define
\(
\Reactants{r}=\{m\in M\mid (m,r)\in E\}
\)
and
\(
\Products{r} =\{m\in M\mid (r,m)\in E\}
\).
The \emph{edge labeling}
\(
s: E\rightarrow \mathbb{R}
\)
gives the stoichiometric coefficients of a reaction's reactants and products, respectively, i.e., their relative quantities involved in the reaction.
Finally, the activity rate of each reaction is bounded by a lower and an upper bound,
denoted by $\MinFlux{r}\in\mathbb{R}^+_0$ and $\MaxFlux{r}\in\mathbb{R}^+_0$ for $r\in R$, respectively.
Whenever clear from the context,
we refer to metabolic networks with $G$ (or $G'$, etc) and denote the associated reactions and compounds with
$M$ and $R$ (or $M',R'$ etc), respectively.
We distinguish a set $S \subseteq M$ of compounds as initiation \emph{seeds}, that is,
compounds initially present due to experimental evidence.
Another set of compounds is assumed to be activated by default.
These \emph{boundary compounds} are defined as:
\(
S_{b}\,(G) = \{ m\in M \mid r\in R, m\in \Products{r}, \Reactants{r}=\emptyset \}
\).
For simplicity, we assume that all boundary compounds are seeds: $S_{b}\,(G)\subseteq S$.
Note that follow-up concepts like reachability and activity in network completion are independent of this assumption.
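In ASP, this default activation is easy to state; the following rules are a sketch with predicate names of our own choosing, not the actual encoding:
\begin{lstlisting}
% a reaction having some reactant is not reactant-free
has_reactant(R) :- reactant(M,R).
% boundary compounds are products of reactant-free reactions
boundary(M) :- product(M,R), not has_reactant(R).
% by the above assumption, every boundary compound is a seed
seed(M) :- boundary(M).
\end{lstlisting}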
For illustration, consider the metabolic network in Fig.~\ref{gra:toy_d}.
\begin{figure}[t]
\centering
\input{toypic_draft}
\caption{Example of a metabolic network. Compounds and reactions are depicted by circles and rectangles respectively. Dashed reactions are reactions involving the boundary between the organism's metabolism and its environment. $r_5$ is the target reaction. $S_1$ and $S_2$ are boundary (and initiation) seeds. $S_3$ is assumed to be an initiation seed. Numbers on arrows describe the stoichiometry of reaction (default value is 1).}
\label{gra:toy_d}
\end{figure}
The network consists of 9 reactions, \ensuremath{r_{s_{1}}}, \ensuremath{r_{s_{2}}}, \ensuremath{r_e}{} and $r_0$ to $r_5$, and 8 compounds, $A,\dots,F$, $S_1$, $S_2$ and $S_3$.
Here, $S=\{S_1,S_2,S_3\}$, $S_1$ and $S_2$ being the two boundary compounds of the network. The dashed rectangle marks the boundary of the system, outside of which lies the environment of the organism.
Consider reaction
\(
r_4 : E\rightarrow 2C
\)
transforming one unit of $E$ into two units of $C$ (stoichiometric coefficients of 1 are omitted in the graphical representation; cf.~Fig.~\ref{gra:toy_d}).
We have
$\Reactants{r_4}=\{E\}$,
$\Products{r_4}=\{C\}$,
along with $\Stoichiometry{E}{r_4}=1 $
\ and $\Stoichiometry{r_4}{C}=2$.
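As a fact-level illustration (again with predicate names of our own choosing), reaction $r_4$ and its stoichiometry could be represented as:
\begin{lstlisting}
reaction(r4).
reactant(e,r4).  product(c,r4).
% stoichiometric coefficients: 1 unit of E consumed, 2 units of C produced
stc(e,r4,1).  stc(r4,c,2).
\end{lstlisting}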
In biology, several concepts have been introduced to model the activation of reaction fluxes in metabolic networks
or, equivalently, the synthesis of metabolic compounds.
To model this,
we introduce a function \ensuremath{\mathit{active}}\ that given a metabolic network $G$ takes a set of seeds $S \subseteq M$ and returns a set of activated
reactions $\Activitytwo{G}{S} \subseteq R$.
\begin{figure}[t]
\centering
\input{toypic}
\caption{Metabolic network completion problem. The purpose of its solving is to select the minimal number of reactions from a database (dashed shaded reactions) such that activation of target reaction $r_5$ is restored from boundary and/or initiation seeds. There are three formalisms for activation of target reaction: stoichiometric, topological and hybrid.}
\label{gra:toy}
\end{figure}
With it,
\emph{metabolic network completion} is about ensuring that a set of target reactions (reaction $r_5$ in~Fig.~\ref{gra:toy_d}) is activated from seed compounds in $S$
by possibly extending the metabolic network with reactions from a reference network (cf.\ shaded part in~Fig.~\ref{gra:toy}).
Formally, given
a metabolic network $G=(R\cup M,E,s)$,
a set $S\subseteq M$ of seed compounds such that $S_{b}\,(G) \subseteq S$,
a set $R_{T}\subseteq R$ of target reactions, and
a reference network $(R'\cup M',E',s')$,
the \emph{metabolic network completion problem} is to find a set $R''\subseteq R'\setminus R$ of reactions of minimal size such that
\(
R_{T}\subseteq\Activitytwo{G''}{S}
\)
where%
\footnote{Since $s$ and $s'$ have disjoint domains, we view them as relations and compose them by union.}
\begin{align}
\label{eq:completion:graph}
G''&= ((R\cup R'')\cup (M\cup M''),E\cup E'',s'')\ ,
\\\label{eq:completion:metabolites}
M''&=\{m\in M'\mid r\in R'', m\in\Reactants{r}\cup\Products{r}\}\ ,
\\\label{eq:completion:edges}
E''&=E'\cap((M''\times R'')\cup(R''\times M'')) , \text{ and}
\\
s''&=s\cup s'
\ .
\end{align}
We call $R''$ a \emph{completion} of $(R\cup M,E,s)$ from $(R'\cup M',E',s')$ wrt $S$ and $R_{T}$.
Our concept of activation allows different biological paradigms to be captured.
Accordingly,
different formulations of metabolic network completion can be characterized:
the stoichiometric, the relaxed stoichiometric, the topological, and the hybrid one.
We elaborate upon their formal characterizations in the following sections.
\subsection{Stoichiometric Metabolic Network Completion}\label{sec:stoichio}
The first activation semantics has been introduced in the context of Flux Balance Analysis
capturing reaction flux distributions of metabolic networks at steady state.
In this paradigm, each reaction $r$ is associated with a \emph{metabolic flux value},
expressed as a real variable $v_r$ confined by the minimum and maximum rates:
\begin{align} \label{eq:stoichiometric:bounds}
& \MinFlux{r} \leq v_r\leq \MaxFlux{r} \qquad\text{ for } r\in R.
\end{align}
Flux distributions are formalized in terms of a system of equations relying on the stoichiometric coefficients of reactions.
\review{Reaction stoichiometries are governed by the \emph{law of mass conservation} under a steady state assumption; in other words, the mass of the system remains constant over the reaction.
The input and output fluxes of reactions consuming and producing a metabolite are balanced.}
\begin{align}
\label{eq:stoichiometric:equation}
& \textstyle
\sum_{\substack{r\in R}}\Stoichiometry{r}{m}\cdot v_r
+
\sum_{\substack{r\in R}}-\Stoichiometry{m}{r}\cdot v_r
=
0
\qquad \text{ for } m\in M.
\end{align}
Given a target reaction $r_T\in R_T$, a metabolic network $G=(R\cup M,E,s)$ and a set of seeds $S$,
\emph{stoichiometric activation} is defined as follows:
\begin{align}
\label{eq:stoichiometric:activation}
r_T \in \Activity{s}{G}{S} & \ \text{ iff } \ v_{r_T} >0 \text{ and }
\eqref{eq:stoichiometric:bounds} \text{ and } \eqref{eq:stoichiometric:equation}\text{ hold for }M\text{ and }R.
\end{align}
Note that the condition $v_{r_T} >0$ strengthens the flux condition for $r_T\in R$ in the second part.
More generally, observe that activated target reactions are not directly related to the network's seeds $S$.
However,
the activation of targets highly depends on the boundary compounds in $S_{b}\,(G)$
for which \eqref{eq:stoichiometric:equation} \review{is always satisfied and thus initiates the fluxes.
Since boundary compounds are produced by at least one reaction without prerequisite,
an arbitrary amount might be produced.
Therefore, the incoming flux value always balances the sum of the flux values associated with outgoing edges.
Intuitively, boundary compounds are nutrients that are expected to be available in the system
for the consumption by the metabolic network,
thus initiating the reactions within.}
In our draft network $G$,
consisting of all \review{non-dashed} nodes and edges depicted in Fig.~\ref{gra:toy}
(viz.\ reactions \ensuremath{r_{s_{1}}}, \ensuremath{r_{s_{2}}}, \ensuremath{r_e}{} and $r_0$ to $r_5$ and compounds $A,\dots,F$, $S_1$, $S_2$, and $S_3$ and $r_5$ the single target reaction)
and the reference network $G'$,
consisting of the shaded part of Fig.~\ref{gra:toy},
(viz.\ reactions $r_6$ to $r_9$ and metabolite $G$)
a strict stoichiometry-based completion aims to obtain a solution with $r_5\in\Activity{s}{G''}{\{S_1,S_2,S_3\}}$ where $v_{r_5}$ is maximal.
This can be achieved by adding the completion $R''_1=\{r_{6},r_9\}$ (Fig.~\ref{gra:toy_ss}).
\review{The cycle made of compounds $E,C,D$ and the boundary seed $S_2$ is already balanced and, notably, self-activated.
Indeed, initiating the producibility of $D$ and $E$ requires the producibility of $C$ (in addition to the presence of the boundary seed $S_2$), which itself depends on $D$ and $E$. Yet, according to the flux conditions, which model the steady state, the cycle is activated.
Such self-activation of cyclic pathways is an inherent problem of purely stoichiometric approaches to network completion.
This is a drawback of the semantics because the effective activation of the cycle requires the additional (and unchecked) condition that at least one of the compounds was present in the initial state of the system. This could be the case provided there exists another way to enable the production of one or several components of the cycle (here, for instance, an activable reaction producing $E$) \citep{Prigent2017}.}
The instance of Equation~\eqref{eq:stoichiometric:equation} controlling the reaction rates related to metabolite $C$ is
\(
2\cdot v_{r_4} - v_{r_2} - v_{r_5} = 0
\).
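In the linear-constraint theory language sketched earlier, this balance can be written directly as a constraint atom; the snippet below is a sketch of ours (flux variables rendered as terms \lstinline{v(r)}, assuming the theory grammar admits equality comparisons), not the literal \sysfont{fluto}\ encoding:
\begin{lstlisting}
% mass balance for metabolite C: 2*v(r4) - v(r2) - v(r5) = 0
&sum{ 2*v(r4) ; -1*v(r2) ; -1*v(r5) } = 0.
\end{lstlisting}
The relaxed variant introduced below would replace \lstinline{=} by \lstinline{>=} in such atoms.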
To solve metabolic network completion with flux-balance activated reactions,
Linear Programming can be used to maximize the flux rate $v_{r_T}$ provided that the linear constraints are satisfied.
Nonetheless, this problem turns out to be hard to solve in practice and existing approaches scale poorly to real-life applications (cf.~\citep{Orth2010}).
This motivated the use of approximate methods.
The relaxed problem is obtained by weakening the mass-balance equation \eqref{eq:stoichiometric:equation} as follows:
\begin{align}
\label{eq:stoichiometric:equation:relaxed}
& \textstyle \sum_{\substack{r\in R}} \Stoichiometry{r}{m}\cdot v_r
+
\sum_{\substack{r\in R}}-\Stoichiometry{m}{r}\cdot v_r
\geq
0
\qquad \text{ for } m\in M.
\end{align}
This lets us define the concept of \emph{relaxed stoichiometric activation}:
\begin{align}
\label{eq:stoichiometric:activation:relaxed}
r_T \in \Activity{r}{G}{S} & \ \text{ iff } \ v_{r_T} >0 \text{ and }
\eqref{eq:stoichiometric:bounds} \text{ and } \eqref{eq:stoichiometric:equation:relaxed}\text{ hold for }M\text{ and }R.
\end{align}
The resulting problem can now be efficiently solved with Linear Programming~\citep{SatishKumar2007}.
Existing systems addressing strict stoichiometric network completion either
cannot guarantee optimal solutions~\citep{laten2014a} or
do not support a focus on specific target reactions~\citep{Thiele2014}.
Other approaches either partially relax the problem~\citep{Vitkin2012} or
solve the relaxed problem based on Equation~\eqref{eq:stoichiometric:equation:relaxed},
like the popular system \sysfont{gapfill}~\citep{SatishKumar2007}. Applied to the network of Fig.~\ref{gra:toy}, the minimal completion under relaxed stoichiometric activation is $\{r_{6}\}$ (Fig.~\ref{gra:toy_sr}); it does not carry flux, however, because of the accumulation of metabolite $G$, which is allowed by Equation~\eqref{eq:stoichiometric:equation:relaxed}.
Note however that for strict steady-state modeling an \textit{a posteriori} verification of solutions is needed
to warrant the exact mass-balance equation~\eqref{eq:stoichiometric:equation}.
\begin{figure}
\captionsetup{width=0.45\textwidth}
\centering
\begin{minipage}[t]{.5\textwidth}
\centering
\input{toypic_sol_s}
\caption{Solution to metabolic network completion under stoichiometric activation hypothesis in order to satisfy Equations~\eqref{eq:stoichiometric:bounds},~\eqref{eq:stoichiometric:equation} and ~\eqref{eq:stoichiometric:activation}. Within this network, there exists at least one flux distribution which activates $r_5$.}
\label{gra:toy_ss}
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\input{toypic_sol_sr}
\caption{Solution to metabolic network completion under relaxed stoichiometric activation hypothesis in order to satisfy Equations~\eqref{eq:stoichiometric:bounds},~\eqref{eq:stoichiometric:equation:relaxed} and~\eqref{eq:stoichiometric:activation:relaxed}. Notice that within this completed network, there exists no flux distribution allowing reaction $r_5$ to be activated.}
\label{gra:toy_sr}
\end{minipage}
\end{figure}
\subsection{Topological Metabolic Network Completion}\label{sec:topo}
A qualitative approach to metabolic network completion relies on the topology of networks for capturing the activation of reactions.
Given a metabolic network $G$, a reaction $r\in R$ is \emph{activated} from a set of seeds $S$ if all reactants in $\Reactants{r}$ are reachable from~$S$.
Moreover, a metabolite $m\in M$ is \emph{reachable} from $S$ if %
$m\in S$
or if
$m\in\Products{r}$ for some reaction $r\in R$ where all $m'\in\Reactants{r}$ are reachable from~$S$.
The \emph{scope} of $S$, written $\Sigma_G(S)$, is the closure of compounds reachable from~$S$.
In this setting, \emph{topological activation} of reactions from a set of seeds $S$ is defined as follows:
\begin{align}
r_T \in \Activity{t}{G}{S} \ \text{ iff } \ \Reactants{r_T} \subseteq \Sigma_G(S). \label{eq:topological:activation}
\end{align}
Note that this semantics avoids self-activated cycles by imposing an external entry sufficient to initiate all cycles ($S_3$ alone is not enough to activate the cycle as it does not activate any of its reactions on its own).
The resulting network completion problem can be expressed as a combinatorial optimization problem and effectively solved with ASP~\citep{schthi09a}.
For illustration, consider again the draft and reference networks $G$ and $G'$ in Fig.~\ref{gra:toy_d} and Fig.~\ref{gra:toy}.
We get $\Sigma_{G}(\{S_1,S_2,S_3\})=\{S_1,S_2,S_3,B\}$, indicating that target reaction $r_5$ is not activated from the seeds with the draft network
because $A$ and $C$, its reactants, are not reachable.
This changes once the network is completed.
Valid minimal completions are $R''_2=\{r_6,r_7\}$ (Fig.~\ref{gra:toy_st1}) and $R''_3=\{r_6,r_8\}$ (Fig.~\ref{gra:toy_st2}) because
\(
r_5\in\Activity{t}{G''_i}{\{S_1,S_2\}}\mbox{ since }\{A,C\}\subseteq\Sigma_{G''_i}(\{S_1,S_2\})
\)
for all extended networks $G''_i$ obtained from completions $R''_i$ of $G$ for $i\in\{2,3\}$.
\begin{figure}
\captionsetup{width=0.45\textwidth}
\centering
\begin{minipage}[t]{.5\textwidth}
\centering
\input{toypic_sol_t1}
\caption{First solution to metabolic network completion under topological activation hypothesis satisfying Equation~\eqref{eq:topological:activation}. The production of C cannot be explained by a self-activated cycle and requires an external source of compounds via $S_3$ and reaction $r_7$.}
\label{gra:toy_st1}
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\input{toypic_sol_t2}
\caption{Second solution to metabolic network completion under topological activation hypothesis satisfying Equation~\eqref{eq:topological:activation}.}
\label{gra:toy_st2}
\end{minipage}
\end{figure}
Relevant elements from the reference network are given in dashed gray.
\subsection{Hybrid Metabolic Network Completion}\label{sec:hybrid}
The idea of hybrid metabolic network completion is to combine the two previous activation semantics:
the topological one accounts for a well-founded initiation of the system from the seeds
and the stoichiometric one warrants its mass-balance.
We thus aim at network completions that are both topologically functional and flux balanced
(without suffering from self-activated cycles).
More precisely,
a reaction $r_T\in R_T$ is \emph{hybridly activated} from a set $S$ of seeds in a network $G$,
if both criteria apply:
\begin{align}
\label{eq:hybrid:activation}
r_T \in \Activity{h}{G}{S} \ \text{ iff } \ r_T \in \Activity{s}{G}{S}\text{ and }r_T \in \Activity{t}{G}{S}.
\end{align}
Applying this to our example in Fig.~\ref{gra:toy},
we get the (minimal) hybrid solutions $R''_4=\{r_6,r_7,r_{9}\}$ (Fig.~\ref{gra:toy_sh1}) and $R''_5=\{r_6,r_8,r_{9}\}$ (Fig.~\ref{gra:toy_sh2}).
Both (topologically) initiate paths of reactions from the seeds to the target,
i.e.\ $r_5\in\Activity{t}{G''_i}{\{S_1,S_2,S_3\}}\mbox{ since }\{A,C\}\subseteq\Sigma_{G''_i}(\{S_1,S_2,S_3\})$
for both extended networks $G''_i$ obtained from completions $R''_i$ of $G$ for $i\in\{4,5\}$.
Both solutions are as well stoichiometrically valid and balance the amount of every metabolite,
hence we also have $r_5\in\Activity{s}{G''_i}{\{S_1,S_2,S_3\}}$.
\begin{figure}
\captionsetup{width=0.45\textwidth}
\centering
\begin{minipage}[t]{.50\textwidth}
\centering
\input{toypic_sol_h1}
\caption{First solution to metabolic network completion under hybrid activation hypothesis satisfying Equation~\eqref{eq:hybrid:activation} (that is Equations~\eqref{eq:stoichiometric:bounds},~\eqref{eq:stoichiometric:equation}, ~\eqref{eq:stoichiometric:activation} and ~\eqref{eq:topological:activation}).}
\label{gra:toy_sh1}
\end{minipage}%
\begin{minipage}[t]{.50\textwidth}
\centering
\input{toypic_sol_h2}
\caption{Second solution to metabolic network completion under hybrid activation hypothesis satisfying Equation~\eqref{eq:hybrid:activation} (that is Equations~\eqref{eq:stoichiometric:bounds},~\eqref{eq:stoichiometric:equation}, ~\eqref{eq:stoichiometric:activation} and ~\eqref{eq:topological:activation}).}
\label{gra:toy_sh2}
\end{minipage}
\end{figure}
\subsection{Union of Metabolic Network Completions}\label{sec:union}
As depicted in the toy examples for the topological (Fig.~\ref{gra:toy_st1} and Fig.~\ref{gra:toy_st2}) and hybrid (Fig.~\ref{gra:toy_sh1} and
Fig.~\ref{gra:toy_sh2}) activation, several minimal solutions to one metabolic network completion problem may exist.
There might be dozens of minimal completions, depending on the degradation of the original draft network,
making it difficult for biologists and bioinformaticians to discriminate among the individual results.
One solution to facilitate this curation task is to provide, in addition to the enumeration of solutions, their union.
This has been done previously for the topological completion \citep{Prigent2017}.
Notably, the concept of ``union of solutions'' is particularly relevant from the biological perspective since it provides in a single view all possible reactions that could be inserted in a solution to the network completion problem.
Additionally, verifying the union according to the desired (stoichiometric and hybrid) activation semantics,
offers a way to analyze the quality of approximation methods (topological and relaxed-stoichiometric ones).
If individual solutions contradict a definition of activation that the union satisfies, it suggests that the family of reactions contained in the union, although possibly non-minimal, may be of interest, thus lending merit to the approximation methods and their results.
Importantly, we notice that the operation of performing the union of solutions is stable with respect to the concept of activation, although it can contradict the minimality of the size of the completion.
Indeed, the union of solutions to the topological network completion problem is itself a (non-minimal) solution to the topological completion problem.
Similarly, the union of minimal stoichiometric solutions always displays the stoichiometric activation of the target reaction(s).
In fact, adding an arbitrary set of reactions to a metabolic network still maintains stoichiometric activation,
since the fluxes of the newly added reactions may be set to zero.
Consequently, the union of minimal hybrid solutions always displays the hybrid activation of the target reaction(s).
The following theorems (Theorems ~\ref{th:topo}, ~\ref{th:flux} and ~\ref{th:hybr}) are a formalization of the stability of the union of solutions with respect to the three concepts of activation.
The union $G=G_1\cup G_2$ of two metabolic networks $G_1=(R_1\cup M_1,E_1,s_1)$
and $G_2=(R_2\cup M_2,E_2,s_2)$ is defined by
\begin{align}
G &= (R\cup M, E, s), \label{eq:graph.union1}\\
R &= R_1\cup R_2, \label{eq:graph.union2}\\
M &= M_1\cup M_2, \label{eq:graph.union3}\\
E &= E_1\cup E_2, \label{eq:graph.union4}\\
s &= s_1\cup s_2. \label{eq:graph.union5}
\end{align}
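A minimal sketch of this union operation, under an assumed tuple-of-sets encoding of networks (and assuming the two stoichiometry functions agree on shared edges), reads:
\begin{verbatim}
# Sketch: union of two networks, following the equations above.
# A network is (reactions, metabolites, edges, stoich), where stoich
# maps edges to coefficients; s1 and s2 are assumed to agree on
# shared edges, so merging the dictionaries realizes s = s1 u s2.
def union(n1, n2):
    r1, m1, e1, s1 = n1
    r2, m2, e2, s2 = n2
    return (r1 | r2, m1 | m2, e1 | e2, {**s1, **s2})
\end{verbatim}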
\begin{theorem}\label{th:topo}
Let $G_1$ and $G_2$ be metabolic networks.
If $R_T\subseteq\Activity{t}{G_1}{S}$, then $R_T\subseteq\Activity{t}{G_1\cup G_2}{S}$.
\end{theorem}
\begin{proof}
The proof follows from the monotonicity of the union and the monotonicity of the closure.
Thus it can never be the case that having more reactions disables reachability.
More formally,
$R_T\subseteq\Activity{t}{G_1}{S}$ holds iff
$\Reactants{r_T}\subseteq\Sigma_{G_1}(S)$.
Furthermore, we have
$\Sigma_{G_1}(S)\subseteq \Sigma_{G_1\cup G_2}(S)$
by the definition of the closure.
This implies
$\Reactants{r_T}\subseteq\Sigma_{G_1\cup G_2}(S)$.
Finally, we have
$R_T\subseteq\Activity{t}{G_1\cup G_2}{S}$.
\end{proof}
\begin{theorem}\label{th:flux}
Let $G_1$ and $G_2$ be metabolic networks.
If $R_T\subseteq\Activity{s}{G_1}{S}$, then $R_T\subseteq\Activity{s}{G_1\cup G_2}{S}$.
\end{theorem}
\begin{proof}
First, we define the following bijective functions
\begin{align*}
f:&R_1 \rightarrow \{1,\dots,l\}\subseteq\mathbb{N}, \\
& r \mapsto f(r)=i \\
g:&M_1 \rightarrow \{1,\dots,k\}\subseteq\mathbb{N}, \\
& m \mapsto g(m)=j \\
f':&R_1\cup R_2 \rightarrow \{1,\dots,l'\}\subseteq\mathbb{N}, \\
& r \mapsto f'(r)=
\begin{cases}
f(r) &, \text{ if $f(r)$ is defined} \\
i &, \text{ otherwise}
\end{cases} \\
g':&M_1\cup M_2 \rightarrow \{1,\dots,k'\}\subseteq\mathbb{N} \\
& m \mapsto g'(m)=
\begin{cases}
g(m) &, \text{ if $g(m)$ is defined} \\
j &, \text{ otherwise}
\end{cases}
\end{align*}
for $k=|M_1|$, $l=|R_1|$, $k'=|M_1\cup M_2|$ and $l'=|R_1\cup R_2|$ regarding $G_1$ and $G_1\cup G_2$, respectively.
Now, we rewrite the system of (\ref{eq:stoichiometric:equation}) regarding $G_1$ as a matrix equation $Av=0$ of form
\begin{align*}
\begin{pmatrix}
a_{11} & \dots & a_{1l} \\
\vdots & \ddots & \vdots \\
a_{k1} & \dots & a_{kl}
\end{pmatrix}
\begin{pmatrix}
v_1 \\
\vdots \\
v_l
\end{pmatrix}
=
\begin{pmatrix}
0 \\
\vdots \\
0
\end{pmatrix}\label{eq:matrix}
\end{align*}
where $A$ is a $k\times l$ matrix with coefficients
\begin{align*}
a_{g(m)f(r)}=
\begin{cases}
s_1(r,m) &, (r,m)\in E_1 \\
-s_1(m,r) &, (m,r)\in E_1 \\
0 &, \text{ otherwise}
\end{cases}
\end{align*}
and $v$ consists of variables $v_{f(r)}$ for $r\in R_1$.
By $L=\{v\mid Av=0\}$ we denote the set of solutions induced by $Av=0$.
Furthermore, we represent the system of linear equations of (\ref{eq:stoichiometric:equation}) regarding $G_1\cup G_2$ as a matrix equation $A'v'=0$ of form
\begin{align*}
\begin{pmatrix}
a_{11} & \dots & a_{1l} & a_{1l+1} & \dots & a_{1l'} \\
\vdots & \ddots& \vdots & \vdots & \ddots& \vdots \\
a_{k1} & \dots & a_{kl} & a_{kl+1} & \dots & a_{kl'} \\
0 & \dots & 0 & a_{k+1l+1} & \dots & a_{k+1l'} \\
\vdots & \ddots& \vdots & \vdots & \ddots& \vdots \\
0 & \dots & 0 & a_{k'l+1}& \dots & a_{k'l'}
\end{pmatrix}
\begin{pmatrix}
v_1 \\
\vdots \\
v_l \\
v_{l+1} \\
\vdots \\
v_{l'}
\end{pmatrix}
=
\begin{pmatrix}
0 \\
\vdots \\
0
\end{pmatrix}
\end{align*}
where $A'$ is a $k'\times l'$ matrix with coefficients
\begin{align*}
a_{g'(m)f'(r)}=
\begin{cases}
s(r,m) &, (r,m)\in E_1\cup E_2 \\
-s(m,r) &, (m,r)\in E_1\cup E_2 \\
0 &, \text{ otherwise}
\end{cases}
\end{align*}
where $s=s_1\cup s_2$
and $v'$ consists of variables $v_{f'(r)}$ of (\ref{eq:stoichiometric:equation}) for $r\in R_1\cup R_2$.
Note that $A'$ can always be written in this form, since switching columns and rows will not change solutions.
By $L'=\{v'\mid A'v'=0\}$ we denote the set of solutions induced by $A'v'=0$.
Since $A'v'=0$ is homogeneous, $L\subseteq L'$ holds by extending $L$ with zeros for $v_{f'(r)}$ with $r\in R_2\setminus R_1$.
Thus $\{v\mid v\in L, \forall r_T\in R_T, v_{f(r_T)}>0\}
\subseteq\{v\mid v\in L', \forall r_T\in R_T, v_{f'(r_T)}>0\}$
by extending the first set with zeros for $v_{f'(r)}$ with $r\in R_2\setminus R_1$.
From $R_T\subseteq\Activity{s}{G_1}{S}$, we know that the homogeneous system of linear equations from
(\ref{eq:stoichiometric:equation}) regarding $G_1$ is non-trivially satisfiable,
which finally implies that $R_T\subseteq\Activity{s}{G_1\cup G_2}{S}$.
\end{proof}
\begin{theorem}\label{th:hybr}
Let $G_1$ and $G_2$ be metabolic networks.
If $R_T\subseteq\Activity{h}{G_1}{S}$, then $R_T\subseteq\Activity{h}{G_1\cup G_2}{S}$.
\end{theorem}
\begin{proof}
Follows directly by the definition of hybrid activation together with Theorem~\ref{th:topo} and Theorem~\ref{th:flux}.
More formally,
$R_T\subseteq\Activity{h}{G_1}{S}$
holds iff
$R_T\subseteq\Activity{t}{G_1}{S}$ and $R_T\subseteq\Activity{s}{G_1}{S}$.
From Theorem~\ref{th:topo} and $R_T\subseteq\Activity{t}{G_1}{S}$ follows
$R_T\subseteq\Activity{t}{G_1\cup G_2}{S}$.
Analogously, from Theorem~\ref{th:flux} and $R_T\subseteq\Activity{s}{G_1}{S}$ follows
$R_T\subseteq\Activity{s}{G_1\cup G_2}{S}$.
Finally, this implies
$R_T\subseteq\Activity{h}{G_1\cup G_2}{S}$.
\end{proof}
In particular, studying the union in the case of topological modeling can pinpoint interesting cases.
Individual solutions satisfying the topological activation can additionally satisfy the stoichiometric and thus the hybrid activation semantics.
A union including such a solution will also adhere to the hybrid semantics.
In some cases, the union of solutions will display the stoichiometric activation whereas the individual solutions only satisfy the topological activation.
Fig.~\ref{gra:union_nf_f1} to Fig.~\ref{gra:union_nf_f} display an example of topological metabolic network completions that do not satisfy stoichiometric (and hybrid) activation whereas their union does.
Fig.~\ref{gra:union_nf_nf1} to Fig.~\ref{gra:union_nf_nf} provide an example of minimal topological completions that do not satisfy stoichiometric (and hybrid) activation and for which the union does not satisfy it either.
Both observations imply that, in general, we cannot derive anything about the activation of reactions in a graph resulting from the union of two or more graphs.
Similarly, we cannot infer anything about the activation of reactions in subgraphs arbitrarily derived from a graph in which these reactions are activated.
\begin{figure}
\captionsetup{width=0.3\textwidth}
\centering
\begin{minipage}[t]{.32\textwidth}
\input{union_of_no_flux_has_flux1}
\caption{Topological completion $R_1=\{r_2\}$ satisfies $r_4\in\Activity{t}{G_1}{\{S\}}$, but carries no flux, due to accumulation of compound $B$ that contradicts Eq.~\ref{eq:stoichiometric:equation}.\label{gra:union_nf_f1}}
\end{minipage}
\begin{minipage}[t]{.32\textwidth}
\input{union_of_no_flux_has_flux2}
\caption{Topological completion $R_2=\{r_3\}$ satisfies $r_4\in\Activity{t}{G_2}{\{S\}}$ and carries no flux as well, due to accumulation of compound $A$ that contradicts Eq.~\ref{eq:stoichiometric:equation}.\label{gra:union_nf_f2}}
\end{minipage}
\begin{minipage}[t]{.32\textwidth}
\input{union_of_no_flux_has_flux}
\caption{Completion with the union $R_1\cup R_2=\{r_2,r_3\}$. $G=G_1\cup G_2$ satisfies $r_4\in\Activity{h}{G}{\{S\}}$ and thus is flux-balanced.
\label{gra:union_nf_f}}
\end{minipage}
\end{figure}
\begin{figure}
\captionsetup{width=0.3\textwidth}
\centering
\begin{minipage}[t]{.32\textwidth}
\input{union_of_no_flux_has_no_flux1}
\caption{Topological completion $R_1=\{r_2\}$ satisfies $r_4\in\Activity{t}{G_1}{\{S\}}$, but carries no flux, due to accumulation of compound $B$ that contradicts Eq.~\ref{eq:stoichiometric:equation}.\label{gra:union_nf_nf1}}
\end{minipage}
\begin{minipage}[t]{.32\textwidth}
\input{union_of_no_flux_has_no_flux2}
\caption{Topological completion $R_2=\{r_3\}$ satisfies $r_4\in\Activity{t}{G_2}{\{S\}}$, but carries no flux, due to accumulation of compounds $A$ and $E$ that contradicts Eq.~\ref{eq:stoichiometric:equation}.\label{gra:union_nf_nf2}}
\end{minipage}
\begin{minipage}[t]{.32\textwidth}
\input{union_of_no_flux_has_no_flux}
\caption{Completion with the union $R_1\cup R_2=\{r_2,r_3\}$. $G=G_1\cup G_2$ satisfies $r_4\in\Activity{t}{G}{\{S\}}$, but contradicts minimality and carries no flux $r_4\not \in\Activity{s}{G}{\{S\}}$, due to accumulation of compound $E$ that contradicts Eq.~\ref{eq:stoichiometric:equation}.\label{gra:union_nf_nf}}
\end{minipage}
\end{figure}
\section{Introduction}
\label{sec:intro}
Matchings are one of the most fundamental and best studied notions in graph theory, see \citet{lovasz2009matching} and \citet{schrijver2003combinatorial} for an overview.
Recently, \citet{baste2018temporal} and \citet{MMNZZ} studied matchings in temporal graphs.
A \emph{temporal graph} $\TG = (V,(E_t)_{t=1}^\lifetime)$ consists of a set $V$ of vertices and an ordered list of $\tau$
edge sets $E_1,E_2,\dots,E_\tau$.
A tuple $(e,t)$ is a \emph{time edge} of $\mathcal G$ if $e \in E_t$ for some $t \in \{ 1,2,\dots,\tau \}$.
Two \emph{time edges} $(e,t)$ and $(e',t')$ are \emph{$\Delta$-independent}
whenever the edges $e,e'$ do not share an endpoint or
their time labels $t,t'$ are at least~$\Delta$ time units apart
from each other (that is, $|t -t'| \geq \Delta$).\footnote{Throughout the paper, $\Delta$ always refers to this number,
and never to the maximum degree of a static graph (which is another common use of $\Delta$).}
A~\emph{$\Delta$-temporal matching} $M$ of a temporal graph $\mathcal G$ is a set of
time edges of $\mathcal G$ which are pairwise $\Delta$-independent.
This leads naturally to the following decision problem, introduced by \citet{MMNZZ}.
\problemdef{Temporal Matching}
{A temporal graph $\TG = (V,(E_t)_{t=1}^\lifetime)$ and integers $k,\Delta \in \mathbb N$.}
{Is there a size-$k$ $\Delta$-temporal matching in $\mathcal G$?}
Without loss of generality, we assume that $\Delta \leq \tau$.
While \textsc{Temporal Matching} is polynomial-time solvable if the temporal graph has $\tau \leq 2$ layers,
it becomes NP-hard, even if $\tau=3$ and $\Delta=2$ \cite{MMNZZ}.
Driven by this NP-hardness,
\citet{MMNZZ} showed an FPT-algorithm for \textsc{Temporal Matching},
when parameterized by $\Delta$ and the maximum matching size of
the \emph{underlying graph} $G_{\downarrow}(\mathcal G) :=(V,\bigcup_{i=1}^\tau E_i)$ of
the input temporal graph~$\TG = (V,(E_t)_{t=1}^\lifetime)$.
On a historical note, one has to mention that \citet{baste2018temporal}
introduced temporal matchings in a slightly different way.
The main difference to the model of \citet{MMNZZ} which we also adopt here is that the model of \citet{baste2018temporal}
requires edges to exist in at least $\Delta$ consecutive time steps in order for them to be eligible for a matching.
However, with little preprocessing an instance of the model of \citet{baste2018temporal} can be reduced to our model and
the algorithmic ideas presented by \citet{baste2018temporal} apply as well.
Notably, there is also the related problem
\textsc{Multistage Matchings}:
this is a radically different way to lift
the notion of matchings into the temporal setting.
Here, we are given a temporal graph $\TG = (V,(E_t)_{t=1}^\lifetime)$
and we want to find a perfect (or maximum) matching for each \emph{layer} $(V,E_i)$ such that the symmetric differences of matchings for consecutive layers are small \cite{gupta2014changing,chimani2020approximating,heeger2019multistage,bampis2018multistage}.
In this paper, we consider the vertex cover number\footnote{That is, the minimum number of vertices needed to cover all edges of a graph.}
to \emph{measure} the \emph{width} of local sections (that is, $\Delta$ consecutive layers) in temporal graphs.
We call this the~\emph{$\Delta$-vertex cover number} of a temporal graph~$\TG = (V,(E_t)_{t=1}^\lifetime)$.
Intuitively, this is the minimum number of vertices which we need to hit (or cover)
all edges in any $\Delta$ consecutive layers of the temporal graph.
Formally, the $\Delta$-vertex cover number of $\mathcal G$ is the minimum number $\nu$ such that
for all~$i \in [\tau-\Delta+1]$ there is a $\nu$-size vertex set $S$
such that all edges in $\bigcup_{t=i}^{i+\Delta-1} E_t$ are incident to at least one vertex in $S$.
Observe that the $\Delta$-vertex cover number{} can be smaller but not larger than the smallest sliding $\Delta$-window vertex cover, see \citet{akrida2020temporal} for details.
Note that also \textsc{Vertex Cover} has been studied in the multistage setting
\cite{DBLP:conf/iwpec/FluschnikNRZ19}.
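To make the parameter concrete, the following sketch computes an upper bound on the $\Delta$-vertex cover number by running the classical maximal-matching 2-approximation on the union of every $\Delta$ consecutive layers; the list-of-edge-sets encoding is an assumption for illustration.
\begin{verbatim}
# Sketch: upper bound on the Delta-vertex cover number via the classic
# maximal-matching 2-approximation on each Delta-window (assumed
# encoding: layers is a list of edge sets, edges are 2-element
# frozensets).
def vc_2approx(edges):
    cover, covered = set(), set()
    for e in edges:                  # greedy maximal matching
        if not (e & covered):
            cover |= e
            covered |= e
    return cover

def delta_vc_upper_bound(layers, delta):
    best = 0
    for i in range(len(layers) - delta + 1):
        window = set().union(*layers[i:i + delta])
        best = max(best, len(vc_2approx(window)))
    return best
\end{verbatim}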
It is easy to check that the running time of the algorithm for \textsc{Temporal Matching} of
\citet{MMNZZ} can be upper-bounded by $2^{O(\nu\Delta)}\cdot |\mathcal G|^{O(1)}$,
where $\nu$ is the $\Delta$-vertex cover number{} of $\mathcal G$.
This paper contributes an improved algorithm for \textsc{Temporal Matching}
with a running time of $\Delta^{O(\nu)}\cdot |\mathcal G|$.
Hence, this is an exponential speedup in terms of $\Delta$ compared to the algorithm of \citet{MMNZZ}.
Before we describe the details of the algorithm in \cref{sec:algo},
we introduce further basic notations in the next section.
\section{Preliminaries}
We denote by $\log(x)$ the ceiling of the binary logarithm of $x$ ($\lceil \log_2(x)\rceil$).
A $p$-family is a family of sets where each set has size $p$.
We refer to a set of consecutive natural numbers $[i,j] := \{ k \in \mathbb N \mid i \leq k \leq j\}$ for some $i,j \in \mathbb N$ as an \emph{interval}.
If~$i=1$, then we denote $[i,j]$ simply by $[j]$.
The \emph{neighborhood} of a vertex~$v$ and a vertex set $X$ in a graph $G=(V,E)$ is
denoted by $N_G(v) := \{ u \in V \mid \{v,u\} \in E \}$ and $N_G(X) := \left(\bigcup_{v \in X} N_G(v)\right) \setminus X$, respectively.
The \emph{lifetime} of a temporal graph $\TG = (V,(E_t)_{t=1}^\lifetime)$ is~$\tau$.
The \emph{size} of a temporal graph $\TG = (V,(E_t)_{t=1}^\lifetime)$ is~$|\mathcal G| := |V|+\sum_{t=1}^{\tau}|E_{t}|$.
Furthermore, in accordance with the literature~\cite{wu2016efficient,zschocheFMN18},
we assume that the lists of labels are given in ascending order.
The \emph{set of time edges} $\mathcal E(\mathcal G)$ of a temporal graph~$\TG = (V,(E_t)_{t=1}^\lifetime)$ is defined as $\{ (e,t) \mid e \in E_t \}$.
A pair $(v,t)$ is a \emph{vertex appearance} in a temporal graph $\TG = (V,(E_t)_{t=1}^\lifetime)$ of $v$ at time $t$ if $v \in V$ and $t \in [\tau]$.
A time edge $(e,t)$ \emph{$\Delta$-blocks} a vertex appearance~$(v,t')$
(or $(v,t')$ is \emph{$\Delta$-blocked} by~$(e,t)$)
if $v \in e$ and $|t - t'| \leq \Delta -1$.
For a time edge set $\mathcal E$ and integers~$a$ and~$b$,
we denote by $\mathcal E[a,b] := \{ (e,t) \in \mathcal E \mid a \leq t \leq b \}$ the subset of $\mathcal E$ between the time steps $a$ and $b$.
Analogously, for a temporal graph $\TG = (V,(E_t)_{t=1}^\lifetime)$ we denote by~$\mathcal G[a,b]$ the temporal graph on the vertex set $V$
with the time edge set $\mathcal E(\mathcal G)[a,b]$.
A \emph{parameterized problem} is a language $L\subseteq \Sigma ^{*}\times \mathbb {N}$, where $\Sigma$ is a finite alphabet.
The second component is called the parameter of the problem.
A parameterized problem $L$ is \emph{fixed-parameter tractable}
if we can decide in $f(k)\cdot |x|^{O(1)}$ time
whether a given instance $(x,k)$ is in $L$, where $f$ is an arbitrary function depending only on $k$.
An algorithm is an FPT-algorithm for parameter~$k$ if its running time is upper-bounded by $f(k)\cdot n^{O(1)}$,
where $n$ is the input size and $f$ is a computable function depending only on $k$.
\section{The Algorithm}
\label{sec:algo}
\citet{MMNZZ} provided a $2^{O(\Delta\nu)} |\mathcal G|^{O(1)}$-time algorithm for \textsc{Temporal Matching}.
We now develop an improved algorithm which runs in $\Delta^{O(\nu)} |\mathcal G|$.
Formally, we show the following.
\begin{theorem}
\label{thm:fpt-for-vc-delta}
\textsc{Temporal Matching} can be solved
in $\Delta^{O(\nu)}\cdot |\mathcal G|$ time, where $\nu$ is the $\Delta$-vertex cover number{} of $\mathcal G$.
\end{theorem}
The proof of \cref{thm:fpt-for-vc-delta} is deferred to the end of the section.
Formally, we solve the decision variant of \textsc{Temporal Matching} as it is defined in \cref{sec:intro}.
However, the algorithm actually computes the maximum size of a $\Delta$-temporal matching in a temporal graph, and
with a straightforward adjustment it can also output a $\Delta$-temporal matching of maximum size.
Similarly to the algorithm of \citet{MMNZZ},
the algorithm behind \cref{thm:fpt-for-vc-delta} works in three major steps:
\begin{enumerate}
\item\label{step1} Divide the temporal graph into disjoint $\Delta$-windows.
\item\label{step2} For each of these $\Delta$-windows compute a small family of $\Delta$-temporal matchings.
\item\label{step3} Based on the families of the Step \ref{step2},
by dynamic programming compute
the maximum size of a $\Delta$-temporal matching for the whole temporal graph.
\end{enumerate}
While Step~\ref{step1} is trivial and Step~\ref{step3} is similar to the algorithm of \citet{MMNZZ},
Step~\ref{step2} is where we provide new ideas leading to an improved overall running time.
In the next two subsections, we describe Step~\ref{step2} and Step~\ref{step3} in detail.
Afterwards, we put everything together and prove \cref{thm:fpt-for-vc-delta}.
\subsection{Step \ref{step2}: Families of $d$-complete $\Delta$-temporal matchings}
In a nutshell, the core of Step \ref{step2} consists of an iterative computation of
a small (bounded by $\Delta^{O(\nu)}$) family of $\Delta$-temporal matchings
for an arbitrary $\Delta$-window such that at least one of
them is ``extendable'' to a maximum $\Delta$-temporal matching for the whole temporal graph.
Let $\TG = (V,(E_t)_{t=1}^\lifetime)$ be a temporal graph of lifetime $\tau$, and let $d$ and $\Delta$ be two natural numbers such that $d\Delta \leq \tau$.
%
A family $\mathcal M$ of $\Delta$-temporal matchings is~\emph{$d$-complete for $\mathcal G$}
if for any $\Delta$-temporal matching $M$ of $\mathcal G$ there is an $M' \in \mathcal M$
such that $\big(M \setminus M{[\Delta(d-1)+1, \Delta d]}\big) \cup M'$ is a $\Delta$-temporal matching of $\mathcal G$ of
size at least $|M|$.
The central technical contribution of this paper is
a procedure to compute in~$\Delta^{O(\nu)} \cdot |\mathcal E(\mathcal G[\Delta(d-1)+1, \Delta d])|$ time such a \emph{$d$-complete} family $\mathcal M$ of size at most~$\Delta^{O(\nu)}$, where~$\nu$ is the $\Delta$-vertex cover number{} of $\mathcal G$.
Formally, we aim for the following theorem.
\begin{theorem}
\label{lem:d-complete-family}
Given two natural numbers $d,\Delta$
and a temporal graph $\mathcal G$ of lifetime at least $d\Delta$ and $\Delta$-vertex cover number{} $\nu$,
one can compute in $\Delta^{O(\nu)} \cdot |\mathcal E(\mathcal G[\Delta(d-1)+1,\Delta d])|$ time
a family of $\Delta$-temporal matchings which is $d$-complete for $\mathcal G$
and of size at most~$\Delta^{O(\nu)}$.
\end{theorem}
To this end, we define a binary tree where the leaves have a fixed ordering.
An order of the leaves of a rooted tree is in \emph{post order} if
a depth-first search traversal started at the root can visit the leaves in this order.
A \emph{postfix order} of the leaves of a rooted tree is an arbitrarily chosen fixed ordering which is in post order.
\begin{definition}
Let $\Delta \in \mathbb N$.
A \emph{$\Delta$-postfix tree} $T$ is a rooted full binary tree of depth at most $\log(\Delta)$ with $\Delta$ many leaves $v_i,i \in [\Delta]$, such that
$v_1,v_2,\dots,v_\Delta$ is the postfix order of the leaves.
\end{definition}
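A $\Delta$-postfix tree can be built by balanced recursive splitting; the following sketch (nested-tuple encoding, assumed for illustration) yields a full binary tree of depth at most $\log(\Delta)$ whose leaves appear in post order.
\begin{verbatim}
# Sketch: a Delta-postfix tree as nested tuples; a leaf is its index,
# an inner node a pair of subtrees.
def postfix_tree(lo, hi):
    if lo == hi:
        return lo
    mid = (lo + hi) // 2      # balanced split keeps depth <= log(Delta)
    return (postfix_tree(lo, mid), postfix_tree(mid + 1, hi))

def leaves(t):
    return [t] if isinstance(t, int) else leaves(t[0]) + leaves(t[1])

tree = postfix_tree(1, 8)     # Delta = 8, as in the figure below
assert leaves(tree) == list(range(1, 9))
\end{verbatim}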
Later, the algorithm will construct a $\Delta$-postfix tree $T_v$ for each vertex $v$ in the temporal graph.
Here, each leaf in $T_v$ represents a vertex appearance of $v$.
For example, the leaf $v_t$ represents the vertex appearance $(v,t)$.
To encode that vertex appearances of $v$ are $\Delta$-blocked until (or since) some point in time,
we will use specific ``separators''.
\begin{definition}
Let $T$ be a $\Delta$-postfix tree rooted at $v$ with leaves $v_1,v_2,\dots,v_\Delta$ in postfix order.
Then,
a \emph{$[a,b]$-separator} of $T$ is given by $S := N_T(\bigcup_{i \in [\Delta] \setminus [a,b]}V(P_i))$,
where $P_i$ is the $v_i$-$v$ path in $T$.
\end{definition}
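Building on the sketch above (reusing \texttt{postfix\_tree} and \texttt{leaves}), an $[a,b]$-separator can be computed by descending from the root and collecting the maximal subtrees whose leaves all lie in $[a,b]$; addressing nodes by their root paths is an assumed encoding, and $[a,b] \subset [\Delta]$ is required as in the definition.
\begin{verbatim}
# Sketch: computing an [a,b]-separator; nodes are addressed by their
# 0/1 paths from the root (assumes [a,b] is a proper subset of
# [1..Delta], so the root itself never qualifies).
def separator(tree, a, b, path=()):
    if isinstance(tree, int):            # leaf reached while recursing
        return set()
    sep = set()
    for child, step in ((tree[0], 0), (tree[1], 1)):
        if all(a <= i <= b for i in leaves(child)):
            sep.add(path + (step,))      # maximal subtree cut off
        else:
            sep |= separator(child, a, b, path + (step,))
    return sep

print(separator(postfix_tree(1, 8), 1, 3))   # {(0, 0), (0, 1, 0)}
\end{verbatim}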
We now consider an example, depicted in \cref{fig:fig1}, to develop some intuition on
$\Delta$-postfix trees and $[a,b]$-separators.
In \cref{fig:fig1}, we see an auxiliary graph,
which is constructed for some $\Delta$-window in a temporal graph with only two vertices $v$ and $u$, where $\Delta=8$.
For both vertices we constructed $\Delta$-postfix trees visualized by the dashed edges.
Moreover, there is an edge between $v$ and $u$ in the first, fifth, and sixth layer of this $\Delta$-window.
This is depicted by the straight edges $\{u_i,v_i\},i \in \{ 1,5,6\}$.
In our algorithm a path from~$u$~to~$v$ (the roots of our trees) represents a time edge between $u$ and $v$ in the $\Delta$-window.
The $[a,b]$-separators will represent that a vertex is blocked by some time edge outside of the $\Delta$-window.
In \cref{fig:fig1}, we look at the situation where the vertex~$u$ (vertex~$v$) is blocked
in the first three and last two (first four and last three)
layers of the $\Delta$-window.
To encode these blocks, we use a $[1,3]$-separator (blue/stars) and a $[7,8]$-separator (orange/triangle) in the $\Delta$-postfix tree of $u$ and
a $[1,4]$-separator (green/diamond) and a $[6,8]$-separator (red/rectangles) in the $\Delta$-postfix tree of $v$.
Note that paths from $u$ to $v$ not intersecting one of the separator vertices correspond to time edges
which can be taken into a matching even if~$u$~and~$v$ are blocked by some other time edges
outside of the $\Delta$-window, as depicted by the gray areas in \cref{fig:fig1}.
\begin{figure}
\begin{tikzpicture}[scale=1,yscale=1.3]
\foreach \i in {0,...,8} {
\draw[edge,dotted,gray] (\i,1) -- (\i+1,1) {};
\draw[edge,dotted,gray] (\i,2) -- (\i+1,2) {};
}
\foreach \i in {1,...,8} {
\node[vertex,label=$u_{\i}$] (w\i) at (\i,2) {};
\node[vertex,label=below:$v_{\i}$] (v\i) at (\i,1) {};
\node at (\i,-1) {$\i$};
}
\foreach \i in {1,...,4} {
\node[vertex] (w1\i) at (\i*2-0.5,2.5) {};
\draw[edge,dashed] (\i*2-0.5,2.5) -- (\i*2-1,2) {};
\draw[edge,dashed] (\i*2-0.5,2.5) -- (\i*2,2) {};
\node[vertex] (v1\i) at (\i*2-0.5,0.5) {};
\draw[edge,dashed] (\i*2-0.5,0.5) -- (\i*2-1,1) {};
\draw[edge,dashed] (\i*2-0.5,0.5) -- (\i*2,1) {};
}
\foreach \i in {1,...,2} {
\node[vertex] (w2\i) at (\i*4-1.5,3) {};
\node[vertex] (v2\i) at (\i*4-1.5,0) {};
}
\node[vertex,label=above:$u$] (w) at (4.5,3.5) {};
\node[vertex,label=above:$v$] (v) at (4.5,-0.5) {};
\draw[edge,dashed] (w) -- (w21) {};
\draw[edge,dashed] (w) -- (w22) {};
\draw[edge,dashed] (w11) -- (w21) {};
\draw[edge,dashed] (w12) -- (w21) {};
\draw[edge,dashed] (w13) -- (w22) {};
\draw[edge,dashed] (w14) -- (w22) {};
\draw[edge,dashed] (v) -- (v21) {};
\draw[edge,dashed] (v) -- (v22) {};
\draw[edge,dashed] (v11) -- (v21) {};
\draw[edge,dashed] (v12) -- (v21) {};
\draw[edge,dashed] (v13) -- (v22) {};
\draw[edge,dashed] (v14) -- (v22) {};
\node at (0-1,1) {$v$:};
\node at (0-1,2) {$u$:};
\node at (-.21-1,-1) {time:};
\node at (9.5,-1) {$(\Delta=8)$};
\draw[edge] (v1) -- (w1);
\draw[edge] (v5) -- (w5);
\draw[edge] (v6) -- (w6);
\draw[rounded corners,fill=gray,opacity=0.4] (9,2.15) to (6.7,2.15) to (6.7,1.85) to (9,1.85);
\draw[rounded corners,fill=gray,opacity=0.4] (0,2.15) to (3.3,2.15) to (3.3,1.85) to (0,1.85);
\draw[rounded corners,fill=gray,opacity=0.4] (0,1.15) to (4.3,1.15) to (4.3,0.85) to (0,0.85);
\draw[rounded corners,fill=gray,opacity=0.4] (9,1.15) to (5.7,1.15) to (5.7,0.85) to (9,0.85);
%
%
\node[vertex,rectangle,red,minimum size=5pt] at (4*2-0.5,0.5) {};
\node[vertex,rectangle,red,minimum size=5pt] at (3*2,1) {};
\node[vertex,diamond,green,minimum size=7pt] at (2.5,-0) {};
\node[vertex,star,blue,minimum size=7pt] at (1.5,2.5) {};
\node[vertex,star,blue,minimum size=7pt] at (3,2) {};
\node[vertex,orange,regular polygon,regular polygon sides=3,minimum size=7pt] at (7.5,2.5) {};
\end{tikzpicture}
\caption{Illustration of how the $\Delta$-postfix trees are used for a $\Delta$-window of a temporal graph with only two vertices $u$ and $v$.
The edge $\{u,v\}$ is in the first, fifth, and sixth edge set of the $\Delta$-window.}
\label{fig:fig1}
\end{figure}
It is crucial for our algorithm that the $[a,b]$-separators are small in terms of $\Delta$.
\begin{lemma}
\label{lem:postfix-tree}
Let $T$ be a $\Delta$-postfix tree rooted at $v$ with leaves $v_1,v_2,\dots,v_\Delta$ in postfix order,
and let $S$ be the $[a,b]$-separator of $T$, where $[a,b] \subset [\Delta]$ with $a=1$ or $b=\Delta$.
Then,
\begin{enumerate}
\item $|S| \leq \log(\Delta)+1$, and
\item the $v_i$-$v$ path in $T$ contains a vertex from $S$ if and only if $i \in [a,b]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that $[a,b] \neq [\Delta]$.
Since $a=1$ or $b=\Delta$, $[\Delta] \setminus [a,b]$ is an interval and not empty.
Hence, $S$ cannot contain two vertices with the same distance to the root $v$,
as otherwise $[\Delta] \setminus [a,b]$ would not be an interval or we would have $a\not=1$ and~$b\not=\Delta$.
Thus, $|S| \leq \log(\Delta)+1$.
The rest of the lemma follows simply by the fact that the root $v$ is in the set $\bigcup_{i \in [\Delta] \setminus [a,b]}V(P_i)$
and that $\left(\bigcup_{i \in [\Delta] \setminus [a,b]}V(P_i)\right)\cap N_T(\bigcup_{i \in [\Delta] \setminus [a,b]}V(P_i)) = \emptyset$,
where $P_i$ is the $v_i$-$v$ path in $T$.
\end{proof}
Our intermediate goal now is to compute a family of paths between roots of $\Delta$-postfix trees in the auxiliary graph
such that if we are given a set of $[a,b]$-separators $S$,
then the family shall contain a path between two roots which avoids the vertices in $S$ (if there is one).
We will use \emph{representative families} for this.
Before we jump into the formal definition of representative families \cite{MONIEN1985239},
we build some intuition for representative families by considering an illustrative game played by Alice and Bob.
Bob has a set $U$ and a $p$-family $\mathcal S \subseteq 2^U$.
He shows $U$ and $\mathcal S$ once to Alice.
Afterwards, Bob puts a set $Y$ of $q$ elements from $U$ on the table and asks Alice
whether there is a set in $\mathcal S$ which does not contain any element of $Y$.
Alice wins if and only if she can answer the question correctly.
The goal of Alice is to win this game while remembering as little as possible from $\mathcal S$.
This is represented by a set $\widehat{\mathcal S} \subseteq \mathcal S$.
Intuitively speaking, a representative family $\widehat{\mathcal S} \subseteq \mathcal S$ guarantees to
Alice that there is at least one set in~$\widehat{\mathcal S}$
which does not contain an element on the table $Y$
if there is a set in $\mathcal S$
which does not contain an element on the table $Y$.
Formally, we define representative families as follows.
\begin{definition}
Let $\mathcal S$ be a $p$-family
and $\omega \colon \mathcal S \rightarrow \mathbb N$.
A subfamily $\widehat{\mathcal S} \subseteq \mathcal S$
is a \emph{max~$q$-representative} with respect to $\omega$
if for each set $Y$ of size at most $q$ it holds true that
if there is a set $X \in \mathcal S$
with $X \cap Y = \emptyset$,
then there is an $\widehat X \in \widehat{\mathcal S}$
such that $\widehat X \cap Y = \emptyset$ and $\omega(\widehat X) \geq \omega(X)$.
\end{definition}
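To make the definition concrete, the following brute-force sketch checks whether a subfamily is a max $q$-representative by enumerating all sets $Y$ of size at most $q$; it is exponential and purely illustrative, not the algorithm of the works cited below.
\begin{verbatim}
# Sketch: direct check of the max q-representative definition.  Sets
# are frozensets so they can serve as keys of the weight map w.
from itertools import combinations

def is_q_representative(sub, fam, universe, q, w):
    for k in range(q + 1):
        for ys in combinations(sorted(universe), k):
            y = set(ys)
            disjoint = [x for x in fam if not (x & y)]
            if disjoint:
                best = max(w[x] for x in disjoint)
                # some set in sub must avoid Y with weight >= best
                if not any(not (x & y) and w[x] >= best for x in sub):
                    return False
    return True
\end{verbatim}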
Representative families are algorithmically useful
because there are FPT-algorithms for the parameter $p+q$ to compute $q$-representatives of $p$-families such that the size of
the $q$-representative only depends on $p+q$ \cite{MONIEN1985239,FLPS16}.
An algorithm of \citet{FLPS16} can be iteratively applied
to show the following.
\begin{proposition}{\cite[Proposition 4.8]{ROP-Arxiv18}}
%
\label{thm:matroid-tool}
Let $\alpha$, $\beta$, and $\gamma$ be non-negative integers such that $r=(\alpha+\beta)\gamma \geq 1$, and let
$\omega \colon U \to \mathbb N$~be a weight function.
Furthermore, let $U$ be a set, $\mathcal H\subseteq 2^U$~be a $\gamma$-family of size~$t$ and let
$\mathcal S = \{ S = \biguplus_{i=1}^\alpha H_i
\mid
H_j \in \mathcal H\text{ for }j \in [\alpha]\}.
$
Then, we can compute a max $\beta \gamma$-representative~$\widehat{\mathcal S}$ of $\mathcal{S}$ with respect to~$\omega'$
in $2^{O(r)}\cdot t$ time such that $|\widehat{\mathcal S}| \leq {{r}\choose {\alpha \gamma}}$, where $\omega'(X) := \sum_{x \in X} \omega(x)$.
\end{proposition}
Note that \citet{ROP-Arxiv18} actually showed a more general version of \cref{thm:matroid-tool}.
However, for the following algorithm, we only need \cref{thm:matroid-tool} (that is, Proposition 4.8 of \citet{ROP-Arxiv18}
in the special case of a uniform matroid represented over a large enough prime field).
Now, we can describe the algorithm behind \cref{lem:d-complete-family} in detail.
\begin{algorithm}[Algorithm behind \cref{lem:d-complete-family}]
\label{const:family}
Let $\TG = (V,(E_t)_{t=1}^\lifetime)$ be a temporal graph of
lifetime $\tau$, and let
$d$ and $\Delta$ be two natural numbers such that~$d\Delta \leq \tau$.
Furthermore let $\mathcal G' := \mathcal G{[\Delta(d-1)+1, \Delta d]}$,
and let $\nu$ be the $\Delta$-vertex cover number{} of $\mathcal G$.
\begin{enumerate}[(i)]
\item
For each vertex $u \in V$,
we construct the $\Delta$-postfix tree $T_u$ with $\Delta$ many leaves.
These trees have pair-wise disjoint vertex sets.
The root of $T_u$ is~$u$ and
the leaves in postfix order are
%
$u_1,u_2,\dots,u_\Delta$.
\item
Construct a $(2\log \Delta+3)$-family $\mathcal H := \mathcal H_E \cup \mathcal H_D$ such that
$\mathcal H_D$ contains~$\nu$ pairwise disjoint sets of fresh vertices, and
$\mathcal H_E := \{ E_{(\{u,w\},t)} \mid (\{u,w\},t)$ is time edge of $\mathcal G' \}$,
where $E_{(\{u,w\},t)} :=$
\begin{align*}
\bigcup_{y \in \{u,w\}} \{ x \in V(T_y) \mid x \text{ is on the $y$-$y_{t-\Delta(d-1)}$ path in } T_y \}
\cup \{ (\{u,w\},t) \},
\end{align*}
for all time edges $(\{u,w\},t)$ of $\mathcal G'$.
\item Let $\omega : 2^U \to \mathbb N$ with $\omega(X) := |X \cap \mathcal E(\mathcal G')|$, for all $X \in 2^U$ be a weight function,
where~$U := \bigcup_{A \in \mathcal H} A$.
\item Compute the max $4\nu(\log{\Delta}+1)$-representative $\widehat{\mathcal S}$ of
\begin{align*}
%
\mathcal S := \left\{ \biguplus_{i=1}^{\nu} H_i \mid \emptyset \neq H_i \in \mathcal H, i \in [\nu] \right\}
%
\end{align*}
with respect to $\omega$ (using \cref{thm:matroid-tool}).
\item Output $\mathcal M := \{ S \cap \mathcal E(\mathcal G') \mid S \in \widehat{\mathcal S} \}$.
\hfill$\diamond$
\end{enumerate}
\end{algorithm}
Towards the correctness of \cref{const:family}, we observe the following.
\begin{lemma}
\label{lem:matchings-sets}
Let $\mathcal S$, $\mathcal G'$, and $\nu$ be defined as in \cref{const:family} for some temporal graph $\mathcal G$ and $d, \Delta \in \mathbb N$.
Then,
$M$ is a $\Delta$-temporal matching in $\mathcal G'$
if and only if there is an $S \in \mathcal S$ such that
$M = \mathcal E(\mathcal G') \cap S$ and $\omega(S) = |M|$.
\end{lemma}
\begin{proof}
($\Leftarrow$): Let $S \in \mathcal S$ and set $M = \mathcal E(\mathcal G') \cap S$.
Clearly, $\omega(S) = |M|$ and for two distinct time edges $(e,t),(e',t') \in M$ we have that $e \cap e' = \emptyset$,
because otherwise~$E_{(e,t)} \cap E_{(e',t')} \not= \emptyset$ and hence $S \not \in \mathcal S$.
($\Rightarrow$): Let $M$ be a $\Delta$-temporal matching in $\mathcal G'$.
Since all time edges of $\mathcal G'$ are in $\Delta$ consecutive time steps,
we know that for all $(e,t),(e',t') \in M$ we have that $e \cap e' = \emptyset$.
Hence, $|M| \leq \nu$ and $E_{(e,t)} \cap E_{(e',t')} = \emptyset$.
Thus, $S := \biguplus_{(e,t) \in M} E_{(e,t)} \uplus \biguplus_{i=1}^{\nu-|M|} D_i \in \mathcal S$ and $\omega(S) = |M|$,
where $D_1,\dots,D_{\nu-|M|} \in \mathcal H_D$ are pairwise disjoint.
\end{proof}
We now show the correctness of \cref{const:family}.
\begin{lemma}
\label{lem:d-complete-family-correct}
Let $\mathcal M$, $\mathcal G'$, and $\nu$ be defined as in \cref{const:family} for some temporal graph $\mathcal G$, and $d, \Delta \in \mathbb N$.
Then, $\mathcal M$ is a $d$-complete family of $\Delta$-temporal matchings for $\mathcal G'$.
\end{lemma}
\begin{proof}
By \cref{lem:matchings-sets} together with Step (iv) and (v) of \cref{const:family},
the family~$\mathcal M$ only contains $\Delta$-temporal matchings in $\mathcal G'$.
To show that $\mathcal M$ is $d$-complete,
let $M$ be a $\Delta$-temporal matching for the whole temporal graph $\mathcal G$.
Then, $M' := M[\Delta(d-1)+1, \Delta d]$ is the $\Delta$-temporal matching in $\mathcal G'$ which is included in $M$.
If $d>1$, then let $M^- := M[\Delta(d-2)+1, \Delta (d-1)]$ or otherwise $M^- := \emptyset$.
If $d< \nicefrac{\tau}{\Delta}$, then let $M^+ := M[\Delta d+1, \Delta (d+1)]$ or otherwise $M^+ := \emptyset$.
Observe that $|M^-|,|M'|,|M^+| \leq \nu$ and
that vertex appearances in $\mathcal G'$ which are incident to an arbitrary time edge
can only be $\Delta$-blocked by time edges in $M^- \cup M^+$.
Hence, there are at most $4\nu$ vertices for which some vertex appearances in~$\mathcal G'$ are $\Delta$-blocked by time edges in $M^- \cup M^+$.
Let
%
%
%
%
%
%
%
%
\begin{align*}
B :=\ &\{ (v,[1,t+\Delta-1-\Delta(d-1)]) \mid (e,t) \in M^-, v \in e \} \cup \\
& \{ (v,[t+1 - \Delta d, \Delta]) \mid (e,t) \in M^+, v \in e \}.
\end{align*}
Thus, a vertex appearance $v_t$ from $\mathcal G'$ is $\Delta$-blocked by some time edge in $M^- \cup M^+$ if and only if
there is a $(v,[a,b]) \in B$ with $t - \Delta(d-1) \in [a, b]$.
Now, let~$Y := \bigcup_{(v,[a,b]) \in B} S_{(v,[a,b])}$,
where $S_{(v,[a,b])}$ is an $[a,b]$-separator in the $\Delta$-postfix tree~$T_v$.
Furthermore, by \cref{lem:matchings-sets}, there is an $S \in \mathcal S$
such that $M' = \mathcal E(\mathcal G') \cap S$ and $\omega(S) = |M'|$.
We now show that $S \cap Y = \emptyset$.
Assume towards a contradiction that $S \cap Y \not= \emptyset$.
Hence, there is an $(e,t) \in M'$ such that there is a $u \in E_{(e,t)} \cap Y$.
Since $u \in Y$, there is a $v \in e$ such that $u \in V(T_v)$,
and a $(v,[a,b]) \in B$ such that $u \in S_{(v,[a,b])}$.
From $u \in E_{(e,t)}$ we know that $u$ is on the $v$-$v_{t-\Delta(d-1)}$ path in $T_v$.
Hence, by \cref{lem:postfix-tree}, $t \in [a,b]$ and thus there is a time edge $(e',t') \in M^- \cup M^+$ which is not $\Delta$-independent with $(e,t)$.
This contradicts $M$ being a $\Delta$-temporal matching.
Thus, $S \cap Y = \emptyset$.
Since $S \cap Y = \emptyset$, $S \in \mathcal S$, $|Y| \leq 4\nu(\log \Delta +1)$, and
$\widehat{\mathcal S}$ is a max $4\nu(\log \Delta +1)$-representative of $\mathcal S$ with respect to $\omega$,
we know that there is an $\widehat S \in \widehat{\mathcal S}$
such that~$\widehat S \cap Y = \emptyset$ and $\omega(\widehat S) \geq \omega(S)$.
By the construction of $\mathcal M$ in \cref{const:family},
and by \cref{lem:matchings-sets}, we know that there is an $\widehat M \in \mathcal M$
such that $\widehat S \cap E(\mathcal G') = \widehat M$ and $|\widehat M| = \omega(\widehat S) \geq \omega(S) = |M'|$.
Hence, $|(M \setminus M') \cup \widehat M| \geq |M|$.
We now show that $(M \setminus M') \cup \widehat M$ is a $\Delta$-temporal matching.
Suppose not.
Then there are time edges $(e,t) \in M^- \cup M^+$ and $(\widehat e, \widehat t) \in \widehat M$
with $v \in e \cap \widehat e$ such that the vertex appearance $v_{\widehat t}$ is $\Delta$-blocked by $(e,t)$.
Hence, there is a~$(v,[a,b]) \in B$ with $\widehat t - \Delta(d-1) \in [a,b]$.
By \cref{lem:postfix-tree}, this contradicts $\widehat S \cap Y = \emptyset$.
Hence, $(M \setminus M') \cup \widehat M$ is a $\Delta$-temporal matching and thus $\mathcal M$ is~$d$-complete.
\end{proof}
The running time of \cref{const:family} is analyzed directly in the following proof of \cref{lem:d-complete-family}.
\begin{proof}[Proof of \cref{lem:d-complete-family}]
By \cref{lem:d-complete-family-correct}, we can use \cref{const:family}
to compute a $d$-complete family $\mathcal M$ of $\Delta$-temporal matchings in $\mathcal G[\Delta(d-1)+1,\Delta d]$.
It is easy to verify that we can compute $\mathcal H$ in $O\left((\nu + |\mathcal E(\mathcal G[\Delta(d-1)+1,\Delta d])|)\log \Delta \right)$ time (by ignoring isolated vertices).
Finally, we compute $\widehat{\mathcal S}$ with \cref{thm:matroid-tool} in $2^{O(\nu \cdot \log{\Delta})}\cdot |\mathcal E(\mathcal G[\Delta(d-1)+1,\Delta d])|$ time,
by setting $\alpha$ to $2 \log{\Delta} + 3$, $\beta$ to $2\nu$, and $\gamma$ to $\nu$.
By \cref{thm:matroid-tool} the size of $\widehat{\mathcal S}$ is at most $2^{O(\nu \cdot \log{\Delta})} = \Delta^{O(\nu)}$.
Hence, we end up with an overall running time of~$\Delta^{O(\nu)}\cdot |\mathcal E(\mathcal G[\Delta(d-1)+1,\Delta d])|$.
\end{proof}
\subsection{Step \ref{step3}: The dynamic program}
In this section we describe Step \ref{step3} of the algorithm behind \cref{thm:fpt-for-vc-delta}, see \cref{sec:algo}.
Let $\TG = (V,(E_t)_{t=1}^\lifetime)$ be a temporal graph such that $\tau$ is a multiple of $\Delta \in \mathbb N$.
Assume that we already computed for all $d \in [\nicefrac{\tau}{\Delta}]$ a family $\mathcal M_d$ of $\Delta$-temporal matchings
which is $d$-complete for $\mathcal G$.
For all $i \in [\nicefrac{\tau}{\Delta}] \setminus \{1\}$
and $M \in \mathcal M_i$, let
\begin{equation}
\label{eq:dp}
\begin{split}
&T_i[M] := \max(A(M) \cup \{ 0\}), \text{where }
A(M) :=\\ &\left\{ |M| + T_{i-1}[M'] \ \middle\vert \
M'\in \mathcal M_{i-1}, M \cup M' \text{ is a $\Delta$-temporal matching }
\right\},\\
&\text{and } T_1[M] := |M|.
\end{split}
\end{equation}
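A direct rendering of this recurrence, under an assumed encoding of matchings as \texttt{frozenset}s of time edges (each a pair of an endpoint set and a time step), may look as follows; only adjacent windows need to be checked, since time edges from non-adjacent $\Delta$-windows are $\Delta$-independent anyway.
\begin{verbatim}
# Sketch of the dynamic program; families[i-1] plays the role of the
# i-complete family M_i.
def compatible(m1, m2, delta):
    return all(not (e1 & e2) or abs(t1 - t2) >= delta
               for (e1, t1) in m1 for (e2, t2) in m2)

def max_matching_size(families, delta):
    prev = {m: len(m) for m in families[0]}      # T_1[M] := |M|
    for fam in families[1:]:
        cur = {}
        for m in fam:
            scores = [len(m) + prev[p] for p in prev
                      if compatible(m, p, delta)]
            cur[m] = max(scores) if scores else 0   # max(A(M) u {0})
        prev = cur
    return max(prev.values(), default=0)
\end{verbatim}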
Towards the correctness of the dynamic program specified in \eqref{eq:dp},
we observe the following.
\begin{lemma}
\label{lem:dp-correctness}
There is a $\Delta$-temporal matching of size at least $k$ in $\mathcal G$
if and only if
$\max_{M \in \mathcal M_\frac{\tau}{\Delta}} T_{\frac{\tau}{\Delta}}[M] \geq k$.
\end{lemma}
\begin{proof}
($\Rightarrow$):
We show by induction over $i$ that if there is a $\Delta$-temporal matching~$M$
in $\mathcal G$,
then there is an $M' \in \mathcal M_i$
such that $T_{i}[M'] \geq |M[1,\Delta i]|$ and $M[1,\Delta(i-1)]\cup M' \cup M[\Delta i+1, \tau] $ is a $\Delta$-temporal matching
of size at least~$|M|$.
By~\eqref{eq:dp}, this is clearly the case for $i=1$, because $\mathcal M_1$ is $1$-complete for~$\mathcal G$.
For the induction step, let $i>1$ and
assume that
if there is a $\Delta$-temporal matching $M$
in $\mathcal G$, then
there is an $M' \in \mathcal M_{i-1}$
such that
\begin{enumerate}[(i)]
\item $T_{i-1}[M'] \geq |M[1,\Delta(i-1)]|$, and
\item $M[1,\Delta(i-2)]\cup M' \cup M[\Delta(i-1)+1,\tau]$ is a $\Delta$-temporal matching of size at least $|M|$.
\end{enumerate}
%
Let $M^*$ be a $\Delta$-temporal matching for $\mathcal G$.
By the induction hypothesis,
there is an $M' \in \mathcal M_{i-1}$
such that $T_{i-1}[M'] \geq |M^*[1,\Delta(i-1)]|$
and $\widehat M := M^*[1,\Delta(i-2)] \cup M' \cup M^*[\Delta(i-1)+1,\tau]$ is a $\Delta$-temporal matching of size at least $|M^*|$.
Since $\mathcal M_i$ is $i$-complete for $\mathcal G$,
there is an $M'' \in \mathcal M_i$ such that $\widehat M[1,\Delta(i-1)] \cup M'' \cup \widehat M[\Delta i + 1,\tau]$
is a $\Delta$-temporal matching of size at least $|\widehat M|\geq |M^*|$.
By~\eqref{eq:dp}, we have that $T_i[M''] \geq |\widehat M[1,\Delta i]| \geq |M^*[1,\Delta i]|$.
%
Hence, if there is a $\Delta$-temporal matching of size $k$ in $\mathcal G$,
then there is an~$M' \in \mathcal M_{\frac{\tau}{\Delta}}$ such that $T_{\frac{\tau}{\Delta}}[M'] \geq k$.
($\Leftarrow$): We show by induction over $i$ that if $T_i[M'] > 0$, then there is a $\Delta$-temporal matching $M$ in $\mathcal G[1,\Delta i]$
such that $|M|=T_i[M']$ and $M[\Delta(i-1)+1,\Delta i] = M'$, where $M' \in \mathcal M_i$.
By~\eqref{eq:dp}, this is clearly the case for $i=1$, because $\mathcal M_1$ is $1$-complete for~$\mathcal G$.
For the induction step, let $i > 1$ and assume that
if $T_{i-1}[M'] > 0$, then there is a $\Delta$-temporal matching $M$ in $\mathcal G[1,\Delta(i-1)]$
such that $|M| = T_{i-1}[M']$ and $M[\Delta(i-2)+1,\Delta (i-1)] = M'$, where $M' \in \mathcal M_{i-1}$.
%
Let $T_i[M''] > 0$, for some $M'' \in \mathcal M_{i}$.
By~\eqref{eq:dp}, there is an $M' \in \mathcal M_{i-1}$ such that
$M'' \cup M'$ is a $\Delta$-temporal matching and $T_i[M''] = T_{i-1}[M'] + |M''|$.
By the induction hypothesis, there is a $\Delta$-temporal matching $M$ in $\mathcal G[1,\Delta(i-1)]$
such that $|M| = T_{i-1}[M']$ and $M[\Delta(i-2)+1,\Delta (i-1)] = M'$.
Since $M[\Delta(i-2)+1,\Delta (i-1)] = M'$ and $M' \cup M''$ is a $\Delta$-temporal matching,
$M \cup M''$ is a $\Delta$-temporal matching of size $T_{i-1}[M'] + |M''| = T_{i}[M'']$.
Hence, if $\max_{M \in \mathcal M_\frac{\tau}{\Delta}} T_{\frac{\tau}{\Delta}}[M] \geq k$,
then there is a $\Delta$-temporal matching of size at least $k$ in $\mathcal G$.
\end{proof}
We now are ready to show \cref{thm:fpt-for-vc-delta}.
\begin{proof}[Proof of \cref{thm:fpt-for-vc-delta}]
Let $(\mathcal G,k,\Delta)$ be an instance of \textsc{Temporal Matching}.
We assume without loss of generality that there is no $\Delta$-window in $\mathcal G$ which does not contain any time edge,
otherwise we can split $\mathcal G$ into two parts,
compute the maximum size of a $\Delta$-temporal matching in each part separately,
and check whether the sum is at least $k$.
Moreover, we assume without loss of generality that the lifetime $\tau$ of $\mathcal G$ is a multiple of $\Delta$, otherwise we can add some empty layers at the end of $\mathcal G$.
We start by splitting $\mathcal G$ into $\nicefrac{\tau}{\Delta}$ many $\Delta$-windows:
for all $i \in [\nicefrac{\tau}{\Delta}]$ let~$\mathcal G_i := \mathcal G[\Delta(i-1)+1,\Delta i]$.
This can be done in $O(|\mathcal G|)$ time.
To compute the $\Delta$-vertex cover number{},
we first compute the vertex cover number of the underlying graph of $\mathcal G_i$, for all $i \in [\nicefrac{\tau}{\Delta}]$.
Let $\nu'$ be the maximum vertex cover number over all underlying graphs $\mathcal G_i$, where $i \in [\nicefrac{\tau}{\Delta}]$.
Note that the $\Delta$-vertex cover number{} $\nu$ of~$\mathcal G$ is at least $\nu'$ and at most $2\nu'$.
Hence, in $2^{O(\nu)}\cdot|\mathcal G|$ time, we can compute~$\nu'$ and then guess $\nu$.
Next, we compute with \cref{lem:d-complete-family} for each $i \in [\nicefrac{\tau}{\Delta}]$
a family $\mathcal M_i$ of $\Delta$-temporal matching of size $\Delta^{O(\nu)}$ which is $i$-complete for $\mathcal G$
in $\Delta^{O(\nu)} \cdot |\mathcal E(\mathcal G_i)|$.
Hence, it takes $\Delta^{O(\nu)} \cdot |\mathcal G|$ time to compute the families~$\mathcal M_1,\dots,\mathcal M_{\frac{\tau}{\Delta}}$.
Now we have met the preconditions to compute the dynamic program specified in~\eqref{eq:dp}.
By \cref{lem:dp-correctness}, there is a $\Delta$-temporal matching of size at least $k$ in~$\mathcal G$
if and only if $\max_{M \in \mathcal M_{\frac{\tau}{\Delta}}} T_{\frac{\tau}{\Delta}}[M] \geq k$.
Note that $\max_{M \in \mathcal M_{\frac{\tau}{\Delta}}} T_{\frac{\tau}{\Delta}}[M]$
can be computed in $\Delta^{O(\nu)} \cdot \sum_{i=1}^{\frac{\tau}{\Delta}} |\mathcal E(\mathcal G[\Delta(i-1)+1,\Delta i])|$ time.
Since each $\Delta$-window contains at least one time edge, we arrive at an overall running time of $\Delta^{O(\nu)} \cdot |\mathcal G|$.
This completes the proof.
\end{proof}
It is easy to check that
if we additionally store
for the table entry $T_i[M'],i\in[\nicefrac{\tau}{\Delta}],M' \in \mathcal M_i$
a $\Delta$-temporal matching $M$ in $\mathcal G[1,\Delta i]$
of size $T_i[M']$ such that~$M[\Delta(i-1)+1,\Delta i] = M'$,
then the dynamic program also computes a $\Delta$-temporal matching of maximum size and not just the size.
Thus, we can solve the optimization variant of \textsc{Temporal Matching}.
\begin{corollary}
Given a temporal graph $\mathcal G$ and an integer $\Delta$,
we can compute in~$\Delta^{O(\nu)}\cdot |\mathcal G|$ time a maximum-cardinality $\Delta$-temporal matching in $\mathcal G$, where~$\nu$ is the~$\Delta$-vertex cover number{}.
\end{corollary}
\section{Conclusion}
While we could improve the running time to solve \textsc{Temporal Matching} exponentially in terms of $\Delta$ compared
to the algorithm of \citet{MMNZZ}, we left open
whether
in \cref{thm:fpt-for-vc-delta}
we can get rid of the running time dependence on $\Delta$.
\section{Introduction} \label{intro}
Although Seyfert galaxies and quasars have been well studied in the X--rays,
most previous observational scrutiny has been devoted to the brighter
Seyfert~1/QSOs which are more easily detected. There are few observations of
those Seyfert~1/QSOs which are relatively X--ray weak or of any Seyfert~2, and
not all of those have been measured well enough for detailed spectral analysis.
This paper discusses new ROSAT spectra of such objects, broadening the range of
types of AGN observed in the soft X--rays. This can provide us with an
understanding of the soft X--ray nature of (low luminosity) AGN which is more
representative of this entire class of objects, and free from the biases which
can result from analyzing only a small subset of AGN types.
Previous X--ray missions, in the 2--10~keV energy range, found Seyfert galaxies
(mostly Seyfert~1s) to be best fit by power--law spectra with a photon index of
about $\Gamma\sim$1.7--1.9 (e.g. Mushotzky 1984; Turner \& Pounds 1989).
However, the ROSAT spectra of Seyferts generally have steeper photon indices,
of
about $\Gamma\sim 2.4$ for Seyfert~1s (Turner, George, \& Mushotzky 1993,
hereafter TGM) and even steeper values $\Gamma\sim 3.2$, for Seyfert~2s
(Turner,
Urry, \& Mushotzky 1993, hereafter TUM). There are several possible
explanations
for these steep observed indices. This could indicate a steeper intrinsic
continuum slope, or alternatively adding a ``soft X--ray excess'' to an
underlying power--law model usually improves the fit and flattens the best--fit
continuum slope. The nature of this soft excess has been suggested to be one or
more of the following: Fe--L and/or Oxygen--K emission lines around
0.8--1.0~keV, a low--temperature blackbody, an optically--thin thermal
component, a steep second power--law, or the underlying hard continuum leaking
through a partial absorber. It is not evident that a combination of a
power--law
and a soft excess is necessary in all objects. Perhaps a large amount of
absorption ($N_H\sim10^{23}$) could harden an even softer underlying power--law
to give the observed spectrum, or a strong blackbody or optically--thin thermal
component could account for all of the observed soft--X--ray flux, without an
underlying power--law even being necessary.
These large object--to--object differences in the observed $L_x/L_{opt}$ ratios of
Seyfert~1s and QSOs, which span a factor of 300 (e.g., values of
$\alpha_{ox}$ ranging from --1.0/--1.1 to --1.9---Piccinotti et al. 1982;
Tananbaum et al. 1986), reflect substantial fundamental differences in the
structure of their central engines. A large difference in X--ray properties is
also seen in the spectra of Seyfert~2s. For example, NGC~1068, the prototype of
a Seyfert~2 which may be a hidden Seyfert~1, is also the brightest and best
observed Seyfert~2 in the X--rays. It appears to have a very steep soft X--ray
spectrum (Monier \& Halpern 1987), but is more like Seyfert~1s at high energies
(Koyama et al. 1989), and does not resemble the average spectrum of other
Seyfert~2s observed with the IPC, or the spectrum of the Seyfert~2 Mkn~348
observed with Ginga (Warwick et al. 1989).
These differences lead to the question of whether the usual
Seyfert~1---Seyfert~2 dichotomy, usually made based on optical spectra, is a
physically accurate way to classify these objects in the X--rays. Observations
of a wide range of Seyfert galaxies are necessary to determine whether
Seyfert~1s and Seyfert~2s represent two primarily distinct classes of objects,
or if they are better described as having a continuous {\it range\/} of
properties, and whether the observed differences are intrinsic to the nucleus,
or represent varying circumnuclear properties, such as the amount and
distribution of absorbing material. Our data suggest that a subset of
Seyfert~1s
(of which we discuss only two objects in this work, but which may include many
other objects) are more intrinsically similar (with respect to the source of
the
soft X--ray emission) to most Seyfert~2s than to other Seyfert~1s. This is most
likely explainable if different mechanisms produce the X--rays in the
X--ray--quiet objects. If the standard X--ray emission mechanisms (e.g.,
inverse--Compton scattering of lower energy photons by relativistic electrons,
direct synchrotron emission from relativistic electrons produced near the
central engine or jet, and/or thermal emission from the hot inner parts of an
accretion flow) are in fact virtually ``turned off'' in these objects, it is
quite possible that weaker, more exotic mechanisms (e.g., optically thin
thermal
emission from the hot intercloud medium) may contribute significantly to the
X--rays we actually detect.
\section{Target Selection and Observations} \label{targets}
\subsection{Selection of Objects from the 12 Micron Sample}
\label{targets_selection}
The objects for which we have obtained pointed PSPC spectra were carefully
selected for several reasons. First, they are from (with the exception of
PG~1351+640) the most complete and unbiased source of bright AGNs compiled to
date---the Extended 12 Micron Galaxy Sample (Rush, Malkan, \& Spinoglio 1993).
This sample is complete relative to a {\it bolometric\/} flux level, and
includes those Seyferts which are the brightest at longer wavelengths,
including
a truly representative number of both X--ray--quiet and X--ray--loud objects.
We
selected the IR--brightest Seyfert~2s from this sample which had not previously
been observed in any pointed X--ray mission. We also selected two typical
examples of relatively X--ray--weak Seyfert~1/QSOs. Mkn~1239 has one of the
lowest detected X--ray fluxes of all 55 Seyfert~1s in the 12\mbox{~$\mu$m}\ Sample (20
counts and 0.05~cts/sec in the ROSAT All--Sky Survey---Rush et al. 1996), and
PG~1351+640 has the steepest $\alpha_{ox}$ (--1.91) of the 66 PG~QSOs observed
by Einstein (Tananbaum et al. 1986).
Second, the 12\mbox{~$\mu$m}--selected Seyferts are qualitatively different from those
observed previously. Halpern \& Moran (1993) pointed out that the Seyfert~2s
usually observed, with polarized broad lines, are restricted to those with
relatively strong UV excesses (found by the Markarian surveys; e.g. those
reported in TUM) which are also relatively radio--strong. Compared to these
Markarian Seyfert~2s (many of which were observed but not detected by
Ginga---Awaki 1993), the targets we observed have redder optical/infrared
colors, weaker and smaller radio sources, larger starlight fractions, and
steeper Balmer decrements---more representative of Seyfert~2s as a general
class. Similarly, Mkn~1239 and PG~1351+640 differ from those broad--line AGN
usually observed, in that they are specifically chosen to have relatively weak
X--ray fluxes. The one IR--luminous non-Seyfert we observed was chosen by
cross--referencing the non--Seyferts in the 12\mbox{~$\mu$m}\ Sample with a large sample
of
IRAS galaxies detected in the ROSAT All--Sky Survey (hereafter RASS; Boller et
al. 1992; Boller et al. 1995b) for those non--Seyferts with the highest IR
luminosity {\it and\/} X--ray flux.
\subsection{Pointed ROSAT PSPC Observations during AO2--AO4}
\label{targets_obs}
The observations were carried out during AO2---AO4 (from 1991~December to
1993~October)
with the ROSAT X--ray telescope, with the Position Sensitive Proportional
Counter (PSPC) in the focal plane. The PSPC provides spatial and spectral
resolution over the full field of view of 2$^{\circ}$, both of which vary
slightly with photon energy E. The energy resolution is $\Delta$E/E =
0.41/$\sqrt{E_{keV}}$ (i.e., $\Delta$E $\approx 0.41$~keV at 1~keV).
The on--axis angular resolution is limited by the PSPC to about 25$^{\prime \prime}$, and
the
on--axis effective collecting area, including the PSPC efficiency, is about
220~cm$^2$ at 1~keV (Brinkmann 1992). See Table~1 for a summary of the
observations and count rates for each object, where the objects are listed in
decreasing order of total counts obtained.
We have also obtained ROSAT All--Sky Survey data for almost all of the Seyferts
in the 12\mbox{~$\mu$m}\ and CfA samples. This will be discussed in another paper to be
completed shortly after this one (Rush et al. 1996). Those data, on over 100
Seyferts spanning a wide range of characteristics, will complement this work by
enabling us to address {\it statistically\/} the scientific issues discussed
below for individual objects.
\section{Data Analysis} \label{analysis}
For each step of the data analysis discussed below, only those counts in pulse
invariant (PI) channels 12---200 inclusive are included. The lower limit is set
by the fact that the lower level discriminator lies just below this limit, so
any data taken from lower channels cannot be considered as valid events.
Furthermore, analysis of the PSPC PSF has shown that the positions of very soft
events cannot be accurately determined because of a ghost imaging effect (J.
Turner, priv.\ comm.). The exact level at which this effect is significant is
different for each observation (Hasinger \& Snowden 1990), so we conservatively
chose to exclude PI channels below 12. The upper PI channel included is 200,
since the mirror effective area falls off rapidly at higher energies. We have
also defined low, medium, and high energies to refer to PI channels 12---50,
51---100, and 101---200, respectively, and ``all" energies refers to PI
channels
12---200.
The spectral analysis was done by first extracting spectra from the events file
using the QPSPEC command in the PROS package in IRAF. We made sure that the
output of PROS were properly compatible with XSPEC, in particular with regards
to the manner in which these two packages deal with binning and calculating
statistical errors.\footnote{This simple but very important procedure is
explained in detail at http://heasarc.gsfc.nasa.gov/docs/rosat/to\_xspec.html.}
We then fit simple models using the XSPEC software, with the events in PI
channels 12---200 binned so as to include at least 20 counts in each bin,
allowing $\chi^2$\ techniques to be applied.\footnote{We required only 10 counts
per bin for both NGC~3982 and CGCG~022--021, and 5 counts per bin in NGC~1144, in
order to have at least 7 bins for the fits; this makes the results extremely
rough, but otherwise we would have only 3--4 bins, with which no fits could be
done.} We used the most recent response matrix available, released from MPE in
1993~January. We first fit the data to the standard absorbed power--law model,
both with all parameters ($\Gamma$, N$_H$, and normalization) free and with N$_H$\
fixed
at the Galactic value (see Table~2). We use the photon index, $\Gamma$, defined
such
that $N_\nu \propto \nu^{-\Gamma}$ ($N$ = number of photons), which is output
by
the fitting routines in XSPEC. This relates to the spectral slope, $\alpha$,
defined by $F_\nu \propto \nu^{\alpha}$, as $\Gamma = 1 - \alpha$ (since the
photon flux is $N_\nu \propto F_\nu/\nu$). We also
performed several other fits, either adding a thermal component to the
power--law or fitting only a thermal component. These are discussed in
\SS{results_fits}.
The quoted uncertainties are at the 90\% confidence level, assuming one free
parameter of interest (Lampton, Margon, \& Bowyer 1976), when available (i.e.,
when the chi--square minimization to determine these uncertainties properly
converged; these are denoted as separate upper and lower uncertainties).
Otherwise, the 1$\sigma$ uncertainty on each parameter is given (denoted as a
single $\pm$ value.)
Hardness ratios provided a simple approximation to the spectral shape, even for
those objects which did not have enough counts for an accurate spectral--model
fit (see Table~3). The hardness ratio is defined as HR=(A--B)/(A+B), where
A~=~ctrt~(0.12--1.00~keV) and B~=~ctrt~(1.01--2.00~keV). Also given is the
ratio
A/(A+B), which we refer to as F$_{\mbox{soft}}$.
The spatial analysis was done using the SAOimage display in IRAF/PROS. Each of
the sources were observed at the center of the PSPC field, with the exception
of
NGC~1144, which was about 20$^{\prime}$\ south of the field center. This object was
partially occulted by the telescope support structure and we thus corrected the
exposure time accordingly. The accumulated PSPC counts for each object were
calculated using the IMCNTS task in IRAF/PROS and are listed in Table~1. All
counts in a circular region surrounding the source are given, after subtracting
the background, as calculated in a source--free annular region just outside the
circle.
Finally, using the TIMSORT and LITCURV tasks in PROS, we extracted light curves
for each object. This was done individually for low, medium, and high energies
and for all energies. All of the objects were observed over periods of no more
than 8 days, except for NGC~3982 and PG~1351+640, which were observed in
several
segments, spanning 5 and 11 months, respectively, allowing us to test for
variations on a half--year to year time scale.
\section{Results} \label{results}
\subsection{Variability} \label{results_var}
\subsubsection{Seyfert~2s}
Any variation in the spectra of our Seyfert~2s would have been considered an
important result, as there are only a couple of reports to date of X--ray
variability in Seyfert~2 galaxies (e.g., in NGC~1365---TUM and, possibly, in
Mkn~78---Canizares et al. 1986), and none of these are conclusive (e.g., the
variation in NGC~1365 may be due to the serendipitous sources). However, no
significant short--term variation was found for any Seyfert~2 in our sample.
The
one object which was observed over a 5~month period, NGC~3982, showed no
significant variation over this time scale either (see, for example, the count
rates in Table~1).
We also compared the count rates of our pointed observations to those obtained
during the ROSAT All--Sky Survey for the same objects (Rush et al. 1996), as
shown in Figure~1. Point sizes in Figure~1 are proportional to the square of
the
total counts\footnote{Several figures have point sizes proportional to counts
instead of count--rate or flux. This is because the former is also an indicator
of SNR and thus also of the statistical accuracy of spectral fits and other
quantitative results. Also, this makes little difference since the exposure
times vary only by a factor of two among our objects while the total counts
vary
by a factor of $\sim$20.} in our pointed observation and errorbars are
$1\sigma$
statistical uncertainties in the count rates. The RASS was taken during
1990~July---1991~February; thus this comparison provides baselines of 1---3
years for the various objects. As can be seen, the 5 Seyfert~2s with the most
counts in our observations show no sign of variability since the RASS. That the
count rates for two of the fainter Seyfert~2s and for the one IR--luminous
non--Seyfert are different is probably {\it not\/} an indication of
variability,
since we have extremely low counts for those objects (in both our observations
and the RASS), and it is unlikely that the objects with the fewest observed
counts would be the only ones to vary.
\subsubsection{Seyfert~1/QSOs}
However, there {\it is\/} evidence for variation in both of our Seyfert~1/QSOs.
From Table~1 and Figure~1, we can see that Mkn~1239 increased its count rate
by
about a factor of two between the RASS and our observation (over 21---28
months,
depending on when this object was observed during the RASS). The spectral slope
steepened slightly during this period, from $\Gamma=2.69$ to $\Gamma=2.94$ (for
a power--law fit, with N$_H$\ constrained to N$_{H,gal}$, which is the only spectral
parameter we have from the RASS).
We don't have RASS data for PG~1351+640, but we can see that it varied during
our observations, which spanned the 11 months from 1992~November to
1993~October, increasing its total counts and flux by factors of 1.5 and 1.4,
respectively (a $\sim10\sigma$ result). The spectral shape varied, becoming
steeper as this object became more luminous, as with Mkn~1239. The
0.12---1.00~keV count rate increased by $\sim$59\%, whereas the
1.00---2.00~keV count rate increased by only $\sim$14\%, as indicated by the counts and
hardness
ratios of Table~3. The best--fit photon index steepened slightly, from 2.54 to
2.73 (see Table~2).
That the spectra of both of these objects steepened during the more luminous
state indicates that most of the variability was at the lowest energies (i.e.,
below 1~keV). The timescale of the variability puts an upper limit on the size
of the emitting region for this soft component, of much less than a light--year
for PG~1351+640, and less than two light--years for Mkn~1239, restricting the
source to a region not much larger than the broad--line region.
\subsection{Spectral Fitting} \label{results_fits}
\subsubsection{Power--Law Models} \label{results_fits_pl}
We fit each of our spectra to a simple absorbed power--law model, both with
N$_H$\
held constant at the Galactic value, and allowing it to vary. As an example, we
show in Figure~2 the data and folded model for our highest SNR object,
PG~1351+640. Below we discuss how the spectra for the other objects differ. We
also show, in Figure~3, the $\chi^2$\ contour plot which results from minimizing
$\chi^2$\ as a function of N$_H$\ and $\Gamma$\ for this object. The contours represent
the 68\%, 90\%, and 99\% confidence limits (1$\sigma$, 1.6$\sigma$, and
2.6$\sigma$, respectively) and the plus marks the best--fit value. The contour
plots for our strongest 6 objects (in terms of total counts---PG~1351+640;
NGC~5005; Mkn~1239; NGC~424; NGC~4388; and NGC~5135) look roughly the same as
this one, and those for the other objects look increasingly ``bent", with less
well--defined minima as the total number of photons decreases.
As indicated in Table~2, when N$_H$\ is allowed to vary, the best--fit value is
always higher than the Galactic value, by a factor of 2---3 (again, for the 6
well--determined spectra), the one exception being PG~1351+640 which shows no
increase. The fact that $\chi^2_{\nu}$\ (reduced $\chi^2$) decreases by $\sim$35--50\% when
allowing N$_H$\ to vary indicates that these values are more accurate than the
Galactic ones. This indicates that there is indeed some internal absorption of
one form or another in these objects, and that the underlying slope is steeper
than that which is obtained when requiring N$_H$=N$_{H,gal}$. We illustrate this in
Figure~4, where we plot the photon indices obtained with N$_H$\ free versus with
N$_H$\ fixed. Most of our Seyfert~2s, as well as those from TUM, have the former
steeper by $\sim1$.
The average values of $\Gamma$\ which we obtain with N$_H$\ free are
$\overline\Gamma=3.13$ for our 4 Seyfert~2s with sufficient counts, and
$\overline\Gamma=3.20$ for our two Seyfert~1/QSOs. These values are similar to
the six Seyfert~2s observed by TUM, which have $\overline\Gamma=3.16$, but
differ from the six Seyfert~1/QSOs observed by TGM which have
$\overline\Gamma=2.41$.
In Figure~5, we plot the photon index versus count rates for the pointed
observations of this work, TUM, and TGM. We see that most of the objects have
significantly steeper values of $\Gamma$\ than the old canonical value of 1.7
(dotted line). All of our well--observed Seyfert~2s (filled triangles), and
most
of TUM's Seyfert~2s (open triangles), {\it and\/} both of our Seyfert~1/QSOs
have values of $\Gamma\sim3$. The one exception is Mkn~372 which has a value of
$\Gamma=2.2$. However, this object is now known to be a Seyfert~1 and, as
expected, lies close to the average value of the Seyfert~1/QSOs from TGM at
$\overline\Gamma\sim2.4$.
What these data show us is that, not only do most Seyfert~2s have a best--fit
photon index around $\Gamma\sim3$, but also that Seyfert~1s are divided between
objects which have similar spectral slopes as Seyfert~2s and those which have
flatter spectra with $\Gamma\sim2.2$. Physical explanations for this are
discussed further in \SS{disc} and~\SS{summary}.
\subsubsection{Internal Absorption} \label{results_fits_abs}
For each of our targets, we looked at the best--fit hydrogen column density as
compared to the Galactic value, and compared this to the photon indices and
hardness ratios, to try to determine the significance of internal absorption
and
how this affects the observed count rates and spectral shape. Figure~5 seems to
indicate that a few of the faintest objects also have the hardest spectra. This
is tentative, however, since these objects are the ones with the fewest photons
and the data are not very trustworthy. However, we do note that, if real, this
is consistent with these faint objects being the most heavily absorbed (i.e.,
with low signal--to--noise, a heavily absorbed, intrinsically steep spectrum
would appear similar to a relatively unabsorbed flat spectrum). We investigate
this trend further by plotting the spectra of our 8 brightest objects in
Figure~6 (in order of brightness, from the upper left, down to the lower
right),
fit to a power--law with N$_H$\ free. The general trend is for the fainter
objects
to have harder spectra (as also indicated by the hardness ratios in Table~3),
with the 4 highest hardness ratios belonging to 4 of the 5 lowest--count
objects
(the exception being NGC~3982 which actually has one of the lowest hardness
ratios).
To determine whether these harder--spectrum objects may be more heavily
obscured
by dust, we have compared their ROSAT hardness ratios to their IRAS colors (see
Figure~7). Six of our objects are very dusty in the far--IR, having values of
$\log F_{\nu,60} / F_{\nu,25}$ $\sim0.8-1.0$, which is among the reddest
(which probably means most dust--enshrouded) third of even Seyfert~2s (Rush et
al. 1993). This includes the four lowest--count objects in our sample.
Conversely, both PG~1351 and Mkn~1239 have values of $\log F_{\nu,60} /
F_{\nu,25}$ $\sim0.15$, which is among the hottest $\sim$20\% of even
Seyfert~1s. However, there is no strong relation between the IRAS color and the
hardness ratio, other than that the three hardest objects are also among the
reddest.
Taken together, these results indicate that there is a trend for the fainter
objects to have harder ROSAT spectra, suggesting that absorption is partially
responsible for hardening the observed spectra (so that the intrinsic slopes
are steeper than the values fitted with N$_H$\ fixed). However, there is less
evidence that the amount of absorption is correlated with redness/dustiness in
the galaxy, as determined from IRAS colors.
\subsubsection{Additional Models} \label{results_fits_other}
We also fitted some of our spectra to other models. These include a power--law
plus an emission line or thermal component (Raymond--Smith thermal plasma or
blackbody), or a thermal component alone. As discussed in \SS{individual} for
individual objects, there are several cases where the fits improve, indicating
that more than a simple power--law may be necessary to explain the soft
X--rays.
First, we added an additional component to the underlying power--law. The fits
to our two Seyfert~1/QSOs were not improved by adding another component. This
is as expected, as the power--law fits to both objects were quite good ($\chi^2_{\nu}$\
of 0.79 and 0.67 for PG~1351+640 and Mkn~1239, respectively). The fit did
improve, however, when we added an emission line to some of our Seyfert~2s. See,
for example, Figure~8 which shows the model for a power--law plus gaussian
emission line fit to NGC~5005. The best--fit energy for this line is at
0.8~keV,
around the energy expected for Fe--L and/or Oxygen--K emission lines. Adding
this component also has the effect of flattening the underlying power--law
slope
from 3.0 to 2.4. Similar results are obtained for the fits to NGC~5135 and
NGC~4388, which are slightly improved by adding emission lines at 0.5 and
0.6~keV, respectively.
We also tried fitting each object to a thermal model only. Again, neither
Seyfert~1/QSO was fit at all well in this way. However, several
Seyfert~2s
(NGC~5005, NGC~5135, NGC~5929, and NGC~1144), were fit better (i.e., lower
$\chi^2$\ for the same number of free parameters) by a $\sim$0.2~keV black--body
than by an absorbed power--law (see, for example Figure~9 for the black--body
fit to NGC~5135). This is significant in that it prevents us from saying
conclusively that the soft--X--rays from these objects are associated with the
AGN at all, and that they may simply be due to stellar processes. It is not
likely that ROSAT data alone will be able to finally distinguish between
stellar
and non--stellar explanations for the X--ray emission from Seyfert~2s, as the
most definitive tests to discriminate between such models are best done in the
hard X--rays (e.g., Iwasawa 1995).
\subsection{Spatial Extent} \label{results_extent}
\subsubsection{HRI Image of NGC~5005} \label{results_extent_hri}
If multiple components are responsible for the soft--X--rays in these objects,
it is quite possible that they are from spatially distinct regions, as is
already known to be the case for some brighter Seyfert galaxies. For example,
HRI imaging of NGC~1068, the brightest and best--observed Seyfert~2 in the
X--rays (Wilson 1994; Halpern 1992; Wilson et al. 1992), reveals at least three
components to the soft--X--ray emission: (a) a compact nuclear source,
coincident with the optical nucleus, (b) asymmetric emission extending
10--15$^{\prime \prime}$\ N---NE, closely correlated with the radio jet and narrow--line
[OIII] emission, and (c) large--scale (60$^{\prime \prime}$) emission with similar
morphology
to the starburst disk. These three components comprise 55, 23, and 22\% of the
X--ray flux, respectively.
To investigate whether similar structures may be responsible for part of the
soft X--rays from our (much fainter) objects, we obtained a 27~ksec HRI
exposure
of our brightest Seyfert~2 galaxy, NGC~5005, shown in the contour plot in
Figure~10 (the contour values range from 0.05 to 0.60 photons/pixel and the
spatial resolution is 0\secpoint5/pixel). The central source spans
$\sim$20$^{\prime \prime}$~$\times$~20$^{\prime \prime}$, and is significantly extended (FWHM$\sim$10$^{\prime \prime}$) as
compared to the HRI on--axis PSF (FWHM$\sim$5\secpoint5). The position of the
peak of this central component agrees within error to the optical position, and
is roughly 3\secpoint7 south of the radio--interferometer position given by
Vila
et al. (1990).
In addition to this central component, there is an extended wing from about
10$^{\prime \prime}$\ to 25$^{\prime \prime}$\
to the south--west of the central source (from 0.6$h^{-1}$~kpc
to 1.4$h^{-1}$~kpc). This feature contains about 13\% as many
background--subtracted counts as does the central source (31 compared to 247).
The orientation of this feature is roughly parallel to the major optical axis
of
the galaxy ($\sim45^\circ$ E of N), although the latter represents structure on
the 1--arcminute scale. At smaller sizes, arcsecond--scale radio maps made with
the VLA at 6~and 20~cm are presented in Vila et al. (1990). They find the
central source to dominate the nuclear region of the galaxy (being marginally
resolved---FWHM$\sim$0\secpoint7), and weak extended structure over
$\sim$2~arcsec in no particular direction.
Although this is our brightest Seyfert~2 galaxy, the spatial resolution and
counts are only sufficient to tell that there definitely is some asymmetric
soft--X--ray emission. Higher spatial--resolution and higher SNR data of
X--ray--weak Seyferts with future X--ray missions will be necessary to
determine
the general significance of the contribution of extended components to the
soft--X--ray spectrum of such objects.
\subsubsection{PSPC Images} \label{results_extent_pspc}
None of the targets shows extended emission in the PSPC image. (However, not being
primarily an imaging instrument, the resolution of the PSPC would only show
structure on much larger scales than the HRI, and cannot be used to rule out
sub--arcminute--scale structure, as exemplified by the fact that our HRI image
of NGC~5005 clearly shows structure not apparent in the PSPC images of the same
object.) Several of the images contain field objects $\sim$10--20$^{\prime}$\ from
the
target, clearly distinguished by the resolution of the PSPC. The only exception
is NGC~1144, which is not spatially separated from NGC~1143. Since the latter
is a non--active galaxy, the X--rays are likely to be mostly from NGC~1144;
however, we note that the PSPC spectrum is a combination of these two
sources.\footnote{This
object has the least counts of all, primarily due to obscuration by the
telescope support structure, so no strong conclusions can be drawn about its
spectrum.} It is interesting to note that TUM found serendipitous (optically)
unidentified X--ray sources about 1$^{\prime}$\ from each of the six Seyfert~2s
observed in their program. In some cases (e.g., NGC~1365) these sources are
likely bright X--ray sources in the host galaxy, and in others (e.g., Mkn~78)
they are likely low--luminosity AGNs. We looked for such sources in the field
of
our 12\mbox{~$\mu$m}\ Seyfert~2s, and found none. The number of Seyfert~2s (14) observed
between these two samples makes it highly unlikely that this difference could
be
explained simply by chance. One possible explanation is that the objects in TUM
are galaxies previously known to be relatively bright in the X--rays from
Einstein IPC observations, and these serendipitous sources could have
contributed to the Einstein flux.
\section{Discussion} \label{disc}
\subsection{The Standard Soft X--Ray Slope for X--Ray Weak Seyferts}
\label{disc_newslope}
Considering both our data and that of TUM, it appears that a steep spectral
slope, around $\Gamma$=3, should be considered the standard slope for X--ray--weak
Seyferts. This includes virtually all Seyfert~2s, as indicated by the results
that have been derived for Seyfert~2s displaying a wide range in
multiwavelength
characteristics. As discussed in \SS{targets_selection}, our objects were
chosen
from the 12\mbox{~$\mu$m}\ sample and thus have redder optical/infrared colors than the
objects observed by TUM, which are Markarian objects selected as having a
strong
UV--excess.
Even the prototypical Seyfert~2 galaxy, NGC~1068, resembles these objects.
Monier \& Halpern (1987) observed this object with Einstein, finding a
0.1---3.8~keV photon index of $\Gamma\sim3.0$, and N$_H$\ consistent with the
Galactic value. Our data from the RASS give a 0.1---2.0~keV value of
$\Gamma=2.78$ for this object (Rush et al. 1996), which is slightly harder, but
consistent when considering that our RASS data was fitted with N$_H$\ constrained
to N$_{H,gal}$.
This category of X--ray--steep AGN not only includes most Seyfert~2s, but some
X--ray--weak Seyfert~1/QSOs, such as PG~1351+640 and Mkn~1239. That the soft
X--ray source in these objects may be the same as in most Seyfert~2s is
consistent with their selection as being X--ray {\it weak\/} for
Seyferts~1/QSOs. In contrast, other Seyfert~1/QSOs, e.g. those observed by TGM,
were known to be relatively strong in the soft X--rays, and thus one would
expect those objects to have X--ray spectra more similar to conventional
Seyfert~1s. Thus, it seems that the standard Seyfert~2---Seyfert~1 dichotomy is
not the simplest way to categorize these AGN in the soft X--rays. Rather, we
could refer to (relatively) steep, X--ray--weak objects and flat,
X--ray--strong
objects, whose soft X--rays are probably dominated by different components.
We also find steep average spectral slopes in our RASS data (to be analyzed
thoroughly in Rush et al. 1996), of \agamsub{Sy1}=2.24$\pm0.49$ and
\agamsub{Sy2}=2.86$\pm0.48$ for 39 Seyfert~1s and 5 Seyfert~2s, respectively
(uncertainties quoted are 1$\sigma$ individual scatter). These fits were done
with N$_H$\ constrained to N$_{H,gal}$, and thus the best--fit slopes are likely a
little
steeper, depending mainly on the amount of internal obscuration. This could
place the average slope of the Seyfert~2s over 3 and that of the Seyfert~1s
around 2.4---2.5. This and the fact that there is a wide range of slopes for
the
Seyfert~1s, with over 1/3 being steeper than $\Gamma$=2.5 assuming no internal
absorption, make these results consistent with those for our pointed
observations---namely that all Seyfert~2s and some Seyfert~1s have slopes much
closer to 3 than to 2. Similar results have been found in other works, for
example Boller, Brandt, \& Fink (1995a), who surveyed 46 narrow--line
Seyfert~1s
with ROSAT and found them all to have extremely steep spectra (some with $\Gamma$\
as high as 5).
\subsection{Physical Interpretation} \label{disc_interpretation}
There are several competing explanations for the steep slopes observed in many
X--ray--weak Seyferts, as compared to the flatter slopes observed in
conventional (X--ray--strong) Seyferts. The physical models which may be able
to
explain all or part of the observed differences between steep--slope and
flat--slope Seyferts include:
(1) A separate, hard power--law present in steep objects which is very weak,
such as a scattered component. Although we see no evidence of such a component
in our fits, we cannot rule out this possibility, as observations in a larger
wavelength baseline of X--ray--weak Seyferts may detect such a component if it
is extremely faint.
(2) Much of the soft spectrum of steep objects being produced by the same
physical mechanism, located in the same place, as the soft excess observed in
many flat objects. In this model, steep objects have relatively more soft
excess
and less of the hard power--law.
The evidence for this type of spectrum would be that fits to a power--law--only
model would give a very steep slope, but that adding the soft excess would
flatten the underlying slope while improving the fit. As discussed in
\SS{results_fits_other} and \SS{individual}, we have evidence for this in
several of our objects, and even a pure black--body with no underlying
power--law cannot be ruled out in some cases. This is even more evident in TUM,
as most of their objects are fitted significantly better when either an
emission
line or Raymond--Smith plasma are added to the power--law. If we do assume that
a very soft excess exists in these objects, a physical model for this excess
still remains to be determined. For example, it could be thermal emission from
the galaxy, hot gas near the nucleus, iron and/or oxygen emission line(s), or
the UV bump shifted into the ultra--soft X--rays as suggested in Boller et al.
(1995a). But, again, we stress that such evidence is not universal, as several
of our objects show no definite preference for anything other than a
power--law.
(3) That the soft spectrum we see in X--ray--weak Seyferts represents a
component present in most or all Seyferts, but which is much weaker in X--ray
strong objects and is thus suppressed by the hard spectrum in those objects. If
so, is this universal component non--nuclear, i.e. similar to the soft X--rays
observed in normal or starburst galaxies (from, e.g., X--ray binaries and
SNRs)?
(4) That the soft spectra arise from the same physical process (and from the
same location) as the flat power--laws in some Seyfert~1s, but with a higher
value for $\Gamma$, caused by variance of one or more intrinsic physical
parameters.
For example, of several explanations Boller et al. (1995a) suggest for their
steep spectra, one of the more promising ones is that the central engine in
these objects is at a lower mass than other Seyfert~1s, and would thus have an
accretion disk emitting at a higher temperature, shifting the UV bump into the
low--energy end of the ROSAT band, steepening the X--rays. This idea is also
one
possible explanation for the steep spectra we found in PG~1351+640 and
Mkn~1239,
as well as other X--ray--weak Seyfert~1/QSOs. To test this idea thoroughly, one
would need to observe the {\it spread\/} in $\Gamma$\ for many X--ray--weak and
X--ray--strong Seyferts and see if there is a continuous range of observed
values, as opposed to a more--or--less bimodal distribution. If such a range is
observed, then determining any X--ray or multiwavelength parameter which is
correlated with $\Gamma$\ would provide information about the fundamental cause of
its variance.
Finally, an important caveat in this distinction between X--ray--weak and
strong
Seyferts is that our X--ray--weak Seyfert~1/QSOs are not exactly like our
Seyfert~2s in the soft X--rays, which is seen in several ways: (1) even though
the former have the same steep slope when fitted to a power--law, they are more
often fitted only by this steep power--law, as opposed to a power--law plus an
additional component (and PG~1351+640 cannot be fitted at all by any model
other
than a pure power--law); (2) they are also more luminous in the soft X--rays
than all but the very strongest Seyfert~2s; and (3) they show less indication
of
internal absorption (above the Galactic value): of all our objects, PG~1351+640
is the only one to not have even the slightest evidence for internal absorption
in a power--law fit, and several of our Seyfert~2s show much stronger evidence
for internal absorption than does Mkn~1239. This last difference is of
particular importance because it can affect the measured parameters in each of
the models listed above. These differences imply that, although the observed
soft X--ray emission from these Seyfert~1/QSOs is similar to that from
Seyfert~2s, the underlying physical processes are probably at least partially
different. Perhaps, for example, the X--ray--weak Seyfert~1/QSOs are best
explained by one or more of the models listed above, but the Seyfert~2s by
another. Thus, whereas it seems as though these relatively X--ray weak
Seyfert~1/QSOs should definitely not be strictly grouped with the more luminous
(flat--slope) Seyfert~1/QSOs with regards to the soft X--ray properties, they
still appear somewhat distinct from even the relatively X--ray strong
Seyfert~2s
and perhaps represent an intermediate or mixed class.
\section{Notes on Spectral Fits to Individual Objects} \label{individual}
\subsection{PG~1351+640 and Mkn~1239}
These two Seyfert~1/QSOs were relatively well observed, with 990 and 595 counts
obtained, respectively. Both were well fitted with a simple power--law. For our
strongest object, PG~1351+640, no improvement is obtained by allowing N$_H$\ to
vary, giving no indication of internal absorption. For Mkn~1239, an increase of
about a factor of 1.5 in N$_H$\ over the Galactic value reduces $\chi^2_{\nu}$\ from 0.95
to 0.67, perhaps indicating some internal absorption.
We tried to fit each object to the other models listed in Table~2. For
PG~1351+640, the parameters returned each time indicated that a single
power--law was preferred (i.e., the normalization of the other component was at or
near zero). Mkn~1239, on the other hand, fit well to a power--law model with
the
addition of a gaussian emission line around 0.7~keV. This fit was not, however,
better than those with a Raymond--Smith plasma or black--body replacing the
emission line. Thus, if there is a second component to the soft X--rays
spectrum, we cannot distinguish among several possibilities for its shape.
For PG~1351+640, we also separately fit the spectra which were taken during
1992~November and 1993~October to a power--law model. A slight increase in the
best--fit $\Gamma$\ is found in the more luminous state.
\subsection{NGC~424, NGC~4388, NGC~5005, and NGC~5135}
These four Seyfert~2s each yielded at least $\sim$400 counts (see Table~1),
sufficient for accurate spectral fitting. For these objects, an average photon
index of $\overline\Gamma=3.13$ (3.0, 3.2, 3.2, and 3.2, respectively) was
obtained when N$_H$\ was allowed to vary, and of $\overline\Gamma=2.00$ (1.7,
2.1,
1.9, and 2.3) when N$_H$\ was constrained to the Galactic value.
In all cases, we tried adding another component to the fit. In the case of
NGC~5135 the fit was improved at a significance level of $>90\%$. This object
has the hardest spectrum of these four Seyfert~2s. Considering that it is also
fitted by the largest N$_H$, the hard spectrum and the good fit to a second
component above 0.5~keV both probably indicate significant absorption of the
softest X--rays below 0.5~keV. Adding emission lines also improved the fits to
NGC~5005 ($>99\%$ significance level) and NGC~4388 ($>90\%$). Only in the case
of NGC~5005 was the emission line at the energy expected for Fe--L and/or
Oxygen--K; thus identification of the other components with a specific emission
process is not possible. We also fit NGC~5005 and NGC~5135 to a black--body
model and obtained better fits than to a power--law model, further indicating
that we don't know the source of the soft X--rays---whether they are from the
nonstellar active nucleus or from stellar processes such as X--ray binaries or
supernovae. In the latter case, we have some evidence that a small contribution
of the soft--X--rays may come from an extended component, as discussed in
\SS{results_extent_hri} for NGC~5005.
\subsection{IRAS~F01475--0740 and NGC~5929}
For these two objects, only 276 and 200 counts were obtained, allowing only 12
and 9 points (bins) for the spectral fitting, respectively. Interestingly,
relative to the 0.5---2.0~keV range, F01475--0740 has almost no counts below
0.5~keV, and NGC~5929 has very few. In fact, F01475--0740 has the hardest
spectrum of any object we observed, as indicated both by the hardness ratios in
Table~3 and by the very flat value of $\Gamma$. NGC~5929 also has a harder spectrum
than any of the objects discussed above, but not nearly as hard as
F01475--0740.
This may indicate that these objects are very heavily absorbed, which would
explain both the low overall flux and the hard spectra.
When adding another component to the power--law for F01475--0740, $\Gamma$\ always
tended towards zero (as flat as we would allow), with only a small contribution
from the other component---indicating nothing more than the very hard spectrum
of the simple power--law. For NGC~5929, a slight improvement in the fit was
obtained by adding a second component, similar to some of the brighter four
Seyfert~2s discussed above, but with much less statistical significance.
\subsection{NGC~3982 and NGC~1144}
These two objects yielded so few counts that we can only give a very rough
estimate of the best--fit photon index, which is 2.12 and 1.90 for NGC~3982 and
NGC~1144, respectively, with N$_H$\ fixed. Only NGC~3982 had enough photons to allow a fit
with N$_H$\ variable, which yielded $\Gamma=3.4$. Although this slope is similar
to the values for our bright Seyfert~2s, the spectra do not look similar.
NGC~3982 has the softest and NGC~1144 the second--hardest spectrum of any of
our Seyfert~2s. There were not enough counts to fit to composite models, but we
did try to fit these spectra to a simple black--body, to estimate whether or
not
a power--law is even the most descriptive of the soft X--rays. For NGC~3982
there was only marginal improvement in the fit, but for NGC~1144 $\chi^2_{\nu}$\ did
drop by almost a factor of two for the black--body fit as compared to a
power--law.
\subsection{CGCG~022--021}
In addition to the 10 Seyfert galaxies discussed above, we also observed one
IR--luminous non--Seyfert which had been detected by the ROSAT All--Sky Survey.
We would expect the ROSAT spectra of this type of object to be similar to those
from Seyfert~2s (both of which emit strongly in the thermal infrared, but
relatively weakly in the X--rays), if the X--ray emission in the latter is
produced by the normal processes of stellar evolution, as in classic starburst
nuclei like NGC~7714 (Weedman et al. 1981).
Unfortunately, the observation of CGCG~022--021 yielded only 81$\pm$30 counts,
and a count--rate of 0.010 $\pm$0.003 cts/s, which is not sufficient for a
detailed spectral analysis. There may be some indication of variability, since
the RASS count--rate was 0.064 $\pm$0.018 cts/s, indicating a $\gapprox2\sigma$
change. However, this is very tentative as the (background--subtracted) counts
obtained in the pointed and RASS observations are only 81 and 26, respectively.
We do see, though, that this non--Seyfert has a hard spectrum quite similar to
that of several of the weaker Seyfert~2s (F01475--0740, NGC~5929, and NGC~1144).
This indicates that heavy internal absorption is probably present. To describe
the spectrum further, we attempted to fit simple models to the X--ray flux,
although with high uncertainties. A simple power--law and a black--body model
provided similarly accurate fits ($\chi^2_{\nu}$\ of 1.2 and 1.3, respectively),
however
the error bars are high.
\section{Summary and Conclusions} \label{summary}
We have analyzed pointed ROSAT PSPC spectra of 11 objects selected as having
atypical soft X--ray fluxes. These include 8 Seyfert~2s and one IR--luminous
non--Seyfert selected from the Extended 12\mbox{~$\mu$m}\ Galaxy Sample, which all have
relatively strong detections in the ROSAT All--Sky Survey, as compared to other
objects in their class. We also observed one X--ray--weak Seyfert~1/QSO from this
sample and a similar object selected from the PG Bright Quasar Survey.
We found both Seyfert~1/QSOs, Mkn~1239 and PG~1351+640, to vary in flux by
factors of 2 and 1.5, over periods of less than 2 and 1 year, respectively.
Both
objects had steeper spectra in their more luminous state, indicating that the
variability was mainly due to the softest X--rays, which are confined to a size
of less than a parsec.
All of our Seyfert~2s which had sufficient counts for accurate spectral
fitting,
as well as both Seyfert~1/QSOs, have soft X--ray photon indices of $\sim3$,
similar to the Seyfert~2s observed by TUM. The widespread occurrence of such
steep slopes suggests that this value of $\Gamma\sim3$ is the norm for a wide
variety of AGN, namely Seyfert~2s {\it and\/} many Seyfert~1/QSOs. Therefore,
discussing relatively steep ($\Gamma\sim3$), X--ray--weak objects versus flat
($\Gamma\sim2$), X--ray--strong objects may be a more fundamental way to
separate
Seyferts with respect to the soft X--rays than the usual type~1--type~2
dichotomy (derived primarily from optical spectra).
There are several possible explanations for these steep slopes. One is the
presence of a very soft ($<1$~keV) excess in addition to a flatter underlying
continuum. We see strong evidence in the spectral fits to some of our objects
for such a component, but a physical model for this excess still needs to be
determined---it could be strong iron and/or oxygen line emission, a
black--body,
or even a thermal plasma. However, several of our objects show no definite
preference for anything other than a steep power--law. Alternatively, both flat
and steep components could be present in some Seyferts, with one or the other
dominating depending on internal physical conditions. Or the steep and flat
spectra observed in different objects may have the same basic origin, but with
variance of one or more parameters affecting the measured slope. Distinguishing
between these and other models for the X--ray emission from Seyferts can best
be
done by testing multiple--component models over the entire 0.1---10~keV range,
where the distinguishing spectral signatures of competing models can be most
clearly identified. Thus, obtaining high--SNR spectra of X--ray--weak Seyferts,
with several thousand counts in both the soft and hard X--rays, should prove
a profitable pursuit of current and future X--ray missions.
Finally, we obtained a ROSAT HRI image of one Seyfert~2 (NGC~5005) and found
about 13\% of the flux to come from an extended component. This implies that
multiple components of the soft--X--ray spectra of Seyferts may arise in
spatially distinct regions, as has been previously observed primarily in
brighter objects. Further, deeper images of X--ray--weak Seyferts will be
necessary to determine the physical processes giving rise to these components,
as well as how common such phenomena are in Seyfert galaxies.
\acknowledgements
We thank Jane Turner for much help in understanding the PROS and XSPEC
software,
the ROSAT data, and the specifications of the PSPC, and for providing us with
the results of TUM and TGM before publication. This work was supported by NASA
grants NAG~5--1358 and NAG~5--1719.
\clearpage
\addtocounter{page}{+3}
The concept of first passage time arises in various applications in
biology, biochemistry, ecology, physics, and biophysics (see
\cite{grigoriev2002kinetics}, \cite{holcman2004escape}, \cite{redner},
\cite{Venu}, \cite{schuss2007narrow}, \cite{ricc1985}, and the
references therein). Narrow escape or capture problems are first
passage time problems that characterize the expected time it takes for
a Brownian ``particle'' to reach some absorbing set of small
measure. These problems are of singular perturbation type as they
involve two spatial scales: the ${\mathcal O}(1)$ spatial scale of the
confining domain and the ${\mathcal O}(\varepsilon)$ asymptotically small
scale of the absorbing set. Narrow escape and capture problems arise
in various applications, including estimating the time it takes for a
receptor to hit a certain target binding site, the time it takes for a
diffusing surface-bound molecule to reach a localized signaling
region on the cell membrane, or the time it takes for a predator to
locate its prey, among others (cf.~\cite{benichou2014first},
\cite{BL}, \cite{coombs2009}, \cite{cheviakov2010asymptotic},
\cite{Holcman2014}, \cite{LBW2017}, \cite{singer2006narrow},
\cite{PWPK2010}, \cite{Venu}). A comprehensive overview of the
applications of narrow escape and capture problems in cellular biology
is given in \cite{HolcmanReview2014}.
In this paper, we consider a narrow capture problem that involves
determining the MFPT for a Brownian particle, confined in a bounded
two-dimensional domain, to reach one of $m$ small stationary circular
absorbing traps located inside the domain. The average MFPT for this
diffusion process is the expected time for capture given a uniform
distribution of starting points for the random walk. In the limit of
small trap radius, this narrow capture problem can be analyzed by
techniques in strong localized perturbation theory
(cf.~\cite{WHK1993}, \cite{WK1993}). For a disk-shaped domain, spatial
configurations of small absorbing traps that minimize the average MFPT
were identified in \cite{KTW2005}. However, the problem of
identifying optimal trap configurations in other geometries is largely
open. In this direction, the specific goal of this paper is to develop
and implement a hybrid asymptotic-numerical theory to identify optimal
trap configurations in near-disk domains and in the ellipse.
In \S~\ref{sec:mfpt_asy}, we use a perturbation approach to derive a
two-term approximation for the average MFPT in a class of near-disk
domains in terms of a boundary deformation parameter $\sigma\ll
1$. In our analysis, we allow for a smooth, but otherwise arbitrary,
star-shaped perturbation of the unit disk that preserves the domain
area. At each order in $\sigma$, an approximate solution is derived
for the MFPT that is accurate to all orders in
$\nu\equiv {-1/\log\varepsilon}$, where $\varepsilon\ll 1$ is the common radius of
the $m$ circular absorbing traps contained in the domain. To
leading-order in $\sigma$, this small-trap singular perturbation
analysis is formulated in the unit disk and leads to a linear
algebraic system for the leading-order average MFPT involving the Neumann
Green's matrix. At order ${\mathcal O}(\sigma)$, a further
linear algebraic system that sums all logarithmic terms in $\nu$ is
derived that involves the Neumann Green's matrix and certain weighted
integrals of the boundary profile characterizing the domain
perturbation. In \S~\ref{sec:mfpt_num}, we show how to numerically
implement this asymptotic theory by using the analytical expression
for the Neumann Green's function for the unit disk together with
the trapezoidal rule to compute certain weighted integrals of the
boundary profile with high precision. From this numerical
implementation of our asymptotic theory, and combined with either a
simple gradient descent procedure or a particle swarming
approach \cite{kennedy2010}, we can numerically identify optimal trap
configurations that minimize the average MFPT in near-disk domains. In
\S~\ref{sec:examples}, we illustrate our hybrid asymptotic-numerical
framework by determining some optimal trap configurations in various
specific near-disk domains.
For a general 2-D domain containing small absorbing traps, a singular
perturbation analysis in the limit of small trap radii, related to
that in \cite{Venu}, \cite{coombs2009}, \cite{KTW2005}, and
\cite{WHK1993}, shows that the average MFPT is closely approximated by
the solution to a linear algebraic system involving the Neumann
Green's matrix. The challenge in implementing this analytical theory
is that, for an arbitrary 2-D domain, a full PDE numerical solution of
the Neumann Green's function and its regular part is typically
required to calculate this matrix. However, for an elliptical domain,
in \eqref{cell:finz_g} and \eqref{cell:R0} below, we provide a new
explicit representation of this Neumann Green's function and its
regular part. These explicit formulae allow for a rapid numerical
evaluation of the Neumann Green's interaction matrix for a given
spatial distribution of the centers of the circular traps in the
ellipse. The linear algebraic system determining the average MFPT is
then coupled to a gradient descent numerical procedure in order to
readily identify optimal trap configurations that minimize the average
MFPT in an ellipse. Although, a similar formula for the Neumann
Green's function has been derived previously for a rectangular domain
(cf.~\cite{marshall}, \cite{MHB}, \cite{KWW2010}), and an explicit and
simple formula exists for the disk \cite{KTW2005}, to our knowledge
there has been no prior derivation of a rapidly converging infinite
series representation for the Neumann Green's function in an
ellipse. The derivation of this Neumann Green's function using
elliptic cylindrical coordinates is deferred until \S~\ref{sec:g_ell}.
With this explicit approach to determine the Neumann Green's matrix,
in \S~\ref{sec:ellipse} we develop a hybrid asymptotic-numerical
framework to approximate optimal trap configurations that minimize the
average MFPT in an ellipse of a fixed area. In \S~\ref{ell:ex} we
implement our hybrid method to investigate how the optimal trap
patterns change as the aspect ratio of the ellipse is varied. The
results from the hybrid theory for the ellipse are favorably compared
with full PDE numerical results computed from a computationally
intensive numerical procedure of using the closest point method
\cite{IWWC2019} to compute the average MFPT and a particle swarming
approach \cite{kennedy2010} to numerically identify the optimum trap
configuration. As the ellipse becomes thinner, our hybrid theory shows
that the optimal trap pattern for $m=2,\ldots,5$ identical traps
becomes collinear along the semi-major axis of the ellipse. In the
limit of a long and thin ellipse, in \S~\ref{sec:thin} a thin-domain
asymptotic analysis is formulated and implemented to accurately
predict the optimal locations of collinear trap configurations and the
corresponding optimal average MFPT.
In \S~\ref{sec:discussion}, we show that the optimal trap
configurations that minimize the average MFPT also correspond to trap
patterns that maximize the coefficient of order ${\mathcal O}(\nu^2)$
in the asymptotic expansion of the fundamental Neumann eigenvalue of
the Laplacian in the perforated domain. This fundamental eigenvalue
characterizes the rate of capture of the Brownian particle by the
traps. Eigenvalue optimization problems for the fundamental Neumann
eigenvalue in a domain with small absorbing traps have been studied in
\cite{KTW2005} for the unit disk. The results herein extend this
previous analysis to the ellipse and to near-disk domains.
\section{Asymptotics of the MFPT in Near-Disk Domains}\label{sec:mfpt_asy}
We derive an asymptotic approximation for the MFPT for a class of
near-disk 2-D domains that are defined in polar coordinates by
\begin{equation}\label{PerturbPar}
\Omega_{\sigma} = \Big{ \{ }(r,\theta)\, \Big{|}\, 0 < r \leq 1 +
\sigma h(\theta)\,, \,\, 0 \leq \theta \leq 2 \pi \Big{ \} }\,,
\end{equation}
where the boundary profile, $h(\theta)$, is assumed to be an
${\mathcal O}(1)$, $C^{\infty}$ smooth $2\pi$ periodic function with
$\int_0^{2\pi} h(\theta)\,d\theta=0$. Observe that
$\Omega_{\sigma}\to \Omega$ as $\sigma \to 0$, where $\Omega$ is the
unit disk. Since $\int_{0}^{2\pi} h(\theta)\, d\theta=0$, the domain
area $|\Omega_{\sigma}|$ for $\sigma\ll 1$ is
$|\Omega_{\sigma}|= \pi + {\mathcal O}(\sigma^2)$.
Inside the perturbed disk $\Omega_{\sigma}$, we assume that there are
$m$ circular traps of a common radius $\varepsilon\ll 1$ that are
centered at arbitrary locations $\v{x}_1,\ldots,\v{x}_m$ with
$|\v{x}_i-\v{x}_j|={\mathcal O}(1)$ and
$\mbox{dist}(\partial\Omega_{\sigma},\v{x}_j)= {\mathcal O}(1)$ as
$\varepsilon\to 0$. The $j$-th trap, centered at some
$\v{x}_j\in \Omega_{\sigma}$, is labelled by
$\Omega_{\varepsilon j} = \{\v{x} : |\v{x} - \v{x}_j| \leq \varepsilon \}$.
The near-disk domain with the union of the trap regions deleted is
denoted by $\bar{\Omega}_{\sigma}$. In $\bar{\Omega}_{\sigma}$, it is
well-known that the mean first passage time (MFPT) for a Brownian
particle starting at a point $\v{x} \in \bar{\Omega}_{\sigma}$ to be
absorbed by one of the traps satisfies (cf.~\cite{redner})
\begin{equation}\label{Ellip_Model}
\begin{split}
D\, \Delta u = -1\,, &\quad \v{x} \in \bar{\Omega}_{\sigma}\,; \qquad
\bar{\Omega}_{\sigma} \equiv \Omega_{\sigma} \setminus
\cup_{j=1}^{m}\Omega_{\varepsilon j} \,, \\
\partial_n u = 0\,, \quad \v{x} \in \partial \Omega_{\sigma}\,; &
\qquad u = 0\,, \quad \v{x} \in \partial \Omega_{\varepsilon j}\,,
\quad j = 1, \ldots,m\,.
\end{split}
\end{equation}
In terms of polar coordinates, the Neumann boundary condition in
\eqref{Ellip_Model} becomes
\begin{equation}\label{PolarBC}
\begin{split}
u_r - &\frac{ \sigma h_{\theta}}{(1 + \sigma h)^2} u_{\theta}= 0 \quad
\text{on} \quad r = 1 +\sigma h(\theta)\,.
\end{split}
\end{equation}
For an arbitrary arrangement $\lbrace{\v{x}_1,\ldots,\v{x}_m\rbrace}$ of the
centers of the traps, and for $\sigma\to 0$ and $\varepsilon\to 0$, we
will derive a reduced problem consisting of two linear algebraic
systems that provide an asymptotic approximation to the MFPT that has
an error ${\mathcal O}(\sigma^2,\varepsilon^2)$. These
linear algebraic systems involve the Neumann Green's matrix and certain
weighted integrals of the boundary profile $h(\theta)$.
To analyze \eqref{Ellip_Model}, we use a regular perturbation series
to approximate \eqref{Ellip_Model} for the near-disk domain to
problems involving a unit disk. We expand the MFPT $u$ as
\begin{equation}\label{U_Sigma_Expand}
\begin{split}
u = u_0 + \sigma u_1 + \ldots \,,
\end{split}
\end{equation}
and substitute it into \eqref{Ellip_Model} and \eqref{PolarBC}. This
yields the leading-order problem
\begin{equation}\label{LeadingOrder}
\begin{split}
D\, \Delta u_0 = -1\,, &\quad \v{x} \in \bar{\Omega}\,; \qquad
\bar{\Omega} \equiv \Omega \setminus \cup_{j=1}^{m}\Omega_{\varepsilon j}\,, \\
\partial_n u_0 = 0\,, \quad \text{on} \quad r = 1\,; & \qquad u_0 = 0\,,
\quad \v{x} \in \partial \Omega_{\varepsilon j}\,, \quad j = 1, \ldots,m\,,
\end{split}
\end{equation}
together with the following problem for the next order correction $u_1$:
\begin{equation}\label{OrderSigma}
\begin{split}
\Delta u_1 = 0 \,, &\quad \v{x} \in \bar{\Omega}\,; \qquad \partial_r u_1 =
-h u_{0rr} + h_{ \theta} u_{0 \theta}\,, \quad \text{on} \quad r = 1\,;
\\ \quad u_1 &=0\,, \quad \v{x} \in \partial \Omega_{\varepsilon j}\,,
\quad j = 1, \ldots,m\,.
\end{split}
\end{equation}
Observe that \eqref{LeadingOrder} and \eqref{OrderSigma} are
formulated on the unit disk and not on the perturbed disk. Assuming
$\varepsilon^2 \ll \sigma $, we use \eqref{U_Sigma_Expand} and
$|\Omega_{\sigma}| = |\Omega| + {\mathcal O}(\sigma^2)$ to derive an
expansion for the average MFPT, defined by
$\overline{u} \equiv \frac{1}{|\bar{\Omega}_{\sigma}|}\int_{\bar{\Omega}_{\sigma}} u \,
\text{d}\v{x}$, in the form
\begin{equation}\label{AveMFPT_Perturb}
\begin{split}
\overline{u} = \frac{1}{|\Omega|} \int_{\Omega } u_0
\,\text{d}\v{x} + \sigma \left[ \frac{1}{|\Omega |} \int_{\Omega }
u_1 \,\text{d}\v{x} + \frac{1}{|\Omega|}\int_{0}^{2\pi}
h(\theta)\,u_0|_{r=1} \, \text{d}\theta \right] +
\mathcal{O}(\sigma^2,\varepsilon^2)\,,
\end{split}
\end{equation}
where $|\Omega|=\pi$ and $u_0|_{r=1}$ is the leading-order solution
$u_0$ evaluated on $r=1$.
Since the asymptotic calculation of the leading-order solution $u_0$
by the method of matched asymptotic expansions in the limit
$\varepsilon\to 0$ of small trap radius was done previously in
\cite{coombs2009} (see also \cite{Venu} and \cite{WHK1993}), we only
briefly summarize the analysis here. In the inner region near the
$j$-th trap, we define the inner variables
$\v{y} = \varepsilon^{-1}(\v{x} - \v{x}_j)$ and
$u_0(\v{x}) = v_j(\varepsilon \v{y}+\v{x}_j)$ with $\rho = |\v{y}|$, for
$j = 1, \ldots, m$. Upon writing \eqref{LeadingOrder} in terms of
these inner variables, we have for $\varepsilon \to 0$ and for each
$j=1,\ldots,m$ that
\begin{equation}\label{LeadingOrderInner}
\begin{split}
\Delta_{\rho}\, v_j & = 0 \,, \quad \rho > 1\,;\qquad
v_j =0\,, \quad \mbox{on} \,\,\, \rho = 1\,,
\end{split}
\end{equation}
where
$\Delta_{\rho} \equiv \partial_{\rho \rho} + \rho^{-1}
\partial_{\rho}$. This admits the radially symmetric solution
$v_j=A_j\log\rho$, where $A_j$ is an unknown constant. From
an asymptotic matching of the inner and outer solutions we
obtain the required singularity condition for the outer
solution $u_0$ as $\v{x} \to \v{x}_j$ for $j = 1, \ldots, m$.
In this way, we obtain that $u_0$ satisfies
\begin{subequations}\label{LeadingOrder_CompleteOuter}
\begin{align}
\Delta u_0 = -{1/D}\,, \quad \v{x} & \in \Omega\setminus
\lbrace{\v{x}_1,\ldots,\v{x}_{m}\rbrace}\,;\quad \partial_r u_{0} =0\,,
\,\,\, \v{x} \in \partial \Omega\,; \label{LeadingOrder_CompleteOuterA}\\
u_0 \sim A_j \log |\v{x} - \v{x}_j| &+ A_j/\nu \quad \text{as} \quad\v{x} \to \v{x}_j\,,
\qquad j = 1, \ldots, m\,,\label{LeadingOrder_CompleteOuterB}
\end{align}
\end{subequations}
where $\nu \equiv -1/\log\varepsilon$. In terms of the Delta distribution,
\eqref{LeadingOrder_CompleteOuter} implies that
\begin{equation}\label{LeadingOuter_delta}
\Delta u_0 = -\frac{1}{D} + 2 \pi \sum_{j = 1}^{m} A_j \delta(\v{x} - \v{x}_j)\,,
\quad \v{x} \in \Omega \,; \qquad \partial_r u_{0} =0\,, \,\,\,
\v{x} \in \partial \Omega\,.
\end{equation}
By applying the divergence theorem to \eqref{LeadingOuter_delta} over
the unit disk we obtain that $\sum_{j=1}^{m} A_j = {|\Omega|/(2\pi D)}$.
The solution to \eqref{LeadingOuter_delta} is represented as
\begin{align}\label{SolutionOuterLead}
u_0 = -2 \pi \sum_{k=1}^{m} A_k G(\v{x} ; \v{x}_k) + \overline{u}_0 \,; \qquad
\overline{u}_0 = \frac{1}{|\Omega|} \int_{\Omega} u_0 \, \text{d}\v{x}\,,
\end{align}
where $G(\v{x} ; \v{x}_j)$ is the Neumann Green's function for the unit disk,
which satisfies
\begin{subequations}\label{GreenFunctionProb}
\begin{gather}
\Delta G = \frac{1}{|\Omega|} - \delta(\v{x} - \v{x}_j)\,,\quad \v{x} \in \Omega\,;
\quad \partial_n G =0\,, \,\,\, \v{x} \in \partial \Omega\,; \quad
\int_{\Omega} G \,\text{d}\v{x}=0\,, \label{GreenFunctionProb_A}\\
G \sim -\frac{1}{2\pi} \log{|\v{x} - \v{x}_j|} + R_j + \nabla_{\v{x}}R_j\cdot(\v{x}-\v{x}_j)
\quad \text{as} \quad\v{x} \to \v{x}_j\,. \label{GreenFunctionProb_B}
\end{gather}
\end{subequations}
Here, $R_j \equiv R(\v{x}_j)$ is the regular part of the Green's function
at $\v{x} = \v{x}_j$. Expanding \eqref{SolutionOuterLead} as $\v{x} \to \v{x}_j$,
and using the singularity behaviour of $G(\v{x} ; \v{x}_j)$ given in
\eqref{GreenFunctionProb_B}, together with the far-field behavior
\eqref{LeadingOrder_CompleteOuterB} for $u_0$, we obtain the
matching condition:
\begin{equation}\label{SolutionOuterLead_Expand}
-2 \pi A_j\,R_j -2 \pi
\sum_{i \neq j}^{m} A_i \, G(\v{x}_j ; \v{x}_i) + \overline{u}_0 \sim
{A_j/\nu} \,, \qquad \mbox{for} \quad j=1,\ldots, m\,.
\end{equation}
This yields a linear algebraic system for $\overline{u}_0$ and
$\mathcal{A} \equiv (A_1, \ldots, A_{m})^T$, given by
\begin{align}\label{Alg_Matrix}
( I + 2\pi \nu \, \mathcal{G}) \mathcal{A} = \nu\, \overline{u}_0 \,\v{e}\,,
\qquad \v{e}^T \mathcal{A} = \frac{|\Omega|}{2\pi D} \,.
\end{align}
Here, $\v{e} \equiv (1,\ldots,1)^T$, $\nu = -1/\log\varepsilon$, $I$
is the $m\times m$ identity matrix, and $\mathcal{G}$ is the symmetric
Green's matrix with matrix entries given by
\begin{align}\label{GreenMAtrix}
(\mathcal{G})_{jj} = R_j
\quad \text{and} \quad (\mathcal{G})_{ij} = (\mathcal{G})_{ji} =
G(\v{x}_i ; \v{x}_j) \,\,\, \text{for} \,\,\, i \neq j \,.
\end{align}
We left-multiply the equation for $\mathcal{A}$ in \eqref{Alg_Matrix}
by $\v{e}^T$, which isolates $\overline{u}_0$. By using this
expression in \eqref{Alg_Matrix}, and defining the matrix $E$ by
$E={\v{e}\v{e}^T/m}$, we get
\begin{equation}\label{u0_bar}
\Big{[} I + 2 \pi \nu (I - E)\mathcal{G} \Big{]} \mathcal{A} =
\frac{|\Omega|}{2\pi D m} \v{e} \,, \quad \text{and} \quad
\overline{u}_0 = \frac{|\Omega|}{2\pi D \nu m}
+ \frac{2 \pi}{m} \v{e}^T \mathcal{G} \mathcal{A} \,.
\end{equation}
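For readers implementing \eqref{u0_bar}, the short Python sketch below
(an illustration, not part of the analysis; the Green's matrix entries
are placeholders) performs the solve and checks the constraint
$\v{e}^T\mathcal{A}={|\Omega|/(2\pi D)}$, which holds automatically since
$\v{e}^T(I-E)=0$:
\begin{verbatim}
import numpy as np

# Illustrative parameters: trap radius eps, diffusivity D, |Omega| = pi.
eps, D, area = 0.05, 1.0, np.pi
nu = -1.0 / np.log(eps)

# Placeholder symmetric Green's matrix for m = 2 traps; in practice its
# entries are (G)_jj = R_j and (G)_ij = G(x_i; x_j), as in (GreenMAtrix).
calG = np.array([[0.10, -0.05],
                 [-0.05, 0.10]])
m = calG.shape[0]
e = np.ones(m)
E = np.outer(e, e) / m                        # E = e e^T / m

# Solve [I + 2 pi nu (I - E) G] A = |Omega| e / (2 pi D m):
lhs = np.eye(m) + 2*np.pi*nu*(np.eye(m) - E) @ calG
A = np.linalg.solve(lhs, area * e / (2*np.pi*D*m))

# u0_bar = |Omega|/(2 pi D nu m) + (2 pi/m) e^T G A:
u0_bar = area/(2*np.pi*D*nu*m) + (2*np.pi/m) * (e @ calG @ A)

assert np.isclose(e @ A, area/(2*np.pi*D))    # e^T A = |Omega|/(2 pi D)
print(A, u0_bar)
\end{verbatim}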
\begin{remark} {\em The result \eqref{u0_bar} effectively sums all the
logarithmic terms in powers of $\nu={-1/\log\varepsilon}$. To estimate the
error of this approximation with respect to the leading-order
in $\sigma$ problem \eqref{LeadingOrder}, we calculate using
\eqref{SolutionOuterLead} the refined local behavior
\begin{equation}\label{err:1}
u_0\sim -2 \pi \left(A_j\,R_j +\sum_{i \neq j}^{m} A_i \, G(\v{x}_j ; \v{x}_i)
\right) + \overline{u}_0 + \v{f}_j \cdot (\v{x}-\v{x}_j) \,, \quad
\mbox{as} \quad \v{x}\to \v{x}_j,
\end{equation}
where
$\v{f}_j\equiv -2\pi\left(A_j\nabla_{\v{x}} R_{j} + \sum_{i \neq j}^{m}
A_i \, \nabla_{\v{x}}G(\v{x} ; \v{x}_i)\vert_{\v{x}=\v{x}_j}\right)$. To account for this
gradient term, near the $j$-th trap we must modify the inner
expansion as $v_j\sim A_j\log\rho + \varepsilon v_{j1}$. Here $\Delta_{\v{y}}v_{j1}=0$
in $|\v{y}|\geq 1$, with $v_{j1}=0$ on $|\v{y}|=1$ and
$v_{j1}\sim \v{f}_j\cdot \v{y}$ as $|\v{y}|\to\infty$. The solution is
$v_{j1}=\v{f}_j\cdot\left(\v{y} - {\v{y}/|\v{y}|^2}\right)$. The far field behavior
for $v_{j1}$ implies that in the outer region we must have that $u\sim u_0
+\varepsilon^2 w_0+\cdots$, where $w_0\sim -\v{f}_j\cdot{(\v{x}-\v{x}_j)/|\v{x}-\v{x}_j|^2}$
as $\v{x}\to \v{x}_j$. This shows that the $\varepsilon$-error estimate for $u_0$ is
${\mathcal O}(\varepsilon^2)$, as claimed in \eqref{AveMFPT_Perturb}.}
\end{remark}
Next, we study the $\mathcal{O}(\sigma)$ problem for $u_1$ given in
\eqref{OrderSigma}. We construct an inner region near each of the
traps by introducing the inner variables
$\v{y} = \varepsilon^{-1}(\v{x} - \v{x}_j)$ and
$V_j(\v{y}) = u_1(\varepsilon \v{y}+\v{x}_j)$ with $ \rho = |\v{y}| $. From
\eqref{OrderSigma}, this yields the same leading-order inner problem
\eqref{LeadingOrderInner} with $v_j$ replaced by $V_j$. The radially
symmetric solution is $V_j = B_j \log\rho$, where $B_j$ is a constant
to be found. By matching this far-field behavior of the inner solution
to the outer solution we obtain the singularity behavior for
$u_1$ as $\v{x} \to \v{x}_j$ for $j = 1, \ldots, m$. In this way, we find
from \eqref{OrderSigma} that $u_1$ satisfies
\begin{subequations}\label{OrderSigma_CompleteOuter}
\begin{align}
\Delta u_1 & = 0\,, \quad \v{x}\in \Omega \setminus \{\v{x}_1,\ldots,\v{x}_{m} \}\,;
\quad \partial_r u_1 = F(\theta) \,, \quad \text{on}
\quad r = 1; \label{OrderSigma_CompleteOuterA}\\
u_1 &\sim B_j \log{|\v{x} - \v{x}_j|} + B_j/\nu \quad \text{as} \quad
\v{x} \to \v{x}_j\,, \quad j = 1, \ldots, m\,,\label{OrderSigma_CompleteOuterB}
\end{align}
where $\nu = -1/ \log\varepsilon$ and $F(\theta)$ is defined by
\begin{equation} \label{ftheta}
F(\theta) \equiv -h u_{0rr}\vert_{r=1} + h_{ \theta} u_{0 \theta}\vert_{r=1} =
\left( h u_{0\theta} \right)_{\theta} + \frac{h}{D} \,.
\end{equation}
In deriving \eqref{ftheta} we used
$u_{0rr}=-u_{0\theta\theta}-{1/D}$ at $r=1$, which follows from the PDE in
\eqref{LeadingOrder} together with $u_{0r}=0$ on $r=1$.
\end{subequations}
Next, we introduce the Dirac distribution and write the problem
\eqref{OrderSigma_CompleteOuter} for $u_1$ as
\begin{align}\label{OrderSigma_CompleteOuter2}
\Delta u_1 = 2\pi \sum_{i=1}^{m} B_i \,\,\delta(\v{x} - \v{x}_i)\,,
\quad \v{x} \in \Omega \,; \qquad u_{1r} = F(\theta)\,, \quad \text{on}
\quad r = 1\,.
\end{align}
Since $\int_0^{2 \pi} F(\theta) \, \text{d}\theta = 0$, the divergence
theorem yields $\sum_{j=1}^{m} B_j = 0$. We decompose
\begin{equation}\label{u1_Sol}
u_1 = -2 \pi \sum_{i=1}^{m} B_i G(\v{x} ; \v{x}_i) + u_{1p}+ \overline{u}_1\,,
\end{equation}
where $\overline{u}_1$ is the unknown average of $u_1$ over the unit
disk, and $G(\v{x};\v{x}_i)$ is the Neumann Green's function satisfying
\eqref{GreenFunctionProb}. Here, $u_{1p}$ is taken to be the unique
solution to
\begin{align}\label{u1P_Prob}
\Delta u_{1p} &= 0, \quad \v{x} \in \Omega ; \quad
\partial_r u_{1p} = F(\theta) \, \quad \text{on} \quad r = 1; \quad
\int_{\Omega} u_{1p} \, \text{d}\v{x} = 0 \,.
\end{align}
Next, by expanding \eqref{u1_Sol} as $\v{x} \to \v{x}_j$, we use the
singularity behaviour of $G(\v{x} ; \v{x}_j)$ as given in
\eqref{GreenFunctionProb_B} to obtain the local behavior of $u_1$ as
$\v{x} \to \v{x}_j$, for each $j=1,\ldots,m$. The asymptotic matching
condition is that this behavior must agree with that given in
\eqref{OrderSigma_CompleteOuterB}. In this way, we obtain a linear
algebraic system for the constant $\overline{u}_1$ and the vector
$\v{B} = (B_1,\ldots,B_{m})^T$, which is given in matrix form by
\begin{equation}\label{System_BNu_Mat}
(I + 2 \pi \nu \mathcal{G} )\v{B} = \nu \overline{u}_1 \v{e} + \nu \v{u}_{1p}\,,
\qquad \v{e}^T \v{B} = 0 \,.
\end{equation}
Here, $I$ is the identity, $\v{e} = (1,\ldots,1)^T$, and
$\v{u}_{1p} = (u_{1p}(\v{x}_1), \ldots, u_{1p}(\v{x}_{m}))^T$. Next, we left
multiply the equation for $\v{B}$ by $\v{e}^T$. This determines
$\overline{u}_1$, which is then re-substituted into
\eqref{System_BNu_Mat} to obtain the uncoupled problem
\begin{equation}\label{u1_bar}
\Big{[} I + 2 \pi \nu (I - E)\mathcal{G} \Big{]} \v{B} = \nu (I - E)\v{u}_{1p}\,,
\quad \text{and} \quad \overline{u}_1 = - \frac{1}{m}
\v{e}^T \v{u}_{1p} + \frac{ 2 \pi}{m} \v{e}^T \mathcal{G} \v{B} \,,
\end{equation}
where $E\equiv {\v{e}\v{e}^T/m}$. Since $\v{e}^T(I-E)=0$, we observe from
\eqref{u1_bar} that $\v{e}^T\v{B}=0$, as required. Equation
\eqref{u1_bar} gives a linear system for the $\mathcal{O}(\sigma)$ average
MFPT $\overline{u}_1$ in terms of the Neumann Green's matrix $\mathcal{G}$,
and the vector $\v{u}_{1p}$.
To determine $u_{1p}(\v{x}_j)$, we use Green's second identity on
\eqref{u1P_Prob} and \eqref{GreenFunctionProb} to obtain a
line integral over the boundary $\v{x}\in \partial\Omega$ of the unit
disk. Then, by using \eqref{ftheta} for $F(\theta)$, integrating
by parts and using $2\pi$ periodicity we get
\begin{equation}\label{u1p:bnd_2}
u_{1p}(\v{x}_j) = \int_{0}^{2\pi} G(\v{x};\v{x}_j) F(\theta) \, d\theta
= \int_{0}^{2\pi} G(\v{x};\v{x}_j) \frac{h(\theta)}{D}\,
d\theta - \int_{0}^{2\pi} h(\theta) u_{0\theta} \partial_{\theta}
G(\v{x};\v{x}_j) \, d\theta \,.
\end{equation}
Then, by setting \eqref{SolutionOuterLead} for $u_0$ into
\eqref{u1p:bnd_2}, we obtain in terms of the $A_k$ of \eqref{u0_bar}
that%
\begin{subequations}\label{u1p:all}
\begin{equation}\label{u1p:bnd}
u_{1p}(\v{x}_j) = \frac{1}{D} \int_{0}^{2\pi} G(\v{x};\v{x}_j) h(\theta) \,
d\theta + 2\pi \sum_{k=1}^{m} A_k J_{jk}\,.
\end{equation}
Here, $J_{jk}$ is defined by the following
boundary integral with $\v{x}=(\cos(\theta), \sin(\theta))^T$:
\begin{equation}\label{u1p:Jjk}
J_{jk} \equiv \int_{0}^{2\pi} h(\theta) \left(\partial_{\theta} G(\v{x};\v{x}_j)\right)
\left(\partial_{\theta} G(\v{x};\v{x}_k)\right)\, d\theta \,.
\end{equation}
\end{subequations}
From a numerical evaluation of the boundary integrals in
\eqref{u1p:all}, we can calculate
$\v{u}_{1p}= (u_{1p}(\v{x}_1), \ldots, u_{1p}(\v{x}_{m}))^T$, which
specifies the right-hand side of the linear system \eqref{u1_bar} for
$\v{B}$. After determining $\v{B}$, we obtain $\overline{u}_1$ from
the second relation in \eqref{u1_bar}. Finally, by substituting
\eqref{SolutionOuterLead} for $u_0$ into \eqref{AveMFPT_Perturb}, and
recalling that $\int_{0}^{2\pi} h(\theta)\,d\theta=0$, we obtain a
two-term expansion for the average MFPT given by
\begin{equation}\label{final:avemfpt}
\overline{u} \sim \overline{u}_0 + \sigma
\left( \overline{u}_1 - 2 \sum_{k=1}^{m} A_k \int_{0}^{2\pi}
G(\v{x};\v{x}_k) h(\theta) \, d\theta \right) \,.
\end{equation}
Here, $\v{x}\in \partial\Omega$ and $\overline{u}_0$ is determined
from \eqref{u0_bar}.
\section{Optimizing Trap Configurations for the MFPT
in the Near-Disk}\label{sec:mfpt_num}
To numerically evaluate the boundary integrals in \eqref{u1p:all} and
\eqref{final:avemfpt}, we need explicit formulae for $G(\v{x};\v{x}_j)$ and
$\partial_\theta G(\v{x};\v{x}_j)$ on the boundary of the unit disk where
$\v{x}=(\cos\theta,\sin\theta)^T$. For the unit disk, we obtain from
equation (4.3) of \cite{KTW2005} that
\begin{subequations}\label{gr:gmrm}
\begin{gather}
G(\v{x};\v{x}_j) = -\frac{1}{2\pi}\log|\v{x}-\v{x}_j| - \frac{1}{4\pi}
\log\left( |\v{x}|^2|\v{x}_j|^2 + 1 - 2\v{x} \cdot \v{x}_j\right)
+ \frac{ (|\v{x}|^2 + |\v{x}_j|^2 )}{4\pi} - \frac{3}{8\pi} ,
\label{gr:gm} \\
R(\v{x}_j;\v{x}_j) = -\frac{1}{2\pi}\log\left(1 - |\v{x}_j|^2\right) +
\frac{|\v{x}_j|^2}{2\pi} - \frac{3}{8\pi} \,.
\label{gr:rm}
\end{gather}
\end{subequations}
For an arbitrary configuration $\lbrace{\v{x}_1,\ldots,\v{x}_m\rbrace}$ of
traps, these expressions can be used to evaluate the Neumann Green's
matrix $\mathcal{G}$ of \eqref{GreenMAtrix} as needed in \eqref{u0_bar} and
\eqref{u1_bar}.
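For illustration, a minimal Python transcription of \eqref{gr:gmrm}
(a sketch assuming NumPy; the ring of trap centers is an arbitrary
example) that assembles $\mathcal{G}$ is:
\begin{verbatim}
import numpy as np

def G_disk(x, xj):
    # Neumann Green's function (gr:gm) for the unit disk.
    x, xj = np.asarray(x, float), np.asarray(xj, float)
    r2, rj2 = x @ x, xj @ xj
    return (-np.log(np.linalg.norm(x - xj))/(2*np.pi)
            - np.log(r2*rj2 + 1.0 - 2.0*(x @ xj))/(4*np.pi)
            + (r2 + rj2)/(4*np.pi) - 3.0/(8*np.pi))

def R_disk(xj):
    # Regular part (gr:rm) of the Green's function at x = xj.
    rj2 = np.asarray(xj, float) @ np.asarray(xj, float)
    return -np.log(1.0 - rj2)/(2*np.pi) + rj2/(2*np.pi) - 3.0/(8*np.pi)

def green_matrix(centers):
    # Symmetric Green's matrix (GreenMAtrix) for traps at 'centers'.
    m = len(centers)
    calG = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            calG[i, j] = R_disk(centers[i]) if i == j \
                         else G_disk(centers[i], centers[j])
    return calG

# Example: three traps on a ring of radius 0.55 (illustrative only).
centers = [0.55*np.array([np.cos(t), np.sin(t)])
           for t in 2*np.pi*np.arange(3)/3]
print(green_matrix(centers))
\end{verbatim}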
Next, by setting $\v{x}=(\cos\theta,\sin\theta)^T$ we can evaluate
$G(\v{x};\v{x}_j)$ on $\partial\Omega$, and then calculate its tangential
boundary derivative $\partial_\theta G(\v{x};\v{x}_j)$. By using
\eqref{gr:gm}, we obtain
\begin{subequations}\label{disk:gall}
\begin{align}
G(\v{x};\v{x}_j) & = -\frac{1}{2\pi} \log\left(1 + r_j^2 - 2 r_j
\cos(\theta-\theta_j)\right) + \frac{1}{4\pi}(1+r_j^2) -
\frac{3}{8\pi} \,, \label{disk:g} \\
\partial_\theta G(\v{x};\v{x}_j) & = - \frac{r_j}{\pi}
\frac{\sin(\theta-\theta_j)}{\left[r_j^2+1-2r_j\cos(\theta-\theta_j)\right]}
\,, \label{disk:gtheta}
\end{align}
\end{subequations}
where $r_j\equiv |\v{x}_j|$ and
$\v{x}_j=r_j(\cos\theta_j,\sin\theta_j)^T$. Then, since
$\int_{0}^{2\pi} h(\theta)\, d\theta=0$, we can write the two boundary
integrals appearing in \eqref{u1p:all} and \eqref{final:avemfpt} explicitly
as
\begin{subequations}\label{disk:int_all}
\begin{gather}
\int_{0}^{2\pi} G(\v{x};\v{x}_j) h(\theta)\, d\theta = -\frac{1}{2\pi}
\int_{0}^{2\pi} h(\theta) \log\left(1+r_j^2 - 2r_j \cos(\theta-\theta_j)\right)
\,d\theta\,,\\
J_{jk} = \frac{r_j r_k}{\pi^2} \int_{0}^{2\pi}
\frac{h(\theta) \sin(\theta-\theta_j)\sin(\theta-\theta_k)}{
\left[ r_j^2+1-2r_j\cos(\theta-\theta_j)\right]
\left[ r_k^2+1-2r_k\cos(\theta-\theta_k)\right]} \, d\theta \,.
\end{gather}
\end{subequations}
Although for an arbitrary $h(\theta)$ the integrals in
\eqref{disk:int_all} cannot be evaluated in closed form, they can be
computed to a high degree of accuracy with relatively few grid points
using the trapezoidal rule since this quadrature rule is exponentially
convergent for $C^{\infty}$ smooth periodic functions
\cite{tref_trap}. When $|\v{x}_j|<1$, the logarithmic singularities off
the axis of integration for $J_{jk}$ in \eqref{disk:int_all} are mild
and pose no particular problem. In this way, we can numerically
calculate the two-term expansion \eqref{final:avemfpt} for the average
MFPT with high precision.
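A hedged sketch of this quadrature step (assuming NumPy; the profile
$h$, the grid size, and the trap positions below are illustrative
choices) is:
\begin{verbatim}
import numpy as np

# Equal-weight trapezoidal rule on a periodic grid; spectrally
# accurate for smooth 2*pi-periodic integrands.
M = 256
theta = 2*np.pi*np.arange(M)/M
w = 2*np.pi/M

h = np.cos(4*theta)            # example zero-mean boundary profile

def G_bnd(rj, thj):
    # G(x; x_j) on r = 1, from (disk:g).
    q = 1.0 + rj**2 - 2.0*rj*np.cos(theta - thj)
    return -np.log(q)/(2*np.pi) + (1 + rj**2)/(4*np.pi) - 3/(8*np.pi)

def Gth_bnd(rj, thj):
    # Tangential derivative of G on r = 1, from (disk:gtheta).
    q = rj**2 + 1.0 - 2.0*rj*np.cos(theta - thj)
    return -(rj/np.pi)*np.sin(theta - thj)/q

# The two boundary integrals of (disk:int_all), for traps j and k:
rj, thj, rk, thk = 0.55, 0.0, 0.55, 2*np.pi/3
int_Gh = w*np.sum(G_bnd(rj, thj)*h)                     # int G h dtheta
J_jk   = w*np.sum(h*Gth_bnd(rj, thj)*Gth_bnd(rk, thk))  # (u1p:Jjk)
print(int_Gh, J_jk)
\end{verbatim}
With these quantities, $\v{u}_{1p}$ follows from \eqref{u1p:bnd}, and the
system \eqref{u1_bar} is then a standard dense linear solve.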
Then, to determine the optimal trap configuration we can either use the
particle swarming approach \cite{kennedy2010}, or the ODE relaxation
dynamics scheme
\begin{equation}
\frac{d\v{z}}{dt} = -\nabla_{\v{z}} \overline{u} \,, \qquad
\mbox{where} \quad \v{z}\equiv (x_1,y_1,\ldots,x_m,y_m)^T \,,
\label{near_disk:relax}
\end{equation}
and $\overline{u}$ is given in \eqref{final:avemfpt}. Starting from an
admissible initial state $\v{z}\vert_{t=0}$, where
$\v{x}_j=(x_j,y_j)\in\Omega_0$ at $t=0$ for $j=1,\ldots,m$, the gradient
flow dynamics \eqref{near_disk:relax} converges to a local minimum of
$\overline{u}$. Because of our high precision in calculating
$\overline{u}$, a centered difference scheme with mesh spacing
$10^{-4}$ was used to estimate the gradient in
\eqref{near_disk:relax}.
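A minimal sketch of this relaxation loop (assuming NumPy, with
\texttt{ubar} standing for a user-supplied evaluation of
\eqref{final:avemfpt}; the time step, tolerance, and the quadratic
stand-in objective are illustrative, not the original computation):
\begin{verbatim}
import numpy as np

def grad_central(f, z, dz=1e-4):
    # Centered-difference gradient with mesh spacing 1e-4.
    g = np.zeros_like(z)
    for i in range(z.size):
        zp, zm = z.copy(), z.copy()
        zp[i] += dz; zm[i] -= dz
        g[i] = (f(zp) - f(zm))/(2*dz)
    return g

def relax(f, z0, dt=0.05, tol=1e-8, maxit=10**5):
    # Forward-Euler integration of dz/dt = -grad f to a local minimum.
    z = np.array(z0, float)
    for _ in range(maxit):
        g = grad_central(f, z)
        if np.linalg.norm(g) < tol:
            break
        z -= dt*g
    return z

# Smooth stand-in objective; replace by ubar(z) from (final:avemfpt).
ubar = lambda z: np.sum((z - 0.3)**2)
print(relax(ubar, np.zeros(6)))   # converges to 0.3 in each entry
\end{verbatim}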
\subsection{Examples of the Theory}\label{sec:examples}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_1a.eps}\label{fig:cos4_3}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_2.eps}\label{fig:cos4_4}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_3.eps}\label{fig:cos4_7}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_3_better.eps}\label{fig:cos4_7_better}}
\caption{Optimal trap patterns for $D=1$ in a near-disk domain with
boundary $r=1+\sigma \cos(4\theta)$, with $\sigma=0.1$, that
contains $m$ traps of a common radius $\varepsilon=0.05$. Computed from
minimizing \eqref{final:avemfpt} using the ODE relaxation scheme
\eqref{near_disk:relax}. Left: $m=3$, $\overline{u}\approx
0.2962$. Inter-trap computed distances are $0.9588$, $0.9588$, and
$0.9540$. This result is close to the full PDE simulation results of
Fig.~\ref{fig:full_near}. Left middle: $m=4$, $\overline{u}\approx
0.1927$. This is a ring pattern of traps with ring radius
$r_c\approx 0.6215$. Right Middle: $m=7$,
$\overline{u}\approx 0.0925$. Right: $m=7$,
$\overline{u}\approx 0.0912$. The two patterns for $m=7$ give nearly
the same values for $\overline{u}$, with the rightmost pattern
giving a slightly lower value.}
\end{center}
\label{fig:neardisk_cos4}
\end{figure}
We first set $\sigma=0.1$ and consider the boundary profile
$h(\theta)=\cos(N\theta)$, where $N$ is a positive integer
representing the number of boundary folds. In \cite{IWWC2019}, an
explicit two-term expansion for the average MFPT $\overline{u}$ was
derived for the special case where $m$ traps are equidistantly spaced
on a ring of radius $r_c$, concentric within the unperturbed disk. For
such a ring pattern, in Proposition 1 of \cite{IWWC2019} it was proved
that when ${N/m}\notin \mathbb{Z}^{+}$, then
$\overline{u}\sim \overline{u}_0 + {\mathcal O}(\sigma^2)$, as the
correction at order ${\mathcal O}(\sigma)$ vanishes
identically. Therefore, in order to determine the optimal trap pattern
when ${N/m}\notin \mathbb{Z}^{+}$ we must consider arbitrary trap
configurations, and not just ring patterns of traps. By minimizing
\eqref{final:avemfpt} using the ODE relaxation scheme
\eqref{near_disk:relax}, in the left panel of
Fig.~\ref{fig:neardisk_cos4} we show our asymptotic prediction for the
optimal trap configuration for $N=4$ folds and $m=3$ traps of a common
radius $\varepsilon=0.05$. The optimal pattern is not of ring-type. The
corresponding results computed from the closest point method of
\cite{IWWC2019}, shown in Fig.~\ref{fig:full_near}, are very close to
the asymptotic result.
In the left-middle panel of Fig.~\ref{fig:neardisk_cos4}, we show the
optimal trap pattern computed from our asymptotic theory
\eqref{final:avemfpt} and \eqref{near_disk:relax} for the boundary
profile $h(\theta)=\cos(4\theta)$ with $m=4$ traps and
$\sigma=0.1$. The optimal pattern is now a ring pattern of traps. In
this case, as predicted by Proposition 1 of \cite{IWWC2019}, the
optimal pattern has traps on the rays through the origin that coincide
with the maxima of the domain boundary. By applying Proposition 2 of
\cite{IWWC2019}, the optimal perturbed ring radius has the expansion
$r_{c,opt}\sim 0.5985+0.1985\sigma$. When $\sigma=0.1$, this
gives $r_{c,opt}\approx 0.6184$, and compares well with the
value $r_c\approx 0.6215$ calculated from \eqref{final:avemfpt} and
\eqref{near_disk:relax}.
In the two rightmost panels of Fig.~\ref{fig:neardisk_cos4}, we show
for $h(\theta)=\cos(4\theta)$ and $\sigma=0.1$ that there are
two seven-trap patterns that give local minima for the average MFPT
$\overline{u}$. The minimum values of $\overline{u}$ for these patterns are
very similar.
Next, we construct a boundary profile with a localized protrusion,
or bulge, near $\theta=0$. To this end, we define
$f(\theta)\equiv -1 + \beta e^{-\chi \sin^{2}\left({\theta/2}\right)}$. By
using the Taylor expansion of $e^z$, combined with a simple
identity for $\int_{0}^{2\pi} \sin^{2n}(\psi)\,d\psi$, we conclude
that $\int_{0}^{2\pi} f(\theta)\,d\theta=0$ when $\beta$ is related to
$\chi$ by
\begin{equation}\label{near:bulge_prof}
\frac{1}{\beta} = \frac{1}{2\pi}\int_{0}^{2\pi} e^{-\chi \sin^{2}\left({\theta/2}\right)}\,
d\theta = \sum_{n=0}^{\infty} \frac{(-1)^n \chi^n}{2\pi n!}
\int_{0}^{2\pi} \sin^{2n}\left(\frac{\theta}{2}\right)\, d\theta =
\sum_{n=0}^{\infty} (-1)^n \frac{\chi^n (2n)!}{4^n\left(n!\right)^3} \,.
\end{equation}
As $\chi$ increases, the boundary deformation becomes increasingly
localized near $\theta=0$.
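As a numerical check (not in the original text), the normalization
\eqref{near:bulge_prof} can be evaluated either from the series or by
quadrature of the integral; the sketch below (assuming NumPy)
reproduces $\beta\approx 5.4484$ for $\chi=10$, the value used in the
figures below:
\begin{verbatim}
import numpy as np

def beta_from_chi(chi, M=2048):
    # 1/beta = (1/2 pi) * int_0^{2 pi} exp(-chi sin^2(t/2)) dt,
    # computed with the periodic (spectrally accurate) trapezoidal rule.
    t = 2*np.pi*np.arange(M)/M
    return 1.0/np.mean(np.exp(-chi*np.sin(t/2)**2))

print(beta_from_chi(10.0))    # approx 5.4484
\end{verbatim}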
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.05cm,width=0.45\textwidth]{star_solution_sig01.eps}
\includegraphics[height=3.8cm,width=0.29\textwidth]{star_minimizer_sig01.eps}
\end{center}
\caption{Optimizing a three-trap pattern, with a common trap radius
$\varepsilon=0.05$, in a four-fold star-shaped domain (4-star) with boundary
profile $h(\theta)=\cos(4\theta)$ and $\sigma=0.1$. Left panel:
contour plot of the optimal PDE solution computed with the closest point
method. Right panel: optimal trap locations in the 4-star domain with
computed side-lengths: $\mathbf{AB}\approx 0.9581$,
$\mathbf{BC}\approx 0.9569$, and $\mathbf{CA}\approx 0.9541$. All of
the computed interior angles are ${\pi/3}\pm \delta$, where
$|\delta|\leq 0.0015$.}\label{fig:full_near}
\end{figure}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_5_3_in.eps}\label{fig:3bule_in}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_5_3_out.eps}\label{fig:3bulge_out}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_5_4_in.eps}\label{fig:4bulge_in}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_5_4_out.eps}\label{fig:4bulge_out}}
\caption{Optimal trap patterns for $D=1$ with $m$ traps each of radius
$\varepsilon=0.05$ in a near-disk domain with boundary
$r=1\pm \sigma f(\theta)$, where $\sigma=0.05$ and
$f(\theta)=-1+\beta e^{-10\sin^{2}\left( {\theta/2}\right)}$, with
$\beta=5.4484$. Computed from minimizing \eqref{final:avemfpt} using
the ODE relaxation scheme \eqref{near_disk:relax}. Left: $m=3$ and
inward domain bulge $r=1 - \sigma f(\theta)$. Centroid of trap
pattern is at $(-0.0886,0.0)$ and $\overline{u}\approx 0.2842$. Left
Middle: $m=3$ and outward bulge $r=1 + \sigma f(\theta)$. Centroid
is at $(0.1061,0.0)$, and $\overline{u}\approx 0.2825$. Right
Middle: $m=4$ and inward bulge $r=1 - \sigma f(\theta)$,
$\overline{u}\approx 0.1918$. Right: $m=4$ and outward bulge
$r=1 + \sigma f(\theta)$, $\overline{u}\approx 0.1916$.}
\end{center}
\label{fig:neardisk_bulge}
\end{figure}
For $\chi=10$, for which $\beta=5.4484$, in
Fig.~\ref{fig:neardisk_bulge} we show optimal trap patterns for $m=3$
and $m=4$ traps for both an outward domain bulge, where
$r=1+\sigma f(\theta)$, and an inward domain bulge, where
$r=1-\sigma f(\theta)$, with $\sigma=0.05$. For the three-trap case,
by comparing the two leftmost plots in Fig.~\ref{fig:neardisk_bulge},
we observe that an inward domain bulge will displace the trap
locations to the left, as expected intuitively. Alternatively, for an
outward bulge, the location of the optimal trap on the line of
symmetry moves closer to the domain protrusion. An intuitive, though,
as we will see below in Fig.~\ref{fig:odd_5_sar}, ultimately na\"ive,
interpretation of the qualitative effect of this domain bulge is that
it acts to confine or pin a Brownian particle in this region; to
reduce the mean capture time of such a pinned particle, the best
location for a trap is then closer to the region of protrusion.
For the case of four traps, a similar qualitative comparison of the
optimal trap configuration for an inward and outward domain bulge is
seen in the two rightmost plots in Fig.~\ref{fig:neardisk_bulge}.
In Fig.~\ref{fig:neardisk_odd}, we show optimal trap patterns from our
hybrid theory for $3\leq m\leq 5$ circular traps of radius $\varepsilon=0.05$
in a domain with boundary profile $r=1+\sigma h(\theta)$, where
$h(\theta)=\cos(3\theta)-\cos(\theta)-\cos(2\theta)$ and
$\sigma=0.075$. This boundary profile perturbs the unit disk inwards
near $\theta=\pi$ and outwards near $\theta=0$. For $m=3$, in
Fig.~\ref{fig:neardisk_full_PDE} we show a favorable comparison
between the full numerical PDE results and the hybrid results for the
optimal average MFPT and trap locations. Moreover, from the two
rightmost plots in Fig.~\ref{fig:neardisk_odd}, we observe that there
are two five-trap patterns that give local minima for $\overline{u}$. The
pattern that has a trap on the line of symmetry near the outward
bulge at $\theta=0$ is, in this case, not a global minimum of
the average MFPT. This indicates that hard-to-assess global
effects, rather than simply the local geometry near a protrusion, play
a central role for characterizing the optimal trap pattern.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_4_3traps.eps}\label{fig:odd_3}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_4_4traps.eps}\label{fig:odd_4}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_4_5traps.eps}\label{fig:odd_5}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{ex_4_5traps_better.eps}\label{fig:odd_5_sar}}
\caption{Optimal trap patterns for $D=1$ in a near-disk domain with
boundary $r=1+\sigma h(\theta)$, $\sigma=0.075$ and
$h(\theta)=\cos(3\theta)-\cos(\theta)-\cos(2\theta)$, that contains
$m$ traps of a common radius $\varepsilon=0.05$. Computed from minimizing
\eqref{final:avemfpt} using the ODE relaxation scheme
\eqref{near_disk:relax}. Left: $m=3$ and
$\overline{u}\approx 0.2794$. Left-Middle: $m=4$ and
$\overline{u}\approx 0.19055$. Right-Middle: $m=5$ and
$\overline{u}\approx 0.1418$. Right: $m=5$,
$\overline{u}\approx 0.1383$. The two patterns for $m=5$ are local
minimizers, with rather close values for $\overline{u}$. The global
minimum is achieved for the rightmost pattern.}
\end{center}
\label{fig:neardisk_odd}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.0cm,width=0.60\textwidth]{near_disk_m3_cos.eps}
\caption{Contour plot of the PDE numerical solution for the optimal
average MFPT and trap locations computed from the closest point
method corresponding to the parameter values in the left panel of
Fig.~\ref{fig:neardisk_odd}. Full PDE results for optimal
locations: $(-0.3382, 0.5512)$, $(-0.3288,-0.5510)$,
$(0.4410, 0.0012)$, and $\overline{u}=0.2996$. Hybrid results:
$(-0.3316, 0.5626)$, $(-0.3316, -0.5626)$, $(0.4314,0.000)$, and
$\overline{u}\approx 0.2794$.}
\label{fig:neardisk_full_PDE}
\end{center}
\end{figure}
\section{Optimizing Trap Configurations for the MFPT
in an Ellipse}\label{sec:ellipse}
Next, we consider the trap optimization problem in an ellipse of
arbitrary aspect ratio, but with fixed area $\pi$. Our analysis uses
a new explicit analytical formula, as derived in
\S~\ref{sec:g_ell}, for the Neumann Green's function $G(\v{x};\v{x}_0)$ and
its regular part $R_e$ of \eqref{ell:g}.
For $m$ circular traps each of radius $\varepsilon$, the average MFPT
$\overline{u}_0$ satisfies (see \eqref{u0_bar})
\begin{equation}\label{e:u0_bar}
\overline{u}_0 = \frac{|\Omega|}{2\pi D \nu m}
+ \frac{2 \pi}{m} \v{e}^T \mathcal{G} \mathcal{A} \,,
\quad \mbox{where} \quad
\Big{[} I + 2 \pi \nu (I - E)\mathcal{G} \Big{]} \mathcal{A} =
\frac{|\Omega|}{2\pi D m} \v{e} \,.
\end{equation}
Here $E\equiv {\v{e}\v{e}^T/m}$, $\v{e}=(1,\ldots,1)^T$,
$\nu\equiv {-1/\log\varepsilon}$, and the Green's matrix $\mathcal{G}$ depends on
the trap locations $\lbrace{\v{x}_1,\ldots,\v{x}_m\rbrace}$. To determine
optimal trap configurations that are minimizers of the average MFPT, given
in \eqref{e:u0_bar}, we use the ODE relaxation scheme
\begin{equation}
\frac{d\v{z}}{dt} = -\nabla_{\v{z}} \overline{u}_0 \,, \qquad
\mbox{where} \quad \v{z}\equiv (x_1,y_1,\ldots,x_m,y_m) \,.
\label{ode:relax}
\end{equation}
In our implementation of \eqref{ode:relax}, the gradient was
approximated using a centered difference scheme with mesh spacing
$10^{-4}$. The results shown below for the optimal trap patterns are
confirmed by using a particle swarm approach \cite{kennedy2010}.
The derivation of the Neumann Green's function and its regular part in
\S~\ref{sec:g_ell} is based on mapping the elliptical domain to a
rectangular domain using
\begin{subequations}\label{ell:coord}
\begin{equation}\label{ell:coord_1}
x = f \cosh\xi \cos\eta \,, \quad y = f \sinh\xi \sin\eta \,, \qquad
f=\sqrt{a^2 - b^2} \,.
\end{equation}
With these elliptic cylindrical coordinates, the ellipse is mapped to the
rectangle $0\leq \xi\leq \xi_b$ and $0\leq\eta\leq 2\pi$, where
$a=f\cosh\xi_b$ and $b=f\sinh\xi_b$, so that
\begin{equation}
f = \sqrt{a^2 - b^2} \,, \qquad \xi_b = \tanh^{-1}\left(\frac{b}{a}\right)
= -\frac{1}{2} \log\beta\,, \qquad \beta
\equiv \left(\frac{a-b}{a+b}\right)\,. \label{ell:coord_2}
\end{equation}
\end{subequations}
To determine $(\xi,\eta)$, given a pair $(x,y)$, we invert the
transformation \eqref{ell:coord_1} using
\begin{subequations}\label{ell:inverse_mapping}
\begin{equation}\label{ell:xy_to_xi}
\xi = \frac{1}{2} \log\left( 1 - 2s + 2 \sqrt{s^2-s}\right) \,, \quad
s \equiv \frac{-\mu - \sqrt{\mu^2 + 4 f^2 y^2}}{2f^2} \,, \quad \mu\equiv
x^2+y^2-f^2 \,.
\end{equation}
To recover $\eta$, we define $\eta_{\star}\equiv \sin^{-1}(\sqrt{p})$ and use
\begin{equation}\label{ell:xy_to_eta}
\eta = \begin{cases}
\eta_{\star}, & \text{if } x\geq 0\,, \,\, y\geq 0\\
\pi -\eta_{\star}, & \text{if } x<0\,, \,\, y\geq 0\\
\pi +\eta_{\star}, & \text{if } x\leq 0\,, \,\, y< 0\\
2\pi- \eta_{\star}, & \text{if } x>0\,, \,\, y< 0
\end{cases}\,,\quad \mbox{where} \quad
p \equiv \frac{-\mu + \sqrt{\mu^2 + 4 f^2 y^2}}{2f^2} \,.
\end{equation}
\end{subequations}
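A direct Python transcription of this inverse mapping (a sketch
assuming NumPy; the small clips guard against round-off pushing
$s^2-s$ or $p$ slightly out of range near the foci) is:
\begin{verbatim}
import numpy as np

def xy_to_elliptic(x, y, a, b):
    # Invert x = f cosh(xi) cos(eta), y = f sinh(xi) sin(eta), per
    # (ell:inverse_mapping); assumes a > b (foci on the x-axis).
    f2 = a**2 - b**2
    mu = x**2 + y**2 - f2
    root = np.sqrt(mu**2 + 4.0*f2*y**2)
    s = (-mu - root)/(2.0*f2)         # equals -sinh^2(xi) <= 0
    p = (-mu + root)/(2.0*f2)         # equals  sin^2(eta) in [0,1]
    xi = 0.5*np.log(1.0 - 2.0*s + 2.0*np.sqrt(max(s**2 - s, 0.0)))
    eta_star = np.arcsin(np.sqrt(min(max(p, 0.0), 1.0)))
    if x >= 0 and y >= 0:
        eta = eta_star
    elif x < 0 and y >= 0:
        eta = np.pi - eta_star
    elif x <= 0 and y < 0:
        eta = np.pi + eta_star
    else:
        eta = 2.0*np.pi - eta_star
    return xi, eta

# Round-trip check against the forward map (ell:coord_1):
a, b = 2.0, 0.5
f = np.sqrt(a**2 - b**2)
xi, eta = xy_to_elliptic(0.7, -0.2, a, b)
print(f*np.cosh(xi)*np.cos(eta), f*np.sinh(xi)*np.sin(eta))
\end{verbatim}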
As derived in \S~\ref{sec:g_ell}, the matrix entries in $\mathcal{G}$ are
obtained from the explicit result
\begin{subequations}\label{cell:finz_g}
\begin{equation}\label{cell:finz_g1}
\begin{split}
G(\v{x};\v{x}_0) &= \frac{1}{4|\Omega|} \left(|\v{x}|^2 + |\v{x}_0|^2\right) -
\frac{3}{16|\Omega|}(a^2 + b^2) - \frac{1}{4\pi}\log\beta -
\frac{1}{2\pi} \xi_{>}
\\
& \qquad -\frac{1}{2\pi} \sum_{n=0}^{\infty} \log\left(
\displaystyle \prod_{j=1}^{8} |1-\beta^{2n} z_j| \right) \,, \quad
\mbox{for} \quad \v{x}\neq \v{x}_0 \,,
\end{split}
\end{equation}
where $|\Omega|=\pi ab$, $\xi_{>}\equiv \max(\xi,\xi_0)$, and the complex
constants $z_1,\ldots,z_8$ are defined in terms of $(\xi,\eta)$,
$(\xi_0,\eta_0)$ and $\xi_b$ by
\begin{equation}\label{cell:def_z}
\begin{split}
z_1 &\equiv e^{-|\xi-\xi_0| + i(\eta-\eta_0)} \,, \quad
z_2 \equiv e^{|\xi-\xi_0|-4\xi_b + i(\eta-\eta_0)}\,, \quad
z_3 \equiv e^{-(\xi+\xi_0) - 2\xi_b + i(\eta-\eta_0)}\,, \\
z_4 &\equiv e^{\xi+\xi_0-2\xi_b + i(\eta-\eta_0)} \,, \quad
z_5 \equiv e^{\xi+\xi_0 -4\xi_b + i(\eta+\eta_0)} \,, \quad
z_6 \equiv e^{-(\xi+\xi_0) + i(\eta+\eta_0)} \,, \\
z_7 &\equiv e^{|\xi-\xi_0| -2\xi_b + i(\eta+\eta_0)} \,, \quad
z_8 \equiv e^{-|\xi-\xi_0| -2\xi_b + i(\eta+\eta_0)} \,.
\end{split}
\end{equation}
\end{subequations}
Observe that the Dirac point at $\v{x}_0=(x_0,y_0)$ is mapped to
$(\xi_0,\eta_0)$. The transformation \eqref{ell:coord} and its inverse
\eqref{ell:inverse_mapping} determine $G(\v{x};\v{x}_0)$ explicitly in
terms of $\v{x} \in \Omega$.
Moreover, as shown in \S~\ref{sec:g_ell}, the regular part of the
Neumann Green's function, $R_e$, satisfying
$G(\v{x};\v{x}_0)\sim -(2\pi)^{-1}\log|\v{x}-\v{x}_0|+R_e$ as $\v{x}\to \v{x}_0$, is
given by
\begin{subequations}\label{cell:R0}
\begin{equation}
\begin{split}
R_{e} &= \frac{|\v{x}_0|^2}{2|\Omega|} - \frac{3}{16|\Omega|} (a^2+b^2)
+ \frac{1}{2\pi}\log(a+b) - \frac{\xi_0}{2\pi} + \frac{1}{4\pi}
\log\left(\cosh^2\xi_0 - \cos^2\eta_0\right) \\
&\quad -\frac{1}{2\pi} \sum_{n=1}^{\infty} \log(1-\beta^{2n}) -
\frac{1}{2\pi} \sum_{n=0}^{\infty}\log\left(
\displaystyle \prod_{j=2}^{8} |1-\beta^{2n} z_j^0| \right) \,.
\end{split}
\end{equation}
Here, $z_j^{0}$ is the limiting value of $z_j$, defined in
\eqref{cell:def_z}, as $(\xi,\eta)\to(\xi_0,\eta_0)$, given
by
\begin{equation}
\begin{split}
z_2^{0}&=\beta^2\,, \quad z_3^{0}=\beta e^{-2\xi_0}\,, \quad
z_4^{0}=\beta e^{2\xi_0}\,, \quad z_5^{0}=\beta^2 e^{2\xi_0+2i\eta_0} \,,\\
z_6^{0}&=e^{-2\xi_0+2i\eta_0} \,, \quad z_7^{0}=\beta e^{2i\eta_0} \,, \quad
z_8^{0}=\beta e^{2i\eta_0}\,, \quad \mbox{where} \quad
\beta \equiv \frac{a-b}{a+b} \,.
\end{split}
\end{equation}
\end{subequations}
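Since $0<\beta<1$, the infinite sums in \eqref{cell:finz_g} and
\eqref{cell:R0} converge geometrically, and a modest truncation
suffices. A Python sketch of their evaluation (the truncation level is
an illustrative choice; $(\xi,\eta)$ and $(\xi_0,\eta_0)$ are obtained
from the inverse mapping above) is:
\begin{verbatim}
import numpy as np

def G_ellipse(xi, eta, xi0, eta0, a, b, nmax=50):
    # Neumann Green's function (cell:finz_g) in elliptic coordinates.
    beta = (a - b)/(a + b)
    xib = -0.5*np.log(beta)
    area = np.pi*a*b
    f2 = a**2 - b**2
    # |x|^2 and |x0|^2 recovered from the forward map (ell:coord_1):
    r2  = f2*(np.cosh(xi)**2  - np.sin(eta)**2)
    r02 = f2*(np.cosh(xi0)**2 - np.sin(eta0)**2)
    dxi, spl = abs(xi - xi0), xi + xi0
    dmn, smn = eta - eta0, eta + eta0
    z = [np.exp(-dxi + 1j*dmn),         np.exp(dxi - 4*xib + 1j*dmn),
         np.exp(-spl - 2*xib + 1j*dmn), np.exp(spl - 2*xib + 1j*dmn),
         np.exp(spl - 4*xib + 1j*smn),  np.exp(-spl + 1j*smn),
         np.exp(dxi - 2*xib + 1j*smn),  np.exp(-dxi - 2*xib + 1j*smn)]
    ssum = sum(np.log(abs(1.0 - beta**(2*n)*zj))
               for n in range(nmax) for zj in z)
    return ((r2 + r02)/(4*area) - 3*(a**2 + b**2)/(16*area)
            - np.log(beta)/(4*np.pi) - max(xi, xi0)/(2*np.pi)
            - ssum/(2*np.pi))

def Re_ellipse(xi0, eta0, a, b, nmax=50):
    # Regular part (cell:R0) of the Neumann Green's function.
    beta = (a - b)/(a + b)
    area = np.pi*a*b
    f2 = a**2 - b**2
    r02 = f2*(np.cosh(xi0)**2 - np.sin(eta0)**2)
    z0 = [beta**2, beta*np.exp(-2*xi0), beta*np.exp(2*xi0),
          beta**2*np.exp(2*xi0 + 2j*eta0), np.exp(-2*xi0 + 2j*eta0),
          beta*np.exp(2j*eta0), beta*np.exp(2j*eta0)]  # z_2^0,...,z_8^0
    s1 = sum(np.log(1.0 - beta**(2*n)) for n in range(1, nmax))
    s2 = sum(np.log(abs(1.0 - beta**(2*n)*zj))
             for n in range(nmax) for zj in z0)
    return (r02/(2*area) - 3*(a**2 + b**2)/(16*area)
            + np.log(a + b)/(2*np.pi) - xi0/(2*np.pi)
            + np.log(np.cosh(xi0)**2 - np.cos(eta0)**2)/(4*np.pi)
            - s1/(2*np.pi) - s2/(2*np.pi))
\end{verbatim}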
\subsection{Examples of the Theory}\label{ell:ex}
In this subsection, we will apply our hybrid analytical-numerical
approach based on \eqref{e:u0_bar}, \eqref{cell:finz_g},
\eqref{cell:R0} and the ODE relaxation scheme \eqref{ode:relax}, to
compute optimal trap configurations in an elliptical domain of area
$\pi$ that contains $m=2,\ldots,5$ circular traps of a common radius
$\varepsilon=0.05$. In our examples below, we set $D=1$ and we study how the
optimal pattern of traps changes as the aspect ratio of the ellipse is
varied. We will compare our results from this hybrid theory with the
near-disk asymptotic results of \eqref{final:avemfpt}, with full PDE
numerical results computed from the closest point method
\cite{IWWC2019}, and with the asymptotic approximations derived below
in \S~\ref{sec:thin}, which are valid for a long and thin ellipse.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.1cm,width=0.49\textwidth]{x0_m2.eps}
\includegraphics[height=4.1cm,width=0.49\textwidth]{u0_m2.eps}
\caption{The optimal trap distance from the origin (left panel) and
optimal average MFPT $\overline{u}_{0\mbox{min}}$ (right panel) versus
the semi-minor axis $b$ of an elliptical domain of area $\pi$ that
contains two traps of a common radius $\varepsilon=0.05$ and $D=1$. The
optimum trap locations are on the semi-major axis, equidistant from
the origin. Solid curves: hybrid asymptotic theory \eqref{e:u0_bar}
for the ellipse coupled to the ODE relaxation scheme
\eqref{ode:relax} to find the minimum. Dashed line (red):
near-disk asymptotics of \eqref{final:avemfpt}. Discrete
points: full numerical PDE results computed from the closest point
method. Dashed-dotted line (blue): thin-domain asymptotics \eqref{thin:m2}.
These curves essentially overlap with those from the hybrid theory for the
optimal trap distance.}
\end{center}
\label{fig:two_ellipse}
\end{figure}
For $m=2$ traps, in the right panel of Fig.~\ref{fig:two_ellipse} we
show results for the optimal average MFPT versus the semi-minor axis
$b$ of the ellipse. The hybrid theory is seen to compare very
favorably with full numerical PDE results for all $b\leq 1$. For $b$
near unity and for $b$ small, the near-disk theory of
\eqref{final:avemfpt} and \eqref{near_disk:relax}, and the thin-domain
asymptotic result in \eqref{thin:m2} are seen to provide,
respectively, good predictions for the optimal MFPT. Our hybrid theory
shows that the optimal trap locations are on the semi-major axis for
all $b<1$. In the left panel of Fig.~\ref{fig:two_ellipse}, the optimal
trap locations found from the steady-state of our ODE relaxation
\eqref{ode:relax} are seen to compare very favorably with full PDE
results. Remarkably, we observe that the thin-domain asymptotics
prediction in \eqref{thin:m2} agrees well with the optimal locations
from our hybrid theory for $b<0.7$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.5cm,width=0.60\textwidth]{area_m3.eps}
\caption{Area of the triangle formed by the three optimally located
traps of a common radius $\varepsilon=0.05$ with $D=1$ in a deforming
ellipse of area $\pi$ versus the semi-major axis $a$. The
optimal traps become collinear as $a$ increases. Solid curve: hybrid
asymptotic theory \eqref{e:u0_bar} for the ellipse coupled to the
ODE relaxation scheme \eqref{ode:relax} to find the minimum. Dashed
line: near-disk asymptotics of \eqref{final:avemfpt}. Discrete
points: full numerical PDE results computed from the closest point
method.}
\end{center}
\label{fig:three_area}
\end{figure}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_3xy_1.eps}\label{fig:three_snap1}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_3xy_2.eps}\label{fig:three_snap2}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_3xy_3.eps}\label{fig:three_snap3}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_3xy_4.eps}\label{fig:three_snap4}}
\caption{Optimal three-trap configurations for $D=1$ in a deforming
ellipse of area $\pi$ with semi-major axis $a$ and a common trap
radius $\varepsilon=0.05$. Left: $a=1$, $b=1$. Middle Left: $a=1.184$,
$b\approx 0.845$. Middle Right: $a=1.351$, $b\approx 0.740$. Right:
$a=1.450$, $b\approx 0.690$. The optimally located traps form an
isosceles triangle as they deform from a ring pattern in the unit
disk to a collinear pattern as $a$ increases.}
\end{center}
\label{fig:three_snap}
\end{figure}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=4.1cm,width=0.49\textwidth]{x0_m3.eps}\label{fig:three_ell:x0}}
{\includegraphics[height=4.1cm,width=0.49\textwidth]{u0_m3.eps}\label{fig:three_ell:u0}}
\caption{Left panel: Optimal distance from the origin for a collinear
three-trap pattern on the major-axis of an ellipse of area $\pi$ versus the
semi-minor axis $b$. When $b\le 0.71$ the optimal pattern has a trap
at the center and a pair of traps symmetrically located on either
side of the origin. Right panel: optimal average MFPT
$\overline{u}_{0\mbox{min}}$ versus $b$. Solid curves: hybrid asymptotic
theory \eqref{e:u0_bar} for the ellipse coupled to the ODE relaxation
scheme \eqref{ode:relax} to find the minimum. Dashed line (red):
near-disk asymptotics of \eqref{final:avemfpt}. Discrete points:
Full PDE numerical results computed using the closest point method.
Dashed-dotted line (blue): thin-domain asymptotics \eqref{thin:m3}.}
\end{center}
\label{fig:three_ellipse}
\end{figure}
Next, we consider the case $m=3$. To clearly illustrate how the
optimal trap configuration changes as the aspect ratio of the ellipse
is varied, we use the hybrid theory to compute the area of the
triangle formed by the three optimally located traps. The results
shown in Fig.~\ref{fig:three_area} are seen to compare favorably with
full PDE results. These results show that the optimal
traps become collinear on the semi-major axis when $a\ge 1.45$. In
Fig.~\ref{fig:three_snap} we show snapshots, at certain values of the
semi-major axis, of the optimal trap locations in the ellipse. In the
right panel of Fig.~\ref{fig:three_ellipse}, we show that the optimal
average MFPT from the hybrid theory compares very well with full
numerical PDE results for all $b\leq 1$, and that the thin domain
asymptotics \eqref{thin:m3} provides a good approximation when
$b\leq 0.3$. In the left panel of Fig.~\ref{fig:three_ellipse} we plot
the optimal trap locations on the semi-major axis when the trap
pattern is collinear. We observe that results for the optimal trap
locations from the hybrid theory, the thin domain asymptotics
\eqref{thin:m3}, and the full PDE simulations, essentially coincide on
the full range $0.2<b<0.7$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.5cm,width=0.60\textwidth]{quad_m4.eps}
\caption{Area of the quadrilateral formed by the four optimally
located traps of a common radius $\varepsilon=0.05$ with $D=1$ in a
deforming ellipse of area $\pi$ and semi-major axis $a$. The optimal
traps become collinear as $a$ increases. Solid curve: hybrid
asymptotic theory \eqref{e:u0_bar} for the ellipse coupled to the
ODE relaxation scheme \eqref{ode:relax} to find the minimum. Dashed
line (red): near-disk asymptotics of \eqref{final:avemfpt}. Discrete
points: full numerical PDE results computed from the closest point
method.}
\end{center}
\label{fig:four_area}
\end{figure}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_4xy_1.eps}\label{fig:four_snap1}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_4xy_2.eps}\label{fig:four_snap2}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_4xy_3.eps}\label{fig:four_snap3}}
{\includegraphics[height=3.5cm,width=0.24\textwidth]{snap_4xy_4.eps}\label{fig:four_snap4}}
\caption{Optimal four-trap configurations for $D=1$ in a deforming
ellipse of area $\pi$ with semi-major axis $a$ and a common trap
radius $\varepsilon=0.05$. Left: $a=1$, $b=1$. Middle Left: $a=1.577$,
$b\approx 0.634$. Middle Right: $a=1.675$, $b\approx 0.597$. Right:
$a=3.0$, $b\approx 0.333$. The optimally located traps form a rectangle,
followed by a parallelogram, as they deform from a ring pattern in
the unit disk to a collinear pattern as $a$ increases.}
\end{center}
\label{fig:four_snap}
\end{figure}
For the case of four traps, where $m=4$, in Fig.~\ref{fig:four_area}
we use the hybrid theory to plot the area of the quadrilateral formed
by the four optimally located traps versus the semi-major axis $a>1$.
The full PDE results, also shown in Fig.~\ref{fig:four_area}, compare
well with the hybrid results. This figure shows that as the aspect
ratio of the ellipse increases the traps eventually become collinear on
the semi-major axis when $a\geq 1.7$. This feature is further
illustrated by the snapshots of the optimal trap locations shown in
Fig.~\ref{fig:four_snap} at representative values of $a$. In the
right panel of Fig.~\ref{fig:four_ellipse}, we show that the hybrid and
full numerical PDE results for the optimal average MFPT agree very
closely for all $b\leq 1$, but that the thin-domain asymptotic result
\eqref{thin:m4} agrees well only when $b\leq 0.25$. However, similar
to the three-trap case, on the range of $b$ where the trap
pattern is collinear, in the left panel of Fig.~\ref{fig:four_ellipse}
we show that the hybrid theory, the full PDE simulations, and the
thin-domain asymptotics all provide essentially indistinguishable
predictions for the optimal trap locations on the semi-major axis.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=4.1cm,width=0.49\textwidth]{x0_m4.eps}\label{fig:four_ell:x0}}
{\includegraphics[height=4.1cm,width=0.49\textwidth]{u0_m4.eps}\label{fig:four_ell:u0}}
\caption{Left panel: Optimal distances from the origin for a collinear
four-trap pattern on the major-axis of an ellipse of area $\pi$ and
semi-minor axis $b$. When $b\le 0.57$ the optimal pattern has two
pairs of traps symmetrically located on either side of the
origin. Right panel: the optimal average MFPT
$\overline{u}_{0\mbox{min}}$ versus $b$. Solid curves: hybrid asymptotic
theory \eqref{e:u0_bar} for the ellipse coupled to the ODE
relaxation scheme \eqref{ode:relax} to find the minimum. Dashed line
(red): near-disk asymptotics of \eqref{final:avemfpt}. Discrete
points: full numerical PDE results computed from the closest point
method. Dashed-dotted line (blue): thin-domain asymptotics
\eqref{thin:m4}.}
\end{center}
\label{fig:four_ellipse}
\end{figure}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=3.5cm,width=0.32\textwidth]{snap_5xy_1.eps}\label{fig:five_snap1}}
{\includegraphics[height=3.5cm,width=0.32\textwidth]{snap_5xy_2.eps}\label{fig:five_snap2}}
{\includegraphics[height=3.5cm,width=0.32\textwidth]{snap_5xy_3.eps}\label{fig:five_snap3}}
{\includegraphics[height=3.5cm,width=0.32\textwidth]{snap_5xy_4.eps}\label{fig:five_snap4}}
{\includegraphics[height=3.5cm,width=0.32\textwidth]{snap_5xy_5.eps}\label{fig:five_snap5}}
{\includegraphics[height=3.5cm,width=0.32\textwidth]{snap_5xy_6.eps}\label{fig:five_snap6}}
\caption{Optimal five-trap configurations for $D=1$ in a deforming
ellipse of area $\pi$ with semi-major axis $a$ and a common trap
radius $\varepsilon=0.05$. Top left: $a=1$, $b=1$. Top middle: $a=1.25$,
$b=0.8$. Top right: $a=1.4$, $b\approx 0.690$. Bottom left:
$a=1.665$, $b\approx 0.601$. Bottom middle: $a=2.22$,
$b\approx 0.450$. Bottom right: $a=2.79$, $b\approx 0.358$. The
optimal traps become collinear as $a$ increases, with the outermost
traps moving closer to the ends of the domain.}
\end{center}
\label{fig:five_snap}
\end{figure}
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=4.1cm,width=0.49\textwidth]{x0_m5.eps}\label{fig:five_ell:x0}}
{\includegraphics[height=4.1cm,width=0.49\textwidth]{u0_m5.eps}\label{fig:five_ell:u0}}
\caption{Left panel: Optimal distances from the origin for a collinear
five-trap pattern on the major-axis of an ellipse of area $\pi$ and
semi-minor axis $b$. When $b\le 0.51$ the optimal pattern has a trap
at the center and two pairs of traps symmetrically located on either
side of the origin. Right panel: The optimal average MFPT
$\overline{u}_{0\mbox{min}}$ versus $b$. Solid curves: hybrid asymptotic
theory for the ellipse \eqref{e:u0_bar} coupled to the ODE
relaxation scheme \eqref{ode:relax} to find the minimum.
Dashed line (red): near-disk asymptotics of \eqref{final:avemfpt}.
Discrete points: full numerical PDE results computed from the
closest point method. Dashed-dotted line (blue): thin-domain
asymptotics \eqref{thin:m5}.}
\end{center}
\label{fig:five_ellipse}
\end{figure}
Finally, we show similar results for the case of five traps. In
Fig.~\ref{fig:five_snap}, we plot the optimal trap locations in the
ellipse as the semi-major axis of the ellipse is varied. This plot
shows that the optimal pattern becomes collinear when (roughly)
$a\geq 2$. In the right panel of Fig.~\ref{fig:five_ellipse}, we show a
close agreement between the hybrid and full numerical PDE results for
the optimal average MFPT. However, as seen in
Fig.~\ref{fig:five_ellipse}, the thin-domain asymptotic result
\eqref{thin:m5} accurately predicts the optimal MFPT only for rather
small $b$. As for the four-trap case, in the left panel of
Fig.~\ref{fig:five_ellipse} we show that the hybrid theory, the full
PDE simulations, and the thin-domain asymptotics all yield similar
predictions for the optimal trap locations on the semi-major axis.
\subsection{Thin-Domain Asymptotics}\label{sec:thin}
For a long and thin ellipse, where $b=\delta\ll 1$ and
$a={1/\delta}$ but with $|\Omega|=\pi$, we now derive simple
approximations for the optimal trap locations and the optimal average
MFPT using an approach based on thin-domain asymptotics. For $m=2$ the
optimal trap locations are on the semi-major axis
(cf.~Fig.~\ref{fig:two_ellipse}), while for $3\leq m\leq 5$ the
optimal trap locations become collinear when the semi-minor axis $b$
decreases below a threshold (see Fig.~\ref{fig:three_snap},
Fig.~\ref{fig:four_snap}, and Fig.~\ref{fig:five_snap}).
As derived in Appendix \ref{app:thin}, the leading-order approximation for
the MFPT $u$ satisfying \eqref{Ellip_Model} in a thin elliptical domain
with $b=\delta\ll 1$ is
\begin{equation}
u(x,y)\sim \delta^{-2}U_0(\delta x) + {\mathcal O}(\delta^{-1}) \,,
\end{equation}
where the one-dimensional profile $U_0(X)$, with $x={X/\delta}$,
satisfies the ODE
\begin{equation}\label{sec:long_u0}
\left[\sqrt{1-X^2} \, U_0^{\prime} \right]^{\prime} = -\frac{\sqrt{1-X^2}}{D}\,,
\quad \mbox{on} \quad |X|\leq 1 \,,
\end{equation}
with $U_0$ and $U_0^{\prime}$ bounded as $X\to \pm 1$. In terms of
$U_0(X)$, the average MFPT for the thin ellipse is estimated for
$\delta\ll 1$ as
\begin{equation}\label{thin:ave}
\overline{u}_0 \sim \frac{1}{\pi} \int_{-1/\delta}^{1/\delta}
\int_{-\delta \sqrt{1-\delta^2 x^2}}^{\delta \sqrt{1- \delta^2 x^2}} u \, dy\, dx \sim
\frac{4}{\pi \delta^2} \int_{0}^{1} \sqrt{1-X^2} \, U_0(X) \, dX \,.
\end{equation}
In the thin domain limit, the circular traps of a common radius $\varepsilon$
centered on the semi-major axis are approximated by zero point
constraints for $U_0$ at locations on the interval $|X|\leq 1$. In
this way, \eqref{sec:long_u0} becomes a multi-point BVP, whose
solution depends on the locations of the zero point
constraints. Optimal values for the location of these constraints are
obtained by minimizing the 1-D integral in \eqref{thin:ave}
approximating $\overline{u}_0$. We now apply this approach for
$m=2,\ldots,5$ collinear traps.
For $m=2$ traps centered at $X=\pm d$, with $0<d<1$, the symmetric
profile $U_0(X)$ satisfies the multi-point BVP on $0<X<1$
\begin{equation}\label{thin_m2:U0}
\left[\sqrt{1-X^2} \, U_0^{\prime} \right]^{\prime} = -\frac{\sqrt{1-X^2}}{D}\,,
\quad 0<X<1 \,; \qquad U_0^{\prime}(0)=0 \,, \quad U_0(d)=0 \,,
\end{equation}
with $U_0$ and $U_0^{\prime}$ bounded as $X\to 1^{-}$. A particular
solution for \eqref{thin_m2:U0} is
$U_{0p}=-{[(\sin^{-1}(X))^2+X^2]/(4D)}$, while the homogeneous
solution is $U_{0H}=c_1 \sin^{-1}(X) + c_2$. By combining these
solutions, we readily calculate that
\begin{subequations}
\begin{equation}\label{thin_m2:u0_solve_1}
U_0(X) = \begin{cases}
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 - \pi
\sin^{-1} X + c_2 \right]\,, \quad d\leq X \leq 1\,, \\
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 + c_1
\right]\,, \quad 0\leq X \leq d \,,
\end{cases}
\end{equation}
where $c_1$ and $c_2$ are given by
\begin{equation}\label{thin_m2:u0_solve_2}
c_1 = - d^2 - \left( \sin^{-1}{d}\right)^2 \,, \qquad
c_2 = - d^2 + \pi \sin^{-1}{d} - \left(\sin^{-1}{d}\right)^2 \,.
\end{equation}
\end{subequations}
Upon substituting \eqref{thin_m2:u0_solve_1} into \eqref{thin:ave}, we
obtain that
\begin{subequations}
\begin{equation}\label{thin_m2:ave_ell}
\overline{u}_0 \sim -\frac{1}{\pi D\delta^2} \left[ J_0 + {\mathcal H}(d)
\right] \,,
\end{equation}
where the two integrals $J_0$ and ${\mathcal H}(d)$ are given by
\begin{align}
J_0 &\equiv \int_{0}^{1} F(X) \left[ \left( \sin^{-1}{X}\right)^2 +
X^2 - \pi\sin^{-1}(X) \right] \, dX \approx -0.703 \,, \label{thin_ave_J}\\
{\mathcal H}(d) &\equiv \pi \int_{0}^{d} F(X) \sin^{-1}(X) \, dX
+ c_2 \int_{d}^{1} F(X) \, dX + c_1\int_{0}^{d}
F(X) \, dX \,, \label{thin_ave_H}
\end{align}
\end{subequations}
where $F(X)=\sqrt{1-X^2}$. By performing a few quadratures, and using
\eqref{thin_m2:u0_solve_2} for $c_1$ and $c_2$, we obtain an explicit
expression for ${\mathcal H}(d)$:
\begin{equation}\label{H:m2}
{\mathcal H}(d) = -\frac{\pi}{2} \left[\sin^{-1}(d)\right]^2 +
\frac{\pi^2}{4} \sin^{-1}(d) - \frac{\pi d^2}{2} \,.
\end{equation}
To estimate the optimal average MFPT we simply maximize
${\mathcal H}(d)$ in \eqref{H:m2} on $0<d<1$. We compute that
$d_{\textrm{opt}}\approx 0.406$, and correspondingly
$\overline{u}_{0\mbox{min}}=-\left(\pi D\delta^2\right)^{-1} \left[ J_0 +
{\mathcal H}(d_{\textrm{opt}})\right]$. Then, by setting $\delta=b$
and $x_{\textrm{opt}}={d_{\textrm{opt}}/\delta}$, we obtain the
following estimate for the optimal trap location and minimum average
MFPT for $m=2$ traps in the thin domain limit:
\begin{equation}\label{thin:m2}
x_{0 \textrm{opt}}\sim {0.406/b} \,, \qquad \overline{u}_{0\textrm{opt}} \sim
{0.0652/( b^2 D)} \,, \quad \mbox{for} \quad b\ll 1\,.
\end{equation}
These estimates are favorably compared in Fig.~\ref{fig:two_ellipse}
with full PDE solutions computed using the closest point method
\cite{IWWC2019} and with the full asymptotic theory based on
\eqref{e:u0_bar}.
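The constants in \eqref{thin:m2} are readily reproduced numerically;
the brief sketch below (assuming NumPy; the fine grid is an
illustrative choice) maximizes \eqref{H:m2} and recovers
$d_{\textrm{opt}}\approx 0.4$ and the prefactor $\approx 0.0652$:
\begin{verbatim}
import numpy as np

J0 = -0.703            # quadrature value of (thin_ave_J)

d = np.linspace(1e-3, 1 - 1e-3, 100001)
asd = np.arcsin(d)
H = -0.5*np.pi*asd**2 + 0.25*np.pi**2*asd - 0.5*np.pi*d**2   # (H:m2)

i = np.argmax(H)
print(d[i])                     # d_opt, approx 0.4
print(-(J0 + H[i])/np.pi)       # approx 0.0652; u0_min = this/(b^2 D)
\end{verbatim}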
Next, suppose that $m=3$. Since there is an additional trap at the
origin, we simply replace the condition $U_0^{\prime}(0)=0$ in
\eqref{thin_m2:U0} with $U_0(0)=0$. In place of
\eqref{thin_m2:u0_solve_1}, we obtain
\begin{subequations}
\begin{equation}\label{thin_m3:u0_solve_1}
U_0(X) = \begin{cases}
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 - \pi
\sin^{-1} X + c_2 \right]\,, \quad d\leq X \leq 1\,, \\
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 + c_1\sin^{-1}{X}
\right]\,, \quad 0\leq X \leq d \,,
\end{cases}
\end{equation}
where $c_1$ and $c_2$ are given by
\begin{equation}\label{thin_m3:u0_solve_2}
c_1 = - {\left(d^2 + \left[ \sin^{-1}(d)\right]^2 \right)/\sin^{-1}(d)}
\,, \qquad c_2 = - d^2 + \pi \sin^{-1}(d) - \left[\sin^{-1}(d)\right]^2 \,.
\end{equation}
\end{subequations}
The average MFPT is given by \eqref{thin_m2:ave_ell}, where ${\mathcal H}(d)$
is now defined by
\begin{equation}\label{thin_ave_H_3}
{\mathcal H}(d) \equiv c_2 \int_{d}^{1} F(X) \, dX + (c_1+\pi)
\int_{0}^{d} F(X) \, \sin^{-1}(X) \, dX \,,
\end{equation}
with $F(X)=\sqrt{1-X^2}$. By maximizing ${\mathcal H}(d)$ on $0<d<1$,
we obtain $d_{\textrm{opt}}\approx 0.567$, so that
$\overline{u}_{0\mbox{min}}=-\left(\pi D\delta^2\right)^{-1} \left[ J_0 +
{\mathcal H}(d_{\textrm{opt}})\right]$. In this way, the optimal
trap location and the minimum of the average MFPT satisfies
\begin{equation}\label{thin:m3}
x_{0 \textrm{opt}}\sim {0.567/b} \,, \qquad \overline{u}_{0\textrm{opt}} \sim
{0.0308/( b^2 D)} \,, \quad \mbox{for} \quad b\ll 1\,.
\end{equation}
In Fig.~\ref{fig:three_ellipse} these scaling laws are seen to compare
well with full PDE solutions and with the full asymptotic theory of
\eqref{e:u0_bar}, even when $b$ is only moderately small.
Next, we consider the case $m=4$, with two symmetrically placed traps on
either side of the origin. Therefore, we solve \eqref{thin_m2:U0} with
$U_0^{\prime}(0)=0$, $U_0(d_1)=0$, and $U_0(d_2)=0$, where $0<d_1<d_2$. In
place of \eqref{thin_m2:u0_solve_1}, we get
\begin{subequations}
\begin{equation}\label{thin_m4:u0_solve_1}
U_0(X) = \begin{cases}
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 - \pi
\sin^{-1} X + c_2 \right]\,, \quad d_2\leq X \leq 1\,, \\
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 + b_1 \sin^{-1}{X}
+ b_2 \right]\,, \quad d_1\leq X \leq d_2\,, \\
-\frac{1}{4D} \left[ \left(\sin^{-1}{X}\right)^2 + X^2 + c_1
\right]\,, \quad 0\leq X \leq d_1 \,,
\end{cases}
\end{equation}
where $c_1$ and $c_2$ are given by
\begin{equation}\label{thin_m4:u0_solve_2}
\begin{split}
& \qquad\qquad\qquad c_1 = - d_1^2 - \left( \sin^{-1}{d_1}\right)^2 \,, \qquad
c_2 = - d_2^2 + \pi \sin^{-1}{d_2} - \left(\sin^{-1}{d_2}\right)^2 \,,\\
b_1 &= \frac{\left(\sin^{-1}{d_1}\right)^2 -\left(\sin^{-1}{d_2}\right)^2 +
d_1^2 - d_2^2}{\sin^{-1}{d_2} -\sin^{-1}{d_1}} \,, \qquad
b_2 =-b_1 \sin^{-1} d_1 - d_1^2 - \left(\sin^{-1}{d_1}\right)^2 \,.
\end{split}
\end{equation}
\end{subequations}
The average MFPT is given by \eqref{thin_m2:ave_ell}, where
${\mathcal H}={\mathcal H}(d_1,d_2)$ is now given by
\begin{equation}\label{thin_ave_H_4}
\begin{split}
{\mathcal H}(d_1,d_2) &\equiv c_2 \int_{d_2}^{1} F(X) \, dX + (b_1+\pi)
\int_{d_1}^{d_2} F(X) \, \sin^{-1}(X) \, dX + b_2\int_{d_1}^{d_2}
F(X)\, dX \\
& \qquad + \pi \int_{0}^{d_1} F(X) \, \sin^{-1}(X) \, dX +
c_1 \int_{0}^{d_1} F(X) \, dX \,,
\end{split}
\end{equation}
where $F(X)\equiv\sqrt{1-X^2}$. By using a grid search to maximize
${\mathcal H}(d_1,d_2)$ on $0<d_1<d_2<1$, we obtain that
$d_{1\textrm{opt}}\approx 0.215$ and
$d_{2\textrm{opt}} \approx 0.656$. This yields that the optimal trap
locations and the minimum of the average MFPT, given by
$\overline{u}_{0\mbox{min}}=-\left(\pi D\delta^2\right)^{-1} \left[ J_0 +
{\mathcal H}(d_{1\textrm{opt}},d_{2\textrm{opt}})\right]$, have the
scaling law
\begin{equation}\label{thin:m4}
x_{1 \textrm{opt}}\sim {0.215/b} \,, \quad
x_{2 \textrm{opt}}\sim {0.656/b} \,, \quad
\overline{u}_{0\textrm{opt}} \sim
{0.0179/( b^2 D)} \,, \quad \mbox{for} \quad b\ll 1\,.
\end{equation}
These scaling laws are shown in Fig.~\ref{fig:four_ellipse} to agree well
with the full PDE solutions and with the full asymptotic theory of
\eqref{e:u0_bar} when $b$ is small.
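This grid search is straightforward to reproduce; a sketch (assuming
NumPy, with modest quadrature and grid resolutions chosen for
illustration) that evaluates \eqref{thin_ave_H_4} and locates its
maximizer is below. The $m=3$ and $m=5$ cases follow by the obvious
changes to ${\mathcal H}$.
\begin{verbatim}
import numpy as np

F = lambda X: np.sqrt(1.0 - X**2)
J0 = -0.703                      # quadrature value of (thin_ave_J)

def quad(g, lo, hi, M=400):
    # Simple trapezoidal quadrature on [lo, hi].
    X = np.linspace(lo, hi, M)
    Y = g(X)
    return (hi - lo)/(M - 1)*(np.sum(Y) - 0.5*(Y[0] + Y[-1]))

def H4(d1, d2):
    # H(d1, d2) of (thin_ave_H_4), constants from (thin_m4:u0_solve_2).
    a1, a2 = np.arcsin(d1), np.arcsin(d2)
    c1 = -d1**2 - a1**2
    c2 = -d2**2 + np.pi*a2 - a2**2
    b1 = (a1**2 - a2**2 + d1**2 - d2**2)/(a2 - a1)
    b2 = -b1*a1 - d1**2 - a1**2
    return (c2*quad(F, d2, 1.0)
            + (b1 + np.pi)*quad(lambda X: F(X)*np.arcsin(X), d1, d2)
            + b2*quad(F, d1, d2)
            + np.pi*quad(lambda X: F(X)*np.arcsin(X), 0.0, d1)
            + c1*quad(F, 0.0, d1))

grid = np.linspace(0.01, 0.99, 99)
best = max((H4(p, q), p, q) for p in grid for q in grid if p < q)
print(best)   # maximum near (d1, d2) = (0.215, 0.656), cf. (thin:m4)
\end{verbatim}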
Finally, we consider the case $m=5$, where we need only modify the
$m=4$ analysis by adding a trap at the origin. Setting $U_0(0)=0$,
$U_0(d_1)=0$, and $U_0(d_2)=0$ we obtain that $U_0$ is again given by
\eqref{thin_m4:u0_solve_1}, except that now $c_1$ in
\eqref{thin_m4:u0_solve_1} is replaced by $c_1\sin^{-1}(X)$, with
$c_1$ as defined in \eqref{thin_m3:u0_solve_2}. The average MFPT
satisfies \eqref{thin_m2:ave_ell}, where in place of
\eqref{thin_ave_H_4} we obtain that ${\mathcal H}(d_1,d_2)$ is given
by
\begin{equation}\label{thin_ave_H_5}
\begin{split}
{\mathcal H}(d_1,d_2) &\equiv c_2 \int_{d_2}^{1} F(X) \, dX +
(b_1+\pi) \int_{d_1}^{d_2} F(X) \, \sin^{-1}(X) \, dX \\
& \qquad + b_2\int_{d_1}^{d_2} F(X)\, dX + (c_1+\pi) \int_{0}^{d_1}
F(X) \, \sin^{-1}{X} \, dX \,,
\end{split}
\end{equation}
with $F(X)=\sqrt{1-X^2}$. A grid search yields that
${\mathcal H}(d_1,d_2)$ is maximized on $0<d_1<d_2<1$ when
$d_{1\textrm{opt}}\approx 0.348$ and
$d_{2\textrm{opt}} \approx 0.714$. In this way, the corresponding
optimal trap locations and minimum average MFPT have the scaling law
\begin{equation}\label{thin:m5}
x_{1 \textrm{opt}}\sim {0.348/b} \,, \quad
x_{2 \textrm{opt}}\sim {0.714/b} \,, \quad
\overline{u}_{0\textrm{opt}} \sim {0.0117/( b^2 D)} \,, \quad \mbox{for}
\quad b\ll 1\,.
\end{equation}
Fig.~\ref{fig:five_ellipse} shows that \eqref{thin:m5} compares well
with the full PDE solutions and with the full asymptotic theory of
\eqref{e:u0_bar} when $b$ is small.
\section{An Explicit Neumann Green's Function for the
Ellipse}\label{sec:g_ell}
We derive the {\em new explicit formula} \eqref{cell:finz_g} for the
Neumann Green's function and its regular part in \eqref{cell:R0} in
terms of rapidly converging infinite series. This Green's function
$G(\v{x};\v{x}_0)$ for the ellipse
$\Omega \equiv \lbrace{ \v{x} = (x,y) \, \vert \, {x^2/a^2}+{y^2/b^2}\leq
1\rbrace}$ is the unique solution to
\begin{subequations}\label{ell:g}
\begin{align}
\Delta G &= \frac{1}{|\Omega|} - \delta(\v{x} - \v{x}_0)\, \quad \v{x} \in \Omega\,;
\qquad \partial_n G =0\,, \,\,\, \v{x} \in \partial \Omega\,;
\label{ell:g_a}\\
G \sim -\frac{1}{2\pi}& \log{|\v{x} - \v{x}_0|} + R_e + o(1) \quad \text{as}
\quad\v{x} \to \v{x}_0\,; \qquad \int_{\Omega} G \, \text{d}\v{x}=0\,,
\label{ell:g_b}
\end{align}
\end{subequations}
where $|\Omega|=\pi a b$ is the area of $\Omega$ and $R_e$ is the
regular part of the Green's function. Here $\partial_n G$ is the
outward normal derivative to the boundary of the ellipse. To remove the
$|\Omega|^{-1}$ term in \eqref{ell:g_a}, we introduce $N(\v{x};\v{x}_0)$ defined
by
\begin{equation}\label{ell:g_to_n}
G(\v{x};\v{x}_0) = \frac{1}{4|\Omega|} (x^2+y^2) + N(\v{x};\v{x}_0) \,.
\end{equation}
We readily derive that $N(\v{x};\v{x}_0)$ satisfies
\begin{subequations}\label{ell:n}
\begin{align}
\Delta N &= - \delta(\v{x} - \v{x}_0)\quad \v{x} \in \Omega\,;
\quad \partial_n N = -\frac{1}{2|\Omega|\sqrt{{x^2/a^4} + {y^2/b^4}}}
\,,\,\,\, \v{x} \in \partial \Omega\,; \label{ell:n_a}\\
\int_{\Omega} N \, \text{d}\v{x} &= - \frac{1}{4 |\Omega|} \int_{\Omega}
(x^2 + y^2) \, \text{d}\v{x} = - \frac{1}{4 |\Omega|} \left( \frac{|\Omega|}{4}
(a^2+ b^2) \right) = -\frac{1}{16}(a^2 + b^2)\,. \label{ell:n_b}
\end{align}
\end{subequations}
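For clarity, we record the short calculation behind the Neumann data in
\eqref{ell:n_a}. Since
$\nabla\left({(x^2+y^2)/(4|\Omega|)}\right)={(x,y)/(2|\Omega|)}$, while the
unit outward normal to $\partial\Omega$ is
$\hat{n}=\left({x/a^2}\,,{y/b^2}\right)/\sqrt{{x^2/a^4}+{y^2/b^4}}$, we obtain
on $\partial\Omega$, where ${x^2/a^2}+{y^2/b^2}=1$, that
\begin{equation*}
\partial_n \left( \frac{x^2+y^2}{4|\Omega|}\right) =
\frac{{x^2/a^2}+{y^2/b^2}}{2|\Omega|\sqrt{{x^2/a^4}+{y^2/b^4}}} =
\frac{1}{2|\Omega|\sqrt{{x^2/a^4}+{y^2/b^4}}}\,,
\end{equation*}
so that $\partial_n G=0$ in \eqref{ell:g_a}, together with
\eqref{ell:g_to_n}, yields the boundary condition on $N$ in \eqref{ell:n_a}.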
We assume that $a>b$, so that the semi-major axis is on the
$x$-axis. To solve \eqref{ell:n} we introduce the elliptic cylindrical
coordinates $(\xi,\eta)$ defined by \eqref{ell:coord} and its inverse
mapping \eqref{ell:inverse_mapping}. We set
$\mathcal{N}(\xi,\eta)\equiv N(x(\xi,\eta),y(\xi,\eta))$ and seek to convert
\eqref{ell:n} to a problem for $\mathcal{N}$ defined in a rectangular
domain. It is well-known that
\begin{equation}\label{cell:0}
N_{xx}+N_{yy}= \frac{1}{f^2(\cosh^2\xi - \cos^2\eta)} \left(
\mathcal{N}_{\xi\xi} + \mathcal{N}_{\eta\eta} \right) \,.
\end{equation}
Moreover, by computing the scale factors
$h_{\xi}=\sqrt{x_{\xi}^2 + y_{\xi}^2}$ and
$h_{\eta}=\sqrt{x_{\eta}^2 + y_{\eta}^2}$ of the transformation, we
obtain that
\begin{equation}\label{cell:1}
\delta(x-x_0) \delta(y-y_0)=\frac{1}{h_{\eta} h_{\xi}}
\delta(\xi-\xi_0) \delta(\eta-\eta_0) =\frac{1}{f^2(\cosh^2\xi -
\cos^2\eta)} \delta(\xi-\xi_0) \delta(\eta-\eta_0)\,,
\end{equation}
where we used $h_\xi=h_\eta=f \sqrt{\cosh^2 \xi -\cos^2\eta}$.
By using \eqref{cell:0} and \eqref{cell:1}, we obtain that the PDE in
\eqref{ell:n_a} transforms to
\begin{equation}\label{cell_t:pde}
\mathcal{N}_{\xi\xi}+ \mathcal{N}_{\eta\eta} = -\delta(\xi-\xi_0)\delta(\eta-\eta_0)
\,, \quad \mbox{in} \quad 0\leq \eta \leq 2\pi \,, \,\, 0\leq \xi\leq \xi_b\,.
\end{equation}
To determine how the normal derivative in \eqref{ell:n_a} transforms, we
calculate
\begin{equation}
\begin{pmatrix} N_x \\ N_y \end{pmatrix} = \frac{1}{x_\xi y_\eta-x_\eta y_\xi}
\begin{pmatrix} y_\eta & -y_\xi \\ -x_\eta & x_\xi \end{pmatrix}
\begin{pmatrix} \mathcal{N}_\xi \\ \mathcal{N}_\eta \end{pmatrix} \,,
\end{equation}
where from \eqref{ell:coord_1} we calculate
\begin{equation}\label{cell:xder}
x_\xi=f\sinh\xi\cos\eta = y_\eta \,, \qquad x_\eta=-f\cosh\xi\sin\eta =-y_\xi\,.
\end{equation}
Now using $x=a\cos\eta$ and $y=b\sin\eta$ on $\partial\Omega$, we calculate on
$\partial\Omega$ that
\begin{equation}\label{cell:4}
\partial_n N = \nabla N \cdot \frac{(x/a^2\,, y/b^2)}
{\sqrt{x^2/a^4+y^2/b^4}} = \frac{ \left(\frac{1}{a}\cos\eta\,\,,
\frac{1}{b}\sin\eta\right)}{ \sqrt{x^2/a^4+y^2/b^4}
\left( x_\xi y_\eta-x_\eta y_\xi\right)}
\begin{pmatrix} y_\eta & -y_\xi \\ -x_\eta & x_\xi \end{pmatrix}
\begin{pmatrix} \mathcal{N}_\xi \\ \mathcal{N}_\eta \end{pmatrix}.
\end{equation}
By using \eqref{cell:xder}, we calculate on $\partial\Omega$ that
$x_\xi y_\eta-x_\eta y_\xi = b^2 \cos^2\eta + a^2 \sin^2\eta$. With this
expression, we obtain after some algebra that \eqref{cell:4} becomes
\begin{equation}\label{cell:6}
\partial_n N = \frac{1}{ab \sqrt{x^2/a^4+y^2/b^4}} \, \mathcal{N}_\xi \,, \quad
\mbox{on} \quad \xi =\xi_b \,.
\end{equation}
By combining \eqref{cell:6} and \eqref{ell:n_a}, we obtain
$ \mathcal{N}_\xi = - {1/(2\pi)}$ on $\xi =\xi_b$.
Next, we discuss the other boundary conditions in the transformed
plane. We require that $\mathcal{N}$ and $\mathcal{N}_{\eta}$ are $2\pi$
periodic in $\eta$. The boundary condition imposed on $\eta=0$, which
corresponds to the line segment $y=0$ and $|x|\leq f=\sqrt{a^2-b^2}$
between the two foci, is chosen to ensure that $N$ and the normal
derivative $N_y$ are continuous across this segment. Recall from
\eqref{ell:xy_to_eta} that the top of this segment $y=0^{+}$ and
$|x|\leq f$ corresponds to $0\leq \eta\leq \pi$, while the bottom of
this segment $y=0^{-}$ and $|x|\leq f$ corresponds to
$\pi\leq \eta\leq 2\pi$. To ensure that $N$ is continuous across this
segment, we require that $\mathcal{N}(\xi,\eta)$ satisfies
$\mathcal{N}(0,\eta)=\mathcal{N}(0,2\pi-\eta)$ for any $0\leq \eta\leq
\pi$. Moreover, since $\mathcal{N}_\xi=N_y f\sin\eta$ on $\xi=0$, and
$\sin(2\pi-\eta)=-\sin(\eta)$, we must have
$\mathcal{N}_{\xi}(0,\eta)=-\mathcal{N}_{\xi}(0,2\pi-\eta)$ on
$0\leq \eta\leq \pi$.
Finally, we examine the normalization condition in \eqref{ell:n_b} by using
\begin{equation}\label{cell:7}
\int_{\Omega} N(x,y) \, dx \, dy = \int_{0}^{\xi_b}\int_{0}^{2\pi}
\mathcal{N}(\xi,\eta) \,\, \Big{\vert} \mbox{det}
\begin{pmatrix} x_\xi & x_\eta \\ y_\xi & y_\eta
\end{pmatrix} \Big{\vert} \, d\xi\, d\eta\,.
\end{equation}
Since $x_\xi y_\eta - x_\eta y_\xi=f^2\left(\cosh^2\xi-\cos^2\eta\right)$,
we obtain from \eqref{cell:7} that \eqref{ell:n_b} becomes
\begin{equation}\label{cell_t:norm}
\int_{0}^{\xi_b}\int_{0}^{2\pi} \mathcal{N}(\xi,\eta) \left[\cosh^2\xi-\cos^2\eta
\right]\, d\xi \, d\eta = -\frac{1}{16f^2}{(a^2+b^2)} = -
\frac{(a^2+b^2)}{16(a^2-b^2)} \,.
\end{equation}
In summary, from \eqref{cell_t:pde}, \eqref{cell_t:norm}, and the
condition on $\xi=\xi_b$, $\mathcal{N}(\xi,\eta)$ satisfies
\begin{subequations}\label{cell:n}
\begin{gather}
\Delta \mathcal{N} = - \delta(\xi-\xi_0)\delta(\eta -\eta_0)\,\quad
0\leq \xi\leq \xi_b \,, \,\, 0\leq \eta\leq 2\pi \,, \label{cell:n_pde}\\
\partial_\xi \mathcal{N}= -\frac{1}{2\pi} \,, \quad \mbox{on}\,\,\xi=\xi_b
\,; \qquad \mathcal{N}\,, \,\, \mathcal{N}_\eta \quad 2\pi \,\, \mbox{periodic in }
\eta \,, \label{cell:n_bnd1}\\
\mathcal{N}(0,\eta)=\mathcal{N}(0,2\pi-\eta) \,, \quad
\mathcal{N}_{\xi}(0,\eta)=-\mathcal{N}_{\xi}(0,2\pi-\eta) \,, \quad \mbox{for} \quad
0\leq \eta\leq \pi \,, \label{cell:n_bnd2}\\
\int_{0}^{\xi_b}\int_{0}^{2\pi} \mathcal{N}(\xi,\eta) \left[\cosh^2\xi-\cos^2\eta
\right]\, d\xi \, d\eta = -\frac{(a^2+b^2)}{16(a^2-b^2)} \,.
\label{cell:n_int}
\end{gather}
\end{subequations}
The solution to \eqref{cell:n} is expanded in terms of the eigenfunctions
in the $\eta$ direction:
\begin{equation}\label{ncell:eig_ex}
\mathcal{N}(\xi,\eta) = \mathcal{A}_0(\xi) + \sum_{k=1}^{\infty} \mathcal{A}_k(\xi)
\cos(k\eta) + \sum_{k=1}^{\infty} \mathcal{B}_k(\xi) \sin(k\eta) \,.
\end{equation}
The boundary condition \eqref{cell:n_bnd1} is satisfied with
$\mathcal{A}_0^{\prime}(\xi_b)=-{1/(2\pi)}$ and
$\mathcal{A}_k^{\prime}(\xi_b)=\mathcal{B}_k^{\prime}(\xi_b)=0$, for $k\geq
1$. To satisfy $\mathcal{N}(0,\eta)=\mathcal{N}(0,2\pi-\eta)$, we require
$\mathcal{B}_k(0)=0$ for $k\geq 1$. Finally, to satisfy
$\mathcal{N}_{\xi}(0,\eta)=-\mathcal{N}_{\xi}(0,2\pi-\eta)$, we require that
$\mathcal{A}_0^{\prime}(0)=0$ and $\mathcal{A}_k^{\prime}(0)=0$ for $k\geq
1$. In the usual way, we can derive ODE boundary value problems for
$\mathcal{A}_0$, $\mathcal{A}_k$, and $\mathcal{B}_k$. We obtain that
\begin{subequations}\label{ncell:odes}
\begin{equation}\label{ncell:ode_0}
\mathcal{A}_0^{\prime\prime} = - \frac{1}{2\pi}\delta(\xi-\xi_0) \,, \quad
0\leq \xi\leq \xi_b \,; \qquad \mathcal{A}_0^{\prime}(0)=0 \,, \,\,\,
\mathcal{A}_0^{\prime}(\xi_b)=-\frac{1}{2\pi} \,,
\end{equation}
while on $0\leq\xi\leq \xi_b$, and for each $k=1,2,\ldots$, we have
\begin{gather}
\mathcal{A}_k^{\prime\prime} - k^2 \mathcal{A}_k = -\frac{1}{\pi} \cos(k\eta_0)
\delta(\xi-\xi_0) \,; \qquad \mathcal{A}_k^{\prime}(0)=0 \,, \,\,\,
\mathcal{A}_k^{\prime}(\xi_b)=0 \,, \label{ncell:ode_ak} \\
\mathcal{B}_k^{\prime\prime} - k^2 \mathcal{B}_k = -\frac{1}{\pi} \sin(k\eta_0)
\delta(\xi-\xi_0) \,; \qquad \mathcal{B}_k(0)=0 \,, \,\,\,
\mathcal{B}_k^{\prime}(\xi_b)=0 \,. \label{ncell:ode_bk}
\end{gather}
\end{subequations}
We observe from \eqref{ncell:ode_0} that $\mathcal{A}_0$ is specified only
up to an arbitrary constant.
We determine this constant from the normalization condition
\eqref{cell:n_int}. By substituting \eqref{ncell:eig_ex} into
\eqref{cell:n_int}, we readily derive the identity that
\begin{equation}\label{cell:n_int_1}
\int_{0}^{\xi_b} \mathcal{A}_0(\xi) \cosh(2\xi) \, d\xi - \frac{1}{2}
\int_{0}^{\xi_b} \mathcal{A}_2(\xi) \, d\xi = - \frac{1}{16\pi} \left(
\frac{a^2+ b^2}{a^2-b^2} \right) \,.
\end{equation}
We will use \eqref{cell:n_int_1} to derive a point constraint on
$\mathcal{A}_0(\xi_b)$. To do so, we define $\phi(\xi)=\cosh(2\xi)$, which
satisfies $\phi^{\prime\prime}-4\phi=0$ and $\phi^{\prime}(0)=0$. We
integrate by parts and use $\mathcal{A}_0^{\prime}(0)=0$ and
$\mathcal{A}_0^{\prime}(\xi_b)=-{1/(2\pi)}$ to get
\begin{equation}\label{cell:n_int_2}
\begin{split}
4\int_{0}^{\xi_b} \mathcal{A}_0\phi \, d\xi = \int_{0}^{\xi_b} \mathcal{A}_0
\phi^{\prime\prime} \, d\xi &= \left(\phi^{\prime}\mathcal{A}_0 -
\phi\mathcal{A}^{\prime}_0 \right)\vert_{0}^{\xi_b} +
\int_{0}^{\xi_b} \phi \mathcal{A}_0^{\prime\prime}\, d\xi \,, \\
& = \phi^{\prime}(\xi_b) \mathcal{A}_0(\xi_b) + \frac{1}{2\pi}
\left[ \phi(\xi_b) - \phi(\xi_0)\right] \,.
\end{split}
\end{equation}
Next, set $k=2$ in \eqref{ncell:ode_ak} and integrate over
$0<\xi<\xi_b$. Using the no-flux boundary conditions we get
$\int_{0}^{\xi_b} \mathcal{A}_2 \, d\xi={\cos(2\eta_0)/(4\pi)}$. We substitute
this result, together with \eqref{cell:n_int_2}, into
\eqref{cell:n_int_1} and solve the resulting equation for $\mathcal{A}_0(\xi_b)$
to get
\begin{equation}\label{ncell:a00_t}
\mathcal{A}_0(\xi_b) = \frac{1}{4\pi\sinh(2\xi_b)} \left[
\cosh(2\xi_0)+\cos(2\eta_0) -\cosh(2\xi_b) -\frac{1}{2}
\left( \frac{a^2 + b^2}{a^2-b^2} \right) \right]\,.
\end{equation}
To simplify this expression we use $\tanh\xi_b={b/a}$ to calculate
$\sinh(2\xi_b)={2ab/(a^2-b^2)}$ and $\coth(2\xi_b)={(a^2+b^2)/(2ab)}$, while
from \eqref{ell:coord_1} we get
\begin{equation*}
x_0^2 + y_0^2 = f^2 \left[\cosh^2\xi_0 -\sin^2\eta_0\right] =
\frac{ (a^2-b^2)}{2} \left[\cosh(2\xi_0) + \cos(2\eta_0)\right] \,.
\end{equation*}
Upon substituting these results into \eqref{ncell:a00_t}, we conclude that
\begin{equation}\label{ncell:a00}
\mathcal{A}_0(\xi_b) = -\frac{3}{16|\Omega|} (a^2 + b^2) + \frac{1}{4|\Omega|}
\left( x_0^2 + y_0^2 \right) \,,
\end{equation}
where $|\Omega|=\pi a b$ is the area of the ellipse. With this explicit
value for $\mathcal{A}_0(\xi_b)$, the normalization condition
\eqref{cell:n_int}, or equivalently the constraint
$\int_{\Omega} G \, \text{d}\v{x}=0$, is satisfied.
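As an independent check of this algebra, the equivalence of
\eqref{ncell:a00_t} and \eqref{ncell:a00} is readily confirmed numerically.
The following standalone Python sketch (the values of $a$, $b$, and
$(\xi_0,\eta_0)$ are arbitrary) evaluates both expressions at the same source
point:
\begin{verbatim}
import numpy as np

a, b = 2.0, 1.2
f = np.sqrt(a**2 - b**2)               # focal distance
xi_b = 0.5*np.log((a + b)/(a - b))     # boundary coordinate, tanh(xi_b) = b/a
area = np.pi*a*b                       # |Omega|

xi0, eta0 = 0.4, 1.1                   # arbitrary source point, 0 < xi0 < xi_b
x0 = f*np.cosh(xi0)*np.cos(eta0)
y0 = f*np.sinh(xi0)*np.sin(eta0)

# Elliptic-coordinate form (ncell:a00_t)
A0_t = (np.cosh(2*xi0) + np.cos(2*eta0) - np.cosh(2*xi_b)
        - 0.5*(a**2 + b**2)/(a**2 - b**2))/(4*np.pi*np.sinh(2*xi_b))

# Cartesian form (ncell:a00)
A0 = -3*(a**2 + b**2)/(16*area) + (x0**2 + y0**2)/(4*area)

print(A0_t, A0)    # the two values agree to machine precision
\end{verbatim}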
Next, we solve the ODEs \eqref{ncell:odes} for $\mathcal{A}_0$, $\mathcal{A}_k$,
and $\mathcal{B}_k$, for $k\geq 1$, to obtain
\begin{subequations}\label{ncell:sol_odes}
\begin{gather}
\mathcal{A}_0(\xi) = \frac{1}{2\pi} \left(\xi_b-\xi_{>}\right) + \mathcal{A}_0(\xi_b) \,,
\quad \mathcal{A}_k(\xi) =
\frac{ \cos(k\eta_0)}{k\pi \sinh(k\xi_b)} \cosh(k\xi_{<})\cosh\left(k(\xi_{>}-\xi_b)
\right) \,, \label{ncell:sol_ak} \\
\mathcal{B}_k(\xi) = \frac{ \sin(k\eta_0)}{k\pi \cosh(k\xi_b)}
\sinh(k\xi_{<})\cosh\left(k(\xi_{>}-\xi_b) \right) \,, \label{ncell:sol_bk}
\end{gather}
\end{subequations}
where we have defined $\xi_{>}\equiv\max(\xi_0,\xi)$ and $\xi_{<}\equiv
\min(\xi_0,\xi)$.
To determine an explicit expression for
$G(\v{x};\v{x}_0)={|\v{x}|^2/(4|\Omega|)} + \mathcal{N}(\xi,\eta)$, as given in
\eqref{ell:g_to_n}, we substitute \eqref{ncell:a00} and
\eqref{ncell:sol_odes} into the eigenfunction expansion
\eqref{ncell:eig_ex} for $\mathcal{N}$. In this way, we get
\begin{subequations}\label{cell:gexp}
\begin{equation}\label{cell:gexp_t1}
G(\v{x};\v{x}_0) = \frac{1}{4|\Omega|} \left(|\v{x}|^2 + |\v{x}_0|^2\right) -
\frac{3}{16|\Omega|}(a^2 + b^2) + \frac{1}{2\pi} \left(\xi_b-\xi_{>}\right)
+ \mathcal{S} \,,
\end{equation}
where the infinite sum $\mathcal{S}$ is defined by
\begin{equation}\label{cell:gexp_sum_t}
\begin{split}
\mathcal{S} & \equiv \sum_{k=1}^{\infty}
\frac{ \cos(k\eta_0)\cos(k\eta)} {\pi k \sinh(k\xi_b)}
\cosh(k\xi_{<})\cosh\left(k(\xi_{>}-\xi_b)\right) \\
& \qquad + \sum_{k=1}^{\infty}\frac{ \sin(k\eta_0)\sin(k\eta)}
{\pi k \cosh(k\xi_b)} \sinh(k\xi_{<})\cosh\left(k(\xi_{>}-\xi_b)\right) \,.
\end{split}
\end{equation}
\end{subequations}
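Before proceeding, we note that \eqref{cell:gexp} is directly computable,
which provides a useful numerical consistency test of the constant
\eqref{ncell:a00}: the constraint $\int_{\Omega} G \, \text{d}\v{x}=0$ from
\eqref{ell:g_b} must hold. The Python sketch below (arbitrary source point;
the truncation $K$ and quadrature resolution are chosen ad hoc, noting that
the summand decays like $e^{-k|\xi-\xi_0|}/k$ away from the source) verifies
this by midpoint quadrature in elliptic coordinates:
\begin{verbatim}
import numpy as np

a, b = 2.0, 1.0
f = np.sqrt(a**2 - b**2)
xi_b = 0.5*np.log((a + b)/(a - b))
area = np.pi*a*b
xi0, eta0 = 0.3, 2.1          # arbitrary source point, 0 < xi0 < xi_b
x0 = f*np.cosh(xi0)*np.cos(eta0)
y0 = f*np.sinh(xi0)*np.sin(eta0)

def G(xi, eta, K=300):
    """Evaluate eq. (cell:gexp), truncating the sum S after K terms."""
    x = f*np.cosh(xi)*np.cos(eta)
    y = f*np.sinh(xi)*np.sin(eta)
    xg, xl = max(xi, xi0), min(xi, xi0)     # xi_> and xi_<
    k = np.arange(1, K + 1)
    S = np.sum(np.cos(k*eta0)*np.cos(k*eta)*np.cosh(k*xl)
               * np.cosh(k*(xg - xi_b))/(np.pi*k*np.sinh(k*xi_b))
               + np.sin(k*eta0)*np.sin(k*eta)*np.sinh(k*xl)
               * np.cosh(k*(xg - xi_b))/(np.pi*k*np.cosh(k*xi_b)))
    return ((x**2 + y**2 + x0**2 + y0**2)/(4*area)
            - 3*(a**2 + b**2)/(16*area) + (xi_b - xg)/(2*np.pi) + S)

# Midpoint quadrature of int_Omega G dA, using the area element
# dA = f^2 (cosh^2 xi - cos^2 eta) dxi deta
n1, n2 = 120, 240
xis = (np.arange(n1) + 0.5)*xi_b/n1
etas = (np.arange(n2) + 0.5)*2*np.pi/n2
total = sum(G(u, v)*f**2*(np.cosh(u)**2 - np.cos(v)**2)
            for u in xis for v in etas)*(xi_b/n1)*(2*np.pi/n2)
print(total)   # approximately zero, up to truncation/quadrature error
\end{verbatim}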
Next, from the product to sum formulas for $\cos(A)\cos(B)$ and
$\sin(A)\sin(B)$ we get
\begin{equation}\label{cell:gexp_sum_t2}
\begin{split}
\mathcal{S} &= \frac{1}{2\pi} \sum_{k=1}^{\infty} \frac{\cosh\left(k(\xi_{>}-\xi_b)
\right)}{k} \left[ \frac{\cosh(k\xi_{<})}{\sinh(k\xi_b)} +
\frac{\sinh(k\xi_{<})}{\cosh(k\xi_b)} \right] \cos\left(k(\eta-\eta_0)\right)
\\
& \qquad +
\frac{1}{2\pi} \sum_{k=1}^{\infty} \frac{\cosh\left(k(\xi_{>}-\xi_b)
\right)}{k} \left[ \frac{\cosh(k\xi_{<})}{\sinh(k\xi_b)} -
\frac{\sinh(k\xi_{<})}{\cosh(k\xi_b)} \right] \cos\left(k(\eta+\eta_0)\right)\,.
\end{split}
\end{equation}
Then, by using product to sum formulas for $\cosh(A)\cosh(B)$, the
identity $\sinh(2A)=2\sinh(A)\cosh(A)$, $\xi_{>}+\xi_{<}=\xi+\xi_0$,
and $\xi_{>}-\xi_{<}=|\xi-\xi_0|$, some algebra yields that
\begin{equation}\label{cell:sinf}
\begin{split}
\mathcal{S} &= \frac{1}{2\pi} \mbox{Re} \left(\sum_{k=1}^{\infty}
\frac{\left[\cosh\left(k(\xi+\xi_0)\right) +
\cosh\left(k(|\xi-\xi_0|-2\xi_b)\right)\right]}{k\sinh(2k\xi_b)}
e^{ik(\eta-\eta_0)} \right) \\
& \quad + \frac{1}{2\pi} \mbox{Re} \left(\sum_{k=1}^{\infty}
\frac{\left[\cosh\left(k(\xi+\xi_0-2\xi_b)\right) +
\cosh\left(k|\xi-\xi_0|\right)\right]}{k\sinh(2k\xi_b)}
e^{ik(\eta +\eta_0)} \right)\,.
\end{split}
\end{equation}
The next step in the analysis is to convert the hyperbolic functions in
\eqref{cell:sinf} into pure exponentials. A simple calculation yields that
\begin{subequations}\label{cell:s_exp_t}
\begin{equation}\label{cell:s_exp_t1}
\mathcal{S} = \frac{1}{2\pi} \mbox{Re} \left( \sum_{k=1}^{\infty} \frac{\mathcal{H}_1}{k}
e^{ik(\eta-\eta_0)} + \sum_{k=1}^{\infty} \frac{\mathcal{H}_2}{k}
e^{ik(\eta+\eta_0)} \right)\,,
\end{equation}
where $\mathcal{H}_1$ and $\mathcal{H}_2$ are defined by
\begin{equation}
\begin{split}
\mathcal{H}_1 &\equiv \frac{1}{1-e^{-4k\xi_b}}
\left[ e^{k(\xi+\xi_0-2\xi_b)} + e^{-k(\xi+\xi_0+2\xi_b)} +
e^{k(|\xi-\xi_0|-4\xi_b)} + e^{-k|\xi-\xi_0|}\right] \,, \\
\mathcal{H}_2 &\equiv \frac{1}{1-e^{-4k\xi_b}}
\left[ e^{k(\xi+\xi_0-4\xi_b)} + e^{k(|\xi-\xi_0|-2\xi_b)} +
e^{-k(|\xi-\xi_0|+2\xi_b)} + e^{-k(\xi+\xi_0)}\right] \,.
\end{split}
\end{equation}
\end{subequations}
Then, for any $q$ with $0<q<1$ and integer $k\geq 1$, we use the
identity $\sum_{n=0}^{\infty} \left(q^k\right)^n = \frac{1}{1-q^k}$
for the choice $q=e^{-4\xi_b}$, which converts $\mathcal{H}_1$ and
$\mathcal{H}_2$ into infinite sums. This leads to a doubly-infinite sum
representation for $\mathcal{S}$ in \eqref{cell:s_exp_t1} given by
\begin{equation}\label{cell:sum_z}
\mathcal{S} = \frac{1}{2\pi} \mbox{Re}\left(
\sum_{k=1}^{\infty} \sum_{n=0}^{\infty} \frac{\left(q^n\right)^k}{k}
\left( z_1^k + z_2^k + z_3^k + z_4^k + z_5^k + z_6^k + z_7^k + z_8^k
\right)\right) \,,
\end{equation}
where the complex constants $z_1,\ldots,z_8$ are defined by
\eqref{cell:def_z}. From these formulae, we readily observe that
$|z_j|<1$ on $0\leq\xi\leq \xi_b$ for any
$(\xi,\eta)\neq (\xi_0,\eta_0)$. Since $0<q<1$, we can then switch the
order of the sums in \eqref{cell:sum_z} when
$(\xi,\eta)\neq (\xi_0,\eta_0)$ and use the identity
$\mbox{Re}\left(\sum_{k=1}^{\infty} k^{-1} \omega^k\right)=
-\log|1-\omega|$, where $|1-\omega|$ denotes modulus. In this way,
upon setting $\omega_j=q^nz_j$ for $j=1,\ldots,8$, we obtain a compact
representation for $\mathcal{S}$. Finally, by using this result in
\eqref{cell:gexp} we obtain for $(\xi,\eta)\neq (\xi_0,\eta_0)$, or
equivalently $(x,y)\neq (x_0,y_0)$, the result given explicitly in
\eqref{cell:finz_g} of \S~\ref{sec:ellipse}.
Next, to determine the regular part of the Neumann Green's function we
must identify the singular term in \eqref{cell:finz_g1} at
$(\xi,\eta)=(\xi_0,\eta_0)$. Since $z_1=1$, while $|z_j|<1$ for
$j=2,\ldots,8$, at $(\xi,\eta)=(\xi_0,\eta_0)$, the singular
contribution arises only from the $n=0$ term in
$\sum_{n=0}^{\infty} \log |1-\beta^{2n} z_1|$. As such, we add and
subtract the fundamental singularity $-{\log|\v{x}-\v{x}_0|/(2\pi)}$ in
\eqref{cell:finz_g1} to get
\begin{subequations}
\begin{equation}
G(\v{x};\v{x}_0) = -\frac{1}{2\pi} \log|\v{x}-\v{x}_0| + R(\v{x};\v{x}_0)\,,\label{cell:G_and_R}
\end{equation}
\begin{equation}\label{cell:R}
\begin{split}
R(\v{x};\v{x}_0) &= \frac{1}{4|\Omega|} \left( |\v{x}|^2 + |\v{x}_0|^2\right)
-\frac{3(a^2+b^2)}{16|\Omega|} - \frac{1}{4\pi} \log\beta -
\frac{1}{2\pi}\xi_{>} +
\frac{1}{2\pi} \log\left( \frac{|\v{x}-\v{x}_0|}{|1-z_1|}\right) \\
&\qquad -\frac{1}{2\pi}\sum_{n=1}^{\infty}\log|1-\beta^{2n} z_1|
-\frac{1}{2\pi} \sum_{n=0}^{\infty}\log\left(
\displaystyle \prod_{j=2}^{8} |1-\beta^{2n} z_j| \right) \,.
\end{split}
\end{equation}
\end{subequations}
To identify $\lim_{\v{x}\to\v{x}_0}R(\v{x};\v{x}_0)=R_{e}$, we must find
$\lim_{\v{x}\to\v{x}_0} \log\left( {|\v{x}-\v{x}_0|/|1-z_1|}\right)$. To do so,
we use a Taylor approximation on \eqref{ell:coord_1} to derive at
$(\xi,\eta)=(\xi_0,\eta_0)$ that
\begin{equation}\label{cell:loc_jac}
\begin{pmatrix} \xi-\xi_0 \\ \eta-\eta_0 \end{pmatrix} =
\frac{1}{(x_\xi y_\eta-x_\eta y_\xi)}
\begin{pmatrix} y_\eta & -x_\eta \\ -y_\xi & x_\xi \end{pmatrix}
\begin{pmatrix} x-x_0 \\ y-y_0 \end{pmatrix} \,.
\end{equation}
By calculating the partial derivatives in \eqref{cell:loc_jac} using
\eqref{cell:xder}, and then noting from \eqref{cell:def_z} that
$|1-z_1|^2\sim (\xi-\xi_0)^2+(\eta-\eta_0)^2$ as
$(\xi,\eta)\to (\xi_0,\eta_0)$, we readily derive that
\begin{equation}\label{cell:local}
\lim_{\v{x}\to\v{x}_0} \log\left( \frac{|\v{x}-\v{x}_0|}{|1-z_1|}\right) =
\frac{1}{2} \log{(a^2-b^2)} + \frac{1}{2} \log\left(\cosh^2\xi_0
-\cos^2\eta_0\right) \,.
\end{equation}
Finally, we substitute \eqref{cell:local} into \eqref{cell:R} and let
$\v{x}\to\v{x}_0$. This yields the formula for the regular part of the
Neumann Green's function as given in \eqref{cell:R0} of
\S~\ref{sec:ellipse}. In Appendix \ref{ell:disk} we show that the
Neumann Green's function \eqref{cell:finz_g} for the ellipse reduces
to the expression given in \eqref{gr:gmrm} for the unit disk when
$a\to b=1$.
\section{Discussion}\label{sec:discussion}
Here we discuss the relationship between our problem of optimal trap
patterns and a related optimization problem for the fundamental
Neumann eigenvalue $\lambda_0$ of the Laplacian in a bounded 2-D
domain $\Omega$ containing $m$ small circular absorbing traps of a
common radius $\varepsilon$. That is, $\lambda_0$ is the lowest eigenvalue of
\begin{equation}\label{eig:low}
\begin{split}
\Delta u + \lambda u &= 0\,, \quad x \in \Omega\setminus
\cup_{j=1}^{m}\Omega_{\varepsilon j} \,; \qquad \partial_n u = 0 \,,
\quad x\in \partial \Omega \,, \\
u &= 0\,, \quad x\in \partial \Omega_{\varepsilon j}\,,
\quad j = 1, \ldots,m\,.
\end{split}
\end{equation}
Here $\Omega_{\varepsilon j}$ is a circular disk of radius $\varepsilon\ll 1$
centered at $\v{x}_j\in \Omega$. In the limit $\varepsilon\to 0$, a two-term
asymptotic expansion for $\lambda_0$ in powers of
$\nu\equiv {-1/\log\varepsilon}$ is
(see \cite[Corollary~2.3]{KTW2005} and Appendix~\ref{app:eig_low})
\begin{equation}\label{eig:2term}
\lambda_{0} \sim \frac{2\pi m \nu}{|\Omega|} -
\frac{4\pi^2 \nu^2}{|\Omega|} p(\v{x}_1,\ldots,\v{x}_m) + O(\nu^3) \,,
\quad \mbox{with} \quad p(\v{x}_1,\ldots,\v{x}_m)\equiv \v{e}^T \mathcal{G} \v{e},
\end{equation}
where $\v{e}\equiv (1,\ldots,1)^T$ and $\mathcal{G}$ is the Neumann Green's
matrix. To relate this result for $\lambda_0$ with that for the
average MFPT $\overline{u}_0$ satisfying \eqref{e:u0_bar}, we let
$\nu\ll 1$ in \eqref{e:u0_bar} and calculate that
$\mathcal{A}\sim {|\Omega| \v{e}/(2\pi D m)} + {\mathcal O}(\nu)$.
From \eqref{e:u0_bar}, we conclude that
\begin{equation}\label{disc:u0_bar}
\overline{u}_0 = \frac{|\Omega|}{2\pi D \nu m} \left( 1 +
\frac{2\pi \nu}{m} p(\v{x}_1,\ldots,\v{x}_m) + {\mathcal O}(\nu^2)\right)\,,
\end{equation}
where $p(\v{x}_1,\ldots,\v{x}_m)$ is defined in \eqref{eig:2term}. By
comparing \eqref{disc:u0_bar} and \eqref{eig:2term} we conclude, up to
terms of ${\mathcal O}(\nu^2)$, that the trap configurations that
provide local minima for the average MFPT also provide local maxima
for the first Neumann eigenvalue for \eqref{eig:low}. Qualitatively,
this implies that, up to terms of order ${\mathcal O}(\nu^2)$, the
trap configuration that maximizes the rate at which a Brownian
particle is captured also provides the best configuration to minimize
the average mean first capture time of the particle. In this way, our
optimal trap configurations for the average MFPT for the ellipse
identified in \S~\ref{ell:ex} also correspond to trap patterns that
maximize $\lambda_0$ up to terms of order ${\mathcal
O}(\nu^2)$. Moreover, we remark that for the special case of a
ring-pattern of traps, the first two terms in \eqref{disc:u0_bar} provide
an exact solution of \eqref{e:u0_bar}. As such, for these special
patterns, the trap configuration that maximizes the
${\mathcal O}(\nu^2)$ term in $\lambda_0$ provides the optimal trap
locations that minimize the average MFPT to {\em all orders in $\nu$.}
Finally, we discuss two possible extensions of this study. Firstly, in
near-disk domains and in the ellipse it would be worthwhile to use a
more refined gradient descent procedure such as in \cite{ridgway} and
\cite{gilbert} to numerically identify globally optimum trap
configurations for a much larger number of identical traps than
considered herein. One key challenge in upscaling the optimization
procedure to a larger number of traps is that the energy landscape can
be rather flat or else have many local minima, and so identifying the
true optimum pattern is delicate. Locally optimum trap patterns with
very similar minimum values for the average MFPT already occur in
certain near-disk domains at a rather small number of traps (see
Fig.~\ref{fig:neardisk_cos4} and Fig.~\ref{fig:neardisk_odd}). One
advantage of our asymptotic theory leading to \eqref{final:avemfpt}
for the near-disk and \eqref{e:u0_bar} for the ellipse, is that it can
be implemented numerically with very high precision. As a result,
small differences in the average MFPT between two distinct locally
optimal trap patterns are not due to discretization errors arising
from either numerical quadratures or evaluations of the Neumann
Green's function. As such, combining our hybrid theory with a refined
global optimization procedure should lead to the reliable
identification of globally optimal trap configurations for these
domains.
Another open direction is to investigate whether there are
computationally useful analytical representations for the Neumann
Green's function in an arbitrary bounded 2-D domain. In this
direction, in \cite[Theorem~4.1]{KW2003} an explicit analytical result
for the gradient of the regular part of the Neumann Green's function
was derived in terms of the mapping function for a general class of
mappings of the unit disk. It is worthwhile to study whether
this analysis can be extended to provide a simple and accurate
approach to compute the Neumann Green's matrix for an arbitrary
domain. This matrix could then be used in the linear algebraic system
\eqref{e:u0_bar} to calculate the average MFPT, and a gradient descent
scheme implemented to identify optimal patterns.
\section{Acknowledgements}\label{sec:ak}
Colin Macdonald and Michael Ward were supported by NSERC
Discovery grants. Tony Wong was partially supported by a UBC Four-Year
Graduate Fellowship.
\begin{appendix}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\setcounter{equation}{0}
\section{Derivation of the Thin Domain ODE}\label{app:thin}
In the asymptotic limit of a long thin domain, we use a perturbation
approach on the MFPT PDE \eqref{Ellip_Model} for $u(x,y)$ in order to
derive the limiting problem \eqref{sec:long_u0}. We introduce the
stretched variables $X$ and $Y$ by $X = \delta x, Y = {y/\delta}$ and
$d = \delta x_0$, and set $U(X,Y)=u({X/\delta},Y\delta)$. Then the
PDE in \eqref{Ellip_Model} becomes
$\delta^4 \partial_{XX} U + \partial_{YY} U = -{\delta^2/D}$. By
expanding $U = \delta^{-2} U_0 + U_1 + \delta^2 U_2 + \ldots$ in this
PDE, we collect powers of $\delta$ to get
\begin{equation}
{\mathcal O}(\delta^{-2})\,:\,\,\partial_{YY} U_0 = 0\,; \quad
{\mathcal O}(1)\,: \,\,\partial_{YY} U_1 = 0\,; \quad
{\mathcal O}(\delta^2)\,: \,\,\partial_{YY} U_2 = -\frac{1}{D} -
\partial_{XX} U_0 \,. \label{app:pde_order}
\end{equation}
On the boundary $y = \pm\delta F(\delta x)$, or equivalently
$Y = \pm F(X)$, where $F(X)=\sqrt{1-X^2}$, the unit outward normal is
$\hat{\mathbf{n}} = {\mathbf{n}/|\mathbf{n}|}$, where
$\mathbf{n} \equiv (-\delta^2 F^{\prime}(X),\pm1)$. The condition for
the vanishing of the outward normal derivative in
\eqref{Ellip_Model} becomes
\begin{equation*}
\partial_n u = \hat{\mathbf{n}} \cdot (\partial_x u, \partial_y u) =
\frac{1}{|\mathbf{n}|}(-\delta^2F^{\prime}, \pm 1) \cdot
(\delta\partial_X U, \delta^{-1}\partial_Y U) = 0\,, \,\,\, \mbox{on}
\,\,\, Y = \pm F(X) \,.
\end{equation*}
This is equivalent to the condition that
$\partial_Y U = \pm \delta^4 F^{\prime}(X) \partial_X U$ on
$Y = \pm F(X)$. Upon substituting $U = \delta^{-2} U_0 + U_1 + \delta^2 U_2 +
\ldots$ into this expression, and equating powers of $\delta$, we obtain on
$Y=\pm F(X)$ that
\begin{equation}
{\mathcal O}(\delta^{-2})\,: \,\,\partial_Y U_0 =0\,; \quad
{\mathcal O}(1)\,: \,\,\partial_Y U_1 = 0\,; \quad
{\mathcal O}(\delta^2)\,: \,\,\partial_Y U_2 = \pm F^{\prime}(X)
\partial_X U_0 \,. \label{eqn:three_traps_bc_different_order}
\end{equation}
From (\ref{app:pde_order}) and
(\ref{eqn:three_traps_bc_different_order}) we conclude that
$U_0 = U_0(X)$ and $U_1 = U_1(X)$. Assuming that the trap radius
$\varepsilon$ is comparable to the domain width $\delta$, we will
approximate the zero Dirichlet boundary condition on the three traps
as zero point constraints for $U_0$.
The ODE for $U_0(X)$ is derived from a solvability condition on
the ${\mathcal O}(\delta^2)$ problem:
\begin{equation}\label{long:u2}
\partial_{YY} U_2 = -\frac{1}{D} - U_0^{\prime\prime}\,, \,\,\,
\mbox{in}\,\,\, \Omega\setminus\Omega_a\,; \quad \partial_Y U_2 =
\pm F^{\prime}(X) U_0^{\prime}\,, \,\,\, \mbox{on} \,\,\,
Y = \pm F(X)\,, \,\, |X|<1 \,.
\end{equation}
We multiply this problem for $U_2$ by $U_0$ and integrate in $Y$ over
$|Y|<F(X)$. Upon using Lagrange's identity and the boundary
conditions in \eqref{long:u2} we get
\begin{equation}
\begin{aligned}
\int_{-F(X)}^{F(X)} \left(U_0 \partial_{YY} U_2 - U_2 \partial_{YY} U_0\right)\,
dY &= \left[ U_0 \partial_Y U_2 - U_2 \partial_Y U_0 \right]
\Big{\vert}_{-F(X)}^{F(X)}= 2U_0 F^{\prime}(X) U_0^{\prime} \,, \\
\int_{-F(X)}^{F(X)} U_0 \left( -\frac{1}{D} - U_{0}^{\prime\prime} \right) \, dY
&= -2F(X)U_0\left(\frac{1}{D} + U_0^{\prime\prime}\right) = 2U_0 F^{\prime}(X)
U_0^{\prime}\,.
\end{aligned}
\end{equation}
Thus, $U_0(X)$ satisfies the ODE
$\left[F(X)U_0^{\prime}\right]^{\prime}= -{F(X)/D}$, with
$F(X)=\sqrt{1-X^2}$, as given in \eqref{sec:long_u0} of
\S~\ref{sec:thin}. This gives the leading-order asymptotics
$u\sim \delta^{-2}U_0(X)$.
\section{Limiting Case of the Unit Disk}\label{ell:disk}
We now show how to recover the well-known Neumann Green's function and
its regular part for the unit disk by letting $a\to b =1$ in
\eqref{cell:finz_g} and \eqref{cell:R0}, respectively. In the limit
$\beta\equiv {(a-b)/(a+b)}\to 0$ only the $n=0$ terms in the
infinite sums in \eqref{cell:finz_g} and \eqref{cell:R0} are
non-vanishing. In addition, as $\beta\to 0$, we obtain from
\eqref{ell:coord} that $|\v{x}|^2\sim {f^2 e^{2\xi}/4}$ and
$|\v{x}_0|^2\sim {f^2 e^{2\xi_0}/4}$, and $\xi_b=-\log{f} + \log(a+b)\to
-\log{f}+\log{2}$, where $f\equiv \sqrt{a^2-b^2}$. This yields that
\begin{equation}\label{lim:iden}
\xi + \xi_0 - 2 \xi_b \sim \log\left(\frac{2|\v{x}|}{f}\right) +
\log\left(\frac{2|\v{x}_0|}{f}\right) -2\log{2} + 2\log{f} = \log\left(
|\v{x}||\v{x}_0|\right) \,.
\end{equation}
As such, only the $z_1$ and $z_4$ terms in the infinite sums
in \eqref{cell:finz_g1} with $n=0$ persist as $a\to b=1$, and so
\eqref{cell:finz_g1} reduces in this limit to
\begin{equation}\label{lim:g1}
G(\v{x};\v{x}_0) \sim \frac{1}{4|\Omega|} \left( |\v{x}|^2 + |\v{x}_0|^2\right)
-\frac{3}{8|\Omega|} + \frac{1}{2\pi}\left(\xi_b-\xi_{>}\right) -
\frac{1}{2\pi}\log|1-z_1| - \frac{1}{2\pi}\log|1-z_4| \,,
\end{equation}
where $|\Omega|=\pi$ and $\xi_{>}\equiv\max(\xi_0,\xi)$. Since
$\eta\to\theta$ and $\eta_0\to\theta_0$, where $\theta$ and $\theta_0$
are the polar angles for $\v{x}$ and $\v{x}_0$, we get from
\eqref{cell:def_z} that $z_4\to |\v{x}||\v{x}_0|e^{i(\theta-\theta_0)}$ as
$a\to b=1$. We then calculate that
\begin{equation}\label{lim:z4}
-\frac{1}{2\pi} \log|1-z_4|= -\frac{1}{4\pi} \log|1-z_4|^2=
-\frac{1}{4\pi} \log\left( 1 -2 |\v{x}||\v{x}_0| \cos(\theta-\theta_0)
+ |\v{x}|^2|\v{x}_0|^2 \right) \,.
\end{equation}
Next, with regards to the $z_1$ term we calculate for $a\to b=1$ that
\begin{equation}\label{lim:z1_1}
|\xi-\xi_0| = \begin{cases}
\xi-\xi_0 \sim \log\left( \frac{|\v{x}|}{|\v{x}_0|} \right)\,, &\quad \mbox{if}\,\,
0<|\v{x}_0|<|\v{x}|\,, \\
-(\xi-\xi_0) \sim \log\left( \frac{|\v{x}_0|}{|\v{x}|} \right)\,, &\quad \mbox{if}
\,\, 0<|\v{x}|<|\v{x}_0| \,.
\end{cases}
\end{equation}
From \eqref{cell:def_z} this yields for $a\to b=1$ that
\begin{equation}\label{lim:z1_2}
z_1=e^{-|\xi-\xi_0|+i(\eta-\eta_0)} \sim \begin{cases}
\frac{|\v{x}_0|}{|\v{x}|} e^{i(\theta-\theta_0)} \,, &\quad \mbox{if} \,\,
0<|\v{x}_0|<|\v{x}| \,, \\
\frac{|\v{x}|}{|\v{x}_0|} e^{i(\theta-\theta_0)} \,, &\quad \mbox{if} \,\,
0<|\v{x}|<|\v{x}_0| \,.
\end{cases}
\end{equation}
By using \eqref{lim:z1_2}, we calculate for $a\to b=1$ that
\begin{equation}\label{lim:z1}
-\frac{1}{4\pi} \log|1-z_1|^2 = -\frac{1}{2\pi}\log|\v{x}-\v{x}_0|
+ \begin{cases}
\frac{1}{4\pi} \log|\v{x}|^2 \,,& \,\, \mbox{if } 0<|\v{x}_0|<|\v{x}|\,, \\
\frac{1}{4\pi} \log|\v{x}_0|^2 \,, & \,\, \mbox{if } 0<|\v{x}|<|\v{x}_0|\,.
\end{cases}
\end{equation}
Next, we estimate the remaining term in \eqref{lim:g1} as $a\to b=1$ using
\begin{equation}\label{lim:umin}
\frac{1}{2\pi} \left(\xi_b-\xi_{>}\right) = \begin{cases}
\frac{1}{2\pi}\left(\xi_b-\xi\right) \sim -\frac{1}{2\pi}\log|\v{x}|\,, & \quad \mbox{if }\,\,
|\v{x}|>|\v{x}_0|>0 \,,\\
\frac{1}{2\pi}\left(\xi_b-\xi_0\right) \sim -\frac{1}{2\pi}\log|\v{x}_0|\,, & \quad \mbox{if }\,\,
0<|\v{x}|<|\v{x}_0| \,. \end{cases}
\end{equation}
Finally, by using \eqref{lim:z4}, \eqref{lim:z1}, and \eqref{lim:umin}
into \eqref{lim:g1}, we obtain for $a\to b=1$ that
\begin{equation}\label{lim:gfinal}
\begin{split}
G(\v{x};\v{x}_0) &\sim -\frac{1}{2\pi}\log|\v{x}-\v{x}_0|
-\frac{1}{4\pi} \log\left( 1 -2 |\v{x}||\v{x}_0| \cos(\theta-\theta_0)
+ |\v{x}|^2 |\v{x}_0|^2 \right)\\
& \qquad + \frac{1}{4|\Omega|}\left( |\v{x}|^2 + |\v{x}_0|^2\right)
-\frac{3}{8|\Omega|},
\end{split}
\end{equation}
where $|\Omega|=\pi$. This result agrees with that in \eqref{gr:gm}
for the Neumann Green's function in the unit disk. Similarly, we can
show that the regular part $R_e$ for the ellipse given in
\eqref{cell:R0} tends as $a\to b=1$ to that given in \eqref{gr:rm} for
the unit disk.
\section{Asymptotics of the Fundamental Neumann
Eigenvalue}\label{app:eig_low}
For $\nu\ll 1$, it was shown in \cite{KTW2005}, by using a matched
asymptotic expansion analysis in the limit of small trap radii similar
to that leading to \eqref{e:u0_bar}, that the fundamental Neumann
eigenvalue $\lambda_0$ for \eqref{eig:low} is the smallest positive
root of
\begin{equation}\label{a:eig_helm}
{\mathcal K}(\lambda) \equiv \mbox{det}\left(I + 2\pi \nu {\mathcal G}_H
\right)=0\,.
\end{equation}
Here $\nu=-{1/\log\varepsilon}$ and ${\mathcal G}_H$ is the Helmholtz Green's
matrix with matrix entries
\begin{align}\label{a:GreenMAtrix}
(\mathcal{G}_H)_{jj} = R_{Hj}
\quad \text{and} \quad (\mathcal{G}_H)_{ij} = (\mathcal{G}_H)_{ji} =
G_{H}(\v{x}_i ; \v{x}_j) \,\,\, \text{for} \,\,\, i \neq j \,,
\end{align}
where the Helmholtz Green's function $G_{H}(\v{x};\v{x}_j)$ and its regular
part $R_{Hj}$ satisfy
\begin{subequations}\label{a:GreenFunctionProb}
\begin{align}
\Delta G_H +\lambda G_H &= -\delta(\v{x}-\v{x}_j) \,,\quad \v{x} \in \Omega\,;
\qquad \partial_n G_H =0\,, \,\,\, \v{x} \in \partial \Omega\,;
\label{a:GreenFunctionProb_A}\\
G_H \sim -\frac{1}{2\pi}& \log{|\v{x} - \v{x}_j|} + R_{Hj} + o(1) \,,
\quad \text{as} \quad\v{x} \to \v{x}_j\,. \label{a:GreenFunctionProb_B}
\end{align}
\end{subequations}
For $0<\lambda\ll 1$, we estimate ${\mathcal G}_H$ by expanding
$G_H={A/\lambda} + G +{\mathcal O}(\lambda)$, for some $A$ to be
found. From \eqref{a:GreenFunctionProb}, we derive in terms
of the Neumann Green's matrix ${\mathcal G}$ that
\begin{equation}\label{a:gh_exp}
{\mathcal G}_H = -\frac{m}{\lambda |\Omega|} E + {\mathcal G} +
{\mathcal O}(\lambda) \,, \qquad \mbox{with} \quad E \equiv \frac{1}{m}
\v{e} \v{e}^T \,,
\end{equation}
for $0<\lambda \ll 1$. From \eqref{a:gh_exp} and \eqref{a:eig_helm},
the fundamental Neumann eigenvalue $\lambda_0$ is the smallest
$\lambda>0$ for which there is a nontrivial solution $\v{c}\neq \v{0}$ to
\begin{equation}\label{a:sing_mat}
\left(I -\frac{2\pi \nu m}{\lambda |\Omega|} E + 2\pi \nu {\mathcal G}
+ {\mathcal O}(\nu\lambda) \right) \v{c} =0 \,.
\end{equation}
Since this occurs when $\lambda={\mathcal O}(\nu)$, we define
$\lambda_c>0$ by $\lambda = {2\pi \nu m \lambda_c/|\Omega|}$,
so that \eqref{a:sing_mat} can be written in equivalent form as
\begin{equation}\label{a:math_neum}
E \v{c} = \lambda_c \left( I + 2\pi \nu {\mathcal G} + {\mathcal O}(\nu^2)
\right)\v{c} \,, \qquad \mbox{where} \quad
\lambda = \frac{2\pi \nu m} {|\Omega|} \lambda_c\,.
\end{equation}
Since $E\v{e}=\v{e}$, while $E\v{q}=0$ for any $\v{q}\in \mathbb{R}^{m}$
with $\v{e}^T\v{q}=0$, we conclude for $\nu\ll 1$ that the only
non-zero eigenvalue of \eqref{a:math_neum} satisfies $\lambda_c\sim 1$
with $\v{c}\sim \v{e}$. To determine the correction to this
leading-order result, in \eqref{a:math_neum} we expand
$\lambda_c=1+\nu \lambda_{c1}+\cdots$ and
$\v{c}=\v{e}+\nu \v{c}_{1}+ \cdots$. From collecting
${\mathcal O}(\nu)$ terms in \eqref{a:math_neum}, we get
\begin{equation}\label{a:mat_solve}
\left(I - E \right) \v{c}_{1} = -2\pi {\mathcal G}\v{e} - \lambda_{c1}\v{e}
\,.
\end{equation}
Since $I-E$ is symmetric with the 1-D nullspace $\v{e}$, the solvability
condition for \eqref{a:mat_solve} is that $-2\pi\v{e}^T {\mathcal G}\v{e}-
\lambda_{c1} \v{e}^T\v{e}=0$. Since $\v{e}^T\v{e}=m$, this yields the two-term
expansion
\begin{equation}\label{a:lambda_c1}
\lambda_{c}= 1+\nu \lambda_{c1}+\ldots \,, \qquad \mbox{where}
\quad \lambda_{c1} = -\frac{2\pi}{m} \v{e}^T {\mathcal G}\v{e}\,.
\end{equation}
Finally, using $\lambda = {2\pi \nu m\lambda_c/|\Omega|}$, we
obtain the two-term expansion as given in \eqref{eig:2term}.
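Finally, the two-term expansion \eqref{a:lambda_c1} is easily validated
numerically. In the following standalone Python sketch (the symmetric random
matrix is an arbitrary stand-in for the Neumann Green's matrix
${\mathcal G}$), the largest eigenvalue $\lambda_c$ of \eqref{a:math_neum} is
computed directly and compared with
$1-({2\pi\nu/m})\,\v{e}^T{\mathcal G}\v{e}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, nu = 5, 1.0e-3
G = rng.normal(size=(m, m)); G = 0.5*(G + G.T)  # symmetric stand-in
e = np.ones(m)
E = np.outer(e, e)/m

# Generalized problem E c = lambda_c (I + 2 pi nu G) c, solved directly:
M = np.linalg.solve(np.eye(m) + 2*np.pi*nu*G, E)
lam_c = np.linalg.eigvals(M).real.max()

lam_c_two_term = 1.0 - (2*np.pi*nu/m)*(e @ G @ e)
print(lam_c, lam_c_two_term)   # agree up to O(nu^2)
\end{verbatim}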
\end{appendix}
\bibliographystyle{plain}
Radio galaxies (RGs) are found from the cores to the extremities of galaxy clusters \citep[e.g.,][]{kale15,padovani16,Garon19}. Cluster RGs frequently appear significantly distorted from simple, bilateral, axial symmetry \citep[e.g.,][]{deGregory17,Garon19}, revealing non-axisymmetric environmental impacts. Sometimes the distortions can be attributed to galaxy motions relative to the cluster center. But, perhaps more revealing about cluster physics, many distortions are likely to reflect large-scale ICM flows and shocks; i.e., ``ICM weather'' related to cluster formation and evolution \citep[e.g.,][]{Bonafede14,Owen14,Shimwell14,vanWeeren17,Mandal18,WilberNov18}.
In order to improve understanding of the physics of these behaviors and associated observables, we have undertaken a broad-based study, primarily through simulations, but also including analytic modeling, analyzing dynamical RG-ICM interactions involving both steady winds through the life of the RG \citep[][]{jones16,ONeill19a} and shock impact on an existing RG \citep[][]{jones16,nolting19a,ONeill19b}. Most directly related to the present report, \cite{nolting19a} studied through simulations the interactions between cluster merger-strength ICM shocks and RG formed in a static medium when the incident shock normals are aligned with the axis of jets responsible for creating the RG. Here we consider the analogous interactions when the shock normals are orthogonal to the RG jet flows. \cite{nolting19a} pointed out that the evolution of a RG in response to a shock encounter has two successive components. The first component is associated with the abrupt change of conditions across the shock discontinuity, while the second component is a prolonged interaction with a post-shock wind whose properties are determined by the shock jump conditions. We will see in the present study that the same basic dynamical elements apply, independent of shock-RG orientation. However, some signature outcomes are sensitive to orientation. We also point to the \cite{ONeill19a} work analyzing in detail the evolution of and emission from steady jets in a steady, orthogonal wind to form classical ``narrow angle tail'' (NAT) RG morphology.
\cite{nolting19a} confirmed earlier studies demonstrating that shock impact on a low density cavity, such as a RG lobe, can transform the cavity into a ``doughnut-like'' ring vortex. This topological transformation, the most distinctive feature of a shock encounter with a lobed RG, results from shear induced by the enhanced post shock speed inside the lobe \citep[e.g.,][]{EnsslinBruggen02, PfrommerJones11}. In laboratory settings, shocks in air striking helium bubbles have, for example, created analogous vortex rings \citep[e.g.,][]{Ranjan08}.
In the astrophysical context, rings of diffuse radio emission possibly related to shocked RG plasma have been discovered in, for example, Abell 2256 \citep{Owen14} and the Perseus cluster \citep{SibringdeBruyn98}. A distinct scenario related to this physics is the so-called ``radio phoenix.'' In the radio phoenix model, aged cosmic ray electron (CRe) populations from expired AGN activity are overrun by an ICM shock wave \citep{Ensslin01, EnsslinBruggen02} and reaccelerated primarily by adiabatic compression to become luminous once again. Such objects could have complex morphologies as well as strongly curved, steep radio spectra \citep{vanWeeren19}. If, at the other extreme, the RG jets remain active through a shock encounter, and so interact with the post-shock wind, RG-shock dynamics are considerably enriched, as already noted in \cite{nolting19a} for aligned shock-jet geometry and in \cite{ONeill19b} for shocked tailed RG. On the other hand, key signature behaviors that might be used to identify encounters generally and to constrain the conditions involved are yet to be established. Our further efforts aim to help fill that gap.
The remainder of this paper is organized as follows: Section \ref{sec:interactcartoon} outlines the underlying physics of the shock-RG encounter (\S \ref{subsec:cavities}), including vortex ring formation (\S \ref{subsec:VortRings}), and subsequent wind--jet interactions (\S \ref{subsec:bending}) when the wind velocity is transverse to ongoing jet flows. Section \ref{sec:methods} describes our simulation specifics, including numerical methods (\S \ref{subsec:numerics}) and details of our simulation setups (\S \ref{subsec:Setup}). In section \ref{sec:Discussion} we discuss the results of the simulations, while section \ref{sec:Summary} provides a brief summary of our findings.
\section{Outline of Orthogonal Shock--RG Interaction Dynamics}
\label{sec:interactcartoon}
The geometry of the problem we explore in this paper is illustrated in figure \ref{fig:orth-setup}. Specifically, a RG initially evolves in a homogeneous, stationary ICM prior to a shock encounter. The RG is formed, beginning at $t = 0$, by a pair of steady, oppositely directed jets that are identical except for the sign of the jet velocity. In the figure those jets are vertical. A plane shock whose normal is orthogonal to the jet axes first contacts the RG lobes at a time $t_i>0$ (from the left in the figure). Depending on the simulation, the jets may or may not remain active through the encounter. In one case jet activity is terminated long before the shock encounter to mimic a radio phoenix scenario.
\begin{figure*}
\centering
\includegraphics[scale=0.7]{OrthSetup.pdf}
\caption{Basic geometry of the orthogonal shock--RG encounter.}
\label{fig:orth-setup}
\end{figure*}
To describe the basic shock-RG encounter mechanics we need to specify several ICM, shock and RG properties and their relationships. In what follows, properties associated with the unshocked ICM are identified by subscripts, `i', while properties of the post shock ICM wind are marked by `w'. Properties of the RG cavities (= lobes) are identified by `c'. RG jet properties are designated by `j'. Where it is important to distinguish jet or cavity properties within the unshocked ICM from those same jet properties within the post shock wind, it is convenient to apply the distinct, hybrid labels, `ji' and `jw', or `ci' and `cw'. It may also be useful up front to clarify that a feature or property is ``upwind'' of some second structure at a given time if an encounter between the two structures will occur in the future. Thus, in the current context, unshocked ICM material is upwind of the ICM shock, so in figure \ref{fig:orth-setup} to the right of the shock. Similarly, a vector in the post shock flow pointing ``upwind'' would point left in figure \ref{fig:orth-setup}.
We begin our outline with a characterization of the ICM shock transition. For this we need the incident shock Mach number, $\mathcal{M}_{si}$, along with the unshocked ICM density, $\rho_i$ and sound speed, $a_i$; that is, $\mathcal{M}_{si} = v_{si}/a_i$. The unshocked ICM pressure (assuming an adiabatic index, $\gamma = 5/3$) is, $P_i = (3/5)~\rho_i a_i^2$. Standard shock jump conditions give us properties of the post shock ICM wind; namely,
\begin{align}
\label{eq:jump-d}
\rho_w = \frac{4\mathcal{M}_{si}^2}{\mathcal{M}_{si}^2+3}\rho_i,\\
\label{eq:jump-p}
P_w = \frac{5\mathcal{M}_{si}^2-1}{4}P_i,\\
\label{eq:jump-v}
|v_w| = \frac{3}{4}\frac{\mathcal{M}_{si}^2-1}{\mathcal{M}_{si}}a_{i},\\
\label{eq:jump-a}
a_w = \frac{\sqrt{(\mathcal{M}_{si}^2 + 3)(5 \mathcal{M}_{si}^2 - 1)}}{4 \mathcal{M}_{si}} a_i,\\
\label{eq:windMach}
|\mathcal{M}_w| = \frac{|v_w|}{a_w} = 3 \frac{\mathcal{M}_{si}^2 - 1}{\sqrt{(\mathcal{M}_{si}^2 + 3)(5\mathcal{M}_{si}^2 - 1)}},
\end{align}
where the wind velocity, $v_w$, is measured in the frame of the unshocked ICM. Since our scenario involves a RG initially developing in a static ICM, we henceforth, unless otherwise stated, refer all velocities to the rest frame of the unshocked ICM (= the rest frame of the AGN/RG). In this study we carried out simulations involving two ICM shock strengths. Specifically, we considered $\mathcal{M}_{si} = 4$, for which $\rho_w/\rho_i = 3.37$, $P_w/P_i = 19.75$, $|v_w|/a_i = 2.81$, $a_w/a_i = 2.42$, and $|\mathcal{M}_w| = 1.16$. For comparison, we also simulated one case with a weaker $\mathcal{M}_{si} = 2$ shock, leading to $\rho_w/\rho_i = 2.29$, $P_w/P_i = 4.75$, $|v_w|/a_i = 1.13$, $a_w/a_i = 1.44$, and $|\mathcal{M}_w| = 0.78$. All our simulations reported here involve pre-shock ICM conditions with $\rho_i = 5\times 10^{-27}~\rm{g/cm^3}$, $P_i = 1.33\times 10^{-11}~\rm{dyne/cm^2}$ and $a_i = 667~\rm{km/sec}$.
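For reference, the post shock states just quoted follow directly from equations \ref{eq:jump-d}--\ref{eq:windMach}. A minimal Python sketch that reproduces them is:
\begin{verbatim}
import numpy as np

def postshock(M_si):
    """Rankine-Hugoniot jumps for gamma = 5/3 (eqs. jump-d -- windMach).

    Returns (rho_w/rho_i, P_w/P_i, v_w/a_i, a_w/a_i, M_w), with the
    wind velocity measured in the rest frame of the unshocked ICM."""
    rho = 4*M_si**2/(M_si**2 + 3)
    P = (5*M_si**2 - 1)/4
    v = 0.75*(M_si**2 - 1)/M_si
    aw = np.sqrt((M_si**2 + 3)*(5*M_si**2 - 1))/(4*M_si)
    return rho, P, v, aw, v/aw

for M_si in (4.0, 2.0):
    print(M_si, ["%.2f" % q for q in postshock(M_si)])
# M_si = 4: 3.37, 19.75, 2.81, 2.42, 1.16
# M_si = 2: 2.29,  4.75, 1.13, 1.44, 0.78
\end{verbatim}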
\subsection{Shock--Lobe Collisions}
\label{subsec:cavities}
Shock and post shock flow behaviors inside the RG cavities (lobes) are largely consequences of the large density contrast between the ICM and the cavities. Thus, to characterize this interaction we should specify $\rho_c$. For light jets, as in our simulated scenarios, we expect $\rho_c \la \rho_j \ll \rho_i$. Specifically, here we have used $\rho_j = 10^{-2} \rho_i$, and, indeed we find pre-shock cavity conditions with $\rho_c \la 10^{-2} \rho_i$. Such cavities generally reach at least rough pressure balance with their surroundings, and in our simulations we find $P_c \sim P_i$ before shock impact. Consequently, before shock impact $a_c \ga 10 a_i$.
A simple outline of shock-lobe interaction can be constructed from the fact that $a_c \gg a_i$. Detailed discussions can be found in \cite{PfrommerJones11} and references therein. Since the speed of the shock inside the cavity must satisfy $v_{sc} > a_c \gg a_i$, while in the scenario under discussion, $v_{si} = \mathcal{M}_{si} a_i \la\rm{a~few}~a_i$, the shock propagates more rapidly inside the cavity than in the surrounding ICM. Because the cavity is much hotter than the ICM, so that $a_c \gg a_w$, the internal shock is considerably weaker than the incident shock; that is, $\mathcal{M}_{sc} = v_{sc}/a_c \ll \mathcal{M}_{si}$. Somewhat rarefied post shock ICM (wind) plasma, separated from cavity plasma by a contact discontinuity (CD), fills the cavity behind the shock at speeds $v_{CD} > v_w$. In the end, the cavity is crushed by this penetration. Coincidentally, the fast post shock penetration of ICM inside the cavity generates strong shear along the original cavity boundary. The result of these two developments is a topological transformation of the original cavity into a vortex ring whose axis aligns with the original shock normal. In the scenarios being examined here, there are two RG lobes being similarly transformed simultaneously. Thus, immediately after shock passage through the RG lobes there are two similar, coplanar vortex rings.
The simplicity of this outcome contrasts significantly with the outcome when the AGN jets and the shock normal align (or nearly align) as discussed in \cite{nolting19a}. For the latter geometry ICM-lobe encounters are sequential, rather than simultaneous. So, although vortex ring structures do develop, the flows, especially within the second, downwind lobe, are much more complicated than in the scenario outlined here. The events simulated in \cite{nolting19a} also all included continued active jets that were aligned (or nearly aligned) with the incident shock normal, which contributed further, distinctive behaviors to the dynamical evolution.
\subsection{Vortex Ring Dynamics}
\label{subsec:VortRings}
We return briefly to a basic discussion of what happens to the pair of vortex rings that emerge from the shock encounters under study in the present work. The full dynamics of vortex rings has been studied in depth analytically, in laboratory settings, and also numerically. Some useful and simple insights into the current situation come from such studies. In particular, a vortex line, or `filament,' can be shown to induce an associated velocity field in a relationship analogous to the Biot-Savart law of electromagnetism connecting a line of current to the encircling magnetic field. Specifically, a straight vortex line of infinite length and circulation, $\Gamma$, induces a velocity, $\delta v$, at a distance d given by
\begin{equation}
\delta v = \frac{\Gamma}{2\pi d}.
\label{eq:inducedVel}
\end{equation}
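Equation \ref{eq:inducedVel} is just the infinite, straight-filament limit of the Biot-Savart integral, $\mathbf{v}(\mathbf{r}) = (\Gamma/4\pi)\oint d\mathbf{l}^{\prime}\times(\mathbf{r}-\mathbf{r}^{\prime})/|\mathbf{r}-\mathbf{r}^{\prime}|^3$. The Python sketch below (arbitrary $\Gamma$ and $d$; the filament half-length and resolution are ad hoc) recovers it from a discretized filament:
\begin{verbatim}
import numpy as np

Gamma, d = 1.0, 0.5          # circulation and field-point distance (arbitrary)
L, N = 200.0, 100001         # half-length and resolution of the filament

z = np.linspace(-L, L, N)    # straight filament along the z-axis
dz = z[1] - z[0]
r = np.array([d, 0.0, 0.0])  # field point at distance d from the filament

dl = np.zeros((N, 3)); dl[:, 2] = dz   # directed line elements
rp = np.zeros((N, 3)); rp[:, 2] = z    # element positions
sep = r - rp
v = (Gamma/(4*np.pi))*np.sum(np.cross(dl, sep)
                             / np.linalg.norm(sep, axis=1)[:, None]**3, axis=0)

print(np.linalg.norm(v), Gamma/(2*np.pi*d))  # numeric vs analytic speed
\end{verbatim}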
Conceptually, a vortex ring can be pictured as a vortex line connecting to itself, with opposite sides of the ring represented as counter-rotating vortices. The electromagnetic analogy is a current loop, of course. Such counter-rotating vortices induce modifications in each other by equation \ref{eq:inducedVel} that project the vortex ring forward along its symmetry axis \citep[see, e.g.,][]{Leweke16}. When a vortex ring or filament is not circular, but possesses nonuniform curvature, these induction effects drive geometry changes. Where the curvature is highest, the induction effect is strongest. For instance, \cite{Hama62} showed that an initially parabolic vortex filament will result in a larger induced velocity at the vertex, causing it to lead the rest of the filament, which in turn alters the direction of the induced velocity at that point. The structure becomes three dimensional and the vertex acquires a vertical component in its induced velocity. In addition to self-inducing a forward velocity along its axis, a propagating vortex ring is prone to entraining material from the surrounding medium, eventually slowing its propagation through the background medium (the post shock wind, in the present case) \citep{Maxworthy72}.
The same relationship leads multiple vortex rings to induce motions in each other. If two similar vortex rings propagate along parallel axes, as in this study, adjacent elements are counter-rotating vortices. But, the induced motion from this pair will be opposite to the induced motions from the top and bottom of a single vortex ring. This leads to a slowing of the motion of both vortex rings, with the slowing effect greatest at their nearest approach. This effectively attracts and tilts the rings towards each other. Lab experiments have verified this, demonstrating, as well, that ring pairs merge as the near edges touch. Since the vorticity in each ring at their nearest points is opposite, the net vorticity there vanishes, leading to a ``vortex reconnection event'' \citep{Oshima77}. Thus, the pair of vortex rings created by shock passage in our present scenario evolves into a single vortex ring roughly spanning the full extent of both RG lobes.
Finally, we point out that the vortex ring structures under discussion, once formed, are essentially isolated from the AGN itself, unless they come in contact with active jets. (This does not actually happen in our one simulation with sustained jet activity, $\bf{M_s4J}$ in Table \ref{table:tab1}, although with somewhat different jet dynamics, it could). The presence or absence of this interaction obviously impacts the evolution of the jets and their behaviors as synchrotron sources.
\subsection{Jet Propagation in the Post Shock Crosswind}
\label{subsec:bending}
If the RG jets remain active through the shock encounter (true in one of our simulations, $\bf{M_s4J}$), the post shock wind in the geometry under investigation induces a ram pressure-based force across each jet ($\sim \rho_w v_w^2/r_j$, with $r_j$ the jet radius) that deflects the jet's trajectory transversely. \cite{ONeill19a} examined in some detail jet trajectories for arbitrary relative orientations between the undisturbed jets and winds. So long as the jets are internally supersonic, the trajectories of steady jets can be expressed over a broad range of initial orientations with respect to a cross-wind in terms of a characteristic bending length, $\ell_b$, derived decades ago in the context of so-called ``narrow angle tail'' RG (NATs) \citep{BegelmanReesBlandford,JonesOwen79}. In our present context the relation is
\begin{equation}
\ell_b= \frac{\rho_jv_j^2}{\rho_wv_w^2}~r_j.
\label{eq:ellb}
\end{equation}
\cite{ONeill19a} showed that long term jet/tail trajectories in steady winds are well-described as swept back tails with transverse displacements from their launch points of several $\ell_b$. In our simulation $\bf{M_s4J}$, $\ell_b \approx 4 r_j \sim 12$ kpc. The $\sim 40$ kpc lateral displacements for the jets from their launch points visible at late times in figure \ref{fig:orth3-PS-RHO} are consistent with this simple model, since the actual jet trajectories tend to be wider than the simple $\ell_b$ metric \citep[e.g.,][]{ONeill19a}.
We note that, so long as these jet trajectories do not intersect the vortex ring, the jets have no significant dynamical influence on the vortex ring, nor do they feed CRe or magnetic flux into the ring. We also note that the response of jets to the transverse winds encountered in this study is quite distinct from the response of a jet to a head or tail wind, as in the \cite{nolting19a} study \citep[see, also][and references therein]{jones16}.
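For concreteness, evaluating equation \ref{eq:ellb} with the $\bf{M_s4J}$ parameters collected in Table \ref{table:tab1} and the $\mathcal{M}_{si} = 4$ post shock wind makes the $\ell_b \approx 4 r_j$ estimate explicit; the sketch below simply restates those numbers:
\begin{verbatim}
a_i   = 667.0          # unshocked ICM sound speed [km/s]
r_j   = 3.0            # jet radius [kpc]
rho_j = 1.0e-2         # jet density in units of rho_i
v_j   = 6.7e4          # jet speed [km/s]

# M_si = 4 post shock wind, from the jump conditions of section 2
rho_w = 3.37           # in units of rho_i
v_w   = 2.81*a_i       # [km/s]

ell_b = (rho_j*v_j**2)/(rho_w*v_w**2)*r_j
print(ell_b/r_j, ell_b)   # ~3.8 r_j, i.e. ~11 kpc, consistent with ~12 kpc
\end{verbatim}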
\section{Simulation Specifics}
\label{sec:methods}
\subsection{Numerical Methods}
\label{subsec:numerics}
The simulations reported here used the Eulerian WOMBAT ideal 3D nonrelativistic MHD code described in \cite{PeteThesis} on a uniform, Cartesian grid employing an adiabatic equation of state with $\gamma = 5/3$. The simulations utilized the 2$^{nd}$ order TVD algorithm with constrained transport (CT) magnetic field evolution as in \cite{Ryu98}. Specific simulation setups are introduced in \S \ref{subsec:Setup} and listed in Table \ref{table:tab1}. While the AGN-launched jets in our simulations were magnetized as outlined below, the undisturbed ICM media in the simulations presented here were unmagnetized, allowing us to focus more directly on AGN-associated behaviors.
Bipolar jets in the simulations were created beginning at $t = 0$ within a ``jet launch cylinder'' of radius, $r_j$ and length $l_j$ within which a plasma of uniform density, $\rho_j$, and gas pressure, $P_j$ (so sound speed, $a_j = \sqrt{\gamma P_j/\rho_j}$), was maintained. A toroidal magnetic field, $B_{\phi} = B_0 (r/r_j)\hat{\phi}$ was also maintained within the jet launch cylinder. A characteristic ``plasma $\beta$'' parameter for the jets, reflecting the relative dynamical role of the jet magnetic field, is $\beta_{pj} = 8\pi P_j/B_0^2 = 75$ in the jets considered in this work. Thus, the magnetic pressures are subdominant to the gas pressure at the jet source. Aligned jet flows emerged from each end of the launch cylinder with velocity, $v_j$, along the cylinder axis, so with internal Mach number $\mathcal{M}_j = v_j/a_j$. The jet velocity, $v_j$, also changed sign midway along the cylinder length producing the bipolar jet symmetry. The launch cylinder was surrounded by a 2 zone, coaxial collar, within which properties transitioned to local ambient conditions. Jets were steady until a simulation-dependent time, $t_{j,off}$, after which they were cycled off.
Passive cosmic ray electrons (CRe) were injected into the simulations within the launched jets to enable computation of synthetic radio synchrotron emission properties of the simulated objects\footnote{Except for a negligible ICM population included to avoid numerical singularities in the CRe transport algorithm, all CRe were injected onto the computational domain via the jet launch cylinder.}. The CRe momentum distribution, $f(p)$, was tracked using the conservative, Eulerian ``coarse grained momentum volume transport'' CGMV algorithm in \cite{JonesKang05}. $f(p)$ spanned the range $10 \la p/(m_e c)\approx \Gamma_e\la 1.7\times 10^5$ (so, energies 5 MeV $\la E_{CRe} \approx \Gamma_e m_e c^2 \la$ 90 GeV) with uniform logarithmic momentum bins, $1\le k\le 8$. Inside a given momentum bin, $k$, $f(p) \propto p^{-q_{k}}$, with $q_k$ being bin dependent and evolving in time and space. $\Gamma_e$ represents CRe Lorentz factors.
At injection from the AGN source (= the jet launch cylinder), the CRe momentum distribution was a power law with $q = q_0 = 4.2$, over the full momentum range. This translates into a synchrotron spectral index, $\alpha = \alpha_0 = 0.6$ ($I_{\nu} \propto \nu^{-\alpha}$) using the conventional synchrotron-CRe spectral relation for extended power laws. The synchrotron emission, including spectra, reported here is computed numerically using $f(p)$ over the full momentum range specified above along with the standard synchrotron emissivity kernel for isotropic electrons in a local vector magnetic field $\vec{B}$ \citep[e.g.,][]{BlumenthalGould70B}. For our analysis below we calculated synthetic synchrotron emission at frequencies $150$ MHz $\leq \nu$ $\la 1$ GHz. This emission, as it turns out, comes predominantly from regions with magnetic field strengths $\sim 1 \rightarrow\rm{few}~\mu$G, so mostly reflects CRe energies $\ga$ a few GeV ($\Gamma_e \sim 10^4$) (well inside our distribution).
We included adiabatic, as well as radiative (synchrotron and inverse Compton) CRe energy changes outside of shocks, along with test-particle diffusive shock (re)acceleration (DSA) at any shocks encountered. We did not include $2^{nd}$ order turbulent CRe reacceleration or CRe energy losses from Coulomb collisions with ambient plasma. The former depends on uncertain kinetic scale turbulence behaviors beyond the scope of this study, while the latter is most relevant for CRe with energies well below those responsible for the radio synchrotron emission computed in this work \citep[e.g.,][]{nolting19a}. CRe radiative losses combine synchrotron with inverse Compton (iC) scattered CMB radiation. The simulations reported here assumed a redshift, $z = 0.2$. The resulting radiative lifetime can be written
\begin{equation}
\tau_{rad} \approx 110 \frac{1}{\Gamma_{e4}\left[1+ B_{4.7}^2\right]}~\rm{Myr},
\end{equation}
where $\Gamma_{e4} = \Gamma_e/10^4$ and $B_{4.7} = B/(4.7\mu\rm{G})$. The first term in the denominator on the RHS reflects inverse Compton (iC) losses at z = 0.2, while the second represents synchrotron losses. Thus, we can see that for $\Gamma_e \sim 10^4$, of primary interest for the radio emission in this work, $\tau_{rad} \sim 100$ Myr, and that iC losses are predominant.
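The coefficients in this expression follow from the standard combined synchrotron and iC loss rate, $-dE/dt = (4/3)\sigma_T c \Gamma_e^2 (u_B + u_{CMB})$. The short Python sketch below (cgs constants hard-coded; the iC-only case $B = 0$ is shown) reproduces both the $\approx 110$ Myr normalization and the $4.7~\mu$G CMB-equivalent field at $z = 0.2$:
\begin{verbatim}
import numpy as np

m_e, c, sigma_T = 9.109e-28, 2.998e10, 6.652e-25   # cgs
a_rad = 7.566e-15          # radiation constant [erg cm^-3 K^-4]
Myr = 3.156e13             # seconds per Myr

z, Gamma_e, B = 0.2, 1.0e4, 0.0                    # B in Gauss
u_cmb = a_rad*(2.725*(1 + z))**4                   # CMB energy density at z
u_B = B**2/(8*np.pi)

# tau = E/(dE/dt), with dE/dt = (4/3) sigma_T c Gamma_e^2 (u_B + u_cmb)
tau = 3*m_e*c/(4*sigma_T*Gamma_e*(u_B + u_cmb))
print(tau/Myr)                     # ~110 Myr
print(np.sqrt(8*np.pi*u_cmb)*1e6)  # ~4.7 microGauss, i.e. B_{4.7}
\end{verbatim}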
DSA of the CRe was implemented at shock passage by setting $q_{k,out} = \min(q_{k,in},3\sigma /(\sigma - 1))$ immediately post-shock, where $\sigma$ is the code-evaluated compression ratio of the shock.
This simple treatment is appropriate in the CRe energy range covered, since the likely DSA acceleration times to those energies are much shorter than a typical time step in the simulations ($\Delta t \ga 10^4$ yr). Since our CRe have no dynamical impact, we treat the total CRe number density, $n_{CRe}$, as arbitrary. Consequently, while we compute meaningful synchrotron brightness, polarization and spectral distributions from our simulations, synchrotron intensity normalizations are arbitrary.
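The update rule amounts to the following one-line function (our sketch of the stated prescription):
\begin{verbatim}
def dsa_update(q_in, sigma):
    """Test-particle DSA slope update applied at shock passage: keep the
    incoming slope if it is already flatter than the canonical DSA slope
    3*sigma/(sigma - 1) for code-evaluated compression ratio sigma."""
    return min(q_in, 3.0 * sigma / (sigma - 1.0))

# e.g., a strong (sigma = 4) shock drives spectra toward q = 4 (alpha = 0.5),
# while a sigma = 2 shock cannot flatten anything below q = 6.
\end{verbatim}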
\subsection{Simulation Setups}
\label{subsec:Setup}
For this study we carried out four 3D MHD simulations (labeled $\bf{M_s4J}$, $\bf{M_s4}$, $\bf{M_s4Ph}$ and $\bf{M_s2Ph}$) of plane ICM shock impacts on symmetric, double-lobed RG formed prior to shock impact by light, bipolar AGN jets within a homogeneous, unmagnetized medium (see Table \ref{table:tab1}). While both the homogeneity and the lack of fields are significant simplifications from real cluster environments, we make these choices to simplify the interpretation of the outcomes of our simulations. Homogeneity of the medium helps isolate the dynamical effects of the particular interactions under study, without the influence of buoyancy effects and other nonuniformities. The absence of magnetic fields other than those introduced by the jets helps us understand the synchrotron emission we observe and how the jet fields evolve, without ambiguity over how ICM fields would interact with the jet fields or contribute to the synchrotron emission. Dynamical studies in realistic, magnetized clusters with pressure and density profiles and a static gravitational potential (not present in these simulations) are important and left to future work.
In each simulation, the incident ICM shock was oriented with its normal orthogonal to the symmetry axis of the RG (so orthogonal to the axis of the AGN jets that made the RG). The incident shock either had Mach number, $\mathcal{M}_{si} = 4$, reflected in the simulation label as $\bf{M_s4}$, or, in one case, Mach number $\mathcal{M}_{si} = 2$, reflected in the label as $\bf{M_s2}$.
\begin{deluxetable}{ccccccccc}
\tabletypesize{\footnotesize}
\tablewidth{0pt}
\tablecaption{Simulation Specifics}
\tablehead{
\colhead{Run} & \colhead{$M_{si}$} & \colhead{$P_{w}/P_i$} & \colhead{$v_{w}$} &\colhead{$x_{domain}$} &\colhead{$y_{domain}$} & \colhead{$z_{domain}$} & \colhead{$x_{jc}$} & \colhead{$t_{j,off}$} \\
\colhead{} & \colhead{} & \colhead{} & \colhead{($10^3$ km/sec)} & \colhead{kpc} & \colhead{kpc} & \colhead{(kpc) } & \colhead{(kpc)} & \colhead{(Myr)}
}
\startdata
$\bf{M_s4J}$ & 4.0 & 19.8 & 1.88 & $\pm$ 320 & $\pm$ 240 & $\pm$ 240 & -57 & N/A\\
$\bf{M_s4}$ & 4.0 & 19.8& 1.88 & $\pm$ 320 & $\pm$ 240 & $\pm$ 240 & -57 & 32\\
$\bf{M_s4Ph}$ & 4.0 & 19.8 & 1.88 & $\pm$ 208 & $\pm$ 240& $\pm$ 240 & -32 & 16\\
$\bf{M_s2Ph}$ & 2.0 & 4.75 & 0.75 & $\pm$ 160 & $\pm$ 240& $\pm$ 240 & -16 & 16\\
\enddata
\tablecomments{All simulations had: $\rho_i = 5\times 10^{-27}~\rm{g/cm^3}$, $P_i = 1.33\times 10^{-11}~\rm{dyne/cm^2}$, $a_i = 6.7\times 10^2~\rm{km/sec}$, $\rho_j = 10^{-2}\rho_i$, $P_{j} = P_i$, $a_{j} = 10 a_i$, $v_j = 6.7\times 10^4~\rm{km/sec}$, $\mathcal{M}_j = 10$, $B_0 = 2.1~\mu$G, $\beta_{pj} = (8\pi P_{j})/B_0^2 = 75$, $r_j = 3~\rm{kpc}$, $l_j = 12$ kpc. All simulations employed uniform spatial grids with $\Delta x = \Delta y = \Delta z = 0.5$ kpc.}
\label{table:tab1}
\end{deluxetable}
In one simulation, ${\bf{M_s4J}}$, the AGN jets remained steady throughout the simulation in order to explore dynamical relationships between the shock-induced vortex ring structures and the jets as they become deflected in the post shock wind, as well as to compare the relative synchrotron evolutions of the two dynamical components of the shocked RG. In addition, this allows us to look for distinctions between jet behaviors in this orthogonal shock context and the simple, steady cross wind studied in \cite{ONeill19a}. Simulation ${\bf{M_s4}}$ was identical to ${\bf{M_s4J}}$ except that AGN jet activity ceased shortly after the shock first came into contact with the RG lobes (so no $\bf{J}$ in the simulation label). Since the RG prior to the shock interaction is identical in simulations ${\bf{M_s4J}}$ and ${\bf{M_s4}}$, we can look explicitly at the roles of the jets in the post shock evolution of ${\bf{M_s4J}}$.
The other two simulations, ${\bf{M_s4Ph}}$ and ${\bf{M_s2Ph}}$, designed to simulate so-called ``radio Phoenix'' sources \citep[e.g.,][]{Ensslin01,Kempner04} (motivating the $\bf{Ph}$ in their labels), deactivated the AGN jets 89 Myr prior to first shock contact, with the RG evolving passively in the interim. Recall from \S \ref{subsec:numerics} that 110 Myr represents a rough timescale for radiative energy losses by radio-bright CRe, so the CRe populations in those two simulations are significantly aged at shock impact. The only significant difference between the ${\bf{M_s4Ph}}$ and ${\bf{M_s2Ph}}$ simulations is the strength of the incident ICM shock.
In all the simulations the shock normal is along the $\hat{x}$ axis, so that $\vec{v}_w = v_w \hat{x}$. The jet launch cylinder is aligned to the $\hat{y}$ axis, with the center of the launch cylinder at rest with coordinates ($x_{jc}, 0, 0$), so centered in the y-z plane. As already noted, all the simulations involve pre shock ICM conditions with $\rho_i = 5\times 10^{-27}~\rm{g/cm^3}$, $P_i = 1.33\times 10^{-11}~\rm{dyne/cm^2}$, and with jet properties at launch, $\rho_j = 10^{-2} \rho_i$, $P_j = P_i$. The jets all had internal Mach numbers at launch, $\mathcal{M}_{ji} = 10$, so $v_j = 6.7\times 10^4~\rm{km/sec}$.
Table \ref{table:tab1} provides a summary of the remaining key properties of each simulation. The first four table columns list the simulation label, the strength of the incident shock, $\mathcal{M}_{si}$, the resulting pressure jump across the incident shock and the post shock wind velocity. The dimensions of the computational domain are listed for each simulation in columns 5--7, while $x_{jc}$ for each AGN jet is given in column 8. The final column lists the time during the simulated events when the jet launching is cycled `off,' or deactivated, $t_{j,off}$. In simulations $\bf{M_s4J}$ and $\bf{M_s4}$, first shock contact with the RG lobes takes place at $t = 19$ Myr, while in simulations $\bf{M_s4Ph}$ and $\bf{M_s2Ph}$ first shock contact takes place at $t = 105$ Myr. Again, in both the $\bf{M_s4J}$ and $\bf{M_s4}$ simulations AGN activity was steadily building the RG until at least 13 Myr after first shock contact, while in the $\bf{M_s4Ph}$ and $\bf{M_s2Ph}$ simulations, jet activity ceased 89 Myr before any shock contact, leaving the RG to evolve passively during that interval.
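The tabulated pressure jumps and wind speeds follow directly from the Rankine--Hugoniot conditions with $\gamma = 5/3$; a quick check (ours):
\begin{verbatim}
# Rankine-Hugoniot check of the table's pressure jumps and post-shock
# wind speeds, for the pre-shock sound speed a_i = 670 km/s.
g = 5.0 / 3.0
a_i = 670.0  # km/s

def jump(mach):
    p_ratio = (2 * g * mach**2 - (g - 1)) / (g + 1)
    v_wind = 2 * a_i * (mach**2 - 1) / ((g + 1) * mach)  # lab-frame wind
    return p_ratio, v_wind

print(jump(4.0))  # (19.75, ~1884 km/s) -> table's 19.8, 1.88e3 km/s
print(jump(2.0))  # (4.75,  ~754 km/s)  -> table's 4.75, 0.75e3 km/s
\end{verbatim}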
\section{Discussion}
\label{sec:Discussion}
We now examine and compare the four simulations from Table \ref{table:tab1}. All four of the simulations involved AGN jets with Mach number $\mathcal{M}_{ji} = 10$, jet mass density, $\rho_j = 10^{-2}\rho_i$, and characteristic magnetic field strength, $B_0 = 2.1~\mu$G. Each simulation involved an external ICM shock running over the structures generated by the RG jets, with three of the simulations having an ICM shock of Mach $\mathcal{M}_{si}=4$, and the $\bf{M_s2Ph}$ simulation having $\mathcal{M}_{si}=2$.
The simulations divide into two ``pairs,'' based on their properties and the motivations behind them. The $\bf{M_s4J}$ and $\bf{M_s4}$ simulations both involve Mach 4 ICM shock impact on lobed RG that had active AGN input at least until shock impact. They differ in whether the AGN jets remained active throughout the simulation or were deactivated during shock impact on the RG. This difference allows us to explore the influence of the post shock jet flows on both the dynamics and observable emission of the post shock RG. In both cases the AGN activity means that CRe in the interaction are relatively fresh at least up to the time of shock impact.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth3-PS-RHO-withlabels.pdf}
\caption{Volume renderings of the $\bf{M_s4J}$ simulation at four times, increasing top to bottom. The shock normal and jet axis are in the viewing plane. Shock impact on the RG begins soon after the top snapshot. Left: Jet mass fraction ($>30$\% visible). The location of the shock is outlined in dashed gray lines; Right: Log mass density spanning 3 decades in $\rho$, with key dynamical structures highlighted, including the ICM shock. Colors in all images follow the ``CubeYF'' colormap with ``yellow'' high and ``purple'' low. Images are rendered from a distance of 857 kpc from the RG.}
\label{fig:orth3-PS-RHO}
\end{figure*}
In contrast, the $\bf{M_s4Ph}$ and $\bf{M_s2Ph}$ simulations begin with a relatively short period of AGN jet activity (16 Myr), but then the AGN jets deactivate and the RG lobe plasma is allowed to relax for 89 Myr before shock impact. Of course, the CRe inside the RG lobes cool radiatively (and to a small degree adiabatically) in the interim. The intent was to investigate the ``radio phoenix'' scenario, in which fossil plasma from expired AGN is reactivated via ICM shocks. These two simulations differ only in the strength of the ICM shock incident on the lobes, with $\mathcal{M}_{si}=4$ in the former case and $\mathcal{M}_{si}=2$ in the latter. This work extends the early simulation study of this scenario by \cite{EnsslinBruggen02}. There are two possibly significant distinctions in our approach, although both studies involved 3D MHD simulations of shock impact on low density cavities containing fossil CRe. The first difference is that in our simulations, the cavities formed dynamically in response to AGN jets, whereas \cite{EnsslinBruggen02} initialized their simulation with a static, spherical and uniform cavity with a discontinuous boundary. Dynamical cavities do not have uniform, static interiors, nor simple boundaries, even after substantial relaxation. This can, for example, influence the stability of the cavity boundary during shock passage, and so impact the expected vortex structures. The second distinction in the two simulation studies is that, while both followed evolution of passive CRe populations, our simulations allowed for the possibility of DSA, whereas \cite{EnsslinBruggen02} assumed it was absent. As it turns out, neither of these distinctions is very significant, so that our results largely support the radio phoenix simulation results of \cite{EnsslinBruggen02}.
\subsection{Simulation $\bf{M_s4J}$: $\mathcal{M}_{si} = 4.0$, $t_{j,off} =$ N/A}
\label{subsec:Orth3}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth3-synch-index.pdf}
\caption{Synchrotron images from $\bf{M_s4J}$ at the times in Figure \ref{fig:orth3-PS-RHO}. Resolution is 0.5 kpc. The AGN jet axis and shock normal are in the plane of the sky. Left: Linearly plotted 150 MHz intensity with arbitrary units. Right: 150/600 MHz spectral index, $\alpha_{150/600}$, for regions above 0.1\% of the peak intensity at 150 MHz. Spectral index scale is on the far right. At launch the jet synchrotron spectral index was $\alpha=0.6$. The location of the shock is outlined in dashed gray lines.}
\label{fig:orth3-synch-index}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth3-BmagmG.pdf}
\caption{Volume renderings of the magnitude of the magnetic field in the $\bf{M_s4J}$ simulation at two of the times from figure \ref{fig:orth3-PS-RHO}, rendered from the same view point and orientation. The location of the shock is outlined in dashed gray lines.}
\label{fig:orth3-bmag}
\end{figure*}
The dynamical evolution of the $\bf{M_s4J}$ shock--RG interaction is shown in figure \ref{fig:orth3-PS-RHO}. The figure presents four snapshots of the volume-rendered\footnote{As viewed along the $\hat{z}$ axis at a distance roughly 857 kpc from the AGN.} jet mass fraction tracer (left panels) and logarithmic mass density (right panels) at: (1) $t =19$ Myr, just prior to RG--shock first contact (refer to Figure \ref{fig:orth-setup} for the geometry); (2) $t = 38$ Myr, after the cocoon has been shocked and the post-shock flow has begun to bend the jets; (3) $t = 104$ Myr, after the bent jets have penetrated through the vortex ring; and (4) $t = 202$ Myr, after the vortex ring had pulled inward toward the midplane and was mostly hidden by the jets and nascent NAT tails. In figure \ref{fig:orth3-PS-RHO} and all subsequent volume-renderings, the location of the shock in the ICM is outlined in dashed gray lines. We note that jet material leaving the AGN source after $t=32$ Myr, when the ICM shock passed the location of the jet source, is bent by the post shock wind and does not directly ``know'' about the ICM shock. The resulting NAT morphology does not explicitly require a shock, but is purely a result of the relative motion between the jet source and the medium. On the other hand, we point out below that the bent jets and the radio tails they produce ultimately reach the shock from downwind and modify it.
At $t=38$ Myr, the shock has propagated through the lobes of the RG. The jets have been obviously bent downwind by post shock ram pressure and are beginning to form what will become the tails of the future NAT. The previously planar shock has been modified during its passage through the RG lobes. In particular it has advanced ahead of the external, ICM shock in sections where it has intersected the low-density, high-sound-speed cocoon. Also visible at $t=38$ Myr is the beginning of the vortex ring structure formed from the remnants of the shocked cocoon material. Immediately after shock impact, it is still two distinct vortex rings originating from the two separate cocoons, with a small separation at the midpoint between the two remnant cocoons. The rings are elongated in the vertical direction because they trace the boundaries of the elongated cocoons prior to the shock impact. By $t=104$ Myr, the two parallel vortex rings have merged, as described in section \S \ref{subsec:VortRings}. The single ring structure is more apparent when rotated out of the plane of the sky, as in the left panel of figure \ref{fig:orth34-PS-rot60}.
Also by $t=104$ Myr, the jets have been bent completely downwind and a NAT structure has formed. In this construction we can roughly identify both jets, as coherent flows, and associated ``tails'', as somewhat more diffuse, blended flows with motions more or less aligned with the jets \citep[e.g.,][]{ONeill19a}. The tails, with embedded jets, can be seen to be passing through the vortex ring and advancing farther downwind. The impingement of the tail/jet structures on the shock from behind occurs because the downwind velocity of the tail plasma is actually greater than the post-shock wind speed. The vortex ring also advances downwind as a result of self-induction, as outlined previously, although its advance is less rapid than that of the tails.
The downwind penetration by the tails, also pointed out in the context of more traditional NAT formation by \cite{ONeill19a}, comes about quite simply as a result of the dynamics of tail formation. The physics is particularly straightforward when, as in this case, the launched jet velocities are orthogonal to the wind velocity. Then all of the downwind momentum in the deflected jets is necessarily extracted from the post shock wind. The tails include a mix of post shock ICM and jet plasma, so, again, all of their downwind momentum came from the post shock wind. Because mass densities in the tails are generally significantly less than in the post shock wind (see figure \ref{fig:orth3-PS-RHO}), the concentration of momentum flux in the tails leads to their enhanced velocities with respect to the wind. As long as a shock propagating into a medium at rest has Mach number $\mathcal{M}_{si} \gtrsim 1.87$, the post-shock wind speed will be supersonic with respect to the pre-shock ICM sound speed. Therefore, as just noted, since the tails advance faster than the post-shock wind, they can overtake the external shock. In that case their progress could create effective bow shocks in advance of the external, ICM shock. By $t=104$ Myr this has occurred in the $\bf{M_s4J}$ simulation, and the visible shock surface in figure \ref{fig:orth3-PS-RHO} is a combination of the ICM shock and the bow shock from the tails.
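That threshold follows from the Rankine--Hugoniot post-shock flow speed, $v_w = 2a_i(\mathcal{M}^2-1)/[(\gamma+1)\mathcal{M}]$; setting $v_w = a_i$ gives (our check):
\begin{verbatim}
import numpy as np

# Solve 2*(M**2 - 1) = (g + 1)*M, i.e. the Mach number above which the
# post-shock wind is supersonic relative to the pre-shock sound speed.
g = 5.0 / 3.0
m_crit = ((g + 1) + np.sqrt((g + 1)**2 + 16)) / 4
print(m_crit)  # ~1.87
\end{verbatim}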
By $t=202$ Myr, the vortex ring has pulled inward nearer to the jets, becoming difficult to distinguish in the renderings of jet mass fraction and density. The large curvature of the vortex structure near the top and bottom of the ring causes those locations to lead the rest of the ring slightly, in response to the increased induced velocity at that point (see equation \ref{eq:inducedVel}). This alters the direction of propagation of this section of the ring, adding a component in the direction toward the midplane between the jets, causing the ring to shrink in vertical size.
Radio synchrotron images with 0.5 kpc resolution are shown in figure \ref{fig:orth3-synch-index} at the same times as in figure \ref{fig:orth3-PS-RHO}. The AGN jets and the ICM shock normal are in the plane of the sky. Each image is constructed from integrated synchrotron emissivities along the line of sight. The left panels show the synchrotron brightness (arbitrary units) at 150 MHz. The right panels show the radio spectral index, $\alpha_{150/600}$, with an intensity cut such that the image includes only pixels where the intensity at 150 MHz is above 0.1\% of the peak intensity at 150 MHz at that time. As before, the location of the shock in the ICM is indicated by a dashed gray line. At $t=38$ Myr, the shock interaction causes brightening in the lobes as they are compressed, energizing the CRe and enhancing the magnetic field strength. Figure \ref{fig:orth3-bmag} shows volume renderings of the magnetic field strength from the $\bf{M_s4J}$ simulation at two times after the shock has impacted the RG. As a result of the shock impact, the magnetic fields in the remnant shocked lobes are compressed and amplified. This is greatest in the regions where the still active jets interact with the magnetic fields originally in the remnant lobes, relatively near the midpoint between the two.
By $t=104$ Myr, when the bending in the jets is well established, the magnetic field adjacent to the jet launching cylinder is distorted by the shear associated with the post shock wind, amplifying the field and making it predominantly poloidal with respect to the jet axis. At launch the jets' magnetic field was purely toroidal. Also at $t=104$ Myr, the region where the two vortex rings converge into one ring shows a significant enhancement in the magnetic fields.
All of these regions of enhanced magnetic field strength show up significantly in the synchrotron images in figure \ref{fig:orth3-synch-index}. Indeed, the sensitivity of synchrotron emissivity to magnetic field is obvious in a comparison between the field strengths in figure \ref{fig:orth3-bmag} and the radio bright regions in figure \ref{fig:orth3-synch-index}.
By $t=104$ Myr, it becomes very difficult to see the vortex ring structure in the radio intensity images in contrast to the tails. There are two main reasons for this: first, the CRe population contained in the vortex ring was deposited in the lobes prior to the shock impact, so it is an older population and has experienced substantially more cooling from inverse Compton and synchrotron losses. Second, as can be seen in figure \ref{fig:orth3-bmag}, the magnetic fields in the ring are generally weaker than in the tails. Overall, this means that in the presence of active jets, the emission from a vortex ring containing shocked lobe material will be subdominant, and the timescale over which the ring may be visible will depend on the cooling rate of the CRe.
In addition to the cooling timescale limiting the duration of vortex ring visibility, the dynamical evolution of the ring may also limit its visibility over time. In the $\bf{M_s4J}$ simulation, after the two vortex rings from the two lobes had merged, the resulting ring was highly elongated in the vertical direction. This more elliptical ring structure had high curvature at the top and bottom of the ring, resulting in higher self-induced velocities at those points. This caused those parts of the ring to move downwind ahead of the rest of the ring, altered the geometry of the ring, and, as a result, changed the direction of the induced velocity at those points to have a component toward the midplane. The end result is that the vertical extent of the ring decreases as it propagates. In our simulation, this limited the observability of the ring because the significantly radio-brighter RG tails occupied the region interior to the ring; as the ring decreased in vertical extent, it began to occupy the same region as the tails in projection, and became hidden. This dynamical situation is likely to occur in any elongated vortex ring, and if two vortex rings (from a pair of RG lobes) merge, they are likely to be elongated along the direction connecting the two previous ring centers. Whether or not the rings become hidden as they `shrink' will depend on the presence and detailed dynamics of any RG jets/tails.
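Purely as an illustration of this geometric effect (the full induced-velocity relation is equation \ref{eq:inducedVel}; the local-induction scaling $v \propto \kappa$ used here is a standard thin-filament approximation, not our simulation), the curvature contrast around an elongated ring is large:
\begin{verbatim}
import numpy as np

# Curvature of an ellipse (a*cos(t), b*sin(t)); in the local induction
# approximation a thin filament advances ~ proportionally to kappa, so the
# high-curvature ends of the major axis (the ring's top and bottom here)
# lead the rest of the ring.
a_ell, b_ell = 2.0, 1.0
theta = np.linspace(0, 2 * np.pi, 400)
kappa = a_ell * b_ell / (a_ell**2 * np.sin(theta)**2
                         + b_ell**2 * np.cos(theta)**2) ** 1.5
print(kappa.max() / kappa.min())  # = (a/b)**3 = 8: strong speed contrast
\end{verbatim}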
The evolution of the CRe populations can also be seen in the spectral index images on the right of figure \ref{fig:orth3-synch-index}. At launch, the CRe in the jet have power law momentum spectra with $q_0 = 4.2$, so that the jet synchrotron spectrum is a power law with $\alpha \sim \alpha_0 = 0.6$. Some lobe material dominated by early jet activity displays slightly ``aged'', steeper spectra by the time of shock impact. In response to the shock passage, adiabatic compression energizes CRe and enhances field strength. Since the synchrotron intensity images are made at fixed frequency, the post shock emission comes from CRe that were previously lower energy, so that their radiative lifetimes were long compared to the elapsed time. Consequently, at this relatively early time, $t = 38$ Myr, there is little apparent spectral steepening in those populations. In contrast, at $t=104$ Myr, which now involves $t \sim \tau_{rad}$ for CRe of primary interest, the portion of the ring still bright enough to show up in the image has steepened to a spectral index, $\alpha \sim 1.0$. This is significantly steeper than the emission from the jet tails in the same region, since the latter contain plasma that only recently was launched in the jets. By $t=202$ Myr the vortex ring is no longer visible in the spectral index image, because, largely in response to radiative aging, the intensities used in determining $\alpha$ have fallen below the applied intensity cuts. Spectra displayed in the tails can be seen to steepen to $\alpha \gtrsim 1.4$ over a distance from the source of $\sim 400$ kpc. Those end tail portions represent CRe deposited largely during and soon after shock impact, so that $t \ga \tau_{rad}$ over much of the relevant CRe energy range.
While we did not set out in this paper to model any individual sources, but rather to learn about the physics of a class of physical interactions in clusters, there are cases which bear resemblance to the radio images we produced from our simulations. One striking example worth noting here is the so-called ``Coma relic'' \citep[see, e.g.][]{Giovannini91}, in which radio galaxy jets are bent into a NAT whose disrupted tails lead to a bright, steep spectrum feature transverse to the tails. This similarity in structure to the $\bf{M_s4J}$ case (see figure \ref{fig:orth3-synch-index}) could imply a similar dynamical origin. However, the nature of the shock associated with the Coma relic is a matter of ongoing investigation.
\subsection{Simulation $\bf{M_s4}$: $\mathcal{M}_{si} = 4.0$, $t_{j,off} = 32$ Myr}
\label{subsec:Orth4}
Figure \ref{fig:orth4-PS-RHO} shows volume renderings of the jet mass fraction (left) and the logarithmic mass density (right) from the $\bf{M_s4}$ simulation at times $t=38$ Myr and $t=104$ Myr. The $\bf{M_s4}$ simulation began as a restart of the $\bf{M_s4J}$ simulation from time $t=22$ Myr, but deactivated the AGN jets at $t=32$ Myr, approximately when the shock reached the jet launch cylinder. This distinction from $\bf{M_s4J}$ makes clearer the level of jet influence on the evolution of the vortex rings and the shock front after its encounter with the RG, while also illuminating the role of fresh CRe injection by the jets as the dynamical structures evolve.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth4-PS-RHO.pdf}
\caption{Volume renderings of the $\bf{M_s4}$ simulation at two of the times from figure \ref{fig:orth3-PS-RHO}. The shock normal and jet axis are in the viewing plane. The jets deactivated shortly before the top snapshot. Left: Jet mass fraction ($>30$\% visible) with the location of the shock outlined in dashed gray lines; Right: Log mass density spanning 3 decades in $\rho$, with key dynamical structures highlighted, including the ICM shock. Images are rendered from a distance of 857 kpc from the RG.}
\label{fig:orth4-PS-RHO}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth4-synch.pdf}
\caption{Synchrotron images from $\bf{M_s4}$ at the times in Figure \ref{fig:orth4-PS-RHO}. Resolution is 0.5 kpc. The AGN jet axis and shock normal are in the plane of the sky. Left: Linearly plotted 150 MHz intensity with arbitrary units. Right: 150/600 MHz spectral index, $\alpha_{150/600}$, for regions above 0.5\% of the peak intensity at 150 MHz. Spectral index scale on the far right. At launch the jet spectral index was $\alpha=0.6$.}
\label{fig:orth4-synch}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth3-4-0020-PS-rot60.pdf}
\caption{Volume renderings of the jet mass fraction from (Left) $\bf{M_s4J}$ and (Right) $\bf{M_s4}$ at 104 Myr. The view is rotated around the vertical axis by 60 degrees so the shock propagates into the page in order to highlight the ``ring'' structures produced. Images are rendered from a distance of 348 kpc from the RG.}
\label{fig:orth34-PS-rot60}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth34-Spectra.pdf}
\caption{Integrated spectral evolution of the $\bf{M_s4J}$ (left) and $\bf{M_s4}$ (right) simulations, in arbitrary flux units. Reference slopes of $\alpha=0.6$ and $\alpha=1.0$ are included. Shock impacts on the RGs begin at $t \sim 20$ Myr.}
\label{fig:orth34-spectra}
\end{figure*}
As in the $\bf{M_s4J}$ simulation, the shock propagates relatively quickly through the low density cavity, moving ahead of the shock in the external medium. However, by time $t=104$ Myr, deviation from shock planarity has diminished significantly, in contrast to the behavior in the $\bf{M_s4J}$ simulation. This reinforces our conclusion that the significant deviations from shock planarity in the $\bf{M_s4J}$ simulation at this same time are more the result of added downwind momentum by the jet interacting with the shock than simply from the shock's interaction with the initial cavity. At time $t=104$ Myr in the $\bf{M_s4}$ simulation, a lower density (relative to the post-shock wind density) ``wake'' had formed behind the jet launching cylinder, which for numerical reasons remained impenetrable, and connected to the vortex ring. The vortex ring itself formed in much the same way as in the $\bf{M_s4J}$ simulation. The cocoon (lobe) plasma became wrapped up into the shock-induced vortex rings developing along the peripheries of the cavities. The vortex ring then advanced at the same rate as in the $\bf{M_s4J}$ simulation. Based on this, we conclude that the vortex rings in the two simulations evolve mostly independently of the presence or absence of jets. This is due at least in part to the fact that the jets in this simulation are deflected into the interiors of the vortex rings, rather than, for instance, into the ring perimeters. Figure \ref{fig:orth34-PS-rot60} shows, at time $t = 104$ Myr, volume renderings illustrating the relationship between the jets and the vortex ring in the $\bf{M_s4J}$ simulation and the comparative vortex ring structure in the absence of the jets.
In figure \ref{fig:orth4-synch}, the synchrotron emission structure in the ring is visible. After the shock impact, at time $t=38$ Myr, the radio emissivity in the shocked cocoon was again enhanced as the CRe were energized and the fields amplified by compression. At time $t=104$ Myr, the visible parts of the ring are dominated by filamentary emission originating in magnetic flux tubes. The initially toroidal field topology that was dominant in the jet and in the cocoon prior to the shock interaction is stretched and folded into itself. As the vortex formed, the field was wrapped up around the vortex over an eddy time (the time it takes for the fluid to circle around the vortex core, $\sim 75$ Myr in this case). This structure cannot be seen in the $\bf{M_s4J}$ synchrotron images, because the emission from the tails dominates the vortex ring. This is because the tails are continuously refreshed with new CRe populations from the jet. Consequently, the tails generally contain younger CRe populations than those in the vortex ring. The latter is composed of aged CRe that filled the lobes before the shock interaction.
Additionally, more structure from the vortex ring can be seen in the spectral index maps on the right of figure \ref{fig:orth4-synch}, since the bright tails are absent. At $t=38$ Myr, the compressed material is again mostly near the injection index of $\alpha_0 = 0.6$, but near the midpoint between the lobes, the spectrum is steeper than in figure \ref{fig:orth3-synch-index}, since there are no jets to inject fresh CRe into this region. At $t=104$ Myr, the spectral index ranges over $0.7<\alpha < 1.4$, with much of the emission showing $\alpha \sim 1.0$. The brightest emission comes from those regions with higher field strength. Those regions generally produce emission with a flattened spectrum, because the higher fields imply the emission comes from lower energy CRe that have experienced less radiative cooling.
Figure \ref{fig:orth34-spectra} provides a summary of the spectral evolution of the integrated emission for both the $\bf{M_s4J}$ and $\bf{M_s4}$ simulations. The properties of both simulations are very similar at the two earliest times shown. However, at later times the intensities are greater and the spectra flatter with less curvature in the $\bf{M_s4J}$ simulation, reflecting the continued input of energy and CRe by the jets.
\subsection{Simulations $\bf{M_s4Ph}$: $\mathcal{M}_{si} = 4.0$, $t_{j,off} = 16$ Myr\\ and $\bf{M_s2Ph}$: $\mathcal{M}_{si} = 2.0$, $t_{j,off} = 16$ Myr}
\label{subsec:Orth56}
Each of the simulations in this pair began with a Mach 10 jet pair that was active for 16 Myr before deactivating. That activity inflated RG lobes resembling the early stages of those in the other simulated RGs, similar to what is seen in the top panels of figures \ref{fig:orth3-PS-RHO} and \ref{fig:orth3-synch-index}. After jet energy input ceased, the lobes relaxed towards pressure equilibrium with the ICM. From jet deactivation to shock impact about 89 Myr later, the cocoons were dynamically relatively quiet, although their bases did merge, joining the structure into a single, connected cocoon. (There was no buoyancy in this ICM, so the detached lobes did not move away from their source.) On the other hand, in the almost 90 Myr between jet shutoff and shock impact, the CRe in the cavities cooled significantly via radiative losses. Those losses were dominated by inverse Compton scattering, so the cooling rate was almost constant. Had a gravitational potential been included, and thus buoyant effects been in play, adiabatic losses (as the lobes detached, rose, and expanded) would have contributed more substantially.
Of course, from shock impact forward, the evolution of both RG was dramatic. The principal distinction between the two simulations was the strength of the impacting ICM shock. In the $\bf{M_s4Ph}$ simulation the shock was Mach 4, while in the $\bf{M_s2Ph}$ the shock was Mach 2. Post shock dynamical evolution of the $\bf{M_s4Ph}$ simulation can be seen through volume renderings in figure \ref{fig:orth5-ps-rho}, with the jet mass fraction on the left, and the logarithmic mass density on the right. At $t=105$ Myr (slightly after the top panels in Figure \ref{fig:orth5-ps-rho}), the merged cocoon was impacted by the shock. At $t=230$ Myr, the expected vortex ring formed from the shocked cocoon can be observed. However, the jet mass fraction in the vortex is low ($\la 30\%$) due to substantial entrainment of ICM material.
The radio observable consequences of the shock interaction can be seen in figures \ref{fig:orth5-synch} and \ref{fig:orth56-spectra}. Prior to the shock impact, the radio emission at 150 MHz had faded dramatically due to the radiative cooling noted above; this is what makes this case a radio phoenix scenario. Because this dimming is substantial, we display the radio intensity on a logarithmic scale spanning 3 decades in brightness, to better reveal the structures present. After the shock passage, the brightness is substantially increased by adiabatic compression of the CRe as well as increased field strength in the cocoon. The radio spectrum also flattens, because adiabatic CRe re-energization and magnetic field enhancement cause the emission in the observed band to be dominated by CRe previously at energies too low to radiate in this band, but also low enough to reduce their radiative losses (see the right panels of figure \ref{fig:orth5-synch}). Even 125 Myr after the shock impact there are regions of flatter emission ($\alpha_{150/600}\sim1.0$) than immediately prior to the shock, when most of the cocoon exhibited spectral indices, $\alpha_{150/600}\sim 1.3$, with substantially steeper spectra at higher frequencies. This is also evident in the integrated spectra in figure \ref{fig:orth56-spectra}. The right panel shows the evolution of the $\bf{M_s4Ph}$ simulation, including the spectrum just before the jet is deactivated ($t = 13$ Myr) and at a time shortly after the shock has fully compressed the cocoon ($t = 164$ Myr); in the $\bf{M_s4Ph}$ case the shock crossing takes about 25 Myr and ends around $t\sim 130$ Myr. The left panel shows for comparison the $\bf{M_s2Ph}$ spectral evolution with the weaker, $\mathcal{M}_s = 2$, shock. In this $\bf{M_s2Ph}$ case, the shock takes $\sim60$ Myr to fully compress the cocoon ($t \sim 165$ Myr). In both cases there is substantial brightening and flattening of the spectra following the shock interaction. This results mostly from the increase in magnetic field strength and adiabatic compression of the CRe, however, and not from any DSA. We examined the CRe momentum distributions directly and saw no evidence of flattening in the CRe spectra associated with DSA. Also, the radio spectra on the right in figure \ref{fig:orth56-spectra} for the $\bf{M_s4Ph}$ simulation are consistent with pure adiabatic compression of $10\pm2$\%. This is consistent with our observation that within the RG cocoons the shock strength is significantly reduced. Due to the lack of significant mixing between the ICM and the RG plasma prior to the shock impact, the cocoon is relatively homogeneous and is about 50--100 times less dense than the ICM. This leads to the shock becoming almost sonic, with $M_s\gtrsim 1$, although there are some regions with $M_s\sim 2$ as it passes through the cocoon. As mentioned earlier, the results of the $\bf{M_s4Ph}$ and $\bf{M_s2Ph}$ simulations are consistent with analogous findings reported by \cite{EnsslinBruggen02}.
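A toy estimate (ours, assuming isotropic compression of a tangled field rather than the full simulation physics) makes the flattening mechanism explicit: compression by a factor $C$ boosts CRe momenta by $C^{1/3}$ and the field by $C^{2/3}$, so a cooling break at $\nu_{br}$ moves up to $C^{4/3}\nu_{br}$:
\begin{verbatim}
# Adiabatic invariant: p -> C**(1/3) * p; flux freezing in a tangled
# field: B -> C**(2/3) * B; so nu_c ~ Gamma**2 * B -> C**(4/3) * nu_c.
# A fixed observing band therefore samples CRe that sat at lower energies
# (hence less cooled) before compression, flattening the band spectrum.
def break_shift(compression):
    return compression ** (4.0 / 3.0)

for C in (2.0, 4.0):
    print(C, break_shift(C))  # C = 4 moves a cooling break up ~6.3x in nu
\end{verbatim}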
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth5-PS-RHO-2panel.pdf}
\caption{Volume renderings of the $\bf{M_s4Ph}$ simulation at 98 Myr (top, right before the shock interaction) and 230 Myr (bottom). The shock normal and jet axis are in the viewing plane. Left: Jet mass fraction ($>30$\% visible) with the location of the shock outlined in dashed gray lines; Right: Log mass density spanning 3 decades in $\rho$, with key dynamical structures highlighted, including the ICM shock. Images are rendered from a distance of 410 kpc from the RG.}
\label{fig:orth5-ps-rho}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth5-LogSynch-Index-2Panel.pdf}
\caption{Synchrotron images from $\bf{M_s4Ph}$ at the times in Figure \ref{fig:orth5-ps-rho}. Resolution is 0.5 kpc. The AGN jet axis and shock normal are in the plane of the sky. Left: Logarithmic 150 MHz intensity spanning 3 decades in brightness. Right: 150/600 MHz spectral index, $\alpha_{150/600}$, for regions above 0.5\% of the peak intensity at 150 MHz. At both times, the shock is just out of the field of view, to the left (right) at $t=98$ (230) Myr. Spectral index scale on the far right. At launch the jet spectral index was $\alpha=0.6$.}
\label{fig:orth5-synch}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Orth56-spectra.pdf}
\caption{Integrated spectral evolution of the ``radio phoenix'' simulations in arbitrary flux units. Left: $\bf{M_s2Ph}$. Right: $\bf{M_s4Ph}$. In both simulations jet activity ceased at $t = 16$ Myr, while first shock contact was at $t = 105$ Myr. In each plot, the black line represents the time at which the shock has fully compressed the aged RG cocoon.}
\label{fig:orth56-spectra}
\end{figure*}
\section{Summary}
\label{sec:Summary}
We have reported a 3D MHD study of the interactions between lobed radio galaxies initially at rest in a homogeneous ICM and plane ICM-strength shocks when the radio galaxy jet axis is orthogonal to the incident shock normal. These simulations included cases in which the radio jets remained active throughout the simulation, cases in which jet activity terminated during the interaction, and cases in which the jet activity had ceased long enough before the shock impact to allow the embedded relativistic electron populations to ``age'' radiatively before the encounter. This last case is designed as a probe of the so-called ``radio phoenix'' scenario, in which shock encounters illuminate otherwise non-luminous fossil relativistic electron populations.
As in previous studies, these shocks, as they encounter low density RG lobes, propagate very rapidly through the lobes relative to the surroundings. This generates strong shear along the boundary between the lobes and the surrounding ICM. That causes each lobe to form a vortex ring in the shape of the projected cross section of the lobe from the perspective of the incident shock. Such vortex ring formation is the principal obvious signature of the shock encounter. In the cases studied here, where two similar lobes are impacted simultaneously by a shock, two co-planar rings form simultaneously. Due to their mutually induced motions, those two rings merge into a single ring as they propagate downwind behind the shock. The merged, elongated ring acquires a velocity component toward the midplane through self-induction at the high-curvature top and bottom of the ring as it propagates. In our simulations, this caused the ring to become hidden by the bright RG jets/tails as they began to overlap in projection.
If RG jets remain active following such a shock encounter, they are deflected by ram pressure from post-shock winds and form tails propagating downwind towards the shock. These tails extend downwind faster than the wind, even overtaking the shock. This can noticeably deform the shock surface.
Our simulations included the evolution of relativistic electrons introduced by the AGN jets, accounting for adiabatic, radiative and diffusive shock acceleration physics. From those results we computed synchrotron intensities and spectra. Because the shock strengths are strongly depressed inside the radio lobes, diffusive shock acceleration is not very important. On the other hand, as suggested in other studies, adiabatic compression of the relativistic electrons and amplification of magnetic fields during the shock encounter lead to substantially enhanced synchrotron brightness, as well as spectral flattening and straightening. When the radio jets remain active, we found that, because their relativistic electron populations are characteristically less aged, their emission mostly dominated the emission from the remnants of the pre-impact radio galaxy. Our simulations of shock encounters with previously extinguished radio galaxy lobes produce results that are consistent with earlier studies of this scenario.
\acknowledgements
This work was supported at the University of Minnesota by NSF grant AST1714205 and by the Minnesota Supercomputing Institute. CN was supported by an NSF Graduate Fellowship under Grant 00039202 as well as with a travel grant through the School of Physics and Astronomy at the University of Minnesota. We thank numerous colleagues, but especially Larry Rudnick and Avery F. Garon for encouragement and feedback.
\section{Introduction}\label{sec:intro}
\input{sections/a_KinematicsIntro.tex}
\section{VVV proper motions}\label{sec:vvvpm}
\input{sections/b_KinematicsVVVpm.tex}
\section{Made-to-Measure Milky Way Models}\label{sec:m2m}
\input{sections/c_KinematicsNmagic.tex}
\section{Red Giant Kinematics}\label{sec:rg_kinematics}
\input{sections/d_RGB_kinematics.tex}
\input{sections/e_correlationRG.tex}
\section{Extracting the RC\&B from the VIRAC RGB}\label{sec:getRCB}
\input{sections/f_KinematicsGetRc.tex}
\section{Red Clump Kinematics }\label{sec:rc_kinematics}
\input{sections/g_kinematics_Bsliced.tex}
\input{sections/h_kinematics_Ksliced.tex}
\section{Summary \& Conclusions}\label{sec:conclusions}
\input{sections/z_KinematicsConclusion.tex}
\section*{Acknowledgements}
We acknowledge the simultaneous work by \citet{sanders_2019}, who also used an absolute proper motion catalogue derived from VVV and \textit{Gaia} DR2 to study the kinematics of the bulge. The authors of both publications were aware of each others work, but arrived at their conclusions independently.
We thank the anonymous referee whose comments led to improvements in the paper.
We gratefully acknowledge the pioneering work of Matthieu Portail in producing the current version and documentation of \texttt{NMAGIC} used in this publication.
We acknowledge useful discussions with Isabella S{\"o}ldner-Rembold and Johanna Hartke.
CW acknowledges funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie grant agreement No 798384.
This work was based on data products from observations made with ESO Telescopes at the La Silla or Paranal Observatories under ESO programme ID 179.B-2002.
We are grateful to the VISTA Science Archive for providing a user friendly interface from which we could access the VIRAC catalogue.
This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia.
We have used the python $astropy.coordinates.SkyCoord$ package to convert coordinates and proper motions between coordinate systems and the $cov\_pmrapmdec\_to\_pmllpmbb$ function from $galpy$ \citep{bovy_2015} to convert the error covariance matrix between coordinate systems.
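For reference, a minimal example (with placeholder numbers, not catalogue values) of that conversion:
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

# Dummy equatorial position and proper motion, converted to Galactic
# (mu_l*, mu_b) with astropy.
c = SkyCoord(ra=266.4 * u.deg, dec=-29.0 * u.deg,
             pm_ra_cosdec=-3.1 * u.mas / u.yr,
             pm_dec=-5.6 * u.mas / u.yr, frame='icrs')
gal = c.galactic
print(gal.pm_l_cosb, gal.pm_b)  # mu_l* and mu_b in mas/yr
\end{verbatim}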
\bibliographystyle{mnras}
\subsection{The VIRAC Proper Motion Catalogue}\label{subsec:virac}
The VISTA Variables in the Via Lactea (VVV) survey \citep{minniti_2010} is a public ESO near-InfraRed (IR) survey which scanned the MW bulge and an adjacent section of the disc at $l<0^\circ$. Using the 4m-class VISTA telescope, a typical VVV tile was observed at between 50 and 80 epochs over the 5 year period from 2010 to 2015. An extended area of the same region of the galaxy is currently being surveyed as part of the VVVX survey.
The VISTA Infrared Camera (VIRCAM) has a total viewing area of 0.6 deg$^2$ for each pointing with each pointing known as a pawprint. A VVV tile consists of 6 pawprints, three in $l$ times two in $b$, with a total coverage of $\approx$ 1.4 by 1.1$^\circ$, and substantial overlap between the individual pawprints. This overlap ensures that a large number of sources are observed in two or more pawprints.
The bulge region observations are comprised of 196 tiles spanning roughly $-10<l<10^\circ$ and $-10<b<5^\circ$.
The VVV Infrared Astrometric Catalogue (VIRAC) takes advantage of the excellent astrometric capabilities of the VVV survey to present 312,587,642 unique proper motions spread over 560 $\mathrm{deg^2}$ of the MW bulge and southern disc \citepalias{smith_2018}.
In the astrometric analysis a pawprint set was constructed by cross-matching the telescope pointing coordinates within a 20" matching radius which results in a sequence of images of the same on-sky region at different epochs. Each pawprint set was treated independently to allow precise photometry.
This yielded a total of 2100 pawprint sets from which independent proper motions could be calculated.
In section 2 of \citetalias{smith_2018} the criteria for rejecting a pawprint are outlined.
Within each pawprint set, a pool of reference sources with $\mu_l^\star$ and $\mu_b$ not significantly deviant from the local $<\mu_l^\star>$ and $<\mu_b>$ is extracted in an iterative process. All proper motions within a pawprint set are calculated \textit{relative} to this pool but, because the absolute $<\mu_l^\star>$ and $<\mu_b>$ are unknown at this stage, there is an unknown drift in $l$ and $b$ for each pawprint, which we measure in section \ref{subsec:2absPM} using \textit{Gaia} data.
The difference in drift velocity of the reference sources between pawprint sets, within a VVV tile, is smaller than the measurement error on the proper motion measurements from a single pawprint set.
A VVV tile can therefore be considered to be in a single consistent reference frame with a constant offset from the absolute reference frame.
To calculate final proper motions for stars observed in multiple pawprints \citetalias{smith_2018} use inverse variance weighting of the individual pawprint measurements.
Also provided is a reliability flag to allow selection of the most reliable proper motion measurements.
The approach and criteria used to determine this flag are presented in section 4.2 of \citetalias{smith_2018}.
In this paper we only use stars whose reliability flag is equal to one, denoting the most trustworthy proper motions.
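The final per-star proper motions above come from inverse-variance weighting of the per-pawprint-set measurements; schematically (our sketch, with hypothetical arrays, one entry per pawprint set):
\begin{verbatim}
import numpy as np

def combine_pawprints(mu, sigma):
    """Inverse-variance weighted mean of the proper motions measured for
    one star in several pawprint sets (mas/yr), plus its formal error."""
    w = 1.0 / sigma**2
    return np.sum(w * mu) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

print(combine_pawprints(np.array([-3.2, -2.9, -3.4]),
                        np.array([0.8, 1.1, 0.9])))
\end{verbatim}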
In this work we adopt the VVV tiling structure for the spatial binning. For integrated on-sky maps we split each tile into quarters for greater spatial resolution. However when considering the kinematics as a function of magnitude we use the full tile to maintain good statistics in each magnitude interval.
For the majority of tiles in the VIRAC catalogue there is photometry in $K_s$, H and J bands. The exceptions are fields b274 and b280 for which VIRAC has no H band data and b212 and b388 for which VIRAC has no J band data.
These data were not present in VVV DR4 when the photometry was added to VIRAC.
We make use of an example tile in figures illustrating the analysis approach. The tile is b278 which is centred at approximately $l$=1.0$^\circ$, $b$=-4.2$^\circ$.
\subsection{Correction to absolute Proper motions with Gaia}\label{subsec:2absPM}
The VIRAC catalogue presents the proper motions in right ascension (RA), $\mathrm{\mu_{\alpha^*}}$, and declination (DEC), $\mathrm{\mu_{\delta}}$, relative to the mean proper motions in a VVV tile.
To obtain the absolute proper motions each VVV tile is cross matched with the $Gaia$ DR2 catalogue to make use of its exquisite absolute reference frame \citep{lindegren_2018}.
Only matches within 1.0 arcsec are considered.
Figure \ref{fig:absolutePM} shows the proper motions as measured by $Gaia$ plotted against the proper motions as measured by VIRAC for VVV tile b278.
The left panel shows the comparison for RA and the right panel shows the comparison for DEC.
Stars are selected for use in the fitting based upon a series of quality cuts:
\begin{inparaenum}
\item The uncertainty in proper motion measurement is less than 1.5 $\mathrm{mas \, yr^{-1}}$ for both \textit{Gaia} and VIRAC.
\item The star has an extincted magnitude in the range $10<K_s<15$ mag.
\item The star is classed as reliable according to the VIRAC flag.
\item The cross match angular distance between VIRAC and \textit{Gaia} is less than 0.25".
\end{inparaenum}
These criteria result in a sample of stars for which the mean G band magnitude is $\approx16.5$ with a dispersion of $\approx1.0$ magnitudes.
By construction, a linear relationship with gradient equal to one is fitted to the distribution. The fit is good, as expected given that there should be a single offset between \textit{Gaia} and VVV proper motions for each pawprint set.
The offset between the zero point for VIRAC and \textit{Gaia} is caused by the drift motion of the pool of reference stars used for each pawprint set. The measured offsets and uncertainties for the example tile are quoted in figure \ref{fig:absolutePM}.
The consistency checks performed by \citetalias{smith_2018} showed that measurements between different pawprint sets are consistent at the tile scale. A single offset per tile is therefore used to correct from relative proper motions to the absolute frame.
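A minimal sketch of such a per-tile zero-point fit (ours; the outlier-clipping details here are illustrative, not the exact procedure of \citetalias{smith_2018}):
\begin{verbatim}
import numpy as np

def tile_offset(mu_gaia, mu_virac, clip=3.0):
    """Robust mean of (Gaia - VIRAC) proper-motion differences for the
    cross-matched stars in one tile, i.e. the zero point of a fixed,
    gradient-one line. Inputs are 1D arrays in mas/yr."""
    d = mu_gaia - mu_virac
    med = np.median(d)
    mad = 1.4826 * np.median(np.abs(d - med))    # robust scatter estimate
    keep = np.abs(d - med) < clip * mad          # reject stray mismatches
    offset = np.mean(d[keep])
    err = np.std(d[keep]) / np.sqrt(keep.sum())  # formal statistical error
    return offset, err
\end{verbatim}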
To check this assumption further we computed the offsets on a sub-tile scale for tile b278; see figure \ref{fig:absolutePM_offsetTest}. We use a ten by ten sub-grid and determine $\sigma_{\Delta\mu_{\alpha}}$=0.10 $\mathrm{mas \, yr^{-1}}$ and $\sigma_{\Delta\mu_\delta}$=0.12 $\mathrm{mas \, yr^{-1}}$. These values show that the true uncertainty in the fitted offsets is about two orders of magnitude larger than their formal statistical uncertainty. We also see indications of a gradient across the tile in the DEC offsets.
These are likely a combination of two effects.
There are known systematics in the \textit{Gaia} proper motion reference frame \citep{lindegren_2018}, an example of which was observed in the LMC \citep{helmi_2018}.
Additionally, there are possible variations in $<\mu_l^\star>$ and $<\mu_b>$ on this scale: variable extinction changes the average distance of the reference sources, which in turn varies the measured mean proper motions.
\subsection{Extracting Red Giants}\label{subsec:getRGonly}
The stellar population observed by the VVV survey can be split into two broad categories: the foreground (FG) disc stars and the bulge stars.
Figure \ref{fig:galaxiaDistanceMap} shows the colour-distance distribution of a stellar population model made using $galaxia$ \citep{sharma_2011}.
The model was observed in a region comparable to the example tile and only stars with $K_{s0}<14.4$ mag are used.
The FG disc stars are defined to be those that reside between the bulge and the Sun, at distances D $\lesssim$ 4 kpc.
Considering the magnitude range $11.5<K_{s0}<14.4$ mag we work in, the stars observed at D $\lesssim4$ kpc will be mostly main sequence (MS) stars.
The bulge stars residing at distances D $>$ 4 kpc are expected to be predominantly RG stars.
Figure \ref{fig:galaxiaDistanceMap} is analogous to a colour-absolute magnitude diagram and shows the two stellar types are separated spatially along the line of sight with only a relatively small number of sub-giant (SG) stars bridging the gap.
\begin{figure}
\includegraphics[width=\columnwidth]{figures_A_vvvpm/4paper_color_dist_hist.pdf}
\caption{Tile b278 (1.$^\circ$, -4.2$^\circ$). Colour-distance distribution for a single line of sight, and in the magnitude range $11.0<K_{s0}<14.4$ mag, made using the $galaxia$ model. We see a clear MS and then a RG branch with a strong density peak at the galactic centre, much of which is due to RC stars at this distance.
The RG stars are clearly separated spatially from the MS stars that can only be observed when at distances D $\lesssim$ 3 kpc (horizontal black line). We remove the FG MS stars as they will have disc kinematics and we wish to study the kinematic structure of the bulge-bar.}
\label{fig:galaxiaDistanceMap}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figures_A_vvvpm/4paper_color_color_selection.pdf}
\caption{Tile b278 (1.$^\circ$, -4.2$^\circ$). Illustration of the colour selection procedure for the \textit{galaxia} synthetic stellar population. The top panel shows the reddened colour-colour log density diagram for the example tile. The middle panel shows the gaussian mixtures that have been fitted to this distribution. The blue contours highlight the foreground population and the red contours show the RGs.
The bottom panel shows the RGB population following the subtraction of the FG component.}
\label{fig:colorcolor_galaxia}
\end{figure}
To study the kinematics of the bulge we remove the FG stars to prevent them contaminating the kinematics of the bulge stars.
Considering the colour-colour distribution of stars, $(J-K_s)$ vs $(H-K_s)$, we expect the bluer FG to separate from the redder RG stars, see figure \ref{fig:galaxiaDistanceMap}.
We use the colour-colour distribution as the stars' colours are unaffected by distance.
A stellar population that is well spread in distance will still have a compact colour-colour distribution if the effects of extinction and measurement uncertainties are not too large.
The top panel of figure \ref{fig:colorcolor_galaxia} shows the colour-colour distribution for the $galaxia$ model observed in the example tile.
There are two distinct features in this diagram.
The most apparent feature is the redder (upper right) density peak that corresponds to stars on the RGB.
The second feature is a weaker, bluer density peak (lower left) which corresponds to the MS stars.
These two features overlap due to the presence of sub-giants which bridge the separation in colour-colour space.
In tiles where there is more extinction the RGB component is shifted to even redder colours.
The MS stars, which are closer, are not obscured by the extinction to the same extent and are not shifted as much as the RG stars.
This increases the distinction between the two components and so we separate based upon colour before correcting for extinction.
\begin{figure}
\includegraphics[width=\columnwidth]{figures_A_vvvpm/4paper_GMM_results_dist_mag.pdf}
\caption{Tile b278 (1.$^\circ$, -4.2$^\circ$).
Top panel: Distance distribution of the galaxia synthetic stellar population. The whole distribution is outlined in black and the sample has been divided according to the result of the GMM fitting for the foreground. The stars called RGB are shown in red and the FG component in blue.
We zoom in on the 0. $<$ D/kpc $<$ 3.5 region of the plot to provide greater clarity.
Bottom panel: The same decomposition now mapped into magnitudes. In addition we show the contribution of the stars classed as RGB by the GMM that are at distances D $<$ 3.5 kpc as the green histogram. These stars contribute $\sim0.6\%$ of the total RGB population.
This shows that the GMM modelling is successful in identifying most of the MS foreground stars with only a slight residual contamination. }
\label{fig:distance_distribution_galaxia}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figures_A_vvvpm/colorSelection_plot4paper.pdf}
\caption{Tile b278 (1.$^\circ$, -4.2$^\circ$). Plots illustrating the separation of FG stars from the RG stars for the VVV example tile using a GMM technique. Top: Colour-colour histogram for the example tile. There are two populations, FG and RGB stars, that overlap slightly in this space but are clearly individually distinct density peaks. Middle: GMM contours showing the fit to the colour-colour distribution. The fit has correctly identified the two populations and allows a probability of the star belonging to either population to be assigned. Bottom: Histogram of the same data where each star is now weighted by its probability of being a RG.
The FG component has been successfully removed. There is a smooth transition in the overlap region between FG and RGB, with no sharp cutoffs in the number counts of stars. This is expected from a realistic stellar population and cannot be achieved with a simple colour cut.
}
\label{fig:colorcolor}
\end{figure}
We use gaussian mixture modelling (GMM) to fit a multi component 2D gaussian mixture (GM) to the colour-colour distribution.
Fitting was performed with \textit{scikit-learn} \citep{scikit-learn}.
The fit is improved by using only stars with an extinction corrected magnitude $K_{s0}<14.4$ mag, see section \ref{subsec:extinction} for details of the extinction correction.
At fainter magnitudes the FG and RGB sequences merge together and it becomes increasingly difficult for the GMM to accurately distinguish the two components.
We use different numbers of gaussians depending on the latitude, and the fits have been visually checked to ensure that they have converged correctly.
Having identified the FG and RG components, we weight each star by its probability of being a RG star.
The weighting is calculated as follows,
\begin{equation}\label{eqn:rcWeighting}
w_{\mathrm{RG}} = \frac{ P\left( \mathrm{RG} \right) }{P\left( \mathrm{RG} \right) + P\left( \mathrm{FG} \right)},
\end{equation}
where P(RG) and P(FG) are the probability of a star's colours given the RGB and FG gaussian mixtures respectively, and $\mathrm{w_{RG}}$ can take values in the range 0 to 1.
For the few stars that do not have a measured J band magnitude we assign a weighting equal to one.
These stars are mostly highly reddened, which is why their J band magnitudes are not measured; they are therefore likely to be bona fide bulge stars.
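As an illustration of this step, the following minimal Python sketch computes the weighting of equation \ref{eqn:rcWeighting} from a fitted gaussian mixture using the \texttt{GaussianMixture} class of \textit{scikit-learn}. The input colour arrays \texttt{jk} and \texttt{hk}, the number of components, and the colour threshold used to assign components to the FG or RG population are illustrative assumptions rather than the values used in this work.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# Colour-colour data for one tile; jk = (J-Ks), hk = (H-Ks) are
# assumed inputs.
X = np.column_stack([jk, hk])
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

# Assign each fitted gaussian to a population by its (J-Ks) mean:
# bluer components -> FG, redder -> RG (threshold illustrative).
is_rg = gmm.means_[:, 0] > 0.7

# predict_proba returns per-component responsibilities that sum to
# one for each star, so summing the RG components directly yields
# w_RG = P(RG) / (P(RG) + P(FG)).
resp = gmm.predict_proba(X)
w_rg = resp[:, is_rg].sum(axis=1)
\end{verbatim}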
To test the procedure outlined above, we applied it to the $galaxia$ model. The model has had extinction applied and the magnitudes have been randomly convolved with typical observational uncertainties to mimic the VVV survey. When selecting only the bright stars on which to apply the modelling, we correct the mock extincted magnitudes using the same method as is used on the data to make the test as consistent as possible.
The progression is shown in figure \ref{fig:colorcolor_galaxia}: the top panel outlines the double peaked nature of the colour-colour diagram, the middle panel shows the fitted gaussians, FG in blue and RGB in red, and the bottom panel shows the original histogram now weighted according to equation \ref{eqn:rcWeighting}. The GMM has identified the density peaks correctly and removed the stars in the FG part of the diagram.
Figure \ref{fig:distance_distribution_galaxia} shows the results of the GMM procedure on the $galaxia$ population's distance (top) and luminosity function (bottom). The GMM successfully removes the majority of stars at distances $D<3$ kpc. The contamination fraction in the RGB population by stars at $D<5$ kpc distance is then only $\approx 1\%$.
Figure \ref{fig:distance_distribution_galaxia} also shows the presence of a FG population that corresponds to the blue MS population shown in figure \ref{fig:galaxiaDistanceMap} at colours $(J-K_s)_0\lesssim0.7$. At D$\lesssim$1.2 kpc a small number of stars are included in the RGB population which plausibly correspond to the redder faint MS population seen in figure \ref{fig:galaxiaDistanceMap}. This population accounts for $\sim0.6\%$ of the overall RGB population. The RGB population tail at D $\lesssim$ 3 kpc is composed of SG stars. The GMM is clearly extremely successful at removing the MS stars and leaving a clean sample of RGB with a tail of SG stars.
Having demonstrated that the GMM colour selection process works we apply it to each tile. Figure \ref{fig:colorcolor} shows the progression for tile b278. This plot is very similar to figure \ref{fig:colorcolor_galaxia} and gives us confidence that the GMM procedure is a valid method to select the RGB bulge stars.
The sources at low $(H-K_s)$ and high $(J-K_s)$ present in the data but not the model are low in number count and do not comprise a significant population.
As mentioned in section \ref{subsec:virac} there are 4 tiles with incomplete observations in either H or J bands. Tiles b274 and b280 have no H band measurements in VIRAC and the colour-colour approach cannot be applied. For these tiles we apply a standard colour cut at $(J-K_s)_0<0.52$ to remove the FG stars. Figure \ref{fig:b274_color_mag} illustrates this cut and also includes lines highlighting the magnitude range we work in, $11.5< K_{s0}< 14.4$ mag. The fainter limit is at the boundary where the FG and RGB sequences are beginning to merge together and the brighter limit is fainter than the clear artefact which is likely due to the VVV saturation limit.
We exclude the two tiles with no J band observations from the analysis as we do not wish to include the extra contamination due to the foreground in these two tiles. These tiles are plotted in grey throughout the rest of the paper.
\begin{figure}
\includegraphics[width=\columnwidth]{figures_A_vvvpm/274_color_mag_diagram_cuts.pdf}
\caption{ Tile b274 (-4.8$^\circ$, -4.2$^\circ$). Colour magnitude diagram for one of two tiles with no H band observations and requiring a colour cut at $(J-K_s)_0=0.52$ mag (vertical black line) to separate the FG stars.
The two horizontal lines mark the boundary of our magnitude range of interest at $11.5<K_{s0}<14.4$ mag. The fainter boundary is selected to be brighter than where the FG and RGB populations merge in this diagram which aids in the application of the colour-colour selection in tiles with full colour information.}
\label{fig:b274_color_mag}
\end{figure}
\subsection{Extinction Correction}\label{subsec:extinction}
By observing in the IR, VVV can observe much deeper near the galactic plane, where optical instruments like \textit{Gaia} are hindered by dust extinction.
However, at latitudes $|b|<2^{\circ}$ the extinction becomes significant even in the IR, with $A_K > 0.5$.
We use the extinction map derived by \citet{gonzalez_2012}, shown together with the VVV tile boundaries in figure \ref{fig:extinction}, to correct the $K_s$ band magnitudes directly via $K_{s0}=K_{s}-A_K(l,b)$, where $K_{s0}$ is the unextincted magnitude. This map has a resolution of 2'.
We correct the H and J bands, where available, using the $A_K$ values from the map and the coefficients $A_H/A_K=1.73$ and $A_J/A_K=3.02$ \citep{nishiyama_2009}.
We use the extinction map as opposed to an extinction law because some of the stars do not have the required H or J band magnitudes.
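Schematically, and assuming an array \texttt{A\_K} holding the \citet{gonzalez_2012} map value interpolated at each star's $(l,b)$, the correction amounts to:
\begin{verbatim}
AH_OVER_AK = 1.73   # Nishiyama et al. (2009)
AJ_OVER_AK = 3.02

Ks0 = Ks - A_K                # dereddened Ks magnitude
H0  = H  - AH_OVER_AK * A_K   # where H is available
J0  = J  - AJ_OVER_AK * A_K   # where J is available
\end{verbatim}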
A further issue, caused partially by extinction but also by crowding in the regions of highest stellar density, is the incompleteness of the VVV tiles.
Our tests have demonstrated that at latitudes $|b|>1.0^\circ$, and away from the galactic centre ($|l|>2.0^\circ$, $|b|>2.0^\circ$), the completeness is $>80\%$ at $K_{s0}= 14.1$ mag. Inside these regions the completeness is lower, and so we exclude them from our magnitude dependent analysis.
Our extinction correction assumes that the dust is a foreground screen. Due to the limited scale height of the dust this is a good assumption at high latitude. The assumption becomes progressively worse at lower latitudes and the distribution of actual extinctions increasingly spreads around the map value due to the distance distribution along the line of sight.
Due to incompleteness we exclude the galactic plane, which is also where the 2D dust assumption is worst, from our magnitude dependent analysis.
We further apply a mask at $A_K=1.0$ mag when considering integrated on-sky maps.
\begin{figure}
\includegraphics[width=\columnwidth]{figures_A_vvvpm/extinctionMap.pdf}
\caption{Extinction data from \citet{gonzalez_2012}. Map showing the $K_s$ band extinction coefficient $A_K$ at a resolution of 2'. It shows the large extinction in the galactic plane and also in places out to $|b|<2^\circ$. Overplotted on this map are the outlines of the VVV tiling pattern with tile b201 at the bottom right, tile b214 at the bottom left and tile b396 at the top left.}
\label{fig:extinction}
\end{figure}
\subsection{Synthetic Luminosity Function}\label{subsec:particle2stellarDist}
To construct an absolute LF representing the bulge stellar population we used:
\begin{inparaenum}
\item The Kroupa initial mass function \citep{kroupa_2001} as measured in the bulge \citep{wegg_2017};
\item a kernel-smoothed metallicity distribution in Baade's window from \citet{zoccali_2008} where we use the metallicity measurement uncertainty to define each kernel;
\item isochrones describing the stellar evolution for stars of different masses and metallicities.
\end{inparaenum}
The PARSEC + COLIBRI isochrones \citep{bressan_2012,marigo_2017} were used with the assumption that the entire bulge population has an age of 10 Gyr \citep{clarkson_2008,surot_2019}.
These three ingredients were combined in a Monte Carlo simulation where an initial mass and metallicity are randomly drawn and then used to locate the 4 nearest points on the isochrones.
Interpolating between these points allows the [$M_K$,$M_H$,$M_J$] magnitudes of the simulated star to be extracted.
The simulation was run until $\mathrm{10^6}$ synthetic stars had been produced.
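A minimal sketch of the initial-mass draw is given below, sampling a Kroupa (2001) IMF through a numerical inverse CDF. The mass limits, the placeholder gaussian metallicity draw, and the stubbed isochrone step are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def kroupa_imf(m):
    # Unnormalised Kroupa (2001) IMF dN/dm, continuous at 0.5 Msun.
    return np.where(m < 0.5, m**-1.3, 0.5 * m**-2.3)

# Numerical inverse CDF on a log-spaced mass grid.
m_grid = np.logspace(np.log10(0.1), np.log10(1.0), 4096)
cdf = np.cumsum(kroupa_imf(m_grid) * np.gradient(m_grid))
cdf /= cdf[-1]

rng = np.random.default_rng(42)
mass = np.interp(rng.random(10**6), cdf, m_grid)

# Placeholder metallicity draw; the real calculation samples the
# kernel-smoothed Zoccali et al. (2008) distribution instead.
feh = rng.normal(0.0, 0.4, size=mass.size)

# For each (mass, feh) pair, interpolate between the 4 nearest
# isochrone points to obtain (M_K, M_H, M_J); stubbed here.
\end{verbatim}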
To observe the model as if it were the VIRAC survey it is necessary to implement all the associated selection effects.
In section \ref{subsec:getRGonly} a colour based selection was used to weight stars based on their probability of belonging to the RGB.
The same colour based procedure was applied to the synthetic stars' colour-colour diagram and the corresponding weighting factors were calculated.
The results of the simulation, with the colour weightings applied, are shown in the upper panel of figure \ref{fig:lf}.
As expected, the RC LF is very narrow, facilitating the use of RC stars as standard candles in studies of the MW (e.g. \citealt{stanek_1994,bovy_2014,wegg_2015}).
\begin{figure}
\includegraphics[width=\columnwidth]{figures_C_syntheticLF/LF_4paper.pdf}
\caption{
Theoretical luminosity function used as inputs to the modelling to facilitate the observation of the particle model consistently with the VVV survey.
Top:
The initial LF is shown in red crosses. This is produced from the Monte Carlo sampling and the colour-colour selection procedure has been applied in a manner consistent with the VIRAC data.
The Markov Chain Monte Carlo fit using four components, an exponential background, a gaussian each for the AGBB and RGBB, and a skewed gaussian for the RC, is overplotted as the blue line.
Bottom: LF now split into the components that will be used in this paper; the RC (red), RGBB (cyan), AGBB (green), that are combined to produce the RC\&B, and the RGBC (blue). }
\label{fig:lf}
\end{figure}
\begin{table}
\centering
\caption{
Reference table of the most commonly used acronyms.
}
\label{tab:acronyms}
\begin{tabular}{cl}
\hline
Acronym & Definition\\
\hline
LF & Luminosity Function\\
FG & Foreground\\
SG & Sub-Giant\\
RGB & Red Giant Branch\\
RC & Red Clump\\
RGBB & Red Giant Branch Bump\\
AGBB & Asymptotic Giant Branch Bump\\
RGBC & Red Giant Branch Continuum\\
RC\&B & Red Clump and Bumps\\
\hline
\end{tabular}
\end{table}
We define the exponential continuum of RGB stars, not including the over densities at the RC, RGBB and AGBB, to be a distinct stellar population, henceforth referred to as the red giant branch continuum (RGBC).
We refer to the combined distribution of the RC, RGBB and AGBB stars as the RC\&B.
A list of stellar type acronyms used in this paper is given in table \ref{tab:acronyms}.
We fit the simulated LF with a four-component model whose components we then combine to construct the RGBC and RC\&B. We use an exponential for the RGBC,
\begin{equation}\label{eqn:rgbc_lf}
\mathcal{L}_{\mathrm{RGBC}}\left(M_{K_{s0}}\right) = \alpha \exp{\left( \beta M_{K_{s0}} \right)}.
\end{equation}
We fit separate gaussians for the RGBB and AGBB,
\begin{equation}
\mathcal{L}_{\mathrm{RGBB/AGBB}}\left(M_{K_{s0}}\right) = \frac{C_i}{\sqrt{2\pi\sigma_i^2}} \exp{\left(-\frac{1}{2} \zeta_i^2 \right) },
\end{equation}
where,
\begin{equation}
\zeta_i=\frac{ M_{K_{s0}} - \mu_i }{\sigma_i},
\end{equation}
and $\mu_i$, $\sigma_i$, and $C_i$ denote the mean, dispersion, and amplitude of the respective gaussians.
We use a skewed gaussian for the RC distribution,
\begin{equation}
\mathcal{L}_{\mathrm{RC}}\left(M_{K_{s0}}\right) = \frac{C_{RC}}{\sqrt{2\pi\sigma_{RC}^2}}
\exp{\left(-\frac{1}{2} \zeta_{RC}^2 \right) }
\left[
1 + \mathrm{erf}\left( \frac{\gamma}{\sqrt{2}}\zeta_{RC} \right)
\right],
\end{equation}
where $\mathrm{erf}$ is the standard error function and $\gamma$ is the skewness parameter.
Fitting was performed using a Markov Chain Monte Carlo procedure; the results are shown in the lower panel of figure \ref{fig:lf} and the fitted parameters are presented in table \ref{tab:fitparams}.
These four LFs are used as individual inputs to the modelling code and allow each particle to be observed as any required combination of the defined stellar evolutionary stages.
These choices are well motivated as \citet{nataf_2010} and \citetalias{wegg_2013} showed that the RGBC is well described by an exponential function and the RC LF is known to be skewed \citep{girardi_2016}.
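For reference, a sketch evaluating the four-component LF defined above with the fitted parameters of table \ref{tab:fitparams} is:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def gauss(M, mu, sigma, C):
    z = (M - mu) / sigma
    return C / np.sqrt(2 * np.pi * sigma**2) * np.exp(-0.5 * z**2)

def lf_total(M):
    # Composite LF at absolute magnitude M (table parameters).
    rgbc = 0.1664 * np.exp(0.6284 * M)
    rgbb = gauss(M, -0.9834, 0.0908, 0.0408)
    agbb = gauss(M, -3.0020, 0.2003, 0.0124)
    z = (M + 1.4850) / 0.1781
    rc = (0.1785 / np.sqrt(2 * np.pi * 0.1781**2)
          * np.exp(-0.5 * z**2)
          * (1 + erf(-4.9766 / np.sqrt(2) * z)))
    return rgbc + rgbb + agbb + rc
\end{verbatim}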
Ideally we would use only the RC stars from VIRAC when constructing magnitude resolved maps as they have a narrow range of absolute magnitudes and so can be used as a standard candle.
We statistically subtract, when necessary, the RGBC through fitting an exponential.
As shown in figure \ref{fig:lf} the RC and RGBB are separated by only $\approx0.7$ mag. When convolved with the LOS density distribution these peaks overlap.
Because it is difficult to distinguish the RGBB from the RC observationally we accept these stars as contamination.
It is also important to include the AGBB \citep{gallart_1998}; stars of this stellar type residing in the high density bulge region can make a significant kinematic contribution at bright magnitudes, $K_{s0}<12.5$ mag, where the local stellar density is relatively smaller.
\begin{table}
\centering
\caption{
Parameters for the LF shown in figure \ref{fig:lf}.
}
\label{tab:fitparams}
\begin{tabular}{lr}
\hline
Parameter & Value \\
\hline
$\alpha$ & 0.1664 \\
$\beta$ & 0.6284 \\
$\mu_{\mathrm{RGBB}}$ & -0.9834 \\
$\sigma_{\mathrm{RGBB}}$ & 0.0908 \\
$C_{\mathrm{RGBB}}$ & 0.0408 \\
$\mu_{\mathrm{AGBB}}$ & -3.0020 \\
$\sigma_{\mathrm{AGBB}}$ & 0.2003 \\
$C_{\mathrm{AGBB}}$ & 0.0124 \\
$\mu_{\mathrm{RC}}$ & -1.4850 \\
$\sigma_{\mathrm{RC}}$ & 0.1781 \\
$C_{\mathrm{RC}}$ & 0.1785 \\
$\gamma$ & -4.9766 \\
\hline
\end{tabular}
\end{table}
\subsection{VIRAC Observables}\label{subsec:nmagicObservables}
The kinematic moments we consider are the mean proper motions, the corresponding dispersions and the correlation between the proper motions.
Here we define the dispersion,
\begin{equation}
\sigma_{\mu_i} = \sqrt{ <\mu_i^2> - <\mu_i>^2 },
\end{equation}
with $i \in \left( l,b \right)$ and the correlation,
\begin{align}\label{eqn:correlation}
\mathrm{corr} \left( \mu_l , \mu_b \right) & =
\frac{<\mu_l \mu_b > - <\mu_l><\mu_b>}{ \sqrt{ \left( <\mu_l^2> - <\mu_l>^2 \right) \left( <\mu_b^2> - <\mu_b>^2 \right) } }\\
& = \frac{\sigma_{lb}^2}{\sigma_l\sigma_b}.
\end{align}
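For a discrete, weighted sample of stars these moments reduce to weighted sums. A minimal sketch, where \texttt{mul} and \texttt{mub} hold the proper motions and \texttt{w} the per-star weights (e.g. $w_{\mathrm{RG}}$), is:
\begin{verbatim}
import numpy as np

def wmean(x, w):
    return np.sum(w * x) / np.sum(w)

def moments(mul, mub, w):
    ml, mb = wmean(mul, w), wmean(mub, w)
    sl = np.sqrt(wmean(mul**2, w) - ml**2)   # sigma_mu_l
    sb = np.sqrt(wmean(mub**2, w) - mb**2)   # sigma_mu_b
    corr = (wmean(mul * mub, w) - ml * mb) / (sl * sb)
    return ml, mb, sl, sb, corr
\end{verbatim}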
In the previous section we described the method to construct synthetic absolute LFs for the RGBC and the RC\&B stars, see figure \ref{fig:lf}. We now combine this with the dynamical model of \citetalias{portail_2017} to observe the model through the selection function of the VIRAC survey. For a more detailed description of the process used to reconstruct surveys see \citetalias{portail_2017}.
Each particle in the model has a weight corresponding to its contribution to the overall mass distribution.
When constructing a measurable quantity, or "observable", all particles that instantaneously satisfy the observable's spatial criteria, i.e. being in the correct region in terms of $l$ and $b$, are considered and the particle's weight is used to determine its contribution to the observable.
In addition to the particle weight there is a second weighting factor, or "kernel", that describes the selection effects of the survey.
The simplest example of an observable is a density measurement for which,
\begin{equation}
\rho = \sum_{i=1}^{N} w_i K(z_i),
\end{equation}
where the sum is over all particles, $w_i$ is the weight of the i$^{th}$ particle, $z_i$ is the particle's phase space coordinates and the kernel $K$ determines to what extent the particle contributes to the observable.
To reproduce VIRAC we integrate the apparent LF of the particle within the relevant magnitude interval to determine to what extent a stellar distribution at that distance modulus contributes.
For the magnitude range $11.8<K_{s0}<13.6$ mag, which we use for constructing integrated kinematic maps, and the stellar population denoted by X, the kernel is given by,
\begin{equation}
K(z_i) = \delta(z_i) \int_{K_{s0}=11.8}^{K_{s0}=13.6} \mathcal{L}_X(K_{s0}-\mu_i)\, dK_{s0},
\label{eqn:kernel}
\end{equation}
where the LF is denoted $\mathcal{L}_X$, the distance modulus of the particle is $\mu_i$, and $\delta(z_i)$ determines whether the star is in a spatially relevant location for the observable.
More complicated observables are measured by combining two or more weighted sums. For example a mean longitudinal proper motion measurement is given by,
\begin{equation}
<\mu_l^\star> = \frac{\sum_{i=1}^{N} w_i K(z_i) \mu_{l,i}}{\sum_{i=1}^{N} w_i K(z_i)},
\label{eqn:mean}
\end{equation}
where $\mu_{l,i}$ is the longitudinal proper motion of the i$^{th}$ particle.
This generalises to all further kinematic moments as well.
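A sketch of this computation is shown below. Writing the absolute LF cumulatively, as an assumed callable \texttt{lf\_cdf}, the kernel of equation \ref{eqn:kernel} reduces to a difference of two CDF values after substituting the integration variable by $K_{s0}-\mu_i$.
\begin{verbatim}
import numpy as np

def kernel(dist_mod, lf_cdf, k_bright=11.8, k_faint=13.6):
    # Integral of L_X(Ks0 - mu_i) over the apparent-mag window.
    return lf_cdf(k_faint - dist_mod) - lf_cdf(k_bright - dist_mod)

def mean_mul(w, dist_mod, mul, lf_cdf):
    K = kernel(dist_mod, lf_cdf)
    return np.sum(w * K * mul) / np.sum(w * K)
\end{verbatim}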
To account for the observational errors in the proper motions we input the median proper motion uncertainty measured from the VIRAC data for each tile. We use the median within the integrated magnitude range for the integrated measurements, see section \ref{sec:rg_kinematics}, and the median as a function of magnitude for the magnitude resolved measurements, see section \ref{sec:rc_kinematics}.
Given the true proper motion of a particle in the model we add a random error drawn from a normal distribution centred on zero and with width equal to the median observational error.
Temporal smoothing allows us to reduce the noise in such observables by considering all previous instantaneous measurements weighted exponentially in look-back time \citepalias{portail_2017}.
\subsection{Integrated Kinematics For All Giant Stars}
We first present integrated kinematic moments calculated for the magnitude range 11.8 $<K_{s0}<$ 13.6 mag which extends roughly $\pm3$ kpc either side of the galactic centre. Figure \ref{fig:integrated_maps} shows $<\mu_l^\star>$, $<\mu_b>$, $\sigma_{\mu_l^\star}$, $\sigma_{\mu_b}$, the dispersion ratio, and [$\mu_l^\star$,$\mu_b$] correlation components and compares these to equivalent maps for the fiducial model.
The $<\mu_l^\star>$ maps show the projected mean rotation of the bulge stars where the global offset is due to the tangential solar reflex motion measured to be -6.38 $\mathrm{mas \, yr^{-1}}$ using Sgr A* \citep{reid_2004}.
They contain a clear gradient beyond $|b|>3.^\circ$ with the mean becoming more positive at positive $l$ because of the streaming velocity of nearby bar stars, see also section \ref{sec:rc_kinematics}, figure \ref{fig:rcb_meanPM_L_vvv}.
A similar result was also reported by \citet{qin_2015} from their analysis of an N-body model with an X-shaped bar.
Away from the galactic plane the model reproduces the data well. It successfully reproduces the $<\mu_l^\star>$ isocontours which are angled towards the galactic plane. These isocontours are not a linear function of $l$ and $b$ and have an indent at $l=0^\circ$ likely caused by the boxy/peanut shape of the bar.
The $<\mu_b>$ maps show a shifted quadrupole signature. There are two factors we believe contribute to this effect; the pattern rotation and internal longitudinal streaming motions in the bar.
The near side of the bar at positive longitude is rotating away from the sun and the far side is rotating towards the sun. The resulting change in on-sky size manifests as $\mu_b$ proper motions towards the galactic plane at positive longitudes and away from the galactic plane at negative longitudes.
The streaming motion of stars in the bar has a substantial component towards the sun in the near side and away from the sun in the far side which has been seen in RC radial velocities \citep{vasquez_2013}.
For a constant vertical height above the plane, motion towards the sun will be observed as $+\mu_b$.
By removing the effect of the solar motion in the model, and then further removing the pattern rotation, we estimate the relative contribution to $<\mu_b>$ from the pattern rotation and internal streaming to be 2:1.
The offset of $\approx$ -0.2 $\mathrm{mas \, yr^{-1}}$ from zero in $\mu_b$ is due to the solar motion, $V_{z,\odot}$.
The quadrupole signature is also offset from the minor axis due to the geometry at which we view the structure.
It should be noted here that the random noise in the mean proper motion maps is greater than that of the corresponding dispersions. This is a consequence of systematic errors introduced by the \textit{Gaia} reference frame correction \citep{lindegren_2018} to which the mean is more sensitive.
The dispersion maps both show a strong central peak around the galactic centre. This is also seen in the model and is caused by the deep gravitational potential well in the inner bulge.
In both cases the decline in dispersion away from the plane is more rapid at negative longitude while at positive longitude there are extended arms of high dispersion.
For both dispersions there is a strip of higher dispersion parallel to the minor axis and offset towards positive longitude; centred at $l\sim1^\circ$. This feature is prominent for both data and model for the latitudinal proper motions. For the longitudinal case the model shows this feature more clearly than the data but the feature is less obvious compared to the latitudinal dispersions.
Both maps also show a lobed structure which is also well reproduced by the bar model and is likely a result of the geometry of the bar combined with its superposition with the disc. The model is observed at an angle of 28.0$^\circ$ from the bar's major axis \citepalias{portail_2017} and so at negative longitudes the bar is further away and therefore the proper motion dispersions are smaller.
On the other side, for sub-tiles at $l>7.0^\circ$ the dispersions are larger and both dispersions decline more slowly moving away from $b=0^\circ$, as in this region the nearby side of the bar is prominent.
The dispersion ratio $\sigma_{\mu_l^\star} / \sigma_{\mu_b}$ shows an asymmetric X-shaped structure with the region of minimum anisotropy offset from the minor axis by about $2^\circ$ at high $|b|$. The dispersion ratio is slightly larger than 1.1 along the minor axis and reaches 1.4 at high $|l|$ near the plane of the disc. These features are reproduced well by the model which has a slightly lower dispersion ratio around the minor axis.
The correlation maps show a clear quadrupole structure with the magnitude of the correlation at $\approx0.1$.
The correlation is stronger at positive longitudes which is likely due to the viewing angle of the bar as the model also shows the signature.
This shows that the bar orbits expand in both $l$ and $b$ while moving out along the bar major axis.
This is consistent with the X-shaped bar but could also be caused by a radially anisotropic bulge so this result in itself is not conclusive evidence for the X-shape.
However the fiducial model is a very good match to the structure of the observed signal which gives us confidence that this signature is caused by an X-shaped bulge similar to the model.
In addition, the difference in correlation amplitude between positive and negative longitudes rules out a dominant spherical component as this would produce a symmetric signature.
All of the results of the integrated kinematic moments are consistent with the picture of the bulge predominantly being an inclined bar, rotating clockwise viewed from the north galactic pole, with the near side at positive longitude. The fiducial bar model is a very good match to all of the presented kinematic moments which gives us confidence that the model can provide a quantitative understanding of the structure and kinematics of the bulge.
\subsection{Comparison to Earlier Work}
Previous studies of MW proper motions have been limited to small numbers of fields. Due to the difficulty of obtaining quasars to anchor the reference frame these studies have dealt exclusively with relative proper motions.
In this section we compare VIRAC to two previous studies, \citetalias{kozlowski_2006} and \citetalias{rattenbury_2007}.
These studies have a relatively large number of fields, 35 HST fields for \citetalias{kozlowski_2006} and 45 OGLE fields for \citetalias{rattenbury_2007}, so on-sky trends are visible.
Both of these studies have different selection functions from VIRAC and so here we mainly compare the average trends in the data with less focus on the absolute values.
We do not consider other previous works because in some cases they discuss only results for a single field.
Comparing kinematics for single fields is less informative due to the effects of the selection functions and other systematics.
Figure \ref{fig:compare2otherData} shows the comparison of the dispersions, dispersion ratio and correlation measurements from VIRAC with those of \citetalias{kozlowski_2006} and \citetalias{rattenbury_2007}.
We see excellent agreement between the VIRAC data and the \citetalias{rattenbury_2007} measurements in all 4 kinematic moments.
The dispersion trends are clearly consistent; both VIRAC and \citetalias{rattenbury_2007} dispersion measurements increase towards the MW plane.
The lobe structures caused by the superposition of barred bulge and disc are also reproduced in both the VIRAC data and \citetalias{rattenbury_2007} with the dispersion at high positive $l$ larger than at high negative $l$ for both dispersions.
The dispersion ratios also match nicely with the lowest ratio found along the minor axis and then increasing for larger $|l|$ sub-tiles. The correlation maps are also in excellent agreement with a clear quadrupole signature visible in both VIRAC and \citetalias{rattenbury_2007}.
The agreement between VIRAC and \citetalias{kozlowski_2006} is less compelling. This is likely due to the larger spread of measurements in adjacent sub-tiles. In the dispersion maps we still see the general increase in dispersion towards the galactic plane, however the trend is far less smooth for the \citetalias{kozlowski_2006} data than for the VIRAC or \citetalias{rattenbury_2007} data. There also appears to be a slight offset in the absolute values although this is expected since VIRAC does not replicate the selection function of \citetalias{kozlowski_2006}.
For the dispersion ratio we observe a similar overall trend; the dispersion ratio increases moving away from the minor axis. This is likely due to the X-shape. There is a single outlying point in the dispersion ratio map at $\sim$($5^\circ$,$-4^\circ$) that has a ratio $\approx$0.3 greater than the immediately adjacent sub-tile. This outlier is caused by a high $\sigma_{\mu_b}$ measurement. The correlations are in good agreement between the two datasets although the \citetalias{kozlowski_2006} sample only probes the $(+l,-b)$ quadrant.
\subsection{Correlation in Magnitude Slices}
In this section we decompose the integrated RGB correlation map into magnitude bins of width $\Delta K_{s0}=0.1$ mag, see figure \ref{fig:correlation_ksliced}.
As in the integrated map, the magnitude resolved correlation maps all show a distinct quadrupole structure as well as a disparity between the strength of the correlation at positive and negative longitude. The magnitude binning also reveals that the brightest and faintest stars have less correlated proper motions than stars in the magnitude range $12.5<K_{s0}<13.1$ mag which corresponds to the inner-bulge RC stellar population.
As RC stars have a narrow LF their magnitude can be used as a rough proxy for distance. The rise and fall of the correlation therefore demonstrates that a fraction of RC stars in the
inner bulge ($\pm0.3$ mag $\sim\pm1.2$ kpc along the LOS) have correlated proper motions. This signature is very similar in the analogous plots for the fiducial barred bulge model in figure \ref{fig:correlation_ksliced}. There is no evidence in the VIRAC data that the correlated RC fraction decreases towards the Galactic centre, as would be expected if a more axisymmetric classical bulge component dominated the central parts of the
bulge. In the RGB population, underneath the RC, the correlation is spread out in magnitude because of the exponential nature of the RGB; this plausibly explains the baseline correlation seen at all magnitudes in figure \ref{fig:correlation_ksliced}.
\subsection{Structure of the Red Giant Branch Continuum} \label{subsec:structureRGBC}
The RGBC absolute LF, as discussed in section \ref{subsec:particle2stellarDist}, is well described by an exponential function.
We assume that the stellar population is uniform across the entire MW bulge distance distribution and therefore there exists a uniform absolute magnitude LF for the RGBC,
\begin{equation}\label{eqn:absLF}
\mathcal{L}\left(M_{K_{s0}}\right) \, \propto \, e^{\,\beta M_{K_{s0}}},
\end{equation}
where $\beta$ is the exponential scale factor, see equation \ref{eqn:rgbc_lf}.
We now demonstrate that the proper motion distribution of the RGBC is constant at all magnitudes. This will allow us to measure the proper motion distribution of the faint RGBC, where there is no contribution from the RC\&B, and subtract it at all magnitudes. The result is the proper motion distribution as a function of RC standard candle magnitude with only a small contamination from RGBB and AGBB stars.
Consider two groups of stars at distance moduli $\mu_1$ and $\mu_2$ with separation $\Delta\mu=\mu_2-\mu_1$. These groups generate two magnitude distributions $\mathcal{L}_1 \propto 10^{\beta \mu_1} $ and $\mathcal{L}_2 \propto 10^{\beta \mu_2} $ respectively.
$\mathcal{L}_2$ can be rewritten as,
\begin{equation}
\mathcal{L}_2 \propto 10^{\beta (\Delta\mu + \mu_1)} \propto 10^{\beta\Delta\mu}10^{\beta\mu_1},
\end{equation}
meaning both groups of stars produce the same magnitude distribution but with a relative scaling that depends upon the distance separation and the density ratio at each distance modulus.
Generalising this to the bulge distance distribution: each distance generates an exponential luminosity function that contributes the same relative fraction of stars to each magnitude interval.
This is also true for the velocity distributions from the various distances and so we expect the velocity distribution of the RGBC to be the same at all magnitudes.
\begin{figure}
\includegraphics[width=\columnwidth]{figures_F_RCselection/background_distribution_hypothesis_proof_histograms.pdf}
\caption{
Histograms of the RGBC proper motion distributions from the model at three magnitude intervals, along a single LOS, considering all model disk and bulge particles.
The histograms are individually normalised and clearly show that the three profiles lie directly on top of each other. This is the case for all magnitude intervals we are considering. The proper motion distribution at each magnitude has the same structure but the overall normalisation changes allowing the distribution at faint magnitudes without RC\&B contamination to be used at brighter magnitudes.
}
\label{fig:bkg_hypothesis}
\end{figure}
To test this further we construct the RGBC ($\mu_{l,b}$,$K_{s0}$) distributions for a single LOS using the model and the RGBC absolute LF constructed in section \ref{subsec:particle2stellarDist}. We then normalise the distributions for each magnitude interval individually and the distributions for three magnitudes are shown in figure \ref{fig:bkg_hypothesis}. This shows that the RGBC proper motion distributions are magnitude independent. The distribution at faint magnitudes, $14.1<K_{s0}<14.3$ mag, where there is no contamination from the RC\&B, can be used to remove the RGBC at brighter magnitudes where the RC\&B contributes significantly.
\subsection{Extracting the Kinematics of the RC\&B}\label{subsec:extractingVIRACkinematics}
We have just shown that the proper motion distribution of the RGBC at faint magnitudes, where it can be directly measured, is an excellent approximation of the proper motion distribution at brighter magnitudes where it overlaps with the RC\&B.
We use this to subtract the RGBC's contribution to the VIRAC magnitude--proper motion distributions.
The first step is to fit the RGBC LF marginalised over the proper motion axis.
This provides the fraction of RGBC stars in each magnitude interval relative to the number of RC\&B stars.
We fit a straight line to $\log(N_{ \mathrm{RGBC} })$,
\begin{equation}
\log(N_{ \mathrm{RGBC} }) = A + B \left( K_{s0} - K_{s0, \mathrm{RC} } \right),
\end{equation}
where $A$ and $B$ are the constants to be fitted and $K_{s0, \mathrm{RC} }=13.0$ mag is the approximate apparent magnitude of the RC.
When fitting, we use the statistical uncertainties from the Poisson error of the counts in each bin.
The LF is fitted within two magnitude regions on either side of the clump; $11.5<K_{s0}<11.8$ and $14.1<K_{s0}<14.3$ mag.
The bright region is brighter than the start of the RC over density but is not yet affected by the saturation limit of the VVV survey.
The faint region is selected to be fainter than the end of the RGBB but as bright as possible to avoid uncertainties due to increasing incompleteness at faint magnitudes.
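A minimal sketch of this fit, assuming arrays \texttt{mag} and \texttt{N} holding the bin centres and star counts of the tile LF, is:
\begin{verbatim}
import numpy as np

KS_RC = 13.0
side = (((mag > 11.5) & (mag < 11.8)) |
        ((mag > 14.1) & (mag < 14.3)))

x = mag[side] - KS_RC
y = np.log(N[side])
sigma_y = 1.0 / np.sqrt(N[side])   # Poisson error propagated to log N

# Weighted least squares for y = A + B * x.
B, A = np.polyfit(x, y, 1, w=1.0 / sigma_y)
N_rgbc = np.exp(A + B * (mag - KS_RC))  # continuum at every magnitude
\end{verbatim}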
The fit for the example tile is shown in figure \ref{fig:bkg_fitting_VVV}. Included are the two fitting regions in red and the RC\&B LF in green following the subtraction of the fitted RGBC.
\begin{figure}
\includegraphics[width=\columnwidth]{figures_F_RCselection/backgroundFitting_4paper.pdf}
\caption{ (b278 (1.$^\circ$, -4.2$^\circ$)) This plot shows the fit to the RGBC for the example tile in the VIRAC data. We use two magnitude intervals, 11.5$<K_{s0}<$11.8 and 14.1$<K_{s0}<$14.3 mag, shown as the red regions for the fitting. Subtracting the fit, red line, from the tile LF, shown in black, gives the LF of the RC\&B.}
\label{fig:bkg_fitting_VVV}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{figures_F_RCselection/faintProperMotionHistograms_4paper_v2.pdf}
\caption{
(b278 (1.$^\circ$, -4.2$^\circ$)) Process for extracting the kinematics as a function of magnitude for the RC\&B from the total RGB ($K_{s0}$,$\mu_l^\star$) distribution.
Left plot: The kernel density smoothed RGB distribution (left panel) with white lines highlighting the magnitude interval used for constructing the proper motion distribution of the RGBC, (right panel). This RGBC distribution is subtracted at each magnitude normalised according to the RGBC fit. Right plot: The ($K_{s0}$,$\mu_l^\star$) distribution (left panel) for the RC\&B following the subtraction of the RGBC. The vertical white lines highlight a magnitude bin for which the kinematic measurements are shown (right panel). The horizontal dashed line shows the mean, and the error bar shows the dispersion.
}
\label{fig:faint_bkg_subtraction}
\end{figure*}
The second step to extract the RC\&B velocity distribution is to remove the RGBC velocity distribution.
This process is summarised in figure \ref{fig:faint_bkg_subtraction}.
We construct the RGBC velocity distribution using a kernel density estimation procedure.
For consistency we compute the RGBC proper motion profile using the same faint magnitude interval used for the RGBC fitting.
The background is scaled to have the correct normalisation for each magnitude interval according to the exponential fit.
The total proper motion profile for each magnitude interval is then constructed using the same kernel density estimation procedure.
We use a rejection sampling approach to reconstruct the RC\&B proper motion distribution with discrete samples. We sample two random numbers:
\begin{inparaenum}
\item The first in the full range of proper motions covered by the two proper motion distributions, total distribution and the scaled RGBC distribution, in the magnitude interval.
\item The second between zero and the maximum value of the two kernel density smoothed curves.
\end{inparaenum}
Only points that lie between the two distributions, in the velocity range where the two distributions are statistically distinct, are kept, as only these points trace the RC\&B distribution. We sample the same number of points as the exponential fit indicates there are in the RC\&B component. This is to reconstruct the distribution with the correct level of accuracy.
For this sample of points we compute the mean and dispersions analytically.
We repeat this sampling in a Monte Carlo procedure to obtain 100 realisations of the mean and dispersion measurements and use these to characterise the uncertainty upon the measurements.
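The sampling step can be sketched as follows, where \texttt{kde\_total} and \texttt{kde\_rgbc} are assumed callables returning the two kernel-density-smoothed curves for the magnitude interval, normalised to the corresponding star counts, and \texttt{f\_max} is an assumed upper bound on both curves:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_rcb(kde_total, kde_rgbc, pm_lo, pm_hi, n_rcb, f_max):
    out = []
    while len(out) < n_rcb:
        pm = rng.uniform(pm_lo, pm_hi)
        y = rng.uniform(0.0, f_max)
        # Keep points lying between the two curves: they sample
        # the difference, i.e. the RC&B distribution.
        if kde_rgbc(pm) < y < kde_total(pm):
            out.append(pm)
    return np.asarray(out)

mu_rcb = sample_rcb(kde_total, kde_rgbc, -15.0, 5.0, 5000, f_max)
mean, disp = mu_rcb.mean(), mu_rcb.std()
\end{verbatim}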
This approach ignores the variable broadening as a function of magnitude caused by measurement uncertainties. To test this we extracted the magnitude-proper motion data from the model for a variety of representative tiles and convolved the values with the median VIRAC uncertainties.
The convolution increases the dispersion by $\sim0.06$ $\mathrm{mas \, yr^{-1}}$ at $K_{s0}=11.8$ mag and $\sim0.16\pm0.05$ $\mathrm{mas \, yr^{-1}}$ at $K_{s0}=13.6$ mag. The broadening at fainter magnitudes is more sensitive to the spatial location of the tile.
The model provides discrete samples of the RC\&B kinematic distribution as a function of magnitude and so we calculated the convolved mean proper motions and dispersions analytically.
We then applied the same analysis as described for the data to the complete convolved distribution drawn from the model, disregarding the known separation between RC\&B and RGBC.
Comparing the analytically calculated kinematics with the data-method measurements we find a systematic uncertainty in the recovered values of $\lesssim$0.1 $\mathrm{mas \, yr^{-1}}$ for dispersion and significantly less for the mean. This systematic can be positive or negative for a given tile but is consistent at all magnitude intervals along the LOS.
\begin{figure*}
\centering
\subfigure{\includegraphics[width=\textwidth]{figures_G_Bsliced/meanMul_VVV_RCB_funcLKB.pdf}}
\subfigure{\includegraphics[width=\textwidth]{figures_G_Bsliced/meanMul_NMAGIC_RCB_funcLKB.pdf}}
\caption{ Top panels: $<\mu_l^\star>$ maps of the RC\&B stars in latitude slices as a function of magnitude for the VIRAC data. The contours correspond to the stellar number count of the RC\&B stars. Focusing on the top row in particular where we observe a split RC\&B we see that the two density peaks have $\Delta<\mu_l^\star>$ $\approx$ 1 $\mathrm{mas \, yr^{-1}}$.
Lower panels: Equivalent plots for the fiducial bar model from \citetalias{portail_2017} which matches the mean transverse motion and the gradients in the data very well. The grey areas in the VIRAC plots are masked based on our measurement errors and are shaded in the model plots to guide the eye.}
\label{fig:rcb_meanPM_L_vvv}
\end{figure*}
\subsection{Latitude Slices}
The luminosity function along the minor axis for high latitude tiles in the bulge exhibits a double peaked distribution which is believed to be due to an X-shaped boxy/peanut bulge. The acute viewing angle of the bar causes lines of sight near the minor axis at high latitude to intersect the near arm first and subsequently the faint arm of the X-shape. As discussed in the Introduction, this scenario is supported by various evidence from observations and N-body simulations, but alternative scenarios based on multiple stellar populations along the line-of-sight have also been suggested. In this section we present proper motion kinematics of RC\&B stars as a function of magnitude which provide an independent test of these scenarios.
Figures \ref{fig:rcb_meanPM_L_vvv} to \ref{fig:rcb_dispPM_B_vvv} show the number density, mean proper motions and proper motion dispersions of RC\&B stars in latitude slices as a function of magnitude for both VIRAC and the fiducial dynamical model from P17.
In section \ref{sec:getRCB} we described the rejection sampling approach to measure the proper motion mean and dispersion. We apply an opaque mask to bins in which the RC\&B contributes less than 10\% of the stars according to the RGBC fit to ensure that the results are reliable. We apply a secondary transparent mask to all regions where the Monte Carlo resampling measurement uncertainty is greater than 0.1 $\mathrm{mas \, yr^{-1}}$ to guide the eye as to where the results are most secure.
As mentioned in section \ref{sec:getRCB} there is also a systematic uncertainty of at maximum 0.1 $\mathrm{mas \, yr^{-1}}$ in the dispersion measurements and smaller for the mean measurements which is caused by the magnitude dependent broadening of the proper motion distributions.
The fiducial model has been fitted to star count data and radial velocity
data for the bulge and long bar as described in Section \ref{sec:m2m}, but no VIRAC proper motion data was used. It nonetheless provides excellent predictions for the observed PM data, and can therefore be used to understand the signatures present in the VIRAC maps.
\subsubsection{Number Density}\label{subsec:RCdensity}
The star counts of RC\&B stars are shown with the grey contours in figure \ref{fig:rcb_meanPM_L_vvv}.
Near the minor axis at $|b|>4.^\circ$ the contours show a bi-modal star count distribution while at $|b|>6.^\circ$ they show clear evidence of double peaked luminosity functions.
These results are both consistent with \citet{saito_2011} and \citetalias{wegg_2013}, who studied the distribution of RC stars using VVV, and with previous studies \citep{mcwilliam_2010,nataf_2010}. As expected they are consistent with the structure of a boxy/peanut bulge with the near end at positive longitude. The model, which is known to host an X-shaped structure, nicely replicates the extension of the final density contour towards fainter magnitudes which is caused by the presence of the RGBB stars.
\subsubsection{Mean Longitudinal Proper Motion}
The VIRAC $<\mu_l^\star>$ of the RC\&B as a function of tile and magnitude is shown in the upper plot of figure \ref{fig:rcb_meanPM_L_vvv}.
The overall proper motion of the galactic centre is consistent with the solar reflex motion $\mu_l^\star=-6.38$ $\mathrm{mas \, yr^{-1}}$ \hbox{\citep{reid_2004}}.
We see that at all latitudes the brighter stars have a less negative proper motion than the fainter stars and the observed gradient is well reproduced by the model.
A zoom in of the $b=-6.37^\circ$ slice for the model $<\mu_l^\star>$ is shown in the top panel of figure \ref{fig:zoom_in_4_streaming}.
The overall bright to faint $<\mu_l^\star>$ gradient shows the mean rotation of stars as a function of distance which is lower than for circular orbits in a disk. The barred structure causes a longitudinally asymmetric pattern different from expected for a circular rotation field. These features are sensitive to the pattern rotation and to streaming motions in the bar.
The effect of streaming can be seen at $|l|\lesssim4^\circ$. Considering $<\mu_l^\star>$ there is a smooth but rapid transition from more positive to more negative $\mu_l^\star$ between $12.2<K_{s0}<13.2$ mag where the mean is dominated by the RC.
This is followed by a kink at $K_{s0}\sim13.5$ mag, where the RGBB stars in the near-side region of high bulge density pull the mean back towards more positive proper motions.
The initial transition is much stronger in the tiles near the minor axis, $|l|\lesssim4^\circ$, and the kinks are only observed in this region. That the kinks are longitude dependent makes a purely stellar population effect unlikely. We expect the greatest streaming velocities near the minor axis, so a combination of stellar type and streaming is likely causing these effects.
This kink in the proper motion profiles as a function of $K_{s0}$ can also be seen in the VIRAC data in figure \ref{fig:rcb_meanPM_L_vvv}.
At bright magnitudes $<\mu_l^\star>$ becomes more negative again due to AGBB stars in the high density bulge region which have more negative proper motions than the closer RC and RGBB stars.
At higher latitudes, where the density distribution is double peaked, the misalignment of the proper motion transition causes the brighter peak to have a mean proper motion $\approx1$ $\mathrm{mas \, yr^{-1}}$ more positive than the fainter peak.
This demonstrates that the bright peak in the split RC has significantly distinct proper motion kinematics from the faint peak.
The faint and bright RC division can therefore not have a purely stellar population origin.
Instead, the observed effects are well reproduced by the X-shaped bar model, shown in the lower plots.
Since the barred potential and the orbits in it are largely fixed by the fitted data, and both RC peaks are visited by similar orbits \citep{portail_2015b}, it is hard to see how the barred model could support the split RC peaks through different stellar populations.
\subsubsection{Mean Latitudinal Proper Motion}
The VIRAC $<\mu_b>$ of the RC\&B as a function of tile and magnitude is shown in the upper plot of figure \ref{fig:rcb_meanPM_B_vvv}.
The $<\mu_b>$ appear noisier compared to $<\mu_l^\star>$ because while both maps are subject to systematic errors of $\approx$0.1 $\mathrm{mas \, yr^{-1}}$, $<\mu_b>$ covers a smaller range of values.
The systematics are a combination of the relative to absolute correction, see section \ref{sec:vvvpm}, and the effect of variable broadening on our RC\&B extraction approach, see section \ref{sec:getRCB}.
The reflex motion due to the sun's vertical motion is $\approx-0.2$ $\mathrm{mas \, yr^{-1}}$ for $V_{z,\odot}=7.25 \, \mathrm{ km \, s^{-1}} $ \citep{schoenrich_2010} which broadly accounts for the overall offset from zero in the fiducial model shown in the lower plots.
At latitudes $|b|>4^\circ$ the $<\mu_b>$ isocontours for both VIRAC and the model highlight a transition that is aligned with the bar axis shown in the star count contours.
Considering the zoom in of the $b=-6.37^\circ$ slice for the model $<\mu_b>$ shown in the bottom panel of figure \ref{fig:zoom_in_4_streaming}, the near side of the bar, along the $l=2.5^\circ$ LOS, shows strong negative $<\mu_b>$ while the far side shows more positive $<\mu_b>$. If only pattern rotation were contributing we would expect a smoothly declining trend as the apparent proper motion decreases for stars at greater distance.
At this latitude the strong variation in $<\mu_b>$ is plausibly explained by streaming motions. Specifically, streaming motions in the near side towards the Sun induce an apparent negative $\mu_b$ while streaming motion away from the sun on the far side induce an apparent positive $\mu_b$.
We see further evidence in figure \ref{fig:zoom_in_4_streaming} with a spur of more negative $<\mu_b>$ that is located at $|l|\lesssim3.0^\circ$ and $K_{s0}\sim13.3$ mag.
This feature is caused by RGBB stars in the near half of the bar which are streaming towards the Sun and so present a negative $<\mu_b>$.
These $<\mu_b>$ motions in the VIRAC data in figure \ref{fig:rcb_meanPM_B_vvv} are therefore due to a superposition of streaming velocities in the bar frame along the LOS as well as the bar pattern rotation.
We see similar features of streaming motions in the model, including at latitudes closer to the plane where they are not visible in the VIRAC data for our magnitude range.
\subsubsection{Longitudinal Proper Motion Dispersion}
$\sigma_{\mu_l^\star}$ as a function of tile and magnitude for the RC\&B is shown in the upper plot of figure \ref{fig:rcb_dispPM_L_vvv} and corresponding plots for the fiducial model are shown below.
We see a clear centrally concentrated dispersion peak for tiles close to the plane. This dispersion peak is reproduced by the model where it is caused by the depth of the central potential as opposed to being a separate bulge component.
For latitudes in the range $3<|b|<6^\circ$ there is a clear gradient in the dispersion between the near side of the bar and its far side which is at lower dispersion.
This is reproduced by the model and is because, while the RC\&B stars on both sides have symmetric intrinsic dispersion, the greater distance for the far side of the bar makes the dispersion appear smaller.
The dispersion gradient becomes less pronounced beyond $|b|>6^\circ$ for both VIRAC and the model.
For latitudes $|b|<4^\circ$ there is a secondary peak of high dispersion $\sim0.8$ mag fainter than the central peak, which lies at $K_{s0}=12.7$ mag.
This is caused by the RGBB stars near the galactic centre.
\subsubsection{Latitudinal Proper Motion Dispersion}
$\sigma_{\mu_b}$ as a function of tile and magnitude for the RC\&B is shown in the upper panels of figure \ref{fig:rcb_dispPM_B_vvv}. The latitudinal dispersions show structures very similar to those in the longitudinal dispersion maps.
We see a concentrated central peak due to RC stars in the deep potential well near the galactic centre and a fainter second peak which is caused by the RGBB stars. These features are well reproduced by the model which is shown in the lower panels.
There is a clear gradient between the two ends of the bar for latitudes $|b|>4^\circ$ with more distant stars having smaller proper motion for the same intrinsic dispersion.
A notable difference to the longitudinal maps is the shallower gradient in the dispersion between brighter and fainter magnitudes. This is likely due to the foreground bar component having a small vertical dispersion in comparison to that of the X-shaped boxy/peanut bulge.
\subsection{Magnitude Slices}
Figure \ref{fig:dispMul_kslice} shows the breakdown of longitudinal dispersion in different magnitude intervals for the data (top panels) and fiducial bar model (bottom panels). At all magnitudes we see a high dispersion peak at the galactic centre which is caused by the deep potential well and stars orbiting aligned to the bar major axis.
This peak is offset slightly towards positive longitude due to the acute observation angle of the bar.
The magnitude of this peak is strongest at $K_{s0}\approx12.8$ mag which corresponds to RC stars in the centre. The central peak dispersion decreases until $K_{s0}\approx13.3$ mag at which point the dispersion increases again due to RGBB stars in the galactic centre.
We see excellent agreement with the fiducial bar model which reproduces the two central dispersion peaks.
The model reproduces the arc of low dispersion at negative longitude which is likely caused by the low dispersion of the far side of the bar. The high dispersion peak at brighter magnitudes is not symmetric about the minor axis with near plane positive longitude regions at higher dispersion than their counterpart at negative longitude. This is likely due to the intrinsic dispersion of the near side of the bar.
This plot is complementary to the integrated map, see figure \ref{fig:integrated_maps}, showing that the origin of the dynamically colder region at $|b|>5^\circ$ is not a single feature of the bar but rather a superposition of the kinematics at different magnitude intervals.
\section{Introduction}
\label{intro}
When newly born stars emerge from their natal clouds as class II and
class III pre main-sequence (PMS) objects, they can be placed in a
Hertzsprung-Russell (HR) diagram. Low-mass ($<2\,M_{\odot}$)
stars take 10--100\,Myr to descend the Hayashi track
and settle onto the zero-age main-sequence, so the HR diagram can
be used, in combination with theoretical models, to estimate individual
ages for PMS stars or construct the age distribution of a group of
PMS stars. The HR diagrams of young star forming regions (SFRs)
usually have an order of magnitude range of luminosity at a given
effective temperature ($T_{\rm eff}$; see Fig.~1), and this luminosity dispersion is often
interpreted as star formation that has been ongoing for $\geq 10$\,Myr
within a single SFR or young cluster (e.g. for young, nearby SFRs --
Palla \& Stahler 1999, 2000; for massive young clusters -- Beccari et
al. 2010; or even for resolved star clusters in other galaxies -- Da
Rio, Gouliermis \& Gennaro 2010a).
The presence and extent of any age spread is an important constraint on
models of star formation. A significant ($\geq 10$\,Myr) spread would
favour a ``slow'' mode, where global collapse is impeded by, for
example, a strong magnetic field (e.g. Tassis \& Mouschovias 2004).
Age spreads that were $\leq 1$\,Myr, however,
could be explained by the rapid dissipation of turbulence and star
formation on a dynamical timescale (e.g. Elmegreen 2000). The reality
or not of age spreads is also important from a practical point of
view. Ages from the HR diagram are used to understand the progression
of star formation (e.g. triggering scenarios, collect-and-collapse
models) and the age-dependent masses estimated from an HR diagram are
usually the only way of determining the initial mass function.
In this short review, I ask:
\begin{enumerate}
\item Are the luminosity spreads (at a given $T_{\rm eff}$) in the HR diagram real?
\item If so, do these necessarily imply a wide spread of ages within an individual SFR?
\end{enumerate}
\section{Luminosity spreads?}
\label{lumspread}
Hartmann (2001) identified many sources of astrophysical and
observational scatter that contribute to an {\em apparent} spread
in the luminosities of PMS stars at a given $T_{\rm eff}$. These
include the likelihood that many ``stars'' are unresolved multiples;
that individual stars may be subject to a range of
extinction and reddening; that PMS stars can be
highly variable; that the luminosity contributed by
accretion processes could vary from star-to-star; that in (nearby) SFRs
the stars are at a range of distances; and that placing stars on a
HR diagram requires temperature (or spectral type
or colour) and luminosity (brightness) measurements which have
observational uncertainties. Hartmann concluded that efforts to infer
star formation histories would be severely hampered by these effects
and that the luminosity and hence age spreads claimed by Palla \&
Stahler (2000), among others, must be extreme upper limits.
Hillenbrand, Bauermeister \& White (2008)
showed that it is difficult to verify or indeed quantify luminosity
spreads, and hence infer age spreads, unless (a) observational
uncertainties are small and (b) both the {\it size and distribution} of
other astrophysical sources of luminosity dispersion are well
understood.
One approach to tackle these difficulties is to quantify
spreads that could be contributed by individual sources of dispersion
and model the outcome. Burningham et al. (2005) used photometric
measurements at more than one epoch to empirically assess the affects
of variability on two young SFRs ($\sigma$~ Ori and Cep OB3b) with
significant (compared to observational uncertainties) scatter in their
colour-magnitude diagrams (CMDs). This approach takes account of
correlated variability in colours and magnitudes and the non-Gaussian
distribution of variability-induced dispersion. A coeval population was
simulated using the observed levels of variability, the likely
effects of binarity and observational errors. This model was found to
significantly {\it underpredict} the observed dispersion. In other
words, variability (on timescales of years or less), binarity and
observational error could only account for a small fraction of the
luminosity dispersion. On the other hand, Slesnick, Hillenbrand \&
Carpenter (2008) examined the slightly older Upper Sco SFR and showed
that the large observed luminosity spreads could perhaps be entirely
explained by a coeval population affected by a combination of
observation errors, distance dispersion and binarity. However, the
additional dispersion (particularly due to distance uncertainties) was
so large in this case that additional scatter equivalent to a real age
dispersion of $\pm 3$\,Myr remained a possibility.
A more sophisticated statistical approach has been taken by Da Rio,
Gouliermis \& Gennaro (2010a) who, using a maximum likelihood method
akin to that proposed by Naylor \& Jeffries (2006), fitted a
2-dimensional synthetic surface density to the CMD of a SFR in the
Large Magellanic Cloud. The model includes contributions from
unresolved binarity, variability, differential extinction and
accretion. These authors conclude that the luminosity spread in the CMD
is too large to be accounted for by the ``nuisance'' sources of
dispersion and interpret the additional scatter as a spread in ages of
FWHM 2.8--4.4\,Myr.
An alternative for investigating the reality of the luminosity
dispersions is to examine proxies such as radius or gravity that would
be expected to show a corresponding dispersion, but whose measurement
is not so greatly affected by the additional astrophysical sources of
scatter. An example is the use of rotation periods and projected
equatorial velocities to estimate the projected radii, $R \sin i$, of
PMS stars in the Orion Nebula cluster (ONC, Jeffries 2007). These
measurements are largely unaffected by binarity, variability,
differential extinction, distance or accretion. Assuming that spin-axes
are randomly oriented, the distribution of $R \sin i$ can be modelled
to estimate mean radii and the extent of any true spread in radius at a
given $T_{\rm eff}$. The results confirm that a factor of 2--3 (FWHM)
spread in radius exists at a given $T_{\rm eff}$ and this concurs with
the order of magnitude luminosity spread seen in the HR diagram of the
same objects.
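As a concrete illustration of this projected-radius technique, the following
sketch compares the $R \sin i$ distributions of a population with a single
true radius and one with roughly a factor $\sim 2$ (FWHM) log-normal radius
spread, assuming randomly oriented spin axes; the spread value here is an
assumption for illustration, not the measured ONC result.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n = 2000

def r_sin_i(R):
    """Project true radii assuming randomly oriented spin axes
    (cos i uniform on [0, 1])."""
    return R * np.sin(np.arccos(rng.random(R.size)))

# Population A: single true radius; population B: log-normal spread,
# sigma = 0.3 in ln(R), i.e. roughly a factor ~2 FWHM (assumed value)
Rsini_A = r_sin_i(np.ones(n))
Rsini_B = r_sin_i(np.exp(rng.normal(0.0, 0.3, n)))

# A KS test distinguishes the projected-radius distributions
stat, pval = ks_2samp(Rsini_A, Rsini_B)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.2e}")
\end{verbatim}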
In summary, although there are few detailed investigations to draw on,
the evidence so far suggests that the luminosity spreads seen in
SFRs are mostly genuine. Only a fraction of the dispersion can be
explained by observational uncertainties, variability, binarity and
accretion.
\section{Age Spreads?}
\label{agespread}
If the luminosity dispersions are genuine, then it is natural to plot a
set of HR diagram isochrones, estimate an age for each star and hence
infer an age distribution. However, it is possible that physical causes
other than age could contribute to a real dispersion of luminosity in
the HR diagram of young PMS stars. Accretion could perturb the
evolution of the central star, inducing a luminosity spread even in a
coeval population (Tout, Livio \& Bonnell 1999). To investigate the
fidelity of ages deduced from the HR diagram we can compare these ages
with those estimated using independent clocks. These include the
depletion of photospheric lithium, the evolution of stellar rotation
and the dispersal of circumstellar material.
\subsection{Lithium Depletion} %
\label{li}
Lithium is ephemeral in the photospheres of young, low-mass stars. Once
the central temperature of a star reaches the Li ignition temperature
($\sim 2.5\times 10^{6}$\,K), convective mixing leads to almost
complete Li depletion unless the PMS star leaves the Hayashi track and
develops a radiative core (see Jeffries 2006). In principle the level
of Li in the atmosphere of a low-mass PMS star is a mass-dependent
clock. Palla et al. (2005) and Sacco et al. (2007) have searched for
Li-depleted stars that are bona-fide members of the Orion Nebula
cluster and the $\sigma$~Ori and $\lambda$~Ori associations. They do
find a few such objects (a few per cent of the total) and, using models for Li depletion,
infer ages for them of $>10$\,Myr, compared to HR diagram ages
of 2--5\,Myr for the bulk of the PMS population. These observations
are consistent with the presence of a small fraction of older objects,
co-existing with the bulk of the younger PMS population, arguing in
favour of a large age spread.
Whilst this interpretation is possible, there are some problems. First,
the bimodal distribution of Li abundances (i.e. most stars are
undepleted with a small fraction of extremely Li-depleted objects) does
not seem consistent with a smooth underlying distribution of ages and
indeed contamination by older, non-members of the cluster has been
suggested (Pflamm-Altenburg \& Kroupa 2007). Second, although in some
(but not all) cases, the Li-depletion age for these stars matches the
HR diagram age, they are {\it not} fully independent age
indicators. The central temperature of the star, which controls the
Li-burning, will depend on the stellar radius (and hence luminosity in
the HR diagram). If for some reason the star had a smaller radius than
expected at a given age and therefore appeared older in the HR
diagram, its central temperature would {\it also} be higher and it would have
a greater capacity to burn Li.
\subsection{Rotation rates}
\label{rotation}
Young, PMS stars typically rotate with periods of 1--10\,d. There is
strong evidence that PMS stars with circumstellar disks and active
accretion rotate more slowly on average than those without disks
(e.g. Rebull et al. 2006; Cieza \& Baliber 2007). A widely accepted
idea is that stars which are accreting from a disk are braked by the
star-disk interaction and held at a roughly constant spin period
(Rebull, Wolff \& Strom 2004). Once the disk disperses, or below some
threshold accretion rate, the brake is released and the star spins up
as it rapidly contracts along the Hayashi track. Thus, the rotation
rate of PMS stars should broadly reflect the age of the population --
an older population should have fewer strong accretors (see
section~\ref{accrete}), have had more time to spin-up, and hence should
contain a greater proportion of fast rotators than a younger
population. As the lifetime of accretion is of order a few Myr, then
age spreads of 10~Myr should manifest themselves as big differences in
the rotation period distributions of the ``older'' and ``younger''
populations.
This rotation clock has been investigated by Littlefair et
al. (2011). They divided the PMS populations of several nearby SFRs
into ``old'' (low luminosity) and ``young'' (high luminosity) samples
and compared their rotation period distributions. The null hypothesis
that the samples were drawn from the same distribution could be
rejected at high significance levels, but the surprising result is
that the faster rotating sample is actually the one containing the
``young'' objects. If the luminosity spreads were truly caused by an
age spread, the ``disk-locking'' model would predict the opposite result.
Littlefair et al. interpret this by assuming the populations in each
SFR are coeval, but the luminosity spreads are introduced through
differing accretion histories which also influence the stellar rotation
rate (see section~\ref{interpret}).
\subsection{Disk dispersal}
\label{accrete}
\begin{figure}[t]
\includegraphics[scale=.45]{fig1.eps}
\caption{The HR diagrams and inferred age distributions for samples of
stars in the Orion Nebula cluster (ONC, data from Da Rio et al. 2010b).
(Left) Upper plot shows isochrones (from Siess et al. 2000, labelled
in Myr) and
stars in the ONC separated by infrared excess. Open symbols are stars
with $\Delta (I-K)>0.3$ (data from Hillenbrand et al. 1998). Lower
diagram shows the age distributions which have identical means and
similar dispersions. (Right) A similar plot, but the open symbols are
stars with $L_{\rm accrete}/L_{\rm bol}>0.1$ (from Da Rio et
al. 2010b). Again, the lower plot shows the age distributions of
these samples are very similar.
}
\label{fig1}
\end{figure}
It is well known that the lifetime of circumstellar material around
young PMS stars, traced by the fraction of objects exhibiting
infrared excesses or accretion diagnostics, is on average a few
Myr (e.g. Haisch, Lada \& Lada 2001; Calvet et al. 2005; Jeffries et
al. 2007; Hern\'andez et al. 2008). The precise reasons for disk
dispersal are still unclear, but if the fraction of stars
accreting strongly from a circumstellar disk does decrease with age
then we would expect to see fewer active accretors among any older
population {\it within a single SFR}.
Surprisingly little work has been done in this area. Hartmann et
al. (1998) found that mass accretion rates did decline with increasing HR
diagram age in Taurus and Chamaeleon. Bertout, Siess \& Cabrit (2007)
claimed that accreting classical T-Tauri stars in Taurus appeared significantly
younger in the HR diagram than their weak-lined, non-accreting
counterparts. On the other hand, Hillenbrand et al. (1998) found no
correlation between age and the fraction of PMS stars in the ONC with
near-infrared excesses. These studies are difficult because they are
afflicted by a number of biases and selection effects.
In preparation for this review I examined a new catalogue of sources in
the ONC by Da Rio et al. (2010b), which they claim to be complete to
very low luminosities. They have estimated the luminosity and effective
temperature of stars using a careful star-by-star estimate of accretion
luminosity and extinction. Their catalogues give estimated masses and
ages based on the models of Siess, Dufour \& Forestini (2000). Figure~1
shows HR diagrams and deduced age distributions, where the samples have
been divided according to (a) whether the $I-K$ excess over a
photospheric colour is $>0.3$ (data from Hillenbrand et al. 1998) or
(b) whether the accretion luminosity is $>0.1\,L_{\rm bol}$. Neither of
these accretion/disk diagnostics shows a significant age dependence
within the ONC: the mean ages and age distributions of the subsamples
are indistinguishable. I am currently exploring any possible biases
(e.g. dependences of age and the likelihood of possessing a disk on
position within the cluster) that might explain these results.
Taking the results at face value suggests either: (i) Any true age
spreads are much less than the few Myr characteristic timescale for the
cessation of accretion and dispersal of circumstellar material and that
a star's position in the HR diagram {\it is not} primarily age
dependent. (ii) The scatter in the luminosities caused by the nuisance
sources discussed in section~\ref{lumspread} is so large that it erases
the expected age-dependent decrease in the fraction of stars exhibiting
accretion or disk signatures. For the reasons discussed in
section~\ref{lumspread} I regard this latter possibility as
unlikely. In either case (i) or (ii) it would mean that the HR diagram
could not be used to claim a large age spread or to estimate the star
formation history.
\section{Episodic accretion -- a possible explanation}
\label{interpret}
The idea that early accretion could alter a PMS star's
position in the HR diagram and make it appear older has been around
for some time (e.g. Mercer-Smith, Cameron \& Epstein 1984; Tout et
al. 1999). Recently it has been realised (e.g. by Enoch et al. 2009)
that accretion onto very young stars may be transient or episodic, with
very high accretion rates ($\sim 10^{-4}\,M_{\odot}$\,yr$^{-1}$)
occurring for brief periods of time ($\sim 100$\,yr). ``Episodic
accretion'', which would take place during the early class I T-Tauri
phase, has been modelled by Vorobyov \& Basu (2006) and its
consequences for the PMS HR diagram are explored by Baraffe, Chabrier \&
Gallardo (2009). They find that if the accreted energy is efficiently
radiated away, then a short phase of rapid accretion compresses the PMS
star, leading to a smaller radius and lower luminosity. The star will
not relax back to the configuration predicted by non-accreting models
for a thermal timescale ($\simeq 20$\,Myr for the PMS stars I am
discussing), and hence interpreting the HR diagram using
non-accreting models would lead to erroneously large ages. A
distribution of accretion histories in a coeval SFR could lead to a
luminosity spread and the appearance of an age spread. As there may be
no connection between accretion rates in the class I phase and later
accretion as a class II T-Tauri star this could effectively randomise
the ages determined from the HR diagram for young class II and class III PMS stars.
The model may also account for the apparent spin-down of PMS stars with
age and for the small proportion of stars which appear to have
anomalously high Li depletion. A PMS star with a true age of say
3\,Myr, that had been subjected to relatively slow accretion rates
during the class I phase would have contracted over 3\,Myr from a
larger radius and spun-up significantly. A coeval PMS star that had
previously accreted at much higher rates would already be smaller, less
luminous and appear older, but would be relaxing back to its
equilibrium configuration on a 20\,Myr timescale and so would have
undergone very limited contraction and spin-up (Littlefair et
al. 2011). The same stars would have smaller radii and higher central
temperatures than their slow-accreting counterparts and could therefore
burn Li more readily (Baraffe \& Chabrier 2010).
\section{Conclusions}
The evidence to date suggests that the luminosity dispersion seen in
the HR diagrams of young SFRs has a significant component that cannot
be attributed to ``nuisance'' sources such as binarity, variability and
accretion. However, attempts to verify the consequent age spreads
implied by the positions of PMS stars in the HR diagram have met
with mixed success. In particular, the rotation rates of PMS stars and the
fraction of stars showing active accretion or evidence for
circumstellar material within a single SFR do not show the expected
decrease with age. ``Episodic accretion'' potentially resolves this paradox --
a very high rate of accretion during the class I phase could drive
PMS stars out of equilibrium and towards smaller radii and lower
luminosities. A distribution of early accretion rates would effectively
scramble ages determined from the HR diagram for a population of
class II and class III PMS stars.
If this scenario is borne out by further work, then the traditional HR
diagram is a poor tool for estimating the ages of young ($<20$\,Myr)
PMS stars and also perhaps for estimating age-dependent masses. Large
scale survey work may instead have to rely on less precise but
potentially more accurate clocks such as rotation rates or the presence
of circumstellar material, although of course these may not be
universal and could have significant environmental dependencies.
\input{jeffriesbib}
\end{document}
\section{Introduction}
Spontaneous symmetry breaking and macroscopic quantum states are important concepts in physics
\cite{strocchi.2008,beekman-etal.2019}.
One system in which these phenomena are well established is that of exciton polaritons in
semiconductor microcavities
(see, e.g.,
\cite{%
fan-etal.97pra,%
cao-etal.97,%
kuwata-gonokami-etal.97,%
duer-etal.1997,%
kira-etal.99b,%
baumberg-etal.2000,%
ciuti-etal.00,%
savvidis-etal.00,%
kwong-etal.01prl,%
keeling_collective_2007,%
schumacher-etal.07prb,%
amo-etal.2009superfluid,%
kamide-ogawa.10,%
liu-etal.15,%
schulze-etal.14,%
kamandardezfouli-etal.14,%
carcamo-etal.20,%
ozturk-etal.2021,%
moskalenko-snoke.00,%
balili-etal.07,%
bajoni-etal.08,%
berney-etal.08,%
amo-etal.2009BEC,%
semkat-etal.09,%
deng-etal.10,%
menard-etal.14,%
schmutzler-etal.15,%
barachati-etal.2018,%
bao-etal.19}),
where polaritonic and photonic Bose-Einstein condensation
\cite{%
moskalenko-snoke.00,%
balili-etal.07,%
bajoni-etal.08,%
berney-etal.08,%
amo-etal.2009BEC,%
semkat-etal.09,%
deng-etal.10,%
menard-etal.14,%
schmutzler-etal.15,%
barachati-etal.2018,%
bao-etal.19,%
pieczarka-etal.2022prb,%
pieczarka-etal.2022},
and polaritonic Bardeen-Cooper-Schrieffer (BCS) states
\cite{%
comte-nozieres.82,%
keeling-etal.05,%
kremp-etal.08,%
kamide-ogawa.10,%
byrnes-etal.10,%
combescot-shiau.15,%
hu-liu.20,%
hu-etal.21}
have been discussed
\footnote{Recently, excitonic Bose-Einstein condensation in bulk semiconductors has been demonstrated \protect\cite{morita-etal.2022}.}.
Here, the condensate wave function, or order parameter, is associated with optically active polaritons, which facilitates its
observation through optical experiments.
These systems are open, dissipative and pumped; hence the physics of these symmetry-broken states can be quite different from that of their counterparts in thermal equilibrium. For example, the polaritonic order parameter typically oscillates
at a frequency in the visible or near-infrared spectrum (close to the exciton frequency in the non-condensed state),
and most studies of
symmetry-broken states in
polaritonic systems have characterized the properties of these states using optical probes nearly resonant with the frequency of the order parameter.
However, interesting and potentially new physical effects of the macroscopic quantum state can also be obtained from light fields far detuned from that resonance, for example from terahertz (THz) fields. An example utilizing THz radiation to elucidate the physics of a polaritonic BEC was given in Ref.\
\cite{menard-etal.14}.
Systematic studies of the fluctuation modes of a many-particle system, either condensed or in the normal state, usually involve the system's linear response, triggered by physical fluctuations or weak external probes.
For the broad class of condensed systems with complex order parameters, where symmetries of the phase(s) are broken, much attention has been paid to the Goldstone (phase) modes and, when they exist \footnote{The conditions for the existence of the Higgs modes are
discussed in \cite{varma.2002}.}, the Higgs (amplitude) modes
\cite{varma.2002,combescot-etal.2006pra,pekker-varma.15,brierley-etal.11,behrle-etal.18,steger-etal.2013,schwarz-etal.2020}.
Polaritonic condensates involve composite particles with a large number of internal degrees of freedom (the relative motion of electrons and holes making up the excitonic polarization, which is coupled to the light field in the cavity). This creates a rich landscape of possible fluctuation states. If the broken symmetry is U(1), the only fluctuation mode guaranteed by symmetry is a simple phase mode.
In general, all other modes
require a more detailed classification, not only in terms of phase and amplitude fluctuations, as is usually done, but also in terms of electron-hole density fluctuations, as is done here.
The density considered here receives contributions from both the order parameter amplitude and the incoherent pumped reservoir.
In this study, we investigate the physics of fluctuation modes
of a polariton laser in the BCS regime triggered by a THz probe.
Extending our work in Ref.\ \onlinecite{binder-kwong.2021} to the THz case and
using a many-particle approach based on the diagonalization of the fluctuation matrix, including the electron-hole Coulomb interaction,
we obtain all fluctuation modes induced by THz radiation and compare them with those resulting from an `optical' (nearly resonant with the order parameter) probe.
We find that the orbital angular momentum of the THz-induced fluctuation modes is different from that of the order parameter and that of the conventional optical fluctuation modes.
Both cases, THz and optical, include collective (discrete) modes in addition to the spectral continua. The continuum THz-induced modes can yield THz gain, similar to the case without Coulomb interaction, which we studied in Ref. \onlinecite{spotnitz-etal.2021}. But due to the many-particle Coulomb interactions, we also find collective (discrete) THz-induced fluctuation modes (we label them as `T' modes). We provide a detailed characterization of the physics of these modes, including phase, amplitude and density fluctuations for each degree of freedom.
Importantly, we find that the new THz-induced collective fluctuation modes yield THz gain which could make them of broad interest in future applications.
Our semiclassical theoretical approach to the THz-induced fluctuation modes of a polariton laser operating in the BCS regime is an extension of previous work
\cite{binder-kwong.2021}
that was restricted to probe fields (or fluctuations) with frequencies in the vicinity of the frequency of the order parameter, which in the polariton laser comprises the interband polarization and the light field in the cavity.
The dynamical variables are the interband polarization $p(\mathbf{k})$ (where $\mathbf{k}$ is the electronic wave vector),
the carrier distribution $f(\mathbf{k})$ (same for electrons and holes since we use equal electron and hole masses and relaxation rates),
and the single-mode cavity laser field $\mathbf{E}_\ell$.
We use the Hartree-Fock (HF) approximation for $p(\mathbf{k})$ and $f(\mathbf{k})$, reducing their equations of motion to the semiconductor Bloch equations \cite{chow-etal.94}, amended by phenomenological dephasing, intraband relaxation, non-radiative recombination, and incoherent population pump terms.
The detailed equations are given in
section \ref{sec:zeroth-order-theory}.
Below threshold, this theory yields the lower and upper polariton in the optical response. Above threshold and in steady state, the order parameter $(p^{(0)}(\mathbf{k}),E^{(0)}_\ell)$
oscillates at the laser frequency $\omega_\ell$, where $\hbar \omega_\ell$ is approximately equal to the fundamental bandgap $E_g$ ($\sim 1.5$\,eV in GaAs).
We denote frequencies similar to $E_g$ as interband frequencies, and distinguish them from THz frequencies, which we call intraband frequencies,
where interband (intraband) refers to the electronic excitations caused by fields oscillating at the corresponding frequencies.
The complex order parameter has an arbitrary phase factor $e^{i \phi}$, which is fixed in any given realization (spontaneous symmetry breaking).
Fluctuation modes of the laser can be triggered by weak optical-frequency (interband) or THz (intraband) probes.
In Sec.\ \ref{sec:first-order-theory},
the evolution equations of the variables
$(p(\textbf{k}),f(\textbf{k}),E)$
are expanded to first order in the probe around the steady-state laser solution. These linear response equations are simplified by angular momentum selection rules: an optical-frequency probe drives fluctuations in the sector with orbital angular momentum $m=0$, and a THz probe drives the modes with $m=\pm 1$.
As detailed in Sec.\ \ref{sec:modedecomp},
the equations are discretized on a radial-k grid and solved by diagonalization of the fluctuation matrix.
The benefit of this microscopic approach is that we obtain all fluctuation modes from a many-particle theory within the HF approximation, including discrete (collective) modes and continuous spectra,
without making any assumptions on the physics of the fluctuation modes.
The THz response (the induced intraband current) is constructed as an expansion in the eigenmodes of the fluctuation matrix.
This, in turn, allows us in Sec.\ \ref{sec:results}
to identify spectral features in the intraband conductivity, and thus the THz absorption spectrum, and to associate these features with specific fluctuation modes.
In Sec.\ \ref{sec:modedecomp}, we formulate the decomposition of fluctuation modes into phase, amplitude, and density oscillation components.
In Secs.\ \ref{sec:phasampresp} and \ref{subsec:mode-characterization},
we perform an in-depth analysis of the modes'
oscillation characteristics, and, for most modes, distinguish them from pure phase (Goldstone) or amplitude (Higgs) modes.
\section{Microscopic formalism of electron, hole, and photon dynamics}
\label{sec:zeroth-order-theory}
In this section, we write down the microscopic Hamiltonian for the system of conduction band electrons, valence band holes, and cavity photons, and the equations of motion of field expectation values as used in this paper.
We use bold letters to denote position vectors and physical quantities whose directions are defined in physical space, e.g., wave vector $\mathbf{k}$, electric field $\mathbf{E}$.
An overhead arrow is used to denote a finite array of numbers arranged in column vector form.
For two column vectors, $\vec{a} = (a_1, a_2 , \cdots , a_N)^T$, $\vec{b} = (b_1, b_2 , \cdots , b_N)^T$, the symbol $\vec{a}^{\, T} \vec{b}$ denotes a dot product: $\vec{a}^{\, T} \vec{b} = \sum^{N}_{i=1} a_i b_i$.
\subsection{The Hamiltonian}
\label{subsec:H}
The setup that we consider is an optical microcavity containing a zero-width quantum well.
The fundamental resonance frequency of the cavity is close to the quantum well's band gap.
The system is incoherently pumped to sustain steady state lasing.
The laser's fluctuation spectrum is probed with the linear response to a weak THz field.
A coordinate system is set up in which the $z$ axis is normal to the quantum well's plane.
Our model Hamiltonian for the electrons, holes, and cavity photons is
\begin{widetext}
\begin{align}
\hat{H} &= \sum_{\alpha, \vb{k}} \varepsilon_{\alpha \vb{k}} a_{\alpha \vb{k}}^{\dagger} a_{\alpha \vb{k}}
+ \sum_{\lambda \vb{q}} \hbar \omega_{\lambda\vb{q}} c_{\lambda \vb{q}}^{\dagger} c_{\lambda \vb{q}}
- \frac {1} {\sqrt{\mathcal{A}}} \sum_{\lambda e h \vb{q}, \vb{k}}
\left[ \Gamma_{eh}^{\lambda} (\vb{k} ,\vb{q}) c_{\lambda \vb{q}} a_{e, \mathbf{k}}^{\dagger}
a_{h, \mathbf{q}-\mathbf{k}}^{\dagger} + \mathrm{H.c.} \right]
\label{eq:modelH}\\
& \quad + \sum_{\nu \alpha \vb{q}, \vb{k}} g_{\alpha}^{\nu} ( \vb{k} + \tfrac{1}{2} \vb{q} )
A_{T\nu} ( \vb{q} , t ) a_{\alpha , \mathbf{q}+\mathbf{k}}^{\dagger} a_{\alpha , \mathbf{k}}
+ \frac{1}{2\mathcal{A}} \sum_{\mathbf{k}, \mathbf{k}', \mathbf{q}' \neq 0} \sum_{\mu, \mu'}
V_{\mathbf{q}'}^{c} a_{\mu, \mathbf{k}}^{\dagger} a_{\mu', \mathbf{k}'}^{\dagger}
a_{\mu', \mathbf{k}' + \mathbf{q}'} a_{\mu, \mathbf{k} - \mathbf{q}'} , \nonumber
\end{align}
\end{widetext}
where $a_{e \vb{k}}$, $a_{h \vb{k}}$, and $c_{\lambda \vb{q}}$ are the annihilation operators
for conduction band electrons, valence band holes, and cavity photons respectively.
$\vb{k}$, $\vb{k}'$, $\vb{q}$ and $\vb{q}'$ are 2D wavevectors parallel to the quantum well's plane
(all wavevectors in this paper are in-plane unless specified otherwise), $\lambda$ labels the cavity photon spin, and $\mathcal{A}$ is the normalization area in the plane.
We consider interband transitions only between the highest heavy-hole valence band and the lowest
conduction band. So the band subscripts label the degenerate spin orbitals: $e = \pm 1/2, h = \pm 3/2$.
The subscript $\alpha$ runs through both electron and hole bands.
The subscripts $\mu$ and $\mu^{\prime}$ in the Coulomb interaction term run over the degenerate conduction and valence band spin orbitals, $\mathrm{c} = \pm \frac{1}{2}$ and $\mathrm{v} = \pm \frac{3}{2}$, respectively: $\mu, \mu^{\prime} \in \{\mathrm{c}, \mathrm{v}\}$.
Parabolic bands are used for the charges: $\varepsilon_{e \vb{k}} = \frac {\hbar^2 k^2} {2 m_e} + E_g$ and
$\varepsilon_{h \vb{k}} = \frac {\hbar^2 k^2} {2 m_h}$, where $m_\alpha$ is the effective mass in band $\alpha$
(both $m_e$ and $m_h$ are positive in our case),
and $E_g$ is the band gap. $\omega_{\lambda \vb{q}}$ is the cavity resonance frequency.
The interband eh-laser interaction term is calculated in the rotating wave approximation.
Though it is treated as an input parameter in the numerical calculations,
the interband coupling strength
$\Gamma_{eh}^{\lambda} (\vb{k} ,\vb{q}) $ can be given by
$\Gamma_{eh}^{\lambda} (\vb{k} ,\vb{q}) = \left\vert \vb{d}_{\mathrm{c}\mathrm{v}} \left(\vb{k} \right) \cdot
\bm{\epsilon}_{\ell \lambda} \right\vert \Psi_{\mathrm{cav}} \left(z_{\mathrm{QW}} \right)
\sqrt{2 \pi \hbar \omega_{\lambda \vb{q}} /\epsilon_{b}}$.
The interband dipole moment is
$\vb{d}_{\mathrm{c}\mathrm{v}} \left(\vb{k} \right) = i e \hbar \left\langle\mathrm{c},
\vb{k} \right\vert \hat{\vb{p}}\left\vert \mathrm{v}, \vb{k} \right\rangle / [ m_0 \, \Delta
E_{\mathrm{c v}} (\vb{k}) ]$,
where $m_0$ is the free-space electron mass, $e$ is the magnitude of the electron's charge ($e > 0$),
the states $\left\vert \mathrm{c}, \vb{k} \right\rangle$ ($\left\vert \mathrm{v}, \vb{k} \right\rangle$)
in the electron momentum matrix element are conduction (valence) band Bloch wave functions, and
$\Delta E_{\mathrm{c v}} (\vb{k}) = \varepsilon_{e \vb{k}} + \varepsilon_{h \vb{k}}$.
$\Psi_{\mathrm{cav}} \left(z_{\mathrm{QW}} \right)$ is the cavity photon mode 1D wave function along the $z$
direction evaluated at the position of the quantum well $z_{\mathrm{QW}}$, $\epsilon_{b}$ is the background
dielectric function inside the cavity, and $\bm{\epsilon}_{\ell \lambda}$ is the polarization unit vector of the
optical field.
(Some nuances of the relation between the interband dipole and momentum matrix elements are discussed in
Ref.\ \cite{gu-etal.13}; see also Ref.\ \cite{mahon-etal.19}.)
Conservation of angular momentum in the $e = \pm 1/2$ to $h = \pm 3/2$ transitions requires (see, e.g., Refs.\ \onlinecite{hu-etal.21} \& \onlinecite{spotnitz-etal.2021}) the ``circular selection rules''
$\vb{d}_{\mathrm{c}\mathrm{v}} \left(\vb{k} , \vb{q} \right) \cdot \bm{\epsilon}_{\ell \lambda} =
d_{\mathrm{c}\mathrm{v}} \left(\vb{k} , \vb{q} \right)
\delta_{|e+h|,1}
\delta_{e+h,\lambda}$,
where the angular momentum labels in the conduction-valence band picture are related to those in the electron-hole picture via $\mathrm{c}=e$ and $\mathrm{v}=-h$.
That is, for a given conduction band, the corresponding valence band and laser-photon polarization are fixed.
The THz probe is treated in the Hamiltonian Eq.\ (\ref{eq:modelH}) as a classical applied vector potential $\vb{A}_{T} (t)$.
We use a gauge in which the scalar potential is zero \cite{mahon-etal.19} so that $\vb{E}_{T} ( \vb{x} , t ) = - \tfrac{1}{c} \partial \vb{A}_{T} ( \vb{x} , t ) / \partial t$, with $\vb{x}$ being the 3D spatial coordinates.
The probe induces intraband transitions with the coupling $g^{\nu}_{\alpha}(\vb{k})$, where $\nu$ labels the polarization state of the THz field.
With the approximations of isotropy and small transverse THz wavevector $\vb{q}\ll\vb{k}$, the coupling strength is evaluated as
\begin{equation}
g^{\nu}_{\alpha} \left(\vb{k} \right) = - \frac{ s_\alpha e }{ m_{\alpha} c} \hbar \vb{k} \cdot \bm{\epsilon}_{T \nu},
\label{eq:gnuform}
\end{equation}
where $\bm{\epsilon}_{T \nu}$ is the polarization unit vector for the THz field and $s_\alpha$ is the sign of the particle's charge: $s_e = -1 , s_h = 1$.
The intraband e/h-EM interaction operator of Eq.\ \eqref{eq:modelH} states that intraband electronic transitions are possible for $\mathbf{q}=0$ if $\mathbf{k} \neq 0$.
Angular momentum is conserved between an electron and a THz photon by the factor of $\hbar\mathbf{k}$ in Eq.\ \eqref{eq:gnuform}, which changes the angular quantum number $m$ by $\pm 1$ for the electronic orbital motion.
Energy is conserved in these intraband transitions by changes in an excitonic state or, as described in this paper, by transitions between the band and its light-induced counterpart (cf. Fig. \ref{fig:bandstruct} below for more details).
This interaction does not change the electron or hole spin, and its strength is independent of the spin; due to the assumed electronic isotropy, its strength is also independent of the THz field polarization.
The quasi-2D Coulomb interaction energy is
\begin{equation}
V_{\mathbf{q}}^{c} \equiv \frac{2\pi e^{2}}{\epsilon_{b} } \frac{1}{\left|\mathbf{q}\right| + \kappa_{0}} , \label{eq:Vqcdef}
\end{equation}
where $\kappa_{0}$ is a small, constant screening wavenumber.
Because typical THz wavenumbers $q$ in a dielectric are much less than a typical electron quasimomentum $k$, $q \ll k$, it was previously found in Ref.\ \cite{spotnitz-etal.2021} that in calculating the THz response (specifically the conductivity $\sigma_{T}$) of the QW it is a very good approximation to take the in-plane THz field wavevector to be zero, $\vb{q}_{\|} \approx 0$. Varying the angle of incidence of the THz probe does affect its transmissivity, as was shown in Fig.\ 12 in Ref.\ \cite{spotnitz-etal.2021}. However, this is almost entirely due to Maxwell's equations, and their specific results in Eqs.\ (A12)--(A16) of Ref.\ \cite{spotnitz-etal.2021}, and not because of any change in the THz-induced conductivity $\sigma_{T}$. In this paper, we again use Eqs.\ (A12)--(A16) of Ref.\ \cite{spotnitz-etal.2021}, and the difference from Ref.\ \cite{spotnitz-etal.2021} arises in the calculation of $\sigma_{T}$.
Therefore, the effects of the probe geometry on the THz transmissivity have already been adequately demonstrated in Fig.\ 12 of Ref.\ \cite{spotnitz-etal.2021}, and we need not revisit the dependence on the THz angle of incidence here. Instead, we simplify the calculation by taking the THz probe $\vb{E}_{T}$ to be normally incident:
\begin{equation}
\mathbf{E}_{T} (z,t) \cdot \bm{\epsilon}_{\nu} \equiv E_{T\nu}(z,t)
= \int \frac{\mathrm{d}\omega}{2\pi} e^{i(q_{z}z - \omega t)} E_{T\nu}(\omega)
\end{equation}
where
$q_{z} = \sqrt{\epsilon_{b}} \omega/c$ and $\bm{\epsilon}_{\nu}$ is the polarization unit vector, equal to $\hat{y}$ for $\nu =y$ or $\hat{x}$ for $\nu = x$.
The quantum well microcavity's response to this probe consists of fluctuations with zero (in-plane) momentum (the steady lasing state before the THz probe arrives being isotropic in the plane). Accordingly, only the $\vb{q}= \vb{0}$ part of $\Gamma_{eh}^{\lambda} (\vb{k} ,\vb{q})$ is needed. We approximate this relevant part, $\Gamma_{eh}^{\lambda} (\vb{k} ,\vb{0})$, by a function, denoted by $\Gamma_{eh}^{\lambda} (\vb{k})$, that equals a constant for $|\mathbf{k}|$ less than a set value $k_{max}$ and zero for $|\mathbf{k}| > k_{max}$. The numerical value of the constant is adjusted such that the splitting between the exciton and the lower polariton (LP) resonance, sometimes called vacuum Rabi splitting and denoted by $\Omega_R$, has a given value consistent with state-of-the-art microcavities.
We also use the notation $\omega_{\lambda \vb{0}} \equiv \omega^{\lambda}_{cav}$.
\subsection{Equations of motion of the cavity field, the interband polarization, and the charge densities}
Using the Hamiltonian in Eq.\ \eqref{eq:modelH}, single-time
equations of motion are derived for the slowly-varying envelope of the electron-hole polarization at zero
center-of-mass momentum,
$ p_{e h} (\vb{k}, t ) \equiv \left\langle a_{h, -\vb{k}} (t) a_{e, \vb{k}} (t)
\right\rangle e^{i \omega_{\ell} t}$, the occupation functions $f_{\alpha} (\vb{k}, t )
\equiv \left\langle a_{\alpha, \vb{k}}^{\dagger} (t) a_{\alpha, \vb{k}} (t) \right\rangle$,
where $\alpha \in \{e,h\}$, and the envelope of the laser field amplitude (the squared
magnitude of which is the 2D photon density in the designated mode)
$E_{\ell \lambda} ( t ) \equiv (1/\sqrt{\mathcal{A}})\left\langle c_{\lambda, \vb{q}=\vb{0}} (t)
\right\rangle e^{i \omega_{\ell} t}$. $\omega_{\ell}$ is the laser frequency, which is obtained
from solving the steady state equations absent the THz probe.
The circular selection rules require that $p_{e h} (\vb{k}, t ) = 0$ for $(e,h) \notin \{ (\frac{1}{2},-\frac{3}{2}), (-\frac{1}{2},\frac{3}{2})\}$.
In an equation of motion for $f_{e}$, the value for the index $h$ in factors of $p_{eh}$ should be chosen so that $(e,h) = (\frac{1}{2},-\frac{3}{2})$ or $=(-\frac{1}{2},\frac{3}{2})$.
The many-body dynamics is treated at the same approximate level as the semiconductor Bloch equation (SBE).
Photonic correlations, involving non-factorizable parts of expectation values of products of photon operators or products of photon and charge operators, are ignored.
Effects of Coulomb correlations beyond the SBE are modeled by appropriate relaxation and dephasing terms.
The Hamiltonian Eq.\ \eqref{eq:modelH} does not account for pumping, cavity loss via emission of the laser field,
or interactions with the environment, e.g., phonons.
These effects are also modeled phenomenologically,
in the same way as was done in Refs.\ \onlinecite{hu-etal.21,spotnitz-etal.2021,binder-kwong.2021}.
We set $m_{e} = m_{h}$ and also assume all the incoherent relaxation rates are the same for the electrons and the holes. These conditions imply $f_{e=-1/2}(\vb{k},t) = f_{h=3/2}(-\vb{k},t)$ and $f_{e=1/2}(\vb{k},t) = f_{h=-3/2}(-\vb{k},t)$. We keep the electron distribution $f_e (\vb{k} , t)$ as the dynamical variable for each spin configuration. This electron-hole symmetry assumption is made for simplicity. Reverting to the correct mass ratio and unequal relaxation rates would not change any physical conclusions in the paper.
Under these approximations, we obtain the following equations of motion,
\begin{widetext}
\begin{multline}
i\hbar \frac{\partial }{\partial t} p_{e h} (\mathbf{k}, t) = \left( \frac{\hbar^{2}k^{2}}{2m_{r}}+E_{g}-\frac{2}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c}f_e(\mathbf{k}^{\prime}, t) -\hbar \omega_{\ell}-i\gamma + \sum_\nu g^{\nu}(\mathbf{k}) A_{T \nu}(t) \right) p_{e h} (\mathbf{k},t) \\
-\left[ 1-2f_e(\mathbf{k},t)\right] \left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (\vb{k}) E_{\ell \lambda} (t) +\frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c} p_{e h} (\mathbf{k}^{\prime},t)\right] , \label{eq:SBE-P-dot}
\end{multline}
where $g^\nu (\vb{k}) \equiv g^\nu_{e}(\vb{k}) + g^\nu_{h}(-\vb{k}) = \frac{e}{m_{r}c} \hbar \vb{k} \cdot \bm{\epsilon}_{T \nu}$, with the reduced mass $m_{r}$ given by $\frac{1}{m_{r}} = \frac{1}{m_{e}} + \frac{1}{m_{h}}$,
\begin{multline}
\hbar \frac{\partial}{\partial t} f_e (\vb{k},t) = 2 \Im \left\{ \left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (\vb{k}) E_{\ell \lambda}^{\ast} (t) + \frac{1}{\mathcal{A}} \sum_{\vb{k}^{\prime}} V_{\vb{k}-\vb{k}^{\prime}}^{c} p^{\ast}_{e h} (\vb{k}^{\prime},t) \right] p_{e h}(\vb{k},t)\right\} \\
- \gamma_{F} \left[f_e(\vb{k},t) - f_{F}(\vb{k},t)\right] - \gamma_{nr} f_e(\vb{k},t)- \gamma_{p} \left[f_e(\vb{k},t) - f_{p}(\vb{k})\right] ;
\label{eq:fmot}
\end{multline}
and %
\footnote{An alternative to the single-mode equation for the cavity field based on the
propagation of the light through the entire microcavity structure has been
given in Ref.\ \protect\onlinecite{carcamo-etal.20}.}
\begin{equation}
i \hbar \frac{\partial}{\partial t} E_{\ell \lambda} (t)
= (\hbar \omega^{\lambda}_{cav} - \hbar\omega_{\ell} - i \gamma_{cav}) E_{\ell \lambda} (t) - \frac{N_{QW}}{\mathcal{A}} \sum_{ \vb{k}eh} \Gamma^{\lambda \ast}_{e h} (\vb{k}) p_{e h}(\vb{k},t).
\yesnumber \label{eq:Esingmod}
\end{equation}
\end{widetext}
The approximation of the incoherent scattering terms in terms of relaxation rates has been previously described in detail in Eqs.\ (B1)--(B8), (B16), and (B17) of \cite{spotnitz-etal.2021}; as well as in Eqs.\ (5)--(8) of the supplemental material for \cite{binder-kwong.2021}, and in Eqs.\ (23)--(26) of \cite{hu-etal.21}.
In Eqs.\ \eqref{eq:SBE-P-dot}, \eqref{eq:Esingmod}, and below, $\gamma$ is the dephasing rate of e-h pairs, $\gamma_{cav}$ is the decay rate of the cavity field, and $N_{QW}$ is the number of quantum wells. In Eq.\ \eqref{eq:fmot}, $\gamma_{F}$ is the thermalization rate, $\gamma_{p}$ is the incoherent pump rate, and $\gamma_{nr}$ is the non-radiative decay rate.
We define the total distribution relaxation rate $\gamma_{f}\equiv \gamma _{F}+\gamma _{p}+\gamma _{nr}$.
Incoherent pumping is modeled by the term $- \gamma_{p} \left[f_e(\vb{k},t) - f_{p}(\vb{k})\right]$, which drives the distribution function $f_e(\vb{k},t)$ towards a pump-induced Fermi function $f_{p}(\vb{k})$ at the rate $\gamma_{p}$.
The pump chemical potential $\mu_{p}$ is chosen such that the density $n_{p} = 2 \int \frac{\mathrm{d}^{2} k}{(2\pi)^2} f_{p}(\vb{k})$ corresponds to the chosen pump density, which is an input parameter in this theory.
Intraband carrier-carrier scattering drives the distribution functions towards Fermi functions at the rate $\gamma_{F}$, without changing the total carrier density $n(t)$ in each band. This is included via the term $- \gamma_{F} \left[f_e(\vb{k},t) - f_{F}(\vb{k},t)\right]$.
The thermal chemical potential $\mu_{F}$ is chosen at each time such that $n(t) =2\int \frac{d^{2}k}{(2\pi )^{2}}f_e(\mathbf{k},t)=2\int \frac{d^{2}k}{(2\pi)^{2}}f_{F}(\mathbf{k},t) .$
$f_{F}(\vb{k})$ and $f_{p}(\vb{k})$ are Fermi distributions with distinct chemical potentials:
\begin{equation}
f_{x} \left(\vb{k}; \mu_{x}\right) = \frac{1}{e^{\left( \varepsilon_{\vb{k}} - \mu_{x} \right)/k_B T} + 1} , \label{fermidisteq}
\end{equation}
where $x \in \{F, p\}$, and $T$ is an effective e-h temperature, also set as a parameter.
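As an illustration of how the chemical potentials $\mu_{F}$ and $\mu_{p}$ are fixed in practice, the following Python sketch solves $n(\mu) = n_{\rm target}$ by a bracketing root search for a single 2D parabolic band; the parameter values are illustrative and are not the ones used in our calculations.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative GaAs-like parameters (assumptions, not the paper's values):
c_k = 568.0        # hbar^2/(2 m_e) in meV nm^2 for m_e = 0.067 m_0
kB_T = 26.0        # effective carrier temperature (~300 K) in meV
n_target = 2.0e-2  # target 2D density in nm^-2 (includes the spin factor 2)

k = np.linspace(0.0, 3.0, 6000)   # radial k grid (nm^-1)
dk = k[1] - k[0]
eps = c_k * k**2                  # parabolic dispersion (meV)

def density(mu):
    """n = 2 * Int k dk / (2 pi) f(k); angular integral already done."""
    x = np.clip((eps - mu) / kB_T, -60.0, 60.0)   # avoid exp overflow
    f = 1.0 / (np.exp(x) + 1.0)
    return 2.0 * np.sum(k * f) * dk / (2.0 * np.pi)

mu = brentq(lambda m: density(m) - n_target, -500.0, 500.0)

# Cross-check: closed form for a 2D parabolic band,
# n = kB_T * ln(1 + exp(mu / kB_T)) / (2 pi c_k)
n_closed = kB_T * np.log1p(np.exp(mu / kB_T)) / (2.0 * np.pi * c_k)
print(f"mu = {mu:.2f} meV; n = {density(mu):.3e}, closed form {n_closed:.3e}")
\end{verbatim}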
\section{Linear response to a Terahertz probe}
\label{sec:first-order-theory}
\subsection{Laser Steady State}
The stationary solutions are given by taking the $i \hbar \frac{\partial}{\partial t}$ terms and $A_{T \nu}$ to be zero in Eqs.\ \eqref{eq:SBE-P-dot}--\eqref{eq:Esingmod}.
They are denoted with a superscript $(0)$: $f^{(0)}_e (\vb{k})$, $p^{(0)}_{e h} (\vb{k})$ and $E^{(0)}_{\ell \lambda}$. The superscript $(0)$ indicates that the stationary solutions are zeroth order in the perturbing THz field $A_{T \nu}(t)$.
We limit ourselves to $s$-wave solutions, meaning that all $\mathbf{k}$-dependent functions depend only on the magnitude of the wave vector, $k=|\mathbf{k}|$. One steady state solution, with $p^{(0)}_{e h} (\vb{k}) = 0$ and $E^{(0)}_{\ell \lambda} = 0$, represents the non-lasing, `normal' state. When the pump density $n_p$ is raised above a threshold, additional solutions, representing the lasing state, with non-zero $p^{(0)}_{e h} (\vb{k})$, $E^{(0)}_{\ell \lambda}$, and $\omega_{\ell}$, appear.
Explicit expressions of these solutions can be found as Eqs.\ (9)--(14) in the Supplement to \cite{binder-kwong.2021}, where the stationary frequency is denoted in that paper by $\omega_{0}$ and in this paper by $\omega_{\ell}$.
In practice, we obtain the laser solutions numerically by solving Eqs.\ \eqref{eq:SBE-P-dot}--\eqref{eq:Esingmod} (with $A_{T \nu} = 0$) in time, using a small fluctuation to trigger the normal-to-lasing phase transition, and evolving the solution to a steady state.
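This relaxation procedure can be illustrated with the following toy Python sketch, which evolves a schematic two-variable laser model (a single field amplitude coupled to a carrier density) rather than the full Eqs.\ \eqref{eq:SBE-P-dot}--\eqref{eq:Esingmod}: a small random seed triggers the transition, and the time evolution settles into a steady state whose phase is selected by the seed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Schematic class-B laser rate equations (illustrative parameters only):
#   dE/dt = 0.5 * (G*n - kappa) * E
#   dn/dt = P - gamma*n - G*n*|E|^2
G, kappa, gamma, P = 1.0, 0.5, 0.2, 0.5  # threshold P_th = gamma*kappa/G = 0.1
dt, n_steps = 1.0e-3, 400_000

E = 1.0e-6 * (rng.normal() + 1j * rng.normal())  # small fluctuation seed
n = 0.0

for _ in range(n_steps):
    dE = 0.5 * (G * n - kappa) * E
    dn = P - gamma * n - G * n * abs(E)**2
    E += dt * dE
    n += dt * dn

# Expected steady state: n -> kappa/G, |E|^2 -> P/kappa - gamma/G
print(f"n = {n:.4f} (expect {kappa / G:.4f})")
print(f"|E|^2 = {abs(E)**2:.4f} (expect {P / kappa - gamma / G:.4f})")
print(f"selected phase = {np.angle(E):+.4f} rad (set by the random seed)")
\end{verbatim}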
The laser solution spontaneously breaks a $U(1)$ symmetry. The overall phase of the set of complex variables $(p^{(0)}_{e h} (\vb{k}), E^{(0)}_{\ell \lambda})$ is not determined by the equations, and there are infinitely many solutions which are assigned different values of this overall phase but are otherwise equivalent.
Formally, one can generalize the concept of a gap function $\Delta (\mathbf{k})$, which alternatively can be referred to as the effective Rabi frequency $\Omega _{eff}(\mathbf{k})$, to the polariton lasing or polariton BCS-like state,
\begin{equation}
\Delta (\mathbf{k}) \equiv \Omega _{eff}(\mathbf{k})
= \Gamma^{\lambda}_{e h} (\vb{k}) E_{\ell \lambda}^{(0)} +\frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c} p^{(0)}_{e h} (\mathbf{k}^{\prime}) .
\label{eq:definition-BCS-gap-and-eff-Rabi-frequ}
\end{equation}
The original $T=0$\,K BCS state for superconductors, which follows from a HF theory for Cooper pairs in an interacting Fermi gas, is given, for example, on pp.\ 326--336 of Ref.\ \cite{fetter-walecka.71}.
An analogous theory for polaritons was formulated in Refs.\ \cite{byrnes-etal.10,kamide-ogawa.10}.
The quasi-phenomenological approach for the BCS-like gap in the polariton laser, as previously formulated in Refs.\ \onlinecite{hu-etal.21,binder-kwong.2021,spotnitz-etal.2021}, is further corroborated in this paper.
The expressions used there are formally the same as in the standard BCS theory.
We reproduce them here for convenience in the notation of Ref.\ \onlinecite{binder-kwong.2021}:
\begin{equation}
\tilde{\xi}(\mathbf{k}) = \frac{\hbar^{2}k^{2}}{2m_{e}} + \Sigma_{e}^{HF}(\mathbf{k})-\frac{1}{2}\left( \hbar \omega_{\ell}-E_{g}\right)
\label{eq:xiergdef}
\end{equation}
where the single-particle Hartree-Fock self-energy is
\begin{equation}
\Sigma_{e}^{HF}(\mathbf{k})= - \frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{%
\mathbf{k-k}^{\prime }}^{c}f^{(0)}_{e}(\mathbf{k}^{\prime }),
\end{equation}
and the excitation energies are
\begin{equation}
\tilde{E}(\mathbf{k})=\sqrt{\tilde{\xi }^{2}(\mathbf{k})+|\Delta (%
\mathbf{k})|^{2}} . \label{equ:E-single-particle-open}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=3in]{Fig01-Band-plot-v4-05.pdf}
\caption{
(Color online.)
Schematic of the parabolic two-band band structure
with the conduction band (CB) in red and the valence band (VB) in blue, separated by the HF Coulomb-renormalized bandgap $E_{g}^{\ast}$, and further renormalized by the electron-hole interaction and the light field (dressed bands), cf. spectral function in \cite{kremp-etal.08,yamaguchi-etal.15}.
The BCS renormalization induces a gap with magnitude $\tilde{E}^{pair}_{gap}$, and separates the CB (VB) into U$_{e}$ and L$_{e}$ (U$_{h}$ and L$_{h}$, in the hole picture) branches.
UU, LL, LU and UL transitions are indicated (the latter two resulting in the decay continuum).
Dashed lines denote smaller, but nonzero, spectral weight for the branch.
All vertical transitions between the dressed bands (both solid and dashed lines) are possible.
The THz-frequency transitions result from those indicated by the subsequent subtraction of the laser photon energy $\hbar\omega_{\ell}$ through coherent mixing.
}
\label{fig:bandstruct}
\end{figure}
It is useful to formulate the theory in terms of pair energies, rather than single-particle (electron or hole) energies.
This has been done in Ref.\ \cite{comte-nozieres.82}, where the HF theory was formulated in terms of pair energies which are twice the single-particle energies,
$\xi ^{pair}(\mathbf{k})=2\xi (\mathbf{k})$, and similarly the pair excitation energy
is $\tilde{E}^{pair}(\mathbf{k}) = 2 \tilde{E}(\mathbf{k})$.
We then obtain the pair BCS-like gap from minimizing $\tilde{E}(\mathbf{k})$:
\begin{equation}
\tilde{E}_{gap}^{pair} = 2 \min_{k} \tilde{E}(\mathbf{k}) . \label{eq:def-epairgap}
\end{equation}
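Once $\tilde{\xi}(k)$ and $\Delta(k)$ are available on a radial grid, the evaluation of Eqs.\ \eqref{equ:E-single-particle-open} and \eqref{eq:def-epairgap} is a one-line reduction. The sketch below uses toy analytic profiles (assumptions purely for illustration); in the actual calculation these functions come from the numerical steady-state laser solution.
\begin{verbatim}
import numpy as np

k = np.linspace(0.0, 2.0, 2001)      # radial grid (nm^-1)

# Toy profiles in meV (assumptions for illustration only): a renormalized
# dispersion crossing zero near k ~ 1 and a smooth k-dependent gap.
xi_tilde = 30.0 * (k**2 - 1.0)
delta = 8.0 * np.exp(-0.5 * k**2)

E_tilde = np.sqrt(xi_tilde**2 + np.abs(delta)**2)   # excitation energy
E_pair_gap = 2.0 * E_tilde.min()                    # pair gap

print(f"pair gap = {E_pair_gap:.3f} meV "
      f"at k = {k[np.argmin(E_tilde)]:.3f} nm^-1")
\end{verbatim}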
As will be shown in section \ref{sec:modedecomp}, the linear THz response can be formulated as an eigenvalue problem.
We then show in Fig.\ \ref{fig:eigenergkspect}(a) that this quasi-phenomenological BCS-like approach indeed yields the same pair excitation dispersion relation, $\tilde{E}^{pair}(k)$, as we obtain from the much more rigorous linear response spectrum given by the eigenvalues $\Re \varepsilon_{k}$.
Conversely, the identification of the $\Re \varepsilon_{k}$ UU and LL continua with $\tilde{E}^{pair}(k)$ enables the interpretation of the finite-frequency linear excitations as BCS-like excited pairs.
The UU and LL notation and a schematic plot of the renormalized bands are given in Fig.\ \ref{fig:bandstruct}.
\subsection{Linear THz response}
After a desired steady state laser solution is prepared as described in the previous subsection, we switch on a small continuous-wave external THz field $\vb{A}_{T}(\omega)$ and calculate the linear response of the laser to this probe.
Specifically, we write the interband polarization, the charge density, and the cavity photon field in the presence of $\vb{A}_{T}$ as $p_{e h}(\mathbf{k},t)=p^{(0)}_{e h}(\mathbf{k})+ p^{(1)}_{e h}(\mathbf{k},t)$, $f_e(\mathbf{k},t)=f^{(0)}_e(\mathbf{k})+ f^{(1)}_e(\mathbf{k},t)$ and $E_{\ell \lambda}(t)=E_{\ell \lambda}^{(0)} + E_{\ell \lambda}^{(1)}(t)$, where the perturbative quantities $p^{(1)}_{e h}(\mathbf{k},t)$, $f^{(1)}_e(\mathbf{k},t)$ and $E_{\ell \lambda}^{(1)}(t)$ are taken to be first order in $\vb{A}_{T}$.
Inserting this form of $p_{e h}(\mathbf{k},t)$, $f_e(\mathbf{k},t)$ and $E_{\ell \lambda} (t)$ in Eqs.\ \eqref{eq:SBE-P-dot}--\eqref{eq:Esingmod} and linearizing around the steady state solution,
we obtain the first-order equations for $p^{(1)}_{e h}(\mathbf{k},t)$, $f^{(1)}_e(\mathbf{k},t)$ and $E_{\ell \lambda}^{(1)}(t)$ in the time domain as:
\begin{widetext}
\begin{eqnarray}
i\hbar \frac{\partial }{\partial t}p^{(1)}_{e h}(\mathbf{k}) &=&\left( \frac{%
\hbar ^{2}k^{2}}{2m_{r}}+E_{g} +2\Sigma_{e}^{HF}(\mathbf{k})-\hbar \omega
_{\ell}-i\gamma \right) p^{(1)}_{e h}(\mathbf{k}) \nonumber \\
&&-\frac{2}{\mathcal{A}} p^{(0)}_{e h}(\mathbf{k}) \sum_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime
}}^{c}f^{(1)}_{e}(\mathbf{k}^{\prime }) \nonumber \\
&&-\left[ 1-2f^{(0)}_{e}(\mathbf{k})\right] \left[\sum_{\lambda} \Gamma^{\lambda}_{e h} (\vb{k})E_{\ell \lambda}^{(1)}+\frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{%
\mathbf{k-k}^{\prime }}^{c}p^{(1)}_{e h}(\mathbf{k}^{\prime })\right] \nonumber
\\
&&+2f^{(1)}_{e}(\mathbf{k}) \left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (\vb{k})E_{\ell \lambda}^{(0)}+\frac{1}{\mathcal{A}}%
\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c} p^{(0)}_{e h}(\mathbf{k}^{\prime })\right] \nonumber \\
&& + \sum_{\nu} g^{\nu}(\vb{k}) A_{T \nu} p^{(0)}_{e h}(\vb{k}) \label{eq:delta-P-dot}
\end{eqnarray}%
\begin{eqnarray}
-i\hbar \frac{\partial }{\partial t}p^{(1) \ast}_{e h}(\mathbf{k}) &=&\left(
\frac{\hbar ^{2}k^{2}}{2m_{r}}+E_{g} +2\Sigma_{e}^{HF}(\mathbf{k})-\hbar
\omega_{\ell}+i\gamma \right) p^{(1) \ast}_{e h}(\mathbf{k}) \nonumber \\
&&-\frac{2}{\mathcal{A}} p^{(0) \ast}_{e h}(\mathbf{k}) \sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime
}}^{c}f^{(1)}_{e}(\mathbf{k}^{\prime }) \nonumber \\
&& -\left[ 1-2f^{(0)}_{e}(\mathbf{k})\right] \left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (\vb{k})E_{\ell \lambda}^{(1)\ast}
+\frac{1}{\mathcal{A}} \sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c}
p^{(1) \ast}_{e h}(\mathbf{k}^{\prime })\right] \nonumber \\
&&+2f^{(1)}_{e}(\mathbf{k})\left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (\vb{k})E_{\ell \lambda}^{(0)\ast}+\frac{1}{\mathcal{A}}%
\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c}p^{(0) \ast}_{e h}(%
\mathbf{k}^{\prime })\right] \nonumber \\
&& + \sum_{\nu} g^{\nu}(\vb{k}) A_{T \nu}^{\ast} p^{(0) \ast}_{e h}(\vb{k}) \label{eq:delta-P-star-dot}
\end{eqnarray}%
\begin{eqnarray}
i\hbar \frac{\partial }{\partial t}f^{(1)}_{e}(\mathbf{k}) &=& \left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (\vb{k})E_{\ell \lambda}^{(0)\ast}+\frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{%
\mathbf{k-k}^{\prime }}^{c}p^{(0) \ast}_{e h}(\mathbf{k}^{\prime })\right] p^{(1)}_{e h}(\mathbf{k}) \nonumber \\
&&+\left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (\vb{k})E_{\ell \lambda}^{(1)\ast}+\frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k%
}^{\prime }}^{c}p^{(1) \ast}_{e h}(\mathbf{k}^{\prime })\right] p^{(0)}_{e h}(%
\mathbf{k}) \nonumber \\
&&-\left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (\vb{k})E_{\ell \lambda}^{(0)}+\frac{1}{\mathcal{A}}\sum\limits_{\mathbf{k}%
^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c}p^{(0)}_{e h}(\mathbf{k}^{\prime })\right]
p^{(1) \ast}_{e h}(\mathbf{k},t) \nonumber \\
&& -\left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (\vb{k})E_{\ell \lambda}^{(1)}+\frac{1}{\mathcal{A}}%
\sum\limits_{\mathbf{k}^{\prime }}V_{\mathbf{k-k}^{\prime }}^{c}p^{(1)}_{e h}(%
\mathbf{k}^{\prime })\right] p^{(0) \ast}_{e h}(\mathbf{k}) \nonumber \\
&&-i \gamma_{f} f^{(1)}_{e}(\mathbf{k})
\label{eq:delta-f-dot}
\end{eqnarray}%
\begin{equation}
i\hbar \frac{\partial }{\partial t}E_{\ell \lambda}^{(1)}=\left( \hbar \omega _{cav}-\hbar
\omega _{\ell}-i\gamma _{cav}\right) E_{\ell \lambda}^{(1)}- \frac{N_{QW}}{\mathcal{A}}%
\sum\limits_{\mathbf{k}eh}\Gamma^{\lambda \ast}_{e h} (\vb{k})p^{(1)}_{e h}(\mathbf{k},t)
\label{eq:delta-E-dot}
\end{equation}%
\begin{equation}
-i\hbar \frac{\partial }{\partial t}E_{\ell \lambda}^{(1)\ast}=\left( \hbar \omega
_{cav}-\hbar \omega _{\ell}+i\gamma _{cav}\right) E_{\ell \lambda}^{(1)\ast}-\frac{%
N_{QW}}{\mathcal{A}}\sum\limits_{\mathbf{k}eh}\Gamma^{\lambda}_{e h} (\vb{k})p^{(1) \ast}_{e h}(%
\mathbf{k}) . \label{eq:delta-E-star-dot}
\end{equation}
\end{widetext}
\subsection{Orbital angular momentum selection rules for THz fluctuations}
\label{subsec:orb-ang-mom-sel}
In this subsection, we expand the linear response equations Eqs.\ \eqref{eq:delta-P-dot} - \eqref{eq:delta-E-star-dot} in an orbital angular momentum basis in $\vb{k}$ space.
Since the Hamiltonian and the steady state are isotropic in the plane, one expects the linear `susceptibility' connecting the fluctuations to the probe to be diagonal in angular momentum space.
For a $\vb{q} = \vb{0}$ plane-wave interband (typically optical-frequency) probe, the angular momentum of the absorbed photon accounts for the `spin' change in the unit cell orbital,
and the coupling to the charges' motion within the band is circularly symmetric
(so-called first-class dipole allowed transitions with $s$-like electron-hole envelope function, or in other words, zero angular momentum associated with the relative electron-hole motion).
For the $\vb{q} = \vb{0}$ plane-wave intraband THz probe considered here, the coupling $\sum_{\nu} g^{\nu}(\vb{k}) A_{T \nu} = \sum_{\nu} \frac{e}{m_{r}c} \hbar \vb{k} \cdot \bm{\epsilon}_{T \nu} A_{T \nu}$ transfers (one unit of) angular momentum between the THz photon and the electron's intraband motion. In short, the interband and intraband probes excite respectively (denoting the in-plane intraband orbital angular momentum by $m$) the $m=0$ and $m=\pm 1$ sectors of the fluctuation modes of the laser.
We expand the fluctuation fields in angular harmonics:
\begin{equation}
\Psi^{(1)} (\vb{k}) = \Psi^{(1)} (k,\theta_k) = \sum_{m \in \mathbb{Z}} \Psi^{(1)} (k,m) e^{im\theta_k}
\end{equation}
\begin{equation}
\Psi^{(1)}(k,m) = \frac{1}{2\pi}\int_{0}^{2\pi} d\theta_k \Psi^{(1)} (k,\theta_k) e^{-im\theta_k}
\label{eq:modeexpans}
\end{equation}
where $\Psi^{(1)}$ stands for $p^{(1)}_{e h}$, $f^{(1)}_{e}$, and $E_{\ell \lambda}^{(1)}$. (Since $E_{\ell \lambda}^{(1)}$ does not depend on $\vb{k}$, all components with $m \neq 0$ equal zero.)
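Numerically, the projection in Eq.\ \eqref{eq:modeexpans} can be evaluated for all $m$ at once by a fast Fourier transform over a uniform $\theta_k$ grid, since the FFT kernel $e^{-2\pi i jn/N}$ coincides with $e^{-im\theta_j}$ on the grid $\theta_j = 2\pi j/N$. A sketch with an arbitrary test field:
\begin{verbatim}
import numpy as np

n_theta = 64
theta = 2.0 * np.pi * np.arange(n_theta) / n_theta
k = np.linspace(0.01, 2.0, 100)

# Test field with known content: m = +1 and m = -1 harmonics, weight g(k)/2
g = np.exp(-k**2)
psi = g[:, None] * np.cos(theta)[None, :]      # shape (n_k, n_theta)

# Psi(k, m) = (1/2pi) Int dtheta Psi(k, theta) e^{-i m theta}  ->  FFT / N
psi_m = np.fft.fft(psi, axis=1) / n_theta

m_plus = psi_m[:, 1]     # m = +1
m_minus = psi_m[:, -1]   # m = -1 (negative m wraps to the last FFT bins)

print("m=+1 recovered:", np.allclose(m_plus, g / 2.0))
print("m=-1 recovered:", np.allclose(m_minus, g / 2.0))
\end{verbatim}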
Expanding Eqs.\ \eqref{eq:delta-P-dot}--\eqref{eq:delta-E-star-dot} in the angular harmonics, we obtain the equations for each $m$ component of the fluctuation
of the interband polarization,
\begin{widetext}
\begin{eqnarray}
i\hbar \frac{\partial }{\partial t} p^{(1)}_{e h}(k,m,t) &=&\left( \frac{%
\hbar ^{2}k^{2}}{2m_{r}}+E_{g}
-2
\int_{0}^{\infty} \frac{k^{\prime} \mathrm{d}k^{\prime}}{2\pi} V_{k,k^{\prime}}^{0} f^{(0)}_{e} (k^{\prime})
-\hbar \omega_{\ell}-i\gamma \right) p^{(1)}_{e h}(k,m,t) \nonumber \\
&&-2 p^{(0)}_{e h}(k) \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{m} f^{(1)}_{e}(k^{\prime},m,t) \nonumber \\
&&-\left[ 1-2f^{(0)}_{e}(k)\right] \left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (k)E_{\ell \lambda}^{(1)} \delta_{0,m} +\int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{m} p^{(1)}_{e h}(k^{\prime},m,t)\right] \nonumber
\\
&&+2f^{(1)}_{e}(k,m,t) \left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (k)E_{\ell \lambda}^{(0)} + \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{0} p^{(0)}_{e h}(k^{\prime})\right] \nonumber \\
&& + g_B k p^{(0)}_{e h}(k) \left[A_{Tx}(t) \left(\delta_{1,m} + \delta_{-1,m} \right) -i A_{Ty}(t) \left( \delta_{1,m} - \delta_{-1,m} \right) \right] \label{eq:delta-P-dot-ang}
\end{eqnarray}%
the complex conjugate of the interband polarization,
\begin{eqnarray}
-i\hbar \frac{\partial }{\partial t}p^{(1) \ast}_{e h}(k,-m,t) &=&\left(
\frac{\hbar ^{2}k^{2}}{2m_{r}}+E_{g}
-2 \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d}k^{\prime}}{2\pi} V_{k,k^{\prime}}^{0} f^{(0)}_{e} (k^{\prime})
-\hbar \omega_{\ell}+i\gamma \right) p^{(1) \ast}_{e h}(k,-m,t) \nonumber \\
&&- 2 p^{(0) \ast}_{e h}(k) \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{m} f^{(1)}_{e}(k^{\prime},m,t) \nonumber \\
&& -\left[ 1-2f^{(0)}_{e}(k)\right] \left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (k)E_{\ell \lambda}^{(1)\ast} \delta_{m,0}
+\int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{m} p^{(1) \ast}_{e h}(k^{\prime},-m,t)\right] \nonumber \\
&& +2f^{(1)}_{e}(k,m,t) \left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (k)E_{\ell \lambda}^{(0)\ast}+ \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{0} p^{(0) \ast}_{e h}(k^{\prime })\right] \nonumber \\
&& + g_B k p^{(0) \ast}_{e h}(k) \left[A_{Tx}^{\ast}(t) \left(\delta_{1,m} + \delta_{-1,m} \right) -i A_{Ty}^{\ast}(t) \left( \delta_{1,m} - \delta_{-1,m} \right) \right] , \label{eq:delta-P-star-dot-ang}
\end{eqnarray}%
the distribution function,
\begin{eqnarray}
i\hbar \frac{\partial }{\partial t}f^{(1)}_{e}(k,m,t) &=&\left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (k)E_{\ell \lambda}^{(0)\ast}+ \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{0} p^{(0) \ast}_{e h}(k^{\prime })\right] p^{(1)}_{e h}(k,m,t) \nonumber \\
&&+\left[ \sum_{\lambda} \Gamma^{\lambda \ast}_{e h} (k)E_{\ell \lambda}^{(1)\ast} \delta_{m,0} + \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{m} p^{(1) \ast}_{e h}(k^{\prime},-m,t)\right] p^{(0)}_{e h}(k) \nonumber \\
&&-\left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (k)E_{\ell \lambda}^{(0)}+ \int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{0} p^{(0)}_{e h}(k^{\prime})\right]
p^{(1) \ast}_{e h}(k,-m,t) \nonumber \\
&&-\left[ \sum_{\lambda} \Gamma^{\lambda}_{e h} (k)E_{\ell \lambda}^{(1)} \delta_{m,0} +\int_{0}^{\infty} \frac{k^{\prime} \mathrm{d} k^{\prime}}{2\pi} V_{k,k^{\prime}}^{m} p^{(1)}_{e h}(%
k^{\prime},m,t)\right] p^{(0) \ast}_{e h}(k) \nonumber \\
&&-i \gamma_{f} f^{(1)}_{e}(k,m,t)
\label{eq:delta-f-dot-ang}
\end{eqnarray}%
the cavity field,
\begin{equation}
i\hbar \frac{\partial }{\partial t}E_{\ell \lambda}^{(1)} (t) =\left( \hbar \omega _{cav}-\hbar
\omega _{\ell}-i\gamma _{cav}\right) E_{\ell \lambda}^{(1)} (t) - N_{QW} \sum_{eh} \int_{0}^{\infty} \frac{k \mathrm{d}k}{2\pi} \Gamma^{\lambda \ast}_{e h} (k)p^{(1)}_{e h}(k,m=0,t)
\label{eq:delta-E-dot-ang}
\end{equation}%
and the complex conjugate of the cavity field,
\begin{equation}
-i\hbar \frac{\partial }{\partial t}E_{\ell \lambda}^{(1)\ast} (t) =\left( \hbar \omega_{cav}-\hbar \omega _{\ell}+i\gamma _{cav}\right) E_{\ell \lambda}^{(1)\ast} (t) - N_{QW} \sum_{eh} \int_{0}^{\infty} \frac{k \mathrm{d}k}{2\pi} \Gamma^{\lambda}_{e h} (k)p^{(1) \ast}_{e h}(k,m=0,t) \label{eq:delta-E-star-dot-ang}
\end{equation}
\end{widetext}
In Eqs.\ \eqref{eq:delta-P-dot-ang} and \eqref{eq:delta-P-star-dot-ang}, we define the THz coupling constant $g_B \equiv \frac{e\hbar}{2m_{r}c} = \frac{m_0}{m_r} \mu_B$, $\mu_B$ being the Bohr magneton. We have chosen a linear polarization basis along the $x$ and $y$ axes for the THz field, writing the components as $A_{Tx}$ and $A_{Ty}$.
The angular momentum component of the screened Coulomb potential $V_{k,k^{\prime}}^{m}$ is given by
\begin{eqnarray}
V_{k,k^{\prime}}^{m} &=& \frac{e^{2}}{\epsilon_{b}} \int_{0}^{2 \pi} \mathrm{d} \theta \frac{e^{- i m \theta}}{\sqrt{k^{2} + k^{\prime 2} - 2 k k^{\prime} \cos \theta} + \kappa_{0}} \quad , \quad \nonumber \\
V^{c}_{| \vb{k} - \vb{k}^\prime |} &=& \sum_{m \in \mathbb{Z}} V_{k,k^{\prime}}^{m} e^{i m (\theta_k-\theta_{k^{\prime}})}
\label{eq:vkkpmfin}
\end{eqnarray}
where $\vb{k} = (k,\theta_k)$. The symmetry relations $V_{k,k^{\prime}}^{-m} = V_{k,k^{\prime}}^{m}$ and $V_{k,k^{\prime}}^{m} = V_{k^{\prime},k}^{m}$ are satisfied.
For an alternative formulation of Eq.\ \eqref{eq:vkkpmfin} and its evaluation in terms of elliptic integrals, see Appendix \ref{appx:coulmel}.
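As an illustration (our own sketch, not part of the published numerics; the function name and placeholder values are ours), Eq.\ \eqref{eq:vkkpmfin} can be evaluated by direct quadrature. By the $\theta \to 2\pi - \theta$ symmetry of the denominator, the $\sin(m\theta)$ part of $e^{-im\theta}$ integrates to zero, so only the $\cos(m\theta)$ part need be kept:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def coulomb_vm(k, kp, m, kappa0, e2_over_epsb=1.0):
    # V^m_{k,k'}: angular component of the screened
    # 2D Coulomb potential, Eq. (vkkpmfin). Only the
    # cos(m*theta) part survives the angular integral.
    def integrand(theta):
        q = np.sqrt(k**2 + kp**2
                    - 2.0 * k * kp * np.cos(theta))
        return np.cos(m * theta) / (q + kappa0)
    val, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return e2_over_epsb * val

# Symmetry checks: V^{-m} = V^m, V^m_{k,k'} = V^m_{k',k}
assert abs(coulomb_vm(1.0, 0.7, 2, 9e-3)
           - coulomb_vm(0.7, 1.0, -2, 9e-3)) < 1e-9
\end{verbatim}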
Eqs.\ \eqref{eq:delta-P-dot-ang}--\eqref{eq:delta-E-star-dot-ang} show explicitly that the equations for different angular momenta are decoupled. Since the THz source terms contain only the harmonics $m = \pm 1$, only $p^{(1)}_{e h}(k,m = \pm 1,t)$ and $f^{(1)}_{e}(k,m=\pm 1, t)$ are excited in the response. As shown below, the THz conductivity $\sigma_{T} (\omega)$ also contains only
$f^{(1)}_{e} (k,m = \pm 1, t)$. So the equations for $m = \pm 1$ form a closed set sufficient for the THz linear response problem.
Since there is no THz source term for the $m=0$ harmonics,
Eqs.\ \eqref{eq:delta-P-dot-ang}--\eqref{eq:delta-E-star-dot-ang} for $m=0$ are homogeneous, and we take as the solution $f^{(1)}_{e}(k,m = 0,t) = 0$, $p^{(1)}_{e h}(k,m = 0,t) = 0$, and $E^{(1)}_{\ell \lambda} = 0$.
This is consistent with the selection rule that the cavity photon fluctuation $E^{(1)}_{\ell \lambda}$ belongs to the $m=0$ sector and so does not appear in the THz ($m = \pm 1$) response.
\subsection{Linear THz conductivity}
The reflectivity and/or transmissivity spectra of the THz probe are typically measured to study the linear response of the system. These spectra can be expressed in terms of the conductivity tensor that connects the THz electric field to the induced current. In this subsection, we relate the conductivity to the density response formulated above.
The THz field propagates normally to the QW, i.e.\ $\mathbf{E}_{T}(\vb{r},t)$ is a plane wave with wavevector $\vb{q} \parallel \hat{z}$ and linear polarization in the $\hat{x}$ or $\hat{y}$ direction, $\bm{\epsilon}_{\nu} = \hat{x}$ or $\hat{y}$. It induces a two-dimensional current $\vb{J}^{(1)}$ in the quantum well. Each (linearly polarized) component of the current,
$J_{\nu}^{(1)} = \vb{J}^{(1)} \cdot \bm{\epsilon}_{\nu}$,
consists of a paramagnetic part $J_{\nu}^{p(1)}$ and a diamagnetic part $J_{\nu}^{d(1)}$, which are described in more detail in Ref.\ \onlinecite{spotnitz-etal.2021}.
In the frequency domain, the THz field and the induced current in our 2D isotropic setting are related by the conductivity $\sigma_{T\nu}(\omega)$:
\begin{equation}
J_{\nu}^{(1)}(\omega) = \sigma_{T\nu}(\omega) E_{T\nu}(\omega) ~ ,
\label{eq:thz-cond-def}
\end{equation}
where $E_{T\nu}(\omega)$ is the transmitted field amplitude.
Like the current density, the conductivity can be written as a sum, $\sigma_{T \nu}(\omega) = \sigma_{T \nu}^{p}(\omega) + \sigma_{T \nu}^{d}(\omega)$, of a paramagnetic term $\sigma_{T \nu}^{p} \left(\omega \right) = J_{\nu}^{p (1)} \left(\omega \right) / E_{T\nu} \left(\omega \right)$ and a diamagnetic term $\sigma_{T \nu}^{d} \left(\omega \right) = J_{\nu}^{d (1)} \left(\omega \right) / E_{T\nu} \left(\omega \right)$.
The outgoing (reflected and transmitted) THz waves are given in terms of the conductivity. We quote the result here, the derivation being given in Appx.\ A of Ref.\ \cite{spotnitz-etal.2021}.
The transmission $T(\omega)$ and reflection $R(\omega)$ coefficients ($|T(\omega)|^2$ being the transmissivity and $|R(\omega)|^{2}$ being the reflectivity) and the absorptivity $A(\omega)$ are given by
\begin{IEEEeqnarray}{rCl}
T (\omega) &\equiv& \frac{E_{T\nu}^{(t)}(\omega)}{E_{T\nu}^{(i)}(\omega)} = \frac{1}{1 + \beta \left(\omega \right)} \label{eq:Tomega} \\
R (\omega) &\equiv& \frac{E_{T\nu}^{(r)}(\omega)}{E_{T\nu}^{(i)}(\omega)} = -\frac{\beta(\omega)}{1 + \beta (\omega)} \yesnumber \label{eq:Romega} \\
A (\omega) &=& 1 - |T (\omega)|^2 - |R (\omega)|^2 \yesnumber \label{eq:Aomega} \\
&=& \frac{2 \Re \beta (\omega) }{ | 1 + \beta (\omega ) |^2} \nonumber
\end{IEEEeqnarray}
where
\begin{equation*}
\beta (\omega) = \frac{2 \pi}{c \sqrt{\epsilon_b}} \sigma_{T \nu} (\omega)
\end{equation*}
and the THz field amplitudes are denoted $E_{T\nu}^{(i)}(\omega)$ for the incident, $E_{T\nu}^{(r)}(\omega)$ for the reflected, and $E_{T\nu}^{(t)}(\omega)$ for the transmitted wave.
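(As a minimal numerical sketch of Eqs.\ \eqref{eq:Tomega}--\eqref{eq:Aomega}---our own illustration, in Gaussian units, with the conductivity array supplied by the user---the spectra follow from $\beta(\omega)$ in a few lines:)
\begin{verbatim}
import numpy as np

def thz_spectra(sigma, eps_b, c=2.99792458e10):
    # sigma: complex THz sheet conductivity sigma_T(omega),
    # sampled on a frequency grid (Gaussian units).
    beta = 2.0 * np.pi * sigma / (c * np.sqrt(eps_b))
    T = 1.0 / (1.0 + beta)       # transmission amplitude
    R = -beta / (1.0 + beta)     # reflection amplitude
    A = 1.0 - np.abs(T)**2 - np.abs(R)**2
    return T, R, A               # A < 0 signals THz gain
\end{verbatim}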
The paramagnetic and diamagnetic conductivities are calculated from Eqs.\ (25) and (26) of Ref.\ \cite{spotnitz-etal.2021} as
\begin{align}
\sigma_{T\nu}^{p} (\omega) &= -\frac{2S_{d} g_B c}{E_{T\nu}(\omega)\mathcal{A}} \sum_{\mathbf{k}} (\mathbf{k}\cdot\bm{\epsilon}_{\nu})f^{(1)}_{e}(\mathbf{k},\omega) ~,
\yesnumber \label{eq:condthz} \\
\sigma_{T\nu}^{d} (\omega) &= \frac{i e^{2}S_{d}}{\omega m_{r} \mathcal{A}} \sum_{\mathbf{k}} f_e^{(0)} (\mathbf{k}) ~. \label{eq:Drudecond}
\end{align}
$S_{d}$ is the spin degeneracy factor of the conduction electrons and valence holes. In terms of the angular momentum components of $f_e^{(1)}$, $\sigma_{T\nu}^{p} (\omega)$ is written for $\nu = x$ or $y$ as
\begin{equation}
\sigma_{T\nu}^{p} (\omega) = -\frac{S_{d} g_B c}{E_{T\nu}(\omega)} \int_{0}^{\infty} \frac{k^{2}\mathrm{d}k}{\pi} f^{(1)}_{e}(k,m=1,\omega) e^{i \frac{\pi}{2} \delta_{\nu,y}} ~.
\label{eq:condthzypol}
\end{equation}
Taking into account the different source terms for $\nu = x$ and $\nu =y$, the symmetry $\sigma_{Tx}^{p} = \sigma_{Ty}^{p}$ is obtained.
The conductivity does not depend on the phase of $\mathbf{E}_{T}$, and has the symmetry $\sigma_{T \nu}(\omega)=\sigma_{T \nu}^{\ast}(-\omega)$, which leads to $A(\omega) = A(-\omega)$ and likewise for $|R|^2$ and $|T|^2$.
The Drude model of conductivity is commonly used in phenomenological analysis of data. In this model, the entire conductivity is represented as
\begin{equation}
\sigma_{T\nu}^{\mathrm{Drude}} (\omega) = \frac{i e^{2} n}{(\omega + i \gamma_{D}) m_{r}} \quad , \quad n = \frac{S_{d}}{\mathcal{A}} \sum_{\mathbf{k}} f^{(0)}_{e} (\mathbf{k}) ~, \label{eq:Drudecondwgm}
\end{equation}
where $\gamma_D$ accounts for all loss and relaxation processes. The diamagnetic conductivity in Eq.\ \eqref{eq:Drudecond} agrees with $\sigma_{T\nu}^{\mathrm{Drude}} (\omega)$ except for the absence of $\gamma_D$. We think that if a loss rate is phenomenologically inserted into the diamagnetic conductivity, its interpretation should be different from that of the Drude $\gamma_D$. Since in our formulation, many or all dissipative and relaxation processes are already included in the paramagnetic conductivity, only the remaining omitted processes should be represented in a loss rate in the diamagnetic conductivity. In this paper, we assume all losses are accounted for in $\sigma_{T\nu}^{p} (\omega)$.
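(For comparison, a one-line implementation of Eq.\ \eqref{eq:Drudecondwgm}---a sketch with placeholder arguments---reduces to the diamagnetic term of Eq.\ \eqref{eq:Drudecond} when $\gamma_D = 0$:)
\begin{verbatim}
def sigma_drude(omega, n, m_r, e, gamma_d=0.0):
    # Eq. (Drudecondwgm); gamma_d = 0 recovers the
    # diamagnetic conductivity, Eq. (Drudecond).
    return 1j * e**2 * n / ((omega + 1j * gamma_d) * m_r)
\end{verbatim}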
Eqs.\ \eqref{eq:delta-P-dot-ang}--\eqref{eq:delta-f-dot-ang} and \eqref{eq:condthz} enable an interpretation of the coherent frequency mixing which leads to the paramagnetic THz response.
The rotating wave approximation is not made for the THz frequencies, so both positive and negative frequency components $\pm\omega_{T}$ contribute to the response.
In the THz source term in Eq.\ \eqref{eq:delta-P-dot-ang} (Eq.\ \eqref{eq:delta-P-star-dot-ang}), the positive and negative THz frequency components $\omega_{T}$ and $-\omega_{T}$ both add coherently to the optical-frequency polarization $p_{eh}^{(0)}$ ($p_{eh}^{(0)\ast}$) with lasing frequency $\omega_{\ell}$ ($-\omega_{\ell}$), to give the THz-induced polarization $p_{eh}^{(1)}$ ($p_{eh}^{(1)\ast}$) with frequency $\pm \omega_{T} + \omega_{\ell}$ ($\pm \omega_{T}-\omega_{\ell}$).
Then, in Eq.\ \eqref{eq:delta-f-dot-ang}, $p_{eh}^{(1)}$ ($p_{eh}^{(1)\ast}$) coherently mixes frequencies with $p_{eh}^{(0)\ast}$ and $E_{\ell \lambda}^{(0)\ast}$ which have frequency $-\omega_{\ell}$ ($p_{eh}^{(0)}$ and $E_{\ell \lambda}^{(0)}$ which have frequency $\omega_{\ell}$) to give the $f^{(1)}$ frequency components $\pm \omega_{T} + \omega_{\ell} - \omega_{\ell} = \pm \omega_{T}$ ($\pm \omega_{T} - \omega_{\ell} + \omega_{\ell} = \pm\omega_{T}$).
Finally, Eq.\ \eqref{eq:condthz} shows that $f_{e}^{(1)}(\omega_{T})$ gives the measurable response via the conductivity $\sigma_{T\nu}^{p}(\omega_{T})$ at the same frequency.
Thus, taking the lasing state as a given, THz transitions occur as a coherent frequency mixing process between the THz probe frequency $\omega_{T}$ and the lasing frequency $\omega_{\ell}$.
The transition process for the linear THz interaction is $\pm \omega_{T} + \omega_{\ell} - \omega_{\ell} = \pm \omega_{T}$, for the positive or negative frequency component $\pm \omega_{T}$ of the THz probe.
As the THz interaction process includes interband energies, the positive (negative) frequency response can be identified with upper hole U$_h$ to/from upper electron U$_e$ transitions (lower hole L$_h$ to/from lower electron L$_e$ transitions), although the observable THz response, i.e., the conductivity, only occurs at intraband energies.
Thus the resonant transitions are shown as interband in Fig.\ \ref{fig:bandstruct}.
In sum, the positive (negative) frequency probe is resonant with the UU (LL) transitions.
The decay continuum of eigenvalues results from UL and LU transitions.
To linear order, the intraband probe excites THz-frequency density fluctuations $f^{(1)}$ and optical frequency polarization fluctuations $p^{(1)}$, where the difference of the polarization fluctuation frequency from the lasing frequency $\omega_{\ell}$ is also in the THz. The THz probe stimulates absorption or emission at the same THz frequency, but does not change the optical light field.
\section{Mode decomposition of the response function}
\label{sec:modedecomp}
Eqs.\ \eqref{eq:delta-P-dot-ang} - \eqref{eq:delta-E-star-dot-ang} are a system of linear differential equations which is diagonal in $m$, but not in $k$.
For each $m$, these equations are numerically solved on a discretized grid in $k$ space.
Since we consider only the $m = \pm 1$ channels in this paper, $E^{(1)}_{\ell \lambda} = 0$ as shown above, and Eqs.\ \eqref{eq:delta-E-dot-ang} and \eqref{eq:delta-E-star-dot-ang} can be omitted. The discretized Eqs.\ \eqref{eq:delta-P-dot-ang} - \eqref{eq:delta-f-dot-ang} can be written in the following matrix form:
\begin{equation}
i\hbar \frac{\partial }{\partial t}\vec{x}=M\vec{x}+\vec{s}(t)
\label{eq:delta-x-dot}
\end{equation}
where $\vec{x}$ denotes the column vector
\begin{equation}
\vec{x}(m,t)=\left(
\begin{matrix}
\vec{p}^{\, (1)}_{e h}(m,t) \\
\vec{p}^{\, (1)\ast}_{e h}(-m,t) \\
\vec{f}^{\, (1)}_{e}(m,t)
\end{matrix}
\right) \label{eq:definition-x}
\end{equation}
Here $\vec{p}^{\, (1)}_{e h} (m,t)$ and $\vec{f}^{\, (1)}_{e} (m,t)$ stand for column vectors whose elements are the values of the functions at the $k$ grid points: $p^{\, (1)}_{e h} (k_i, m, t)$ and $f^{\, (1)}_{e} (k_i, m, t)$, $i = 1, \cdots, N_k$, where $N_k$ is the number of $k$ points. (We use an arrow over a variable to denote a column vector of variable values over the set of discretized $k$ points.) We write the source vector in the block form
\begin{equation}
\vec{s}(t)=\left(\begin{matrix}{\vec{s}}_p(t)\\-{\vec{s}}_p^{\, \ast}(t)\\{\vec{s}}_f(t)\\\end{matrix}
\right) \ .
\label{eq:Mblockform}
\end{equation}
$\vec{s}(t)$ contains the $A_{T}(t)$ terms in Eqs.\ \eqref{eq:delta-P-dot-ang} and \eqref{eq:delta-P-star-dot-ang}.
The dimension of the vectors $\vec{x} (t)$ and $\vec{s} (t)$ is $3 N_k \equiv N$.
The matrix $M$ is a complex-valued $N \times N$ non-Hermitian matrix that is a linear function of the steady-state solution $p^{(0)}_{e h}(k)$, $f^{(0)}_e (k)$ and $E_{\ell \lambda}^{(0)}$.
$M$ depends only on the absolute value of the angular harmonic $m$, $M = M(|m|)$.
The vector $\vec{s} (t)$ is proportional to $A_{T \nu}(t)$.
Some symmetries of $M$ and $\vec{s}$ are given in Appx.\ \ref{appx:vector-symmetries}.
\subsection{Response function constructed from eigenvectors}
We formulate the response function associated with Eq.\ \eqref{eq:delta-x-dot} as the inverse operator of $i\hbar\frac{\partial}{\partial t} - M$.
Assume $\vec{s}(t)$ is a pulse in time and $\vec{x}=0$ initially. Fourier transforming to frequency space gives
\begin{multline}
\hbar\omega\vec{x}(\omega)=M\vec{x}(\omega)+\vec{s}(\omega) \quad \\
\Rightarrow\quad\vec{x}(\omega)=\left[\hbar\omega-M\right]^{-1}\vec{s}(\omega)\equiv F(\omega)\vec{s}(\omega)
\end{multline}
where $F(\omega)$ is the linear response matrix. Denote the eigenvalues and eigenvectors of $M$ by $\lambda_n$ and $\vec{y}_n$ respectively:
\begin{equation}
M{\vec{y}}_n=\lambda_n{\vec{y}}_n\quad,\quad\ n=1,2,\ldots,N \label{eq:righteig}
\end{equation}
Since $M$ is not Hermitian, the set of eigenvectors may, for some values of the parameters, fail to span the $N$-dimensional space of $\vec{x} (t)$, making $M$ non-diagonalizable.
But this failure of diagonalizability typically occurs only at a zero-measure set of points in parameter space.\cite{kato.95}
Called exceptional points (EP), these points are where two or more eigenvalues and eigenvectors become the same.\cite{kato.95,heiss.04,hanai-etal.19,hanai-littlewood.20}
In our computations, we do not set the parameters at exactly an EP but infer the presence and location of an EP by the behavior of the eigenvalues and eigenvectors nearby. We therefore proceed with our formulation assuming the parameters are
outside of the EP set.
Construct the matrix $U$ from the eigenvectors as column vectors arranged side by side:
\begin{equation}
U=\left(\begin{matrix}{\vec{y}}_1{\vec{y}}_2\cdots{\vec{y}}_N\end{matrix}\right)
\label{eq:Ueigvecrep}
\end{equation}
Since the eigenvectors are linearly independent, $U$ is invertible, and $M$ is diagonalized as
\begin{equation}
M=UDU^{-1}\quad,\quad\ D= \begin{pmatrix}\lambda_1&0&\cdots&0\\0&\lambda_2&\cdots&0\\\vdots&\vdots& \ddots &\vdots \\ 0&0&\cdots&\lambda_N \\ \end{pmatrix}
\label{eq:Mdiag}
\end{equation}
The linear response matrix can be written as
\begin{equation}
F(\omega)=\left[\hbar\omega-M\right]^{-1}=U\left[\hbar\omega-D\right]^{-1}U^{-1}
\label{eq:freq-resp-matrix}
\end{equation}
In component form, it is
\begin{equation}
F_{ij}(\omega)=\sum_{n=1}^{N}\frac{U_{in}(U^{-1})_{nj}}{\hbar\omega-\lambda_n}
\label{eq:freq-resp-elem}
\end{equation}
(If the eigenvalue $\lambda_n$ of a mode is real-valued, the denominator should be $\hbar\omega-\lambda_n+i\eta$, $\eta \downarrow 0$.)
Returning to the time domain, Eq.\ \eqref{eq:delta-x-dot} is similarly expanded in the eigenvectors as
\begin{equation}
i\hbar\frac{\partial\vec{x}}{\partial t}=UDU^{-1}\vec{x}+\vec{s}(t)
\end{equation}
Multiplying from the left by $U^{-1}$ gives
\begin{equation}
i\hbar\frac{\partial\vec{b}}{\partial t}=D\vec{b}+\vec{T}(t)\quad,\quad\vec{b}=U^{-1}\vec{x}\ ,\ \vec{T}=U^{-1}\vec{s}
\label{eq:diag-vector-time}
\end{equation}
Component-wise, Eq.\ \eqref{eq:diag-vector-time} is
\begin{equation}
i\hbar\frac{\partial b_n}{\partial t}=\lambda_nb_n+T_n(t)\quad,\quad\ n=1,\cdots,N
\label{eq:diag-elem-time}
\end{equation}
If at the initial time $t_0$, $\vec{x}(t_0)=0$, and the source pulse comes afterwards, then the solution to Eq.\ \eqref{eq:diag-elem-time} is
\begin{equation}
b_n(t)=-\frac{i}{\hbar}\int_{t_0}^{\infty}{dt^{\prime}\ \theta(t-t^{\prime})e^{-i\lambda_n(t-t^{\prime})/\hbar}T_n(t^{\prime})}
\end{equation}
or, in matrix form
\begin{IEEEeqnarray*}{rCl}
\vec{b}(t) &=& -\frac{i}{\hbar}\int_{t_0}^{\infty}{dt^{\prime}\ \theta(t-t^{\prime})C(t-t^{\prime})\vec{T}(t^{\prime})}\quad,\quad\ \\
C(t-t^{\prime}) &=& \left(\begin{matrix}e^{- \frac{i}{\hbar} \lambda_1 (t-t^{\prime})}&0&\cdots&0 \\ 0&e^{- \frac{i}{\hbar} \lambda_2(t-t^{\prime})}&\cdots&0 \\ \vdots&\vdots& \ddots &\vdots\\ 0&0&\cdots&e^{- \frac{i}{\hbar} \lambda_N(t-t^{\prime})} \end{matrix}\right)
\end{IEEEeqnarray*}
This gives the solution as
\begin{equation}
\vec{x}(t)=\int_{t_0}^{\infty}{dt^{\prime}\ F(t-t^{\prime})\vec{s}(t^{\prime})}
\end{equation}
where
\begin{IEEEeqnarray}{rCl}
F(t-t^{\prime}) &=& -\frac{i}{\hbar}\theta(t-t^{\prime})UC(t-t^{\prime})U^{-1}\quad, \\
F_{ij}(t-t^{\prime}) &=& -\frac{i}{\hbar}\theta(t-t^{\prime})\sum_{n=1}^{N} U_{in} e^{-\frac{i}{\hbar} \lambda_n(t-t^{\prime})}(U^{-1})_{nj} \nonumber
\end{IEEEeqnarray}
One can verify that the Fourier transform of the time-domain response function $F(t-t^{\prime})$ is the frequency-domain response function defined in Eqs.\
\eqref{eq:freq-resp-matrix} and \eqref{eq:freq-resp-elem}.
\subsubsection*{The response function expressed in terms of the left and right eigenvectors}
$U^{-1}$ can be expressed in terms of the left eigenvectors of $M$. The left eigenvectors $\vec{z}_n, n=1,\cdots,N$ are defined as
\begin{equation}
{\vec{z}}_n^{\, \dag} M={\widetilde{\lambda}}_n{\vec{z}}_n^{\, \dag} \qquad \text{or} \qquad M^T{\vec{z}}_n^{\, \ast}={\widetilde{\lambda}}_n{\vec{z}}_n^{\, \ast}
\label{eq:Nailefteigenvecconv}
\end{equation}
with a set of left eigenvalues ${\widetilde{\lambda}}_n$.
Some simple properties of the left eigenmodes are as follows.
The sets of left eigenvalues and right eigenvalues are the same because a matrix and its transpose have equal determinants: $\mathrm{det}(\lambda-M^T)=\mathrm{det}(\lambda-M)$.
A left eigenvector and a right eigenvector belonging to two different eigenvalues are orthogonal to each other. Orthogonality is defined in the usual way as in quantum mechanics: two vectors $\vec{a}$ and $\vec{b}$ are orthogonal if
\begin{equation*}
{\vec{a}}^{\, \dag}\vec{b}=\sum_{i}{a_i^{\ast} b_i=0}.
\end{equation*}
The orthogonality proof is similar to that in quantum mechanics. If $M{\vec{y}}_n=\lambda_n{\vec{y}}_n$ and ${\vec{z}}_k^{\, \dag}M=\lambda_k{\vec{z}}_k^{\, \dag}$, then ${\vec{z}}_k^{\, \dag}M{\vec{y}}_n=\lambda_k{\vec{z}}_k^{\, \dag}{\vec{y}}_n=\lambda_n{\vec{z}}_k^{\, \dag}{\vec{y}}_n$. If $\lambda_k\neq\lambda_n$, then ${\vec{z}}_k^{\, \dag}{\vec{y}}_n=0$.
Eigenvectors belonging to a degenerate eigenvalue can be orthogonalized within the degenerate subspace.
Returning to the representation of $U^{-1}$, we normalize the eigenvectors by requiring
\begin{equation*}
{\vec{z}}_n^{\, \dag}{\vec{y}}_n=1
\end{equation*}
for each $n$.
Then $U^{-1}$ is given by
\begin{equation} U^{-1}=\left(\begin{matrix}&{\vec{z}}_1^{\, \dag}&\\&{\vec{z}}_2^{\, \dag}&\\&\vdots&\\&{\vec{z}}_N^{\, \dag}&\\\end{matrix}\right)
\end{equation}
The orthonormalization $\vec{z}_{k}^{\, \dag} \vec{y}_{n} = \delta_{kn}$ enforces $U^{-1} U = U U^{-1} = I$.
The linear response function can be written as
\begin{equation}
F(\omega)=\sum_{n=1}^{N}\frac{{\vec{y}}_n{\vec{z}}_n^{\, \dag}}{\hbar\omega-\lambda_n}\quad,\quad\ F_{ij}(\omega)=\sum_{n=1}^{N}\frac{y_{n,i}z_{n,j}^{\ast}}{\hbar\omega-\lambda_n} \ .
\end{equation}
The corresponding solution $\vec{x}(\omega)$ is
\begin{equation}
\vec{x}(\omega)=\sum_{n=1}^{N}{c_n(\omega)}{\vec{y}}_n\quad,\quad\ c_n(\omega)=\frac{{\vec{z}}_n^{\, \dag}\vec{s}(\omega)}{\hbar\omega-\lambda_n} \ .
\label{eq:linrespvecomeigvecexp}
\end{equation}
The corresponding expressions in the time domain are
\begin{IEEEeqnarray}{rCl}
F(t-t^{\prime}) &=& -\frac{i}{\hbar}\theta(t-t^{\prime})\sum_{n=1}^{N}{{\vec{y}}_ne^{-i\lambda_n(t-t^{\prime})/\hbar}
{\vec{z}}_n^{\, \dag}}\quad,\quad\ \\
F_{ij}(t-t^{\prime}) &=& -\frac{i}{\hbar}\theta(t-t^{\prime})\sum_{n=1}^{N}{y_{n,i}e^{-i\lambda_n(t-t^{\prime})/\hbar}
z_{n,j}^{\ast}} \ ,
\end{IEEEeqnarray}
and the solution is
\begin{multline}
\vec{x}(t)=\sum_{n=1}^{N} c_n(t){\vec{y}}_n \quad, \quad \\
c_n(t)=-\frac{i}{\hbar}\int_{t_0}^{\infty}{\theta(t-t^{\prime})e^{-i\lambda_n(t-t^{\prime})/\hbar}
{\vec{z}}_n^{\, \dag}\vec{s}(t^{\prime})} \ .
\end{multline}
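As a practical note (a minimal sketch of the numerical procedure, assuming the fluctuation matrix \texttt{M} has been assembled and the parameters lie away from any exceptional point), the left and right eigenvectors, the biorthonormalization $\vec{z}_n^{\, \dag}\vec{y}_n = 1$, and the response matrix $F(\omega)$ can be obtained with standard linear algebra routines:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def response_matrix(M, hw):
    # F(hw) = sum_n y_n z_n^dag / (hw - lambda_n)
    # for a complex non-Hermitian, diagonalizable M.
    lam, zL, yR = eig(M, left=True, right=True)
    # Column zL[:, n] obeys z_n^dag M = lam_n z_n^dag.
    # Biorthonormalize so that z_n^dag y_n = 1:
    norm = np.einsum('kn,kn->n', zL.conj(), yR)
    zL = zL / norm.conj()
    F = np.zeros_like(M, dtype=complex)
    for n in range(lam.size):
        F += (np.outer(yR[:, n], zL[:, n].conj())
              / (hw - lam[n]))
    return F
\end{verbatim}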
\section{Phase and amplitude representation of the response function}
\label{sec:phasampresp}
The response function $F(t)$ (or $F(\omega)$) is expressed in Sec.\ \ref{sec:modedecomp} in terms of eigenvectors whose components are the
interband polarization fluctuation $\vec{p}^{\, (1)}_{e h} (m,t)$, its complex conjugate, and the density fluctuation.
Since the phase and amplitude modes of the coherent steady state (also known as the Goldstone and Higgs modes, respectively) are of physical interest,
the interpretation of our numerical results will be helped by switching to a representation in terms of the phase and amplitude of $\vec{p}^{\, (1)}_{e h} (m,t)$.
This representation is formulated in this section.
\subsubsection*{Time domain}
Write the polarization in amplitude-phase form and write the amplitude and phase as sums of unperturbed and response terms:
\begin{multline}
p_{eh}(\vb{k},t)=R(\vb{k},t) \ e^{i \phi(\vb{k},t)} \\
= \left[ R^{(0)}(\vb{k},t) +R^{(1)}(\vb{k},t) \right] e^{i \left[\phi^{(0)} (\vb{k},t)+\phi^{(1)}(\vb{k},t) \right]}
\end{multline}
Linearize in the first-order response terms:
\begin{widetext}
\begin{equation}
p_{eh}(\vb{k},t)=\left[R^{(0)}(\vb{k},t)+R^{(1)}(\vb{k},t)+iR^{(0)}(\vb{k},t)\phi^{(1)}(\vb{k},t)\right] \ e^{i\phi^{(0)}(\vb{k},t)} + \cdots
\end{equation}
Expand $R^{(1)}$ and $\phi^{(1)}$ in angular momentum:
\begin{equation}
p_{eh}(\vb{k},t)=\left[R^{(0)}(k,t)+\sum_{m}{\left(R_m^{(1)}(k,t)+iR^{(0)}(k,t)\phi_m^{(1)}(k,t)\right)\ e^{im\theta_k}}\right]\ e^{i\phi^{(0)}(k,t)} \label{eq:R-phi-linear}
\end{equation}
Comparing Eq.\ \eqref{eq:R-phi-linear} with
\begin{equation}
p_{eh}(\vb{k},t)=p^{(0)}_{eh}(k,t) +
\sum_{m} p^{(1)}_{eh}(k,m,t) e^{im\theta_k}
\end{equation}
gives ($R^{(0)}$ and $\phi^{(0)}$ being isotropic)
\begin{equation}
p_{e h}^{(1)}(k,m,t)=\ e^{i\phi^{(0)}(k,t)} \left( R_{m}^{(1)}(k,t)+iR^{(0)}(k,t)\phi_m^{(1)}(k,t)\right)
\label{eq:R1-phi1-t}
\end{equation}
\end{widetext}
Note that because $R^{(1)}(\vb{k},t)$ and $\phi^{(1)}(\vb{k},t)$ are real,
\begin{IEEEeqnarray}{rCl}
R_{-m}^{(1)}(k,t) &=& R_m^{(1)\ast}(k,t) \quad , \quad \nonumber \\
\phi_{-m}^{(1)}(k,t) &=& \phi_m^{(1)\ast}(k,t) \label{eq:m-sym-t}
\end{IEEEeqnarray}
\subsubsection*{Frequency domain}
The unperturbed laser is assumed to be in a steady state ($e^{-i\omega_\ell t}$ has been taken out) so that $R^{(0)}$ and $\phi^{(0)}$ are time-independent.
Fourier transform the response quantities with respect to time.
The relations Eq.\ \eqref{eq:m-sym-t} between $m$ and $-m$ components in the time domain translate to the following relations in the frequency domain:
\begin{IEEEeqnarray}{rCl}
R_{-m}^{(1)}(k,\omega) &=& R_m^{(1)\ast}(k,-\omega)\quad,\quad \nonumber \\
\phi_{-m}^{(1)}(k,\omega) &=& \phi_m^{(1)\ast}(k,-\omega)
\label{eq:m-sym-omega} \ .
\end{IEEEeqnarray}
The Fourier transform of Eq.\ \eqref{eq:R1-phi1-t} is
\begin{multline}
p_{e h}^{(1)}(k,m,\omega)= \\
e^{i\phi^{(0)}(k)} \left( R_{m}^{(1)}(k,\omega)+iR^{(0)}(k)\phi_{m}^{(1)}(k,\omega) \right) \label{eq:R1-phi1-omega}
\end{multline}
From Eqs.\ \eqref{eq:m-sym-omega} and \eqref{eq:R1-phi1-omega}, we have
\begin{multline}
p_{e h}^{(1)\ast}(k,-m,-\omega)= \\
e^{-i\phi^{(0)}(k)} \left( R_{m}^{(1)}(k,\omega)-iR^{(0)}(k)\phi_{m}^{(1)}(k,\omega) \right) \label{eq:R1-phi1-omega-prime}
\end{multline}
Solving for $R_m^{(1)}(k,\omega)$ and $\phi_m^{(1)}(k,\omega)$ from Eqs.\ \eqref{eq:R1-phi1-omega}
and \eqref{eq:R1-phi1-omega-prime} gives
\begin{equation}
R_m^{(1)}(k,\omega)= \frac{1}{2} \left({\widetilde{p}}_{e h}^{(1)}(k,m,\omega)+{\widetilde{p}}_{e h}^{(1)\ast}(k,-m,-\omega)\right)
\label{eq:rm1komofp}
\end{equation}
\begin{multline}
\phi_m^{(1)}(k,\omega) = \\
\frac{1}{2iR^{(0)}(k)}\left({\widetilde{p}}_{e h}^{(1)}(k,m,\omega)-{\widetilde{p}}_{e h}^{(1)\ast}(k,-m,-\omega)\right)
\label{eq:phimkomofrp}
\end{multline}
where
\begin{equation*}
{\widetilde{p}}_{eh}^{(1)}(k,m,\omega)=\ e^{-i\phi^{(0)}(k)} p_{eh}^{(1)} (k,m,\omega)
\end{equation*}
The amplitude response $R_m^{(1)}(k,\omega)$ and the phase response $\phi_{m}^{(1)}(k,\omega)$ can be computed from $p_{eh}^{(1)}(k,m,\omega)$ through Eqs.\ \eqref{eq:rm1komofp} and \eqref{eq:phimkomofrp}.
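(In practice this is a simple post-processing step; the following sketch---our own, with the array shapes stated in the comments as assumptions---implements Eqs.\ \eqref{eq:rm1komofp} and \eqref{eq:phimkomofrp}:)
\begin{verbatim}
import numpy as np

def amp_phase_response(p1, p1_star_flip, phi0, R0):
    # p1:           p^(1)_{eh}(k, m, w),  shape (Nk, Nw)
    # p1_star_flip: p^(1)*_{eh}(k,-m,-w), shape (Nk, Nw)
    # phi0, R0: steady-state phase/amplitude, shape (Nk,)
    ph = np.exp(-1j * phi0)[:, None]
    pt = ph * p1                    # tilde-p^(1)(k, m, w)
    pt_star = ph.conj() * p1_star_flip
    R1 = 0.5 * (pt + pt_star)
    phi1 = (pt - pt_star) / (2j * R0[:, None])
    return R1, phi1
\end{verbatim}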
\begin{figure*}
\centering
\includegraphics{Fig02-sm1275_lineigen.pdf}
\caption{%
(Color online.)
The linear response eigenvalues $\varepsilon_{n}$
(written as $\lambda_{n}$ in Eq.\ \eqref{eq:righteig} and Sec.\ \ref{sec:modedecomp})
in the case of (a) an intraband THz probe and (b) an interband optical probe.
The discrete modes differ between (a) and (b), while the spectral continua (UU, LL, and decay) are nearly identical.
The eigenvalues are symmetric under $\Re \varepsilon \to - \Re \varepsilon$.
The T modes are the only discrete modes predicted with an intraband probe, while an interband probe may see G, M, and H modes.
(The default parameter values for all figures are given in Sec.\ \ref{sec:results}.)
}
\label{fig:lineigen}
\end{figure*}
\subsection{Response function}
The vectors $\vec{x} (m,t)$ and $\vec{s} (m,t)$ are given in block form in Eqs.\ \eqref{eq:definition-x} and \eqref{eq:Mblockform}.
Their frequency domain counterparts are
\begin{IEEEeqnarray}{rCl}
\vec{x}(m,\omega) &=& \left(\begin{matrix}{\vec{p}}^{\,(1)}_{e h}(m,\omega)\\
{\vec{p}}^{\, (1)\ast}_{e h}(-m,-\omega)\\
{\vec{f}}^{\, (1)}_e (m,\omega)\\\end{matrix}\right)
\label{eq:xomblockform}
\quad , \quad \\
\vec{s}(m,\omega) &=&
\left(\begin{matrix}{\vec{s}}_{p} (m,\omega) \\
-{\vec{s}}_p^{\, \ast}(-m,-\omega)\\
{\vec{s}}_f(m,\omega)\\\end{matrix}\right) .
\label{eq:somblockform}
\end{IEEEeqnarray}
Write the left and right eigenvectors as
\begin{equation}
\vec{y}_{n} =
\begin{pmatrix}
\vec{X}_n \\ \vec{Y}_n \\ \vec{Z}_{n}
\end{pmatrix}
, \quad
\vec{z}_{n} =
\begin{pmatrix}
\vec{X}_{n}^{\prime} \\ \vec{Y}_{n}^{\prime} \\ \vec{Z}_{n}^{\prime}
\end{pmatrix},
\label{eq:eigveccomp}
\end{equation}
respectively.
With these definitions, Eq.\ \eqref{eq:linrespvecomeigvecexp} becomes
\begin{widetext}
\begin{equation}
\left(\begin{array}{l}
\vec{p}^{\, (1)}_{e h}(m,\omega) \\
\vec{p}^{\,(1) *}_{e h}(-m,-\omega) \\
\vec{f}^{\, (1)}_e (m,\omega)
\end{array}\right)=\sum_{n=1}^{N} \frac{1}{\hbar \omega-\lambda_{n}}\left(\begin{array}{l}
\vec{X}_{n} \\
\vec{Y}_{n} \\
\vec{Z}_{n}
\end{array}\right)\left(\vec{X}_{n}^{\prime \dagger} \vec{s}_{p}(m,\omega)-\vec{Y}_{n}^{\prime \dagger} \vec{s}_{p}^{\, *}(-m,-\omega)+\vec{Z}_{n}^{\prime \dagger} \vec{s}_{f}(m,\omega)\right)
\label{eq:xiomofxyzsblock}
\end{equation}
The interband polarization is written in phase-amplitude form in
Eqs.\ \eqref{eq:rm1komofp} and \eqref{eq:phimkomofrp}.
We perform the same transformation on the source vector
\begin{eqnarray}
\vec{s}_{R}(m,\omega)&=&\frac{1}{2}\left(\tilde{\vec{s}}_{p}(m,\omega)+\tilde{\vec{s}}_{p}^{\, *}(-m,-\omega) \right) ; \quad \tilde{s}_{p}(k,m,\omega)=e^{-i \phi^{(0)}(k)} s_{p}(k,m,\omega)
\label{eq:srofsxitilde} \\
\vec{s}_{\phi}(m,\omega) &=& \frac{1}{2i} \left(\tilde{\vec{s}}_{p}(m,\omega)-\tilde{\vec{s}}_{p}^{\, *}(-m,-\omega)\right)
\label{eq:sphiofsxitilde} \\
\Rightarrow \ \tilde{\vec{s}}_{p} (m,\omega) &=& \vec{s}_{R}(m,\omega) + i \vec{s}_{\phi}(m,\omega), \quad
\tilde{\vec{s}}_{p}^{\, *}(-m,-\omega) = \vec{s}_{R}(m,\omega) - i \vec{s}_{\phi}(m,\omega) \ .
\label{eq:sxitildeofsrphi}
\end{eqnarray}
For the case where $E_{\ell\lambda}^{(1)} \neq 0$, such as with an optical (interband) probe, $E_{\ell\lambda}^{(1)}$ can be written in $R$ and $\phi$ components in the same way as $\vec{p}_{eh}^{\, (1)}$.
With Eqs.\ \eqref{eq:srofsxitilde}--\eqref{eq:sxitildeofsrphi}, the coefficient in front of $\vec{y}_n$ in Eq.\ \eqref{eq:xiomofxyzsblock} can be written as
\begin{IEEEeqnarray*}{rCl}
c_{n} (\omega) &=& (\hbar\omega - \lambda_{n})^{-1} \left[ \vec{X}_{n}^{\prime \dagger} \vec{s}_{p}(m,\omega)-\vec{Y}_{n}^{\prime \dagger} \vec{s}_{p}^{\, *}(-m,-\omega)+\vec{Z}_{n}^{\prime \dagger} \vec{s}_{f}(m,\omega) \right] \\
&=& (\hbar\omega - \lambda_{n})^{-1} \left[ \left(\tilde{\vec{X}}_{n}^{\prime \dagger} -\tilde{\vec{Y}}_{n}^{\prime \dagger} \right) \vec{s}_{R}(m,\omega)
+ i \left(\tilde{\vec{X}}_{n}^{\prime \dagger} +\tilde{\vec{Y}}_{n}^{\prime \dagger} \right) \vec{s}_{\phi}(m,\omega)
+\vec{Z}_{n}^{\prime \dagger} \vec{s}_{f}(m,\omega) \right] \yesnumber
\end{IEEEeqnarray*}
\end{widetext}
where we have defined
\begin{equation}
\tilde{X}_{n k}^{\prime} = X_{n k}^{\prime} e^{-i \phi^{(0)}(k) }, \quad
\tilde{Y}_{n k}^{\prime} = Y_{n k}^{\prime} e^{i \phi^{(0)}(k) }
\end{equation}
Taking the sum and difference of the first two equations in Eq.\ \eqref{eq:xiomofxyzsblock} gives
\begin{equation}
\left(\begin{array}{l}
\vec{R}_{m}^{(1)}(\omega) \\
\vec{\alpha}_{m}^{\, (1)}(\omega) \\
\vec{f}^{\, (1)}_e (m,\omega)
\end{array}\right)=\sum_{n=1}^{N} c_{n}(\omega)\left(\begin{array}{c}
\frac{1}{2}\left(\tilde{\vec{X}}_{n}+\tilde{\vec{Y}}_{n}\right) \\
\frac{1}{2 i}\left(\tilde{\vec{X}}_{n}-\tilde{\vec{Y}}_{n}\right) \\
\vec{Z}_{n}
\end{array}\right)
\label{eq:rphivecexp}
\end{equation}
where we have defined
\begin{IEEEeqnarray*}{rCl}
\alpha_{m}^{\, (1)} (k,\omega) &=& R^{(0)}(k) \phi_{m}^{(1)} (k,\omega), \\
\tilde{X}_{nk} &=& {X}_{nk} e^{-i {\phi}^{(0)} (k)}, \\
\tilde{Y}_{nk} &=& Y_{nk} e^{i \phi^{(0)} (k)}
.
\end{IEEEeqnarray*}
Eq.\ \eqref{eq:rphivecexp} can also be written in a response function form:
\begin{equation}
\left(\begin{array}{l}
\vec{R}_{m}^{(1)}(\omega) \\
i \vec{\alpha}_{m}^{\, (1)}(\omega) \\
\vec{f}^{\, (1)}_e (m,\omega)
\end{array}\right)
= \begin{pmatrix}
F_{RR} & F_{R\phi} & F_{R f} \\
F_{\phi R} & F_{\phi \phi} & F_{\phi f} \\
F_{f R} & F_{f \phi} & F_{ff}
\end{pmatrix}
\begin{pmatrix}
\vec{s}_{R} (m,\omega) \\
i \vec{s}_{\phi} (m,\omega) \\
\vec{s}_{f} (m,\omega)
\end{pmatrix}
\label{eq:rphivecrespfunc}
\end{equation}
where the block sub-matrices are given by
\begin{equation}
F_{i j}(\omega)=\sum_{n=1}^{N} \frac{\vec{a}_{n i} \vec{b}_{n j}^{\, \dagger}}{\hbar \omega-\lambda_{n}} \quad , \quad i, j=R, \phi, f \label{eq:Fijom-phas}
\end{equation}
\begin{IEEEeqnarray}{rCl}
\left(\begin{array}{c}
\vec{a}_{n R} \\
\vec{a}_{n \phi} \\
\vec{a}_{n f}
\end{array}\right) &=& \left(\begin{array}{c}
\frac{1}{2}\left(\tilde{\vec{X}}_{n}+\tilde{\vec{Y}}_{n}\right) \\
\frac{1}{2}\left(\tilde{\vec{X}}_{n}-\tilde{\vec{Y}}_{n}\right) \\
\vec{Z}_{n}
\end{array}\right) \quad, \\
\left(\begin{array}{c}
\vec{b}_{n R} \\
\vec{b}_{n \phi} \\
\vec{b}_{n f}
\end{array}\right) &=& \left(\begin{array}{c}
\tilde{\vec{X}}_{n}^{\prime}-\tilde{\vec{Y}}_{n}^{\prime} \\
\tilde{\vec{X}}_{n}^{\prime}+\tilde{\vec{Y}}_{n}^{\prime} \\
\vec{Z}_{n}^{\prime}
\end{array}\right) \quad .
\end{IEEEeqnarray}
The classification of a single mode is determined by the corresponding summand in Eq.\ \eqref{eq:Fijom-phas}.
We call a mode $n$ a pure $\begin{pmatrix}
\text{amplitude} \\ \text{phase} \\ \text{density}
\end{pmatrix}$ mode if its
$\begin{pmatrix}
\vec{a}_{n R} \\
\vec{a}_{n \phi} \\
\vec{a}_{n f}
\end{pmatrix}$
component is predominant.
If none of the $\vec{a}_{ni}$ components can be said to clearly dominate, then it is a mixed mode.
Collective modes have eigenvectors distributed over a relatively wide range of $k$ values, and eigenvalues which are discrete, rather than part of a continuum.
To be a Higgs mode, a mode must be a collective amplitude mode, with $\vec{a}_{R} \gg \vec{a}_{\phi},\vec{a}_{f}$; and to be a Goldstone mode, a mode must be a collective phase mode, with $\vec{a}_{\phi} \gg \vec{a}_{R}, \vec{a}_{f}$.
In Figs.\ \ref{fig:ampl_phas_box_log}
and \ref{fig:ampl_phas_lograt}, the symbols $R$, $\alpha$, and $f$, for a given mode, with or without the superscript $^{(1)}$, refer to $\vec{a}_{R}$, $\vec{a}_{\phi}$, and $\vec{a}_{f}$ for a right eigenvector, and $\vec{b}_{R}$, $\vec{b}_{\phi}$, and $\vec{b}_{f}$ for a left eigenvector, respectively.
\section{Results and Discussion}
\label{sec:results}
In the following, we present numerical results.
Unless otherwise noted, we use the following parameter values:
number of quantum wells $N_{QW}=1$,
effective Bohr radius $a_{B}=14$ nm,
background dielectric constant $\epsilon_{b}=16.1$,
unrenormalized band gap $E_{g}=1.562$ eV;
$E_{B}^{2D}=12.8$ meV, where the exciton binding energy in 2D
is related to the one in 3D via $E_{B}^{2D}=4E_{B}^{3D}$ and $E_{B}^{3D}=\frac{\hbar ^{2}}{2m_{r}a_{B}^{2}}$ determines the reduced e-h mass,
$m_{r}^{-1}=m_{e}^{-1}+m_{h}^{-1}$ ($m_{e}=m_{h}$ in our approximation);
therefore the effective electron mass is $m_{e} = 0.121m_{0}$, where $m_{0}$ is the free electron mass;
cavity resonance frequency $\hbar \omega _{cav}=1.550$ eV,
screening wavenumber $\kappa_{0} = 9 \times 10^{-3} a_{B}^{-1}$,
dephasing $\gamma = 0.2$ meV, Fermi relaxation rate $\gamma_{F} = 2\gamma$,
pump relaxation rate $\gamma_{p}=0.4$ meV, non-radiative decay rate $\gamma_{nr}=10^{-4}$ meV, total distribution relaxation rate $\gamma_{f}= \gamma_{F} + \gamma_{p} +\gamma_{nr}= 0.8001$ meV, cavity decay rate $\gamma _{cav}=0.1$ meV,
effective electron temperature $T = 50$ K,
and interband coupling constant $\Gamma_{eh}^{\lambda} = 64.04$ peV$\cdot$m.
The default pump density is $n_{p} = 1 a_{B}^{-2}$.
In Figs.\ \ref{fig:sm1261_cond_re_dens}--\ref{fig:bcs_gap_dens},
the dephasing is $\gamma = 0.5$ meV, so $\gamma_{F}= 1$ meV and $\gamma_{f}= 1.4001$ meV.
\subsection{Fluctuation Spectrum}
\label{subsec:fluct-spect}
In Fig. \ref{fig:lineigen} we show the complex eigenvalues, comparing the
new case of intraband fluctuations (fluctuations triggered by THz fields)
with the known spectrum \cite{binder-kwong.2021}
for the case of interband fluctuations (i.e., fluctuation modes triggered by interband fields).
The interband modes contain the Goldstone modes $G_0$, Goldstone companion modes $G_1$, discrete collective modes ($M$, $H$), a decay continuum (vertical continuum in the figure, at the lasing frequency), and positive and negative frequency spectral continua (almost horizontal in the figure) with the `hook' feature.
See Ref.\ \onlinecite{binder-kwong.2021} for more details on the collective modes in Fig.\ \ref{fig:lineigen}(b) and on the evolution of these optically-induced fluctuation modes from below to above the lasing threshold.
The intraband modes do not contain the Goldstone modes (because of their different angular momentum), but they do contain collective modes T$_i$ and continua similar to the interband modes.
\begin{figure*}
\centering
\includegraphics{Fig03-sm1233_eigenergkspect.pdf}
\caption{The real and imaginary parts of the UU continuum eigenvalues $\varepsilon_{k}$ as a function of the wavenumber $k$ at which the corresponding eigenvector's magnitude peaks, i.e., $\varepsilon_{k} \equiv \lambda_{n}$ with $k = \mathrm{arg\,max}_{k'} |\vec{X}_{n} (k')|$, taken over all $n$ in the UU continuum. (Here, $\lambda_{n}$ and $\vec{X}_{n}$ are defined in Eqs.\ \eqref{eq:righteig} and \eqref{eq:eigveccomp}, resp.)
$\varepsilon_{k}$ for the THz response matrix is nearly identical.
(a) The real part is $\Re \varepsilon_{k} = 2 \tilde{E}_{k}$, twice the excitation/branch energy.
The minimum branch energy, i.e., the BCS gap $\tilde{E}_{gap}^{pair}$, occurs at $k_{BCS}$. $k_{BCS}$ divides the UU continuum into high and low $k$ regions, which are also indicated in
Figs.\ \ref{fig:lineigen} and \ref{fig:bandstruct}.
(b) $\Im \varepsilon_{k}$ is bounded above by the dephasing $-\gamma$ and below by $\approx -\frac{1}{2}\left(\gamma+\gamma_{f}\right)$, with this minimum at $\approx k_{\ell}$. A second kink occurs near $k_{BCS}$.
In the case where $\gamma_{f}$ is taken to be $\gamma$,
$\Im \varepsilon_{k} = -\gamma$ uniformly.}
\label{fig:eigenergkspect}
\end{figure*}
The UU and LL continua can be traced back to the single-particle spectrum (electronic band structure) by noting that their eigenfunctions are sharply peaked at a given wave number $k$, which is then associated with the corresponding eigenvalue $\varepsilon$.
This gives the real part of the eigenenergies vs $k$ as $\Re \varepsilon_{k} = 2 \tilde{E}_{k}$.
The resulting plot, Fig.\ \ref{fig:eigenergkspect}, shows the wave vector dependence of the energies of the continuum states.
Fig.\ \ref{fig:eigenergkspect}(a) shows the real part of the energies vs $k$, in analogy to the peak positions of the continuum states in the first-order density response function shown in Fig.\ \ref{fig:sm1233_f1_abs_clrmap_no0}. The corresponding imaginary part is shown in Fig.\ \ref{fig:eigenergkspect}(b).
It is then possible to identify the ``low $k$'' hook feature with the single-particle states created by the light-induced bands at $k<k_{BCS}$, where $k_{BCS}$ is the location of the light-induced gap in the single-particle spectrum, Fig. \ref{fig:bandstruct}.
\begin{figure}
\centering
\includegraphics{Fig04-sm1270_thz_eigval_tmode_track.pdf}
\caption{Plot of the eigenvalues, $\Im \varepsilon$ vs $\Re \varepsilon$, for the THz $M(m=1)$ matrix, showing the evolution of the modes T$_{0}$ (above) and T$_{1}$ (below) for color-coded pump densities $n_{p}$ from $0.6/a_B^2$ through $2/a_B^2$ in increments of $0.1/a_B^2$. The T$_{0}$ and T$_{1}$ mode energies are roughly symmetric with respect to the dashed line at $-\frac{1}{2}(\gamma+\gamma_{f})$.
The collective (discrete) T modes are shown as triangles, connected by thick lines;
and the UU continua are shown as thin solid lines.}
\label{fig:thz_eigval_tmode_track}
\end{figure}
Figure \ref{fig:thz_eigval_tmode_track}, similar in format to Fig.\ \ref{fig:lineigen}(a), shows the eigenvalue and T mode evolution with increasing $n_{p}$.
For high enough $n_{p}$, the T$_0$ modes can become unstable ($\Im \varepsilon > 0$).
\begin{figure*}
\centering
\includegraphics{Fig05-sm1233_eigenvec.pdf}
\caption{Plots of the magnitude of the
$\vec{p}_{eh}^{\, (1)}$
component of some right eigenvectors, i.e., $\vec{X}$ in the notation of Sec.\ \ref{sec:phasampresp}, and denoted $\delta P(k)$ here. The eigenvectors are unitless, and each has its own arbitrary scaling factor. Only the T modes are taken from the THz response matrix; the rest are taken from the optical probe response matrix. (See the supplement to Ref.\ \cite{binder-kwong.2021} for details of the optical probe response. The interband-probe $M$ matrix differs only in having a different constant factor on $V_{k,k'}^{0}$, in having only $m=0$ components, and therefore in having a nonzero $E_{\ell\lambda}^{(1)}$.)
(a) Collective, discrete-eigenvalue modes. All have minima near the laser $k_{\ell}$ and the Fermi $k_{F}$ wavenumbers. Here, $k_{\ell}$ is defined by $\tilde{\xi} (k_{\ell})=0$, where $\tilde{\xi}(\mathbf{k})$ is defined in Eq.\ \eqref{eq:xiergdef}; and $k_{F}$ is defined by $f_{e}^{(0)} (k_{F}) = \frac{1}{2}$ and $p_{eh}^{(0)} (k_{F}) = 0$.
Note that $k_{\ell} \neq k_{BCS}$, where the BCS gap wavenumber $k_{BCS}$ is defined as $2\tilde{E}(k_{BCS}) = \tilde{E}_{gap}^{pair}$ (see Eqs.\ \eqref{equ:E-single-particle-open} and \eqref{eq:def-epairgap}).
(b) An evenly-spaced sampling of UU continuum modes, over a portion of the energy range at which the $\tilde{E}(k)$ band is doubly degenerate with respect to $k$.
Close to $k_{BCS}$, the eigenfunctions have two peaks. These peaks occur at pairs of $k$ values, $k_{1}$ and $k_{2}$, for which $\tilde{E} (k_1) = \tilde{E} (k_2)$. For eigenenergies greater than the band where $\tilde{E}(k)$ is $k$-degenerate, the eigenfunction magnitudes are simple $k$-peaks.
The LL modes are identical, except for the interchanges $\delta P \leftrightarrow \delta P^{\ast}$ and $\delta E \leftrightarrow \delta E^{\ast}$, where $(\delta P, \delta E)$ and $(\delta P^{\ast}, \delta E^{\ast})$ correspond to $\vec{X}$ and $\vec{Y}$ in Eq.\ \eqref{eq:eigveccomp}, respectively. Their $\delta f$ components ($\vec{Z}$ in Eq.\ \eqref{eq:eigveccomp}) are identical. $|\delta P|$ and $|\delta E|$ are greater than $|\delta P^{\ast}|$ and $|\delta E^{\ast}|$ in the UU continuum, while per the interchange symmetry, this is reversed for LL. The THz response matrix $M$ eigenfunctions are very similar, except they lack the $\delta E$ and $\delta E^{\ast}$ components.}
\label{fig:eigenvec}
\end{figure*}
Figure \ref{fig:eigenvec} shows the eigenvectors for selected modes, with Fig. \ref{fig:eigenvec}(a) showing collective modes and Fig. \ref{fig:eigenvec}(b) continuum states.
\begin{figure}
\centering
\includegraphics{Fig06-sm1233_f1_abs_clrmap_no0.pdf}
\caption{
(Color online.)
First-order density (carrier distribution function) response $|f_{e}^{(1)} (k,m=1,\omega)|$.
The curved (yellow) peak location line corresponds to vertical transitions within, say, the conduction bands shown in Fig. \protect\ref{fig:bandstruct},
and the horizontal peak line corresponds to the T modes.
%
%
The $f^{(1)}=0$ line at 1.35 (2.4) $a_B^{-1}$ is
at $k_{\ell}$ ($k_{F}$), defined in the caption to Fig.\ \ref{fig:eigenvec}.
}
\label{fig:sm1233_f1_abs_clrmap_no0}
\end{figure}
\begin{figure}
\centering
\includegraphics{Fig07-sm1233_f1_abs_clrmap.pdf}
\caption{Color map of $|f_{e}^{(1)} (k,m=1,\omega)|$.
Maxima occur at $\pm 2\tilde{E}_{k}$, corresponding to the structure in the eigenvalues $\Re \varepsilon_{k}$; at $\omega = 0$ for all $k$, as is expected from the $1/\omega$ factor present in the THz source vector $\vec{s}$; and at $\hbar\omega \approx 2\tilde{E}_{k_{\ell}}$, from $k=0$ to the second intersection with $2\tilde{E}_{k}$, corresponding to the T$_0$ and T$_1$ resonance.
There are two minima, at $k = k_{\ell}$ and at $k = k_{F}$ for all $\omega$, which correspond to the minima in $|p_{eh}^{(0)} (k)|$.}
\label{fig:f1_abs_clrmap}
\end{figure}
An important consequence of Fig. \ref{fig:lineigen} is the fact that both interband and intraband spectra show a BCS-like gap. In both cases, the gap is not only that between the frequency of the order parameter and excited continuum states (at the hook-like feature), but also involves collective modes, M and T, in the vicinity of the continuum gap. These modes stem from the strong Coulomb interaction and are not present in a photon laser, where Coulomb interactions are negligible \cite{spotnitz-etal.2021}.
In particular, Fig.\ \ref{fig:lineigen}(a) predicts the possibility to experimentally observe the BCS-like gap using THz radiation (more details below, cf.\ Fig.\ \ref{fig:sm1261_cond_re_dens}).
To further analyze the occurrence of collective modes due to the many-particle Coulomb interaction, it is helpful to look at the first-order (in the probe) modification of the carrier distribution, $f^{(1)}$,
as a function of wave vector and frequency, Fig.\ \ref{fig:sm1233_f1_abs_clrmap_no0}. In addition to the light-induced band from the single-particle spectrum, we see a horizontal line at about 8.5 meV that is due to the excitation of the collective T modes as shown
in Fig.\ \ref{fig:lineigen}.
Fig.\ \ref{fig:f1_abs_clrmap} provides a larger-scale view of Fig.\ \ref{fig:sm1233_f1_abs_clrmap_no0}.
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{Fig08-sm1233_ampl_phas_box_log.pdf}
\caption{
(Color online.)
Ratios of the arc length $\alpha$ and density $f$ to the amplitude components $R$ of selected collective modes (Goldstone, M and T according to
Fig. \protect\ref{fig:lineigen}) for right (R) and left (L) eigenvectors with averaging over $k$ as
described in the text.
}
\label{fig:ampl_phas_box_log}
\end{figure}
\begin{figure*}
\centering
\includegraphics{Fig09-sm1233_ampl_phas_lograt.pdf}
\caption{The base-ten logarithm of the arc length-to-amplitude $\left|\alpha^{(1)} / R^{(1)} \right|$ and density-to-amplitude $\left|f^{(1)} / R^{(1)} \right|$ ratios as a function of wavenumber $k$ for the left and right eigenvectors of selected modes. In the plots of $\left|\alpha^{(1)} / R^{(1)} \right|$ for the optical-probe eigenvectors, dashed lines denote the $\delta E$ component, and solid lines denote the $\delta P$ component. The THz-probe modes have no associated $\delta E$ component, and so only the $\delta P$ component of the $\left|\alpha^{(1)} / R^{(1)} \right|$ ratio is plotted for T$_0$ and T$_1$.
The $\delta E$ eigenvector components have no defined $k$-dependence, and so their $\left|\alpha^{(1)} / R^{(1)} \right|$ ratios are plotted as constant in $k$. The figure shows the $\delta E$ ratios are the asymptotic limit for high $k$ of the $\delta P$ ratios.
The UU and Decay modes are merely example modes taken from their respective continua.
All modes except T$_0$ and T$_1$ are taken from the optical probe response matrix.
Many ratios exhibit extrema near $k_{\ell} = 1.35 a_{B}^{-1}$.
The main outlier in the set is that the $\left|\alpha^{(1)} / R^{(1)} \right|$ ratios of the right eigenvectors show the undamped Goldstone mode G$_0$ to clearly be a phase mode.}
\label{fig:ampl_phas_lograt}
\end{figure*}
\subsection{Mode characterization}
\label{subsec:mode-characterization}
Conventional discussions of fluctuation modes focus on whether the modes are phase or amplitude modes \cite{varma.2002}.
However, in our general theory the fluctuations also involve density fluctuations.
To analyse the fluctuation modes in terms of phase, amplitude, and density fluctuations, we write each interband variable (i.e., each $p(k,m)$ and $E$) as a complex number
$z=Re^{i \phi}$, with $R^{(0)}e^{i \phi^{(0)}}$ the zeroth-order steady state solution.
The phase $\phi$ is varied at first order in the perturbation, leading to a first-order arc length variation $\alpha^{(1)} = R^{(0)} \phi^{(1)}$.
The first-order (in the external probe) amplitude modulation is denoted by $R^{(1)}$.
%
%
In order to avoid ambiguities stemming from the separate normalization of each left (L) and right (R) eigenvector, we show the ratios
$\alpha^{(1)}/R^{(1)}$ and $f^{(1)}/R^{(1)}$ for both L and R.
%
In our many-particle approach, the right eigenvector of a Goldstone mode has non-zero phase fluctuations (in $p(k)$ and $E$) but zero amplitude and density fluctuations for each $k$. The right eigenvector of a Higgs mode has zero phase fluctuations for each $k$. Both left and right eigenvectors are needed to construct the response function. The fluctuation matrix being non-Hermitian, the left and right eigenvectors for the same eigenvalue can be quite different. It is hence possible that the response function may mix the three subsets of amplitude, phase and density.
For example, an external amplitude fluctuation can create a phase mode if the left eigenvector has an amplitude component and the right eigenvector a phase component.
See Sec.\ \ref{sec:phasampresp} for more details.
Figure \ref{fig:ampl_phas_box_log} summarizes our results in terms of $k$-averaged means: rather than showing the $k$-resolved results, we show averages over $k$.
On the horizontal axis, L or R denotes the ratio for the left or right eigenvectors, respectively. $R$, $\alpha$, and $f$ are the amplitude, arc-length, and density components of the eigenvectors, and $\left\langle \alpha/R \right\rangle$ and $\left\langle f/R \right\rangle$ denote whether the ratio to $R^{(1)}$ is of the arc length or the density, respectively. In the vertical axis label, the expression $\left\langle x/R \right\rangle$ denotes
%
\begin{equation}
\left\langle x/R \right\rangle \equiv
\begin{cases}
\frac{1}{N_{k}} \sum_{i=1}^{N_{k}} \left\vert x^{(1)}(k_{i})/R^{(1)}(k_{i}) \right\vert , & \text{for} \ P^{(1)} \\
\left\vert x^{(1)}/R^{(1)} \right\vert, & \text{for} \ E^{(1)} .
\end{cases}
\end{equation}
Figure \ref{fig:ampl_phas_lograt} provides the underlying $k$-resolved data.
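(For reference, the $k$-averaged ratios reduce to a short computation; the sketch below, with a hypothetical block ordering of the eigenvector, shows the $P^{(1)}$ case:)
\begin{verbatim}
import numpy as np

def mode_character(y, Nk):
    # y: eigenvector in the (R, alpha, f) representation,
    # length 3*Nk, blocks ordered as in Eq. (rphivecexp).
    R, alpha, f = y[:Nk], y[Nk:2*Nk], y[2*Nk:]
    return (np.mean(np.abs(alpha / R)),
            np.mean(np.abs(f / R)))
\end{verbatim}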
\begin{figure}
\centering
\includegraphics{Fig10-sm1261_cond_re_dens.pdf}
\caption{
The real part of the total conductivity, which is proportional to the absorptivity $A$, as a function of frequency. The key is pump density in units of inverse effective Bohr radius squared $a_{B}^{-2}$. All pump densities are above threshold, which is at $0.41/a_B^2$.
Here, $\gamma = 0.5$ meV,
and $\gamma_f = 1.4001$ meV.
The dashed lines correspond to the frequency of the BCS-like gap, $\tilde{E}_{gap}^{pair}$.
For the highest pump density, the frequencies of the T-modes and of the gain peak corresponding to the UU-continuum are indicated.
}
\label{fig:sm1261_cond_re_dens}
\end{figure}
Figure \ref{fig:ampl_phas_box_log} gives the justification to call $G_0$ a Goldstone (phase) mode, as the numerical result yields the phase component (presented as arc length) of both the interband polarization and the cavity field four orders of magnitude larger than the amplitude and density fluctuations. On the other hand, the collective modes M in the interband response and T in the THz response are mixed modes since all ratios (arc length/amplitude, density/amplitude) are approximately of order 1; they are neither Higgs nor Goldstone modes.
We do not observe pure Higgs modes.
\subsection{Spectroscopic observables}
\label{sec:spectroscopic-observables}
\begin{figure}
\centering
\includegraphics{Fig11-sm1261_cond_im_r_np.pdf}
\caption{(a) The imaginary part of the THz-domain conductivity, $10^{3} \Im \sigma_{T}(\omega)/c$. (b) The magnitude of the reflection coefficient $|R(\omega)|$, as a percentage. Note that $R$ is the ratio of reflected to incident amplitudes, while $|R|^2$ is the ratio of intensities.
Both (a) and (b) are plotted against the frequency $\hbar\omega$ (meV).
The key is the pump density $n_{p}$ in units of inverse effective Bohr radius squared $a_{B}^{-2}$.
The dashed lines denote the BCS gaps $\tilde{E}_{gap}^{pair}$ for the corresponding pump densities.
Both $\Im \sigma(\omega)$ and $|R(\omega)|$ are dominated by their $\omega^{-1}$ dependence, coming from the diamagnetic part of the conductivity, but some features can be seen in the vicinity of $\tilde{E}_{gap}^{pair}$, particularly at the highest $n_{p}$. Any features around the BCS gap result from the paramagnetic part of the conductivity.}
\label{fig:cond_im_r_np}
\end{figure}
\begin{figure}
\centering
\includegraphics{Fig12-sm1261_cond_re_at_np.pdf}
\caption{Plotted versus THz probe frequency $\hbar\omega$ (meV): (a) Real THz-frequency conductivity $10^{4} \Re \sigma_{T}(\omega)/c$. (b) Transmission coefficient $10^{4}\left(|T(\omega)|-1\right)$. (c) Absorptivity $10^{3} A(\omega)$.
The key is the pump density $n_{p}$ in units of $a_{B}^{-2}$.
The dashed lines denote the BCS gaps $\tilde{E}_{gap}^{pair}$ for the respective pump densities.
Note that $T$ is the ratio of transmitted to incident amplitudes, while $|T|^2$ is the ratio of intensities. $A$ is the fraction of incident intensity which is absorbed. $A <0$ denotes gain.
At higher pump densities, two minima may be seen in $\Re \sigma$ and $A$, in the vicinity of the BCS gap. The first valley corresponds to the frequency of the T$_0$ and T$_1$ modes.
As the pump density drops below that required for the emergence of the T modes, the second minimum spreads out to become a single broad minimum.
}
\label{fig:cond_re_at_np}
\end{figure}
\begin{figure}
\centering
\includegraphics{Fig13-sm1290_bcs_gap_dens.pdf}
\caption{Comparison of the frequencies $\hbar\omega$ (meV) of the BCS-like gap $\tilde{E}^{pair}_{gap}$ and of the minima $\omega_{B}$ and $\omega_{T}$ of the THz-domain conductivity $\Re \sigma_{T}(\omega)$, for varying pump density $n_{p}$.
The light red line denotes the magnitude of the BCS-like gap, $\tilde{E}^{pair}_{gap}$.
The dark blue line shows the frequency $\hbar\omega_{B}$ for which the real part of the conductivity $\Re\sigma(\omega)$ is a minimum; that is, $\hbar\omega_{B}$ is defined such that $\Re\sigma(\omega_B)=\min \Re\sigma(\omega)$. For pump densities at which the T modes can be discerned, the $\hbar\omega_{B}$ minimum is the broader, deeper, higher-frequency conductivity minimum.
The purple line tracks the frequency $\omega_{T}$ of the first, smaller minimum in $\Re \sigma$, corresponding to the T modes. This is assigned $\omega_{T}=0$ when the peak-finding algorithm cannot find this minimum, i.e., for pump densities below the emergence of the T modes.
The plot shows that the BCS-like gap $\tilde{E}^{pair}_{gap}$ is closely tracked by the gain maximum, over a wide range of pump densities, and in particular by the frequency $\hbar\omega_{T}$ of the T mode $\Re \sigma$ minimum.
}
\label{fig:bcs_gap_dens}
\end{figure}
Finally, we use the eigenvectors and eigenvalues to obtain the intraband conductivity, the real part of which is proportional to the THz absorption. Figure
\ref{fig:sm1261_cond_re_dens} shows extrema in the conductivity that approximately scale with the BCS-like gap, hence predicting a THz-based observation of the gap. At low pump densities, we have one broad extremum stemming mostly from the continuum states, which, in combination, lead to a broad THz gain band. At high pump densities (here $1.4 a_{B}^{-2}$), a narrow line becomes visible. This stems from the two T-modes in Fig.\ \ref{fig:lineigen}.
They too yield a region of THz gain, but this gain peak becomes a narrow resonance since the average imaginary part of their eigenvalues,
$\frac{1}{2} (\Im \varepsilon_{T_0} + \Im \varepsilon_{T_1})$,
approaches the dephasing (specifically, $-\frac{1}{2} (\gamma + \gamma_f)$; see Fig.\ \ref{fig:thz_eigval_tmode_track}),
which can in principle be made very small.
As the pump density is increased or the interband dephasing rate is decreased (beyond what is plotted here), the T-modes yield a dispersive-like feature in $\Re \sigma_{T}$ and a Lorentzian feature in $\Im \sigma_{T}$, with increasing height-to-width ratios.
Figure \ref{fig:cond_im_r_np} provides the corresponding data for the imaginary part of the conductivity.
Figure \ref{fig:cond_re_at_np} provides data analogous to Fig.\ \ref{fig:sm1261_cond_re_dens}, but for a larger selection of pump densities and also showing, in addition to the real part of the THz conductivity, plots of the transmission and absorption.
Fig.\ \ref{fig:sm1261_cond_re_dens} shows that the extrema in the THz conductivity track the polaritonic BCS gap (shown as vertical dashed lines in that figure).
This tracking behavior is made more apparent and summarized in
Fig.\ \ref{fig:bcs_gap_dens}.
\section{Conclusion}
In summary, we have shown that, in a polariton laser operating in the polaritonic BCS regime,
fluctuations or external probes far detuned from the laser frequency $\omega_\ell$, i.e., the frequency of the order parameter, can induce fluctuation modes that are different from those induced by fluctuations or probes close to $\omega_\ell$.
Beyond establishing their existence, determining the characteristic properties of these modes can help with their future experimental identification.
The fluctuation modes in this new set have an orbital angular momentum different from that of the order parameter; they contain spectral continua as well as collective modes, but do not contain the Goldstone modes.
The collective modes are shown not to be pure amplitude (Higgs) modes.
All modes can contribute to THz gain, including collective modes T$_i$ with frequencies close to the BCS-like gap.
The THz gain resonance from these T modes can, at least in principle, be made arbitrarily narrow, as their
width-to-height ratio decreases with decreasing interband dephasing rate and with increasing pump density.
In this limit, the T$_0$ modes may also become unstable; this warrants further investigation.
The THz gain mechanism presented here is novel, but experimental verification is needed and its efficiency should be compared with that of existing THz emitters.
Future comparison of the modes found here with Bardasis-Schrieffer polaritons, defined in thermal-equilibrium systems \cite{bardasis-schrieffer.1961,sun-millis.2020}, is desirable, as it would enhance the understanding of fluctuations in lasers, which are examples of condensed, pumped-dissipative open systems.
Furthermore, as discussed in Ref.\ \cite{hu-etal.2021}, the experimental observation of the new modes can help solidify evidence for the polaritonic BCS states.
\begin{acknowledgments}
We gratefully acknowledge useful discussions with Hui Deng, University of Michigan;
financial support from the NSF under grant number DMR 1839570;
and the use of High Performance Computing (HPC) resources supported by the University of Arizona.
\end{acknowledgments}
\chapter{Introduction}
\label{sec1}
\section{Integrable equations}
As an introduction we briefly review the theory of integrable systems and give an overview of the preceding research.
There exist various distinct notions referred to under the name of `integrable' systems in mathematics and mathematical physics.
However, loosely speaking, we can state that differential equations are `integrable' if they are highly symmetric and have sufficiently `many' first integrals (conserved quantities) so that their integration is possible.
In the middle of the 19th century, J. Liouville first defined the notion of `exact integrability' of Hamiltonian systems of classical mechanics in terms of Poisson commuting invariants \cite{Arnold,Goriely}.
Let us consider a Hamiltonian $H=H(\mathbf{p},\mathbf{q})$ with $n$ degrees of freedom which is analytic in $\mathbf{p}=(p_1,\cdots,p_n),\mathbf{q}=(q_1,\cdots,q_n)\in\mathbb{R}^n$.
The Hamilton equations are
\[
\dot{q_i}=\frac{\partial H}{\partial p_i},\ \dot{p_i}=-\frac{\partial H}{\partial q_i}\ (i=1,2,\cdots, n).
\]
\begin{Definition}
The Hamiltonian $H(\mathbf{p},\mathbf{q})$ is Liouville integrable if there exist $n$ independent analytic first integrals $I_1=H,I_2,\cdots,I_n$ in involution $($i.e. $\left\{I_i,I_j\right\}=0)$.
\end{Definition}
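As a standard illustration (our own, not specific to the literature reviewed here), consider $n$ uncoupled harmonic oscillators with
\[
H(\mathbf{p},\mathbf{q})=\sum_{i=1}^n \frac{1}{2}\left(p_i^2+\omega_i^2 q_i^2\right),\qquad \omega_i>0.
\]
Taking $I_1=H$ and $I_i=\frac{1}{2}(p_i^2+\omega_i^2 q_i^2)$ for $i=2,\cdots,n$ gives $n$ independent analytic first integrals with $\{I_i,I_j\}=0$, so this Hamiltonian is Liouville integrable.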
In the late 1960s, it was found that localized solutions of certain partial differential equations can be understood by viewing these equations as infinite dimensional integrable systems.
These localized solutions are called solitons. The classical example of solitons is a solution of the Korteweg-de Vries equation (KdV equation) which describes shallow water wave phenomena \cite{Bouss1,Bouss2,Korteweg}. The discovery of solitary wave solutions dates back to the 1830s.
In 1834, Scott Russell discovered a solitary wave phenomenon while observing the motion of a boat in a canal. He noticed that the speed of the waves depends on their size, and that these waves will never merge---a large wave overtakes a small one \cite{Russell1,Russell2}. Later in 1895, Korteweg and de Vries proved that these waves can be simulated by the solutions of the following partial differential equation which is now called the KdV equation:
\begin{equation}
\frac{\partial u(x,t)}{\partial t}+6u(x,t)\frac{\partial u(x,t)}{\partial x}+\frac{\partial^3 u(x,t)}{\partial x^3}=0. \label{continuousKdV}
\end{equation}
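As a quick symbolic sanity check (our own illustration, assuming the SymPy library), one can verify that the standard one-soliton profile $u=\frac{c}{2}\,\mathrm{sech}^2\!\left(\frac{\sqrt{c}}{2}(x-ct)\right)$ solves \eqref{continuousKdV}:
\begin{verbatim}
# A minimal sketch (assumes sympy): check that the one-soliton
# profile u = (c/2) sech^2( sqrt(c)/2 (x - c t) ) solves
#   u_t + 6 u u_x + u_xxx = 0.
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)

u = c/2 * sp.sech(sp.sqrt(c)/2 * (x - c*t))**2
lhs = sp.diff(u, t) + 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(lhs.rewrite(sp.exp)))   # prints 0
\end{verbatim}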
The KdV equation became increasingly important when it was discovered that it can model many physical phenomena such as waves in plasmas and internal waves.
Zabusky and Kruskal found that the KdV equation governs a continuum limit of the Fermi-Pasta-Ulam lattice, and that the solutions of the KdV equation pass through one another and subsequently retain their characteristic form and velocity \cite{ZK1965}.
It has later been discovered that these soliton equations can be understood from a broader perspective.
In the 1980s, M. Sato and Y. Sato discovered that a wide class of nonlinear integrable equations and their solutions can be treated uniformly by considering them on an infinite dimensional Grassmannian \cite{SatoSato}. This is the notable `Sato theory', in which the Sato equation is a `master' equation that produces an infinite series of nonlinear partial differential equations. The theory is also called the theory of the KP hierarchy, since one of the simplest equations in this series is the Kadomtsev-Petviashvili equation (KP equation), which describes two-dimensional shallow water waves:
\[
\frac{\partial}{\partial x}\left( 4\frac{\partial u}{\partial t} -12 u\frac{\partial u}{\partial x} -\frac{\partial^3 u}{\partial x^3}\right)-3\frac{\partial^2 u}{\partial y^2}=0.
\]
The KdV equation and its soliton solutions are proved to be obtained from the reduction of the KP equation and its soliton solutions.
Next we review another important class of integrable differential equations: the Painlev\'{e} equations.
The Painlev\'{e} equations were originally discovered by P. Painlev\'{e} and B. Gambier as second order ordinary differential equations whose solutions do not have movable singularities other than poles \cite{Painleve,Gambier,Okamotobook,Okamoto,Okamoto2,Okamoto3,Okamoto4}.
\begin{Proposition}
Let us consider the differential equation
\begin{equation}
\frac{d^2 y}{dx^2}=R\left(x,y,\frac{dy}{dx}\right), \label{contiP}
\end{equation}
where $R(x,y,z)$ is a rational function of $y,z$ whose coefficients are analytic functions of $x$
defined on some domain $D\subset\mathbb{C}^3$.
If the equation \eqref{contiP} does not have movable singular points, then it falls into one of the following cases:
\begin{itemize}
\item Linear equations.
\item Equations of the form
\[
\left(\frac{dx}{dt}\right)^2-4 x^3+g_2 x+g_3=0.
\]
Their solutions are written in terms of the Weierstra\ss\ elliptic function.
\item Solvable equations.
\item One of the six Painlev\'{e} equations (P${}_{\mbox{\scriptsize{I} }}$, P${}_{\mbox{\scriptsize{II} }}$, P$_{\mbox{\scriptsize{III}}}$, P$_{\mbox{\scriptsize{IV}}}$, P$_{\mbox{\scriptsize{V}}}$, P$_{\mbox{\scriptsize{VI}}}$). We just present the first two of the Painlev\'{e} equations:
\begin{itemize}
\item Painlev\'{e} I equation (P${}_{\mbox{\scriptsize{I} }}$)
\[
\frac{d^2 x}{dt^2}=6x^2+t
\]
\item Painlev\'{e} II equation (P${}_{\mbox{\scriptsize{II} }}$)
\[
\frac{d^2 x}{dt^2}=2x^3+tx+\alpha
\]
$\cdots$
\end{itemize}
\end{itemize}
\end{Proposition}
In the 1970s, it was found that the correlation functions
of the two-dimensional Ising model are related to the Painlev\'{e} III equation \cite{Wu}, and since then the Painlev\'{e} equations have been investigated eagerly as one of the classes of integrable equations by K. Okamoto and many other researchers.
Also, the Painlev\'{e} equations can be obtained via similarity reduction of some soliton equations.
\section{Discrete integrable equations}
We review some of the topics on the integrability of discretized equations. Roughly speaking, the discrete integrable systems have `many' conserved quantities and soliton solutions. If the discretization is chosen appropriately, the discrete system preserves the essential properties that the corresponding continuous system possesses.
A discrete version of the KP equation is derived via the Miwa transformation from the KP hierarchy.
The Miwa transformation is the following transformation that changes the variables $(x_1,x_2,\cdots)$ to $(m_1,m_2,\cdots)$:
\[
x_n=\sum_{j=1}^{\infty} m_j \frac{1}{n(a_j)^n},
\]
where $n=1,2,\cdots$ and $a_1,a_2,\cdots \in\mathbb{C}\setminus \{0\}$ are distinct constants.
Let us suppose that the variables $m_i$ take only integer values, and consider a function $\tau(m_1,m_2,\cdots)$, which is a Miwa transformation of the $\tau$-function solution $\tau(x_1,x_2,\cdots)$ of the Sato's bilinear identity.
Then we obtain the following bilinear relation for distinct $i,j,k\in\mathbb{Z}$:
\begin{equation}
(a_i-a_j)\tau_{ij}\tau_k+(a_j-a_k)\tau_{jk}\tau_i+(a_k-a_i)\tau_{ki}\tau_j=0. \label{hirotamiwa}
\end{equation}
The equation \eqref{hirotamiwa} is the discrete KP equation, and is also called `Hirota-Miwa equation'.
The discrete KdV equation is obtained by imposing a restriction
\[
\tau(m_1+1,\ m_2+1,\ m_3)=\tau(m_1,\ m_2,\ m_3)
\]
to the Hirota-Miwa equation. This kind of restriction on the independent variables (imposing shift invariance, omitting some of the variables, etc.) to construct simpler classes of equations is called the `reduction'. It gives the following bilinear form of the discrete KdV equation:
\begin{equation}
(1+\delta)\sigma_{n+1}^{t+1}\sigma_n^{t-1}=\delta \sigma_{n+1}^{t-1}\sigma_n^{t+1} + \sigma_n^t \sigma_{n+1}^t. \label{bilineardkdv}
\end{equation}
Here $\sigma_n^t:=\tau(t,0,n)$.
(Note that, in this paper, the word `reduction' is also used to indicate other process: the projection modulo a maximal ideal.)
By introducing a new variable
\[
x_n^t=\frac{\sigma_n^t\sigma_{n+1}^{t-1}}{\sigma_{n+1}^t \sigma_n^{t-1}},
\]
we obtain the discrete KdV equation as the following nonlinear partial difference equation:
\begin{equation}
\frac{1}{x_{n+1}^{t+1}}-\frac{1}{x_n^t}+\frac{\delta}{1+\delta}\left(x_n^{t+1}-x_{n+1}^t \right)=0.
\label{dKdV1}
\end{equation}
In 1990, the discrete versions of the Painlev\'{e} equations have been discovered by A. Ramani, B. Grammaticos and J. Hietarinta \cite{RGH}.
They are considered to be integrable in the sense that they pass the singularity confinement test.
The singularity confinement test judges whether the spontaneously appearing singularities disappear after a few iteration steps of the systems.
For example, let us consider the following mapping related to the discrete Painlev\'{e} I equation:
\[
x_{n+1}=-x_{n-1}+\frac{1}{x_n^2}.
\]
If we evolve the equation from $x_0=u$ and $x_1=0$, then we have
\[
x_1=0,\ x_2=\infty,\ x_3=0,\ x_4=-\infty+\infty,
\]
thus $x_5$ is indeterminate.
However, if we introduce a small positive parameter $\epsilon>0$ and take $x_1=\epsilon$, then
\[
x_2=\epsilon^{-2}-u,\ x_3=-\frac{\epsilon(1-\epsilon^3-2\epsilon^2 u +\epsilon^4 u^2)}{(\epsilon^2 u -1)^2},
\]
\[
x_4=\frac{(\epsilon^2 u-1)(-u-2\epsilon +\epsilon^4+4\epsilon^3 u+3\epsilon^2 u^2-2\epsilon^5 u^2-3\epsilon^4 u^3+\epsilon^6 u^4)}{(1-\epsilon^3-2\epsilon^2 u +\epsilon^4 u^2)^2}.
\]
By taking the limit $\epsilon\to 0$, we obtain the time evolution as follows:
\[
x_1=0,\ x_2=\infty,\ x_3=0,\ x_4=u.
\]
In this case we can see that, by introducing a parameter $\epsilon$ in the initial value, the indeterminacy resulting from the singularity at $x_1=0$ is `confined' within finite time steps and then the initial value $u$ reappears.
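The computation above is easy to automate; the following sketch (our own, assuming the SymPy library) reproduces the confined singularity pattern $x_2=\infty$, $x_3=0$, $x_4=u$:
\begin{verbatim}
# A hedged illustration of the singularity confinement test for
#   x_{n+1} = -x_{n-1} + 1/x_n^2 ,
# iterating symbolically from x_0 = u, x_1 = epsilon, then epsilon -> 0.
import sympy as sp

u, eps = sp.symbols('u epsilon')
seq = [u, eps]
for _ in range(3):                      # compute x_2, x_3, x_4
    seq.append(sp.cancel(-seq[-2] + 1/seq[-1]**2))

for k, xk in enumerate(seq[2:], start=2):
    print(k, sp.limit(xk, eps, 0))      # x_2 -> oo, x_3 -> 0, x_4 -> u
\end{verbatim}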
Most integrable discrete systems have been proved to pass the test \cite{Grammaticosetal}.
We will treat some of the discrete Painlev\'{e} equations in the following sections.
Note that, although the singularity confinement test is a very powerful tool to detect the integrability of many discrete equations, it is not easy to apply the test to partial difference equations.
In 2014, after this thesis was submitted, the author and his collaborators introduced a new integrability criterion called the `co-primeness' condition, which can be considered as one type of generalization of the singularity confinement test.
The benefit of the co-primeness condition is that it is applicable also to partial difference equations; however, we do not treat this topic in this article and leave the details to other papers.
\section{Ultra-discrete integrable equations} \label{udintegrable}
The ultra-discrete integrable systems are obtained from the discrete integrable ones through a limiting procedure called `ultra-discretization'.
Both the dependent and independent variables of the ultra-discrete systems take discrete values, usually the integers. Therefore they are considered as cellular automata. The cellular automaton is a discrete computational model which consists of a regular grid of cells. Each cell has a finite number of states, corresponding to the values of the dependent variable of the ultra-discrete system.
It is studied not only in mathematical physics, but also in many fields in natural and social sciences such as computability theory, theoretical biology and jamology.
One of the most famous classes of cellular automata may be the Elementary Cellular Automata (ECA) \cite{Wolfram}. They are one-dimensional and the time evolution of a cell depends only on the cell itself and its two neighboring cells.
We give the ECA with `rule $90$', which is the `simplest non-trivial' ECA \cite{Wolframetal}, as an example. Let the values of one-dimensional cells at time step $t$ be $\{x_n^t\}_{n=-\infty}^\infty$ where each cell satisfies $x_n^t\in\{0,1\}$.
The next step $x_n^{t+1}$ is defined as the exclusive disjunction of $x_{n-1}^t$ and $x_{n+1}^t$, and therefore be expressed as $x_n^{t+1}\equiv x_{n-1}^t+x_{n+1}^t\mod 2$.
The time evolution on a large scale gives the shape of the Sierpi\'{n}ski gasket, a fractal. See the figure \ref{figuresier}.
\begin{figure}
\centering
\includegraphics[width=9cm,bb=110 500 480 740]{figure1.eps}
\caption{The time evolution $x_n^t$ of the ECA with rule $90$, where the dot `.' denotes $x_n^t=0$.}
\label{figuresier}
\end{figure}
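The rule-$90$ evolution is immediate to program. The following sketch (our own; the width, the number of steps and the periodic boundary are arbitrary choices) reproduces the pattern of the figure \ref{figuresier}:
\begin{verbatim}
# Rule 90: x_n^{t+1} = x_{n-1}^t + x_{n+1}^t (mod 2), from one seed cell.
WIDTH, STEPS = 31, 16
row = [0]*WIDTH
row[WIDTH//2] = 1                  # single live cell in the middle
for _ in range(STEPS):
    print(''.join('1' if c else '.' for c in row))
    row = [(row[n-1] + row[(n+1) % WIDTH]) % 2 for n in range(WIDTH)]
\end{verbatim}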
We are also interested in more complex cellular automata whose solutions behave analogous to those of some discrete integrable systems.
We present one way to ultradiscretize the discrete KdV equation and explain how this limiting procedure gives the Box Ball System (BBS). The BBS is one of the typical soliton cellular automata discovered by D. Takahashi and J. Satsuma, and has been investigated extensively by T. Tokihiro et al. \cite{TS1990,T1993,TTMS}. To simplify the process, we use the following lemma:
\begin{Lemma}
Under the boundary condition $\lim_{n\to -\infty}x_n^t=1$, the discrete KdV equation \eqref{dKdV1} takes the following form:
\begin{equation}
x_{n+1}^{t+1}=\left(\delta x_{n+1}^t+(1-\delta)\prod_{k=-\infty}^n\frac{x_k^{t+1}}{x_k^t}\right)^{-1}. \label{dKdVpermanentform}
\end{equation}
\end{Lemma}
To ultradiscretize the equation \eqref{dKdVpermanentform}, we first introduce an auxiliary variable $\varepsilon>0$ and change variables as
\[
x_n^t=e^{U_n^t/\varepsilon},\ \delta=e^{-L/\varepsilon}.
\]
By taking the logarithms of the both sides of \eqref{dKdVpermanentform}, we have
\[
U_{n+1}^{t+1}=-\varepsilon\log\left(e^{(U_{n+1}^t-L)/\varepsilon}+(1-e^{-L/ \varepsilon})\exp\left(\sum_{k=-\infty}^n(U_{k}^{t+1}-U_k^t)/\varepsilon\right)\right).
\]
By taking the limit $\varepsilon\to +0$,
and by using the identities
\[
\lim_{\varepsilon\to +0}\varepsilon\log(\exp(A/\varepsilon)+\exp(B/\varepsilon))=\max(A,B),
\]
and
\[-\max(A,B)=\min(-A,-B),\]
which are true for all $A,B\in\mathbb{R}$, we obtain the evolution equation of the BBS:
\begin{equation}
U_{n+1}^{t+1}=\min\left(L-U_{n+1}^t,\sum_{k=-\infty}^n(U_k^{t}-U_k^{t+1})\right). \label{hakotama1}
\end{equation}
Here the parameter $L$ is called the `capacity of the box'.
In particular, if $L=1$ the BBS \eqref{hakotama1} evolves inside of $\{0,1\}$.
We give an example of the time evolution in the figure \ref{figurebbs}.
\begin{figure}
\centering
\includegraphics[width=11cm,bb=100 590 500 740]{figure2.eps}
\caption{The time evolution $U_n^t$ of the BBS, where a dot `.' indicates a zero.}
\label{figurebbs}
\end{figure}
This solution corresponds to the three-soliton solution of the (continuous and discrete) KdV equation.
The origin of the name `Box Ball' system is that the solution of BBS can be seen as moving balls $(1$'s$)$ in an infinite array of empty boxes $(0$'s$)$.
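The evolution rule \eqref{hakotama1} is also easy to implement directly. The following sketch (our own; the initial state and the padding of empty boxes on the right are arbitrary choices) evolves a finite portion of the BBS with $L=1$:
\begin{verbatim}
# One time step of the BBS:
#   U_n^{t+1} = min( L - U_n^t , sum_{k<n} (U_k^t - U_k^{t+1}) ),
# assuming the state vanishes to the left of the listed cells.
def bbs_step(U, L=1):
    V, carry = [], 0            # carry = sum_{k<n} (U_k^t - U_k^{t+1})
    for u in U:
        v = min(L - u, carry)
        V.append(v)
        carry += u - v
    return V

state = [1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
for _ in range(4):
    print(''.join(str(c) if c else '.' for c in state))
    state = bbs_step(state) + [0]   # pad so the solitons can move right
\end{verbatim}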
Note that taking the ultra-discrete limit is closely related to taking the $p$-adic valuation of the given discrete equations as pointed out by S. Matsutani \cite{Matsutani}. We will look into this approach in more detail at the end of the paper.
Finally let us comment on another topic concerning ultra-discrete equations. The ultra-discrete analogs of the Painlev\'{e} equations have been studied recently. For example, an ultra-discrete version of the $q$-Painlev\'{e} II equation has been obtained through `ultra-discretization with parity variables' \cite{Mimura}, which is a generalized method of ultra-discretization, and its special solutions have been obtained. In connection with this theory of the extended ultra-discretization procedure, N. Mimura et al. proposed an ultra-discrete version of the singularity confinement test \cite{Mimura2}.
Another type of singularity confinement test for ultra-discrete equations has been proposed by N. Joshi and S. Lafortune, where the `singularities' for the max-plus equations have been introduced as the non-differentiable points of the piecewise linear functions \cite{Nalini1}.
\section{Arithmetic dynamical systems}
In this section let us summarize the definition of the field of $p$-adic numbers and briefly explain the `good' reduction.
Let $p$ be a prime number. A non-zero rational number $x \in {\mathbb Q}$ ($x \ne 0$) can be written uniquely as $x=p^{v_p(x)} \dfrac{u}{v}$ where $v_p(x), u, v \in {\mathbb Z}$ and $u$ and $v$ are coprime integers neither of which is divisible by $p$.
We call $v_p(x)$ the $p$-adic valuation of $x$.
The $p$-adic norm $|x|_p$ is defined as
\[|x|_p=p^{-v_p(x)}\quad (|0|_p=0).\]
The local field ${\mathbb Q}_p$ is a completion of ${\mathbb Q}$ with respect to the $p$-adic norm.
It is called the field of $p$-adic numbers, and its subring
\[
{\mathbb Z}_p:=\{x\in {\mathbb Q}_p | \ |x|_p \le 1\ (\leftrightarrow v_p(x)\ge 0)\}
\]
is called the ring of $p$-adic integers \cite{Murty}.
The $p$-adic norm satisfies a special inequality
\[
|x+y|_p \le \max\{|x|_p,|y|_p \}.
\]
\begin{Definition}
The absolute value $|\cdot|$ of a valued field $K$ is non-archimedean (or also called ultrametric) if the following estimate is satisfied for all $\alpha,\beta\in K$:
\[
|\alpha+\beta | \le \max \{|\alpha|,|\beta| \}.
\]
\end{Definition}
The $p$-adic norm $|\cdot|_p$ of $\mathbb{Q}_p$ is thus non-archimedean, and the field $\mathbb{Q}_p$ is a non-archimedean field.
Let $\mathfrak{p}=p{\mathbb Z}_p=\left\{x \in {\mathbb Z}_p |\ v_p(x) \ge 1 \right\}$ be the maximal ideal of ${\mathbb Z}_p$.
We define the reduction of $x$ modulo $\mathfrak{p}$ as
\[
\pi:\ {\mathbb Z}_p \ni x \mapsto \pi(x) \in {\mathbb Z}_p/\mathfrak{p} \cong {\mathbb F}_p.
\]
We write $\pi(x)$ as $\tilde{x}$ for simplicity.
Note that the reduction is a ring homomorphism:
\begin{equation}
\widetilde{x \pm y}=\tilde{x} \pm \tilde{y},\quad \widetilde{x \cdot y}=\tilde{x} \cdot \tilde{y},
\quad \widetilde{\left(\frac{x}{y}\right)}=\frac{\tilde{x}}{\tilde{y}}\ (\mbox{for}\ \tilde{y}\ne 0).
\label{prel}
\end{equation}
The element $x\in\mathbb{Z}_p$ is uniquely written as the $p$-adic polynomial series:
\[
x=\sum_{i=0}^\infty x_i p^i,
\]
where each $x_i\in\{0,1,2,\cdots, p-1\}$. The reduction is naturally computed as $\tilde{x}=x_0$.
The reduction map $\pi$ is generalized to ${\mathbb Q}_p^{\times}$:
\begin{equation}\label{padicreductionmap}
\pi:\ {\mathbb Q}_p^{\times}\ni x=p^{v_p(x)} u\ (u\in{\mathbb Z}_p^{\times})\mapsto
\tilde{x}=\left\{
\begin{array}{cl}
0 & (v_p(x)>0)\\
\infty & (v_p(x)<0)\\
\tilde{u} & (v_p(x)=0)
\end{array}
\right. \in {\mathbb F}_p\cup\{\infty\},
\end{equation}
which is no longer homomorphic.
The element $z\in\mathbb{Q}_p \setminus \mathbb{Z}_p$ is uniquely expanded as the Laurent series using $v_p(z)=k<0$:
\[
z=\sum_{i=k}^\infty z_i p^i,
\]
where each $z_i\in\{0,1,2,\cdots, p-1\}$ and $z_k\neq 0$.
In this case, the reduction is $\tilde{z}=\infty$.
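These definitions are readily made computational. The following sketch (our own illustration; elements of $\mathbb{Q}_p$ are represented by rational numbers) computes $v_p$, $|\cdot|_p$ and the reduction map \eqref{padicreductionmap}:
\begin{verbatim}
# p-adic valuation, norm and reduction for rationals (a sketch).
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def reduce_mod_p(x, p):
    """Reduction Q_p -> F_p U {infty}, cf. the map pi above."""
    x = Fraction(x)
    if x == 0: return 0
    v = vp(x, p)
    if v < 0: return 'infty'
    if v > 0: return 0
    return (x.numerator * pow(x.denominator, -1, p)) % p

p = 3
x = Fraction(18, 5)
print(vp(x, p), Fraction(1, p**vp(x, p)))   # v_3(18/5) = 2, |x|_3 = 1/9
print(reduce_mod_p(Fraction(7, 5), p))      # 7/5 = 1 * 2^(-1) = 2 in F_3
\end{verbatim}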
For a dynamical system $\phi$ consisting of two rational mappings defined over $(x,y)\in {\mathbb Q}_p^2$:
\[
\phi(x,y)=\left(\frac{\sum_{i,j} d_{ij}x^i y^j}{\sum_{i,j} c_{ij}x^i y^j},\frac{\sum_{i,j} d'_{ij}x^i y^j}{\sum_{i,j} c'_{ij}x^i y^j}\right)\in(\mathbb{Z}_p(x,y))^2,
\]
the `reduced' system
\[
\tilde{\phi}=\pi(\phi)
\]
is defined as the system whose coefficients are all reduced to ${\mathbb F}_p$:
\[
\tilde{\phi}(x,y)=\left(\frac{\sum_{i,j} \tilde{d}_{ij}x^i y^j}{\sum_{i,j} \tilde{c}_{ij}x^i y^j},\frac{\sum_{i,j} \tilde{d}'_{ij}x^i y^j}{\sum_{i,j} \tilde{c}'_{ij}x^i y^j}\right)\in(\mathbb{F}_p (x,y))^2.
\]
We define the notion of `good reduction', which basically means that the time evolution of the system and the reduction modulo $\mathfrak{p}$ commute.
\begin{Definition}\label{GRdef}
The rational system $\phi$ has a \textit{good reduction} modulo $\mathfrak{p}$ on the domain $\mathcal{D}\subseteq {\mathbb Z}_p^2$ if we have $\widetilde{\phi(x,y)}=\tilde{\phi}(\tilde{x},\tilde{y})$ for any $(x,y) \in \mathcal{D}$.
\end{Definition}
\begin{figure}
\centering
\includegraphics[width=8cm,bb=160 220 500 420]{figure3.eps}
\caption{Good reduction modulo $\mathfrak{p}$.}
\label{figuregr}
\end{figure}
This is equivalent to the commutativity of the diagram in the figure \ref{figuregr}.
Originally, the good reduction was defined for a rational mapping with one variable \cite{Silverman}.
\begin{Definition}[\cite{Silverman}]
A rational map $\phi:\mathbb{P}^1\to\mathbb{P}^1$ defined over the valued field $K$ is said to have good reduction modulo $\mathfrak{p}$ if $\deg (\phi)=\deg (\tilde{\phi})$.
\end{Definition}
A map with a good reduction satisfies the following proposition.
\begin{Proposition}[\cite{Silverman}]
Let $\phi:\mathbb{P}^1\to\mathbb{P}^1$ be a rational map that has good reduction.
Then the map $\phi$ satisfies $\tilde{\phi} (\tilde{P})=\widetilde{\phi(P)}$ for all $P\in\mathbb{P}^1(K)$.
\end{Proposition}
With this property in mind, we define the good reduction for the dynamical systems with two variables as satisfying $\tilde{\phi} (\tilde{P})=\widetilde{\phi(P)}$ (Definition \ref{GRdef}).
\section{Purpose of our research and main results}
The purpose of our research is to define and investigate the discrete integrable equations over finite fields. We wish to study the implications of integrability over finite fields. We also expect to construct cellular automata directly from the discrete systems over finite fields.
In the case of linear discrete equations, for example, we can well-define the equations over finite fields just by changing the field on which the equations are defined to finite fields. However, in the case of nonlinear equations, since the systems are usually formulated by rational functions, the division by $0$ mod $p$ and some indeterminacies such as $0/0$ and $\infty\pm \infty$ frequently appear.
These points make it difficult to well-define the equations over finite fields. Thus there have been few studies on nonlinear discrete integrable equations defined over finite fields.
There are mainly three approaches to overcome this difficulty.
(a) The first one is to study equations that do not contain division terms. Santini et al. studied cellular automata constructed from a type of discrete Schr\"{o}dinger equation which is free from division \cite{BSR}.
Bilinear form of the discrete KP and KdV equations \eqref{hirotamiwa}, \eqref{bilineardkdv} have been treated over the finite field $\mathbb{F}_p$ and their soliton solutions over $\mathbb{F}_p$ are obtained \cite{BD,Bialecki,DBK}.
(b) The second one is to restrict the domain of definition of the system so that the indeterminacies do not appear. The discrete Toda equation over finite fields and its graphical structures have been obtained \cite{ytakahashi}. Roberts and Vivaldi studied the integrability over finite fields in terms of the lengths of the periodic orbits \cite{Roberts,Roberts2,Roberts3}.
(c) The third one is to extend the space of initial conditions to make the mapping well-defined.
We investigate this third approach in this paper and try two different schemes.
(c-i) The first scheme is to apply the Sakai's theory on discrete Painlev\'{e} equations to the case of finite domains.
According to the theory developed by K. Okamoto and H. Sakai, the space of initial conditions for the discrete Painlev\'{e} equation becomes a birational surface as we extend the domain $\mathbb{P}^2$ by blowing-up at each singular point \cite{Okamotosp,Sakai}.
We show, in chapter \ref{sec3}, that this procedure is still valid if applied to the finite domain of definition $\mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$. We in particular treated the discrete Painlev\'{e} II equation, and presented the extended domain of initial conditions for $p=3$ and $m=1$. What is more, we have shown that the size of the extended domain we construct is
smaller than that obtained from the Sakai theory. Since the domain over the finite field has a discrete topology, the extension need not be birational, but only bijective.
(c-ii) The second scheme of extension is to define the equations over the field of $p$-adic numbers $\mathbb{Q}_p$ and then reduce them to the finite field $\mathbb{F}_p$.
Through this approach, we wish to establish the significance of `integrability' of the systems over finite fields.
For example, if we try to define the discrete Painlev\'{e} equations over the field ${\mathbb F}_p$,
the initial value space is a finite set ${\mathbb F}_p \times {\mathbb F}_p$. Since the system consists of transitions between just $p^2$ points, it is not clear how we can formulate the integrability of the system from the integrability of the original system defined over $\mathbb{R}$ or $\mathbb{C}$. To resolve this problem we consider a pair of fields $({\mathbb Q}_p,\ {\mathbb F}_p)$ in chapter \ref{chapter3}. We can say that the system over ${\mathbb F}_p$ is `integrable' if it is integrable over ${\mathbb Q}_p$ and its reduction to the finite field ${\mathbb F}_p$ has an `almost good reduction' property.
We prove that, although the integrable mappings generally do not have a good reduction modulo a prime,
they do have an \textit{almost good reduction} (AGR), which is a generalized notion of good reduction.
We demonstrate that AGR can be used as an integrability detector over finite fields, by proving that
the dP${}_{\mbox{\scriptsize{II} }}$, $q$P${}_{\mbox{\scriptsize{I} }}$, $q$P${}_{\mbox{\scriptsize{II} }}$, $q$P$_{\mbox{\scriptsize III}}$, $q$P$_{\mbox{\scriptsize IV}}$ and $q$P$_{\mbox{\scriptsize V}}$ equations have AGR over appropriate domains.
We also prove that one of the chaotic systems, the Hietarinta-Viallet equation, has AGR and conclude that AGR is an arithmetic analog of the singularity confinement method.
We then discuss the relation of our approach to the Diophantine integrability proposed by R. Halburd \cite{Halburd}. We also propose a way to reduce the systems defined over extensions of the field $\mathbb{Q}_p$, and then apply the procedure to obtain cellular automata from complex-valued equations.
Lastly, in chapter \ref{sec5}, we apply our methods to the soliton systems, in particular the discrete KdV equation and one of its generalized forms \cite{KMT}.
We present two methods of extension: first one is to use a field of rational functions whose coefficients are in the finite field, and the second one is to use the field of $p$-adic numbers just like we have done in the previous sections. The soliton solutions obtained through both two methods are introduced and their periodicity is discussed. Special types of solitary waves that appear only over the finite fields are presented and their nature is studied. The reduction properties of the two-dimensional lattice systems are discussed.
Let us summarize the main results of this paper. The key definition is definition \ref{AGRdef}, in which almost good reduction property for non-autonomous discrete dynamical systems is formulated.
The main theorems of this paper are the following:
theorem \ref{thminit} on the space of initial conditions, and theorems
\ref{PropQRT}, \ref{PropdP2}, \ref{PropqP1}, \ref{PropqP2}, \ref{PropqP3}, \ref{PropqP4}, \ref{PropqP5}, \ref{PropHV} on the almost good reduction property for discrete Painlev\'{e} equations.
\chapter{The space of initial conditions of one-dimensional systems}\label{sec3}
A discrete Painlev\'{e} equation is a non-autonomous and nonlinear second order ordinary difference equation with several parameters.
When it is defined over a finite field, the dependent variable takes only a finite number of values and its time evolution will attain an indeterminate state in many cases for generic values of the parameters and initial conditions.
\section{Space of initial conditions of dP${}_{\mbox{\scriptsize{II} }}$ equation}
For example, the discrete Painlev\'{e} II equation (dP${}_{\mbox{\scriptsize{II} }}$ equation) is defined as
\begin{equation}
u_{n+1}+u_{n-1}=\frac{z_n u_n+a}{1-u_n^2}\quad (n \in \mathbb{Z}),
\label{dP2equation}
\end{equation}
where $z_n=\delta n + z_0$ and $a, \delta, z_0$ are constant parameters \cite{NP}.
Let $q=p^k$ for a prime $p$ and a positive integer $k \in {\mathbb Z}_+$.
When \eqref{dP2equation} is defined over a finite field ${\mathbb F}_{q}$,
the dependent variable $u_n$ will eventually take values $\pm 1$ for generic parameters and initial values $(u_0,u_1) \in {\mathbb F}_{q}^2$,
and we cannot proceed to evolve it.
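The breakdown is easy to observe experimentally. The following sketch (our own toy iteration; $p$, $a$, $\delta$, $z_0$ and the initial values are arbitrary choices) iterates \eqref{dP2equation} over ${\mathbb F}_7$ until $1-u_n^2$ vanishes:
\begin{verbatim}
# dP_II over F_p: u_{n+1} = (z_n u_n + a)/(1 - u_n^2) - u_{n-1}.
p, a, delta, z0 = 7, 2, 1, 0
u_prev, u = 3, 5                  # arbitrary initial values in F_7
for n in range(1, 50):
    zn = (delta*n + z0) % p
    den = (1 - u*u) % p
    if den == 0:                  # u_n = +1 or -1 in F_p
        print(f"n={n}: u_n={u}, 1-u_n^2=0; evolution breaks down")
        break
    u_prev, u = u, ((zn*u + a) * pow(den, -1, p) - u_prev) % p
# with these choices the orbit reaches u_3 = 6 (= -1) and stops at n = 3
\end{verbatim}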
Even if we extend the domain from ${\mathbb F}_{q}^2$ to $({\mathbb P}{\mathbb F}_q)^2=({\mathbb F}_q\cup\{\infty\})^2$, the set ${\mathbb P}{\mathbb F}_q$ is not a field and the arithmetic operations in \eqref{dP2equation} are not always defined.
To determine its time evolution consistently, we have two choices:
One is to restrict the parameters and the initial values to a smaller domain so that the singularities do not appear.
The other is to extend the domain on which the equation is defined.
In this article, we will adopt the latter approach.
It is convenient to rewrite \eqref{dP2equation} as:
\begin{equation}
\left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{\alpha_n}{1-x_n}+\dfrac{\beta_n}{1+x_n}-y_{n},\\
y_{n+1}&=x_n,
\end{array}
\right.
\label{dP2}
\end{equation}
where $\alpha_n:=\frac{1}{2}(z_n+a),\ \beta_n:=\frac{1}{2}(-z_n+a)$.
Then we can regard \eqref{dP2} as a mapping defined on the domain ${\mathbb F}_q \times {\mathbb F}_q$.
To resolve the indeterminacy at $x_n = \pm 1$, we apply the theory of the space of initial conditions developed by H. Sakai \cite{Sakai}.
First we extend the domain to ${\mathbb P}{\mathbb F}_q \times {\mathbb P}{\mathbb F}_q$, and then blow it up at four points $(x,y)=(\pm 1, \infty), (\infty, \pm 1)$
to obtain the space of initial conditions:
\begin{equation}
\tilde{\Omega}^{(n)}:=\mathcal{A}_{(1,\infty)}^{(n)}\cup \mathcal{A}_{(-1,\infty)}^{(n)}\cup \mathcal{A}_{(\infty,1)}^{(n)}\cup \mathcal{A}_{(\infty,-1)}^{(n)},
\label{omega}
\end{equation}
where $\mathcal{A}_{(1,\infty)}^{(n)}$ is the space obtained from the two dimensional affine space ${\mathbb A}^2$ by blowing up twice as
\begin{align*}
\mathcal{A}_{(1,\infty)}^{(n)}&:=\left\{ \left((x-1,y^{-1}),[\xi_1:\eta_1],[u_1:v_1] \right)\ \Big|\ \right. \\
&\qquad \eta_1 (x-1)=\xi_1 y^{-1},
(\xi_1+\alpha_n \eta_1)v_1=\eta_1(1-x)u_1 \ \Big\} \; \subset {\mathbb A}^2 \times {\mathbb P} \times {\mathbb P}.
\end{align*}
Similarly,
\begin{align*}
\mathcal{A}_{(-1,\infty)}^{(n)}&:=\left\{ \left((x+1,y^{-1}),[\xi_2:\eta_2],[u_2:v_2] \right)\ \Big|\
\right.\\
&\qquad \qquad \eta_2 (x+1)=\xi_2 y^{-1},(-\xi_2+\beta_n \eta_2)v_2=\eta_2(1+x)u_2 \ \Big\},\\
\mathcal{A}_{(\infty,1)}^{(n)}&:=\left\{ \left((x^{-1},y-1),[\xi_3:\eta_3],[u_3:v_3] \right)\ \Big|\
\right. \\
& \qquad \qquad \xi_3 (y-1)=\eta_3 x^{-1}, (\eta_3+\alpha_n \xi_3)v_3=\xi_3(1-y)u_3 \ \Big\},\\
\mathcal{A}_{(\infty,-1)}^{(n)}&:=\left\{ \left((x^{-1},y+1),[\xi_4:\eta_4],[u_4:v_4] \right)\ \Big|\
\right. \\
& \qquad \qquad \xi_4 (y+1)=\eta_4 x^{-1}, (-\eta_4+\beta_n \xi_4)v_4=\xi_4(1+y)u_4 \ \Big\}.
\end{align*}
The birational map \eqref{dP2} is extended to the bijection $\tilde{\phi}_n: \ \tilde{\Omega}^{(n)} \rightarrow \tilde{\Omega}^{(n+1)}$
which decomposes as $\tilde{\phi}_n:=\iota_n \circ \tilde{\omega}_n$.
Here $\iota_n$ is a natural isomorphism which gives $\tilde{\Omega}^{(n)} \cong \tilde{\Omega}^{(n+1)}$, that is,
on $\mathcal{A}_{(1,\infty)}^{(n)}$ for instance, $\iota_n$ is expressed as
\begin{align*}
&\left((x-1,y^{-1}),[\xi :\eta ],[u :v ] \right) \in \mathcal{A}_{(1,\infty)}^{(n)} \\
&\rightarrow \quad
\left((x-1,y^{-1}),[\xi -\delta/2\cdot\eta:\eta ],[u :v ] \right) \in \mathcal{A}_{(1,\infty)}^{(n+1)}.
\end{align*}
The automorphism $\tilde{\omega}_n$ on $\tilde{\Omega}^{(n)}$ is induced from \eqref{dP2} and gives the mapping
\[
\mathcal{A}_{(1, \infty)}^{(n)} \rightarrow \mathcal{A}_{(\infty,1)}^{(n)}, \;
\mathcal{A}_{(\infty,1)}^{(n)} \rightarrow \mathcal{A}_{(-1,\infty)}^{(n)}, \;
\mathcal{A}_{(-1, \infty)}^{(n)} \rightarrow \mathcal{A}_{(\infty,-1)}^{(n)}, \;
\mathcal{A}_{(\infty,-1)}^{(n)} \rightarrow \mathcal{A}_{(1,\infty)}^{(n)}.
\]
Under the map $\mathcal{A}_{(1, \infty)}^{(n)} \rightarrow \mathcal{A}_{(\infty,1)}^{(n)}$,
\begin{align*}
x=1 \ \rightarrow \ E_2^{(\infty,1)} &\qquad u_3=\left(y-\frac{\beta_n}{2}\right)v_3, \\
E_1^{(1,\infty)} \ \rightarrow \ E_1^{(\infty,1)} &\qquad [\xi_1:-\eta_1]=[\alpha_n \xi_3+\eta_3:\xi_3], \\
E_2^{(1,\infty)} \ \rightarrow \ y'=1 &\qquad x'=\frac{u_1}{v_1}+\frac{\beta_n}{2},
\end{align*}
where $(x,y) \in \mathcal{A}_{(1, \infty)}^{(n)}$, $(x',y')\in \mathcal{A}_{(\infty,1)}^{(n)}$, $E_1^{\mbox{\scriptsize p}}$ and $E_2^{\mbox{\scriptsize p}}$ are the exceptional curves in $\mathcal{A}_{\mbox{\scriptsize p}}^{(n)}$ obtained by the first blowing up and the second blowing up respectively at the point p $\in \{(\pm 1, \infty),(\infty,\pm 1) \}$.
Similarly under the map $\mathcal{A}_{(\infty,1)}^{(n)} \rightarrow \mathcal{A}_{(-1,\infty)}^{(n)}$,
\begin{align*}
E_1^{(\infty,1)} \ \rightarrow \ E_1^{(-1,\infty)} &\qquad [\xi_3:\eta_3]=[\eta_2:(\beta_n-\alpha_n) \eta_2-\xi_2], \\
E_2^{(\infty,1)} \ \rightarrow \ E_2^{(-1,\infty)} &\qquad [u_3:v_3]=[-\beta_n u_2: \alpha_n v_2].
\end{align*}
The mapping on the other points are defined in a similar manner.
Note that $\tilde{\omega}_n$ is well-defined in the case $\alpha_n=0$ or $\beta_n=0$.
In fact, for $\alpha_n=0$, $E_2^{(1,\infty)}$ and $E_2^{(\infty,1)}$ can be identified with the lines $x=1$ and $y=1$ respectively.
Therefore we have found that, through the construction of the space of initial conditions, the dP${}_{\mbox{\scriptsize{II} }}$ equation can be well-defined over finite fields.
However there are some unnecessary elements in the space of initial conditions when we consider a finite field, because we are working on a discrete topology and do not need continuity of the map.
Let $\tilde{\Omega}^{(n)}$ be the space of initial conditions and $|\tilde{\Omega}^{(n)}|$ be the number of elements of it.
For the dP${}_{\mbox{\scriptsize{II} }}$ equation, we obtain $|\tilde{\Omega}^{(n)}|=(q+1)^2-4+4(q+1)-4+4(q+1)=q^2+10q+1$, since ${\mathbb P}{\mathbb F}_q$ contains $q+1$ elements.
However an exceptional curve $E_1^{\mbox{\scriptsize p}}$ is transferred to another exceptional curve $E_1^{\mbox{\scriptsize p}'}$, and $[1:0] \in E_2^{\mbox{\scriptsize p}}$ to
$[1:0] \in E_2^{\mbox{\scriptsize p}'}$ or to a point in $E_1^{\mbox{\scriptsize p}'}$. Hence we can reduce the space of initial conditions $\tilde{\Omega}^{(n)}$ to the minimal space of initial conditions $\Omega^{(n)}$ which is the minimal subset of $\tilde{\Omega}^{(n)}$ including ${\mathbb P}{\mathbb F}_q\times {\mathbb P}{\mathbb F}_q$, closed under the time evolution.
By subtracting unnecessary elements we find $|\Omega^{(n)}|=(q+1)^2-4+4(q+1)-4=q^2+6q-3$.
In summary, we obtain the following theorem:
\begin{Theorem}\label{thminit}
The domain of the dP${}_{\mbox{\scriptsize{II} }}$ equation over ${\mathbb F}_q$ can be extended to the minimal domain $\Omega^{(n)}$ on which the time evolution at time step $n$ is well defined. Moreover $|\Omega^{(n)}|=q^2+6q-3$.
\end{Theorem}
\begin{figure}
\centering
\includegraphics[width=12cm,bb=-152 152 510 662]{figure4.eps}
\caption{The orbit decomposition of the space of initial conditions $\tilde{\Omega}^{(n)}$ and the reduced one $\Omega^{(n)}$ for $q=3$.}
\label{figure1painleve}
\end{figure}
In figure \ref{figure1painleve}, we show a schematic diagram of the map $\tilde{\omega}_n$ on $\tilde{\Omega}^{(n)}$, and its restriction map $\omega_n:=\tilde{\omega}_n|_{\Omega^{(n)}}$ on $\Omega^{(n)}$
with $q=3$, $\alpha_0=1$ and $\beta_0=2$.
We can also say that figure \ref{figure1painleve} is a diagram for the autonomous version of the equation \eqref{dP2} when $\delta=0$.
In the case of $q=3$, we have $|\tilde{\Omega}^{(n)}|=40$ and $|\Omega^{(n)}|=24$.
The above approach is equally valid for other discrete Painlev\'{e} equations and we can define them over finite fields by constructing isomorphisms on the spaces of initial conditions.
Thus we conclude that a discrete Painlev\'{e} equation can be well defined over a finite field by redefining the initial domain properly.
Note that, for a general nonlinear equation, explicit construction of the space of initial conditions over a finite field is not so straightforward (see \cite{Takenawa} for a higher order lattice system) and it will not help us to obtain the explicit solutions. We will return to this topic in chapter \ref{sec5}.
\section{Combinatorial construction of the initial value space}
In the previous section, we investigated the space of initial conditions of the dP${}_{\mbox{\scriptsize{II} }}$ equation by considering a finite field analog of the Sakai theory.
In this section we introduce another method to construct the space of initial conditions over finite fields by directly and intuitively adding a finite number of points to the original space ${\mathbb F}_q \times {\mathbb F}_q$.
We take $\alpha_0=1$, $\beta_0=2$, $q=3$ and $\delta=0$ (autonomization) in dP${}_{\mbox{\scriptsize{II} }}$ equation.
The mapping over ${\mathbb P}{\mathbb F}_3 \times {\mathbb P}{\mathbb F}_3$ is
\begin{equation}
\phi:\ \left\{
\begin{array}{cl}
x'&=-y+\dfrac{1}{1-x}+\dfrac{2}{1+x},\\
y'&=x.
\end{array}
\right.
\end{equation}
First we formally set the division $\dfrac{j}{\infty}\equiv 0$ for finite $j$.
We extend the space ${\mathbb P}{\mathbb F}_3 \times {\mathbb P}{\mathbb F}_3$ to $\Omega$, and the map $\phi$ to $\hat{\phi}$, so that
$\hat{\phi}:\Omega\to\Omega$
is a bijective mapping and that $\hat{\phi}|_{({\mathbb P}{\mathbb F}_3)^2}=\phi$.
Since
\[
\phi(1,0)=\phi(1,1)=\phi(1,2)=(\infty,1)
\]
the mapping $\phi$ is not injective. We want $\hat{\phi}$ to be bijective, therefore we set
\begin{equation}
\begin{array}{cl}
\hat{\phi}(1,0)&=(\infty,1)_1\in\Omega,\\
\hat{\phi}(1,1)&=(\infty,1)_2\in\Omega,\\
\hat{\phi}(1,2)&=(\infty,1)_3\in\Omega,
\end{array}
\end{equation}
where $(\infty, 1)_i$, $(i=1,2,3)$ denote distinct points in the extended space $\Omega$.
In the same manner, the point $(\infty,2)\in({\mathbb P}{\mathbb F}_3)^2$ is divided into three distinct points
$(\infty,2)_1$, $(\infty,2)_2$, $(\infty,2)_3$ in $\Omega$.
Next, since $\phi(\infty,1)=(2,\infty)$ and $\phi(\infty,2)=(1,\infty)$, both the points $(2,\infty)$ and $(1,\infty)$ must be divided into three distinct points in $\Omega$ in order to assure the bijectivity of $\hat{\phi}$.
Lastly, $\phi(1,\infty)$ (and therefore $\hat{\phi}\left((1,\infty)_i \right)$ for $i=1,2,3$) are not well-defined because $x'=-\infty+\frac{1}{0}+1$ is indeterminate. Since we have $y'=1$, we have no choice but to define $\hat{\phi}\left((1,\infty)_{i}\right)=(j,1)\in\Omega$ in order for the map $\hat{\phi}$ to be well-defined and bijective. Here, $i=1,2,3$ and $j\in\{0,1,2\}$. Note that $j\neq \infty$ since $(\infty,1)$ has already appeared as the image of the point $(1,0)$.
The same discussion applies to the image of the point $(2,\infty)$.
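This bookkeeping can be checked mechanically. The following sketch (our own enumeration, using the convention $j/\infty\equiv 0$) lists the collisions and indeterminate points of $\phi$ on $({\mathbb P}{\mathbb F}_3)^2$:
\begin{verbatim}
# Enumerate phi on (PF_3)^2 and report non-injective images and
# indeterminate points (returned as None).
from collections import defaultdict

INF = 'inf'
P = [0, 1, 2, INF]

def phi(x, y):
    # x' = -y + 1/(1-x) + 2/(1+x),  y' = x  over PF_3
    if x == INF:
        s = 0                     # both fractions are j/infty = 0
    elif x in (1, 2):
        s = INF                   # a denominator vanishes mod 3
    else:                         # x = 0
        s = (1 + 2) % 3
    if INF in (s, y):
        if s == INF and y == INF:
            return None           # indeterminate: infty - infty
        return (INF, x)
    return ((s - y) % 3, x)

images = defaultdict(list)
for x in P:
    for y in P:
        images[phi(x, y)].append((x, y))
for img, srcs in images.items():
    if img is None or len(srcs) > 1:
        print(img, '<-', srcs)    # (inf,1) and (inf,2) are triple points
\end{verbatim}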
Summing up the discussions above, we obtain all transitions inside the newly constructed initial value space $\Omega$.
\begin{equation*}
\hat{\phi}:\left\{
\begin{array}{rl|rl}
\Omega&\to \Omega & \Omega & \to \Omega\\
(0,0)&\to (0,0) & (2,0)&\to (\infty,2)_1\\
(0,1)&\to (2,0) & (2,1)&\to (\infty,2)_2\\
(0,2)&\to (1,0)&(2,2)&\to (\infty,2)_3\\
(0,\infty)&\to (\infty,0)&(2,\infty)_1&\to (0,2)\\
(\infty,0)&\to (0,\infty)&(2,\infty)_2&\to (1,2)\\
(\infty,\infty)&\to (\infty,\infty)&(2,\infty)_3&\to (2,2)\\
(1,0)&\to (\infty,1)_1&(\infty,1)_{1}&\to (2,\infty)_{i}\\
(1,1)&\to (\infty,1)_2&(\infty,1)_{2}&\to (2,\infty)_{j}\\
(1,2)&\to (\infty,1)_3&(\infty,1)_{3}&\to (2,\infty)_{k}\\
(1,\infty)_1&\to (0,1)&(\infty,2)_{1}&\to (1,\infty)_{l}\\
(1,\infty)_2&\to (1,1)&(\infty,2)_{2}&\to (1,\infty)_{m}\\
(1,\infty)_3&\to (2,1)&(\infty,2)_{3}&\to (1,\infty)_{n}
\end{array}
\right.
\end{equation*}
Here $\{i,j,k\}=\{l,m,n\}=\{1,2,3\}$ and the order of the three numbers in each set is not determined by the method in this section. To determine the correspondences uniquely,
we need to use the singularity confinement method.
Note that, apart from the ambiguity above, $\Omega$ exactly corresponds to the space $\Omega^{(n)}$ constructed in the previous section and we have $|\Omega|=|\Omega^{(n)}|=24$.
See the figure \ref{figure1painleve}, and \ref{intuitivedP} for comparison.
\begin{figure}
\centering
\includegraphics[width=9cm,bb=50 100 400 550]{figure5.eps}
\caption{The orbit decomposition of the space $\Omega$ for $q=3$ by the mapping $\hat{\phi}$.}
\label{intuitivedP}
\end{figure}
\chapter{One-dimensional systems over a local field and their reduction modulo a prime}\label{chapter3}
We define a generalized notion of good reduction and explain how the notion can be used to detect the integrability of several dynamical systems.
\section{Almost good reduction}\label{AGRsection}
\begin{Definition}[\cite{KMTT}]\label{AGRdef}
A non-autonomous rational system
\[
\phi_n:\ {\mathbb Q}_p^2 \to ({\mathbb P} {\mathbb Q}_p)^2\ (n \in {\mathbb Z})
\]
has an almost good reduction modulo $\mathfrak{p}$ on the domain $\mathcal{D}^{(n)}\subseteq {\mathbb Z}_p^2\cap \phi_n^{-1}({\mathbb Q}_p^2)$, if there
exists a positive integer $m_{\mbox{\rm \scriptsize p};n}$ for any $\mbox{\rm p}=(x,y) \in \mathcal{D}^{(n)}$ and time step $n$ such that
\begin{equation}
\widetilde{\phi_n^{m_{\mbox{\rm \tiny p};n}}(x,y)}=\widetilde{\phi_n^{m_{\mbox{\rm \tiny p};n}}}(\tilde{x},\tilde{y}),
\label{AGR}
\end{equation}
where $\phi_n^m :=\phi_{n+m-1} \circ \phi_{n+m-2} \circ \cdots \circ \phi_n$.
\end{Definition}
\begin{figure}
\centering
\includegraphics[width=12cm, bb=50 200 650 500]{figure6.eps}
\caption{Almost good reduction property}
\label{figureagr}
\end{figure}
In this chapter, we take $\mathcal{D}^{(n)}={\mathbb Z}_p^2\cap \phi_n^{-1}({\mathbb Q}_p^2)$
and explain that having almost good reduction on $\mathcal{D}^{(n)}$ is equivalent to the integrability of the mapping. If $\mathcal{D}^{(n)}$ does not depend on $n$, we simply write $\mathcal{D}^{(n)}=\mathcal{D}$.
The almost good reduction property is equivalent
to the commutativity of the diagram in figure \ref{figureagr}.
Note that if we can take $m_{\mbox{\rm \scriptsize p};n}=1$, the system has a good reduction.
Let us first see the significance of the notion of \textit{almost good reduction}. Let us consider the mapping $\Psi_\gamma$:
\begin{equation}
\left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{ax_n+1}{x_n^\gamma y_n}\\
y_{n+1}&=x_n
\end{array}
\right.,
\label{discretemap}
\end{equation}
where $a\in\mathbb{Z}_p^{\times}$ ($\leftrightarrow v_p(a)=0$) and $\gamma \in {\mathbb Z}_{\ge 0}$ are parameters.
The map \eqref{discretemap} is known to be integrable if and only if $\gamma=0,1,2$.
When $\gamma=0,1,2$, \eqref{discretemap} belongs to the QRT family \cite{QRT} and is integrable in the sense that it has a conserved quantity. We also note that when $\gamma=0,1,2$, \eqref{discretemap} is an autonomous version of the $q$-discrete Painlev\'{e} I equation.
\begin{Theorem}
The rational mapping \eqref{discretemap} with $a\in\mathbb{Z}_p^{\times}$ and $\gamma\in\mathbb{Z}_{\ge 0}$ has an almost good reduction modulo $\mathfrak{p}$ on the domain $\mathcal{D}$ if and only if $\gamma=0,1,2$.
Here $\mathcal{D}={\mathbb Z}_p^2\cap\Psi_{\gamma}^{-1}({\mathbb Q}_p^2)$. If $\gamma>0$ then
$\mathcal{D}=\{(x,y) \in {\mathbb Z}^2_p \ |x \ne 0, y \ne 0\}$. If $\gamma=0$ then
$\mathcal{D}=\{(x,y) \in {\mathbb Z}^2_p \ |y \ne 0\}$.
\label{PropQRT}
\end{Theorem}
\textbf{Proof}\;\;
(i) First note that
\[
\widetilde{\Psi_2(x_n,y_n)}=\widetilde{\Psi}_2(\tilde{x}_n,\tilde{y}_n) \qquad \mbox{for $\tilde{x}_n \ne 0, \ \tilde{y}_n \ne 0$},
\]
since $\tilde{x}_{n+1}=\dfrac{a\tilde{x}_n+1}{\tilde{x}^2_n \tilde{y}_n}$ in this case.
(ii) For $(x_n,y_n)\in\mathcal{D}$ with $\tilde{x}_n=0$ and $\tilde{y}_n \ne 0$, we find that
$\widetilde{\Psi_2^k}(\tilde{x}_n=0,\tilde{y}_n)$ is not defined for $k=1,2$,
however it is defined if $k=3$ and we have
\[
\widetilde{\Psi_2^3(x_n,y_n)}=\widetilde{\Psi_2^3}(\tilde{x}_n=0,\tilde{y}_n)=\left(\dfrac{1}{a^2\tilde{y}_n},0\right).
\]
Since $\tilde{x}_n=0$ is equivalent to $|x_n|_p\le p^{-1}\ (\leftrightarrow v_p(x_n)\ge 1)$, the calculation is done by taking $x_n=p^k\cdot e$ where $k \ge 1$ and $|e|_p=1\ (\leftrightarrow v_p(e)=0 \leftrightarrow e\in\mathbb{Z}_p^{\times})$, and iterating the mapping.
(iii-a) If $\tilde{x}_n \ne 0$ and $\tilde{y}_n=0$ and $v_p(ax_n+1)<v_p(y_n)$ then, by a similar calculation to (ii), we obtain
\[
\widetilde{\Psi_2^5(x_n,y_n)}=\widetilde{\Psi_2^5}(\tilde{x}_n,\tilde{y}_n=0)=\left(0,\dfrac{a^2}{\tilde{x}_n}\right).
\]
(iii-b) If $\tilde{x}_n \ne 0$ and $\tilde{y}_n=0$ and $v_p(ax_n+1)\ge v_p(y_n)$ then,
the apparent singularity is canceled, since $\tilde{x}_{n+2}$ is finite. Then we have,
\[
\widetilde{\Psi_2(x_n,y_n)}=\widetilde{\Psi}_2(\tilde{x}_n,\tilde{y}_n).
\]
(iv) Finally, if $\tilde{x}_n=\tilde{y}_n = 0$, we find that
$\widetilde{\Psi_2^k}(\tilde{x}_n,\tilde{y}_n)$ is not defined for $k=1,2,\dots,7$,
however
\[
\widetilde{\Psi_2^8(x_n,y_n)}=\widetilde{\Psi_2^8}(\tilde{x}_n=0,\tilde{y}_n=0)=\left(0,0\right) .
\]
In the case (iv), we have an exceptional case just like (iii-b): numerator $ax_{n+i}+1$ may become $0$ modulo $p$ during the time evolution. The singularity is also confined in this exceptional case, because we just arrive at non-infinite values with fewer iterations than (iv) in this case.
Hence the map $\Psi_2$ has almost good reduction modulo $\mathfrak{p}$ on $\mathcal{D}$.
In a similar manner, we find that $\Psi_\gamma$ ($\gamma=0,1$) also has almost good reduction modulo $\mathfrak{p}$ on $\mathcal{D}$.
On the other hand, for $\gamma \ge 3$ and $\tilde{x}_n=0$, we easily find that
\[
{}^\forall k \in {\mathbb Z}_{\ge 0}, \;\; \widetilde{\Psi_\gamma^{k}(x_n,y_n)} \ne \widetilde{\Psi_\gamma^{k}}(\tilde{x}_n=0,\tilde{y}_n),
\]
since the order of $p$ diverges as we iterate the mapping from $x_n=p^k\cdot e$ where $k>0,\ e\in\mathbb{Z}_p^{\times}$.
Thus we have proved the theorem.\hfill\hbox{$\Box$}\vspace{10pt}\break
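The case distinctions in the proof can be confirmed by exact arithmetic over $\mathbb{Q}$. The following sketch (our own check; the choices $p=5$, $a=1$, $x_n=p$, $y_n=2$ realize case (ii)) shows that $\widetilde{\Psi_2^3(x_n,y_n)}=(1/(a^2\tilde{y}_n),0)=(3,0)$ in ${\mathbb F}_5$:
\begin{verbatim}
# Track the reductions of Psi_2 iterates, x_{k+1} = (a x_k + 1)/(x_k^2 y_k).
from fractions import Fraction

def red(x, p):                    # reduction Q_p -> F_p U {infty}
    num, den, v = x.numerator, x.denominator, 0
    if num == 0: return 0
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return 'infty' if v < 0 else (0 if v > 0 else (num * pow(den, -1, p)) % p)

p, a = 5, 1
x, y = Fraction(p), Fraction(2)   # so that x~ = 0, y~ = 2
for k in range(1, 4):
    x, y = (a*x + 1) / (x*x*y), x
    print(k, red(x, p), red(y, p))
# output: 1 infty 0 / 2 0 infty / 3 3 0 -- finite again after three steps
\end{verbatim}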
In this theorem we omitted the case of $a=0$.
However we can also treat this case.
In the case $\gamma=2$ and $a=0$, for example,
if we take
\[
f_{2k}:=x_{2k}x_{2k-1},\ f_{2k-1}:=(x_{2k-1}x_{2k-2})^{-1}
\]
then \eqref{discretemap} turns into the trivial linear mapping $f_{n+1}=f_n$, which clearly has good reduction modulo $\mathfrak{p}$.
Note that having an almost good reduction is equivalent to the integrability of the equation in these examples.
\section{Refined Almost Good Reduction}
Next, we introduce another generalization of the good reduction property, which can be used as a `refined' almost good reduction.
We decompose the domain $\mathbb{Q}_p^2$ of the system $\phi$ into three disjoint parts, so that we have the following three types of points\footnote{Another way to define $D_N$ is to take $D_N:={\mathbb Z}_p^2\cap \phi^{-1}({\mathbb Z}_p^2)$. The results are essentially the same in this way, however, the computation becomes more complicated.}:
\[
\mathbb{Q}_p^2=D_N\sqcup D_S\sqcup E,
\]
where
\begin{equation}
\left\{
\begin{array}{ll}
D_N&:=\{(x,y)\in\mathbb{Z}_p^2|\ \tilde{\phi}(\tilde{x},\tilde{y})\ \mbox{is well defined in}\ \mathbb{F}_p^2\},\\
D_S&:=\mathbb{Z}_p^2\setminus D_N,\\
E&:=\{(x,y)\in\mathbb{Q}_p^2|\ \tilde{x}=\infty\ \mbox{or}\ \tilde{y}=\infty\}.
\end{array}
\right.
\end{equation}
For example, in the case of $\phi=\Psi_2$ in the previous section,
\[
D_N=\left(\mathbb{Z}_p^{\times}\right)^2,\ \ D_S=\mathbb{Z}_p^2\setminus \left(\mathbb{Z}_p^{\times}\right)^2,\ \ E=\mathbb{Q}_p^2\setminus \mathbb{Z}_p^2.
\]
\begin{Definition}
The mapping $\phi:\ \mathbb{Q}_p^2\to({\mathbb P}\mathbb{Q}_p)^2$ has the \textit{refined} AGR, if, for every point $(x,y)\in D_N$, there exists an integer $m>0$ such that
\[
\phi^m(x,y)\in D_N.
\]
Here the domain $D_N\subset \mathbb{Q}_p^2$ is defined as
\[
D_N:=\{(x,y)\in\mathbb{Z}_p^2|\ \tilde{\phi}(\tilde{x},\tilde{y})\ \mbox{is well defined in}\ \mathbb{F}_p^2\}.
\]
\end{Definition}
We call the domain $D_N$ the `normal' domain.
\begin{Proposition}
The system $\Psi_2$ has a refined AGR.
\end{Proposition}
\textbf{Proof}\;\;
First let us fix the initial condition $(x_0,\, x_{-1})\in D_N$ ($x_0,x_{-1}\in\mathbb{Z}_p^{\times}$).
(i) If $\tilde{x}_0\neq \widetilde{-1/a}$, then
\[
x_1=\frac{ax_0+1}{x_0^2 x_{-1}}\in \mathbb{Z}_p^{\times}.
\]
Therefore we have $(x_1, x_0)\in D_N$. We can take $m=1$ in this case.
(ii) If $\tilde{x}_0= \widetilde{-1/a}$, then we have $x_1\in p\mathbb{Z}_p$ ($\tilde{x}_1=0$). Thus $(x_1,x_0)\in D_S$. Continuing the iterations, we obtain $\tilde{x}_2=\infty$, $\tilde{x}_3=0$, $\tilde{x}_4=\widetilde{-1/a}$. Therefore,
\[
(x_2,x_1)\in E,\ (x_3,x_2)\in E,\ (x_4,x_3)\in D_S.
\]
Finally when $m=5$, we have $\tilde{x}_5=\tilde{x}_{-1}$, and
\[
(x_5,x_4)\in D_N.
\]
By the assumption $x_{-1}\in \mathbb{Z}_p^{\times}$, we have $\tilde{x}_5\neq 0,\, \infty$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
We can prove by an argument similar to that in section \ref{AGRsection} that the mapping $\Psi_\gamma$ $(\gamma \ge 3)$ does not have the refined AGR. We can apply the refined AGR to non-autonomous systems with minor modifications.
The AGR and the refined AGR are both effective as integrability detectors of dynamical systems over $\mathbb{F}_p$, as we will explain in the following sections. The refined AGR can be more suitable than AGR when we investigate in detail the behaviors around the singularities and zeros of the mapping over $\mathbb{Q}_p$.
On the other hand we have to note that the refined AGR requires heavier computation than AGR does to be proved, in particular for non-autonomous systems. Also note that the refined AGR and AGR are not equivalent, nor is either of them strictly stronger or weaker than the other.
In the case of the discrete Painlev\'{e} equations, we have basically the same results for both of the criteria. Therefore we will mainly explain the results regarding the AGR property in the following sections for simplicity.
\section{Time evolution over finite fields}
We explain how to define the time evolution of discrete dynamical systems over finite fields. Of course, we cannot determine the time evolution solely from the information over finite fields; however, we can propose one reasonable way of evolution by applying the refined AGR.
Let $\phi$ be a dynamical system with the refined AGR property.
Let us fix the initial condition $(v,w)\in{\mathbb F}_p^2$.
(i) In the case of $\pi^{-1}(v)\times \pi^{-1}(w)\subset D_N$:
We define $(x_0,x_{-1}):=(v,w)$. By the refined AGR property we have a positive integer $m$ such that
\[
\phi^m (x_0,x_{-1})\in D_N.
\]
We define $\phi^m(v,w):=(\pi(x_m),\pi(x_{m-1}))\in{\mathbb F}_p^2$.
By assumption, we do not encounter indeterminacies in this calculation.
We also define the intermediate states as $\phi^j(v,w):=(\pi(x_j),\pi(x_{j-1}))\in ({\mathbb P}{\mathbb F}_p)^2$, for $j=1,2,\cdots, m-1$.
Since $\phi^m(v,w)\in D_N$ again, we can continue the time evolution.
(ii) In the case of $\pi^{-1}(v)\times \pi^{-1}(w)\not\subset D_N$:
We cannot define the time evolution by the refined AGR. In this case, we encounter the indeterminate points $\phi^i(v,w)$ for some $i>0$.
We can determine one path of evolution by considering $\phi(v',w')$
for some $v'\in\pi^{-1}(v)$ and $w'\in\pi^{-1}(w)$. However, we have an ambiguity with respect to the choice of the inverse image of $v,w$.
If the mapping $\phi$ has the ordinary AGR in section \ref{AGRsection}, it helps us to define the time evolution for a few steps, however, $\phi^m(v,w)$ is not necessarily in $D_N$.
When we need to know all the trajectories, we may return to the chapter \ref{sec3} and extend the space of initial conditions.
\section{Discrete Painlev\'{e} II equation over finite fields and its special solutions}
Now let us examine the dP${}_{\mbox{\scriptsize{II} }}$ \eqref{dP2} over ${\mathbb Q}_p$.
We suppose that $p \ge 3$, and redefine the coefficients $\alpha_n$ and $\beta_n$ so that
they are periodic with period $p$:
\begin{align*}
\alpha_{i+mp}&:=\frac{(i\delta+z_0+a+n_\alpha p)}{2},\ \beta_{i+mp}:=\frac{(-i\delta-z_0+a+n_\beta p)}
{2},\\
(m\in{\mathbb Z}&,\ i\in \{0,1,2,\cdots,p-1\}),
\end{align*}
where the integer $n_\alpha$ ($n_\beta$) is chosen such that $0 \in \{\alpha_i\}_{i=0}^{p-1}$ $(0 \in \{\beta_i\}_{i=0}^{p-1})$.
As a result, we have $\tilde{\alpha}_{n}=\widetilde{\frac{n \delta +z_0+a}{2}}$, $\tilde{\beta}_{n}=\widetilde{\frac{-n \delta -z_0+a}{2}}$ and $|\alpha_n|_p,\ |\beta_n|_p\in\{0,1\}$ for any integer $n$.
\begin{Theorem}
Let $p\ge 7$.
Under the above assumptions, the dP${}_{\mbox{\scriptsize{II} }}$ equation has an almost good reduction modulo $\mathfrak{p}$ on $\mathcal{D}:={\mathbb Z}_p^2\cap\phi_n^{-1}({\mathbb Q}_p^2)=\{ (x,y) \in {\mathbb Z}_p^2\ |x \ne \pm 1\}$.
\label{PropdP2}
\end{Theorem}
\textbf{Proof}\;\;
We put $(x_{n+1},y_{n+1})=\phi_n(x_n,y_n)=\left( \phi_n^{(x)}(x_n,y_n),\phi_n^{(y)} (x_n,y_n) \right)$.
When $\tilde{x}_n \ne \pm 1$, we have
\[
\tilde{x}_{n+1}=\dfrac{\tilde{\alpha_n}}{1-\tilde{x}_n}+\dfrac{\tilde{\beta_n}}{1+\tilde{x}_n}-\tilde{y}_{n},
\quad \tilde{y}_{n+1}=\tilde{x}_n.
\]
Hence $\widetilde{\phi_n(x_n,y_n)}=\tilde{\phi}_n(\tilde{x}_n,\tilde{y}_n)$.
When $\tilde{x}_n=1$, we can write $x_n=1+p^k e$ $(k \in {\mathbb Z}_+,\ |e|_p=1)$.
We have to consider four cases\footnote{Precisely speaking, there are some special cases for $p=3,\ 5$ where we have to consider the fact $\alpha_n=\alpha_{n+p}$ or $\beta_n=\beta_{n+p}$. In these cases the map does not have an almost good reduction on $\mathcal{D}$. They are treated later in this section.}:\\
\noindent
(i) For $\alpha_n= 0 $,
\[
\tilde{x}_{n+1}=\tilde{\phi}_n^{(x)}(\tilde{x}_n,\tilde{y}_n)=\widetilde{\left(\frac{\beta_n}{2}\right)}-\tilde{y}_n.
\]
Hence we have $\widetilde{\phi_n(x_n,y_n)}=\tilde{\phi}_n(\tilde{x}_n,\tilde{y}_n)$.\\
(ii) In the case $\alpha_n \neq 0$ and $\beta_{n+2} \neq 0$,
\begin{align*}
x_{n+1}&=-\dfrac{(\alpha_n-\beta_n)(1+ep^k)+a}{ep^k(2+ep^k)}-y_n=-\dfrac{2\alpha_n+(\alpha_n-\beta_n)ep^k}{ep^k(2+ep^k)}-y_n,\\
x_{n+2}&=-\frac{\alpha_n^2+\mbox{polynomial of $O(p)$}}{\alpha_n^2+\mbox{polynomial of $O(p)$}},\\
x_{n+3}&=\dfrac{\{2\alpha_{n}y_n+2\delta \beta_{n+1}+(2-\delta)a \}\alpha_n^3 +\mbox{polynomial of $O(p)$}}{2\beta_{n+2}
\alpha_n^3 + \mbox{polynomial of $O(p)$} },
\end{align*}
Thus we have
\[
\tilde{x}_{n+3}=\frac{2\tilde{\alpha}_{n}\tilde{y}_n+2\delta \tilde{\beta}_{n+1}+(2-\delta)a}{2 \tilde{\beta}_{n+2}},
\quad \tilde{y}_{n+3}=-1,
\]
and $\widetilde{\phi_n^3(x_n,y_n)}=\widetilde{\phi_n^3}(\tilde{x}_n,\tilde{y}_n)$.\\
(iii) In the case $\alpha_n \neq 0$, $\beta_{n+2}= 0$ and $a \ne -\delta$, we have to calculate up to $x_{n+5}$.
After a lengthy calculation we find
\[
\tilde{x}_{n+4}=\widetilde{\phi_n^{5}}^{(y)}(1,\tilde{y}_n)=1,\;\mbox{and}\;
\tilde{x}_{n+5}=\widetilde{\phi_n^{5}}^{(x)}(1,\tilde{y}_n)=-\frac{a\delta-(a-\delta)\tilde{y}_n}{a+\delta},
\]
and we obtain $\widetilde{\phi_n^5(x_n,y_n)}=\widetilde{\phi_n^5}(\tilde{x}_n,\tilde{y}_n)$.\\
(iv) Finally, in the case $\alpha_n \neq 0$, $\beta_{n+2}= 0$ and $a = -\delta$ we have to calculate up to $x_{n+7}$.
The result is
\[
\tilde{x}_{n+6}=\widetilde{\phi_n^{7}}^{(y)}(1,\tilde{y}_n)=-1,\;\mbox{and}\;
\tilde{x}_{n+7}=\widetilde{\phi_n^{7}}^{(x)}(1,\tilde{y}_n)=\frac{1+2\tilde{y}_n}{2},
\]
and we obtain $\widetilde{\phi_n^7(x_n,y_n)}=\widetilde{\phi_n^7}(\tilde{x}_n,\tilde{y}_n)$.
Hence we have proved that the dP${}_{\mbox{\scriptsize{II} }}$ equation has almost good reduction modulo $\mathfrak{p}$ at $\tilde{x}_n=1$.
We can proceed in the case $\tilde{x}_n=-1$ in an exactly similar manner and find;
(v) For $\beta_n= 0$,
we have $
\tilde{x}_{n+1}=\tilde{\phi}_n^{(x)}(\tilde{x}_n=-1,\tilde{y}_n)=\widetilde{\left(\frac{\alpha_n}{2}\right)}-\tilde{y}_n$.
Therefore we have $\widetilde{\phi_n(x_n,y_n)}=\widetilde{\phi_n}(\tilde{x}_n,\tilde{y}_n)$.\\
(vi) In the case $\beta_n \neq 0$ and $\alpha_{n+2} \neq 0$,
\[
\widetilde{\phi_n^3(x_n,y_n)}=\widetilde{\phi_n^3}(\tilde{x}_n=-1,\tilde{y}_n)
=\left(\frac{a(-2+\delta)-2\delta\alpha_{n+1}+2\beta_n \tilde{y}_n}{2 \alpha_{n+2}}, 1 \right).
\]
(vii) In the case $\beta_n \neq 0$, $\alpha_{n+2} = 0$ and $a \ne \delta$,
\[
\widetilde{\phi_n^5(x_n,y_n)}=\widetilde{\phi_n^5}(\tilde{x}_n=-1,\tilde{y}_n)
=\left(\frac{a \delta+(a+\delta)\tilde{y}_n}{a-\delta}, -1 \right).
\]
(viii) In the case $\beta_n \neq 0$, $\alpha_{n+2} = 0$ and $a = \delta$,
\[
\widetilde{\phi_n^7(x_n,y_n)}=\widetilde{\phi_n^7}(\tilde{x}_n=-1,\tilde{y}_n)=\left(\frac{-1+2\tilde{y}_n}{2}, 1 \right).
\]
\hfill\hbox{$\Box$}\vspace{10pt}\break
From this theorem, the evolution of the dP${}_{\mbox{\scriptsize{II} }}$ equation \eqref{dP2equation} over ${\mathbb P}{\mathbb F}_p$ can be constructed from the following seven cases which determine $u_{n+1},u_{n+2},\cdots$ from the initial values $u_{n-1}$ and $u_n$.
Note that we can assume that $u_{n-1} \ne \infty$ because all the cases in which the dependent variable $u_n$ becomes $\infty$ are included below\footnote{For $p \le 5$, there are some exceptional cases as shown in the proof of theorem \ref{PropdP2}.}. Here $a=\alpha_n+\beta_n$.
\begin{enumerate}
\item For $u_n \in \{2,3,...,p-2\}$, or $u_n=1$ and $\alpha_n = 0$, or $u_n=p-1$ and $\beta_n =0$,
\[
u_{n+1}=\dfrac{{\alpha}_n}{1-u_n}+\dfrac{{\beta}_n}{1+u_n}-u_{n-1}.\\
\]
\item For $u_n=1$, $\alpha_n \ne 0$ and $\beta_{n+2} \ne 0$,
\[
u_{n+1}=\infty,\ u_{n+2}=p-1,\ u_{n+3}=\frac{2\alpha_n u_{n-1}+2\delta\beta_{n+1}+(2-\delta)a}{2\beta_{n+2}}.
\]
\item For $u_n=1$, $\alpha_n \ne 0$, $\beta_{n+2} = 0$ and $a+\delta \ne 0$,
\begin{align*}
&u_{n+1}=\infty,\ u_{n+2}=p-1,\ u_{n+3}=\infty,\ u_{n+4}=1,\\
&\qquad u_{n+5}=-\frac{a\delta-(a-\delta){u}_{n-1}}{a+\delta}.
\end{align*}
\item For $u_n=1$, $\alpha_n \ne 0$, $\beta_{n+2} = 0$ and $a+\delta = 0$,
\begin{align*}
&u_{n+1}=\infty,\ u_{n+2}=p-1,\ u_{n+3}=\infty,\ u_{n+4}=1,\ u_{n+5}=\infty,\\
&\qquad u_{n+6}=p-1,\ u_{n+7}=\frac{1+2{u}_{n-1}}{2}.
\end{align*}
\item For $u_n=p-1$, $\beta_n \ne 0$ and $\alpha_{n+2} \ne 0$,
\[
u_{n+1}=\infty,\ u_{n+2}=1,\ u_{n+3}=\frac{a(-2+\delta)-2\delta\alpha_{n+1}+2\beta_n {u}_{n-1}}{2 \alpha_{n+2}}.
\]
\item For $u_n=p-1$, $\beta_n \ne 0$, $\alpha_{n+2} = 0$ and $a \ne \delta$,
\[
u_{n+1}=\infty,\ u_{n+2}=1,\ u_{n+3}=\infty,\ u_{n+4}=p-1,\ u_{n+5}=\frac{a \delta+(a+\delta){u}_{n-1}}{a-\delta}.
\]
\item For $u_n=p-1$, $\beta_n \ne 0$, $\alpha_{n+2} = 0$ and $a = \delta$,
\begin{align*}
&u_{n+1}=\infty,\ u_{n+2}=1,\ u_{n+3}=\infty,\ u_{n+4}=p-1,\ u_{n+5}=\infty, \\
&\qquad u_{n+6}=1,\ u_{n+7}=\frac{-1+2{u}_{n-1}}{2}.
\end{align*}
\end{enumerate}
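The seven cases above amount to an explicit algorithm for the evolution over ${\mathbb P}{\mathbb F}_p$. The following Python sketch implements one application of the appropriate case for $p>5$; it is only an illustration, and the functions \texttt{alpha} and \texttt{beta} (returning representatives of $\alpha_n,\beta_n$, with $a=\alpha_n+\beta_n$ constant) are assumptions to be supplied by the user.
\begin{verbatim}
INF = 'inf'   # marker for the point at infinity in PF_p

def inv(x, p):
    return pow(x % p, p - 2, p)       # inverse in F_p (0 maps to 0)

def dp2_step(uprev, u, n, p, delta, alpha, beta):
    """Return the list u_{n+1}, u_{n+2}, ... determined by cases (1)-(7),
    given u_{n-1} = uprev and u_n = u in F_p."""
    an, bn = alpha(n) % p, beta(n) % p
    a = (an + bn) % p
    if (u not in (1, p - 1)) or (u == 1 and an == 0) \
            or (u == p - 1 and bn == 0):                            # case 1
        return [(an * inv(1 - u, p) + bn * inv(1 + u, p) - uprev) % p]
    if u == 1:
        if beta(n + 2) % p != 0:                                    # case 2
            v = 2 * an * uprev + 2 * delta * beta(n + 1) + (2 - delta) * a
            return [INF, p - 1, v % p * inv(2 * beta(n + 2), p) % p]
        if (a + delta) % p != 0:                                    # case 3
            v = -(a * delta - (a - delta) * uprev)
            return [INF, p - 1, INF, 1, v % p * inv(a + delta, p) % p]
        return [INF, p - 1, INF, 1, INF, p - 1,                     # case 4
                (1 + 2 * uprev) % p * inv(2, p) % p]
    if alpha(n + 2) % p != 0:                                       # case 5
        v = a * (delta - 2) - 2 * delta * alpha(n + 1) + 2 * bn * uprev
        return [INF, 1, v % p * inv(2 * alpha(n + 2), p) % p]
    if (a - delta) % p != 0:                                        # case 6
        v = a * delta + (a + delta) * uprev
        return [INF, 1, INF, p - 1, v % p * inv(a - delta, p) % p]
    return [INF, 1, INF, p - 1, INF, 1,                             # case 7
            (2 * uprev - 1) % p * inv(2, p) % p]
\end{verbatim}
Iteration proceeds by feeding the last two (finite) entries of the returned list back into \texttt{dp2\_step}, with the time index advanced by the length of that list.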
\subsection{Exceptional cases where $p=3$ and $p=5$.}
Now we study the exceptional cases: $p=3$ and $p=5$.
In these cases, the almost good reduction property does not hold for all points in $\mathcal{D}=\{(x,y)\in{\mathbb Z}_p^2|x\neq \pm 1\}$. The situations change depending on the value `$x\!\!\mod p^2$'.
Here $(x\!\!\mod p^2)$ for a $p$-adic integer $x\in\mathbb{Z}_p$ is defined as $x_0+x_1p$ from the $p$-adic expansion
\[
x=x_0+x_1p+x_2p^2+x_3p^3+\cdots,
\]
of $x$ where each $x_i\in\{0,1,2,\cdots, p-1\}$.
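For a rational number $x=u/v$ lying in ${\mathbb Z}_p$ (i.e.\ with $v$ prime to $p$), the value $(x\!\!\mod p^2)$ is obtained by a single modular inversion; a minimal sketch in Python (version 3.8 or later for the modular inverse):
\begin{verbatim}
from fractions import Fraction

def mod_p2(x, p):
    """x mod p^2 = x_0 + x_1*p for x = u/v in Z_p (v prime to p)."""
    u, v = x.numerator, x.denominator
    assert v % p != 0, "x must lie in Z_p"
    return (u * pow(v, -1, p * p)) % (p * p)

# Example: 1/2 = 2 + 1*3 + 1*3^2 + ... in Z_3, so (1/2 mod 9) = 5.
assert mod_p2(Fraction(1, 2), 3) == 5
\end{verbatim}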
Let us first consider the case of $p=3$.
We explain the details via an example when $\delta=2$, $\alpha_0=1$ and $\beta_0=2$.
The dP${}_{\mbox{\scriptsize{II} }}$ equation in this case takes the following three forms periodically:
\begin{equation}
\left\{
\begin{array}{cl}
\phi_{n+3j}:&\; x_{n+1+3j}=-x_{n-1+3j}+\dfrac{-2}{1-x_{n+3j}}+\dfrac{2}{1+x_{n+3j}},\\
\phi_{n+1+3j}:&\;x_{n+2+3j}=-x_{n+3j}+\dfrac{-1}{1-x_{n+1+3j}}+\dfrac{1}{1+x_{n+1+3j}},\\
\phi_{n+2+3j}:&\;x_{n+3+3j}=-x_{n+1+3j},
\end{array}
\right.
\end{equation}
where $j$ is an integer.
Unfortunately, the dP${}_{\mbox{\scriptsize{II} }}$ equation over ${\mathbb Z}_3^2$ with $\delta=2$, $\alpha_0=1$ and $\beta_0=2$ does not have an almost good reduction.
However, it has a somewhat weaker property than the almost good reduction on the following domain $\mathcal{D}$:
\[
\mathcal{D}=\{(x,y)\in{\mathbb Z}_3\times{\mathbb Z}_3|\ x\not\in \pm 1+9{\mathbb Z}_3 \}.
\]
\begin{Proposition}
Let $\mathcal{D}$ be as above.
For every $(x,y)\in\mathcal{D}$, there exists a positive integer $m>0$ such that
\[
\widetilde{\phi_n^m(x_n,y_n)}=\widetilde{\phi_n^m}\left((x_n\!\!\!\mod 9), \tilde{y}_n\right)
\]
holds. $($We will call this property `weak' almost good reduction.$)$
If $x \in 1+9 {\mathbb Z}_3$, then the solution modulo $\mathfrak{p}$ goes into the periodic orbit:
\[
(1,k)\mapsto(\infty,1)\mapsto(2,\infty)\mapsto(\infty,2)\mapsto(1,\infty)\mapsto(\infty,1)\cdots
\]
for $k=0,1,2$.
If $x \in -1+9 {\mathbb Z}_3$, then the solution goes into the periodic orbit:
\[
(2,k)\mapsto(\infty,2)\mapsto(1,\infty)\mapsto(\infty,1)\mapsto(2,\infty)\mapsto(\infty,2)\cdots
\]
for $k=0,1,2$.
\end{Proposition}
\textbf{Proof}\;\;
(i) If $\tilde{x}_n\neq \pm 1$ then we have $\widetilde{\phi_n(x_n,y_n)}=\widetilde{\phi_n}(\tilde{x}_n,\tilde{y}_n)$.
(ii) If $\tilde{x}_n=1$ then we have four cases to consider:
(ii-a) If ($\tilde{x}_{n-1}=1$ and $x_n\in 7+9{\mathbb Z}_3$) or ($\tilde{x}_{n-1}=2$ and $x_n\in 4+9{\mathbb Z}_3$) then,
\[
\widetilde{\phi_n^5(x_n,y_n)}=\widetilde{\phi_n^5}(x \!\!\!\mod 9,\ \tilde{y}_n)
=\left( 0, 1 \right).
\]
(ii-b) If ($\tilde{x}_{n-1}=0$ and $x_n\in 7+9{\mathbb Z}_3$) or ($\tilde{x}_{n-1}=1$ and $x_n\in 4+9{\mathbb Z}_3$) then,
\[
\widetilde{\phi_n^5(x_n,y_n)}=\widetilde{\phi_n^5}(x \!\!\!\mod 9,\ \tilde{y}_n)
=\left( 1, 1 \right).
\]
(ii-c) If ($\tilde{x}_{n-1}=2$ and $x_n\in 7+9{\mathbb Z}_3$) or ($\tilde{x}_{n-1}=0$ and $x_n\in 4+9{\mathbb Z}_3$) then,
\[
\widetilde{\phi_n^5(x_n,y_n)}=\widetilde{\phi_n^5}(x \!\!\! \mod 9,\ \tilde{y}_n)
=\left( 2, 1 \right).
\]
(ii-d) If $x_n\in 1+9{\mathbb Z}_3$, then both the reduced mappings and the reduced coordinates return to the original position after iterating the mappings $12$ times, by Lemma \ref{caseD}.
(iii) If $\tilde{x}_n=-1$ then we have three points to consider: $(2,0)$, $(2,1)$ and $(2,2)$.
The proof is much the same as in the case of (ii).\hfill\hbox{$\Box$}\vspace{10pt}\break
\begin{Lemma}\label{caseD}
For the initial value $(x_{n+1},x_n)\in\mathbb{Q}_3\times\mathbb{Q}_3$, with $x_n\in1+9\mathbb{Z}_3$ and $x_{n+1}\in\mathbb{Q}_3\setminus 3\mathbb{Z}_3$,
we have $x_{n+12}\in 1+9\mathbb{Z}_3$ and $x_{n+13}\in \mathbb{Q}_3\setminus 3\mathbb{Z}_3$ for
\[
\phi_{n+1}^{12}(x_{n+1},x_n)=(x_{n+13},x_{n+12}).
\]
\end{Lemma}
\textbf{Proof}\;\;
We can write $x_n=1+9m$ and $x_{n+1}=\dfrac{s}{3}$ with $m\in\mathbb{Z}_3$ and $s\in\mathbb{Q}_3\setminus 3\mathbb{Z}_3$.
By iterating the mappings we have
\begin{eqnarray*}
x_{n+12}&=&\frac{4s^6-8m s^7+4m^2s^8+O(3)}{4s^6-8m s^7+4m^2s^8+O(3)}\equiv 1,\\
x_{n+13}&=&\frac{16s^9-64ms^{10}-64m^3s^{12}+16m^4s^{13}+O(3)}{O(3)}\equiv \infty,
\end{eqnarray*}
where $O(3)$ denotes a polynomial whose coefficients are multiples of three.
Therefore we obtain $x_{n+12}\in 1+9\mathbb{Z}_3$ and $x_{n+13}\in \mathbb{Q}_3\setminus 3\mathbb{Z}_3$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
In the case of $p=5$, we have a similar result.
Let us consider the dP${}_{\mbox{\scriptsize{II} }}$ equations with the same parameters as in the case of $p=3$: $\delta=2$, $\alpha_0=1$ and $\beta_0=2$.
Then the dP${}_{\mbox{\scriptsize{II} }}$ equation is expressed as the following five maps.
\begin{equation}
\left\{
\begin{array}{cl}
\phi_{n+5j}:&\; x_{n+1+5j}=-x_{n-1+5j}+\dfrac{-4}{1-x_{n+5j}}+\dfrac{2}{1+x_{n+5j}},\\
\phi_{n+1+5j}:&\;x_{n+2+5j}=-x_{n+5j}+\dfrac{-3}{1-x_{n+1+5j}}+\dfrac{1}{1+x_{n+1+5j}},\\
\phi_{n+2+5j}:&\;x_{n+3+5j}=-x_{n+1+5j}+\dfrac{-2}{1-x_{n+2+5j}},\\
\phi_{n+3+5j}:&\;x_{n+4+5j}=-x_{n+2+5j}+\dfrac{-1}{1-x_{n+3+5j}}+\dfrac{-1}{1+x_{n+3+5j}},\\
\phi_{n+4+5j}:&\;x_{n+5+5j}=-x_{n+3+5j}+\dfrac{-2}{1+x_{n+4+5j}},
\end{array}
\right.
\end{equation}
where $j$ is an integer.
\begin{Proposition}
The dP${}_{\mbox{\scriptsize{II} }}$ equation above has `weak' almost good reduction on the following domain $\mathcal{D}$:
\[
\mathcal{D}=\{(x,y)\in\mathbb{Z}_5^2|x\not\in 1+25\mathbb{Z}_5\}.
\]
If $x\in 1+25\mathbb{Z}_5$, then the time evolution goes into a periodic orbit:
\[
(4,\infty)\to(\infty,4)\to(1,\infty)\to(\infty,1).
\]
\end{Proposition}
\textbf{Proof}\;\;
(i) If $\tilde{x}_n\neq \pm 1$ then we have $\widetilde{\phi_n(x_n,y_n)}=\widetilde{\phi_n}(\tilde{x}_n,\tilde{y}_n)$.
(ii) If $\tilde{x}_{n}=1$, then the time evolution depends on $x_n\!\!\! \mod 25=1,6,11,16$ or $21$.
If $x_n\in 1+25\mathbb{Z}_5$, then the orbit is periodic with period $4$. We classify the other four cases below.
(ii-a) If $x_n\in 6+25\mathbb{Z}_5$ then,
\[
\widetilde{\phi_n^7(x_n,y_n)}=\widetilde{\phi_n^7}(x_n\!\!\!\! \mod 25=6,\tilde{y}_n)
=\left( \widetilde{y_n - 1}, -1 \right).
\]
(ii-b) If $x_n\in 11+25\mathbb{Z}_5$ then,
\[
\widetilde{\phi_n^7(x_n,y_n)}=\widetilde{\phi_n^7}(x_n\!\!\!\! \mod 25=11,\tilde{y}_n)
=\left( \widetilde{y_n + 1}, -1 \right).
\]
(ii-c) If $x_n\in 16+25\mathbb{Z}_5$ then,
\[
\widetilde{\phi_n^7(x_n,y_n)}=\widetilde{\phi_n^7}(x_n\!\!\!\! \mod 25=16,\tilde{y}_n)
=\left( \widetilde{y_n}, -1 \right).
\]
(ii-d) If $x_n\in 21+25\mathbb{Z}_5$ then,
\[
\widetilde{\phi_n^7(x_n,y_n)}=\widetilde{\phi_n^7}(x_n\!\!\!\! \mod 25=21,\tilde{y}_n)
=\left( \widetilde{y_n + 2}, -1 \right).
\]
(iii) If $\tilde{x}_{n}=-1$ then,
\[
\widetilde{\phi_n^3(x_n,y_n)}=\widetilde{\phi_n^3}(\tilde{x}_n=-1,\tilde{y}_n)
=\left( \widetilde{7-y_n}, 1 \right).
\]
In the case of (iii), the time evolution up to the third iteration does not depend on $x_n\!\!\! \mod 25$, but depends only on $(x_n\!\!\! \mod 5)=\tilde{x}_n$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
Note that in this case, the singularities are confined if $x\in -1+25\mathbb{Z}_5$, unlike the result in the case of $p=3$.
\subsection{Special solutions of the dP${}_{\mbox{\scriptsize{II} }}$ equation}
Next we consider special solutions to \eqref{dP2equation} over ${\mathbb P}{\mathbb F}_p$.
For the dP${}_{\mbox{\scriptsize{II} }}$ equation over ${\mathbb C}$, rational function solutions have already been obtained \cite{Kajiwara}.
Let $N$ be a positive integer and $\lambda \ne 0$ be a constant. Suppose that
$a=-2(N+1)/ \lambda$, $\delta=z_0=2/\lambda$,
\[
L_k^{(\nu)}(\lambda):=\left\{
\begin{array}{cl}
\displaystyle
\sum_{r=0}^k(-1)^r
\begin{pmatrix}
k+\nu\\
k-r
\end{pmatrix}
\dfrac{\lambda^r}{r!}&\quad(k \in {\mathbb Z}_{\ge 0}),\\
0 &\quad (k \in {\mathbb Z}_{<0}),
\end{array}
\right.
\]
and
\begin{equation}
\tau_N^n:=\det
\begin{pmatrix}
L_{N+1-2i+j}^{(n)}(\lambda)
\end{pmatrix}_{1\le i,j\le N}.
\label{Ltau}
\end{equation}
Then a rational function solution of the dP${}_{\mbox{\scriptsize{II} }}$ equation is given by
\begin{equation}
u_n=\frac{\tau_{N+1}^{n+1}\tau_{N}^{n-1}}{\tau_{N+1}^n\tau_N^n}-1.
\label{rationaldP2}
\end{equation}
If we deal with the terms in \eqref{Ltau} and \eqref{rationaldP2} by arithmetic operations over ${\mathbb F}_p$,
we encounter terms such as $1/p$ or $p/p$ and \eqref{rationaldP2} is not well-defined.
However, from theorem \ref{PropdP2}, we find that \eqref{rationaldP2} gives a solution to the dP${}_{\mbox{\scriptsize{II} }}$ equation over ${\mathbb P}{\mathbb F}_p$ by
the reduction from ${\mathbb Q} (\subset {\mathbb Q}_p)$, as long as the solution avoids the points $(\tilde{\alpha}_n=0,\ u_n=1)$ and $(\tilde{\beta_n}=0,\ u_n=-1)$, which is equivalent to the solution satisfying
\begin{equation}
\tau_{N+1}^{-N-1} \tau_N^{-N-3}\not\equiv 0,\ \frac{\tau_{N+1}^{N+1}\tau_N^{N-1}}{\tau_{N+1}^N\tau_N^N}\not\equiv 2, \label{taucond}
\end{equation}
where the superscripts are considered modulo $p$. Note that $\tau_N^n\equiv \tau_N^{n+p}$ for all integers $N$ and $n$.
In the table below, we give several \textit{rational solutions to the dP${}_{\mbox{\scriptsize{II} }}$ equation} with $N=3$ and $\lambda=1$ over ${\mathbb P}{\mathbb F}_p$ for $p=3,5,7$ and $11$. We see that the period of the solution is $p$.
\begin{scriptsize}
\[
\begin{array}{|c|c|c|l|}
\hline
& & & \\[-2mm]
\raise3mm\hbox{$p$} & \raise3mm\hbox{$\tau_{N+1}^{-N-1}\tau_N^{-N-3}$} & \raise3mm\hbox{$\frac{\tau_{N+1}^{N+1}\tau_N^{N-1}}{\tau_{N+1}^N\tau_N^N}$}
&\enskip
\raise3mm\hbox{$\tilde{u}_1,\tilde{u}_2,\tilde{u}_3,\tilde{u}_4,\tilde{u}_5,\tilde{u}_6,\tilde{u}_7,\tilde{u}_8,\tilde{u}_9,\tilde{u}_{10},\ldots$} \\ \hline & & & \\[-3mm]
3 & \infty & \infty &\
\raise1mm\hbox{$\underbrace{1,2,\infty}_{\mbox{period
$3$}},1,2,\infty,1,2,\infty,1,2,\infty,1,2,\infty,\ldots$}
\\[5mm] \hline
& & & \\[-3mm]
5 & \infty & 4 &\
\raise1mm\hbox{$\underbrace{4,2,3,1,\infty}_{\mbox{period
$5$}},4,2,3,1,\infty,4,2,3,1,\infty,4,\ldots$} \\[5mm] \hline & & & \\[-3mm]
7 & \infty & 0 &\
\raise1mm\hbox{$\underbrace{1,\infty,6,5,1,\infty,6}_{\mbox{period
$7$}},1,\infty,6,5,1,\infty,6,1,\infty,\ldots$} \\[5mm] \hline & & & \\[-3mm]
11 & 0 & 7 &\
\raise1mm\hbox{$\underbrace{\infty,1,6,1,\infty,10,\infty,1,0,2,10}_{\mbox{period
$11$}},\infty,1,6,1,\ldots$} \\[5mm] \hline \end{array} \]
\end{scriptsize}
We see from the case of $p=11$ that we may have an appropriate solution even if the condition \eqref{taucond} is not satisfied, although this is not always true.
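Such a table can be generated by evaluating \eqref{Ltau} and \eqref{rationaldP2} in exact rational arithmetic over ${\mathbb Q}$ and reducing the result afterwards; the following Python sketch (the helper names are ours, not from the construction above) is one way to do this for small $N$.
\begin{verbatim}
from fractions import Fraction
from math import factorial

def binom(m, j):                 # generalized binomial coefficient
    num = 1
    for i in range(j):
        num *= m - i
    return Fraction(num, factorial(j))

def L(k, nu, lam):               # Laguerre polynomial L_k^{(nu)}(lam)
    if k < 0:
        return Fraction(0)
    return sum(Fraction(-1)**r * binom(k + nu, k - r) * Fraction(lam)**r
               / factorial(r) for r in range(k + 1))

def det(M):                      # Laplace expansion (fine for small N)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def tau(N, n, lam):              # tau_N^n
    return det([[L(N + 1 - 2*i + j, n, lam) for j in range(1, N + 1)]
                for i in range(1, N + 1)])

def u(n, N=3, lam=1):            # the rational solution
    den = tau(N + 1, n, lam) * tau(N, n, lam)
    num = tau(N + 1, n + 1, lam) * tau(N, n - 1, lam)
    return num / den - 1 if den else None   # None: reduce via Q_p instead

def red(x, p):                   # reduction Q (subset of Q_p) -> PF_p
    u_, v_ = x.numerator, x.denominator
    return (u_ * pow(v_, -1, p)) % p if v_ % p else 'inf'
\end{verbatim}
For instance, \texttt{[red(u(n), 5) for n in range(1, 6)]} can be compared with the row $p=5$ of the table; entries at which the denominator vanishes over ${\mathbb Q}$, or at which \eqref{taucond} fails, must instead be treated through the $p$-adic reduction of theorem \ref{PropdP2}.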
The dP${}_{\mbox{\scriptsize{II} }}$ equation has linearized solutions also for $\delta=2a$ \cite{Tamizhmani}.
With our new method, we can obtain the corresponding solutions without difficulty.
\section{$q$-discrete Painlev\'{e} equations over finite fields}
\subsection{$q$-discrete Painlev\'{e} I equation}
One of the forms of the $q$-discrete analogs of the Painlev\'{e} I equation is as follows:
\begin{equation}
x_{n+1}x_{n-1}=\frac{aq^nx_n+b}{x_n^2}, \label{qp1eq}
\end{equation}
where $a$ and $b$ are parameters \cite{Sakai}.
We rewrite \eqref{qp1eq} for our convenience as a form of dynamical system with two variables:
\begin{equation}
\Phi_n: \left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{aq^n x_n+b}{x_n^2 y_n},\\
y_{n+1}&=x_n.
\end{array}
\right.
\label{qP1}
\end{equation}
We can prove the AGR property for this equation:
\begin{Theorem}
Suppose that $a, b, q$ are integers not divisible by $p$, then the mapping \eqref{qP1} has an almost good reduction
modulo $\mathfrak{p}$ on the domain
$\mathcal{D}:={\mathbb Z}_p^2\cap\Phi_n^{-1}({\mathbb Q}_p^2)=\{(x,y)\in {\mathbb Z}_p^2\ |x \ne 0, y\ne 0\}$.
\label{PropqP1}
\end{Theorem}
\textbf{Proof}\;\;
Let $(x_{n+1},y_{n+1})=\Phi_n(x_n,y_n)$.
Just like we have done before, we have only to examine the cases $\tilde{x}_n=0$, and $\tilde{y}_n=0$.
We use the abbreviation $\tilde{q}=q, \tilde{a}=a, \tilde{b}=b$ for simplicity. By direct computation we obtain;
(i) If $\tilde{x}_n=0$ and $\tilde{y}_n\ne 0$, then
\[
\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(0,\tilde{y}_n)=\left(\frac{b^2}{a^2 q^2 \tilde{y}_n},0\right).
\]
(ii) If $\tilde{y}_n=0$ and $\tilde{x}_n\ne 0$, then
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}(\tilde{x}_n,0)=\left(0, \frac{a^2 q^4}{b \tilde{x}_n}\right).
\]
(iii) If $\tilde{x}_n=0$ and $\tilde{y}_n= 0$, then
\[
\widetilde{\Phi_n^8(x_n,y_n)} = \widetilde{\Phi_n^8}(0,0)=\left( 0, 0\right).\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
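Case (i) can be checked numerically: start from an exact point with $\tilde{x}_n=0$, say $x_n=p$, iterate the map three times in rational arithmetic and reduce. A sketch with arbitrarily chosen unit parameters:
\begin{verbatim}
from fractions import Fraction

def Phi(x, y, n, a, b, q):       # the coupled form of qP_I
    return (a * q**n * x + b) / (x * x * y), x

def red(x, p):                   # reduction Z_p -> PF_p
    u, v = x.numerator, x.denominator
    return (u * pow(v, -1, p)) % p if v % p else 'inf'

p, a, b, q = 7, 2, 3, 2
x, y = Fraction(p), Fraction(4)  # so (x~, y~) = (0, 4)
for k in range(3):
    x, y = Phi(x, y, k, a, b, q)
print(red(x, p), red(y, p))
# expect (b^2 / (a^2 q^2 y~) mod 7, 0) = (2, 0)
\end{verbatim}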
An analogous statement holds for the refined AGR property.
\begin{Proposition}
Suppose that $a, b, q$ are integers not divisible by $p$, then the mapping \eqref{qP1} has a refined almost good reduction. Here the normal domain is
$D_N=(\mathbb{Z}_p^{\times})^2$.
\end{Proposition}
\textbf{Proof}\;\;
First let us fix the initial condition $(x_0,\, x_{-1})\in D_N$ ($x_0,x_{-1}\in\mathbb{Z}_p^{\times}$).
(i) If $\tilde{x}_0\neq \pi(-b/a)$, then
\[
x_1=\frac{ax_0+b}{x_0^2 x_{-1}}\in \mathbb{Z}_p^{\times}.
\]
Therefore we have $(x_1, x_0)\in D_N$. We can take $m=1$ in this case.
(ii) If $\tilde{x}_0= \pi(-b/a)$, then we have $x_1\in p\mathbb{Z}_p$ ($\tilde{x}_1=0$). Thus $(x_1,x_0)\in D_S$. Continuing the iterations, we obtain $\tilde{x}_2=\infty$, $\tilde{x}_3=0$ and $\tilde{x}_4=\displaystyle\pi\left(\frac{-b}{a q^4}\right)$. Therefore,
\[
(x_2,x_1)\in E,\ (x_3,x_2)\in E,\ (x_4,x_3)\in D_S.
\]
At the next step,
\[
\tilde{x}_5=\pi\left(\frac{q^6(a^3(q-q^5)+b^2 \tilde{x}_{-1})}{b^2}\right).
\]
(ii-1) If $\tilde{x}_{-1}\neq \pi\left(a^3(q^5-q)/b^2\right)$ then
\[
(x_5,x_4)\in D_N.
\]
(ii-2) If $\tilde{x}_{-1}= \pi\left(a^3(q^5-q)/b^2\right)$ then
we have to continue the iterations further until we obtain
\[
\tilde{x}_8=-\frac{b}{a q^8},\ \tilde{x}_9=-\frac{a^3(q^4-1)q^{19}}{b^2}.
\]
Here note that $\tilde{x}_9=0$ is equivalent to $q^4\equiv 1\mod \pi$, which, since $q^5-q=q(q^4-1)$, is in turn equivalent to $\tilde{x}_{-1}=0$. Therefore, by the assumption, we have $\tilde{x}_9\neq 0$.
Thus we have proved that
\[
(x_9,x_8)\in D_N.\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
\subsection{$q$-discrete Painlev\'{e} II equation}
We study the $q$-discrete analog of the Painlev\'{e} II equation ($q$P${}_{\mbox{\scriptsize{II} }}$ equation):
\begin{equation}
(z(q\tau)z(\tau)+1)(z(\tau)z(q^{-1}\tau)+1)=\frac{a \tau^2 z(\tau)}{\tau-z(\tau)},
\label{qP2eq}
\end{equation}
where $a$ and $q$ are parameters \cite{Kajiwaraetal}.
It is also convenient to rewrite \eqref{qP2eq} as
\begin{equation}
\Phi_n: \left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{a(q^n\tau_0)^2x_n-(q^n\tau_0-x_n)(1+x_ny_n)}{x_n(q^n\tau_0-x_n)(x_ny_n+1)},\\
y_{n+1}&=x_n,
\end{array}
\right.
\label{qP2}
\end{equation}
where $\tau=q^n\tau_0$.
Similarly to the dP${}_{\mbox{\scriptsize{II} }}$ equation, we can prove the following theorem:
\begin{Theorem}
Suppose that $a, q, \tau_0$ are integers not divisible by $p$, then the mapping \eqref{qP2} has an almost good reduction
modulo $\mathfrak{p}$ on the domain
$\mathcal{D}^{(n)}:={\mathbb Z}_p^2\cap\Phi_n^{-1}({\mathbb Q}_p^2)=\{(x,y)\in {\mathbb Z}_p^2\ |x \ne 0, x \ne q^n\tau_0,\ xy+1 \ne 0\}$.
\label{PropqP2}
\end{Theorem}
\textbf{Proof}\;\;
Let $(x_{n+1},y_{n+1})=\Phi_n(x_n,y_n)$.
Just like the proof of theorem \ref{PropdP2}, we have only to examine the cases $\tilde{x}_n=0, \widetilde{q^n\tau_0}$
and $-\tilde{y}_n^{-1}$.
We use the abbreviation $\tilde{q}=q, \tilde{\tau}=\tau, \tilde{a}=a$ for simplicity.
By direct computation, we obtain;\\
(i) If $\tilde{x}_n=0$ and $ -1+q^2-aq^2\tau^2+q^3\tau^2-q^2\tau \tilde{y}_n \ne 0$,
\[
\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=0,\tilde{y}_n)=\left( \frac{ 1-q^2+aq^2\tau^2-q^3\tau^2-aq^4\tau^2+q^2\tau \tilde{y}_n}{q^2\tau( -1+q^2-aq^2\tau^2+q^3\tau^2-q^2\tau \tilde{y}_n ) } , q^2\tau \right).
\]
(ii) If $\tilde{x}_n=0$ and $ -1+q^2-aq^2\tau^2+q^3\tau^2-q^2\tau \tilde{y}_n = 0$,
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}(\tilde{x}_n=0,\tilde{y}_n) =\left( \frac{1-q^2+q^7\tau^2-aq^8\tau^2}{q^4\tau}, 0 \right).
\]
(iii) If $\tilde{x}_n=\tau$ and $1+\tau \tilde{y}_n\ne 0$,
\begin{align*}
&\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=\tau,\tilde{y}_n)\\
&\quad =\left(\frac{ 1-q^2+(a+q-aq^2)q^2\tau^2+(1-q^2)\tau\tilde{y}_n+(1-aq)q^3\tau^3\tilde{y}_n }{q^2\tau(1+\tau\tilde{y}_n)}, 0 \right).
\end{align*}
(iv) If $\tilde{x}_n=\tau$ and $1+\tau \tilde{y_n}= 0$,
\[
\widetilde{\Phi_n^7(x_n,y_n)} = \widetilde{\Phi_n^7}(\tilde{x}_n=\tau,\tilde{y}_n)=\left(\frac{1}{aq^{12}\tau^3}, - aq^{12}\tau^3 \right).
\]
(v) If $\tilde{x_n}\tilde{y}_n+1=0$,
\[
\widetilde{\Phi_n^7(x_n,y_n)} = \widetilde{\Phi_n^7}(\tilde{x}_n=-\tilde{y}_n^{-1}, \tilde{y}_n)=\left(-\frac{1}{aq^{12}\tau^4\tilde{y}_n}, aq^{12}\tau^4\tilde{y}_n \right).
\]
Thus we complete the proof. \hfill\hbox{$\Box$}\vspace{10pt}\break
Note that the `refined' AGR is not properly defined for the $q$P${}_{\mbox{\scriptsize{II} }}$ equation, since we have the term $(q^n \tau_0-x_n)$ in the denominator of $x_{n+1}$, which prevents the definition of the normal domain $D_N$.
We can overcome this problem by defining the normal domain $D_N^{(n)}$ to be non-autonomous; however, the computation then becomes heavier.
\subsection{Special solutions of $q$P${}_{\mbox{\scriptsize{II} }}$ equation}
From the previous theorem, we can define the time evolution of the $q$P${}_{\mbox{\scriptsize{II} }}$ equation explicitly just like the dP${}_{\mbox{\scriptsize{II} }}$ equation in the previous section.
We consider special solutions for $q$P${}_{\mbox{\scriptsize{II} }}$ equation \eqref{qP2eq} over ${\mathbb P}{\mathbb F}_p$.
In \cite{HKW} it has been proved that \eqref{qP2eq} over ${\mathbb C}$ with $a=q^{2N+1}$ $(N \in {\mathbb Z})$ is solved by
the functions given by
\begin{align}
z^{(N)} (\tau) &=
\begin{cases}
\displaystyle \frac{g^{(N)} (\tau) g^{(N+1)} (q \tau)}{q^N g^{(N)} (q \tau) g^{(N+1)} (\tau)}
& (N \ge 0) \\
\displaystyle \frac{g^{(N)} (\tau) g^{(N+1)} (q \tau)}{q^{N+1} g^{(N)} (q \tau) g^{(N+1)} (\tau)} & (N<0)
\end{cases}, \label{eq:gtoz} \\
g^{(N)} (\tau) &=
\begin{cases}
\det |w(q^{-i+2j-1}\tau)|_{1\le i,j\le N} & (N>0) \\
1 & (N=0) \\
\det |w(q^{i-2j}\tau)|_{1\le i,j\le |N|}& (N<0)
\end{cases}, \label{eq:det_sol_g}
\end{align}
where $w(\tau)$ is a solution of the $q$-discrete Airy equation:
\begin{equation}
w(q\tau)-\tau w(\tau)+w(q^{-1}\tau)=0.
\label{dAiryeq}
\end{equation}
As in the case of the dP${}_{\mbox{\scriptsize{II} }}$ equation, we can obtain the corresponding solutions
to \eqref{eq:gtoz} over ${\mathbb P}{\mathbb F}_p$ by reduction modulo $\mathfrak{p}$ according to the theorem \ref{PropqP2}.
For that purpose, we have only to solve \eqref{dAiryeq} over ${\mathbb Q}_p$.
By elementary computation we obtain:
\begin{equation}
w(q^{n+1}\tau_0)=c_1P_{n}(\tau_0;q)+c_0P_{n-1}(q\tau_0;q),
\label{Airy:sol}
\end{equation}
where $c_0,\ c_1$ are arbitrary constants and $P_n(x;q)$ is defined by the tridiagonal determinant:
\[
P_n(x;q):=
\left|
\begin{array}{ccccc}
qx&-1&&&\\
-1&q^2x&-1&&\hugesymbol{0}\\
&\ddots&\ddots&\ddots& \\
&&-1&q^{n-1}x&-1\\
\hugesymbol{0}&&&-1&q^{n}x
\end{array}
\right|.
\]
The function $P_n(x;q)$ is a polynomial of degree $n$ in $x$,
\[
P_n(x;q)=\sum_{k=0}^{[n/2]}(-1)^k a_{n;k}(q)x^{n-2k},
\]
where $a_{n;k}(q)$ are polynomials in $q$.
If we let $i \ll j $ denote $i<j-1$, and
$
c(j_1,j_2,...,j_k):=\sum_{r=1}^k (2j_r+1),
$
then, we have
\begin{eqnarray*}
a_{n;0}&=&q^{n(n+1)/2},\\
a_{n;k}&=&\sum_{1\le j_1 \ll j_2 \ll \cdots \ll j_k \le n-1} q^{n(n+1)/2 -c(j_1,j_2,...,j_k)}.
\end{eqnarray*}
Therefore the solution of $q$P${}_{\mbox{\scriptsize{II} }}$ equation over ${\mathbb P}{\mathbb F}_p$ is obtained by reduction modulo $\mathfrak{p}$ from \eqref{eq:gtoz}, \eqref{eq:det_sol_g}
and \eqref{Airy:sol} over ${\mathbb Q}$ or ${\mathbb Q}_p$.
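For numerical experiments it is convenient not to evaluate $P_n(x;q)$ as a determinant: expanding the tridiagonal determinant along its last row yields the three-term recurrence $P_n=q^n x P_{n-1}-P_{n-2}$ with $P_0=1$, $P_1=qx$ (and $P_{-1}=0$). A Python sketch, together with a consistency check of \eqref{dAiryeq}:
\begin{verbatim}
from fractions import Fraction

def P(n, x, q):
    """P_n(x;q) via P_k = q^k x P_{k-1} - P_{k-2}, P_0 = 1, P_1 = q x."""
    if n < 0:
        return Fraction(0)
    p0, p1 = Fraction(1), Fraction(q) * x
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, Fraction(q)**k * x * p1 - p0
    return p1

def w(n, tau0, q, c0, c1):
    """w(q^{n+1} tau_0) = c1 P_n(tau0; q) + c0 P_{n-1}(q tau0; q)."""
    return c1 * P(n, tau0, q) + c0 * P(n - 1, Fraction(q) * tau0, q)

# the q-Airy equation at tau = q^n tau_0:
q, t0, c0, c1 = Fraction(2), Fraction(3), Fraction(1), Fraction(1)
for n in range(2, 7):
    assert w(n, t0, q, c0, c1) - q**n * t0 * w(n - 1, t0, q, c0, c1) \
           + w(n - 2, t0, q, c0, c1) == 0
\end{verbatim}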
\subsection{$q$-discrete Painlev\'{e} III equation}
The $q$-discrete analog of the Painlev\'{e} III equation has the following form
\[
x_{n+1}x_{n-1}=\frac{ab(x_n-cq^n)(x_n-dq^n)}{(x_n-a)(x_n-b)},
\]
where $a,b,c,d$ and $q$ are parameters \cite{RGH}.
It is convenient to rewrite it as the following coupled form
\begin{equation}
\Phi_n: \left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{ab(x_n-cq^n)(x_n-dq^n)}{y_n(x_n-a)(x_n-b)},\\
y_{n+1}&=x_n.
\end{array}
\right.
\label{qP3}
\end{equation}
\begin{Theorem}
Suppose that $a,b,c,d,q$ are parameters in $\{1,2,\cdots,p-1\}$, that $a,b,c,d$ are distinct, and that $a+b\not \equiv (c+d)q^3$; then the mapping \eqref{qP3} has an almost good reduction
modulo $\mathfrak{p}$ on the domain
$\mathcal{D}:={\mathbb Z}_p^2\cap \Phi_n^{-1}({\mathbb Q}_p^2)=\{(x,y)\in {\mathbb Z}_p^2\ |x\neq a,b,\,y\neq 0 \}$.
\label{PropqP3}
\end{Theorem}
\textbf{Proof}\;\;
Let $(x_{n+1},y_{n+1})=\Phi_n(x_n,y_n)$.
In the case when $\tilde{x}_n\neq \tilde{a},\tilde{b}$ and $\tilde{y}_n\neq 0$, we have
\begin{equation}
\left\{
\begin{array}{cl}
\tilde{x}_{n+1}&=\dfrac{\tilde{a}\tilde{b}(\tilde{x}_n-\tilde{c}\tilde{q}^n)(\tilde{x}_n-\tilde{d}\tilde{q}^n)}{\tilde{y}_n(\tilde{x}_n-\tilde{a})(\tilde{x}_n-\tilde{b})},\\
\tilde{y}_{n+1}&=\tilde{x}_n.
\end{array}
\right.
\end{equation}
from the relation \eqref{prel}.
Hence $\widetilde{\Phi_n(x_n,y_n)}=\widetilde{\Phi_n}(\tilde{x}_n,\tilde{y}_n)$.
We have to examine the other cases. From here we sometimes abbreviate $\tilde{a}$ as $a$, $\tilde{b}$ as $b$ for simplicity.
(i) If $\tilde{x}_n=\tilde{a}$ and $(a-b)(a+b-cq-dq)\tilde{y}_n\not \equiv b(a-c)(a-d)$,
neither $\widetilde{\Phi_n}(\tilde{a},\tilde{y}_n)$ nor $\widetilde{\Phi_n^2}(\tilde{a},\tilde{y}_n)$ is well-defined. However,
$\widetilde{\Phi_n^3}(\tilde{a},\tilde{y}_n)$ is well-defined and we have,
\begin{eqnarray*}
&&\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=\tilde{a},\tilde{y}_n)\\
&=&\left(\frac{a(b-cq^2)(b-dq^2)\tilde{y}_n}{b(a-c)(a-d)-(a-b)(a+b-cq-dq)\tilde{y}_n},b\right).
\end{eqnarray*}
(ii) If $\tilde{x}_n=\tilde{a}$ and $(a-b)(a+b-cq-dq)\tilde{y}_n\equiv b(a-c)(a-d)$, none of $\widetilde{\Phi_n^i}(\tilde{a},\tilde{y}_n)$ is well-defined for $i=1,2,3,4$. However, $\widetilde{\Phi_n^5}(\tilde{a},\tilde{y}_n)$ is well-defined and we have,
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}(\tilde{x}_n=\tilde{a},\tilde{y}_n)=\left(\frac{b(a-cq^4)(a-dq^4)}{(a-b)(a+b-cq^3-dq^3)},a\right).
\]
(iii) If $\tilde{x}_n=\tilde{b}$ and $(a-b)(a+b-cq-dq)\tilde{y}_n \not \equiv -a(b-c)(b-d)$,
\begin{eqnarray*}
&&\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=\tilde{b},\tilde{y}_n)\\
&=&\left(\frac{b(a-cq^2)(a-dq^2)\tilde{y}_n}{a(b-c)(b-d)+(a-b)(a+b-cq-dq)\tilde{y}_n},a\right).
\end{eqnarray*}
(iv) If $\tilde{x}_n=\tilde{b}$ and $(a-b)(a+b-cq-dq)\tilde{y}_n \equiv -a(b-c)(b-d)$, we have,
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}(\tilde{x}_n=\tilde{b},\tilde{y}_n)=\left(-\frac{a(b-cq^4)(b-dq^4)}{(a-b)(a+b-cq^3-dq^3)},b\right).
\]
(v) If $\tilde{y}_n=0$ and $\tilde{x}_n\not = 0$,
\[
\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n,\tilde{y}_n=0)=\left(0,\frac{ab}{\tilde{x}_n}\right).
\]
(vi) If $\tilde{y}_n=0$ and $\tilde{x}_n = 0$,
\[
\widetilde{\Phi_n^4(x_n,y_n)} = \widetilde{\Phi_n^4}(\tilde{x}_n=0,\tilde{y}_n=0)=\left(0,0\right).\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
\subsection{$q$-discrete Painlev\'{e} IV equation}
The $q$-discrete analog of the Painlev\'{e} IV equation has the following form:
\[
(x_{n+1}x_n-1)(x_nx_{n-1}-1)=\frac{aq^{2n}(x_n^2+1)+bq^{2n}x_n}{cx_n+dq^n},
\]
where $a,b,c,d$ and $q$ are parameters \cite{RGH, RG}.
It can be rewritten as follows:
\begin{equation}
\Phi_n: \left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{\tau^2(ax_n^2+bx_n+a)+(x_ny_n-1)(x_n+\tau)}{x_n(x_ny_n-1)(x_n+\tau)},\\
y_{n+1}&=x_n,
\end{array}
\right.
\label{qP4}
\end{equation}
where $\tau=q^n\tau_0$.
Here we took $\tau_0=\frac{d}{c}$ and redefined $a,b$ as $\frac{ac}{d^2}\to a$ and $\frac{bc}{d^2}\to b$.
\begin{Theorem}
Suppose that $|a|_p=|b|_p=|q|_p=|\tau_0|_p=1$, then the mapping \eqref{qP4} has an almost good reduction
modulo $\mathfrak{p}$ on the domain
$\mathcal{D}^{(n)}:={\mathbb Z}_p^2 \cap \Phi_n^{-1}({\mathbb Q}_p^2)=\{(x,y)\in {\mathbb Z}_p^2\ |x\neq 0,\ xy\neq 1,\ x\neq -\tau \}$, on the condition that
$aq^2\tau_0\neq 1$ and $aq^4\tau_0\neq 1$.
\label{PropqP4}
\end{Theorem}
\textbf{Proof}\;\;
In the proof we use the abbreviation as $\tilde{a}\to a,\ \tilde{b}\to b,\tilde{\tau}_0\to\tau_0$.
(i) If $\tilde{x}_n=0$ and $1+q^3\tau_0^2+q^2(-1-b\tau_0^2+\tau_0 y_n+a\tau_0-a\tau_0^2 y_n)\not\equiv 0$,
\begin{eqnarray*}
&&\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=0,\tilde{y}_n)\\
&=&\left(\frac{-1-q^3\tau_0^2-bq^4\tau_0^2+aq^6\tau_0^3+q^2(1+b\tau_0^2-\tau_0 y_n+a\tau_0^2 y_n)}{q^2\tau_0\{1+q^3\tau_0^2+q^2(-1-b\tau_0^2+\tau_0 y_n+a\tau_0-a\tau_0^2 y_n)\}},-q^2\tau_0\right).
\end{eqnarray*}
(ii) If $\tilde{x}_n=0$ and $1+q^3\tau_0^2+q^2(-1-b\tau_0^2+\tau_0 y_n+a\tau_0-a\tau_0^2 y_n)\equiv 0$,
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}(\tilde{x}_n=0,\tilde{y}_n)=\left(\frac{-1+q^2+aq^4\tau_0+q^7\tau_0^2-bq^8\tau_0^2}{q^4\tau_0(-1+aq^4\tau_0)},0\right),
\]
where we assumed that $aq^4\tau_0\neq 1$.
(iii) If $\tilde{x}_n=-q^n\tau_0$ and $\tilde{y}_n\neq -\tau_0^{-1}$,
\begin{eqnarray*}
\widetilde{\Phi_n^3(x_n,y_n)}& =& \widetilde{\Phi_n^3}(\tilde{x}_n=-q^n\tau_0,\tilde{y}_n)\\
&=&\left(\frac{-1-\tau_0 y_n+(q^3-bq^4)\tau_0^2(1+\tau y_n)+C}{q^2\tau_0(-1+aq^2\tau_0)(1+\tau_0 y_n)},0\right),
\end{eqnarray*}
where
\[
C=q^2\{1+b\tau_0^2+\tau_0 y_n+a\tau_0^2(-\tau_0+y_n)\}.
\]
Here we assumed $aq^2\tau_0\neq 1$.
(iv) If $\tilde{x}_n=-q^n\tau_0$ and $\tilde{y}_n= -\tau_0^{-1}$,
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}(\tilde{x}_n=-q^n\tau_0,\tilde{y}_n=-\tau_0^{-1})=\left(-\frac{1}{aq^6\tau_0^2},-aq^6\tau_0^2\right).
\]
(v) If $\tilde{x}_n\tilde{y}_n=1$,
\[
\widetilde{\Phi_n^5(x_n,y_n)} = \widetilde{\Phi_n^5}\left(\tilde{x}_n=\frac{1}{\tilde{y}_n},\tilde{y}_n\right)=\left(\frac{1}{aq^6\tau_0^3\tilde{y}_n},aq^6\tau_0^3\tilde{y}_n\right).\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
\subsection{$q$-discrete Painlev\'{e} V equation}
The $q$-discrete analog of the Painlev\'{e} V equation has the following form:
\[
(x_{n+1}x_n-1)(x_nx_{n-1}-1)=\frac{abq^n(x_n-c)(x_n-1/c)(x_n-d)(x_n-1/d)}{(x_n-aq^n)(x_n-bq^n)},
\]
where $a,b,c,d$ and $q$ are parameters \cite{RGH}.
It can be rewritten as the following form:
\begin{equation}
\Phi_n: \left\{
\begin{array}{cl}
x_{n+1}&=\dfrac{1}{x_n}\left(\dfrac{abq^n(x_n-c)(x_n-1/c)(x_n-d)(x_n-1/d)}{(x_n-aq^n)(x_n-bq^n)(x_ny_n-1)}+1\right),\\
y_{n+1}&=x_n.
\end{array}
\right.
\label{qP5}
\end{equation}
\begin{Theorem}
Suppose that $a,b,c,d,q$ are in $\{1,2,\cdots, p-1\}$ and $a,b,c,d,c^{-1},d^{-1}$ are distinct from each other, then the mapping \eqref{qP5} has almost good reduction
modulo $\mathfrak{p}$ on the domain
$\mathcal{D}^{(n)}:={\mathbb Z}_p^2\cap \Phi_n^{-1}({\mathbb Q}_p^2)=\{(x,y)\in {\mathbb Z}_p^2\ | x\neq aq^n,bq^n,\ xy\neq 1\}$.
\label{PropqP5}
\end{Theorem}
\textbf{Proof}\;\;
The calculation is extremely lengthy and we need about 13 gigabytes of memory. We deal with $n=0$ for simplicity. (Since ord$_p(q)=0$, the same argument applies to other cases.)
(i) If $\tilde{x}_n=a$,
\[
\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=a,\tilde{y}_n)=\left(\frac{1}{bq},bq\right).
\]
(ii) If $\tilde{x}_n=b$,
\[
\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}(\tilde{x}_n=b,\tilde{y}_n)=\left(\frac{1}{aq},aq\right).
\]
(iii) If $\tilde{x}_n\tilde{y}_n=1$,
\[
\widetilde{\Phi_n^3(x_n,y_n)} = \widetilde{\Phi_n^3}\left(\tilde{x}_n,\tilde{y}_n=\frac{1}{\tilde{x}_n}\right)=\left(\frac{1}{abq\tilde{y}_n},abq\tilde{y}_n\right).\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
\section{Hietarinta-Viallet equation}
The Hietarinta-Viallet equation \cite{HV} is the following difference equation:
\begin{equation}
x_{n+1}+x_{n-1}=x_n+\frac{a}{x_n^2}, \label{HV1}
\end{equation}
with $a$ as a parameter.
The equation \eqref{HV1} passes the singularity confinement test \cite{Grammaticosetal}, a notable test for the integrability of equations, yet it is not integrable, in the sense that its algebraic entropy is positive and its orbits display chaotic behavior.
We prove that the AGR is satisfied for this Hietarinta-Viallet equation.
We again rewrite \eqref{HV1} as the following coupled form:
\begin{equation}
\Phi_n: \left\{
\begin{array}{cl}
x_{n+1}&=x_n+\dfrac{a}{x_n^2}-y_n,\\
y_{n+1}&=x_n.
\end{array}
\right.
\label{HV}
\end{equation}
\begin{Theorem}
Suppose that $|a|_p=1$, then the mapping \eqref{HV} has an almost good reduction
modulo $\mathfrak{p}$ on the domain
$\mathcal{D}:={\mathbb Z}_p^2\cap \Phi_n^{-1}({\mathbb Q}_p^2)=\{(x,y)\in {\mathbb Z}_p^2\ |x\neq 0\}$.
\label{PropHV}
\end{Theorem}
\textbf{Proof}\;\;
If $\tilde{x}_n=0$,
\[
\widetilde{\Phi_n^4(x_n,y_n)} = \widetilde{\Phi_n^4}(\tilde{x}_n=0,\tilde{y}_n)=(\tilde{y}_n,0).\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
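This confinement can also be seen in an explicit computation over ${\mathbb Q}_p$: start at $x_n=p$ (so that $\tilde{x}_n=0$), iterate four times in exact arithmetic and reduce. A sketch with $p=7$, $a=1$ and $\tilde{y}_n=3$:
\begin{verbatim}
from fractions import Fraction

def red(x, p):                      # reduction Z_p -> PF_p
    u, v = x.numerator, x.denominator
    return (u * pow(v, -1, p)) % p if v % p else 'inf'

p, a = 7, Fraction(1)
x, y = Fraction(p), Fraction(3)     # so (x~, y~) = (0, 3)
for _ in range(4):
    x, y = x + a / x**2 - y, x
print(red(x, p), red(y, p))         # expect (3, 0) by the theorem
\end{verbatim}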
\begin{Proposition}
Suppose that $|a|_p=1$, then the mapping \eqref{HV} has a refined almost good reduction.
Here the normal domain is $D_N=\mathbb{Z}_p^{\times}\times {\mathbb Z}_p$. Other domains are defined as
$D_S=p{\mathbb Z}_p \times {\mathbb Z}_p$, $E={\mathbb Q}_p^2\setminus {\mathbb Z}_p^2$.
\end{Proposition}
\textbf{Proof}\;\;
First let us fix the initial condition $(x_0,x_{-1})\in D_N$, $(x_0\in{\mathbb Z}_p^{\times})$.
(i) If $\pi(a/x_0^2+x_0)\neq \tilde{x}_{-1}$ then, we have $\tilde{x}_1\neq 0$. Thus
\[
(x_1,x_0)\in D_N.
\]
(ii) If $\pi(a/x_0^2+x_0)= \tilde{x}_{-1}$ then, we have $x_1\in p{\mathbb Z}_p\ (\tilde{x}_1=0)$, therefore
\[
(x_1,x_0)\in D_S.
\]
By iterating further, we obtain the following:
$\tilde{x}_2=\tilde{x}_3=\infty$, $\tilde{x}_4=0$, $\tilde{x}_5=\tilde{x}_0$.
Therefore,
\[
(x_2,x_1)\in E,\ (x_3,x_2)\in E,\ (x_4,x_3)\in E,\ (x_5,x_4)\in D_N.\hfill\hbox{$\Box$}\vspace{10pt}\break
\]
Therefore we learn that the AGR and refined AGR work similarly to the singularity confinement test in distinguishing the integrable systems from the non-integrable ones.
In fact, the AGR and refined AGR can be seen as an arithmetic analog of the singularity confinement test.
\section{The $p$-adic singularity confinement}
The above approach is closely related to the singularity confinement method which is an effective test to judge the integrability of the given equations \cite{Grammaticosetal}.
In the proof of the theorem \ref{PropdP2}, we have taken
\[
x_n=1+e p^k\ \ (e\in\mathbb{Z}_p^{\times},\ k>0)
\]
instead of taking $x_n=1+\epsilon$ and showed that the limit
\[
\lim_{|e p^k|_p \to 0}(x_{n+m}, x_{n+m+1})
\]
is well defined for some positive integer $m$.
Here $ep^k\ (k>0)$ is an alternative in ${\mathbb Q}_p$ for the infinitesimal parameter $\epsilon>0$ in the singularity confinement test in ${\mathbb C}$. Note that $p^k\ (k>0)$ is a `small' number in terms of the $p$-adic metric $(|p^k|_p=p^{-k})$. In fact, in most cases, we may just replace $\epsilon$ for $p$ in order to test the $p$-adic singularity confinement.
From this observation and previous theorems, we postulate that having almost good reduction in arithmetic mappings is similar to passing the singularity confinement test.
\section{Relation to the `Diophantine integrability'}
Lastly we discuss a relationship between the systems over finite fields and the algebraic entropies of the systems.
Let $\phi$ be a difference equation and let the degree of the map $\phi$ be $d>0$. We define the degree of the iterates $\phi^n$ as $\deg (\phi^n)=d_n$.
The na\"{i}ve composition suggests $d_n=d^n$, however, common factors can be eliminated, lowering the degree of the iterates. Algebraic entropy $E$ of $\phi$ is the following well-defined quantity \cite{BV}.
\[
E:=\lim_{n\to\infty}\frac{1}{n}\log d_n\; (\ge 0).
\]
The existence of the limit $E\ge 0$ is guaranteed by the subadditivity of the sequence $\log d_n$.
From a large number of examples, we can postulate that the mapping $\phi$ is
integrable if and only if $E=0$, that is, if and only if $d_n$ has polynomial growth.
We can construct an arithmetic analog of the algebraic entropy which has first been introduced in \cite{Halburd}.
If we consider the map with rational numbers as coefficients, and choose initial values to be rational numbers, then we have $x_n\in \mathbb{Q}$ for all $n\in\mathbb{Z}_{>0}$. The arithmetic complexity of rational numbers can be expressed by the height function $H(x)$:
\[
H(x)=\max\{|u|,|v|\},
\]
where $x=\frac{u}{v}$ and $u$ and $v$ are integers without common factors. ($H(0)=0$.)
The map $\phi$ is said to be `Diophantine integrable' if and only if $\log H(x_n)$ grows no faster than some polynomial in $n$.
Thus we define the arithmetic analog of the algebraic entropy, which may be called a `Diophantine entropy', as
\[
\epsilon:=\lim_{n\to\infty} \frac{1}{n}\log\left(\log H(x_n)\right).
\]
Precisely speaking, the value $\epsilon$ depends on the choice of initial data of the system; however, we conjecture that the value $\epsilon$ is independent of that choice for most of the initial conditions. We also conjecture that, for most dynamical systems with rational numbers as coefficients, the two values $E$ and $\epsilon$ are the same.
We have the following two conjectures from numerical observations:
\paragraph{(i)} The Hietarinta-Viallet equation \eqref{HV1} has $\epsilon=\log\left(\frac{3+\sqrt{5}}{2}\right)$, which is exactly equal to the original algebraic entropy $E=\log\left(\frac{3+\sqrt{5}}{2}\right)$ obtained in \cite{Takenawa,HV,BV}.
\paragraph{(ii)} In the case of the equation \eqref{discretemap}, $\epsilon=\log 3>0$ for $\gamma=3$, while, for $\gamma=1,2$, we have $\epsilon=0$ and $\log H(x_n)$ has a polynomial growth of second degree for generic initial conditions.
Therefore, in these cases, the Diophantine entropy $\epsilon$ motivated by the Diophantine integrability is expected to be equivalent to the (original) algebraic entropy $E$.
We do not present proofs of these conjectures, parts of which are still incomplete; instead we give some numerical examples which support (i) and (ii).
\paragraph{(i)} Let us suppose that $p=3$, $(x_0,x_1)=(1,3)$ and that the parameter $a=1$ in the Hietarinta-Viallet equation \eqref{HV1}. Then
\begin{eqnarray*}
\{\log_3 H(x_n)\}_{n=0}^\infty&=&\{0,1,2,7,19,50,132,347,911,2385,6245,\\
&& 16352,42811,112082,293434,768221,\cdots\}.
\end{eqnarray*}
Here we displayed only the integer parts of the values.
We can see numerically that
\[
\left. \frac{\log_3 H(x_{n+1})}{\log_3 H(x_n)}\right|_{n\to\infty}\sim 2.61804 \sim \frac{3+\sqrt{5}}{2}.
\]
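This sequence is reproduced by a few lines of exact rational arithmetic; as above, only the integer parts of $\log_3 H(x_n)$ are recorded in this sketch:
\begin{verbatim}
from fractions import Fraction
from math import log

a, x_prev, x_cur = Fraction(1), Fraction(1), Fraction(3)
digits = []
for _ in range(12):
    H = max(abs(x_prev.numerator), abs(x_prev.denominator))
    digits.append(int(log(H, 3)) if H > 1 else 0)
    x_prev, x_cur = x_cur, x_cur + a / x_cur**2 - x_prev
print(digits)  # [0, 1, 2, 7, 19, 50, 132, 347, 911, 2385, 6245, 16352]
\end{verbatim}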
\paragraph{(ii)} In the case of the equation \eqref{discretemap}, we give two examples.
First let us suppose that $p=3$, $\gamma=3$, $(x_0,x_1)=(1,3)$, and that the parameter in the equation \eqref{discretemap} is $a=2$. Then,
\[
\{\log_3 H(x_n)\}_{n=0}^\infty=\{0,1,3,8,26,79,236,711,2133,6400,19201,\cdots\}.
\]
Therefore we see numerically that
\[
\frac{\log_3 H(x_{n+1})}{\log_3 H(x_n)} \sim 3,\ \ \epsilon\sim \log 3.
\]
On the other hand, if $\gamma=1$, we have
\[
\{\log_3 H(x_n)\}_{n=0}^\infty=\{0,1,1,1,2,3,3,4,5,7,7,8,9,12,13,14,16,18,20,\cdots\}.
\]
Therefore we see numerically that
\[
\frac{\log_3 H(x_{n+1})}{\log_3 H(x_n)} \sim 1,\ \ \epsilon\sim 0.
\]
The rate of growth of $\log_3 H(x_n)$ is quadratic:
if we fit $\log_3 H(x_n)$ for $n=0,\cdots,100$ with a cubic polynomial, we obtain
\[
\log_3 H(x_n)\sim 0.120+0.185n+0.0454n^2+5\times 10^{-7} n^3,
\]
which indicates a quadratic growth.
In the case of original algebraic entropy $E$, we can rigorously obtain the recurrence relation for the sequence $d_n$ of degrees of rational functions with several methods. However, in the case of `Diophantine entropy', it is not easy in many cases to exactly estimate the elimination of common factor $p$ between the numerator and the denominator.
This idea is essentially equivalent to studying the growth of the number of digits of the numerator (or denominator) of $x_n\in\mathbb{Q}$ when expressed as $p$-adic expansions. Therefore the procedure can be seen as an analog of algebraic entropy of a system over a finite field $\mathbb{F}_p$.
As a technique of the numerical simulations, instead of the height $H(x_n)$, we can also use only the denominator $r_n$ or the numerator $s_n$ of $x_n=s_n/r_n$: i.e.,
both of the values
\[
\lim_{n\to \infty}\frac{1}{n}\log\left(\log | s_n|\right),\; \lim_{n\to \infty}\frac{1}{n}\log\left(\log | r_n |\right)
\]
should give the same value as $\epsilon$ due to a result by Silverman \cite{Silvermanduke}.
The biggest benefit of this `Diophantine' approach might be that the time of computation is greatly reduced by using rational numbers instead of formal variables. This allows us to obtain a conjecture for the integrability, and a conjecture for the value of the algebraic entropy, in a comparatively short time.
In 2014, after this thesis was submitted, a series of generalized versions of the Hietarinta-Viallet equation
\[
x_{n+1}=-x_{n-1}+x_n+\frac{1}{x_n^k},
\]
where $k\ge 2$, has been under investigation by the author and his collaborators.
They numerically computed an approximation to the Diophantine entropy $\epsilon$ of this system for $k=2,3,4,5,\cdots$ and conjectured the exact values of algebraic entropy $E$ from these approximations.
We have found that the situation depends on the parity of the integer $k$.
This topic will be dealt with in other papers.
\section{Systems over the extended fields}
In the preceding sections we have successfully defined the dynamical systems over the finite field $\mathbb{F}_p$ through the extensions to, and reductions from, the field of $p$-adic numbers $\mathbb{Q}_p$.
In this section we generalize this result to the systems over a larger finite field $\mathbb{F}_{p^m}$ where $m>1$, and then study the ways of reduction to some finite field from the field of complex numbers $\mathbb{C}$.
Since a field extension $L$ of degree $m$ over $\mathbb{Q}_p$ is a simple extension, there exists an element $\alpha\in L$ such that $L=\mathbb{Q}_p(\alpha)$. The reduction from $L$ to the set $\mathbb{F}_p(\alpha)\cup \{\infty\}$ is defined naturally using the reduction map \eqref{padicreductionmap} in the previous sections:
\begin{eqnarray*}
L=\mathbb{Q}_p(\alpha)&\ni& x_0+x_1\alpha+x_2\alpha^2+\cdots+x_{m-1}\alpha^{m-1}\\
&\mapsto & \tilde{x}_0+\tilde{x}_1\alpha+\tilde{x}_2\alpha^2+\cdots+\tilde{x}_{m-1}\alpha^{m-1} \in \mathbb{F}_p(\alpha)\cup\{\infty\}.
\end{eqnarray*}
For example let us define dynamical systems over $\mathbb{F}_{p^2}$ and discuss the properties of the reductions.
Let $\alpha$ be a generator of ${\mathbb F}_{p^2}$ over ${\mathbb F}_p$, chosen so that $\alpha\in{\mathbb F}_{p^2}\setminus {\mathbb F}_p$ and $a:=\alpha^2\in\mathbb{F}_p$, i.e., $\alpha$ is a square root of a non-square element of ${\mathbb F}_p^{\times}$.
Then we have $\mathbb{F}_{p^2}={\mathbb F}_p(\alpha)$.
\begin{Lemma}\label{lem411}
The field ${\mathbb Q}_p(\alpha)$ is the extension field of ${\mathbb Q}_p$ of degree two.
\end{Lemma}
\textbf{Proof}\;\;
We can see $a$ as an element of ${\mathbb Z}_p^{\times}$. Since $a\!\!\mod p{\mathbb Z}_p\in {\mathbb F}_p^{\times}$ is not a square element in ${\mathbb F}_p^{\times}$, $a$ is not a square in ${\mathbb Q}_p^{\times}$ either. Therefore $\alpha$ is not in ${\mathbb Q}_p$.\hfill\hbox{$\Box$}\vspace{10pt}\break
We define the reduction map $\pi_2$ from ${\mathbb Q}_p(\alpha)$ to ${\mathbb F}_{p^2}$ as follows:
\begin{equation}
\pi_2:\;\;{\mathbb Q}_p(\alpha)\ni x+y\alpha\; (x,y\in\mathbb{Q}_p)\longmapsto \left\{
\begin{array}{cl}
\tilde{x}+\tilde{y}\alpha \in{\mathbb F}_p(\alpha)& (x,y\in{\mathbb Z}_p)\\
\infty & (\mbox{otherwise})
\end{array}
\right.
\end{equation}
Note that $\pi_2|_{{\mathbb Z}_p[\alpha]}$ is a ring homomorphism.
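In computations, an element $x+y\alpha$ can simply be stored as the pair $(x,y)$ and reduced componentwise; a minimal Python sketch:
\begin{verbatim}
from fractions import Fraction

def red(x, p):                   # Z_p -> F_p, or None if x is not in Z_p
    u, v = x.numerator, x.denominator
    return (u * pow(v, -1, p)) % p if v % p else None

def pi2(x, y, p):                # the reduction map pi_2 on pairs (x, y)
    rx, ry = red(x, p), red(y, p)
    return 'inf' if rx is None or ry is None else (rx, ry)

# pi_2(1/2 + 5*alpha) over Q_3(alpha): 1/2 = 2 mod 3 and 5 = 2 mod 3
assert pi2(Fraction(1, 2), Fraction(5), 3) == (2, 2)
\end{verbatim}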
We define the almost good reduction in a similar manner to the case of systems over ${\mathbb F}_p$.
\begin{Definition}
A non-autonomous rational system $\phi_n$: $({\mathbb Q}_p(\alpha))^2 \to ({\mathbb Q}_p(\alpha))^2$ $(n \in {\mathbb Z})$ has an almost good reduction modulo $\pi_2$ on the domain $\mathcal{D} \subseteq ({\mathbb Z}_p[\alpha])^2$, if there
exists a positive integer $m_{S;n}$ for any $S=(x,y) \in \mathcal{D}$ and time step $n$ such that
\begin{equation}
\pi_2({\phi_n^{m_{S;n}}(x,y)})=\widetilde{\phi_n^{m_{S;n}}}(\pi_2({x}),\pi_2({y})),
\label{AGRsquare}
\end{equation}
where $\phi_n^m :=\phi_{n+m-1} \circ \phi_{n+m-2} \circ \cdots \circ \phi_n$.
\end{Definition}
Next we apply these results to the field $\mathbb{C}$.
Note that we already have a method to obtain cellular automata from the discrete systems via extended ultra-discretization \cite{TYajima}.
We take a different approach, which is based on the arithmetic of $p$-adic numbers. We use without proof the following fact from number theory.
\begin{Lemma}
The field $\mathbb{Q}_p$ has a square root of $-1$ if and only if $p\equiv 1\!\mod 4$.
\end{Lemma}
From this fact we consider the following two cases:
\begin{itemize}
\item If $p=2$ or $p\equiv 3\!\mod 4$, then the lemma \ref{lem411} holds for $\alpha=\sqrt{-1}$.
Thus we obtain the following reduction mapping $\pi_{\mathbb{C}}$:
\begin{equation}
\pi_{\mathbb{C}}:\;\;{\mathbb Q}_p(\sqrt{-1})\ni x+\sqrt{-1}y\;\longmapsto \left\{
\begin{array}{cl}
\tilde{x}+\sqrt{-1}\tilde{y} \in{\mathbb F}_{p^2}& (x,y\in{\mathbb Z}_p)\\
\infty & (\mbox{otherwise})
\end{array}
\right.
\end{equation}
\item If $p\equiv 1\mod 4$, on the other hand, $\mathbb{Q}_p(\sqrt{-1})=\mathbb{Q}_p$ holds.
Therefore, the reduction mapping $\pi_{\mathbb{C}}$ takes values in ${\mathbb P}{\mathbb F}_p=\mathbb{F}_p\cup\{\infty\}$.
\end{itemize}
The values of the form $x+\sqrt{-1}y\in\mathbb{C}$, $(x,y\in\mathbb{Q})$ can be reduced to either $\mathbb{F}_p\cup\{\infty\}$ or $\mathbb{F}_{p^2}\cup\{\infty\}$.
Note that we cannot apply this method if $x,y$ are not rational numbers.
By using this approach to the equations with complex variables such as a discrete version of the nonlinear Schr\"{o}dinger equation (dNLS) and a discrete sine-Gordon equation, we expect to obtain the cellular automata related to the equations. One of the future problems is to investigate the cellular automata (ultra-discrete) analogs of the breather solutions of dNLS.
\chapter{Two-dimensional systems over finite fields} \label{sec5}
In chapter \ref{sec3}, we have successfully determined the time evolution of the discrete Painlev\'{e} equations through the construction of their space of initial conditions by blowing-up twice at each of the singular points so that the mapping becomes bijective.
However, for a general nonlinear equation, explicit construction of the space of initial conditions over a finite field is not so straightforward (for example, see \cite{Takenawa} or consider the higher dimensional lattice systems). Therefore this approach does not readily yield explicit solutions.
In this section we study the soliton equations evolving as a two-dimensional lattice over finite fields by following the discussions made in \cite{KMT}.
\section{Discrete KdV equation over the field of rational functions}\label{section51}
Let us consider the discrete KdV equation \eqref{dKdV1}
over a finite field ${\mathbb F}_q$ where $q=p^m$, $p$ is a prime number and $m\in{\mathbb Z}_{+}$.
Let us reproduce the discrete KdV equation here:
\begin{equation*}
\frac{1}{x_{n+1}^{t+1}}-\frac{1}{x_n^t}+\frac{\delta}{1+\delta}\left(x_n^{t+1}-x_{n+1}^t \right)=0.
\end{equation*}
Here $n,t \in {\mathbb Z}$ and $\delta$ is a parameter. If we take
\[
\frac{1}{y_n^t}:=(1+\delta)\frac{1}{x_n^{t+1}}-\delta x_n^t
\]
we obtain equivalent coupled equations
\begin{equation}
\left\{
\begin{array}{cl}
x_n^{t+1}&=\dfrac{(1+\delta)y_n^t}{1+\delta x_n^ty_n^t},\vspace{2mm} \\
y_{n+1}^{t}&=\dfrac{(1+\delta x_n^ty_n^t)x_n^t}{1+\delta}.
\end{array}
\right.
\label{dKdV2}
\end{equation}
Clearly \eqref{dKdV2} does not determine the time evolution when $1+\delta x_n^t y_n^t\equiv 0$.
Over a field of characteristic $0$ such as ${\mathbb C}$, the time evolution of $(x_n^t,y_n^t)$ will not hit this exceptional line
for generic initial conditions; over a finite field, on the contrary, the evolution reaches this exceptional line in many cases, since a division by $0$ appears.
The mapping, $(x_n^t,y_n^t) \mapsto (x_n^{t+1}, y_{n+1}^t)$, is lifted to an automorphism of the surface $\tilde{X}$,
where $\tilde{X}$ is obtained from ${\mathbb P}^1 \times {\mathbb P}^1$ by blowing up twice at $(0,\infty)$ and $(\infty, 0)$ respectively:
\begin{align*}
\tilde{X}&={\mathbb A}_{(0,\infty)} \cup {\mathbb A}_{(\infty,0)},\\
{\mathbb A}_{(0,\infty)}&:=\left\{ \left(\left(x, y^{-1}\right), [\xi:\eta], [u:v]\right) \Big| \ x \eta=y^{-1} \xi,\ \eta u = y^{-1} (\eta+\delta \xi) v \right\}\\
&\subset {\mathbb A}^2 \times {\mathbb P}^1\times{\mathbb P}^1, \\
{\mathbb A}_{(\infty,0)}&:=\left\{ \left(\left(x^{-1}, y\right), [\xi:\eta], [u:v]\right) \Big|\ x^{-1}\eta=y\xi,\ (\eta + \delta \xi) u = y \eta v\right\}\\
&\subset {\mathbb A}^2 \times {\mathbb P}^1\times{\mathbb P}^1,
\end{align*}
where $[a:b]$ denotes a set of homogeneous coordinates for ${\mathbb P}^1$.
To define the time evolution of the system with $N$ lattice points from \eqref{dKdV2}, however, we have to consider the mapping
\[
(y_1^t;x_1^t,x_2^t,...,x_N^t) \longmapsto (x_1^{t+1},x_2^{t+1},...,x_N^{t+1};y_{N+1}^t).
\]
Since there seems to be no reasonable decomposition of $\tilde{X}$ into a direct product of two independent spaces, successive use of \eqref{dKdV2} becomes impossible.
Note that if we blow down $\tilde{X}$ to ${\mathbb P}^1 \times {\mathbb P}^1$, the information of the initial values is lost in general.
If we intend to construct an automorphism of a space of initial conditions, it will be inevitable to start from ${\mathbb P}^{N+1}$ and blow-up to some huge manifold, which is beyond the scope of the present paper.
There would be a large number of exceptional hyperplanes in the space of initial conditions if it did exist, and it is practically impossible to check all the ``singular'' patterns in the na\"{i}ve extension of the singularity confinement test. Another difficulty is that, in high dimensional lattice systems, we cannot properly impose boundary conditions compatible with the extension of the spaces.
These difficulties seem to be some of the reasons why the singularity confinement method has not been used for the construction of integrable partial difference equations or for judging their integrability, though some attempts have been proposed in the bilinear form \cite{RGS}.
On the other hand, when we fix the initial condition for a partial difference equation, the number of singular patterns is restricted in general and we have only to enlarge the domain so that the mapping becomes well defined.
This is the strategy that we will adopt in this section.
Suppose, for example, that we evolve \eqref{dKdV2} over ${\mathbb F}_7$ with $\delta=1$, and that $x_1^0=6,\ x_2^0=5,\ y_1^0=2,\ y_1^1=2$; then
we have
\[
x_1^{1}=4/13 \equiv 3,\quad y_2^{0}=78/2 \equiv 4 \mod 7.
\]
With further calculation we have
\[
x_1^2=4/7 \equiv 4/0,\quad y_2^1=21/2 \equiv 0,\quad x_2^1=8/21 \equiv 1/0.
\]
Since $4/0$ and $1/0$ are not defined over ${\mathbb F}_7$, we now extend ${\mathbb F}_7$ to ${\mathbb P}{\mathbb F}_7$ and
take $\dfrac{j}{0}\equiv \infty$ for $j\in\{1,2,3,4,5,6\}$.
However, at the next time step, we have
\[
x_2^2=\frac{2 \cdot 0}{1+ \infty \cdot 0},\qquad y_3^1=\frac{(1+ \infty \cdot 0)\cdot \infty}{2}
\]
and reach a deadlock.
The first idea to overcome this problem is to consider the equation over the field of rational functions \cite{KMT}.
We try the following two procedures:
(I) we keep $\delta$ as a parameter for the same initial condition, and obtain, as a system over ${\mathbb F}_7(\delta)$,
\begin{align*}
x_1^{1}&=\frac{2(1+\delta)}{1+5\delta},\quad y_2^{0}=\frac{6(1+5\delta)}{1+\delta},\\
x_2^1&=\frac{6(1+\delta)(1+5\delta)}{1+3\delta+3\delta^2},
\quad y_2^1=\frac{2(1+2\delta+4\delta^2)}{(1+5\delta)^2},\quad
x_1^2=\frac{2(1+\delta)(1+5\delta)}{1+2\delta+4\delta^2},\\
x_2^2&=\frac{4(1+\delta)(2+\delta)(3+2\delta)}{(1+5\delta)(5+5\delta+2\delta^2)},\quad y_3^1=\frac{2(5+5\delta+2\delta^2)}{(2+\delta)^2}.
\end{align*}
(II) Then we put $\delta=1$ to have a system over ${\mathbb P}{\mathbb F}_7$ as
\begin{align*}
x_1^{1}&=3,\quad y_2^{0}=4,\quad x_2^1=72/7\equiv \infty,
\quad y_2^1=14/36\equiv 0,\quad x_1^2=24/7 \equiv \infty,\\
x_2^2&=120/72 \equiv 4,\quad y_3^1=24/9 \equiv 5.
\end{align*}
Thus all the values are uniquely determined over ${\mathbb P}{\mathbb F}_7$.
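Procedures (I) and (II) can be automated by doing the polynomial arithmetic directly over ${\mathbb F}_7[\delta]$; a sketch using \texttt{sympy} polynomials with \texttt{modulus=7} (the helper names are ours), which reproduces the values above:
\begin{verbatim}
from sympy import symbols, Poly

d, p = symbols('delta'), 7
P = lambda e: Poly(e, d, modulus=p)

def simp(num, den):                 # cancel the gcd over F_7[delta]
    g = num.gcd(den)
    return num.quo(g), den.quo(g)

def step(x, y):                     # one application of the coupled dKdV;
    (xn, xd), (yn, yd) = x, y       # x = (numerator, denominator), etc.
    s = xd * yd + P(d) * xn * yn    # numerator of 1 + delta*x*y
    return simp(P(1 + d) * yn * xd, s), \
           simp(s * xn, xd * xd * yd * P(1 + d))

def red(x, d0=1):                   # substitute delta = d0 and reduce
    nv, dv = int(x[0].eval(d0)) % p, int(x[1].eval(d0)) % p
    if dv == 0:                     # (0/0 would need the (delta-d0)-adic
        return 'inf'                #  order; it does not occur here)
    return nv * pow(dv, -1, p) % p

one = P(1)
x10, x20, y10, y11 = (P(6), one), (P(5), one), (P(2), one), (P(2), one)
x11, y20 = step(x10, y10)
x21, y30 = step(x20, y20)
x12, y21 = step(x11, y11)
x22, y31 = step(x21, y21)
print([red(v) for v in (x11, y20, x21, y21, x12, x22, y31)])
# expected: [3, 4, 'inf', 0, 'inf', 4, 5]
\end{verbatim}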
Figures \ref{figure1} and \ref{figure2} show a time evolution pattern of the discrete KdV equation \eqref{dKdV2} over ${\mathbb P}{\mathbb F}_7$ for the initial conditions $x_1^0=6,\ x_2^0=5,\ x_3^0=4,\ x_4^0=3,\ x_j^0=2\ (j\ge 5)$ and $y_1^t=2\ (t\ge 0)$.
This example suggests that the equation \eqref{dKdV2} should be understood as evolving over the field ${\mathbb F}_q(\delta)$, the rational function field with indeterminate $\delta$ over ${\mathbb F}_q$.
To obtain the time evolution pattern over ${\mathbb P}{\mathbb F}_q$, we have to substitute $\delta$ with a suitable value $\delta_0 \in{\mathbb F}_q$ ($\delta_0=1$ in the example above).
This substitution can be expressed as the following reduction map:
\begin{equation}\label{reductionmapping}
{\mathbb F}_q(\delta)^{\times}\rightarrow {\mathbb P}{\mathbb F}_q:\ (\delta-\delta_0)^s\frac{g(\delta-\delta_0)}{f(\delta-\delta_0)}\mapsto \left\{
\begin{array} {cl}
0 & (s>0),\vspace{1mm}\\
\infty & (s<0),\vspace{1mm}\\
g(0)/f(0) & (s=0),
\end{array}
\right.
\end{equation}
where $s\in {\mathbb Z}$, $f(h),\ g(h)\in{\mathbb F}_q[h]$ are co-prime polynomials and $f(0)\neq 0, g(0)\neq 0$.
With this prescription, we know that $0/0$ does not appear and we can uniquely determine the time evolution for generic initial conditions defined over ${\mathbb F}_q$.
Of course we can also overcome the indeterminacy by using the field of $p$-adic numbers as we have done in previous sections. This approach is introduced in section \ref{padicdkdv}.
\begin{figure}
\centering
\includegraphics[width=10cm,bb=80 250 500 750]{figure7.eps}
\caption{An example of the time evolution of the coupled discrete KdV equation \eqref{dKdV2} over ${\mathbb P}{\mathbb F}_7$ where $\delta=1$.}
\label{figure1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=12cm,bb=80 540 500 720]{figure8.eps}
\caption{The time evolution pattern of $x_n^t$ (left) and $y_n^t$ (right) of the discrete KdV equation \eqref{dKdV2} over ${\mathbb P}{\mathbb F}_7$ where $\delta=1$.}
\label{figure2}
\end{figure}
\section{Soliton solutions of the (generalized) discrete KdV equations over the field of rational functions}\label{sectiondkdvsoliton}
First we consider the $N$-soliton solutions to \eqref{dKdV1} over ${\mathbb F}_q$.
It is well-known that the $N$-soliton solution is given as
\begin{equation}\label{dkdvsoliton}
\begin{array}{cl}
x_n^t&=\dfrac{\sigma_n^t\sigma_{n+1}^{t-1}}{\sigma_{n+1}^t\sigma_n^{t-1}},\\
\sigma_n^t&:=\det_{1\le i,j\le N}\left( \delta_{ij}+\dfrac{\gamma_i}{l_i+l_j-1}\left(\dfrac{1-l_i}{l_i}\right)^t
\left(\dfrac{l_i+\delta}{1+\delta-l_i}\right)^n\right)
\end{array}
\end{equation}
where $\gamma_i,\ l_i$ $(i=1,2,...,N)$ are arbitrary parameters but $l_i \ne l_j$ for $i \ne j$.
When $l_i,\ \gamma_i$ are chosen in ${\mathbb F}_q$, $x_n^t$ becomes a rational function in ${\mathbb F}_q(\delta)$.
Hence we obtain soliton solutions over ${\mathbb P}{\mathbb F}_q$ by substituting $\delta$ with a value in ${\mathbb F}_q$.
Figure \ref{figure4} shows one and two soliton solutions for the discrete KdV equation \eqref{dKdV1} over the finite fields ${\mathbb P}{\mathbb F}_{11}$ and ${\mathbb P}{\mathbb F}_{19}$. Here we have chosen the values $(1-l_i)/l_i$ and $(l_i+\delta)/(1+\delta-l_i)$ so that their reduction by the reduction map \eqref{reductionmapping} is neither $0$ nor $\infty$. In this case, the reduced soliton solutions exhibit periodicity with periods $q-1$ as in figure \ref{figure4}.
The corresponding time evolutionary patterns over the field ${\mathbb R}$ are also presented for comparison.
Note that, if some of the reduced values of $(1-l_i)/l_i$ or $(l_i+\delta)/(1+\delta-l_i)$ take $0$ or $\infty$, the reduced soliton solutions do not exhibit periodicity in general: they might become stationary, vanish after a few time steps, or look like the normal solitary waves.
These phenomena are described in detail in section \ref{padicdkdv}.
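For a one-soliton solution the determinant in \eqref{dkdvsoliton} is a scalar, and the whole computation can be carried out directly in ${\mathbb F}_{11}$ once $\delta$ is fixed; a sketch with the parameters of the top panel of figure \ref{figure4} (the crude handling of vanishing denominators is only an illustration; a genuine $0/0$ would require the $\delta$-deformation described above):
\begin{verbatim}
p, delta, g1, l1 = 11, 7, 2, 9
inv = lambda a: pow(a % p, p - 2, p)

def sigma(n, t):                    # the 1x1 determinant sigma_n^t
    r = g1 * inv(2 * l1 - 1) % p
    A = (1 - l1) * inv(l1) % p                      # = 4 here
    B = (l1 + delta) * inv(1 + delta - l1) % p      # = 6 here
    return (1 + r * pow(A, t % (p - 1), p) * pow(B, n % (p - 1), p)) % p

def x(n, t):
    num = sigma(n, t) * sigma(n + 1, t - 1) % p
    den = sigma(n + 1, t) * sigma(n, t - 1) % p
    return num * inv(den) % p if den else 'inf'

# periodicity with period q - 1 = 10 in both n and t:
assert all(x(n, t) == x(n + p - 1, t) == x(n, t + p - 1)
           for n in range(5) for t in range(5))
\end{verbatim}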
\begin{figure}
\centering
\includegraphics[width=12cm, bb=60 350 520 730]{figure9.eps}
\caption{The soliton solutions of the discrete KdV equation \eqref{dKdV1} over ${\mathbb R}$ (left) and finite fields (right).
The top one is the one-soliton over ${\mathbb P}{\mathbb F}_{11}$ where $\delta=7,\ \gamma_1=2,\ l_1=9$.
The bottom one is the two-soliton solution over ${\mathbb P}{\mathbb F}_{19}$ where $\delta=8,\ \gamma_1=15,\ l_1=2,\ \gamma_2=9,\ l_2=4$.
Elements of the finite fields ${\mathbb P}{\mathbb F}_{p}$ are represented on the following grayscale: from $0$ (white) to $p-1$ (gray) and $\infty$ (black).}
\label{figure4}
\end{figure}
Next we consider the generalized form of the discrete KdV equation.
We introduce the following discrete integrable system:
\begin{equation}
\left\{
\begin{array}{cl}
x_n^{t+1}&=\dfrac{\left\{(1-\beta)+\beta x_n^ty_n^t\right\}y_n^t}{(1-\alpha)+\alpha x_n^ty_n^t}\vspace{2mm},\\
y_{n+1}^{t}&=\dfrac{\left\{(1-\alpha)+\alpha x_n^ty_n^t\right\}x_n^t}{(1-\beta)+\beta x_n^ty_n^t},
\end{array}
\right.
\label{YBdKdV}
\end{equation}
with arbitrary parameters $\alpha$ and $\beta$.
This is a natural and important generalization of the discrete KdV equation, partly because it becomes the generalized version of the BBS called `Box Ball Systems with a Carrier' (BBSC) through ultra-discretization. The parameter $\beta$ corresponds to the capacity of the box, and $\alpha$ to the capacity of the carrier. The equation \eqref{YBdKdV} is known to have soliton solutions whose speeds and widths are intuitively understood from the BBSC \cite{KMTJPSJ}.
We consider soliton solutions to the generalized discrete KdV equation \eqref{YBdKdV}.
Note that
by putting $u_n^t:=\alpha x_n^t,\ v_n^t:=\beta y_n^t$, we obtain
\[
\left\{
\begin{array}{cl}
u_n^{t+1}&=\dfrac{(\alpha(1-\beta)+u_n^tv_n^t)v_n^t}{\beta(1-\alpha)+u_n^t v_n^t},\vspace{2mm} \\
v_{n+1}^{t}&=\dfrac{(\beta(1-\alpha)+u_n^tv_n^t)u_n^t}{\alpha(1-\beta)+ u_n^t v_n^t}.
\end{array}
\right.
\]
Hence \eqref{YBdKdV} is
essentially equivalent to the `consistency of the discrete potential KdV equation around a $3$-cube' \cite{Tongasetal}: $(u,v) \to (u',v')$, as
\begin{eqnarray*}
u'=vP,\ v'=uP^{-1},\ P=\dfrac{a+uv}{b+uv}.
\label{YBMAP}
\end{eqnarray*}
The map is also obtained from the discrete BKP equation \cite{KakeiNimmoWillox}.
We will obtain $N$-soliton solutions to \eqref{YBdKdV} from the $N$-soliton solutions to the discrete KP equation
by a reduction similar to the one adopted in \cite{KakeiNimmoWillox}.
Let us consider the four-component discrete KP equation:
\begin{align}
&(a_1-b)\tau_{l_1t}\tau_n+(b-c)\tau_{l_1}\tau_{tn}+(c-a_1)\tau_{l_1n}\tau_t=0,
\label{eq1}\\
&(a_2-b)\tau_{l_2t}\tau_n+(b-c)\tau_{l_2}\tau_{tn}+(c-a_2)\tau_{l_2n}\tau_t=0.
\label{eq2}
\end{align}
Here $\tau=\tau(l_1,l_2,t,n)$ is the $\tau$-function of integer variables $(l_1,l_2,t,n) \in {\mathbb Z}^4$, and $a_1$, $a_2$, $b$ and $c$ are arbitrary parameters.
We express the shift operations by subscripts:
$\tau \equiv \tau(l_1,l_2,t,n),\ \tau_{l_1} \equiv \tau(l_1+1,l_2,t,n),\ \tau_{l_1t} \equiv \tau(l_1+1,l_2,t+1,n) \ $
and so on.
If we shift $l_1 \to l_1+1$ in \eqref{eq2}, we have
\begin{equation}
(a_2-b)\tau_{l_1l_2t}\tau_{l_1n}+(b-c)\tau_{l_1l_2}\tau_{l_1tn}+(c-a_2)\tau_{l_1l_2n}\tau_{l_1t}=0.\label{kptaushift}
\end{equation}
Then, by imposing the reduction condition:
\begin{equation}
\tau_{l_1l_2}=\tau,
\label{reduction}
\end{equation}
the equation \eqref{kptaushift} turns to
\[
(a_2-b)\tau_{t}\tau_{l_1n}+(b-c)\tau\tau_{l_1tn}+(c-a_2)\tau_{n}\tau_{l_1t}=0.
\]
Hence, putting $f:=\tau,\ g:=\tau_{l_1}$, we obtain
\begin{align*}
&(a_1-b)g_{t}f_n+(b-c)gf_{tn}+(c-a_1)g_{n}f_t=0,
\\
&(a_2-b)f_{t}g_{n}+(b-c)fg_{tn}+(c-a_2)f_{n}g_{t}=0,
\end{align*}
and
\begin{equation*}
\frac{fg_{tn}}{g f_{tn}}=\frac{(a_2-b)f_{t}g_{n}+(c-a_2) f_{n}g_{t}}{(a_1-b)g_{t}f_n+(c-a_1) g_{n}f_t}=\frac{(c-a_2)+(a_2-b)\frac{f_{t}g_{n}}{f_{n}g_{t}}}{(a_1-b)+(c-a_1) \frac{g_{n}f_t}{g_{t}f_n}}.
\end{equation*}
Now we denote
\begin{equation}
x_n^t:=\frac{f g_n}{g f_n},\qquad y_n^t:=\frac{g f_t}{f g_t}.
\label{xy}
\end{equation}
From the equality
\[
x_n^{t+1}y_{n+1}^t=x_n^ty_n^t=\frac{f_tg_n}{f_ng_t}, \quad
\frac{x_n^{t+1}}{y_n^t}=\frac{f g_{tn}}{g f_{tn}},
\]
we find that $x_n^t,\ y_n^t$ defined in \eqref{xy} satisfy the equation \eqref{YBdKdV} by defining $\alpha:=(c-a_1)/(c-b),\ \beta:=(a_2-b)/(c-b)$.
The $N$-soliton solution to \eqref{eq1} and \eqref{eq2} is known as
\begin{equation*}
\tau=\det_{1\le i,j \le N}\left[ \delta_{ij}+\frac{\gamma_i}{p_i-q_j}\left(\frac{q_i-a_1}{p_i-a_1 }\right)^{l_1}
\left(\frac{q_i-a_2}{p_i-a_2 }\right)^{l_2}\left(\frac{q_i-b}{p_i-b }\right)^{t}\left(\frac{q_i-c}{p_i-c }\right)^{n} \right],
\end{equation*}
where $\{p_i,q_i\}_{i=1}^N$ are mutually distinct parameters and $\{ \gamma_i \}_{i=1}^N$ are
arbitrary parameters \cite{DJKM}.
The reduction condition \eqref{reduction} gives the constraint,
\[
\left(\frac{a_1-p_i}{a_1-q_i}\right)\left(\frac{a_2-p_i}{a_2-q_i}\right)=1,
\]
to the parameters $\{p_i,\, q_i\}$.
Since $p_i \ne q_i$, the restriction is equivalent to $p_i+q_i=a_1+a_2$.
By rewriting $\dfrac{p_i-a_1}{c-b} \rightarrow p_i$,
$\dfrac{\gamma_i}{c-b} \rightarrow \gamma_i $, defining $\Delta := \dfrac{a_1-a_2}{c-b}$ and taking $l_1=l_2$ we have
\begin{align}
f&=\det_{1\le i,j \le N}\left[ \delta_{ij}+\frac{\gamma_i}{p_i+p_j+\Delta }\left(\frac{-p_i+\beta}{p_i+1-\alpha }\right)^{t}
\left(\frac{p_i+1-\beta}{-p_i+\alpha}\right)^{n}
\right], \label{fform}\\
g&=\det_{1\le i,j \le N}\left[ \delta_{ij}+\frac{\gamma_i}{p_i+p_j+\Delta }\frac{-\Delta -p_i}{p_i}\left(\frac{-p_i+\beta}{p_i+1-\alpha }\right)^{t}
\left(\frac{p_i+1-\beta}{-p_i+\alpha}\right)^{n}
\right]. \label{gform}
\end{align}
Thus we obtain the $N$-soliton solution of \eqref{YBdKdV} by \eqref{xy}, \eqref{fform} and \eqref{gform}.
Although the generalized discrete KdV equation has more than one parameter ($\alpha$ and $\beta$), we can apply the same approach of using the field of rational functions as in the case of \eqref{dKdV2}.
If we want to consider the equation at $\alpha=a, \beta=b$ $(a, b \in {\mathbb F}_q)$, then we substitute $\alpha=a+\epsilon,\ \beta=b+\epsilon$ using a new parameter $\epsilon$, which will be considered as a variable. Then we can construct soliton solutions in ${\mathbb F}_q(\epsilon)$ by a reduction for suitable values of
$\{p_i,\ \gamma_i\}$ and $\Delta$. The reduced solutions defined in ${\mathbb P}{\mathbb F}_q$ are obtained by putting $\epsilon=0$ and are expressed as $\tilde{f},\ \tilde{g},\ \tilde{x}_n^t$ and $\tilde{y}_n^t$.
Lastly, let us comment on the periodicity of the soliton solutions over ${\mathbb P}{\mathbb F}_q$.
We have
\begin{eqnarray*}
\tilde{f}(n+q-1,\,t)&=&\tilde{f}(n,\,t+q-1)=\tilde{f}(n,\,t),\\
\tilde{g}(n+q-1,\,t)&=&\tilde{g}(n,\,t+q-1)=\tilde{g}(n,\,t),
\end{eqnarray*}
for all $t,\,n\in{\mathbb Z}$ since we have $a^{q-1}\equiv 1$ for all $a \in {\mathbb F}^{\times}_q$.
Thus the functions $\tilde{f}$ and $\tilde{g}$ have periods $q-1$ over ${\mathbb F}_q$.
However we cannot conclude that $\tilde{x}_n^t$ and $\tilde{y}_n^t$ are also periodic with periods $q-1$, unlike the case in the discrete KdV equation.
The values of $\tilde{x}_n^t$ may not be periodic when $\tilde{f}(n,t)\tilde{g}(n+1,t)=0$ and $\tilde{g}(n,t)\tilde{f}(n+1,t)=0$ (See \eqref{xy}).
First we write $f(n,t)g(n+1,t)$ and $g(n,t)f(n+1,t)$ as follows:
\begin{eqnarray*}
f(n,t)g(n+1,t)&=&\epsilon^l k(\epsilon),\\
g(n,t)f(n+1,t)&=&\epsilon^m h(\epsilon),
\end{eqnarray*}
where $l,\ m\in{\mathbb Z},\ h(0)\neq 0,\ k(0)\neq 0$ and $k(\epsilon),\ h(\epsilon)\in {\mathbb F}_q[\epsilon]$.
We also write $f(n+q-1,t)g(n+q,t)=\epsilon^{l'} k'(\epsilon),\; g(n+q-1,t)f(n+q,t)=\epsilon^{m'} h'(\epsilon)$ in the same manner.
Let us write down the reduction map again:
\[
\tilde{x}_n^t=
\left\{
\begin{array} {cl}
k(0)/h(0) & (l=m),\vspace{1mm}\\
0 & (l>m),\\
\infty & (l<m).
\end{array}
\right.
\]
In the case when $\tilde{f}(n,t)\tilde{g}(n+1,t)=0$ and $\tilde{g}(n,t)\tilde{f}(n+1,t)=0$, $x_n^t=\dfrac{f(n,t)g(n+1,t)}{g(n,t)f(n+1,t)}\in{\mathbb F}_q(\epsilon)$ and $x_n^{t+q-1}=\dfrac{f(n+q-1,t)g(n+q,t)}{g(n+q-1,t)f(n+q,t)}\in{\mathbb F}_q(\epsilon)$ may have different reductions with respect to $\epsilon$, since $l'$ is not necessarily equal to $l$, and neither is $m'$ equal to $m$.
The left part of figure \ref{figure8} shows a gray-tone plot of a two-soliton solution. At some points $\tilde{x}_n^t$ does not have period $12$ (for example $\tilde{x}_2^2\neq \tilde{x}_2^{14}$), while almost all other points do have this periodicity.
\begin{figure}
\centering
\includegraphics[width=12cm, bb=50 430 550 740]{figure10.eps}
\caption{The two-soliton solution of the generalized discrete KdV equation \eqref{YBdKdV} over ${\mathbb P}{\mathbb F}_{13}$ calculated in two different ways (left: $\tilde{x}_n^t$, right: $\hat{x}_n^t$), where $\alpha=14/15,\,\beta=5/6,\, r_1=-1/6,\,l_1=2/15,\, r_2=-1/30,\, l_2=1/30$. Elements of ${\mathbb P}{\mathbb F}_{13}$ are represented on the grayscale: from $0$ (white) to $12$ (gray) and $\infty$ (black).}
\label{figure8}
\end{figure}
If we want to recover full periodicity, we can reduce the variables $x_n^t$ and $y_n^t$ in a different manner.
We define this second reduction $\hat{x}_n^t$ to the finite field as
\[
\hat{x}_n^t=
\left\{
\begin{array} {cl}
k(0)/h(0) & (l=0,\ m=0),\vspace{1mm}\\
0 & (\mbox{otherwise}).
\end{array}
\right.
\]
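Stated algorithmically, both reductions amount to comparing the $\epsilon$-adic valuations $l$ and $m$. A minimal Python sketch of the two maps, restricted to $q=p$ prime for simplicity (a polynomial in $\epsilon$ over ${\mathbb F}_p$ is held as a list of coefficients, and \texttt{None} encodes $\infty$):
\begin{verbatim}
def lowest_term(coeffs, p):
    # Return (valuation, lowest nonzero coefficient) of
    # sum coeffs[i]*eps^i over F_p.
    for i, c in enumerate(coeffs):
        if c % p != 0:
            return i, c % p
    raise ValueError("zero polynomial")

def reduce_tilde(num, den, p):
    # First reduction: k(0)/h(0) if l == m, 0 if l > m, infinity if l < m.
    l, k0 = lowest_term(num, p)
    m, h0 = lowest_term(den, p)
    if l == m:
        return (k0 * pow(h0, p - 2, p)) % p  # division in F_p, p prime
    return 0 if l > m else None

def reduce_hat(num, den, p):
    # Second reduction: k(0)/h(0) only when l = m = 0, and 0 otherwise.
    l, k0 = lowest_term(num, p)
    m, h0 = lowest_term(den, p)
    return (k0 * pow(h0, p - 2, p)) % p if l == 0 and m == 0 else 0
\end{verbatim}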
The right part of figure \ref{figure8} shows the same two-soliton solution as in the left part, but calculated with this new method. We see that all points have period $12$.
It is important to determine how to reduce values in ${\mathbb P}{\mathbb F}_q(\epsilon)$ to values in ${\mathbb P}{\mathbb F}_q$, depending on the properties one wishes the soliton solutions to possess.
\section{Discrete KdV equation over the field of $p$-adic numbers}\label{padicdkdv}
Instead of dealing with the systems over the field of rational functions, we can consider them over the field of $p$-adic numbers, just as we did for the discrete Painlev\'{e} equations.
The calculation of soliton solutions and its reduction to the finite field can be done exactly the same as in section \ref{section51}.
We can define the time evolution of the discrete KdV equation over the field of $p$-adic numbers ${\mathbb Q}_p$, and then obtain the time evolution of the equation over ${\mathbb P}{\mathbb F}_p$ by reducing it.
One of the good things about this approach is its efficiency in numerical calculations. One weakness is that we have to limit ourselves to $m=1$ in $q=p^m$. (This is not a problem if we consider extensions of the field ${\mathbb Q}_p$.)
The example with the same initial conditions as in figure \ref{figure1} is presented in figure \ref{figure9}.
\begin{figure}
\centering
\includegraphics[width=12cm, bb=50 350 540 750]{figure11.eps}
\caption{The time evolution of the discrete KdV equation over the field ${\mathbb Q}_7$ and its reduction to ${\mathbb P}{\mathbb F}_7$.}
\label{figure9}
\end{figure}
The values $x_2^2=0$ and $x_3^1=\infty$ in figure \ref{figure9} are different from those ($x_2^2=4,\ x_3^1=5$) reduced from the field ${\mathbb F}_p(\delta)$ in figure \ref{figure1}. The two systems do not present the same singularities for the same initial conditions in general, because of the structural difference in the addition between the fields ${\mathbb Q}_p$ and ${\mathbb F}_p(\delta)$. However, the overall appearance of the soliton solutions is unchanged.
We add some examples of the soliton solutions we have missed in section \ref{sectiondkdvsoliton}.
We describe the behavior of solutions of the discrete KdV equation \eqref{dkdvsoliton}. We consider the $p$-adic valuations of the parameter values $X:=(1-l_i)/l_i$ and $Y:=(l_i+\delta)/(1+\delta-l_i)$.
First, note that if we take $X,Y$ satisfying $v_p(X)=0$ and $v_p(Y)=0$, then the soliton solutions of the discrete KdV equation \eqref{dKdV2} over ${\mathbb P}{\mathbb F}_p$ are always periodic with respect to $t$ and $n$ with period $p-1$, just like in figure \ref{figure4}.
Second, if we have $v_p(X)\neq 0$ or $v_p(Y)\neq 0$, then at least one of the reductions of $X,Y$ by the reduction map \eqref{padicreductionmap} is either $0$ or $\infty$. We have two cases:
(i) If $v_p(X)\neq 0$ and $v_p(Y)\neq 0$, then the soliton solution over ${\mathbb P} \mathbb{F}_p$ looks similar to that over the field $\mathbb{R}$.
The solitary waves which include the value $\infty$ move both to the left and to the right, over the background arrays of $1$'s. We introduce two examples where $p=2$ and $p=3$ respectively.
The first example concerns the $2$-soliton solution over ${\mathbb P}{\mathbb F}_2$ of the form $x_n^t=(\sigma_n^t \sigma_{n+1}^{t-1})/(\sigma_{n+1}^t \sigma_n^{t-1})$, where
\[
\sigma_n^t=\det\left(
\begin{array}{cl}
1+\frac{1}{9}\left(-\frac{4}{5}\right)^t \left(-\frac{7}{2}\right)^n & \frac{1}{10}\left(-\frac{4}{5}\right)^t \left(-\frac{8}{3}\right)^n\\
\frac{1}{10}\left(-\frac{5}{6}\right)^t \left(-\frac{7}{2}\right)^n & 1+\frac{1}{11}\left(-\frac{5}{6}\right)^t \left(-\frac{8}{3}\right)^n
\end{array}
\right).
\]
Note that all values concerning the speed of solitons ($v_2(-4/5)$, $v_2(-7/2)$, $v_2(-5/6)$, $v_2(-8/3)$) are non-zero.
The second one is the $2$-soliton solution over ${\mathbb P}{\mathbb F}_3$, written as
\[
\sigma_n^t=\det\left(
\begin{array}{cl}
1+\frac{1}{11}\left(-\frac{5}{6}\right)^t \left(-\frac{8}{3}\right)^n & \frac{1}{18}\left(-\frac{5}{6}\right)^t \left(-\frac{3}{2}\right)^n\\
\frac{1}{18}\left(-\frac{12}{13}\right)^t \left(-\frac{8}{3}\right)^n & 1+\frac{1}{25}\left(-\frac{12}{13}\right)^t \left(-\frac{3}{2}\right)^n
\end{array}
\right).
\]
See the figures \ref{figure11} and \ref{figure12} for their solitonic shapes.
\begin{figure}
\centering
\includegraphics[width=12cm, bb=70 350 550 750]{figure12.eps}
\caption{The plot of the $2$-soliton solution $\tilde{x}_n^t \in {\mathbb P}{\mathbb F}_2$ of the discrete KdV equation, where $\delta=2$, $\gamma_1=\gamma_2=1$ and $l_1=5,l_2=6$. The dot `.' denotes that $\tilde{x}_n^t=1$. The range is $-9\le t,n\le 9$.}
\label{figure11}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=12cm, bb=70 350 550 750]{figure13.eps}
\caption{The plot of the $2$-soliton solution $\tilde{x}_n^t \in {\mathbb P}{\mathbb F}_3$ of the discrete KdV equation,
where $\delta=2$, $\gamma_1=\gamma_2=1$ and $l_1=6,l_2=13$. The dot `.' denotes that $\tilde{x}_n^t=1$.
The range is $-9\le t,n\le 9$.}
\label{figure12}
\end{figure}
(ii) If just one of the values $v_p(X)$ and $v_p(Y)$ is zero, the speed of a solitary wave is either $0$ or $\infty$. In figure \ref{figure13}, we present an example of a $2$-soliton solution over ${\mathbb P}{\mathbb F}_3$ in which the speeds of the solitons are $0$ and $\infty$, respectively.
\begin{figure}
\centering
\includegraphics[width=9cm, bb=60 460 450 740]{figure14.eps}
\caption{The plot of the $2$-soliton solution $\tilde{x}_n^t \in {\mathbb P}{\mathbb F}_3$ of the discrete KdV equation,
where $\delta=1$, $\gamma_1=\gamma_2=1$ and $l_1=3,l_2=8$. The dot `.' denotes $\tilde{x}_n^t=1$.
The range is $-6\le t,n\le 6$.}
\label{figure13}
\end{figure}
We can prove that the case (i) occurs if and only if $\tilde{\delta}=0,-1$.
\begin{Proposition}
We obtain $v_p(X)\neq 0$ and $v_p(Y)\neq 0$ for some $l_i$
if and only if the parameter $\delta\in\mathbb{Q}_p$ satisfies $\pi(\delta)=0$ or $\pi(\delta)=-1$.
The solitary waves over $\mathbb{P}\mathbb{F}_p$ go to the right if $\pi(\delta)=0$, and to the left if $\pi(\delta)=-1$.
\end{Proposition}
\textbf{Proof}\;\;
Let us rewrite $l_i\to l$.
We have $v_p(X)\neq 0$ if and only if
\begin{equation}
v_p(l)=0\ \mbox{and}\ \pi(l)=1,\label{Xcase1}
\end{equation}
or
\begin{equation}
v_p(l)>0 \label{Xcase2}.
\end{equation}
In the case of \eqref{Xcase1},
\[
\pi(Y)=\left\{
\begin{array}{cl}
\infty & (v_p(\delta)>0),\\
(1+\pi(\delta))/ \pi(\delta) & (v_p(\delta)=0),\\
1 & (v_p(\delta)<0).
\end{array}
\right.
\]
Therefore, $v_p(Y)\neq 0$ if and only if ($v_p(\delta)=0$ and $\pi(\delta)=-1$) or $v_p(\delta)>0$.
In the case of \eqref{Xcase2},
\[
\pi(Y)=\left\{
\begin{array}{cl}
0 & (v_p(\delta)>0),\\
\pi(\delta) /(1+\pi(\delta)) & (v_p(\delta)=0),\\
1 & (v_p(\delta)<0).
\end{array}
\right.
\]
Therefore, $v_p(Y)\neq 0$ if and only if ($v_p(\delta)=0$ and $\pi(\delta)=-1$) or $v_p(\delta)>0$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
Note that if $\pi(\delta)=0$ or $-1$, the discrete KdV equation \eqref{dKdV1} is reduced to the linear difference equations
\begin{equation}
x_{n+1}^{t+1}\equiv x_n^t,\ \mbox{or},\ x_n^{t+1}\equiv x_{n+1}^t, \label{reduced1}
\end{equation}
respectively. These equations have trivial waves with speed $\pm 1$, and do not have soliton solutions.
What we are observing in this section is the reduction of the soliton solution of the discrete KdV equation \eqref{dKdV1} over $\mathbb{Q}_p$, \textit{not} the solutions of the `reduced' discrete KdV equations \eqref{reduced1}. Through our methods, we successfully extract the solitonic structure of solutions over the finite field.
\subsection{Relation to the cellular automata}
In this section, we study how the discrete KdV equation over $\mathbb{Q}_p$ is related to the Box and Ball System (BBS) by taking the $p$-adic valuations. The BBS is a famous cellular automaton obtained by taking the ultra-discrete limit of the discrete KdV equation (section \ref{udintegrable}).
Let us fix $\delta=p^m,(m>0)$ for the discrete KdV equation \eqref{dKdV2}.
We define the new system $\{ \hat{x}_n^t\}$ from $\{x_n^t \}$ as
\[
\hat{x}_n^t:=-\mbox{Round} \left( \frac{v_p (x_n^t)}{m}\right),
\]
where Round$(x)\in\mathbb{Z}$ denotes the closest integer to $x\in\mathbb{R}$.
Then the system $\hat{x}_n^t$ tends to the time evolution of the BBS in the limit $m\to\infty$ or $p\to\infty$.
Note that if $x_n^t=p^{-m}$ then we have $\hat{x}_n^t=1$, and that if $x_n^t=1$ we have $\hat{x}_n^t=0$.
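Computing $\hat{x}_n^t$ from a rational value $x_n^t$ only requires the $p$-adic valuation of a fraction; a small Python sketch follows (the evolution of $x_n^t$ itself under \eqref{dKdV2} is not reproduced here):
\begin{verbatim}
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational x (a fractions.Fraction).
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def x_hat(x, p, m):
    # hat{x}_n^t = -Round(v_p(x_n^t)/m); round picks the nearest integer.
    return -round(vp(x, p) / m)

# Sanity checks from the text: x = p^{-m} gives 1, and x = 1 gives 0.
assert x_hat(Fraction(1, 2 ** 10), 2, 10) == 1
assert x_hat(Fraction(1), 2, 10) == 0
\end{verbatim}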
Here is an example where $p=2$ and $m=10$. We start from the initial condition of the equation \eqref{dKdV2}:
\[
\{x_n^0\}=\{\cdots,1,1,p^{-m},p^{-m},p^{-m},1,1,1,p^{-m},p^{-m},1,1,1,p^{-m},1,1,\cdots\}.
\]
Then the evolution of $\hat{x}_n^t$ is as in figure \ref{figure15}.
\begin{figure}
\centering
\includegraphics[width=12cm, bb=70 485 530 740]{figure15.eps}
\caption{Plot of the `roundoff' $p$-adic valuations $\hat{x}_n^t$ of the discrete KdV equation \eqref{dKdV2}. Here the dot `.' represents the point where $\hat{x}_n^t=0$.}
\label{figure15}
\end{figure}
The time evolution obtained here is almost the same as the three soliton interaction of BBS in figure \ref{figure2}.
The block of $p^{-m}$'s in the initial condition of \eqref{dKdV2} is a $p$-adic analog of a BBS soliton (array of $1$'s) in the system $\hat{x}_n^t$.
The underlying fact is that the ultra-discrete limit is a super-exponential estimate, whose $p$-adic analog is taking $p$-adic valuations of the variables.
\section*{Concluding remarks and future problems}
We studied the discrete dynamical systems over finite fields. We investigated how to define them without indeterminacies, how to judge their integrability by a simple test similar to the singularity confinement method, and how to obtain the special solutions of them, in particular the solitary wave solutions.
In the first part of the paper, we constructed the space of initial conditions for discrete Painlev\'{e} equations defined over a finite field via the application of the Sakai theory.
In particular, we defined the time evolution graph for the discrete Painlev\'{e} II equation over finite field ${\mathbb F}_{q}$.
We have found out that, in the case of systems over a finite field, the space of initial conditions can be made smaller than the one obtained through the Sakai theory, because of the discrete topology of the space.
The second part concerns the extension of the value spaces to local fields, in particular, to the field of $p$-adic numbers.
Our idea is to define the equations over the field of $p$-adic numbers $\mathbb{Q}_p$ and then reduce them to the finite field ${\mathbb F}_p$.
We generalized the notion of good reduction so that it can be applied to integrable mappings, in particular to the discrete Painlev\'{e} equations. We called this generalized notion an `almost good reduction' (AGR). It has been proved that AGR is satisfied for the discrete Painlev\'{e} II equation and for the $q$-discrete Painlev\'{e} I, II, III, IV and V equations. We also found that AGR was satisfied for the Hietarinta-Viallet equation, and hence that it was an integrability detector which worked as an arithmetic analog of the singularity confinement test.
In the third part, we applied our methods to the two-dimensional lattice systems, in particular, to the discrete KdV equation and its generalized equation.
We obtained the solitary wave solutions defined over finite fields and showed that they have periods $q-1$ in generic cases. Other special solitary wave solutions which only appear over finite fields have also been presented and their properties have been studied.
One of the future problems is to construct a theory to solve the initial value problems over the non-archimedean valued fields. We also wish to study further the properties of the reduction modulo a prime of the higher dimensional lattice integrable equations.
In this paper, we have not dealt with the theory of continuous integrable equations over the field of $p$-adic numbers and over the finite fields. For example, $p$-adic soliton theory has been investigated by G. W. Anderson \cite{Anderson}. The continuous Painlev\'{e} equations over finite fields have been studied in terms of their symmetric solutions by K. Okamoto and S. Okumura. We also wish to study the relation of our methods to these approaches.
\section*{Acknowledgments}
The author would particularly like to thank his advisor, Professor Tetsuji Tokihiro for
generous support and advice throughout his Ph.D course studies.
He has greatly benefited from Professors Jun Mada and K. M. Tamizhmani who collaborated in the research and jointly published several papers. He would like to thank Professor Ralph Willox
for carefully reading the papers and making insightful suggestions.
He would like to thank Professors Shinsuke Iwao, Nalini Joshi, Saburo Kakei, Shigeo Kusuoka, Kenichi Maruno, Yousuke Ohyama, Hidetaka Sakai, Junkichi Satsuma, Junichi Shiraishi, for valuable discussions and comments.
This work is partially supported by Grant-in-Aid for JSPS Fellows 24-1379.
\section{Introduction}
Decision making is a ubiquitous task that is not done in a vacuum. Our decisions are constrained by our own preferences, by our social network, by the context, by the environment and so on. Moreover, we are surrounded by little nudges: indirect suggestions that are generated by some external agent to influence the decision making process of groups or individuals \cite{nudge2008}.
An example is the so-called ``Dark Patterns'', defined as ``user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions'' \cite{Mathur2019DarkPatterns}. In that paper,
the authors make recommendations to study, mitigate, and minimize the use of these patterns.
These and other nudging techniques have gained attention in recent years \cite{WeinmannSchneiderBrocke2015, SchneiderWeinmannBrocke2018}.
But nudges can also be issued by more sophisticated systems which, through the use of proper data collection, modeling and learning, are able to exploit our ``history'' and preferences, trying to induce us to take certain decisions. In what follows, we will refer to these systems as ADM: Automated Decision Making systems.
Nowadays, there is an increasing concern about how such ADM are designed and deployed, and several countries, research centers and institutions are devoting efforts to best address this concern.
As relevant examples we can cite a white paper (written by Informatics
Europe and the ACM Europe Policy Committee), presenting specific
recommendations from the European technical and scientific community about how policy makers, legislators, and concerned individuals might best respond to the rapid growth of ADM \cite{LarusHankinCarsonEtAl2018}.
We can also mention the European approach to Artificial Intelligence,
where there is a High-Level Expert Group on Artificial Intelligence (AI-HLEG) \cite{HLE2018} that recently (June 2019) presented its Policy and Investment Recommendations for Trustworthy AI \cite{Policy2019} during the first European AI Alliance Assembly\footnote{\url{https://ec.europa.eu/digital-single-market/en/news/first-european-ai-alliance-assembly}}.
Besides these initiatives, the interested reader may find in \cite{Dutton2019} a summary of 26 strategies for A.I. from different countries worldwide.
Overall, many of the principles, recommendations and guidelines can be summarized in four key issues: Fairness, Accountability, Transparency, and Ethics.
If we focus on the people using those ADM, the concept of ``trust'' emerges as one of the most relevant ones.
The European Policy mentioned before makes clear that trust is a prerequisite to ensure a human-centric approach to AI and identify seven key requirements that AI applications should respect to be considered trustworthy.
Why should a user trust the output (a decision, a recommendation) of such an ADM? How do good/bad decisions/recommendations affect the level of trust in the behavior of the system? How do aspects like data collection, privacy management, strategic manipulation, nudging and so on affect trust? These are all relevant questions.
In this context, one may argue that as the ADM start to behave following the key issues mentioned above, the trust of people in the system will only be affected by the system's output, as other aspects (like privacy management) will be properly managed by some external certification authority.
It is well known that defining trust is far from trivial, as different disciplines define it differently. Here, considering the so-called recommender systems \cite{Jugovac2017,VILLEGAS2018173,LU20121} as a particular case of an ADM, we
define trust as the willingness of the user to accept a recommendation based on a subjective belief that the recommender tool will exhibit reliable behavior to maximize the user's interest under uncertainty of a given situation, based on past experiences with the tool. Our definition resembles the one presented in \cite{Cho2015}.
Thus, the aim of this paper is to study the dynamics of trust in a multi-user scenario with an ideal recommender system.
We depart from a fair and unbiased idealized recommendation tool which issues binary suggestions \textsl{on whether or not to use a resource with limited capacity}. Typical examples are take/do not take a given route, go/do not go to a restaurant or a bar, and so on.
The use of a limited capacity resource forces the use of different recommendations even for users with the same profile.
Consider, for example, route navigation apps. If all the drivers are recommended an alternative route to avoid a traffic jam ahead, then, the alternative route will be also congested within a short period of time and can generate disturbances in the neighborhoods where the traffic was diverted. So, it becomes clear that not all the users should receive the same recommendation (this is an aspect related with ``fairness'').
In our model, users simultaneously receive a recommendation on going or not going to a bar. The users decide at the same time whether they will go to the bar or not. As the bar has a limited capacity, it's no fun to go there if it is too crowded.
Every user has a level of trust in the recommender that is increased or decreased according to whether the recommendation was good or bad.
The amount of increase/decrease in the level of trust is the key to modeling different user attitudes\footnote{
This model resembles the ``El Farol'' bar problem \cite{ElFarol94}, a typical example of the so called Minority Games \cite{MinorityGames2013}. Here, we eliminate some assumptions like the existence of payoffs (in terms of game theory), a history of bar attendances and the use of several prediction strategies by the users. In turn, all the users employ the same trust based decision rule, while the recommender system (not present in the original problem) uses a very simple recommendation strategy.}.
We will explore how good/bad recommendations may affect the trust in the recommender, considering different user attitudes towards recommendation errors: tolerant, neutral or intolerant (in a continuum and not as discrete categories).
In situations of repeated interactions, we will analyze how the overall trust in the recommender evolves and how the users' attitude significantly affects the results.
Experiments are carried out using simulations, and conclusions are outlined.
Consequently, the paper is structured as follows.
In Section \ref{model} the components and the inner workings of the proposed model are described.
Then, in Section \ref{experiments} the main experiments and results are described and analyzed. They are related to a) the evolution of trust in the recommendations, and b) the influence of the user attitude on trust dynamics.
Finally, Section \ref{conclus} is devoted to conclusions and further work.
\section{Model Description\label{model}}
The proposed model is based on three components: a resource, the users and the recommender.
These components are described below; then, the interactions among them are presented.
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.8\linewidth]{./esquema2} \\
\includegraphics[width=0.8\linewidth]{./ejemplo5iters} \\
\end{tabular}
\caption{Top: Graphical representation of the model. Bottom: Example showing five iterations of the model with $N=5$ users and $L=3$. Recommendation, decision and level of trust for every user are shown. Also the attendance $O^t$ and the average trust $\bar{\alpha}^t$ are displayed in the last two columns.}
\label{fig:esquema}
\end{figure*}
\subsection*{a) The Resource}
We depart from a resource (let's suppose a bar) with a limited capacity $C$ and a ``comfort level'' $L$ (as in the El Farol problem \cite{ElFarol94}), which is the maximum number of users that makes the place not crowded.
We assume that a number $N > L$ of users exist (not all the users can simultaneously go to the bar).
The attendance $O^t$ is the number of users that decided to go to the bar at time $t$.
\subsection*{b) Users}
We have a set of homogeneous users $A = \{a_1,a_2, \ldots, a_N\}$, where
every $a_i$ has:
\begin{itemize}
\item $\alpha_i^t \in [0,1]$: level of trust on the recommendation at time $t$.
\item $recom_i^t, dec_i^t \in \{GO, STAY\}$: recommendation received and the decision taken at time $t$, respectively.
\end{itemize}
\noindent
\textbf{Decision Rule}: the user will accept the recommendation ($dec_i^t = recom_i^t$) with probability $\alpha_i^t$.
If the user rejects the recommendation, it will do the other action.
Notice that when $\alpha_i^t = 1$, the user will always accept the recommendation, but when $\alpha_i^t = 0$, it will do the opposite of the recommendation.\\
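A minimal Python sketch of this rule (the names and the uniform random draw are our own):
\begin{verbatim}
import random

GO, STAY = "GO", "STAY"

def decide(recom, alpha, rng=random):
    # Accept the recommendation with probability alpha;
    # otherwise do the opposite action.
    if rng.random() < alpha:
        return recom
    return STAY if recom == GO else GO
\end{verbatim}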
\noindent
\textbf{Trust Revision Protocol:}
every user has a protocol to modify its level of trust $\alpha_i^{t+1}$ in the recommender in terms of the last recommendation received ($recom_i^t$) and the last attendance to the bar ($O^t$).
The users have two parameters $\beta, \gamma$ called the positive and negative feedback (or learning factor) respectively.
In this initial setting, all users are considered homogeneous (they share the same decision rule and the trust revision protocol).\\
\noindent
The trust will increase, making $\alpha_i^{t+1} = \alpha_i^t + \beta$ if the recommendation was ``good''. This will happen either when:
\begin{enumerate}
\item the tool recommended to GO and the bar was not crowded ($recom_i^t = GO$ and $ O^t \leq L$), or
\item the tool recommended to STAY and the bar was crowded ($recom_i^t = STAY$ and $ O^t > L$)
\end{enumerate}
\noindent
In turn, the trust will decrease, making $\alpha_i^{t+1} = \alpha_i^t - \gamma$ if the recommendation was ``bad''. Either when:
\begin{enumerate}
\item the tool recommended to GO and the bar was crowded ($recom_i^t = GO$ and $ O^t > L$), or
\item the tool recommended to STAY and the bar was not crowded ($recom_i^t = STAY$ and $ O^t \leq L$)
\end{enumerate}
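In code, the protocol is a one-line update plus clipping; keeping $\alpha_i^{t+1}$ inside $[0,1]$ is implied by the definition of $\alpha_i^t$ and by the worked example of figure \ref{fig:esquema}. The sketch below reuses the \texttt{GO}/\texttt{STAY} constants introduced above:
\begin{verbatim}
def update_trust(alpha, recom, attendance, L, beta, gamma):
    # A recommendation was good if it said GO and the bar was not crowded,
    # or if it said STAY and the bar was crowded.
    good = ((recom == GO and attendance <= L) or
            (recom == STAY and attendance > L))
    alpha = alpha + beta if good else alpha - gamma
    return min(1.0, max(0.0, alpha))  # trust stays in [0, 1]
\end{verbatim}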
The use of different values for $\beta, \gamma$ has two reasons: 1) trust can be gained or lost at different rates, 2) the relation between both parameters allows to represent different user attitudes, leading to:
\begin{itemize}
\item \textit{Neutral User}: $\gamma = \beta$, the same feedback is added/substracted to the current level of trust.
\item \textit{Tolerant User}: $\gamma < \beta$, the loss of trust occurs more slowly than the gain of trust, which means that the agent is tolerant to recommendation errors.
\item \textit{Intolerant User}: $\gamma > \beta$: the agent penalizes recommendation errors. For example, when $\gamma = 2 \times \beta$, an error in the recommendation has twice the impact on the level of trust of a good recommendation. In other words, after a bad recommendation, the user will need two good ones to recover the original level of trust.
\end{itemize}
\subsection*{c) The Recommender}
The recommender knows the set of users but does not have access to their internal levels of trust (the $\alpha_i^t$ values).
Just the last decision taken by every user $a_i$ (the value $dec_i^t$) is available to the recommender.
Given that the users are homogeneous, a profile-based recommendation would not be possible (remember that the bar has a limited capacity). So, as a starting point, the recommender uses a very simple rule for the assignment of recommendations:
\begin{itemize}
\item randomly select a set $G$ of $L$ users.
\item every user $a_i \in G$ receives a $GO$ recommendation
\item every user $a_i \notin G$ receives a $STAY$ recommendation
\end{itemize}
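A sketch of this assignment rule (again with our own naming):
\begin{verbatim}
import random

def recommend(n_users, L, rng=random):
    # L users drawn uniformly at random receive GO;
    # all others receive STAY.
    go_set = set(rng.sample(range(n_users), L))
    return [GO if i in go_set else STAY for i in range(n_users)]
\end{verbatim}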
This recommender would be an ideal one from the user perspective: it has no room for manipulation, its behavior is clearly unbiased, it does not have access to the user's private information, it does not store any users' historical data and so on.
\subsection*{Working scheme\label{scheme}}
The elements of the model are depicted in Fig. \ref{fig:esquema} (top).
\noindent
At every time step $t$, there are three stages.
\begin{enumerate}
\item \textit{Recommendation Stage}: the recommender sends a recommendation $rec_i^t$ to every user $a_i$.
\item \textit{Decision Stage}: using the decision rule described previously, every user takes a decision $dec_i^t$. As expected, the recommendation can be followed or not.
\item \textit{Update Statistics Stage}: taking into account the users decisions, some measures are calculated (see below) and informed to the users. Then they adapt their levels of trust using the revision protocol described previously.
\end{enumerate}
At every time step $t$ the following measures are calculated.
\begin{itemize}
\item Attendance $O^t$: number of users that decided to go to the bar.
\item Average trust on the recommendations:
\begin{equation*}
\bar{\alpha}^t = \frac{1}{N}\sum_{i=1}^N \alpha_i^t
\end{equation*}
\end{itemize}
Please note that the value $\bar{\alpha}^t$ is just informative and affects neither the decision of the users nor the way recommendations are issued.
Let's suppose $N=5, L=3, \beta = \gamma = 0.05$ (neutral users). An example with five iterations is displayed in Fig. \ref{fig:esquema} (bottom).
For each user, three values are shown: $(rec_i, dec_i) \:\: \alpha_i$. We use the value `G' in $rec_i^t$ or $dec_i^t$ to denote a GO recommendation or decision while `S' states a STAY one.
Then, the attendance $O^t$ and the average trust $\bar{\alpha}^t$ appear.
Consider user $a_1$ when $t=1$ (first row). The recommendation was GO but the user decided to STAY. As the bar was not crowded ($O^t = 2$), the recommendation was good so the level of trust of $a_1$ is increased.
In turn, consider user $a_3$.
The recommendation was STAY but the user decided to GO. As the bar was not crowded ($O^t = 2$), the recommendation was bad. Trust should be decreased, but as it cannot be lower than zero, it stays at the minimum possible value.
We can also consider the dynamic behavior of every user. If we focus on user $a_5$, we observe that for $t=1,2,3$ it received a GO recommendation that it did not follow. In those time steps, the bar was not crowded, so the level of trust of the user was increased.
When $t=4$, the recommendation was STAY but it decided to GO. The bar was not crowded, so the trust was reduced.
A similar analysis can be done for the rest of users.
In this example, the average trust increased in every iteration.
It is important to remark that, as the recommender has no access to the (private) level of trust of the users, it cannot
broadcast any sort of average trust to them. Communication among the users is not possible either. Both aspects, although important, would add additional features to the model that might affect the analysis of the trust dynamics.
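For concreteness, the three stages can be combined into one simulation loop. The sketch below reuses the functions introduced above; the initial trust level of $0.5$ is our own assumption, since the model description does not fix it:
\begin{verbatim}
import random

def simulate(N=5, L=3, beta=0.05, gamma=0.05, steps=250, seed=0):
    # One run of the model; returns the trajectory of the average trust.
    rng = random.Random(seed)
    alphas = [0.5] * N  # initial level of trust (assumption)
    history = []
    for _ in range(steps):
        recs = recommend(N, L, rng)                    # recommendation stage
        decs = [decide(r, a, rng)                      # decision stage
                for r, a in zip(recs, alphas)]
        attendance = sum(d == GO for d in decs)        # update statistics
        alphas = [update_trust(a, r, attendance, L, beta, gamma)
                  for a, r in zip(alphas, recs)]
        history.append(sum(alphas) / N)
    return history
\end{verbatim}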
\section{Experiments\label{experiments}}
Two experiments are conducted. The first one is aimed at understanding trust dynamics (how the average trust changes with time), while the second one focuses on how trust dynamics are affected by the user attitudes with respect to recommendation errors.
\subsection{The evolution of the average level of trust \label{exp1}}
This experiment is aimed at understanding how trust changes with time. Some preliminary experiments showed us that the average level of trust converges to 1, so here we pose the following questions:
\begin{enumerate}
\item \textit{Is there any $t^*$ which makes the individual values $\alpha_i^{t^*}, \: \forall \: i \in [1 \ldots N]$ converge?}
\item \textit{In such a case, is the convergence value the same for all the users?}
\item \textit{Does the number of users have any implications on the results?}
\end{enumerate}
For different values of $N$, with $L = 0.6 \times N$, $\gamma = \beta = 0.05$ and a maximum of 250 iterations, we ran 100 independent repetitions of the simulation.
Results are shown in Table \ref{tab:time2eq}, where for each value of $N$, the number of repetitions that converged ($runs$), out of 100, and the average number of iterations done to converge ($I2C$) are displayed.
The first element to highlight is that all the repetitions converged, and the average level of trust reached the value 1. In other words, always $\exists \:t^* \:|\: \alpha_i^{t^*} = 1, \:\: \forall i \in [1 \ldots N]$, where $t^*$ is the time (or iterations) to convergence.
This is extremely relevant because when such situation occurs, all the users will accept the recommendation, which means that (from the point of view of the recommender and the resource usage) the problem became an assignment problem instead of a recommendation one.
Another point to analyze is the relation between the number of users $N$ and the average number of iterations to converge $I2C$. The plot in Fig. \ref{fig:NvsIters} shows this relation, which is very well fitted by a power law with $y = 25.586 x^{0.3021},\: R^2 = 0.9977$.
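This fit can be reproduced by ordinary least squares in log-log space over the values of Table \ref{tab:time2eq}; for instance:
\begin{verbatim}
import numpy as np

N_vals = np.array([20, 40, 60, 80, 100, 200, 300, 400, 500,
                   600, 700, 800, 900, 1000])
I2C = np.array([64.61, 78.29, 88.96, 95.00, 101.03, 123.85, 138.81,
                159.41, 164.58, 181.74, 188.29, 192.70, 198.71, 208.01])
# Fit log(I2C) = log(a) + b*log(N); the result is close to the
# reported a = 25.586, b = 0.3021.
b, log_a = np.polyfit(np.log(N_vals), np.log(I2C), 1)
print(np.exp(log_a), b)
\end{verbatim}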
\begin{table}
\centering
\begin{tabular}{r r c r r}
\hline
N & L & $runs$ & $I2C$ & stdDev\\
\hline
20 & 12 & 100 & 64.61 & 8.08 \\
40 & 24 & 100 & 78.29 & 10.04 \\
60 & 36 & 100 & 88.96 & 11.89 \\
80 & 48 & 100 & 95.00 & 12.68 \\
100 & 60 & 100 & 101.03 & 13.60 \\
200 & 120 & 100 & 123.85 & 19.50 \\
300 & 180 & 100 & 138.81 & 19.05 \\
400 & 240 & 100 & 159.41 & 20.07 \\
500 & 300 & 100 & 164.58 & 26.72 \\
600 & 360 & 100 & 181.74 & 30.34 \\
700 & 420 & 100 & 188.29 & 32.58 \\
800 & 480 & 100 & 192.70 & 27.86 \\
900 & 540 & 100 & 198.71 & 33.95 \\
1000 & 600 & 100 & 208.01 & 39.90 \\
\hline
\end{tabular}
\caption{Results from 100 repetitions for different number of users. All the repetitions converged. The average number of iterations to convergence ($I2C$) and the corresponding standard deviation are shown in the last two columns.
\label{tab:time2eq}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{./NvsIters}
\caption{Average iterations to converge in terms of the number of users}
\label{fig:NvsIters}
\end{figure}
\subsection{On the influence of users' attitude \label{exp2}}
Now, in this experiment, we explore how the trust dynamics change in terms of the users attitude.
The question we posed here is: \textit{Does the user attitude (the relation between $\gamma$ and $\beta$) have any impact on the time to convergence?}
Recall that a good recommendation makes $\alpha_i^{t+1} = \alpha_i^t + \beta$; otherwise trust changes as $\alpha_i^{t+1} = \alpha_i^t - \gamma$.
We fix $N=100, L=60$. We keep $\beta = 0.05$ and we define $\gamma$ as $\phi \times \beta$ with $\phi \in \{0.5, 0.6, \ldots, 1.9, 2.0\}$.
The $\phi$ value allows to model the user attitude as a continuum between tolerant to intolerant attitude.
When $\phi=1$, then a neutral user is modeled while $\phi < 1$ allows to model a tolerant one.
Finally, when $\phi > 1$ an intolerant user is obtained.
For each value of $\phi$ we run 100 repetitions of the simulation, each one with a maximum of 5000 iterations.
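This sweep can be sketched by reusing the model functions from Section \ref{model} and counting, for each $\phi$, the runs in which every user reaches $\alpha_i=1$ within the iteration limit:
\begin{verbatim}
import random

def convergence_fraction(phi, reps=100, N=100, L=60,
                         beta=0.05, max_iter=5000):
    # Fraction of repetitions in which all users reach alpha = 1,
    # for gamma = phi * beta.
    gamma = phi * beta
    converged = 0
    for rep in range(reps):
        rng = random.Random(rep)
        alphas = [0.5] * N  # initial trust level (assumption, as before)
        for _ in range(max_iter):
            recs = recommend(N, L, rng)
            decs = [decide(r, a, rng) for r, a in zip(recs, alphas)]
            att = sum(d == GO for d in decs)
            alphas = [update_trust(a, r, att, L, beta, gamma)
                      for a, r in zip(alphas, recs)]
            if all(a == 1.0 for a in alphas):
                converged += 1
                break
    return converged / reps
\end{verbatim}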
The results are shown in Table \ref{tab:influNegFeedback2}.
Focusing first on the left part of the table, two different behaviors are clearly observed.
The first one is when $\phi \leq 1.4$, where all the runs converged.
In these cases, a clear exponential relation is observed between $\phi$ ($\gamma$) and the time to converge.
When the negative feedback is lower than the positive one, $\gamma < \beta$ (i.e. $\phi < 1$), the time to converge is shorter than when $\gamma = \beta$ ($\phi=1$). This is the behavior of ``tolerant'' users who forgive recommendation errors.
When $1 < \phi \leq 1.4$ we are in the presence of users that are less tolerant to recommendation errors. The higher the $\gamma$ (the negative feedback), the harder the convergence.
When $\phi = 1.5$, an important change in the behavior of the model appeared. Just 60\% of the runs converged, while this percentage was reduced to just 3\% when $\phi = 1.6$. Moreover, when $\phi > 1.6$, the simulations did not converge within the iteration limit posed.
To better understand the changes between $1.4 \leq \phi \leq 1.6$, we made another experiment with fine-grained values for $\phi = \{1.4, 1.41, 1.42, \ldots 1.59, 1.6\}$.
The results are shown in Table \ref{tab:influNegFeedback2} (right). The simulation converged in all the runs when $\phi \leq 1.46$. For higher $\phi$ values, the number of converged runs decreases following a quadratic relation ($y=4226.7x^2 - 13689x + 11084$, $R^2 = 0.973$) (see Fig. \ref{fig:phivsconverg}). Please note that these changes in $\phi$ values imply just a modification of $\gamma$ at the fourth decimal place.
\begin{table}
\centering
\footnotesize
\begin{tabular}{@{}l@{}cc@{}r}
\begin{tabular}{cc@{}rr}
$\phi$ & $\gamma$ & $runs$ &I2C \\
\hline
0.5 & 0.025 & 100 &52 \\
0.6 & 0.030 & 100 &58 \\
0.7 & 0.035 & 100 &65 \\
0.8 & 0.040 & 100 &73 \\
0.9 & 0.045 & 100 &86 \\
1.0 & 0.050 & 100 &100 \\
1.1 & 0.055 & 100 &127 \\
1.2 & 0.060 & 100 &162 \\
1.3 & 0.065 & 100 &238 \\
\hline
1.4 & 0.070 & 100 &544 \\
1.5 & 0.075 & 60 & 2205 \\
1.6 & 0.080 & 3 &4557 \\
\hline
1.7 & 0.085 & 0 & - \\
1.8 & 0.090 & 0 & - \\
1.9 & 0.095 & 0 & - \\
2.0 & 0.100 & 0 & - \\
\hline
\end{tabular} & & &
\begin{tabular}{ccrr}
$\phi$ & $\gamma$ & $runs$ & I2C \\
\hline
1.40 & 0.0700 & 100 & 544 \\
1.41 & 0.0705 & 100 & 604 \\
1.42 & 0.0710 & 100 & 722 \\
1.43 & 0.0715 & 100 & 910 \\
1.44 & 0.0720 & 100 & 1091 \\
1.45 & 0.0725 & 100 & 1292 \\
1.46 & 0.0730 & 100 & 1378 \\
1.47 & 0.0735 & 94 & 1554 \\
1.48 & 0.0740 & 85 & 2043 \\
1.49 & 0.0745 & 86 & 2065 \\
1.50 & 0.0750 & 60 & 2205 \\
1.51 & 0.0755 & 54 & 2494 \\
1.52 & 0.0760 & 34 & 2324 \\
1.53 & 0.0765 & 26 & 1999 \\
1.54 & 0.0770 & 30 & 2537 \\
1.55 & 0.0775 & 22 & 2206 \\
1.56 & 0.0780 & 13 & 2171 \\
1.57 & 0.0785 & 11 & 3347 \\
1.58 & 0.0790 & 4 & 1028 \\
1.59 & 0.0795 & 7 & 2481 \\
1.60 & 0.080 & 3 & 4557 \\
\end{tabular}\\
\end{tabular}
\caption{Influence of $\phi$ on the average time to convergence. Values for $I2C$ are rounded. \label{tab:influNegFeedback2}}
\end{table}
These results raise another question: \textit{when a simulation does not converge, what is the average level of trust reached?}
Figure \ref{fig:boxplotphi} shows boxplots corresponding to the average trust values achieved for $\phi \geq 1.6$. It is clear that as the negative feedback increases (the users are more intolerant to recommendation errors), it becomes harder for the average trust to increase. In fact, such a value never gets higher than 0.4.
Recall that when $\phi = 2$, a recommendation error has twice the impact on trust of a good recommendation. The plot shows that in this case the average trust is almost always below $0.3$, which in turn means that about 7 out of 10 recommendations (70\%) are rejected by the users.
\begin{figure}
\centering
\includegraphics[clip=true,width=0.77\linewidth]{./phiVSconverg}
\caption{Number of repetitions that converged in terms of $\phi$}
\label{fig:phivsconverg}
\end{figure}
\begin{figure}
\centering
\includegraphics[clip=true,width=0.78\linewidth]{./boxplotPhi}
\caption{Average trust after 5000 iterations for different values of $\phi$ (Recall $\gamma = \phi \times \beta$)}
\label{fig:boxplotphi}
\end{figure}
\section{Conclusions \label{conclus}}
We focused on an idealized recommender tool that issues binary recommendations on whether or not to use a resource with limited capacity.
We proposed a simple model to study both:
1) the evolution of trust and 2) the impact of user attitudes on trust dynamics.
We focused on two research questions, for which the main conclusions are outlined below.
\noindent
\textit{1) Is there any $t^*$ which makes the individual values $\alpha_i^{t^*}, \: \forall \: i \in [1 \ldots N]$ converge? In such a case, is the convergence value the same for all the users?}
The experiments confirmed that the answer to this question is YES.
Using neutral users (the same positive and negative learning factors $\beta = \gamma = 0.05$), all the simulations ended with all the users having $\alpha_i = 1$. In other words, at some point in time, all the users accept the recommendation.
It was also observed that the value $t^*$ (the time to convergence) follows a power law relation with the number of users.
From the point of view of the resource usage, this is very important: if the users accept the recommendation, then the recommender can properly balance the attendance to the bar. Moreover, the recommendation problem can be transformed into an assignment problem, and then a more ``fair'' approach for recommendations can be implemented (instead of a random one).
\noindent
\textit{2) Does the user attitude (the relation between $\gamma$ and $\beta$) have any impact on the time to convergence?}
The answer is YES. The relation between both parameters has a very strong impact on the time (or number of iterations) to converge.
Recall that each time the recommender produced a good recommendation, the user's trust was increased by $\beta$ units, while it was decreased by $\gamma$ if the recommendation was bad.
When $\beta > \gamma$, users are tolerant to recommendation errors. A good recommendation weights more than an error. Under this configuration, the simulation always converged.
As the difference $\gamma - \beta$ became bigger, the number of simulations that converged decreased following a quadratic relation. This is related to the fact that the average trust in the recommendations stayed at low values (below 0.4).
Another important observation is how sensitive this simple model is to small variations in $\gamma$. With $\gamma = 0.07$, all the simulations converged, while with $\gamma = 0.085$ none of them did.
This ``sensitivity to initial conditions'' is a very well known situation in the complex systems field \cite{Mitchell2009}.
If such variations are observed in this simple model, then one should be very careful when analyzing more complex ones, as very small parameter variations may lead to very big changes in the system behavior.
Overall, we consider this model and the results obtained as a first step towards understanding the impact of trust dynamics in recommendation tools for resources with limited capacity.
\section*{Acknowledgments}
Research supported in part by project TIN2017-86647-P (Spanish
Ministry of Economy and Competitiveness, includes FEDER funds from the European Union).
\section{Introduction}
The Large Synoptic Survey Telescope (LSST) is a next-generation survey
project, coupling a world-class telescope facility with cutting-edge
data management software and calibration efforts. Its primary science
drivers are to constrain dark matter and dark energy, to map the Milky
Way and Local Volume, to catalog the Solar System, and to explore
the transient optical sky. The catalogs generated by LSST during its
ten years of operation will enable a multitude of science
investigations beyond these primary science drivers, many of which are
explored in the LSST Science Book (\cite{scibook}).
The inventory of the Solar System is one of the primary science
drivers for LSST. Fulfilling this science goal will involve
discovering millions of minor planets, increasing the number of known
objects in every small body population by a factor of 10 to 100 above
current levels. Many of these objects will receive large numbers
($>100$) of observations, over a long time span (several years) and with
extremely accurate astrometry (10mas errors), resulting in highly
accurate orbits suitable for a wide range of theoretical studies or
for targeted follow up observations for specific purposes (such as
spectroscopy or occultation studies).
This large number of observations will also provide the basis for
sparse lightcurve inversion, which requires at least 100 observations
over a range of phase angles. It will be possible to determine
the spin states and shapes for thousands of Main belt
asteroids. Frequent observations, spread among a wide range of times
and at variety of different points along each object's orbit, are also
ideal for detecting activity, either collisionally-induced activity or
surface activity induced by volatile outgassing.
Each object will obtain observations in different filters, primarily $g$,
$r$, $i$ and $z$ but also $u$ and $y$, with photometric calibration of
each measurement accurate to 10mmags (\cite{lsstoverview}). This will enable study of the
composition of these objects. Adding color information also
provides statistical constraints on the albedos of the objects,
allowing a tighter estimate of the diameters and thus size
distribution of the population. With combined color and orbital
information, identification of collisional families becomes more
robust. (See the Solar System chapter from \cite{scibook} for further discussion of these
topics). Table \ref{table1} provides a summary of the expected number
of objects in each population, as well as their typical arc length and
number of observations.
\begin{table}
\begin{center}
\caption{Summary of small body populations observed with LSST}
\label{table1}
{\scriptsize
\begin{tabular}{|l|c|c|c|c|}\hline
Population & Currently known$^1$ & LSST discoveries$^2$ & Num. of observations$^3$ & Arc length (years)$^3$ \\ \hline
Near Earth Objects (NEOs) & 12,832 & 100,000 & (H$\leq$20) 90 & 7.0 \\ \hline
Main Belt Asteroids (MBAs) & 636,499 & 5,500,000 & (H$\leq$19) 200 & 8.5 \\ \hline
Jupiter Trojans & 6,387 & 280,000 & (H$\leq$16) 300 & 8.7 \\ \hline
TransNeptunian and Scattered Disk Objects (TNOs and SDOs) & 1,921 & 40,000 & (H$\leq$6) 450 & 8.5 \\ \hline
\end{tabular}
}
\end{center}
\vspace{1mm}
\scriptsize{
{\it Notes:}\\
$^1$As reported by the MPC (May 2015).
$^2$Expected at the end of LSST's ten years of operations.
$^3$Median number of observations and observational arc length for the brightest objects near
100\% completeness (as indicated). }
\end{table}
Construction for LSST is ongoing, with first light scheduled for 2020,
a scientific commissioning program following in the next year, and the
start of survey operations in 2022. Details of LSST operations are
currently being examined. In particular, the survey strategy continues
to be analyzed up to and during operations in order to maximize the
science return across the wide variety of goals for LSST. In this
proceedings, we will describe the planned LSST configuration, and
expected LSST performance in discovering and characterizing Near Earth
Objects (NEOs) and Main Belt asteroids (MBAs), then present software
tools that can aid the planetary astronomy community in extending
this analysis.
\section{The LSST telescope}
The primary science goals for LSST drive the design of the telescope
and camera. The choice of telescope mirror size, field of view,
filters and typical exposure times combine to achieve the desired
single image depth, coadded image depth, number of repeat visits,
visit distribution among filters, and survey footprint.
The resulting final design is an optical telescope with
$ugrizy$ filters and a primary mirror of 8.4m in
diameter (the effective diameter is 6.5m after accounting for obscuration
and vignetting). The telescope has a fast f/1.2 focal ratio; together
with the 3.2 Gigapixel camera, this provides a 9.6 square degree field
of view with 0.2 ''/pixel platescale. Short exposures and a rapid
survey strategy covering the entire visible sky every three to four
nights in multiple filters complete the basic strategy to meet these
science goals.
The details of the observing strategy will be discussed further in
Section~\ref{surveystrategy}, but at the base of the cadence is the
pair of back-to-back 15 second exposures that make up a 30 second
`visit'. For most purposes, this 30 second visit can be considered the
equivalent exposure time for LSST; the back-to-back `snaps' will be
processed separately to help reject cosmic rays (and could be used to
help determine velocity direction for trailing moving objects), but
the images will be combined for most purposes and individual image
depths correspond to the 30 second visit $5\sigma$ point-source
limiting magnitude. This drives further design choices for the
telescope; in order to maintain a high duty cycle, the camera readout time
is only 2 seconds per exposure and the slew-settle time between
nearby fields is only 5 seconds per visit.
The fill factor of the camera is 90\%, counting active silicon within
a $3.5^\circ$ diameter circle inscribed in the field of view; the fill
factor counting only chip gaps but over the entire (non-circular)
focal plane is slightly higher, but similar. See
Figure~\ref{focalplane} for an illustration of the focal plane.
On-site monitoring has provided information on the expected free
atmosphere FWHM and sky brightness (see Table~\ref{table2}). The
telescope hardware is expected to contribute an additional 0.4'' to
the delivered seeing. The expected dark sky skybrightness is generated using
detailed sky spectra obtained elsewhere (\cite{patat}), modified to
match broad-band sky brightness measurements reported from Cerro
Pachon and other nearby sites.
Expected throughput curves for each component of the hardware system
are maintained by system engineering (see the github
repo\footnote{\url{http://github.com/lsst-pst/syseng_throughputs}} for
latest values). These are based on data from prototype sensors and
the expected performance of the mirrors and filters and lenses,
including broadband coatings and loss estimates due to condensation
and contamination. The throughput curves for each filter are
illustrated in Figure~\ref{throughputs}.
Combining all of the information above, we can calculate the
expected five-sigma point-source limiting magnitudes for LSST, under
fiducial seeing and dark sky conditions -- see Table~\ref{table2}.
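For readers who wish to rescale these depths to other observing conditions, the standard approximation from the LSST Overview Paper expresses $m_5$ in terms of sky brightness, seeing, exposure time and airmass. In the sketch below, the band constant $C_m$ and the extinction coefficient $k$ are illustrative placeholders rather than the official system-engineering values:
\begin{verbatim}
import math

def m5(Cm, msky, theta_eff, t_vis=30.0, k=0.1, X=1.0):
    # m5 = Cm + 0.50 (msky - 21) + 2.5 log10(0.7 / theta_eff)
    #      + 1.25 log10(t_vis / 30) - k (X - 1)
    return (Cm + 0.50 * (msky - 21.0)
            + 2.5 * math.log10(0.7 / theta_eff)
            + 1.25 * math.log10(t_vis / 30.0)
            - k * (X - 1.0))

# e.g. an r-band estimate with the Table 2 sky brightness
# (Cm here is a rough placeholder).
print(m5(Cm=24.4, msky=21.2, theta_eff=0.8))
\end{verbatim}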
As LSST continues to move toward operations, the expected values for each
of these components will be replaced by `as delivered'
versions. Up-to-date values will be maintained in the github
repositories and reported in the LSST Overview Paper (\cite{lsstoverview}).
\begin{table}[tbh]
\begin{center}
\caption{$ugrizy$ $5\sigma$ point source limiting magnitudes$^1$}
\label{table2}
{\scriptsize
\begin{tabular}{|l|c|c|c|c|c|c|}\hline
& $u$ & $g$ & $r$ & $i$ & $z$ & $y$ \\ \hline
Median atmospheric IQ$^2$ & 0.66 & 0.61 & 0.56 & 0.53 & 0.50 & 0.48 \\
\hline
Dark sky brightness (mag/sq'')$^3$ & 23.0 & 22.2 & 21.2 & 20.5 & 19.6 & 18.6
\\ \hline
$5\sigma$ limiting magnitude & 23.6 & 24.9 & 24.4 & 24.0 & 23.4 & 22.5
\\ \hline
\end{tabular}
}
\end{center}
\vspace{1mm}
\scriptsize{
{\it Notes:}\\
$^1$Please see the LSST Overview Paper (\cite{lsstoverview}) for
updated values.
$^2$Based on Cerro Pachon site monitoring.
$^3$Based on dark sky spectra convolved with the LSST bandpasses,
validated with site monitoring data from Cerro Pachon.}
\end{table}
\begin{figure}[tb]
\begin{minipage}{.5\textwidth}
\begin{center}
\includegraphics[width=0.9\linewidth]{focalplane}
\captionsetup{width=0.9\linewidth}
\caption{Layout of the LSST focal plane. The solid circle indicates
the inscribed circular field of view (3.5$^\circ$ diameter). The
plotted points indicate active silicon.\label{focalplane}}
\end{center}
\end{minipage}
\begin{minipage}{.5\textwidth}
\begin{center}
\includegraphics[width=0.9\linewidth]{throughputs}
\captionsetup{width=0.9\linewidth}
\caption{Expected LSST throughput response in $ugrizy$, including
an atmospheric throughput curve (the dotted line). The
expected dark sky brightness in AB magnitudes is also shown (the
thin black line).
\label{throughputs}}
\end{center}
\end{minipage}
\end{figure}
\section{LSST data management}
LSST will acquire millions of images -- on the order of 2.5 million
visits, each consisting of a pair of exposures. The LSST Data
Management (DM) software pipeline has the task of turning these images
into catalogs enabling the primary science goals. In general these
catalogs can be thought of as falling into three categories: Level 1,
Level 2 and Level 3.
Level 1 data products are created during nightly processing. The
images in each visit are combined to reject cosmic rays, then
subtracted from a template image created from previously acquired
imaging (typically 6 months to a year earlier). The detections
measured in each difference image correspond to transients, variables,
moving objects, and artifacts. These outputs will be run through
machine learning algorithms to help reject artifacts. The resulting
detections, along with relevant information from existing catalogs
such as identification of known variable stars or the location of
nearby background galaxies, will be released within 60 seconds of the
end of each visit as the LSST Alert stream.
In addition, these difference image catalogs (after removing known
variable stars) will be used to feed the LSST moving object processing
system (MOPS). MOPS will link detections from different visits within
a night into tracklets, combine these with tracklets from
other nights into tracks, and finally fit the tracks with orbits; it will also extend
known orbits with new detections of these objects. These moving object
catalogs will be updated and released on a daily basis.
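Although the production MOPS algorithms are considerably more sophisticated, the intra-night linking stage can be illustrated with a toy velocity-cut pairing; all names and thresholds below are our own, and real tracklet building also handles magnitude consistency and combinatorial pruning:
\begin{verbatim}
import math

def make_tracklets(detections, max_rate=2.0):
    # Naive sketch: pair detections whose implied sky motion is below
    # max_rate (deg/day). Each detection is (det_id, mjd, ra_deg, dec_deg);
    # a flat-sky approximation is used for small separations.
    tracklets = []
    for i, (id1, t1, ra1, dec1) in enumerate(detections):
        for id2, t2, ra2, dec2 in detections[i + 1:]:
            dt = t2 - t1
            if dt <= 0:
                continue
            dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
            ddec = dec2 - dec1
            if math.hypot(dra, ddec) / dt < max_rate:
                tracklets.append((id1, id2))
    return tracklets
\end{verbatim}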
The Alert stream and the moving object catalogs (the linked orbits and
their individual detections) make up the Level 1 data products. It is
worth noting that moving objects which are measurably trailed in any
individual visit will be clearly identifiable in the Alert stream as
such; very fast-moving objects thus have an additional discovery
avenue via Alerts.
Level 2 data products are created during yearly data processing and
include a more precise level of calibration in photometry and
astrometry. During the yearly data processing, all existing images
will be reprocessed using the most recent software release (including
reprocessing these images through MOPS, likely using slightly
improved templates for image differencing). These
data release catalogs will reach 10mmag absolute photometric accuracy
and 10mas absolute astrometric accuracy. The increased accuracy is
possible due to various algorithms that compute global solutions;
these are not run during nightly data processing.
Level 3 data products indicate data products resulting from
independently written (non-project) software, created using LSST data
access center compute resources. These data products will typically be
generated using extensions to the LSST DM software, and may or may not be
publicly available depending on the user. Publicly available Level 3
data products which prove particularly useful could become fully
federated with LSST databases.
The LSST DM pipeline will be entirely open source and publicly
available. The various repositories that make up the DM software stack
can currently be found on github\footnote{
\url{http://github.com/lsst}}; more information about the stack and
instructions for installing the LSST software stack can be found at
\url{http://dm.lsst.org}. Details of the data products (images and
catalogs) are defined in the LSST Data Products Definitions Document
(DPDD)\footnote{
\url{http://www.lsst.org/content/data-products-definition-document}}
. All LSST data products will be immediately publicly available to
institutions with data rights.
\section{LSST survey strategy}
\label{surveystrategy}
The basic parameters of LSST -- telescope
size, field of view -- have been fixed. In
addition, given the survey length, visit exposure time, and constraints on the survey
footprint, an approximate outer envelope of the survey characteristics can be
estimated: the survey has about 2.5 million visits to
distribute over about 25,000 square degrees for all survey fields,
with about 825 visits per field in the main survey footprint
($\approx18,000$ sq deg) to distribute among $ugrizy$ filters. Most
fields in the main survey footprint can be observed twice per night
(with an interval of about 30 minutes) every three to four days, on an
ongoing basis over their observing season, repeating for ten
years. This is the strawman LSST observing strategy at present.
However, the details of the observing cadence have {\bf not} yet been
fixed. For example, instead of distributing visits fairly evenly in time for all
fields over all ten years, a variant may be to
concentrate a subset of those visits for some fields into a shorter
period of time (a `rolling cadence'). One option that may be interesting for
studying solar system objects could include taking more frequent
observations for fields near opposition and then reducing the number
of observations for fields away from opposition. The process of
optimizing the survey strategy in terms of cadence is just starting to
get underway.
LSST has several tools to help this process of survey strategy
optimization. The first is the LSST Operations Simulator (OpSim)
(\cite{opsim}), which combines a realistic weather history and a
high-fidelity telescope model with a scheduler driven by a set of
proposals that attempt to parametrize a basic observing strategy ({\it
e.g.}, a proposal for the main survey footprint that specifies the
main survey footprint, skybrightness and seeing limits, number of
visits desired in each filter, and the time window between pairs of
visits in each night). The output of OpSim is a simulated pointing
history, complete with observing conditions and individual visit
limiting magnitudes, that demonstrates how LSST might survey the sky. A
simple visualization of an OpSim run is shown in
Figure~\ref{footprint}; this also shows the footprint for the
survey in various proposals and filters.
The second tool is a user-extensible python package called the LSST
Metrics Analysis Framework (MAF) (\cite{maf}). MAF was created to help
analyze OpSim outputs. Using MAF, it is simple to write short pieces
of python code (`metrics') that can be plugged into the framework to
evaluate some aspect of OpSim. By collecting these metrics from a wide
representation of the astronomical community, we can evaluate OpSim
surveys created with a variety of scheduler configurations and
maximize science return from LSST across a wide range of science
goals. An example of using MAF to evaluate the median time between
revisits at each point on the sky is given in Figure~\ref{time}.
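As a concrete illustration of what such a metric looks like in code, the
sketch below computes the median gap between consecutive visits at each
sky point. It follows the MAF conventions current at the time of
writing, but the exact import path, base-class signature, and the
{\tt expMJD} column name should be treated as assumptions to be checked
against the MAF documentation.
\begin{verbatim}
import numpy as np
from lsst.sims.maf.metrics import BaseMetric

class MedianRevisitMetric(BaseMetric):
    """Median time (days) between consecutive visits at a sky point."""
    def __init__(self, mjdCol='expMJD', **kwargs):
        self.mjdCol = mjdCol
        super(MedianRevisitMetric, self).__init__(
            col=[mjdCol], units='days', **kwargs)

    def run(self, dataSlice, slicePoint=None):
        times = np.sort(dataSlice[self.mjdCol])
        if times.size < 2:
            return self.badval   # too few visits to define a gap
        return float(np.median(np.diff(times)))
\end{verbatim}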
OpSim and MAF are open-source software packages,
provided as part of the LSST Systems Engineering Simulations
effort. Instructions for installing them are available
online\footnote{
\url{https://confluence.lsstcorp.org/display/SIM/Catalogs+and+MAF}}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\textwidth]{Nvisits}
\caption{The distribution of visits across the LSST survey footprint,
in a sample OpSim simulated survey. The main survey covers the area
from $-65^\circ<$ Dec $<5^\circ$, excluding a small area around the
galactic plane. The area from Dec=$5^\circ$ up to $10^\circ$ north
of the ecliptic is covered in an additional observing program; other
programs cover the South Celestial Pole and the Galactic Plane.
\label{footprint}}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{internight}
\includegraphics[width=0.4\textwidth]{intranight}
\caption{Left: the median number of nights between consecutive
visits to a field, for an OpSim simulated survey. Right: A histogram
of revisit times, within each night.
\label{time}}
\end{figure}
\section{Evaluating the LSST survey strategy for Solar System science}
The LSST Metrics Analysis Framework (MAF) can also be used to evaluate the
performance of OpSim simulated surveys with respect to Solar System
science goals. MAF will allow a user to specify a population of moving
objects (by providing their orbital parameters), specify a particular
OpSim survey, and then generate their simulated observations. MAF uses
the open-source package OpenOrb (\cite{oorb}) in generating the
ephemeris information for these simulated observations.
In many cases, the input population of moving objects can be
small (on the order of a few thousand); MAF is able to clone the resulting
detections over a range of $H$ magnitudes, so that
metrics can be evaluated over a wide range of $H$ while only
using a relatively small set of orbits. MAF tests using 10,000 MBAs
produced metric results identical to tests using 2,000 MBAs;
statistically, this method of cloning the input population is
adequate for most purposes.
As MAF generates the detection lists for each object, the reference
$H$ value in the orbit file is used to generate an apparent $V$ band
magnitude, then the (optionally user-assigned) spectrum is used to
generate a magnitude in the LSST bandpass. When the object is cloned
over the user-specified range of $H$, these apparent magnitudes are
adjusted accordingly. When evaluating a specific metric ({\it e.g.}
the number of observations obtained for each orbit in the sample for a
range of $H$ magnitudes), the desired SNR cutoff can be specified and
calculated, including trailing losses and the $5\sigma$ limiting magnitude for each visit.
Trailing loss estimates are provided by MAF. Trailing losses occur
whenever the motion of a moving object spreads its light over a
wider area than a simple stellar PSF. There are two aspects
of trailing loss to consider: simple SNR losses and detection losses.
The first is simply the degradation in SNR that occurs (relative to a
stationary PSF) because the trailed object includes a larger number of
background pixels in its footprint. This will affect photometry and
astrometry, but typically doesn't directly affect whether an object is
detected or not. The second effect (detection loss) is not related to
measurement errors but does typically affect whether an object passes
a detection threshold. Detection losses occur because source
detection software is optimized for detecting point sources;
a stellar PSF-like filter is used when identifying sources that pass
above the defined threshold, but this filter is non-optimal for
trailed objects. This can be mitigated with improved software ({\it
e.g.} detecting to a lower SNR threshold and attempting to detect
sources using a variety of trailed PSF filters). Both trailing losses can
be fit as:
\begin{eqnarray}
\Delta \, m & = & -1.25 \, \log_{10} \left( 1 + \frac{a \, x^2} { 1 + b\,
x} \right) \\
x & = & \frac{v \, T_{exp}} {24 \, \theta}
\end{eqnarray}
where $v$ is the velocity (in degrees/day), $T_{exp}$ is the exposure
time (in seconds), and $\theta$ is the FWHM (in arcseconds). For
SNR trailing losses, we find $a = 0.67$ and $b = 1.16$; for
detection losses, we find $a=0.42$ and $b=0$. An illustration of the
magnitude of these trailing loss effects for 0.7'' seeing is given in
Figure~\ref{trailinglosses}. When considering whether a source would
be detected at a given SNR using typical source detection software,
the detection loss should be used.
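For quick estimates, the fit above is one line of code. The helper below
is our illustration only (the function name and defaults are ours, and
it returns the magnitude {\it loss}, i.e. $-\Delta m \ge 0$), evaluating
the trailing-loss fit with the coefficients quoted above.
\begin{verbatim}
import numpy as np

def trailing_loss(v, t_exp=30.0, fwhm=0.7, kind='detection'):
    """Magnitude loss for a trailed source.

    v: velocity (deg/day); t_exp: exposure time (s);
    fwhm: seeing FWHM (arcsec); kind: 'detection' or 'snr'.
    """
    a, b = (0.42, 0.0) if kind == 'detection' else (0.67, 1.16)
    x = v * t_exp / (24.0 * fwhm)
    return 1.25 * np.log10(1.0 + a * x**2 / (1.0 + b * x))
\end{verbatim}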
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{trailing_losses}
\caption{Trailing losses for 30 second LSST visits, assuming seeing of
0.7''. The dotted line shows SNR trailing losses, the solid line
indicates detection trailing losses. With software improvements
detection losses can be mitigated.
\label{trailinglosses}}
\end{figure}
MAF can also include the details of the camera focal plane layout, as
illustrated in Figure~\ref{focalplane}; detections which would fall into
chip gaps are then removed.
To demonstrate the potential of LSST as a tool for studying the
Solar System, we calculate a variety of metrics for a set of small
body population samples ranging from Potentially Hazardous Asteroids
(PHAs) to Trans-Neptunian Objects (TNOs). The 2000 sample orbits used for each
population except the PHAs come from the Pan-STARRS Synthetic Solar
System Model (\cite{s3m}); the PHA orbits are taken from the Minor
Planet Center\footnote{\url{http://www.minorplanetcenter.net/}} database,
trimmed to the subset of $\approx1500$ objects larger than 1km in
diameter. In all metrics shown here, we then cloned these orbits over a
range of $H$ magnitudes, assumed the (larger) detection trailing
loss, included the camera focal plane footprint, and used only the
resulting detections above a SNR=5 cutoff.
First we simply count the total number of observations for each orbit
as a function of $H$; the mean value for all orbits in each population
is shown in Figure~\ref{nobs}. Similarly, we can look at the time of
the first and last observation to get the overall observational arc
length; the mean values of the observational arc are shown in
Figure~\ref{arclength}.
\begin{figure}[tb]
\begin{minipage}{.5\textwidth}
\begin{center}
\includegraphics[width=0.9\linewidth]{nobs}
\captionsetup{width=0.9\linewidth}
\caption{The mean number of observations (per object) for each of our sample small
body populations, as a function of $H$ magnitude.
\label{nobs}}
\end{center}
\end{minipage}
\begin{minipage}{.5\textwidth}
\begin{center}
\includegraphics[width=0.9\linewidth]{arclength}
\captionsetup{width=0.9\linewidth}
\caption{The mean observational arc (in days) for each of our sample
small body populations, as a function of $H$ magnitude.
\label{arclength}}
\end{center}
\end{minipage}
\end{figure}
We also calculate the number of `discovery opportunities' available
for each object and use this to calculate the overall completeness
across the population (counting an object as `discovered' if it had at
least one discovery opportunity). The definition of a discovery
opportunity can be varied by the user, but here we look at a variety
of cases. First, the current basic MOPS requirement: detections on
three different nights within a window of 15 nights, with detections
in at least two visits per night separated by less than 90
minutes. Second, an extended MOPS requirement intended to be more
rigorous while still nominally matching the typical observing pattern
in our OpSim survey: detections on four different nights within a
window of 20 nights, with at least two visits per night. Third, a
relaxed discovery criterion intended to demonstrate the effect of
improving MOPS software: 3 nights within 30 days, again with at least
2 visits per night. Finally, a `magic' discovery criterion intended to
give an idea of the upper limit for detection if linking software is
not a constraint: 6 observations in 60 nights. The cumulative
completeness (completeness for $H \le X$) is calculated by multiplying
the differential completeness values by a power law
($\alpha=0.3$). The results are shown in Figure~\ref{completeness},
including the value $H_{50}$, corresponding to the $H$ value where the
cumulative completeness drops to 50\% of the overall population. It
can be seen that these varying discovery scenarios have the largest
effects on the PHA and NEO populations. With more rigorous
requirements, the $H_{50}$ values are increased by a few to several
tenths of a magnitude; with relaxed requirements, these values are
pushed faintward by a few tenths. The peak completeness levels change
by a few percent for PHAs and NEOs only. This suggests that even with
the basic cadence LSST is doing fairly well at discovering moving
objects; with improvements in the cadence (probably some version of a
rolling cadence to concentrate more visits into a given chunk of time)
it could do better; and money spent on improving linking software,
even by a relatively modest amount, directly leads to improvements in
completeness.
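The cumulative completeness integration itself is straightforward; the
sketch below (our illustration, with array conventions of our choosing)
weights the differential completeness by the $\alpha=0.3$ power-law size
distribution and also extracts $H_{50}$.
\begin{verbatim}
import numpy as np

def cumulative_completeness(h_grid, diff_comp, alpha=0.3):
    """Completeness for H <= X, weighting by N(H) ~ 10**(alpha*H)."""
    weights = 10.0 ** (alpha * np.asarray(h_grid))
    return np.cumsum(diff_comp * weights) / np.cumsum(weights)

def h_50(h_grid, cum_comp):
    """H where the cumulative completeness falls to 50%."""
    # np.interp needs increasing x; completeness decreases with H.
    return float(np.interp(0.5, np.asarray(cum_comp)[::-1],
                           np.asarray(h_grid)[::-1]))
\end{verbatim}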
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.45\linewidth]{CumulativeCompleteness_standard}
\includegraphics[width=0.45\linewidth]{CumulativeCompleteness_four} \\
\includegraphics[width=0.45\linewidth]{CumulativeCompleteness_long}
\includegraphics[width=0.45\linewidth]{CumulativeCompleteness_magic}
\caption{Cumulative completeness calculated using various discovery
criteria as outlined in the text, for each of our sample small body
populations. $H_{50}$ in the plot legends indicates the $H$ value where
the cumulative completeness falls to 50\% of the overall population.
\label{completeness}}
\end{center}
\end{figure}
To demonstrate a more specialized MAF metric, aimed at evaluating the
capability of LSST to help determine the source of activity in active
asteroids, we also present the result of an `activity detection'
metric. Here we take the detections of each object after applying all
the focal plane and SNR cuts and bin the times of these detections
either by time since the start of the survey (interesting if
activity is due to collisions and thus random) or by time relative to the
period of the object (the interesting timescale if activity is
periodic and associated with the object's orbit). We
then calculate the probability of observing the object on a
given timescale, and if we assume LSST DM will provide enough
information to identify activity if it is present, this is equivalent
to the probability of detecting activity on that given
timescale. Repeating this exercise over a variety of timescales, we
come up with the probabilities of detecting activity shown in
Figures~\ref{activityTime} and \ref{activityPeriod}.
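One plausible reading of this procedure, reduced to code for the
collisional (time-since-survey-start) case, is sketched below; the
binning convention is our assumption, and the periodic case is analogous
with the detection times folded on the orbital period.
\begin{verbatim}
import numpy as np

def activity_prob(det_times, survey_days, window_days):
    """Fraction of windows of length `window_days` containing at
    least one detection -- a proxy for the chance of catching
    activity that lasts that long."""
    n_bins = int(np.ceil(survey_days / window_days))
    hit = np.floor(np.asarray(det_times) / window_days).astype(int)
    return np.unique(hit).size / float(n_bins)
\end{verbatim}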
\begin{figure}[bt]
\begin{minipage}{.5\textwidth}
\begin{center}
\includegraphics[width=0.9\linewidth]{activityTime}
\captionsetup{width=0.9\linewidth}
\caption{The likelihood of detecting activity which lasts at least a
given amount of time (in days); mean (solid)
probabilities across each of the sample populations, as well as maximum
(dashed) and minimum (dotted) probabilities for individual objects
within the sample.
\label{activityTime}}
\end{center}
\end{minipage}
\begin{minipage}{.5\textwidth}
\begin{center}
\includegraphics[width=0.9\linewidth]{activityPeriod}
\captionsetup{width=0.9\linewidth}
\caption{The likelihood of detecting activity which lasts at least a
given fraction of the orbital period of the object; mean (solid)
probabilities across each of the sample populations, as well as maximum
(dashed) and minimum (dotted) probabilities for individual objects
within the sample.
\label{activityPeriod}}
\end{center}
\end{minipage}
\end{figure}
\section{Discovering moving objects with MOPS}
The discovery requirement with the LSST Moving Object Pipeline System
(MOPS) is: detections on at least three separate nights within a 15 night
window, with at least two detections (visits) in each night, separated
by 15 to 90 minutes. The detections within each night are joined into
tracklets, and then tracklet detections from multiple nights are
linked into tracks, which can then be fed to orbit determination
algorithms to filter true from false linkages.
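A toy version of the first linking stage is sketched below (our
illustration only; the real MOPS also uses sky positions and velocity
consistency when forming tracklets, not just detection times).
\begin{verbatim}
def make_tracklets(times_min, min_sep=15.0, max_sep=90.0):
    """Pair same-night detection times (minutes) separated by
    15-90 minutes, the intra-night cadence quoted above."""
    ts = sorted(times_min)
    return [(t1, t2)
            for i, t1 in enumerate(ts)
            for t2 in ts[i + 1:]
            if min_sep <= t2 - t1 <= max_sep]
\end{verbatim}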
Tests of prototype LSST MOPS with these requirements and an additional
constraint that the velocity on each night must be below 0.5
degrees/day, running on modest hardware (16 cores, $<20$ GB of RAM)
showed that in the absence of noise, moving object detections at the
full depth and cadence of LSST could be easily linked together into
tracks. These tests were repeated with increasing levels of random
noise in the input detection lists. At a ratio of 4:1 noise:real
detections, MOPS was still successful in creating tracks from the
input catalogs; although the software compute requirements (runtime
and memory usage) increased, there was no significant loss in terms of
found
objects\footnote{\url{https://github.com/lsst/mops_daymops/blob/master/doc/report2011/LDM-156.pdf}}.
The noise, or false positive rate of difference image detections, is a
crucial consideration for MOPS. Reducing the false positive rate
places high quality demands on the camera and optical system to reduce
defects and ghosting and on the difference image software to reduce
artifacts. Prototype LSST sensors have been delivered and perform
within specification, and are cosmetically clean. Amp-amp crosstalk is
well within specifications, and CCD-CCD crosstalk is too small to be
measurable. Tests have not shown any measurable charge persistence
under expected operating conditions. The optical system has been
extensively modeled and has extremely small optical ghosting over the
full focal plane. LSST is investing a significant amount of effort
into its difference image software, both for moving object detection
and for the purposes of the Alert pipeline. Existing surveys such as
the Palomar Transient Factory and the Dark Energy Survey are already
using advanced difference image pipelines on cosmetically clean and
well characterized systems to achieve false positive rates of 13:1
(noise:real); with the addition of machine-learning algorithms to
filter artifacts, these pipelines can achieve noise:real ratios of 1:3
(\cite{goldstein}). Existing surveys are already achieving false-positive
rates within the acceptable range of our prototype MOPS.
Work is ongoing to understand the limitations and capabilities of
MOPS, and the prototype MOPS software will also be improved
prior to survey operations.
\section{Conclusion}
The catalogs of minor planets that will come from LSST over its
lifetime have enormous potential for planetary science. Small body
populations throughout the Solar System will see an increase of 10-100
times over the number of objects currently known, including the Earth
minimoon, irregular satellite, and cometary populations. Many of these new
objects will have large numbers of observations over the course of
several years, in multiple filters, allowing for scientific
characterization of the physical properties of these populations.
LSST provides simulation tools (OpSim and MAF) to assess the impact of
the survey strategy on specific science goals. We encourage feedback
from the community, especially in terms of metrics, to help maximize
the scientific return of LSST. Further development and evaluation of
LSST DM pipelines is ongoing, particularly in the areas of difference
imaging and MOPS. First light for LSST is in 2020, with survey
operations starting in 2022.
\section{Introduction}
We consider the expansion of the powers of the determinant polynomial,
and discuss the existence of zero coefficients.
D. G. Glynn proved that the coefficients of the $m$th power of the determinant polynomial of order $n$
are all nonzero,
if $m = p-1$ with a prime $p$.
This result is remarkable because it leads to a proof of
the Alon--Tarsi conjecture in dimension $p-1$.
In this article,
we show that the converse of Glynn's result also holds, if $n \geq 3$.
The proof is quite elementary.
Let us explain the assertion precisely.
Let $X = (x_{ij})_{1 \leq i,j \leq n}$ be
an $n$ by $n$ matrix whose entries are indeterminates.
We define the coefficients $C_L$ by the following expansion of $(\det X)^m$:
\begin{equation}
(\det X)^m = \sum_{L \in \Psi(m)} C_L x^L.
\label{eq:definition of C_L}
\end{equation}
Here $\Psi(m)$ is the set of all $n$ by $n$ matrices
of nonnegative integers with each row and column summing to $m$:
\[
\Psi(m) = \left\{ (l_{ij})_{1 \leq i,j \leq n}
\,\left|\,
\begin{aligned}
&l_{ij} \in \mathbb{Z}_{\geq 0}, \\
&\textstyle\text{$\sum_{i=1}^n l_{ij} = m$ for any $j = 1,2,\ldots,n$}, \\
&\textstyle\text{$\sum_{j=1}^n l_{ij} = m$ for any $i = 1,2,\ldots,n$}
\end{aligned}
\right.
\right\}.
\]
Moreover,
we put $x^L = \prod_{1 \leq i,j \leq n} x_{ij}^{l_{ij}}$ for $L = (l_{ij})_{1 \leq i,j \leq n}$.
D. G. Glynn proved the following theorem for these coefficients $C_L$:
\begin{theorem}\label{thm:when m = p-1}\slshape
If $p$ is prime,
we have $C_L \ne 0$ for all $L \in \Psi(p-1)$.
\end{theorem}
\begin{remark}
Actually, Glynn proved a stronger theorem,
namely, that we have $L! C_L \equiv (-1)^n \pmod p$ for all $L \in \Psi(p-1)$.
Here we put $L! = \prod_{i,j=1}^n l_{ij}!$ for $L = (l_{ij})_{1 \leq i,j \leq n}$.
\end{remark}
In the present article, we prove that the converse of Theorem~\ref{thm:when m = p-1} also holds, when $n \geq 3$:
\begin{theorem}\label{thm:when m ne p-1}\slshape
Assume that $n \geq 3$.
Let $m$ be a natural number which cannot be expressed as $m = p-1$ with a prime $p$.
Then, there exists $L \in \Psi(m)$ satisfying $C_L = 0$.
\end{theorem}
Thus, we see that the following two conditions on $m$ are equivalent, when $n \geq 3$:
\begin{itemize}
\item We have $C_L \ne 0$ for all $L \in \Psi(m)$.
\item There is a prime $p$ satisfying $m=p-1$.
\end{itemize}
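For small $n$ and $m$ these coefficients can be inspected directly by
computer algebra. The following sketch (our illustration only, using the
{\tt sympy} package) expands $(\det X)^3$ for $n=3$ and confirms that the
coefficient of the all-ones exponent matrix vanishes, in line with
Theorem~\ref{thm:when m ne p-1}, since $m+1=4$ is composite.
\begin{verbatim}
from sympy import Matrix, Poly, prod, symbols

n, m = 3, 3
xs = symbols('x0:9')
X = Matrix(n, n, lambda i, j: xs[n * i + j])
p = Poly(X.det() ** m, *xs)

L = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # the all-ones matrix
mono = prod(X[i, j] ** L[i][j]
            for i in range(n) for j in range(n))
print(p.coeff_monomial(mono))           # prints 0
\end{verbatim}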
\section{The coefficients are all nonzero, when $m = p-1$}
Theorem~\ref{thm:when m = p-1} was shown by D. G. Glynn in \cite{G2}.
The key of the proof is the hyperdeterminant
introduced by Glynn himself in \cite{G1}
(this hyperdeterminant occurs only for fields of prime characteristic $p$).
Nowadays, a proof of Theorem~\ref{thm:when m = p-1}
without hyperdeterminant is also known \cite{K1}, \cite{K2}.
Theorem~\ref{thm:when m = p-1} is remarkable,
because this leads to a special case of the Alon--Tarsi conjecture on Latin squares.
Let $\operatorname{els}(n)$ and $\operatorname{ols}(n)$
denote the numbers of even and odd Latin squares of size $n$, respectively.
We can easily show $\operatorname{els}(n) = \operatorname{ols}(n)$
when $n$ is an odd number greater than $1$.
In contrast, in the case that $n$ is even,
the following conjecture was proposed by N. Alon and M. Tarsi \cite{AT}:
\begin{conjecture}
When $n$ is even, we have
$\operatorname{els}(n) \ne \operatorname{ols}(n)$.
\end{conjecture}
Glynn proved that this conjecture is true,
when $n = p-1$ with a prime $p$:
\begin{theorem}\label{thm:AT conjecture for p-1}\slshape
For any prime $p$, we have
$\operatorname{els}(p-1) \ne \operatorname{ols}(p-1)$.
\end{theorem}
This is deduced from Theorem~\ref{thm:when m = p-1}
by looking at the coefficient corresponding to the all-ones matrix
$J_n = (1)_{1 \leq i,j \leq n} \in \Psi(n)$.
Indeed we have the relation
\[
C_{J_n} =
(-)^{n(n-1)/2}
(\operatorname{els}(n) - \operatorname{ols}(n))
\]
as a corollary of the relation
between the ordinary parity and the symbol parity of Latin squares given in \cite{J}.
\begin{remark}
The Alon--Tarsi conjecture is also proved when $n=p+1$ with an odd prime $p$
\cite{D}.
This conjecture is related to various problems including Rota's basis conjecture.
See \cite{FM} for the results related to this conjecture.
\end{remark}
\section{There exists a zero coefficient, when $m \ne p-1$}
Let us prove Theorem~\ref{thm:when m ne p-1}.
This theorem first appeared in the Master's thesis of the second author \cite{S}.
Let $m$ be a natural number which cannot be expressed as $m = p-1$ with a prime $p$.
We can specifically find $L \in \Psi(m)$ satisfying $C_L = 0$ as follows.
When $n=3$,
we consider the following $3$ by $3$ matrix:
\[
L_3(a,b) =
\begin{pmatrix}
ab+b-1 & a & 1 \\
a & ab & b \\
1 & b & ab+a-1
\end{pmatrix}.
\]
Here, $a$ and $b$ are natural numbers satisfying $(a+1)(b+1) = m+1$
(there exist such $a$ and $b$, because $m+1$ is a composite number).
For general $n \geq 3$, we consider the following $n$ by $n$ matrix:
\[
L_n(a,b) =
\begin{pmatrix}
L_3(a,b) & & & \\
& m & & \\
& & \ddots & \\
& & & m
\end{pmatrix}.
\]
Then the coefficient corresponding to this matrix is zero:
\begin{proposition}\slshape
When $n \geq 3$,
we have $C_{L_n(a,b)} = 0$.
\end{proposition}
This proposition follows from the following two lemmas.
Firstly, $C_{L_n(a,b)}$ is expressed as the difference of two multinomial coefficients:
\begin{lemma}\label{lem:difference of two multinomial coefficients}\slshape
We have
\[
C_{L_n(a,b)} =
(-)^{a+b+1} {m \choose ab-1,0,0,a,b,1}
+(-)^{a+b}{m \choose ab,1,1,a-1,b-1,0},
\]
where
\[
{m \choose m_1,m_2,m_3,m_4,m_5,m_6} = \frac{m!}{m_1! m_2! m_3! m_4! m_5! m_6!}.
\]
\end{lemma}
Secondly, these two multinomial coefficients are equal to each other:
\begin{lemma}\label{lem:two multinomial coefficients are equal}\slshape
We have
\[
{m \choose ab-1,0,0,a,b,1}
= {m \choose ab,1,1,a-1,b-1,0}.
\]
\end{lemma}
Lemma~\ref{lem:two multinomial coefficients are equal} follows by a direct calculation:
the ratio of the two denominators is
\[
\frac{(ab)!\,1!\,1!\,(a-1)!\,(b-1)!\,0!}{(ab-1)!\,0!\,0!\,a!\,b!\,1!}
= ab \cdot \frac{1}{a} \cdot \frac{1}{b} = 1,
\]
so the two multinomial coefficients coincide.
Lemma~\ref{lem:difference of two multinomial coefficients} is proved as follows:
\begin{proof}[Proof of Lemma~\ref{lem:difference of two multinomial coefficients}]
First we consider the case of $n=3$.
We put
\begin{align*}
\alpha_1 &= x_{11} x_{22} x_{33}, &
\alpha_2 &= x_{12} x_{23} x_{31}, &
\alpha_3 &= x_{13} x_{21} x_{32}, \\
\beta_1 &= x_{12} x_{21} x_{33}, &
\beta_2 &= x_{11} x_{23} x_{32}, &
\beta_3 &= x_{13} x_{22} x_{31},
\end{align*}
such that
\[
\det X = \alpha_1 + \alpha_2 + \alpha_3 - \beta_1 - \beta_2 - \beta_3,
\]
and $(\det X)^m$ is expanded as follows:
\begin{equation}
(\det X)^m = \sum_{k_1 + k_2 + k_3 + l_1 + l_2 + l_3 = m}
(-)^{l_1+l_2+l_3} {m \choose k_1,k_2,k_3,l_1,l_2,l_3}
\alpha_1^{k_1} \alpha_2^{k_2} \alpha_3^{k_3} \beta_1^{l_1} \beta_2^{l_2} \beta_3^{l_3}.
\label{eq:multinomial expansion}
\end{equation}
Let us determine all $6$-tuples $(k_1,k_2,k_3,l_1,l_2,l_3)$ of nonnegative integers
satisfying the following relation:
\begin{equation}
\alpha_1^{k_1} \alpha_2^{k_2} \alpha_3^{k_3} \beta_1^{l_1} \beta_2^{l_2} \beta_3^{l_3}
= x^{L_3(a,b)}.
\label{eq:x^L_3}
\end{equation}
Since the left hand side can be expressed as
\begin{align*}
\alpha_1^{k_1} \alpha_2^{k_2} \alpha_3^{k_3} \beta_1^{l_1} \beta_2^{l_2} \beta_3^{l_3}
& =
x_{11}^{k_1 + l_2}
x_{12}^{k_2 + l_1}
x_{13}^{k_3 + l_3} \\
& \phantom{{}={}}
x_{21}^{k_3 + l_1}
x_{22}^{k_1 + l_3}
x_{23}^{k_2 + l_2} \\
& \phantom{{}={}}
x_{31}^{k_2 + l_3}
x_{32}^{k_3 + l_2}
x_{33}^{k_1 + l_1},
\end{align*}
this relation is equivalent to the following system of nine linear equations:
\[
\begin{pmatrix}
k_1 + l_2 & k_2 + l_1 & k_3 + l_3 \\
k_3 + l_1 & k_1 + l_3 & k_2 + l_2 \\
k_2 + l_3 & k_3 + l_2 & k_1 + l_1
\end{pmatrix}
= L_3(a,b)
=
\begin{pmatrix}
ab+b-1 & a & 1 \\
a & ab & b \\
1 & b & ab+a-1
\end{pmatrix}.
\]
Solving this, we see that
$(k_1,k_2,k_3,l_1,l_2,l_3) \in \mathbb{Z}_{\geq 0}^6$ satisfying (\ref{eq:x^L_3}) are
\[
(ab-1,0,0,a,b,1), \qquad
(ab,1,1,a-1,b-1,0).
\]
Therefore, comparing (\ref{eq:definition of C_L}) and (\ref{eq:multinomial expansion}),
we have
\[
C_{L_3(a,b)} =
(-)^{a+b+1} {m \choose ab-1,0,0,a,b,1}
+(-)^{(a-1) + (b-1) + 0} {m \choose ab,1,1,a-1,b-1,0},
\]
namely the assertion in the case of $n=3$.
The case of $n > 3$ is almost the same.
Indeed,
to calculate $C_{L_n(a,b)}$,
we need to look at the following relation instead of (\ref{eq:x^L_3}):
\[
\prod_{1\leq i\leq m}x_{1\sigma_i(1)}x_{2\sigma_i(2)}\cdots x_{n\sigma_i(n)} = x^{L_n(a,b)}.
\]
Since $\sigma_1, \ldots, \sigma_m$ satisfying this relation belong to
\[
\{ \sigma \in S_n \,|\, \text{$\sigma(k) = k$ for any $k = 4,5,\ldots,n$}\} \simeq S_3,
\]
the proof is reduced to the case of $n=3$.
\end{proof}
\section{LMXBs with absorption lines}
Low Mass X-ray Binaries (LMXBs) are accreting binary systems in which
mass flows from the companion onto a neutron star (NS) or a black hole (BH).
The secondary is usually a late-type (K or M) main sequence star
of low mass.
When the magnetic field of such a system is negligible, the flowing matter,
carrying angular momentum, can form an accretion disk around the compact object.
According to global disk models, for a mass of the
central object close to 10 $M_{\odot}$ or less, the effective
temperature of the inner radii reaches $10^7$ K (R\'o\.za\'nska et al. 2011).
The temperature at a given radius always increases with
decreasing mass of the central object and with increasing accretion rate.
At such high temperatures of an accretion disk atmosphere, thermal lines
from H-like and He-like iron ions are formed, and
those lines should be visible between 6.7 and 9.2 keV,
where the latter value is the energy of the last iron bound-free transition
(from the H-like ion to complete ionization). Narrow lines created in the
innermost disk region are relativistically smeared, but lines created
far enough out can survive the relativistic motion.
\begin{figure*}[t]
\centering
\psbox[xsize=8cm,rotate=r]{arozanska_fig1.ps}
\psbox[xsize=8cm,rotate=r]{arozanska_fig2.ps}
\caption{Focus on the resonant He-like and H-like iron lines for the fit done
in the 6-9 keV energy range to the {\it Suzaku} data of \mbox{\rm 4U 1630-472}.
Black crosses show the data, black dotted lines represent the
{\sc atm} atmospheric model, while red solid lines
show the total model. Clearly, the complicated
line profiles computed in our model match the observations very well.
The left panel shows the slightly better fit for viewing angle $i=11\pm 5^{\circ}$
with $\chi ^2/d.o.f. = 1.52$,
while the right panel shows the fit for $i=70\pm 6^{\circ}$ with $\chi ^2/d.o.f. = 1.57$.
}
\label{fig:obs}
\end{figure*}
Emission from the accretion disk around a compact object is the
commonly accepted model for the soft X-ray bump observed in
LMXBs.
Nevertheless, it is well known that observed X-ray spectra from
compact binaries do not always exhibit a disk component.
Owing to instrumental limitations and to
spectral state transitions, the multi-temperature disk component
may not be detected over the full X-ray energy range.
To explain the data by accretion disk emission, we have to be sure
that the observations were taken when the source was in the so-called
``soft state'', which is dominated by a disk-like component.
Several recently observed X-ray binaries have exhibited absorption
lines from highly ionized iron (Boirin et al. 2004, Kubota et al. 2007,
Miller et al. 2008, D\'iaz Trigo et al. 2012).
Many of those sources show dips in their light curves, which are believed
to be caused by obscuration of the central X-ray source by a dense material
located at the outer edge of an accretion disk.
Such obscuring material was accumulated during the accretion phase from the
companion star onto the disk (White \& Swank 1982). The presence of dips and
the lack of total X-ray eclipses by the companion star indicate that the
system is viewed relatively close to edge-on, at an inclination angle in
the range $\sim 60-80^{\circ} $ (Frank et al. 1987).
The He-like and H-like Fe absorption features indicate that highly ionized
plasma is present in these systems. Study of these lines is extremely
important for characterizing the geometry and physical properties of plasma.
Recently, it has been shown that the presence of absorption lines is not
necessarily related to the viewing
angle since non-dipping sources also show those features (D\'iaz Trigo et al. 2012).
Moreover, the Fe{\sc xxv} absorption line was also observed during
non-dipping intervals in XB1916-053 (Boirin et al. 2004).
On the other hand, it was shown by Ponti et al. (2012) that the strength of such lines
depends on the spectral state of the X-ray binary.
In this paper, we show that such absorption lines can be well fitted by
a single disk model, where the line profile is computed taking into account all proper
line broadenings (Sec.~\ref{sec:obs}).
There is a growing number of indications that the density of winds in
LMXBs is of the order of $10^{17}$ cm$^{-3}$ (D\'iaz Trigo et al. 2013,
Miller et al. 2013). Additionally, this wind is located within $r=10^{10}$ cm.
Below, we calculate the vertical disk structure (R\'o\.za\'nska et al. 1999) in the way
commonly used for studying accretion disk stability curves (Smak 1983, Hameury \& Lasota 2005).
Assuming that the disk is fully thermalized, we show how the number density at $\tau=2/3$ of
the disk gas depends on the distance from the central object.
We show in Sec.~\ref{sec:mod} that the observed physical conditions of the wind
(Miller et al. 2013) coincide with the
density of an accretion disk atmosphere, assuming a standard geometrically
thin, optically thick disk (Shakura \& Sunyaev 1973).
Conclusions are given in Sec.~\ref{sec:con}.
\section{The case of \mbox{\rm 4U 1630-472}\ }
\label{sec:obs}
Recently, we presented the fitting of complex numerical continuum and line
models to X-ray spectra of \mbox{\rm 4U 1630-472}\ (R\'o\.za\'nska et al. 2014).
In our models, the spectrum of disk emission
was obtained from careful radiative transfer computations including Compton
scattering on free electrons. The Fe line profiles are computed
as the convolution of natural, thermal, and pressure broadening mechanisms.
The advantage of our models
is that the continuum is fitted together with the lines, which has never
been done before in the analysis of X-ray absorption lines seen in LMXBs.
The usual procedure is to fit the disk emission as a standard model
in the XSPEC fitting package together with Gaussian lines, where the energy of the
line centroid is a free parameter of the fit. In such a case, the lines are usually
blue-shifted, indicating that the absorbing matter outflows.
The accretion-disk atmosphere spectra fit the {\it Suzaku} data
for \mbox{\rm 4U 1630-472}\ very well, as presented in Fig.~\ref{fig:obs}.
The best fit was obtained for the inclination angle $i=11^{\circ}$
(left panel).
For higher angles, e.g. $i=70^{\circ}$, the fit is only slightly worse (right panel).
The latter angle is within the range of inclinations suggested in the literature
when taking the dipping behavior into account and assuming absorption
in the wind.
The small difference in fit quality between different inclinations does not
allow us to claim constraints on the inclination angle.
We modeled continuum and line spectra using a single model.
Absorption lines of highly ionized iron can originate
in the upper parts of the static disk atmosphere, which is intrinsically hot
because of the high disk temperature. Iron line profiles computed with
natural, thermal, and pressure broadening match observations very well
(R\'o\.za\'nska et al. 2014).
In this work we do not aim to tightly constrain the parameters of the object,
but rather to show that emission from the accretion disk atmosphere is an
important mechanism that provides an explanation, or at least part
of an answer, to the question of the origin of the iron absorption in
X-ray binaries.
The major conclusion of our analysis is that the disk spectrum
interprets the shape of the \mbox{\rm 4U 1630-472}\ emission well and that the absorption lines do not
require any velocity shift to explain the data. Therefore, the wind explanation
for the absorbing matter is questionable and not unique. We showed that X-ray
data of the current quality can be interpreted in several ways, and we cannot
easily resolve this ambiguity.
\section{Connection of the wind with an accretion disk atmosphere}
\label{sec:mod}
The wind modelling of X-ray absorption has one major difficulty:
we need independent volume density measurement to determine the
wind location according to the standard formula:
\begin{equation}
\xi = {L_{ion} \over n_{0}R^2} ,
\label{eq:ion}
\end{equation}
where $L_{ion}$ is the wind ionizing luminosity, $n_{0}$ is the hydrogen
number density at the wind illuminated surface, and $R$ is the distance of
the absorber from an UV/X-ray source, i.e. inner disk, compact object or
eventual X-ray corona.
Photoionization models are degenerate when we assume a hard X-ray power law
as the spectral energy distribution (SED) of the ionizing source. In such a case,
the transmitted spectrum of a rarefied, more distant cloud looks the same
as that of a dense cloud located close to the center. Therefore, it is not possible to
estimate the volume density from photoionization calculations for a hard
power-law X-ray continuum.
Up to now, there are four independent methods of density diagnostics in the
wind, but each of them works only in a particular range of parameters: i) the variability
method (Krolik \& Kriss 1995), ii) measurement of the ratio of photoexcitation
lines of Fe{\sc xxii} (Mauche et al. 2003),
iii) measurement of the ionic column densities of the
excited metastable states of the low-ionization ions C{\sc ii} and Fe{\sc ii}
(Korista et al. 2008), and
iv) photoionization modelling, valid only if the SED is dominated by a soft UV/soft X-ray
component
(R\'o\.za\'nska et al. 2008).
In the case of winds in LMXBs, where the ionization is very high, the second method
is the most suitable, as was shown recently by Miller et al. (2013) in the case
of the MAXI J1305-704 {\it Chandra} data.
The authors estimated the gas number density to be higher than $10^{17}$ cm$^{-3}$.
Together with the source luminosity
$L=10^{37}$ erg/s and the ionization parameter $\log(\xi)=2.05$, this places the wind within
$R=3.9 \times 10^8$ - $3 \times 10^9$ cm, depending on the fitted model
(see Tab.~5 of Miller et al. 2013).
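For orientation, Eq.~\ref{eq:ion} can be inverted as
$R=\sqrt{L_{ion}/(n_{0}\xi)}$; the short sketch below (our illustration
only, using the numbers quoted above) reproduces the order of magnitude
of this location.
\begin{verbatim}
import math

L_ion = 1.0e37    # erg/s
log_xi = 2.05     # ionization parameter
n0 = 1.0e17       # cm^-3, lower limit from the Fe XXII diagnostic

R = math.sqrt(L_ion / (n0 * 10.0 ** log_xi))
print("R ~ %.2e cm" % R)   # ~9e8 cm, inside the quoted range
\end{verbatim}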
In other sources, where excited levels are not detected,
Eq.~\ref{eq:ion} always gives an upper limit on
the density if we assume that the wind size is comparable to its distance from
the ionizing source. This was done for several objects and summarized in
Fig.~1 of D\'iaz Trigo \& Boirin (2013).
\begin{figure}
\centering
\psbox[xsize=8.5cm]{arozanska_radial.eps}
\caption{Number density at $\tau=2/3$ in accretion disks around
black holes of 10 and 5 solar masses
versus the distance from the black hole. The assumed accretion
rates are 0.007 and 0.01 of the Eddington accretion rate, respectively.
The density was calculated
using the full set of differential equations with the parameters given in the
text. Points mark the physical parameters of the wind found
by Miller et al. (2013) for the MAXI J1305-704 {\it Chandra} data.
We adopted the points from models 1, 2, 3, 4, 5, and 8 fitted to the data
and listed in Tab.~5 of that paper.}
\label{fig:mod}
\end{figure}
In this paper, we calculated the accretion disk vertical structure by
solving the standard differential equations for a 1D fully thermalized
gas (Pojma\'nski 1985, R\'o\.za\'nska et al. 1999). Such calculations are extensively used to
compute the time evolution of accretion disk instabilities, which
explains the outbursts seen in the optical/X-ray data of accreting compact objects
(Smak 1983, Hameury \& Lasota 2005).
For our purpose, we calculated the vertical structure at different distances from
a central black hole of 5 and 10 solar masses with accretion rates of 0.01 and
0.007 in Eddington units, respectively. Such accretion rates are used
in order to reproduce the luminosity
adopted by Miller et al. (2013) for the distance determination.
Additionally, we assume the standard accretion efficiency of 1/16
for a non-rotating black hole, and the viscosity parameter $\alpha=0.1$.
This set of parameters is very basic, and we are aware that a different disk
model can change the results, but up to now the standard disk explains the
observations very well.
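A quick consistency check of these rates (our sketch; we assume the
convention in which $\dot m$ is defined so that $L=\dot m\,L_{Edd}$):
\begin{verbatim}
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass, hydrogen gas

for mass, mdot in [(5.0, 0.01), (10.0, 0.007)]:
    L = mdot * L_EDD_PER_MSUN * mass
    print("M = %4.1f Msun, mdot = %.3f: L ~ %.1e erg/s"
          % (mass, mdot, L))
# both cases give L of the order of 1e37 erg/s, the luminosity
# adopted by Miller et al. (2013)
\end{verbatim}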
At each distance from the black hole we determined the number density of the fully
thermalized gas, i.e. at $\tau=2/3$. In Fig.~\ref{fig:mod} we present the
radial dependence of this density for the two cases of black hole masses and
accretion rates indicated in the upper right corner. The radiative
transfer in those calculations is solved in the diffusion approximation,
where the radiative flux is proportional to the temperature gradient and
inversely proportional to the opacity of the matter. All visible bumps are
due to the opacities, which are taken to be Rosseland means, self-consistently
computed for solar abundances (R\'o\.za\'nska et al. 1999).
Additionally, several points measured by Miller et al. (2013) are plotted in the
figure. We have taken the models numbered 1, 2, 3, 4, 5, and 8 from their paper, listed
in Tab.~5. Those points indicate the physical conditions of the wind
in MAXI J1305-704 determined by
fitting a photoionized XSTAR model to the data.
The fitting is done at the level where the ionic column
densities of the photoexcited Fe{\sc xxii} lines computed in the model are compared with
those observed by the {\it Chandra} X-ray telescope.
One can see a strong connection between the physical parameters of the wind
and those of the accretion disk atmosphere. This fact may suggest that the
wind consists of the same material as the upper, thermalized disk atmosphere.
\section{Discussion}
\label{sec:con}
The high density of the absorbing matter in LMXBs was first suggested by
Frank et al. (1987), who proposed a two-phase
medium to explain the X-ray observations of such sources.
The authors concluded that the absorbing material has a
number density of the order of $10^{16}$ cm$^{-3}$ for the cold phase and
$10^{13}$ cm$^{-3}$ for the hot phase, and that it is located
within a radius of 10$^{10}$ cm.
Up to now, there are only a few sources where the wind number density was
determined from careful spectral fitting (for example Miller et al. 2013),
but all of them show high values of the wind density, around 10$^{16-17}$ cm$^{-3}$.
Such densities are present in the upper, thermalized disk atmospheres,
at exactly the same distance from the black hole as the wind location derived from
observations. This is a strong argument that the wind can originate from the
upper parts of the disk slabs, and the only remaining question is how the
wind is launched.
Several mechanisms have been proposed: a thermal wind, a radiation pressure driven
wind, or a magnetic wind, but none of these processes can be self-consistently
computed with the present computing power.
Additionally, to obtain a self-consistent model of radiation pressure
winds it is critical to include a more detailed
treatment of radiative transfer and ionization in
the next generation of hydrodynamic simulations.
The second argument showing the connection of the wind with accretion disk
atmospheres is our analysis of the \mbox{\rm 4U 1630-472}\ {\it Suzaku} data. All fitted models were
computed for high density atmospheres, and the line profiles
agree with the observations.
In wind theory, it is widely accepted that the wind can be launched
at the accretion disk surface. The upper layers of the atmosphere can become
unstable and start to blow material out owing to the radiation pressure.
Our analysis does not contradict this fact, but instead shows that absorption
in the upper atmospheric layers cannot be distinguished from the absorption
in the wind, which may be launched in the same region.
Data with higher spectral resolution are needed to distinguish between those two
models. Future satellites with calorimeters, such
as {\it ASTRO-H} or {\it Athena+},
will yield the answer.
\vspace{1pc}
\section*{Acknowledgements}
This research was supported by Polish National Science
Center grant No. 2011/03/B/ST9/03281. It received funding
from the European Union Seventh Framework Program (FP7/2007-2013)
under grant agreement No.312789.
\section*{References}
\re
Boirin, L., Parmar, A.N., et al. 2004, A\&A, 418, 1061
\re
D\'iaz Trigo M., Sidoli L., et al. 2012, A\&A, 543, A50
\re
D\'iaz Trigo M., Boirin L., et al. 2013, Acta Polytechnica, 53, 659
\re
Frank J., King A.R., Lasota J.-P., 1987, A\&A, 178,137
\re
Hameury J.-M., Lasota J.-P., 2005, A\&A, 443, 283
\re
Korista K.T., Bautista M.A., et al. 2008, ApJ, 688, 108
\re
Krolik J.H., Kriss G.A., 1995, ApJ, 447, 512
\re
Kubota A., Dotani T., et al. 2007, PASJ, 59, 185
\re
Mauche C.W., Liedahl D.A., et al. 2003, ApJ, 588, L101
\re
Miller J.M., Raymond J., et al. 2008, ApJ, 680, 1359
\re
Miller J.M., Raymond J., et al. 2013, arXiv:1306.2915v1
\re
Pojma\'nski G., 1985, Acta Astronomica, 36, 69
\re
Ponti G., Fender R.P. et al. 2012, MNRAS, 422, L11
\re
R\'o\.za\'nska A., Czerny B., et al. 1999, MNRAS, 305, 481
\re
R\'o\.za\'nska A., Kowalska I., et al. 2008, A\&A, 487, 895
\re
R\'o\.za\'nska A., Madej J., et al. 2011, A\&A, 527, A47
\re
R\'o\.za\'nska A., Madej, J., et al. 2014 A\&A, 562, A81
\re
Shakura N.I., Sunyaev R.A., 1973, A\&A, 24, 337
\re
Smak J., 1983, Acta Astron., 33, 333
\re
White N.E., Swank J.H., 1982, ApJ, 253, L61
\label{last}
\end{document}
\section{Introduction}
It is an important open question whether a notion of locality can be
maintained in fundamental theories involving gravity such as e.g.
string theory. This problem appears to be closely related to the question
whether (pseudo-) Riemannian geometry still makes sense down to arbitrarily
small distances.
Arguments have been put forward that indicate the impossibility of
measuring arbitrarily small distances when combining quantum theory and general
relativity, see e.g. \cite{DFR}\cite{FGR}. If Riemannian geometry can not
be measured then it becomes questionable whether it is a useful concept
for the description of physical situations in which
space-time uncertainties are relevant.
The status of locality in string theory is unclear due to the absence of
a fundamental, background-independent formulation. There are hints indicating
that strings cannot probe arbitrarily small distances due to quantum
fluctuations of their shape, which are related to the fact that
effective actions for the fields corresponding to the low energy states
of the string are generically\footnote{if $l_s/l_R$ is not
very small, where $l_s$ is the string length and $l_R$ denotes
the characteristic curvature radius} highly nonlocal. More recent developments
have exhibited on the one hand D-branes as probes that sometimes are
able to resolve smaller distance scales than strings
\cite{DKPS}\footnote{The situation for $l_s/l_R\sim \CO(1)$
is not clear to the author. It has been proposed in \cite{D} that
D0-branes behave like {\it point}-particles in some regime with
$l_s/l_R\sim \CO(1)$ so that the metric probed by these objects would
define the pseudo-Riemannian structure of a background. On the other hand
it appears to the author that the description of D-brane
effective actions in terms of noncommutative geometry that was found in
some specific backgrounds (reviewed in \cite{D}) should be generic rather
than exceptional.},
but have on the other hand discovered new
hints towards fundamental nonlocalities from non-perturbative dualities
such as the dualities between strings on Anti-de Sitter (AdS) spaces and
conformal field theories conjectured by
Maldacena (see e.g. \cite{BDHM}).
Having available more general frameworks for the formulation of field
theories that take into account fundamental space-time uncertainties or
nonlocal effects may be an important ingredient for the further
development of gravitational or string theories. One rather general framework
that has been proposed in this context is given by the noncommutative
geometry \cite{Co}, where non-commutativity of the operator algebra supposed
to describe position measurements prevents localization to arbitrarily small
distances. One thereby naturally obtains space-time uncertainties as
e.g. in the explicit example proposed in \cite{DFR}.
There is a generalization of the concept of
geodesic distance for noncommutative
spaces \cite{Co}\cite{FG}, which is based on the observation that the
geodesic distance on a Riemannian manifold ${\mathcal M}$ can be reconstructed from the
{\it algebraic} data $({\mathcal A},{\mathcal H},\Delta)$, where ${\mathcal A}$ is the algebra of (say)
smooth, bounded functions on ${\mathcal M}$, ${\mathcal H}$ may be taken as the space
$L^2({\mathcal M},dv)$ on which ${\mathcal A}$ acts as multiplication operators and
$\Delta$ is the Laplacian on ${\mathcal M}$ considered as a self-adjoint operator on ${\mathcal H}$.
Physically this simply means that the geodesic distance can be reconstructed
from the quantum mechanics of point particles on ${\mathcal M}$. This observation
leads to a natural definition of geodesic distance for more general
choices of data $({\mathcal A},{\mathcal H},\Delta)$, in which the algebra ${\mathcal A}$ may be
noncommutative. A possibility that does not seem to have attracted much
attention is the case in which ${\mathcal A}$ is still commutative, but
$\Delta$ is replaced by some other, maybe nonlocal, self-adjoint operator on
${\mathcal H}$.
Unfortunately, an
analogous generalization in the case of Minkowskian signature metrics
does not seem to exist presently (to the author's knowledge).
Physically one might guess that reconstructing the Riemannian metric
from the quantum mechanics on ${\mathcal M}$ should be replaced by a study of
propagation of wave-packets according to the covariant wave equation.
This suggests that it should ultimately be possible to define
generalizations of pseudo-Riemannian structures from data such as
$({\mathcal A},{\mathcal H},\square)$,
where now $\square$ is some generalization of the covariant wave operator.
It is in this spirit that deformations of the covariant wave equation
will be considered as defining a deformed pseudo-Riemannian geometry.
Alternatively, one may simply take the point of view that nonlocal
deformations of the covariant wave equation
are one way to parametrize certain modifications
of the propagation of fields on a manifold due to
gravitational/stringy nonlocal effects.
The present paper will describe an example of
a nonlocal deformation of the wave equation in the geometry of a
two-dimensional black hole. In this example the generalization
appears exclusively in the nonlocality of the deformed covariant wave operator,
the underlying algebra of functions is {\it commutative}.
However, the specific deformation studied has its roots in a
{\it noncommutative} quantum
deformation of the algebra of functions on $SL(2,{\mathbb R})$
and was found by generalizing the
relation between $SL(2,{\mathbb R})$ and the two-dimensional black hole that exists
classically (e.g. \cite{Wi}\cite{DVV}), as will be further discussed
elsewhere. By similar constructions one can alternatively obtain
{\it noncommutative} deformations of
Euclidean/Minkowskian $AdS_3$, the BTZ-black hole, and the
two-dimensional euclidean black hole.
The paper starts with a presentation of some results on wave propagation
and scattering in the case of the classical two-dimensional black hole.
Some important results had been obtained in \cite{DVV}, but for the
purpose of comparison with the deformed case it appeared to be necessary
to complete (e.g. by the solution of the Cauchy problem)
and generalize (to arbitrary mass) the discussion therein.
The following section then carries out a study of the deformed case along
similar lines as in the classical case. In order to focus on the
physically interesting aspects and to keep the discussion brief, the basic
technical results, being analogous to the classical case, are only announced.
One should note however that their proof requires methods rather different
from the well-known techniques that can be used in the classical case, so a
full account of the technicalities will be given elsewhere.
\section{The two-dimensional black hole}
A two-dimensional analogue of the black hole
\footnote{See e.g. \cite{Wi}\cite{DVV} for more extensive discussions}
can be found as solution of the
equations of motion for the dilaton gravity theory defined by the action
\begin{equation}
S=\int d^2x \sqrt{G} e^{\Phi}\Bigl(R+(\nabla\Phi)^2-8\Bigr).
\end{equation}
It is given by the following expressions for metric and dilaton field:
\begin{equation}\label{metric}
ds^2=\frac{dudv}{1-uv}, \qquad \Phi=\log(1-uv).
\end{equation}
One should note that the metric has all the characteristic
features of a black hole:
There is a horizon at $uv=0$ and a curvature singularity at $uv=1$. The
following figure shows the Penrose diagram of the fully extended geometry
supplemented by regions behind the singularities (regions III and VI):
\begin{center}
\setlength{\unitlength}{0.0005in}
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
{\renewcommand{\dashlinestretch}{30}
\begin{picture}(4824,4839)(0,-10)
\put(4362,1662){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}-}}}}}
\path(1062,3762)(1437,4137)
\path(1373.360,4030.934)(1437.000,4137.000)(1330.934,4073.360)
\path(3762,3762)(3387,4137)
\path(3493.066,4073.360)(3387.000,4137.000)(3450.640,4030.934)
\path(1212,3612)(2412,4812)(3612,3612)
(4812,2412)(3612,1212)
\dottedline{45}(1212,3612)(12,2412)(1212,1212)
(2412,12)(3612,1212)
\dottedline{45}(1212,1212)(2412,2412)
\path(2412,2412)(3612,3612)
\path(1212,3612)(1215,3611)(1221,3608)
(1231,3603)(1247,3595)(1269,3584)
(1296,3571)(1327,3556)(1362,3539)
(1399,3521)(1436,3503)(1473,3486)
(1510,3468)(1545,3452)(1578,3436)
(1610,3422)(1639,3409)(1667,3396)
(1693,3385)(1718,3374)(1742,3364)
(1766,3355)(1789,3346)(1812,3337)
(1835,3329)(1858,3320)(1882,3312)
(1905,3304)(1930,3296)(1955,3289)
(1980,3281)(2006,3274)(2032,3267)
(2058,3260)(2085,3253)(2112,3247)
(2139,3242)(2166,3237)(2192,3232)
(2218,3228)(2244,3224)(2270,3221)
(2294,3218)(2319,3216)(2342,3214)
(2366,3213)(2389,3212)(2412,3212)
(2435,3212)(2458,3213)(2482,3214)
(2505,3216)(2530,3218)(2555,3221)
(2580,3224)(2606,3228)(2632,3232)
(2658,3237)(2685,3242)(2712,3247)
(2739,3253)(2766,3260)(2792,3267)
(2818,3274)(2844,3281)(2870,3289)
(2894,3296)(2919,3304)(2942,3312)
(2966,3320)(2989,3329)(3012,3337)
(3035,3346)(3058,3355)(3082,3364)
(3106,3374)(3131,3385)(3157,3396)
(3185,3409)(3214,3422)(3246,3436)
(3279,3452)(3314,3468)(3351,3486)
(3388,3503)(3425,3521)(3462,3539)
(3497,3556)(3528,3571)(3555,3584)
(3577,3595)(3593,3603)(3603,3608)
(3609,3611)(3612,3612)
\dottedline{450}(1212,1212)(1231,1221)(1296,1253)(1399,1303)(1510,1356)
(1610,1402)(1693,1439)(1766,1469)(1835,1495)(1905,1520)(1980,1543)(2058,1564)
(2139,1582)(2218,1596)(2294,1606)(2366,1611)(2435,1612)(2505,1608)(2580,1600)
(2658,1587)(2739,1571)(2818,1550)(2894,1528)(2966,1504)(3035,1478)(3106,1450)
(3185,1415)(3279,1372)(3388,1321)(3497,1268)(3577,1229)(3609,1213)
\put(3612,2337){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\familydefault}{\mddefault}{\updefault}I}}}}}
\put(2337,3537){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\familydefault}{\mddefault}{\updefault}III}}}}}
\put(1137,2337){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\familydefault}{\mddefault}{\updefault}IV}}}}}
\put(2337,1137){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\familydefault}{\mddefault}{\updefault}VI}}}}}
\put(2337,1887){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\familydefault}{\mddefault}{\updefault}V}}}}}
\put(912,3612){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\rmdefault}{\mddefault}{\itdefault}u}}}}}
\put(3837,3612){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\rmdefault}{\mddefault}{\itdefault}v}}}}}
\put(2337,2787){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\familydefault}{\mddefault}{\updefault}II}}}}}
\put(3087,1887){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\rmdefault}{\mddefault}{\updefault}${\mathcal H}^-$}}}}}
\put(3087,2787){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\rmdefault}{\mddefault}{\updefault}${\mathcal H}^+$}}}}}
\put(4287,3087){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\rmdefault}{\mddefault}{\updefault}${\mathcal I}^+$}}}}}
\put(4287,1587){\makebox(0,0)[lb]{\smash{{{\SetFigFont{8}{14.4}{\rmdefault}{\mddefault}{\updefault}${\mathcal I}^-$}}}}}
\path(1212,3612)(3612,1212)
\end{picture}
}\end{center}
In the present paper only regions I, II and III will be considered, which is
why the rest is represented by dotted lines. One may consider the metric
in regions I and II as an idealization of a black hole formed by gravitational
collapse if one imposes the boundary condition that there is no flux of
matter from regions IV and V into II and I respectively, cf. \cite{Un}.
The corresponding wave operator reads
\begin{equation}\label{waveop}
\begin{aligned}
\square=&-\frac{1}{2e^{\Phi}\sqrt{-g}}
\partial_{\mu}e^{\Phi}g^{\mu\nu}\sqrt{-g}\partial_{\nu} \\
=& -\frac{1}{2}\bigl(\partial_u(1-uv)\partial_v+\partial_v(1-uv)\partial_u\bigr).
\end{aligned}\end{equation}
\subsection{Solutions to the wave equation in region I}
The wave equation to be solved reads
\begin{equation}\label{waveeq}
\square f=\bigl(m^2-\fr{1}{4}\bigr)f.
\end{equation}
Splitting off $\frac{1}{4}$ on the right hand side is necessary for the mass
$m$ to be the mass as defined by an asymptotic observer.
In order to solve the wave-equation \rf{waveeq} it is useful to
introduce variables $r=\log(-uv)$, $t=\log(-u/v)$ in which
the covariant wave operator
$\square$ takes the form
\begin{equation}\label{dalem}
\square=e^{-r}\partial_r^{}(1+e^{r})\partial_r^{} - (e^{-r}+1)\partial_t^2.
\end{equation}
The wave operator can furthermore be brought into the form of a
one-dimensional Schr\"{o}dinger operator by considering the field $g=
e^{\Phi/2}f=\sqrt{1+e^r}\,f$ instead of $f$. The wave equation for
$g$ then reads
\[ \bigl(\partial_t^2+\Delta'\bigr)g=0,\qquad
\Delta'=-\partial_r^2 + V(r),\qquad V(r)=\frac{e^r}{4(1+e^r)^2}+m^2
\frac{e^r}{1+e^r}.
\]
Any solution which at some fixed time $t$ allows expansion of $g(r,t)$ and
$\dot{g}(r,t)\equiv \partial_t g(r,t)$ into generalized
eigenfunctions of the Schr\"{o}dinger operator $\Delta'$
can then be written in the form
\[
g(r,t)=\int d\mu(\omega) \;\; \bigl(e^{-i\omega t} W_{\omega}(r)+
e^{+i\omega t} \bar{W}_{\omega}(r)\bigr) \qquad\text{with}\qquad
\Delta' W_{\omega}(r) = \omega^2 W_{\omega}(r).
\]
The eigenvalue equation for $W_{\omega}(r)$ is brought into
the form of a hypergeometric differential equation by $W_{\omega}(r)=(-x)^{-i\omega}
(1-x)^{1/2}F_{\omega}(x)$, $x=-e^r$.
One has two linearly independent solutions:
\[
U_{k}(r)= N_{k} e^{-i\omega r}(1+e^r)^{\frac{1}{2}}
F\bigl(\fr{1}{2}+i(k-\omega),\fr{1}{2}-i(k+\omega),1-2i\omega,-e^r)
\]
and its complex conjugate $V_k(r)\equiv\bar{U}_k(r)$,
where $k$ is fixed by the mass-shell condition
$\omega^2-k^2=m^2$, and $N_k$ is a normalization factor to be fixed below.
The asymptotic behavior for $r\rightarrow \infty$ corresponding to large spacelike
distance from the black hole is given by plane waves:
\begin{equation}\label{asym}
U_k(r) \sim N_k
\bigl(B_{+}(k)e^{-ikr}+B_{-}(k)e^{ikr}\bigr),\qquad
B_{\pm}(k)=\frac{\Gamma(1-2i\omega)\Gamma(\mp 2ik)}{\Gamma^2\bigl(\frac{1}{2}-i(\omega
\pm k)\bigr)}.
\end{equation}
It can then be checked that $\Delta'$ is essentially
self-adjoint in $L^2({\mathbb R})$:
There are no square-integrable eigenfunctions of $\Delta'$, so the deficiency
indices vanish. It can furthermore be shown that
the set of generalized eigenfunctions
$\{U_k(r); k\in{\mathbb R}_+\}\cup\{V_k(r); k\in{\mathbb R}_+\}$ constitutes
a plane wave basis for $L^2({\mathbb R})$. The normalization $N_k$ is finally
given by
\[
\int_{{\mathbb R}}dr \;\;\bar{U}_{k_2}(r) U_{k_1}(r)=2\pi\delta(k_2-k_1),
\qquad
\int_{{\mathbb R}}dr \;\;\bar{U}_{k_2}(r) V_{k_1}(r) = 0,
\]
if the normalization $N_{k}$ is chosen as
\[
N_{k}=\frac{1}{B_{+}(k)}=\frac{\Gamma^2
\bigl(\frac{1}{2}-i(k+\omega)\bigr)}{\Gamma(1-2i\omega)\Gamma(-2ik)},
\]
corresponding to normalizing the ``incoming'' plane wave in \rf{asym}
to unity.
This yields existence and uniqueness
of a solution to the Cauchy-problem for suitable
subspaces ${\mathcal S} \subset L^2({\mathbb R})$ of test-functions, where ${\mathcal S}$ could
for example be the usual Schwartz subspace of $L^2({\mathbb R})$. It takes the form
\begin{equation}\label{exp}
g(r,t)=
\int_0^{\infty}dk\Bigl( e^{-i\omega t}\bigl(
a_kU_{k}(r)+b_k V_{k}(r)\bigr)
+ e^{i\omega t}\bigl(
\bar{a}_k\bar{U}_{k}(r)+\bar{b}_k \bar{V}_{k}(r)\bigr)\Bigr),
\end{equation}
where the coefficients are given
in terms of the values $g(r,t)$, $\dot{g}(r,t)$ at fixed
time $t$ via
\begin{equation}\label{inifourier}
\begin{aligned}
a_k=a_k[g,\dot{g}]=&
\frac{1}{4\pi}\int_{{\mathbb R}}dr \;\;e^{i\omega t}\bar{U}_{k}(r)\Bigl(g(r,t)+
\frac{i}{\omega}\dot{g}(r,t)\Bigr) \\[1ex]
b_k=b_k[g,\dot{g}]=&
\frac{1}{4\pi}\int_{{\mathbb R}}dr \;\;e^{i\omega t}\bar{V}_{k}(r)\Bigl(g(r,t)-
\frac{i}{\omega}\dot{g}(r,t)\Bigr).
\end{aligned}
\end{equation}
In view of a similar phenomenon that will be found in the deformed case
it may be worthwhile noting that expansion into eigenfunction of $\Delta'$
is possible for considerably more general choices of
subspaces ${\mathcal T}$, ${\mathcal S}\subset {\mathcal T} \subset L^2({\mathbb R})$.\footnote{See the first two
sections of \cite{Be} for a lucid discussion of the conditions ${\mathcal T}$ has to
satisfy in order to allow expansion of {\it any} $g\in{\mathcal T}$ into
generalized eigenfunctions.} Moreover, the expression \rf{exp}
still makes sense for $a_k=a_k[g,\dot{g}]$, $b_k=b_k[g,\dot{g}]$ corresponding
to initial data $g$, $\dot{g}$ in ${\mathcal T}$. But the resulting
function $g(r,t)$ will then generically not be differentiable; it can
therefore be considered as a solution of the wave-equation only in
the distributional sense. This
phenomenon is familiar from the simple case $\partial_t^2f=\partial_x^2f$:
One may consider $f(x-t)$ to be a distributional solution of
$\partial_t^2f=\partial_x^2f$ even if $f$ is not differentiable.
\subsection{Scattering in the black hole geometry}
By the method of stationary phase it is possible to
show that for $t\rightarrow -\infty$
any solution \rf{exp} is asymptotic to a function $g^{\text{\tiny as}}(r,t)$
in the sense that
\[
\lim_{t\rightarrow -\infty}\int_{{\mathbb R}}dr\;\;\lvert g(r,t)-g^{\text{\tiny as}}(r,t)\rvert^2 =0.
\]
The function $g^{\text{\tiny as}}(r,t)$ is expressed in terms of $a_k$, $b_k$ as follows
\[
g^{\text{\tiny as}}=g^{\text{\tiny as}}_1+g^{\text{\tiny as}}_2\qquad\begin{aligned}
g^{\text{\tiny as}}_1(r,t)=&\int_{0}^{\infty}dk \;\bigl(e^{-i\omega(t-r)}b_k \bar{N}_k
+ e^{i\omega(t-r)}\bar{b}_k N_k\bigr)\\[1ex]
g^{\text{\tiny as}}_2(r,t)=&\int_{0}^{\infty}dk \;\bigl(e^{-i(\omega t+k r)}
(a_k +b_k)+ e^{i(\omega t+k r)}
(\bar{a}_k +\bar{b}_k)\bigr).
\end{aligned}\]
The functions $g^{\text{\tiny as}}_1$ ($g^{\text{\tiny as}}_2$)
describe right- (left-) moving wave-packets
coming in from ${\mathcal H}_{-}$ (${\mathcal I}_{-}$).
However, the right-moving
plane waves at ${\mathcal H}_{-}$ represent an inflow from region $V$
into region $I$. In order to be consistent with the interpretation of
regions I/II as being an idealization of a black hole formed by
gravitational collapse, it is necessary to impose the boundary condition
of vanishing $g^{\text{\tiny as}}_1$ corresponding to $b_k\equiv 0$.
The scattering problem for a wave-packet with asymptotic form $g^{\text{\tiny in}}$
consists therefore in determining the late-time
asymptotics $g^{\text{\tiny out}}(r,t)$ defined by
\[
\lim_{t\rightarrow \infty} \int_{{\mathbb R}}dr\;\;
\lvert g(r,t)-g^{\text{\tiny out}}(r,t)\rvert^2 =0.
\]
for $g(r,t)$ subject to the boundary condition $b_k=0$. It is given by
$g^{\text{\tiny out}}(r,t)=g^{\text{\tiny out}}_1(r,t)+g^{\text{\tiny out}}_2(r,t)$:
\[ \begin{aligned}
g^{\text{\tiny out}}_1(r,t)=& \int_{m}^{\infty}d\omega\;\;
\bigl(e^{-i\omega(t+r)}T_ka_k + e^{i\omega(t+r)}\bar{T}_k\bar{a}_k \bigr)\\[1ex]
g^{\text{\tiny out}}_2(r,t)=&\int_{0}^{\infty}dk
\;\bigl( e^{-i(\omega t+k r)}R_ka_k+e^{i(\omega t+k r)}\bar{R}_k\bar{a}_k\bigr),
\end{aligned}
\]
where $g^{\text{\tiny out}}_1(r,t)$ describes the matter that falls through the future
horizon ${\mathcal H}_+$,
whereas $g^{\text{\tiny out}}_2(r,t)$ represents the part that escapes towards
space-like infinity. The corresponding
``transmission'' amplitude $T_k$ and ``reflection'' amplitude $R_k$ are
respectively given by
\[
T_k= \frac{\omega}{k}N_k= \frac{\Gamma^2
\bigl(\frac{1}{2}-i(k+\omega)\bigr)}{\Gamma(-2i\omega)\Gamma(1-2ik)}\qquad
R_k= \frac{N_k}{N_{-k}}=\frac{\Gamma^2
\bigl(\frac{1}{2}-i(k+\omega)\bigr)}
{\Gamma^2\bigl(\frac{1}{2}+i(k-\omega)\bigr)}
\frac{\Gamma(+2ik)}{\Gamma(-2ik)}
\]
Note that information is conserved: $|T_k|^2+|R_k|^2=1$.
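For instance, with the elementary identities $\bigl|\Gamma\bigl(\frac{1}{2}+iy\bigr)\bigr|^2=\pi/\cosh(\pi y)$ for real $y$ and $|\Gamma(2ik)/\Gamma(-2ik)|=1$, the reflection probability takes the closed form
\[
|R_k|^2=\frac{\cosh^2\bigl(\pi(k-\omega)\bigr)}{\cosh^2\bigl(\pi(k+\omega)\bigr)}\,,
\]
which tends to unity for $k\rightarrow 0$, $\omega\rightarrow m$: low-frequency wave-packets are almost completely reflected.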
\subsection{Continuation into region II}
The continuation of the wave-packet \rf{exp} into region
II is trivial when expressing the modes $e^{-i\omega t}U_{k}(r)$
in terms of $u$, $v$-coordinates:
\begin{equation}\label{contII}
e^{-i\omega t}U_{k}(r)\equiv {\mathcal U}_{k}(u,v)=u^{-2i\omega}F\bigl(
\fr{1}{2}+i(k-\omega),\fr{1}{2}-i(k+\omega),1-2i\omega,uv\bigr),
\end{equation}
where ${\mathcal U}_{k}(u,v)$ is analytic in $v$ near the horizon $v=0$. The expression
\[
f(u,v)=\int_m^{\infty}d\omega \;\;\bigl({\mathcal U}_{k}(u,v)T_{k}a_k +
\bar{{\mathcal U}}_{k}(u,v)\bar{T}_{k}\bar{a}_k\bigr)
\]
therefore defines a wave-packet $f$ in the union of regions I and II. This
continuation becomes unique by imposing the condition of vanishing
on the boundary between regions IV and II, which is motivated by arguments
analogous to those behind the vanishing condition on ${\mathcal H}_-$.
However, the singularity at $uv=1$ prevents further continuation into region
III. Technically this follows from the fact that the modes
${\mathcal U}_{k}(u,v)$ develop a singularity of the form $\log(1-uv)$.
Predictability of the evolution of the wave-packet $f$ breaks down at
the singularity
since there is no unique way of defining $\log(x)$ for negative $x$.
\section{Propagation and scattering in the deformed black hole}
The deformation of the covariant wave-equation that will be considered
takes the form
\begin{equation}\label{q-box}
\square_h f=\bigl\{ m^2-\fr{1}{4}\bigr\}_h^{} f, \qquad
\bigl\{ m^2-\fr{1}{4}\bigr\}_h^{}\equiv
\biggl(\frac{\sinh^2(\pi h m)}{ \sin^2(\pi h)}
-\frac{\sin^2(\frac{\pi h}{2})}{\sin^2(\pi h)}
\biggr) ,
\end{equation}
where $h\in(0,1)$ is the deformation parameter and
the differential operator $\square$ that appeared in the classical
case \rf{dalem} has been replaced by the following {\it finite
difference} operator:
\[ \square_{h}=
e^{-r}D_r^{}(1+e^{r})D_r^{} - (e^{-r}+1)D_t^2\qquad
\begin{aligned}
D_r \equiv &
\frac{\delta_r^{+}-\delta_r^{-}}{2i\sin(\pi h)},\qquad
\delta_r^{\pm}f(r,t)=f(r\pm\pi i h,t),\\
D_t \equiv &
\frac{\delta_t^{+}-\delta_t^{-}}{2i\sin(\pi h)},
\qquad \delta_t^{\pm}f(r,t)=f(r,t\pm\pi i h).
\end{aligned}\]
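To make the action of these operators concrete, note that plane waves are eigenfunctions of $D_r$: since $\delta_r^{\pm}e^{ikr}=e^{\mp\pi h k}e^{ikr}$, one finds
\[
D_r^{}\,e^{ikr}=i\,\frac{\sinh(\pi h k)}{\sin(\pi h)}\,e^{ikr}\equiv i[k]_h^{}\,e^{ikr},
\]
where the deformed number $[k]_h$ will reappear in the mass-shell relation below and reduces to $k$ for $h\rightarrow 0$.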
One obviously recovers the classical
counterparts in the limit $h\rightarrow 0$. What appears to be unusual about this
kind of deformation is the appearance of {\it imaginary} shifts of the
arguments $r,t$. In order for operators such as $\square_{h}$ to be
defined as operators on spaces of functions of {\it real}
variables $r,t$ one needs an unambiguous prescription to extend the
functions defined for real values of the arguments to the relevant strips
in the complex plane. The most natural such extension seems to be given
by requiring analyticity in the strip
\[
{\mathcal S}=\bigl\{ (r,t)\in{\mathbb C}^2 ; |\Im(r)|<2\pi h, \;\; |\Im(t)|<2\pi h \bigr\},
\]
and existence of the limits
\[
f(r\pm 2\pi ih,t)=\lim_{\epsilon\rightarrow 0} f(r\pm 2\pi ih\mp i\epsilon,t), \qquad
f(r,t\pm 2\pi ih)=\lim_{\epsilon\rightarrow 0} f(r,t\pm 2\pi ih\mp i\epsilon), \qquad \epsilon>0
\]
for almost any $r,t\in{\mathbb R}$.
This choice can alternatively be justified by considering the reduction from
the quantized space $SL_q(2,{\mathbb R})$.
\subsection{Preliminaries}
It is useful to write \rf{q-box} in the form
\[
(D_t^2+\Delta_h)f=0,\qquad \Delta_h=-\frac{1}{1+e^r}D_r(1+e^r)D_r +
\bigl\{ m^2-\fr{1}{4}\bigr\}_h^{}
\frac{e^r}{1+e^r},
\]
or in terms of $g=(1+e^r)^{\frac{1}{2}}f$
\[
(D_t^2+\Delta'_h)g=0,\qquad \Delta'_h=-\frac{1}{\sqrt{(1+e^r)}}D_r(1+e^r)
D_r\frac{1}{\sqrt{(1+e^r)}} + \bigl\{ m^2-\fr{1}{4}\bigr\}_h^{}
\frac{e^r}{1+e^r}.
\]
As in the classical case one may try to construct a representation for the
general solution by means of an eigenfunction expansion for
$\Delta_h$:\footnote{In this case it is technically more convenient to
consider $\Delta_h$ instead of $\Delta_h'$.} If
\[
f(r,t)=\int d\mu(\omega) a_{\omega}(t) W_{\omega}(r) \qquad\text{with}\qquad
\Delta_h W_{\omega}(r) = [\omega]_h^2 W_{\omega}(r), \qquad
[z]_h\equiv \frac{\sinh(\pi h z)}{\sin(\pi h)} ,
\]
then $a_{\omega}(t)$ will be determined as a solution of
$D_t^2a_{\omega}(t)=-[\omega]_h^2a_{\omega}(t) $. The most general
solution of the latter equation that has the required analyticity
is given by
\[
a_{\omega}(t) = \sum_{m\in{\mathbb Z}}
\bigl(A_m e^{-i(\omega+i\kappa m)t} + B_m e^{i(\omega+i\kappa m)t}\bigr).
\]
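To see why the imaginary shifts of the frequency occur, note that $D_t^{}e^{-i\lambda t}=-i[\lambda]_h^{}e^{-i\lambda t}$, in analogy to the plane waves considered above, and that for $\kappa=h^{-1}$
\[
[\omega+i\kappa m]_h^{}=\frac{\sinh(\pi h\omega+\pi i m)}{\sin(\pi h)}=(-1)^m\,[\omega]_h^{},
\]
so that all shifted frequencies $\omega+i\kappa m$, $m\in{\mathbb Z}$, yield the same eigenvalue $-[\omega]_h^2$ of $D_t^2$.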
Note that the contributions from $m\neq 0$ blow up for $t\rightarrow\infty$
or $t\rightarrow -\infty$. They correspond to the ``runaway''-solutions that
typically cause problems for higher-derivative equations of motion. However,
in the present case there is no problem in simply discarding the
``badly-behaved'' solutions with $m\neq 0$. Together with the
eigenfunction decomposition of $\Delta_h$ one will thereby obtain a perfectly
well-defined initial value problem:
\subsection{Eigenfunction expansion for $\Delta_h$}
$\Delta_h$ will be considered as an operator defined on the
dense domain ${\mathcal D}\subset L^2({\mathbb R},d\lambda(r))$, $d\lambda(r)=dr(1+e^r)$
which consists of functions
that allow extension to a function
holomorphic in the strip $\{ r\in{\mathbb C}; |\Im(r)|< 2\pi h\}$
and satisfy
\[
\sup_{|\eta|<2\pi h}\int_{{\mathbb R}}d\lambda(r)\;\;|f(r+i\eta)|^2
<\infty.
\]
\begin{thm}
The operator $\Delta_h$ is essentially self-adjoint.
There exists an expansion into generalized eigenfunctions of $\Delta_h$.
\end{thm}
\begin{thm}
The following set ${\mathfrak B}$ constitutes a basis of generalized eigenfunctions
for $\Delta_h$:
\[
{\mathfrak B}=\bigl\{ U_{h,k}; k\in {\mathbb R}_+\bigr\}\cup \bigl
\{ V_{h,k}; k\in {\mathbb R}_+\bigr\},
\]
where the eigenvalue $\omega^2$ is given in terms of the parameter $k$
by the {\it h-mass-shell} relation
\begin{equation}\label{h-mass}
[\omega]_h^2-[k]_h^2=[m]_h^2
\end{equation}
and $U_{h,k}(r)$ is given in terms of the
q-hypergeometric functions introduced in the Appendix (eqn. \rf{qhyp}) as
\[
U_{h,k}(r)=N_{h,k} e^{-i\omega r}
F_h\bigl(\fr{1}{2}+i(k-\omega),\fr{1}{2}-i(k+\omega),1-2i\omega,-e^r\bigr),
\]
whereas $V_{h,k}(r)=\bar{U}_{h,k}(r)$, the complex conjugate of $U_{h,k}(r)$.
\end{thm}
\begin{thm} The functions $U_{h,k}(r)$, $\bar{U}_{h,k}(r)$ are orthonormalized
according to
\[
\int_{{\mathbb R}}d\lambda(r)\;\; \bar{U}_{h,k_2}(r) U_{h,k_1}(r)=2\pi\delta(k_2-k_1),
\qquad
\int_{{\mathbb R}}d\lambda(r)\;\; \bar{V}_{h,k_2}(r) U_{h,k_1}(r) = 0,
\]
if the normalization $N_{h,k}$ is chosen as
\[
N_{h,k}=\frac{\Gamma_h^2
\bigl(\frac{1}{2}-i(k+\omega)\bigr)}{\Gamma_h(1-2i\omega)\Gamma_h(-2ik)}.
\]
\end{thm}
The normalization is as in the classical case such that the ``incoming wave''
in the ($r\rightarrow\infty$)-asymptotics is normalized to one:
\[
U_{h,k}(r)\sim e^{-\frac{1}{2}r}\biggl(e^{-ikr} + \frac{N_{h,k}}{N_{h,-k}}
e^{+ikr}\biggr).
\]
Although these theorems are precise analogues
of the corresponding statements in the undeformed case, their proof is quite
different. To the author's knowledge these are the first nontrivial
results on spectral analysis of finite difference operators of this type.
\subsection{Initial value problem}
The results just given allow one to write any solution as
\begin{equation}\label{q-exp}
f(r,t)=
\int_0^{\infty}dk\Bigl( e^{-i\omega t}\bigl(
a_k U_{h,k}(r)+b_k V_{h,k}(r)\bigr)
+ e^{i\omega t}\bigl(
\bar{a}_k\bar{U}_{h,k}(r)+\bar{b}_k \bar{V}_{h,k}(r)\bigr)\Bigr),
\end{equation}
where the coefficients are given
in terms of the values $f(r,t)$, $\dot{f}(r,t)\equiv\partial_t f(r,t)$ at fixed
time $t$ via
\begin{equation}\label{q-inifourier}
\begin{aligned}
a_k=a_k[f,\dot{f}]=&
\frac{1}{4\pi}\int_{{\mathbb R}}d\lambda(r) \;\;e^{i\omega t}\bar{U}_{h,k}(r)\Bigl(f(r,t)+
\frac{i}{\omega}\partial_t f(r,t)\Bigr), \\[1ex]
b_k=b_k[f,\dot{f}]=&
\frac{1}{4\pi}\int_{{\mathbb R}}d\lambda(r) \;\;e^{i\omega t}\bar{V}_{h,k}(r)\Bigl(f(r,t)-
\frac{i}{\omega}\partial_t f(r,t)\Bigr).
\end{aligned}
\end{equation}
However, it can be shown that the expression \rf{q-exp} remains sensible
for much more general choices of the coefficients $a_k$, $b_k$
than those provided by \rf{q-inifourier} for solutions $f(r,t)$.
Stated differently,
a generic choice of $a_k$, $b_k$ in \rf{q-exp} will not yield functions $f$
that have the analyticity properties necessary to be solutions in the
strict sense, but only in the distributional sense.\footnote{What
appears to be at work is a generalization of the Paley-Wiener
theorems relating analyticity of functions on a strip to exponential decay
properties of their Fourier transforms.}
On the other hand one may observe that the function $f(r,t)$ given by
\rf{q-exp} can alternatively be
characterized as a solution of a {\it second order
differential} equation w.r.t. time of the form
\begin{equation}\label{q-evol}
(\partial_t^2+{\mathcal D}_h)f=0, \qquad \bigl({\mathcal D}_h f\bigr)(r,t)=
\int_{{\mathbb R}}dr' \;{\mathcal K}_h(r,r')f(r',t)
\end{equation}
where the kernel ${\mathcal K}_h(r,r')$ is given by
\begin{equation}\label{dkernel}
{\mathcal K}_h(r,r')=\frac{1}{2\pi}\int_0^{\infty}dk \;\;\omega^2\;\;
\bigl(U_{h,k}(r)\bar{U}_{h,k}(r')+ V_{h,k}(r)\bar{V}_{h,k}(r')\bigr),
\end{equation}
and $\omega$ has to be expressed in terms of $k$ by means of the h-mass-shell
relation. The corresponding expression ${\mathcal K}(r,r')$ in
the classical case is of course simply equal to $(-\partial_r^2+V(r))\delta(r-r')$.
The fact that ${\mathcal K}(r,r')$ is supported on the diagonal can e.g. be found by
extending the $k$-integration in the classical analogue of \rf{dkernel}
to the full axis and closing the contour in the upper (lower) half-plane
depending on the sign of $r-r'$. This will no longer be possible in the
deformed case since $\omega$ as function of $k$
has logarithmic and square-root branch cuts. ${\mathcal D}_h$ is therefore most likely
a genuinely nonlocal operator.
To summarize: The deformed wave equation supplemented
with the condition of absence of ``runaway''-solutions is equivalent to
the nonlocal evolution equation \rf{q-evol}, which manifestly has a
well-posed initial value problem.
\subsection{Scattering in region I}
At this point it becomes possible to carry through a discussion of scattering
in region I of the deformed black hole in complete analogy to the undeformed
case. It basically amounts to adding the subscript ``h'' at the appropriate
places.
First of all it turns out that the boundary condition of vanishing on the
past horizon ${\mathcal H}_-$ again corresponds to $b_k\equiv 0$ in \rf{q-exp}. Then
one finds
\begin{thm} The asymptotics $g^{\text{\tiny in}}$, $g^{\text{\tiny out}}$ for $t\rightarrow \mp\infty$ defined by
\[
\lim_{t\rightarrow\mp\infty}\int_{{\mathbb R}}dr |g(r,t)-g^{\text{\tiny in/out}}
(r,t)|^2 =0
\]
are explicitly given by
\[ \begin{aligned}
g^{\text{\tiny in}}(r,t)=& \int_0^{\infty} dk\;\; \bigl( e^{-i(\omega t+kr)}a_k+
e^{i(\omega t+kr)}\bar{a}_k \bigr),\\
g^{\text{\tiny out}}(r,t)=& g^{\text{\tiny out}}_1(r,t)+ g^{\text{\tiny out}}_2(r,t),\qquad
\begin{aligned}
g^{\text{\tiny out}}_1(r,t)=& \int_m^{\infty}d\omega \bigl(e^{-i\omega(t+r)}T_{h,k}a_k+
e^{+i\omega(t+r)}\bar{T}_{h,k}\bar{a}_k\bigr), \\[1ex]
g^{\text{\tiny out}}_2(r,t)=& \int_0^{\infty}dk \bigl(e^{-i(\omega t+k r)}R_{h,k}a_k+
e^{+i(\omega t+k r)}\bar{R}_{h,k}\bar{a}_k\bigr),
\end{aligned}\end{aligned}
\]
where the reflection and transmission coefficients are given
by
\[ \begin{aligned}
T_{h,k}=&\frac{{[}2\omega{]}_h}{{[}2k{]}_h}N_{h,k}=\frac{\Gamma_h^2
\bigl(\frac{1}{2}-i(k+\omega)\bigr)}{\Gamma_h(-2i\omega)\Gamma_h(1-2ik)}\\
R_{h,k}=&\frac{N_{h,k}}{N_{h,-k}}=\frac{\Gamma_h^2
\bigl(\frac{1}{2}-i(k+\omega)\bigr)}
{\Gamma_h^2\bigl(\frac{1}{2}+i(k-\omega)\bigr)}
\frac{\Gamma_h(+2ik)}{\Gamma_h(-2ik)}
\end{aligned}
\]
\end{thm}
It is noteworthy that one has {\it ordinary plane waves} in the
asymptotic regions! This can be understood by noting that
for scales in $r$, $t$-space that are
large compared to $h$, i.e. for $\omega$, $k$, $m$ small compared to $h^{-1}$, one
may approximate the deformed wave equation by the undeformed one.
The asymptotic observer will see the effect of deformation only by
analyzing high frequencies of the reflected waves.
Finally, one may again check that information is preserved:
$|T_{h,k}|^2+|R_{h,k}|^2=1$.
\subsection{Continuation into regions II/III}
A remarkable qualitative difference to the undeformed case shows up
in considering the continuation of the wave-packet that passes through the
horizon into regions II/III. To this aim one should again use the
$u$, $v$-coordinates. In terms of these one has
\begin{equation}\label{q-contII}
e^{-i\omega t}U_{h,k}(r)\equiv {\mathcal U}_{h,k}(u,v)=u^{-2i\omega}F_h\bigl(
\fr{1}{2}+i(k-\omega),\fr{1}{2}-i(k+\omega),1-2i\omega,uv\bigr),
\end{equation}
which is continuously differentiable w.r.t. $v$ on the future horizon
$v=0$.\footnote{Cf. Proposition 2.1 of the Appendix. Here it is important
to restrict $h$ to be in $(0,1)$.} Wave packets
of these modes therefore have a well-defined continuation into region II.
As in the classical case one gets a unique solution in region II
by demanding vanishing on the boundary between regions II and IV. Explicitly
it reads
\begin{equation}\label{wavpkII}
f_{II}(u,v)=\int_0^{\infty}dk \;\;\bigl({\mathcal U}_{h,k}(u,v)T_{h,k}a_k +
\bar{{\mathcal U}}_{h,k}(u,v)\bar{T}_{h,k}\bar{a}_k\bigr).
\end{equation}
But what appears to be remarkable is the fact that the modes ${\mathcal U}_{h,k}(u,v)$
are nonsingular for any $u,v>0$: The singularity has disappeared. In fact,
the integral \rf{qhyp} that defines the q-hypergeometric function in
\rf{q-contII}
converges absolutely for any positive as well as negative values of $uv$.
Using the variable $\rho=\log(uv)$ in region II/III one finds that the
singularity that classically was at $uv=1$ resp. $\rho=0$ now has been
resolved into a series of poles at $\rho=i(nh+(m-1))$, $n,m=1,2,\ldots$.
These poles approach the real axis in the classical limit $h\rightarrow 0$.
So what is the fate of matter that has fallen into the black hole in the deformed case?
The further propagation of \rf{wavpkII} in regions II/III
can be described in terms of the time variable
$\tau=\log(u/v)$ the
same way as was discussed in region I. The late time asymptotics
of a wave-packet that has fallen into the black hole is then given by
\[ \begin{aligned}
f^{\text{\tiny out}}_{II}(\rho,\tau)= & \int_0^{\infty}dk\;\;\biggl(
\Bigl(e^{-i(\tau \omega+ k\rho)} S_{h,k}^+ + e^{+i(\tau \omega+ k\rho)}
\bar{S}_{h,k}^+\Bigr)+\Bigl(
e^{-i(\tau \omega- k\rho)} S_{h,k}^- + e^{+i(\tau \omega- k\rho)}\bar{S}_{h,k}^-
\Bigr)\biggr),\\
& \text{where}\quad
S_{h,k}^+=e^{-\frac{\pi i}{2}}e^{-\pi(\omega-k)}\qquad\qquad
S_{h,k}^-=e^{-\frac{\pi i}{2}}e^{-\pi(\omega+k)}R_{h,k}.
\end{aligned} \]
\section{Conclusions}
In the author's opinion there are three main lessons to learn from the
example studied in the present paper:
\begin{enumerate}
\item There are nonlocal evolution laws that may be interpreted as
describing propagation of fields on deformed geometries which
allow one to avoid some of the usual problems
associated with nonlocalities in a natural way.
\item This way of deformation indeed provides a resolution of singularities
present in the classical geometry, which allows one to
propagate wave-packets through the region that classically was
singular. Such deformations may therefore
open ways to resolve the black hole information problem.
\item There are ways to describe some kinds of
small-scale ``fuzziness'' or nonlocality that do not require
non-commutativity of the underlying algebra of functions.
\end{enumerate}
The example presented here is of course somewhat artificial in being
distinguished by its simplicity and solvability. Its main value is to
illustrate the above-mentioned qualitative points which one may expect
to persist in considerably more general types of deformations.
More can be done along similar lines as presented here: First
one may observe that the present discussion already contains important
ingredients for studying the quantization of solutions of the deformed
wave equation, with the aim of ultimately determining how the
effect of deformation shows up in the spectrum of the Hawking-radiation.
Furthermore, it was mentioned in the introduction that the present
model is just one case of a class of models that can be constructed and
investigated along similar lines. In contrast to the present one however,
the other models are all noncommutative deformations.
Of particular interest may be to study the deformation of $SL(2,{\mathbb R})\simeq
AdS_3$ as a model for the possible nonlocality (e.g. \cite{BDHM})
of string theory on backgrounds with $AdS_3$, similarly to what was
recently proposed in \cite{JR}.
Finally it should be emphasized that the real task remains
to find more concrete evidence on the small scale structure of space-time
from the study of the full-fledged gravitational theories such as
string theory.
\section{Appendix: $q$-special functions for $q=e^{\pi i h}$}
\subsection{q-Gamma function}
The basic building block for the class of special functions to be considered
is the Double Gamma function introduced by Barnes \cite{Ba}
\begin{defn} The Double Gamma function is defined as
\[
\log\Gamma_2(s|\omega_1,\omega_2)= \Biggl(\frac{\partial}{\partial t}\sum_{n_1,n_2=0}^{\infty}
(s+n_1\omega_1+n_2\omega_2)^{-t}\Biggr)_{t=0}.
\]
\end{defn}
\begin{defn}
The h-Gamma function $\Gamma_h$:
\[
\quad \Gamma_h(s)=\frac{\Gamma_2\bigl(s|1,\kappa\bigr)}
{\Gamma_2\bigl(1+\kappa-s|1,\kappa\bigr)},\qquad \kappa=h^{-1},
\]
\end{defn}
\begin{prop} Properties:
\begin{enumerate}
\item Functional relations:
\[ \Gamma_h(s+1)=2\sin(\pi hs)\Gamma_h(s) \qquad \Gamma_h(s+\kappa)=
2\sin(\pi s)\Gamma_h(s)
\]
\item Zeros at $s=1+\kappa+n+m\kappa$, poles at $s=s_{n,m}=-n-m\kappa$,
$n,m=0,1,2,\ldots $.
\[
\lim_{s\rightarrow s_{n,m}}(s-s_{n,m})\,\Gamma_h(s)=\frac{1}{2\pi b}
\frac{(-)^{n+m+mn}}{[n]_h![m]_{h^{-1}}!} \qquad [n]_h!=
\prod_{r=1}^{n}(q^r-q^{-r})
\]
\item Asymptotics: For $|s|\rightarrow\infty$ in any sector not containing the
real line one has
\[
\log \Gamma_h(s)\sim \mp
\frac{\pi ih}{2}\bigl(s^2-s(1+\kappa)\bigr) +\CO(s^{-1})
\quad\text{for}\quad \pm\Im(s)>0 \]
\end{enumerate}\end{prop}
Proof: \cite{Sh}
\begin{defn} The q-hypergeometric function will be defined as
\begin{equation}\label{qhyp}
F_h(\alpha,\beta;\gamma;z)=\frac{\Gamma_h(\gamma)}{\Gamma_h(\alpha)\Gamma_h(\beta)}
\int_{-i\infty}^{i\infty}ds
\frac{(-z)^s}{\sin(\pi s)}
\frac{\Gamma_h(\alpha+s)\Gamma_h(\beta+s)}{\Gamma_h(\gamma+s)
\Gamma_h(1+s) },
\end{equation}
where the contour is to the right of the poles at $s=-\alpha-n-m\kappa$,
$s=-\beta-n-m\kappa$ and to the left of the poles at $s=n+m\kappa$,
$s=1+\kappa-\gamma+n+m\kappa$, $n,m=0,1,2,\ldots$.
\end{defn}
This definition of a q-hypergeometric function is closely related to
the one first given in \cite{NU}.
\begin{prop} Properties:
\begin{enumerate}
\item Asymptotic behavior for $z\rightarrow 0$
\[
F_h(\alpha,\beta,\gamma;z)=
1+\CO(z)+\frac{\Gamma_h(1+\kappa+\alpha-\gamma)\Gamma_h(1+\kappa+\beta-\gamma)
\Gamma_h(\gamma)}{\Gamma_h(\alpha)\Gamma_h(\beta)\Gamma_h(2+2\kappa-\gamma)}
(-z)^{1+\kappa-\gamma}
\bigl(1+\CO(z)\bigr)
\]
\item Asymptotic behavior for $z\rightarrow-\infty$
\[
F_h(\alpha,\beta,\gamma;z)=
\frac{\Gamma_h(\gamma)\Gamma_h(\beta-\alpha)}{\Gamma_h(\beta)\Gamma_h(\gamma-\alpha)}
(-z)^{-\alpha}\bigl(1+\CO(z^{-1})\bigr)+
\frac{\Gamma_h(\gamma)\Gamma_h(\alpha-\beta)}{\Gamma_h(\alpha)\Gamma_h(\gamma-\beta)}
(-z)^{-\beta}\bigl(1+\CO(z^{-1})\bigr)
\]
\end{enumerate}\end{prop}
\newcommand{\CMP}[3]{{\it Comm. Math. Phys. }{\bf #1} (#2) #3}
\newcommand{\LMP}[3]{{\it Lett. Math. Phys. }{\bf #1} (#2) #3}
\newcommand{\IMP}[3]{{\it Int. J. Mod. Phys. }{\bf A#1} (#2) #3}
\newcommand{\NP}[3]{{\it Nucl. Phys. }{\bf B#1} (#2) #3}
\newcommand{\PL}[3]{{\it Phys. Lett. }{\bf B#1} (#2) #3}
\newcommand{\MPL}[3]{{\it Mod. Phys. Lett. }{\bf A#1} (#2) #3}
\newcommand{\PRL}[3]{{\it Phys. Rev. Lett. }{\bf #1} (#2) #3}
\newcommand{\AP}[3]{{\it Ann. Phys. (N.Y.) }{\bf #1} (#2) #3}
\newcommand{\LMJ}[3]{{\it Leningrad Math. J. }{\bf #1} (#2) #3}
\newcommand{\FAA}[3]{{\it Funct. Anal. Appl. }{\bf #1} (#2) #3}
\newcommand{\PTPS}[3]{{\it Progr. Theor. Phys. Suppl. }{\bf #1} (#2) #3}
\newcommand{\LMN}[3]{{\it Lecture Notes in Mathematics }{\bf #1} (#2) #3}
\section{Introduction} \label{Introduction}
Ordinary Differential Equations (ODEs) have been widely used to model dynamic systems in various scientific fields such as physics \citep{ferrell2011modeling,xiu2003oscillation,massatt1983limiting}, biology \citep{hemker1972numerical,huang2006hierarchical,li2011large,lu2011high}, and economics \citep{tu2012dynamical,weber2011optimal,norton1981market}. In recent years, ODEs are also attracting increasing attention in the machine learning community. For instance, ODEs have been used to build new families of deep neural networks \citep{chen2018neural,yan2019robustness,zhang2019approximation} and the connection between ODEs and structural causal models has been established \citep{mooij2013ordinary, rubenstein2016deterministic}.
Existing works mostly focus on the parameter estimation of ODEs \citep{hemker1972numerical, li2005parameter,xue2010sieve,varah1982spline,liang2008parameter,ramsay1996principal, ramsay2007parameter,poyton2006parameter}. However, before estimating the unknown parameters, it is essential to perform an identifiability analysis of an ODE system; that is, uncovering the mathematical conditions under which the parameters can be uniquely determined from noise-free observations. If a system is not identifiable, the estimation procedure may produce erroneous and misleading parameter estimates \citep{qiu2022identifiability}. This is detrimental in many applications; for example, the estimated parameters of a non-identifiable ODE system can easily lead to wrong causal structures.
The contributions of this paper are summarized as follows:
{\bf Derive identifiability condition for linear ODEs from discrete observations.} We derive the condition for the identifiability of homogeneous linear ODE systems from discrete observations. Specifically, we consider the setting where the observations are sampled from a single trajectory generated from one initial condition. This setting is prevalent in applications where only a single trajectory is accessible because the measurement collection process cannot be repeated. Our identifiability analysis is built upon \citet{stanhope2014identifiability}, which provides a systematic study of the identifiability of homogeneous linear ODE systems from a continuous trajectory with known initial conditions. Our analysis extends \citet{stanhope2014identifiability} to more practical scenarios where we only have discrete observations and do not know the initial conditions.
{\bf Derive asymptotic properties for NLS estimator.} Based on our identifiability results, we study the asymptotic properties of parameter estimation from data with measurement noise. We focus on the estimator obtained by the Nonlinear Least Squares (NLS) method, which is simple and widely used in dynamical systems \citep{marquardt1963algorithm, johnson1981analysis,wu1981asymptotic, johnson198516,maeder1990nonlinear}. However, the asymptotic properties of NLS-based estimators for ODE systems have not been systematically studied due to the lack of identifiability conditions. We prove that, under mild conditions, the NLS estimator is consistent and asymptotically normal with an $n^{-1/2}$ convergence rate. In addition, based on the established asymptotic normality theory, we construct confidence sets for the unknown parameters and propose a new method to infer the causal structure of ODE systems, i.e., inferring whether there is a causal link between system variables.
{\bf Extend theoretical results to degraded observations.} We extend the consistency and asymptotic normality results to observations with degraded quality, including aggregated and time-scaled observations. The aggregated observations are usually caused by time aggregation in the data collection process \citep{silvestrini2008temporal}. The time-scaled observations result from data preprocessing to fit the ODE model. We prove that the ODE model generating the original observations is identifiable from the degraded observations. The asymptotic properties can be naturally extended given the identifiability results. Simulations with various system dimensions are conducted to verify the developed theoretical results.
\section{Identifiability condition of homogeneous linear ODE systems}\label{sec:identifiability}
A homogeneous linear ODE system can be defined as:
\begin{equation}\label{eq:ODE model}
\begin{split}
\Dot{\boldsymbol{x}}(t) &= A \boldsymbol{x}(t) \, , \\
\boldsymbol{x}(0) &= \boldsymbol{x}_{0}\, ,
\end{split}
\end{equation}
where $t \in [0,\infty)$ denotes the independent variable time, $\boldsymbol{x}(t) \in \mathbb{R}^d $ denotes the state of the ODE system at time $t$, $\Dot{\boldsymbol{x}}(t)$ denotes the first derivative of $\boldsymbol{x}(t)$ w.r.t. time t, and we refer to both the parameter matrix $A\in \mathbb{R}^{d \times d}$ and the initial condition $\boldsymbol{x}_0 \in \mathbb{R}^d$ as the system parameters. In this paper we focus on ODE systems with complete observation, i.e., all state variables are observable. Therefore, the measurement model can be described as:
\begin{equation}
\boldsymbol{y}(t) = \boldsymbol{x}(t)\, .
\end{equation}
The solution of ODE system (\ref{eq:ODE model}) can be explicitly expressed as:
\begin{equation}\label{eq:ODE solution}
\boldsymbol{x}(t; \boldsymbol{x}_0, A) = e^{At}\boldsymbol{x}_0 \, ,
\end{equation}
which is also called a trajectory. In this paper, we focus on identifiability analysis of the ODEs from observations sampled from a \textbf{single} $d$-dimensional trajectory generated with an initial condition $\boldsymbol{x}_0$. Under the setting of our paper, the term identifiability refers to whether the system parameters $A$ and $\boldsymbol{x}_0$ can be uniquely determined from error-free observations of a single trajectory of the ODE system.
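As a concrete illustration, the trajectory (\ref{eq:ODE solution}) can be simulated with the matrix exponential; the following minimal sketch uses SciPy, and the $2$-dimensional system parameters are hypothetical values chosen only for illustration:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Hypothetical example: A has distinct real eigenvalues -1 and -3,
# and {x0, A x0} are linearly independent.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
x0 = np.array([1.0, 1.0])

def trajectory(A, x0, times):
    """Evaluate x(t; x0, A) = expm(A t) @ x0 at the given time points."""
    return np.stack([expm(A * t) @ x0 for t in times])

times = np.linspace(0.0, 1.0, 11)   # equally spaced grid, Delta t = 0.1
X = trajectory(A, x0, times)        # row i is the state at time t_i
\end{verbatim}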
\subsection{Identifiability condition from a whole trajectory}
Given a fixed initial condition $\boldsymbol{x}_0$, \citet{stanhope2014identifiability} derived a necessary and sufficient condition for identifying the ODE system (\ref{eq:ODE model}) from a whole trajectory, $\{e^{At}\boldsymbol{x}_0\}_{t\in[0,\infty)}$, i.e.,
$\{\boldsymbol{x}_0, A\boldsymbol{x}_0, \ldots, A^{d-1}\boldsymbol{x}_0\}$ are linearly independent. However, in practice, we cannot usually observe the initial condition $\boldsymbol{x}_0$. Under this practical circumstance, we need to treat the initial condition also as a system parameter and identify it from the data. In the following, we extend the identifiability definition and condition in \citet{stanhope2014identifiability} to the case where both parameter matrix $A$ and initial condition $\boldsymbol{x}_0$ are system parameters.
\begin{definition}\label{def:identifiability for A and b}
Let $(M^0, \Omega)$ be given parameter spaces, with $M^0 \subset \mathbb{R}^d$ and $\Omega \subset \mathbb{R}^{d \times d}$. The ODE system $(\ref{eq:ODE model})$ is said to be identifiable in $(M^0, \Omega)$, if for all $\boldsymbol{x}_{0}, \boldsymbol{x}_{0}' \in M^0$ and all $A, A' \in \Omega$, with ($\boldsymbol{x}_{0}, A) \neq (\boldsymbol{x}_{0}', A'$), it holds that $\boldsymbol{x}(\cdot; \boldsymbol{x}_{0}, A) \neq \boldsymbol{x}(\cdot; \boldsymbol{x}_{0}', A')$.
\end{definition}
Here $\boldsymbol{x}(\cdot; \boldsymbol{x}_{0}, A) \neq \boldsymbol{x}(\cdot; \boldsymbol{x}_{0}', A')$ means that there exists at least one $t \geq 0$ such that $\boldsymbol{x}(t; \boldsymbol{x}_{0}, A) \neq \boldsymbol{x}(t; \boldsymbol{x}_{0}', A')$. Then we establish the condition for identifiability of the ODE system based on Definition \ref{def:identifiability for A and b}.
\begin{lemma}\label{theorem:identifiability for A and b}
Suppose that $M^0 \subset \mathbb{R}^d$ and $\Omega \subset \mathbb{R}^{d \times d}$ are both open subsets. Then the ODE system $(\ref{eq:ODE model})$ is identifiable in $(M^0, \Omega)$ if and only if $\{\boldsymbol{x}_0, A\boldsymbol{x}_0, \ldots, A^{d-1}\boldsymbol{x}_0\}$ are linearly independent for all $\boldsymbol{x}_0 \in M^0$ and all $A\in \Omega$.
\end{lemma}
The proof of Lemma~\ref{theorem:identifiability for A and b} is a straightforward extension of the proof of Theorem 2.5 in \citet{stanhope2014identifiability} and can be found in Appendix~\ref{proof:theorem2.1}. From the Lemma, we can see that the condition for identifying the system parameters $(A,\boldsymbol{x}_0)$ is the same as that for only identifying $A$, except that the linear independence condition needs to hold for all possible $\boldsymbol{x}_0$ in $M^0$.
\subsection{Identifiability condition from discrete observations}
In practice, typically, we can only access a sequence of discrete observations sampled from a trajectory instead of knowing the whole trajectory. Thus, from now on, we focus on the case where only discrete observations from a trajectory are available. In particular, we extend the identifiability definition of the ODE system \eqref{eq:ODE model} as follows.
\begin{definition}[$(\boldsymbol{x}_0, A)$-identifiability]\label{def:identifiability for discrete observations}
For $\boldsymbol{x}_0 \in \mathbb{R}^d$ and $A \in \mathbb{R}^{d \times d}$, for any $n_0 \geq 1$, let $t_j, j = 1, \ldots, n_0$ be any $n_0$ time points and $\boldsymbol{x}_j := \boldsymbol{x}(t_j;\boldsymbol{x}_0, A)$ be the error-free observation of the trajectory $\boldsymbol{x}(t; \boldsymbol{x}_0, A)$ at time $t_j$. We say the ODE system $(\ref{eq:ODE model})$ is $(\boldsymbol{x}_0, A)$ identifiable from $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{n_0}$, if for all $\boldsymbol{x}_{0}'\in \mathbb{R}^d$ and all $A'\in \mathbb{R}^{d \times d}$, with ($\boldsymbol{x}_{0}', A') \neq (\boldsymbol{x}_{0}, A$), there exists some $j\in\{1,\ldots,n_0\}$ such that $\boldsymbol{x}(t_j; \boldsymbol{x}_{0}', A') \neq \boldsymbol{x}(t_j; \boldsymbol{x}_{0}, A)$.
\end{definition}
This definition is inspired by \citet[Definition 1.6]{qiu2022identifiability}, and it is not a simple extension of Definition~\ref{def:identifiability for A and b} to discrete observations when $M^0 := \mathbb{R}^d$ and $\Omega := \mathbb{R}^{d \times d}$. The reason is that in Definition~\ref{def:identifiability for discrete observations}, the initial condition $\boldsymbol{x}_0$ and parameter matrix $A$ are fixed, and $\boldsymbol{x}_0'$ and $A'$ are an arbitrary vector and an arbitrary matrix in $M^0$ and $\Omega$ respectively. However, in Definition~\ref{def:identifiability for A and b}, both $\boldsymbol{x}_0$ and $\boldsymbol{x}_0'$ are arbitrary vectors in $M^0$ and both $A$ and $A'$ are arbitrary matrices in $\Omega$. In other words, Definition~\ref{def:identifiability for discrete observations} describes an intrinsic property of a single system instead of a collective property of a set of systems. In dealing with the identifiability problem of an ODE system, we aim to check whether the true underlying system parameter $(\boldsymbol{x}_0, A)$ is uniquely determined by error-free observations. Therefore, $(\boldsymbol{x}_0, A)$-identifiability described in Definition~\ref{def:identifiability for discrete observations} is a more natural way to define the identifiability of the ODE system from the practical perspective, and all of the other relevant definitions and theorems in the rest of this paper are derived based on Definition~\ref{def:identifiability for discrete observations}.
To derive the identifiability condition from discrete observations, we focus on the equally-spaced observations. Specifically, data are collected on an equally-spaced time grid, i.e. $t_{j+1}-t_j = \Delta t$ for a constant $\Delta t>0$ and $j=1, 2, \ldots$. The motivation is that a time series is most commonly collected equally spaced in practice, which follows the standard rules of collecting data either from a scientific experiment or a natural phenomenon.
Then, based on Definition \ref{def:identifiability for discrete observations}, we derive a sufficient condition for the identifiability of the ODE system from discrete observations.
\begin{theorem}\label{theorem:identifiability from discrete observations}
For $ \boldsymbol{x}_0 \in$ $\mathbb{R}^d$ and $A \in \mathbb{R}^{d \times d}$, the ODE system \textnormal{(\ref{eq:ODE model})} is $(\boldsymbol{x}_0, A)$ identifiable from \textbf{any} $d+1$ equally-spaced error-free observations $\boldsymbol{x}_1, \boldsymbol{x}_2, \cdots, \boldsymbol{x}_{d+1}$, if the following two conditions are satisfied.
\begin{enumerate}
\item[\textnormal{A1}] $\{\boldsymbol{x}_0, A\boldsymbol{x}_0, \ldots, A^{d-1}\boldsymbol{x}_0\}$ are linearly independent.
\item[\textnormal{A2}] Parameter matrix $A$ has $d$ distinct real eigenvalues.
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{theorem:identifiability from discrete observations} can be found in Appendix~\ref{proof:theorem2.2}. Here, \textbf{any} $d+1$ equally-spaced error-free observations means that the time interval between two consecutive observations, $\Delta t$, can take any positive value. In other words, the identifiability of the ODE system is not influenced by the time spacing between consecutive observations.
Now, in addition to the identifiability condition from a whole trajectory (condition~A1), when only discrete observations are available, we further require that $A$ has $d$ distinct real eigenvalues (condition A2). This condition seems restrictive; however, due to the limited observations (a set of equally-spaced observations sampled from a single trajectory), it cannot be relaxed. The reasons are as follows: (1) \textbf{Distinct eigenvalues}: as discussed in \citet{qiu2022identifiability}, almost every $A\in \mathbb{R}^{d\times d}$ (w.r.t. the Lebesgue measure on $\mathbb{R}^{d\times d}$) has $d$ distinct eigenvalues based on random matrix theory \citep{lehmann1991eigenvalue, tao2012topics}. Therefore, that $A$ has $d$ distinct eigenvalues is a natural and reasonable assumption. In addition, to guarantee the ODE system \eqref{eq:ODE model} is $(\boldsymbol{x}_0, A)$ identifiable from any $d+1$ equally-spaced observations sampled from a single trajectory, we need $d$ consecutive observations among them to be linearly independent. To ensure that any $d$ equally-spaced observations sampled from a single trajectory are linearly independent, it is necessary that the matrix $A$ has $d$ distinct eigenvalues. Moreover, in Section~\ref{section:asymptotic normality}, to derive the explicit formula of the asymptotic covariance matrix of the NLS estimator's asymptotic normal distribution, we require the parameter matrix to have $d$ distinct eigenvalues. We will discuss this in detail in Section~\ref{section:asymptotic normality}. (2) \textbf{Real eigenvalues}: according to \citet[Corollary 6.4]{stanhope2014identifiability}, a matrix with complex eigenvalues is not identifiable from any set of equally-spaced observations sampled from a single trajectory. Therefore, we require the eigenvalues to be real.
Our identifiability condition here is sufficient but not necessary. However, it allows us to derive explicit expressions of $A$ and $\boldsymbol{x}_0$ in terms of the observations. To see this, we set $\Phi(t) := e^{At}$, and let $\boldsymbol{X}_j$ denote the matrix $(\boldsymbol{x}_j, \boldsymbol{x}_{j+1},\ldots,\boldsymbol{x}_{j+d-1})\in\mathbb{R}^{d\times d}$ for $j=1,2$. Then $\boldsymbol{X}_2 = \Phi({\Delta t})\boldsymbol{X}_1$. We show in the proof that $\boldsymbol{X}_1$ is nonsingular if $A$ has $d$ distinct eigenvalues, and thus, $\Phi(\Delta t) = \boldsymbol{X}_2 \boldsymbol{X}_1^{-1}$. Finally, we can obtain a unique real $A$ by taking the logarithm of $e^{A \Delta t}=\boldsymbol{X}_2\boldsymbol{X}_1^{-1}$ if $A$ has $d$ distinct real eigenvalues \citep[Theorem 6.3]{stanhope2014identifiability}. The initial condition $\boldsymbol{x}_0$ can then be calculated by $e^{-At_1}\boldsymbol{x}_1$. Please refer to Appendix~\ref{proof:theorem2.2} for the details.
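The recovery procedure just described translates directly into the following minimal sketch (assuming SciPy and error-free observations from a system satisfying A1 and A2):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm

def recover(obs, t1, dt):
    """Recover (x0, A) from d+1 equally spaced error-free observations.

    obs has shape (d+1, d); row j is x(t1 + j*dt).  Under A1-A2 the
    matrix X1 is nonsingular and logm returns the unique real
    logarithm (up to numerical round-off).
    """
    d = obs.shape[1]
    X1 = obs[:d].T                       # columns x_1, ..., x_d
    X2 = obs[1:d + 1].T                  # columns x_2, ..., x_{d+1}
    Phi = X2 @ np.linalg.inv(X1)         # Phi(dt) = expm(A dt)
    A = np.real(logm(Phi)) / dt          # discard round-off imaginary part
    x0 = expm(-A * t1) @ obs[0]          # x0 = expm(-A t1) x1
    return x0, A
\end{verbatim}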
It is worth mentioning that we cannot check whether the proposed identifiability conditions are satisfied in practice, since we do not have access to the true underlying system parameters. Nevertheless, studying the identifiability conditions of the dynamical system provides us with a better understanding of the system. Moreover, in practice, we can use models satisfying the conditions (through constrained parameter estimation) to fit real-world data so as to ensure the learned model's identifiability.
\noindent\textit{\bf Remark} If the ODE system is identifiable from a set of discrete observations sampled from a single trajectory, the system is also identifiable from the corresponding whole trajectory.
\subsection{Identifiability condition from degraded observations}
In practice, there are cases where we have no records of the original observations of the ODE system and can only access degraded observations instead. Is the ODE system still identifiable? Furthermore, will the parameters change? We are interested in addressing the problem of identifiability under degraded observations. In particular, we are interested in the aggregated and time-scaled observations.
According to Theorem~\ref{theorem:identifiability from discrete observations}, under assumptions A1 and A2, the ODE system (\ref{eq:ODE model}) is $(\boldsymbol{x}_0, A)$ identifiable from $n$ equally-spaced error-free observations $\boldsymbol{X}:=\{\boldsymbol{x}_1, \boldsymbol{x}_2, \ldots, \boldsymbol{x}_n\}$, for any $n > d$. Based on these $n$ observations, in the following, we define the aggregated and time-scaled observations.
\begin{definition}[aggregated observations]\label{def:aggregated}
Let $k\geq 2$ be an integer and $\tilde{n} = \lfloor n/k \rfloor$, where $\lfloor \cdot \rfloor$ denotes the floor function. For each $j=1,2,\ldots,\tilde{n}$, we define $\Tilde{\boldsymbol{x}}_j := \big(\boldsymbol{x}_{(j-1)k+1} + \boldsymbol{x}_{(j-1)k+2} + \cdots + \boldsymbol{x}_{jk}\big)/k$, the average values of $k$ consecutive, non-overlapping observations in $\boldsymbol{X}$, starting from time $\tilde{t}_j:=t_{(j-1)k+1}$. We call $\Tilde{\boldsymbol{X}} := \{\Tilde{\boldsymbol{x}}_1,\Tilde{\boldsymbol{x}}_{2},\ldots,\Tilde{\boldsymbol{x}}_{\tilde{n}}\}$ a set of aggregated observations from $\boldsymbol{X}$.
\end{definition}
For notational simplicity, we use the same notation for aggregated and time-scaled observations from here on.
\begin{definition}[time-scaled observations]\label{def:time-scaled}
Let $k>0$ be a constant and $\tilde{n}=n$. For each $j=1,2,\ldots,\tilde{n}$, letting $\Tilde{t}_j := kt_j$ be the scaled time, the time-scaled observation at time $\Tilde{t}_j$, denoted by $\Tilde{\boldsymbol{x}}_j$, equals $\boldsymbol{x}_j$. We call $\Tilde{\boldsymbol{X}} := \{\Tilde{\boldsymbol{x}}_1,\Tilde{\boldsymbol{x}}_{2},\ldots,\Tilde{\boldsymbol{x}}_{\tilde{n}}\}$ the set of time-scaled observations from $\boldsymbol{X}$ at the scaled time grid $\{\tilde{t}_1,\ldots,\tilde{t}_{\tilde{n}}\}$.
\end{definition}
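Both degradations are straightforward to realize on a sample matrix whose rows are the observations; the following helper functions are an illustrative sketch only:
\begin{verbatim}
import numpy as np

def aggregate(X, k):
    """Average k consecutive, non-overlapping rows of X (aggregated
    observations); X has one observation per row."""
    n_tilde = X.shape[0] // k
    return X[:n_tilde * k].reshape(n_tilde, k, -1).mean(axis=1)

def scale_time(times, k):
    """Scaled time grid t~_j = k * t_j of the time-scaled
    observations; the observation values themselves are unchanged."""
    return k * np.asarray(times)
\end{verbatim}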
\noindent\textit{\bf Remark} Aggregated/time-scaled observations follow new ODE systems as (\ref{eq:ODE model}) but with different initial condition and parameter matrix, denoted respectively by $\tilde{\boldsymbol{x}}_0$ and $\tilde{A}$.
Having defined the aggregated/time-scaled observations, we now derive the conditions for identifying the original ODE system with parameters $\boldsymbol{x}_0$ and $A$ from them. In addition to conditions A1 and A2, a common identifiability condition from aggregated/time-scaled observations is
\begin{enumerate}
\item[A3] The sample size of the aggregated/time-scaled observations $\tilde{n} > d$.
\end{enumerate}
\subsubsection{Identifiability condition from aggregated observations}
Though the identifiability of vector auto-regressive model from aggregated observations has been studied \citep{silvestrini2008temporal,gong2017causal}, the identifiability condition of ODEs from aggregated observations remains unknown. Based on Theorem \ref{theorem:identifiability from discrete observations}, we derive the following corollary.
\begin{corollary}[aggregated observations]\label{corollary:aggregate}
If conditions \textnormal{A1-A3} are satisfied, where $\boldsymbol{x}_0 \in \mathbb{R}^d$ and $A\in \mathbb{R}^{d\times d}$, then the aggregated ODE system parameters satisfy $\Tilde{\boldsymbol{x}}_0 = (I+e^{A\Delta t} + \cdots + e^{A(k-1)\Delta t})\boldsymbol{x}_0/k$ and $\Tilde{A} = A$, and
the original ODE system \eqref{eq:ODE model} (with parameters $\boldsymbol{x}_0$ and $A$) is $(\boldsymbol{x}_0, A)$ identifiable from the aggregated observations $\Tilde{\boldsymbol{X}}$.
\end{corollary}
The proof of Corollary~\ref{corollary:aggregate} can be found in Appendix~\ref{proof:corollary2.2.1}. This corollary implies that the ODE system (\ref{eq:ODE model}) is still identifiable from aggregated observations under mild conditions. Moreover, the parameter matrix is the same as that of the actual model, i.e., $\tilde{A} = A$.
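A quick numerical sanity check of this corollary can be carried out as follows (all parameter values are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # illustrative; A1-A2 hold
x0 = np.array([1.0, 1.0])
dt, k, n = 0.1, 2, 12
X = np.stack([expm(A * i * dt) @ x0 for i in range(n)])   # t_1 = 0

n_tilde = n // k
X_agg = X[:n_tilde * k].reshape(n_tilde, k, -1).mean(axis=1)
x0_tilde = sum(expm(A * j * dt) for j in range(k)) @ x0 / k
assert np.allclose(X_agg[0], x0_tilde)  # first aggregated state, t~_1 = 0
\end{verbatim}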
\subsubsection{Identifiability condition from time-scaled observations}
Time plays a critical role in ODE systems. However, in practical applications, using the actual time of the system directly may cause inconvenience. A common practice is to use the method defined in Definition~\ref{def:time-scaled} to scale the actual timeline into a fixed one, such as $[0,1]$, to simplify the calculation \citep{rubanova2019latent}. How will the time scaling affect the causal relationship between variables, i.e., parameter matrix $A$? This motivates us to derive the following corollary.
\begin{corollary}[time-scaled observations]\label{corollary:timescaled}
If conditions \textnormal{A1-A3} are satisfied, where $\boldsymbol{x}_0 \in \mathbb{R}^d$ and $A\in \mathbb{R}^{d\times d}$, then the time-scaled ODE system parameters satisfy $\Tilde{\boldsymbol{x}}_0 = \boldsymbol{x}_0$ and $\Tilde{A} = A/k$, and
the original ODE system \eqref{eq:ODE model} (with parameters $\boldsymbol{x}_0$ and $A$) is $(\boldsymbol{x}_0, A)$ identifiable from the time-scaled observations $\Tilde{\boldsymbol{X}}$.
\end{corollary}
The proof of Corollary~\ref{corollary:timescaled} can be found in Appendix~\ref{proof:corollary2.2.2}. This corollary implies that the ODE system (\ref{eq:ODE model}) is identifiable from time-scaled observations, and the new parameter matrix is correspondingly reduced by a factor of $k$. This corollary provides theoretical support for time scaling and implies that one can safely scale the actual timeline into a fixed one for the ODE system (\ref{eq:ODE model}).
\section{Asymptotic properties of the NLS estimator}\label{section:large_sample}
So far we have established a sufficient condition for the identifiability of the ODE system from discrete error-free observations. In practical applications, however, the observations are typically disturbed by measurement noise. In this case, one cannot calculate the true unknown parameters explicitly from $e^{A \Delta t}=\boldsymbol{X}_2\boldsymbol{X}_1^{-1}$ and $\boldsymbol{x}_0 = e^{-At_1}\boldsymbol{x}_1$. Instead, we resort to parameter estimation procedures to estimate the unknown parameters. In this section, we investigate the asymptotic properties of the parameter estimator based on the Nonlinear Least Squares (NLS) method. First, we introduce the measurement model and the NLS method.
\noindent{\bf Measurement model.} Suppose the system state $\boldsymbol{x}(t)$ in ODE system (\ref{eq:ODE model}) is measured with noise at time points $t_1,\ldots,t_n$, with $t_i \in [0,T]$ for all $i=1, \ldots, n$, and $0<T<+\infty$. Abusing notation a bit, from now on, we use $\boldsymbol{\theta} := (\boldsymbol{x}_0, A)\in \mathbb{R}^{d+d^2}$ to vectorize the parameters $\boldsymbol{x}_0, A$. We further let $\Theta := (M^0, \Omega)$ to denote the parameter space, where $M^0 \subset \mathbb{R}^d$ and $\Omega \subset \mathbb{R}^{d\times d}$. The true parameter is denoted as $\boldsymbol{\theta}^* := ( \boldsymbol{x}_0^*, A^*)$. Then the measurement model can be described as:
\begin{equation}\label{eq:measurement model}
\begin{split}
\boldsymbol{y}_i &= \boldsymbol{x}(t_i;\boldsymbol{\theta}^*) + \boldsymbol{\epsilon}_i= e^{A^*t_i}\boldsymbol{x}_0^* + \boldsymbol{\epsilon}_i\,,
\end{split}
\end{equation}
for all $i=1,\ldots,n$, where $\boldsymbol{y}_i \in \mathbb{R}^d$ denotes the noisy observation at time $t_i$ and $\boldsymbol{\epsilon}_i \in \mathbb{R}^d$ is the measurement error at time $t_i$.
\noindent{\bf Nonlinear least squares (NLS) method} is widely used for parameter estimation in nonlinear regression models, including ODEs \citep{jennrich1969asymptotic,xue2010sieve}. In the following, based on the identifiability condition established in Section \ref{sec:identifiability}, we show, under mild assumptions, the consistency and asymptotic normality of the NLS estimator.
Suppose the ODE system \eqref{eq:ODE model} is $(\boldsymbol{x}_0^*, A^*)$ identifiable from a set of equally-spaced error-free observations sampled from a single trajectory: $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n$, with $\boldsymbol{x}_i = \boldsymbol{x}(t_i; \boldsymbol{\theta}^*) = e^{A^* t_i}\boldsymbol{x}_0^*$ and $t_i \in [0,T]$ for all $i=1,\ldots,n$, then the true parameter $\boldsymbol{\theta}^* = ( \boldsymbol{x}_0^*, A^*)$ uniquely minimizes:
\begin{equation}\label{eq:M(theta)}
M(\boldsymbol{\theta}) = \cfrac{1}{T}\int_0^T \parallel e^{A^*t}\boldsymbol{x}_0^*-e^{At}\boldsymbol{x}_0\parallel_2^2 dt\, ,
\end{equation}
where $\parallel \cdot \parallel_2$ denotes the Euclidean norm. The proof is straightforward: since the ODE system \eqref{eq:ODE model} is $(\boldsymbol{x}_0^*, A^*)$ identifiable from $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n$, it is also $(\boldsymbol{x}_0^*, A^*)$ identifiable from the corresponding trajectory on $[0,T]$, which implies that $M(\boldsymbol{\theta})$ attains its unique global minimum at $\boldsymbol{\theta}^*$.
In practical applications, typically, one can only access the noisy observation $\boldsymbol{y}_i$'s as described in \eqref{eq:measurement model}. Therefore, we propose to estimate $\boldsymbol{\theta}^*$ by minimizing the empirical version of $M(\boldsymbol{\theta})$, which is
\begin{equation}\label{eq:Mn(theta)}
M_n(\boldsymbol{\theta})
= \cfrac{1}{n} \sum_{i=1}^n \parallel \boldsymbol{y}_i - e^{At_i}\boldsymbol{x}_0 \parallel_2^2 \, .
\end{equation}
That is, the NLS estimator of $\boldsymbol{\theta}^*$ is defined as
\begin{equation*}
\hat{\boldsymbol{\theta}}_n = \arg\min_{\boldsymbol{\theta}\in \Theta}M_n(\boldsymbol{\theta})\,.
\end{equation*}
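Computationally, minimizing $M_n(\boldsymbol{\theta})$ is a standard nonlinear least-squares problem; the following minimal sketch uses SciPy on simulated toy data (all values, including the crude initial guess, are for illustration only):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.optimize import least_squares

def residuals(theta, times, Y, d):
    """Stacked residuals y_i - expm(A t_i) x0 entering M_n(theta)."""
    x0, A = theta[:d], theta[d:].reshape(d, d)
    pred = np.stack([expm(A * t) @ x0 for t in times])
    return (Y - pred).ravel()

rng = np.random.default_rng(0)
A_true = np.array([[-1.0, 2.0], [0.0, -3.0]])    # illustrative values
x0_true = np.array([1.0, 1.0])
times = np.linspace(0.0, 2.0, 50)
Y = np.stack([expm(A_true * t) @ x0_true for t in times])
Y = Y + 0.01 * rng.standard_normal(Y.shape)      # measurement noise

theta_init = np.concatenate([Y[0], np.zeros(4)]) # crude initial guess
fit = least_squares(residuals, theta_init, args=(times, Y, 2))
x0_hat, A_hat = fit.x[:2], fit.x[2:].reshape(2, 2)
\end{verbatim}
Since $M_n$ is nonconvex in $A$, a careful initialization is advisable in practice.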
\noindent{\bf Assumptions.} Now we investigate the asymptotic properties of the NLS estimator $\hat{\boldsymbol{\theta}}_n$. We first list all the required assumptions:
\begin{enumerate}
\item[A4] Parameter space $\Theta$ is a compact subset of $\mathbb{R}^{d+d^2}$.
\item[A5] Error terms $\{\boldsymbol{\epsilon}_i\}$ for $i=1,\ldots, n$ are i.i.d. random vectors with mean zero and covariance matrix $\Sigma = \text{diag}(\sigma_1^2, \ldots, \sigma_d^2)$, where $0<\sigma_j^2 < \infty$ for all $j=1,\ldots, d$.
\item[A6] We have $n$ equally-spaced observations $\boldsymbol{Y} := \{\boldsymbol{y}_1, \ldots, \boldsymbol{y}_n\}$, where $\boldsymbol{y}_i$ is defined by measurement model (\ref{eq:measurement model}). Without loss of generality, we assume the observation time starts at $t_1 = 0$ and ends at $t_n = T$, and thus the equal time spacing is $\Delta t = T/(n-1)$.
\item[A7] $\boldsymbol{\theta}^*$ is an interior point of the parameter space $\Theta$.
\end{enumerate}
In addition to the aforementioned assumptions A4-A7, assumptions A1 and A2 stated in Theorem \ref{theorem:identifiability from discrete observations} are required w.r.t. the true parameter $\boldsymbol{\theta}^* = \{ \boldsymbol{x}_0^*, A^*\}$. These two assumptions guarantee the ODE system (\ref{eq:ODE model}) is $(\boldsymbol{x}_0^*, A^*)$ identifiable from any $d+1$ equally-spaced error-free observations sampled from the trajectory $\boldsymbol{x}(\cdot; \boldsymbol{x}_0^*, A^*)$. A1 and A2 are needed because the identifiability of the system is a prerequisite for obtaining a consistent parameter estimator. A4 is a commonly used assumption for the consistency of a parametric model. A5 is a common way to define measurement noise; note that we do not require the error terms to follow a normal distribution. It is worth mentioning that the error terms are not necessarily identically distributed. In other words, one can easily generalize our theoretical results to cases where the error terms are only required to be independently distributed. A6 ensures the observations are collected at equally-spaced time points, following the rules of observation collection required by Theorem \ref{theorem:identifiability from discrete observations}. Assumption A7 is a standard condition for proving asymptotic normality (see e.g.~\citep{kundu1993asymptotic,mira1995nonlinear,xue2010sieve}).
\subsection{Consistency}
In this subsection, we study the consistency of the NLS estimator.
\begin{theorem}\label{theorem:consistency}
Suppose assumptions \textnormal{A1, A2} are satisfied w.r.t. $\boldsymbol{\theta}^*$ and assumptions \textnormal{A4-A6} hold. Then the NLS estimator $\hat{\boldsymbol{\theta}}_n \xrightarrow{p} \boldsymbol{\theta}^*$, as $n\rightarrow \infty$.
\end{theorem}
The proof of Theorem~\ref{theorem:consistency} can be found in Appendix~\ref{proof:theorem3.1}. This theorem shows the consistency of our NLS estimator $\hat{\boldsymbol{\theta}}_n$ to the true $\boldsymbol{\theta}^*$. That is, as $n$ goes to infinity, the NLS estimator $\hat{\boldsymbol{\theta}}_n$ converges to the true system parameters $\boldsymbol{\theta}^*$ with probability approaching 1. Note that the consistency of the estimator is a necessary condition for the estimator's asymptotic normality.
In the case where we only observe degraded data, we let $\Tilde{\boldsymbol{Y}} := (\Tilde{\boldsymbol{y}}_1, \Tilde{\boldsymbol{y}}_2, \ldots, \Tilde{\boldsymbol{y}}_{\tilde{n}})$ denote the noisy aggregated/time-scaled data, which are collected from the original observations $\boldsymbol{Y}$ in the same way as the corresponding error-free observations $\Tilde{\boldsymbol{X}}$ from $\boldsymbol{X}$ defined in Definition~\ref{def:aggregated}/\ref{def:time-scaled}. Then we can estimate the corresponding parameters $\tilde{\boldsymbol{\theta}}^*:=(\tilde{\boldsymbol{x}}^*_0, \tilde{A}^*)$ by minimizing the NLS objective function in \eqref{eq:Mn(theta)}, with the data replaced by $\tilde{\boldsymbol{Y}}$. Let such an estimator be $\hat{\tilde{\boldsymbol{\theta}}}:= (\hat{\tilde{\boldsymbol{x}}}_0,\hat{\tilde{A}})$. Using the relationship between $\tilde{\boldsymbol{\theta}}^*$ and $\boldsymbol{\theta}^*$ found in Corollary~\ref{corollary:aggregate}/\ref{corollary:timescaled}, we can define a mapping $g:\mathbb{R}^{d+d^2}\rightarrow\mathbb{R}^{d+d^2}$ from $\tilde{\boldsymbol{\theta}}^*$ to $\boldsymbol{\theta}^*$, and obtain an estimator of $\boldsymbol{\theta}^*$ by $\hat{\boldsymbol{\theta}}_{\tilde{n}}:=g(\hat{\tilde{\boldsymbol{\theta}}})$.
In the following, we present the expressions of $g$ and the consistency of the NLS estimators by using the aggregated and time-scaled observations, respectively.
\begin{corollary}[aggregated observations]\label{corollary:consistency aggregate}
Suppose assumptions \textnormal{A1, A2} are satisfied w.r.t. $\boldsymbol{\theta}^*$ and assumptions \textnormal{A3-A6} hold. Then the NLS estimator satisfies $\hat{\boldsymbol{\theta}}_{\tilde{n}}\xrightarrow{p} \boldsymbol{\theta}^*$, as $\tilde{n}\rightarrow \infty$, where $\hat{\boldsymbol{\theta}}_{\tilde{n}}:=g(\hat{\tilde{\boldsymbol{\theta}}}) := \big(k(I+e^{\hat{\tilde{A}}\Delta_t} + \cdots + e^{\hat{\tilde{A}}(k-1)\Delta_t})^{-1}\hat{\tilde{\boldsymbol{x}}}_0, \hat{\tilde{A}}\big)$.
\end{corollary}
\begin{corollary}[time-scaled observations]\label{corollary:consistency timescaled}
Suppose assumptions \textnormal{A1, A2} are satisfied w.r.t. $\boldsymbol{\theta}^*$ and assumptions \textnormal{A3-A6} hold. Then the NLS estimator satisfies $\hat{\boldsymbol{\theta}}_{\tilde{n}}\xrightarrow{p} \boldsymbol{\theta}^*$, as $\tilde{n}\rightarrow \infty$, where $\hat{\boldsymbol{\theta}}_{\tilde{n}}:=g(\hat{\tilde{\boldsymbol{\theta}}}) := (\hat{\tilde{\boldsymbol{x}}}_0, k\hat{\tilde{A}})$.
\end{corollary}
The proofs of Corollary~\ref{corollary:consistency aggregate} and Corollary~\ref{corollary:consistency timescaled} can be found in Appendix~\ref{proof:corollary3.1.1} and Appendix~\ref{proof:corollary3.1.2}, respectively. In addition to the assumptions mentioned in Theorem~\ref{theorem:consistency}, assumption A3 is now also required for consistency from degraded observations. These two corollaries show that the NLS estimators obtained from degraded observations are consistent for the true parameters of the original ODE \eqref{eq:ODE model}. Since we have established the identifiability conditions for the original system parameters of the ODE (\ref{eq:ODE model}) from the degraded observations in Corollary~\ref{corollary:aggregate}/\ref{corollary:timescaled} in Section \ref{sec:identifiability}, the consistency of their NLS estimators follows naturally from Theorem~\ref{theorem:consistency}. To see this, we first show that the NLS estimator from aggregated/time-scaled observations converges to the true system parameter of the new ODE system, i.e., $\hat{\tilde{\boldsymbol{\theta}}}\xrightarrow{p} \tilde{\boldsymbol{\theta}}^*$, as $\tilde{n}\rightarrow \infty$. Since we have derived the mapping $g$ such that $g(\tilde{\boldsymbol{\theta}}^*) = \boldsymbol{\theta}^*$ in Corollary~\ref{corollary:aggregate}/\ref{corollary:timescaled}, applying $g(\cdot)$ to $\hat{\tilde{\boldsymbol{\theta}}}$ and $\tilde{\boldsymbol{\theta}}^*$ and invoking the multivariate continuous mapping theorem yields $g(\hat{\tilde{\boldsymbol{\theta}}})\xrightarrow{p} g(\tilde{\boldsymbol{\theta}}^*)$, as $\tilde{n}\rightarrow \infty$. That is, $\hat{\boldsymbol{\theta}}_{\tilde{n}}\xrightarrow{p} \boldsymbol{\theta}^*$, as $\tilde{n}\rightarrow \infty$.
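For completeness, the two mappings $g$ can be implemented directly from the formulae in the corollaries; in the following sketch, the function names and the use of \texttt{numpy}/\texttt{scipy} are our own conventions.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def g_aggregated(x0_tilde, A_tilde, k, dt):
    # x0 = k * (I + e^{A dt} + ... + e^{A (k-1) dt})^{-1} x0_tilde,  A = A_tilde
    S = sum(expm(A_tilde * j * dt) for j in range(k))  # j = 0 gives I
    return k * np.linalg.solve(S, x0_tilde), A_tilde

def g_timescaled(x0_tilde, A_tilde, k):
    # x0 = x0_tilde,  A = k * A_tilde
    return x0_tilde, k * A_tilde
\end{verbatim}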
\subsection{Asymptotic normality}\label{section:asymptotic normality}
After establishing the consistency of our NLS estimators, we can study their asymptotic distributions.
\begin{theorem}\label{theorem:Asymptotic normality}
Suppose assumptions \textnormal{A1, A2} are satisfied w.r.t. $\boldsymbol{\theta}^*$ and assumptions \textnormal{A4-A7} hold. Then $\sqrt{n}(\hat{\boldsymbol{\theta}}_n -\boldsymbol{\theta}^* )\xrightarrow{d} N(\boldsymbol{0}, H^{-1}VH^{-1})$, as $n\rightarrow \infty$, where $H= \nabla_{\boldsymbol{\theta}}^2M(\boldsymbol{\theta}^*)$ is the Hessian matrix of $M(\boldsymbol{\theta})$ at $\boldsymbol{\theta}^*$, and $V = \lim_{n\rightarrow \infty}\mathrm{var}(\sqrt{n}\nabla_{\boldsymbol{\theta}} M_n(\boldsymbol{\theta}^*))$, with $\nabla_{\boldsymbol{\theta}} M_n(\boldsymbol{\theta}^*)$ the gradient of $M_n(\boldsymbol{\theta})$ at $\boldsymbol{\theta}^*$. $M(\boldsymbol{\theta})$ and $M_n(\boldsymbol{\theta})$ are defined in Equations \textnormal{(\ref{eq:M(theta)})} and \textnormal{(\ref{eq:Mn(theta)})}, respectively.
\end{theorem}
The proof of Theorem~\ref{theorem:Asymptotic normality} can be found in Appendix~\ref{proof:theorem3.2}. The explicit formulae of matrices $H$ and $V$ are derived, with
\begin{equation}\label{eq:H}
H := H(T,\boldsymbol{\theta}^*) := 2\int_0^T S(\boldsymbol{\theta}^*, t)^{\top}S(\boldsymbol{\theta}^*, t)/T\,dt \,,
\end{equation}
and
\begin{equation}\label{eq:V}
V := V(T,\boldsymbol{\theta}^*) := 4\int_0^T R(\boldsymbol{\theta}^*, t)^{\top}R(\boldsymbol{\theta}^*, t)/T\,dt \,,
\end{equation}
respectively, where
\begin{equation*}
S(\boldsymbol{\theta}^*, t) := \big(e^{A^*t}, Z_{11}^*(t) \boldsymbol{x}_0^*, \cdots, Z_{1d}^*(t) \boldsymbol{x}_0^*, \cdots, Z_{dd}^*(t) \boldsymbol{x}_0^*\big)\,,
\end{equation*}
and
\begin{equation*}
R(\boldsymbol{\theta}^*, t) := \Sigma^{1/2} \big(e^{A^*t}, Z_{11}^*(t) \boldsymbol{x}_0^*, \cdots, Z_{1d}^*(t) \boldsymbol{x}_0^*, \cdots, Z_{dd}^*(t) \boldsymbol{x}_0^*\big)\,,
\end{equation*}
\noindent with $S(\boldsymbol{\theta}^*, t), R(\boldsymbol{\theta}^*, t) \in \mathbb{R}^{d\times (d+d^2)} $ are matrix functions of $\boldsymbol{\theta}^* = (\boldsymbol{x}_0^*, A^*)$ and time $t$. And
\begin{equation*}
Z^*_{jk}(t) = \cfrac{\partial e^{At}}{\partial a_{jk}} \bigg |_{A = A^*}
\end{equation*}
denotes the matrix derivative of $e^{At}$ w.r.t. $a_{jk}$ at the true parameter matrix $A^*$, with $a_{jk}$ denoting the $jk$-th entry of parameter matrix $A$, for all $j,k=1,\ldots,d$. When parameter matrix $A^*$ has $d$ distinct eigenvalues, according to \citep{kalbfleisch1985analysis,tsai2003note} $Z^*_{jk}(t) $ can be explicitly expressed as:
\begin{equation*}
Z_{jk}^*(t) = Q^*[ \{(Q^*)_{.j}^{-1}Q_{k.}^*\}\circ U^*(t)] (Q^*)^{-1} \,,
\end{equation*}
with $Q^*, U^*(t)$ corresponding to the true parameter matrix $A^*$. That is, the Jordan decomposition of $A^*$ is
\begin{equation*}
A^*=Q^*\Lambda^* (Q^*)^{-1}\,,
\end{equation*}
where $\Lambda^* = \text{diag}(\lambda_1^*, \cdots, \lambda_d^*)$, with $\lambda_1^*, \lambda_2^*, \cdots, \lambda_d^*$ being the eigenvalues of $A^*$; under assumption A2, these eigenvalues are distinct real values. The column vector $(Q^*)_{.j}^{-1}$ stands for the $j$th column of the matrix $(Q^*)^{-1}$, and the row vector $Q_{k.}^*$ denotes the $k$th row of the matrix $Q^*$.
Let $B\circ C$ denote the Hadamard product, with each element
\begin{equation*}
(B\circ C)_{ij} = (B)_{ij}(C)_{ij}\,,
\end{equation*}
where matrices $B$ and $C$ are of the same dimension.
And $U^*(t)$ has the form:
\begin{equation*}
U^*(t) = \begin{bmatrix}
te^{\lambda_1^*t} &\cfrac{e^{\lambda_1^* t}-e^{\lambda_2^* t}}{\lambda_1^*-\lambda_2^*} & \cdots & \cfrac{e^{\lambda_1^* t}-e^{\lambda_d^* t}}{\lambda_1^*-\lambda_d^*}\\ \\
\cfrac{e^{\lambda_2^* t}-e^{\lambda_1^* t}}{\lambda_2^*-\lambda_1^*} & te^{\lambda_2^* t} & \cdots & \cfrac{e^{\lambda_2^* t}-e^{\lambda_d^* t}}{\lambda_2^*-\lambda_d^*}\\
\vdots & \vdots & \ddots & \vdots \\
\cfrac{e^{\lambda_d^* t}-e^{\lambda_1^* t}}{\lambda_d^*-\lambda_1^*} & \cfrac{e^{\lambda_d^* t}-e^{\lambda_2^* t}}{\lambda_d^*-\lambda_2^*} & \cdots & te^{\lambda_d^* t}
\end{bmatrix}\,.
\end{equation*}
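For concreteness, $Z_{jk}^*(t)$ can be computed directly from this formula; the sketch below uses zero-based indices $j,k$ and assumes, per A2, that the eigenvalues of $A$ are real and distinct (all names are our own).
\begin{verbatim}
import numpy as np

def Z_jk(A, t, j, k):
    # Z_jk(t) = Q [ {(Q^{-1})_{.j} Q_{k.}} o U(t) ] Q^{-1}
    lam, Q = np.linalg.eig(A)      # real, distinct eigenvalues assumed
    Qinv = np.linalg.inv(Q)
    d = A.shape[0]
    U = np.empty((d, d))
    for p in range(d):
        for q in range(d):
            if p == q:
                U[p, q] = t * np.exp(lam[p] * t)
            else:
                U[p, q] = (np.exp(lam[p] * t) - np.exp(lam[q] * t)) \
                          / (lam[p] - lam[q])
    outer = np.outer(Qinv[:, j], Q[k, :])   # (Q^{-1})_{.j} Q_{k.}
    return Q @ (outer * U) @ Qinv
\end{verbatim}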
It is worth mentioning that, in order to attain the explicit expressions of the matrices $H$ and $V$, and hence the explicit formula of the asymptotic covariance matrix $H^{-1}VH^{-1}$, we require the true parameter matrix $A^*$ to have $d$ distinct eigenvalues. Otherwise, the matrix $U^*(t)$ cannot be expressed in the above form, so the building block $Z_{jk}^*(t)$ of $H$ and $V$ has no closed form. As mentioned earlier, this is also part of the reason why assumption A2 cannot be relaxed. Another interesting finding is that, under assumption A1, i.e., when $\{\boldsymbol{x}_0^*, A^*\boldsymbol{x}_0^*, \ldots, (A^*)^{d-1}\boldsymbol{x}_0^*\}$ are linearly independent, we prove that both matrices $H$ and $V$ are positive definite; since they are also symmetric, they are nonsingular.
From the theorem, we see that the NLS estimator is asymptotically normal, with convergence rate $n^{-1/2}$. This rate matches that of the standard parametric NLS estimator \citep{jennrich1969asymptotic, wu1981asymptotic}, which is reasonable because our model in \eqref{eq:measurement model} is a parametric one.
Now, with the asymptotic normality result, if we can estimate the asymptotic covariance matrix $\Sigma_n:= H^{-1}VH^{-1}$, we can perform statistical inference for the unknown system parameters. In particular, we derive the explicit expressions of matrices $H$ and $V$ in \eqref{eq:H} and \eqref{eq:V}. According to their formulae, they are functions of the true system parameter $\boldsymbol{\theta}^*$, which are unknown in practice. Therefore, we approximate $H$ and $V$ by substituting $\boldsymbol{\theta}^*$ with the NLS parameter estimate $\hat{\boldsymbol{\theta}}_n$ in their formulae. Then the inference can be performed based on the approximated covariance matrix, denoted by $\hat{\Sigma}_n$. In the following two subsections, we introduce the details of the inference.
\subsubsection{Confidence sets for unknown parameters}\label{confidence sets}
Based on Theorem \ref{theorem:Asymptotic normality}, we can derive the approximate confidence sets for the unknown true parameters $\boldsymbol{\theta}^*$. There are two kinds of confidence sets.
The first way is to construct the simultaneous confidence region (CR) for all the $d+d^2$ parameters $\boldsymbol{\theta}^*_i$, where $\boldsymbol{\theta}^*_i$ denotes the $i$th entry of $\boldsymbol{\theta}^*$, and $i=1,\ldots,d+d^2$. According to Theorem \ref{theorem:Asymptotic normality}, we have
\begin{equation*}
n(\hat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*)^{\top} \Sigma_n^{-1} (\hat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*) \xrightarrow{d} \chi_{d+d^2}^2\,, \mbox{ as } n\rightarrow \infty\,.
\end{equation*}
Therefore, we approximate the $1-\alpha$ CR by the set
\begin{equation}\label{eq:CR}
\bigg\{\boldsymbol{\theta}: \quad n(\hat{\boldsymbol{\theta}}_{n} - \boldsymbol{\theta})^{\top} \hat{\Sigma}_n^{-1} (\hat{\boldsymbol{\theta}}_{n} - \boldsymbol{\theta}) \leq \chi_{d+d^2}^2(1-\alpha)\bigg\}\,,
\end{equation}
where $\chi_{m}^2(1-\alpha)$ denotes the upper-tail critical value of $\chi^2$ distribution with $m$ degrees of freedom at significance level $\alpha$.
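In our simulations, checking whether a given $\boldsymbol{\theta}$ falls inside this region reduces to evaluating the quadratic form in \eqref{eq:CR}; a minimal sketch (with \texttt{scipy.stats.chi2} supplying the critical value, and all names our own) is:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def in_confidence_region(theta_hat, theta, Sigma, n, alpha=0.05):
    # n (theta_hat - theta)' Sigma^{-1} (theta_hat - theta) <= chi2 quantile
    diff = theta_hat - theta
    stat = n * diff @ np.linalg.solve(Sigma, diff)
    return stat <= chi2.ppf(1 - alpha, df=theta.size)
\end{verbatim}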
The second way is to construct a pointwise confidence interval (CI) for each $\boldsymbol{\theta}^*_i$, $i=1,\ldots,d+d^2$. Based on Theorem $\ref{theorem:Asymptotic normality}$, we can derive that
\begin{equation*}
\sqrt{n}(\hat{\boldsymbol{\theta}}_{ni} - \boldsymbol{\theta}^*_i) \xrightarrow{d} N(0, D_i(\Sigma_n))\,, \mbox{ as } n\rightarrow \infty\,,
\end{equation*}
where $\hat{\boldsymbol{\theta}}_{ni}$ denotes the $i$th entry of $\hat{\boldsymbol{\theta}}_n$ and $D_i(M)$ denotes the $i$th diagonal entry of matrix $M$. Then, the $1-\alpha$ CI for each $\boldsymbol{\theta}_i^*$ can be estimated by
\begin{equation}\label{eq:CI}
\bigg[\hat{\boldsymbol{\theta}}_{ni} - z_{\alpha/2}\sqrt{D_i(\hat{\Sigma}_n)/n}\,, \hspace{0.3cm} \hat{\boldsymbol{\theta}}_{ni} + z_{\alpha/2}\sqrt{D_i(\hat{\Sigma}_n)/n}\bigg]\,,
\end{equation}
where $z_{\alpha/2}$ is the two-tailed critical value of the standard normal distribution at significance level $\alpha$.
\subsubsection{Infer the causal structure of the ODE system}\label{hypothesis test}
Another application of Theorem \ref{theorem:Asymptotic normality} is to infer the causal structure between variables in the ODE system, i.e., to test whether $a_{jk}^* = 0$, where $a_{jk}^*$ denotes the $jk$-th entry of the true parameter matrix $A^*$, $j,k=1,\ldots,d$. If we denote the $d$ variables in the ODE system (\ref{eq:ODE model}) by $v_i$, for $i = 1,\ldots, d$, then the entry $a_{jk}^* \neq 0$ implies that there is a direct causal link from variable $v_k$ to variable $v_j$. Note that the ODE system is fully observable, i.e., there is no latent variable interacting with the system.
Specifically, we propose to test
\begin{equation}
H_0: a_{jk}^* = 0 \hspace{0.2cm} \mbox{vs} \hspace{0.2cm} H_1: a_{jk}^* \neq 0
\end{equation}
by verifying the following inequality:
\begin{equation}\label{eq:test ajk}
|\hat{a}_{jk}| > z_{\alpha/2}\sqrt{D_{d+(j-1) d + k}(\hat{\Sigma}_n)/n}\,,
\end{equation}
where $\hat{a}_{jk}$ denotes the estimator of $a_{jk}^*$, which is the test statistic and equals the $d+(j-1)d+k$-th entry of $ \hat{\boldsymbol{\theta}}_n$.
If \eqref{eq:test ajk} holds, we have significant evidence to reject the null hypothesis at the significance level $\alpha$, and conclude that $a_{jk}^* \neq 0$.
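Operationally, \eqref{eq:test ajk} is a per-entry Wald test; a minimal sketch (mapping the one-based pair $(j,k)$ to the $d+(j-1)d+k$-th entry of $\hat{\boldsymbol{\theta}}_n$, with \texttt{scipy.stats.norm} supplying $z_{\alpha/2}$) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def reject_a_jk_zero(theta_hat, Sigma, n, d, j, k, alpha=0.05):
    idx = d + (j - 1) * d + k - 1          # zero-based array index
    z = norm.ppf(1 - alpha / 2)
    se = np.sqrt(Sigma[idx, idx] / n)
    return abs(theta_hat[idx]) > z * se
\end{verbatim}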
\subsubsection{Asymptotic normality of NLS estimators from degraded observations}
Here we present the asymptotic normality of NLS estimators from degraded observations. Recalling the transformation rules from $\tilde{\boldsymbol{\theta}}^*$ to $\boldsymbol{\theta}^*$, $g$, defined in Corollary~\ref{corollary:consistency aggregate}/\ref{corollary:consistency timescaled}, we denote its gradient at $\tilde{\boldsymbol{\theta}}^*$ by $G:=\nabla g(\tilde{\boldsymbol{\theta}}^*)$. We have derived the explicit formulae of $H(T,\boldsymbol{\theta}^*)$ and $V(T, \boldsymbol{\theta}^*)$ as matrix functions of $T$ and $\boldsymbol{\theta}^*$ in ~\eqref{eq:H}/\eqref{eq:V}. Then we establish the following corollaries.
\begin{corollary}[aggregated observations]\label{corollary:normality aggregate}
Suppose assumptions \textnormal{A1, A2} are satisfied w.r.t. $\boldsymbol{\theta}^*$, assumptions \textnormal{A3-A6} hold, and assumption \textnormal{A7} is satisfied w.r.t. $\tilde{\boldsymbol{\theta}}^*$. Then $\sqrt{\tilde{n}}(\hat{\boldsymbol{\theta}}_{\tilde{n}} -\boldsymbol{\theta}^* )\xrightarrow{d} N(\boldsymbol{0}, G\tilde{H}^{-1}\tilde{V} \tilde{H}^{-1}G^{\top})$, as $\tilde{n}\rightarrow \infty$, where $\hat{\boldsymbol{\theta}}_{\tilde{n}}$ is defined in Corollary~\ref{corollary:consistency aggregate}, $\tilde{H} = H(\tilde{T},\tilde{\boldsymbol{\theta}}^*)$ and $\tilde{V} = V(\tilde{T},\tilde{\boldsymbol{\theta}}^*)/k$, with $\tilde{T} = (\lfloor n/k\rfloor-1)kT/(n-1)$ and $\tilde{\boldsymbol{\theta}}^* = (\tilde{\boldsymbol{x}}_0^*, \tilde{A}^*) = \big((I+e^{A^*\Delta_t} + \cdots + e^{A^*(k-1)\Delta_t})\boldsymbol{x}_0^*/k, A^*\big)$.
\end{corollary}
\begin{corollary}[time-scaled observations]\label{corollary:normality timescaled}
Suppose assumptions \textnormal{A1, A2} are satisfied w.r.t. $\boldsymbol{\theta}^*$, assumptions \textnormal{A3-A6} hold, and assumption \textnormal{A7} is satisfied w.r.t. $\tilde{\boldsymbol{\theta}}^*$. Then $\sqrt{\tilde{n}}(\hat{\boldsymbol{\theta}}_{\tilde{n}} -\boldsymbol{\theta}^* )\xrightarrow{d} N(\boldsymbol{0}, G\tilde{H}^{-1}\tilde{V} \tilde{H}^{-1}G^{\top})$, as $\tilde{n}\rightarrow \infty$, where $\hat{\boldsymbol{\theta}}_{\tilde{n}}$ is defined in Corollary~\ref{corollary:consistency timescaled}, $\tilde{H} = H(kT,\tilde{\boldsymbol{\theta}}^*)$ and $\tilde{V} = V(kT,\tilde{\boldsymbol{\theta}}^*)$, with $\tilde{\boldsymbol{\theta}}^* = (\tilde{\boldsymbol{x}}_0^*, \tilde{A}^*) =(\boldsymbol{x}_0^*, A^*/k)$.
\end{corollary}
The proofs of Corollary~\ref{corollary:normality aggregate} and Corollary~\ref{corollary:normality timescaled} can be found in Appendix~\ref{proof:corollary3.2.1} and Appendix~\ref{proof:corollary3.2.2}, respectively.
With the consistency property of the NLS estimators, these two corollaries can be directly derived from Theorem \ref{theorem:Asymptotic normality} by using the multivariate delta method. The explicit formulae of the matrices $G$, $\tilde{H}$ and $\tilde{V}$ for aggregated/time-scaled observations are derived in the proofs.
It is worth noting that the matrix $\tilde{V}$ in Corollary~\ref{corollary:normality aggregate} is not simply $V(T, \boldsymbol{\theta}^*)$ with $T$ and $\boldsymbol{\theta}^*$ replaced by $\tilde{T}$ and $\tilde{\boldsymbol{\theta}}^*$; it is additionally reduced by a factor of $k$. The reason is that the formula for $V$ in~\eqref{eq:V} involves the covariance matrix $\Sigma$ of the error term $\boldsymbol{\epsilon}_i$. By the generation rule of aggregated observations in Definition~\ref{def:aggregated}, the covariance of the aggregated noise term $\tilde{\boldsymbol{\epsilon}}_i$ is $k$ times smaller than that of the original one, that is, $\tilde{\Sigma} = \Sigma/k$. Thanks to this reduced variance, the parameter estimates from aggregated observations can reach the same level of accuracy as those from the original observations with a much smaller sample size, as we will show in the simulation results in subsection~\ref{simulation:aggregated}.
Now that we have derived the asymptotic normality results for aggregated/time-scaled observations, we can perform statistical inference for the unknown original system parameters $\boldsymbol{\theta}^*$ in the same way as introduced in subsections~\ref{confidence sets} and~\ref{hypothesis test}.
\section{Simulations}\label{sec:simulation}
In this section, we illustrate the theoretical results established in Section \ref{section:large_sample} by simulation.
\subsection{Data simulation}\label{subsec:data simulation}
For each $d=2,3,4$, we first randomly generate a $d\times d$ parameter matrix $A_d^*$ and a $d\times 1$ initial condition $\boldsymbol{x}_{0d}^*$ as the true system parameters of the $d$-dimensional ODE system (\ref{eq:ODE model}). Moreover, to test whether $a_{jk}^* = 0$, we randomly set several entries of each $A_d^*$ to zero. Without loss of generality, we set $T=1$. Then $n$ equally-spaced noisy observations are generated based on Equation~\eqref{eq:measurement model} on the time interval $[0, 1]$ with error term $\boldsymbol{\epsilon}_i\sim N(\boldsymbol{0}, \text{diag}(0.05^2, \ldots, 0.05^2))$. We test various sample sizes for each $d$-dimensional ODE system and run 200 random replications for each configuration.
The $A_d^*$ and $\boldsymbol{x}_{0d}^*$ are shown below.\\
$A_2^* = \begin{bmatrix}
1.76 & -0.1\\
0.98 & 0
\end{bmatrix}$,
$A_3^* = \begin{bmatrix}
1.76 & 0 & 0.98\\
2.24 & 0 & -0.98\\
0.95 & 0 & -0.1
\end{bmatrix}$,
$A_4^* = \begin{bmatrix}
1.76 & 0.9 & 0 & 2.24\\
1.87 & -0.98 & 0 & -1.15\\
-1.1 & 0 & 0.64 & 0\\
1.26 & 0.12 & 0.94 & 0
\end{bmatrix}$,
\noindent and $\boldsymbol{x}_{02}^*=[1.87, -0.98]^{\top}$,
$\boldsymbol{x}_{03}^* = [0.41,0.14,1.45]^{\top}$,
$\boldsymbol{x}_{04}^* = [-0.42,1.01,1.97,-0.38]^{\top}$.
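As an illustration of the data-generating step, the $d=3$ observations can be produced along the lines of the following sketch (the function name, the random seed, and the use of \texttt{scipy.linalg.expm} are our own conventions):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def simulate(A, x0, n, T=1.0, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    ts = np.linspace(0.0, T, n)                  # t_1 = 0, t_n = T
    X = np.stack([expm(A * t) @ x0 for t in ts])
    return ts, X + sigma * rng.standard_normal(X.shape)

A3 = np.array([[1.76, 0.00,  0.98],
               [2.24, 0.00, -0.98],
               [0.95, 0.00, -0.10]])
x03 = np.array([0.41, 0.14, 1.45])
ts, Y = simulate(A3, x03, n=500)
\end{verbatim}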
Note that since the NLS loss function~\eqref{eq:Mn(theta)} is non-convex, in practical applications one may require a global optimization technique to obtain the NLS estimates. However, in this paper we focus on the theoretical statistical properties of the NLS estimator, and the non-convexity of the NLS loss function does not affect the derived theoretical results. Therefore, for the purpose of illustrating our theoretical results, we do not apply a global optimization technique in our simulation due to its high computational cost. Instead, we use a bound-constrained minimization technique~\cite{branch1999subspace} to obtain the NLS estimates. In order to reach the global minimum (or a local minimum close enough to it), we apply two tricks when implementing the optimization method. Firstly, we initialize the parameter with a value close to the true parameter (e.g., $\boldsymbol{\theta}^* - 0.001$). Secondly, we constrain the bounds of the parameter within a reasonable neighbourhood of the true parameter (e.g., $[\boldsymbol{\theta}^* - 0.5,\boldsymbol{\theta}^*+0.5]$). According to the simulation results below, the attained NLS estimates are precise enough to illustrate the correctness of our theoretical results. In other words, if a global optimization technique were applied, the potentially more accurate NLS estimates would only provide further support for our theoretical results.
\begin{figure*}[ht]
\centering
\includegraphics[width=15cm]{results.pdf}
\caption{Simulation results for $d=2,3,4$ dimensional ODE system, respectively}
\label{fig:simulation}
\end{figure*}
\subsection{Metrics}
\noindent{\bf Mean Square Error (MSE)} is introduced to check the consistency of the parameter estimator. It is defined as:
\begin{equation*}
\text{MSE} = \frac{1}{N}\sum_{j=1}^{N} \parallel \hat{\boldsymbol{\theta}}_{n}^{(j)} - \boldsymbol{\theta}^*\parallel_2^2\,,
\end{equation*}
where $\hat{\boldsymbol{\theta}}_{n}^{(j)}$ denotes the estimated parameter of the $j$th replication, and $N$ is the number of replications.
\noindent{\bf Within CR rate} is defined as the rate of replications whose $95\%$ CR contains the true parameters $\boldsymbol{\theta}^*$. Whether the $95\%$ CR includes $\boldsymbol{\theta}^*$ at each replication is determined by Equation (\ref{eq:CR}). It is worth mentioning that we use this metric to test the correctness of our asymptotic normality theory (e.g., the covariance matrix $\Sigma_n$), not to infer the confidence region. Therefore, we use the true value of $\Sigma_n$ in Equation (\ref{eq:CR}) rather than the estimated one.
\noindent{\bf Type I/II error rate} is calculated based on the hypothesis test introduced in subsection~\ref{hypothesis test}, with significance level $\alpha = 0.05$. The type I error rate is the rate of replications rejecting the null hypothesis when $a_{jk}^* = 0$, and the type II error rate is the rate of replications failing to reject the null hypothesis when $a_{jk}^* \neq 0$. Whether we reject the null hypothesis is determined by Equation (\ref{eq:test ajk}).
\subsection{Results analysis}\label{results analysis}
The simulation results are presented in Figure~\ref{fig:simulation}. We first explain the legend in the figure. Since $A_2^*$ only has one zero entry $a_{22}^*$, the type I error rate is based on $a_{22}^*$. However, $A_3^*$ and $A_4^*$ have multiple zero entries, and the type I error rate is similar for each zero entry; therefore, we show the average value over all zero entries in $A_3^*$ and $A_4^*$, labelled as avg. For the type II error rate in the cases $d=2$ and $d=3$, we only show the values for $a_{12}^*$ and $a_{33}^*$, respectively, because all other non-zero entries in $A_2^*$ and $A_3^*$ have zero or near-zero type II error rates even when the sample size $n$ is small. For $d=4$, we present the results of all $11$ non-zero entries in $A_4^*$, but due to space limitations, we only label entry $a_{42}^*$, which shows a different trend from the others.
It can be seen from the first column in Figure~\ref{fig:simulation} that for all three cases where $d=2,3$ and $4$, MSE decreases and approaches zero with the increase of sample size $n$, which indicates the consistency of the estimators. As can be seen from the figure, for $d=2$ and $d=3$ cases, the within CR rate is around $95\%$, and the type I error rate is around $5\%$ for all different sample sizes $n$. Moreover, the type II error rate reduces as the sample size increases and attains or approaches zero when the sample size is large enough. This result implies the correctness of our asymptotic normality theory and indicates that the test of causal structure inference for the ODE system is powerful.
For the 4-dimensional case, we can see that with the increase of sample size, the within CR rate increases and the type I error rate decreases, which implies that as the parameter estimates approach their true values, the within CR rate and type I error rate move closer to $95\%$ and $5\%$, respectively. They do not attain their ideal values in our simulation because the parameter estimates are not precise enough under the current sample sizes, due to the high dimension of the system parameters. For the same reason, the type II error rate of entry $a_{42}^*$ remains high. This result is reasonable because $a_{42}^*=0.12$ is close to zero: when the parameter estimate is not accurate enough, it is easy to mistakenly fail to reject the null hypothesis. The type II error rates for the $d=2$ and $d=3$ cases also support this conclusion. We can see that the absolute values of $a_{12}^*$ in $A_2^*$ and $a_{33}^*$ in $A_3^*$ are also small; nevertheless, as the sample size increases, with the help of sufficiently accurate parameter estimates, their type II error rates approach zero.
It is worth noting that in the 4-dimensional case, the type II error rates of other entries approach zero when the sample size is much smaller compared to the case of $a_{42}^*$. This result implies that when the causal effect between variables, i.e., $|a_{jk}^*|$, is significant, the causal structure can be easily and correctly discovered using our method. However, for cases where the causal effect is small or negligible, we need a sufficiently large sample to discover the causal relationship.
\subsection{Simulation results from degraded observations}
In this subsection, we illustrate the corollaries built on aggregated/time-scaled observations in Section~\ref{section:large_sample} by simulation.
We chose the $d=3$ case with the true system parameters $(\boldsymbol{x}_{03}^*, A_3^*)$ the same as the one we used in subsection~\ref{subsec:data simulation}. The original noisy observations are generated using the same way we introduced in subsection~\ref{subsec:data simulation}. And then the aggregated/time-scaled observations are generated from the original ones based on Definition~\ref{def:aggregated}/Definition~\ref{def:time-scaled} with various $k$.
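Since Definitions~\ref{def:aggregated} and~\ref{def:time-scaled} are stated earlier in the paper, we only sketch one reading of the two degradation rules that is consistent with the parameter mappings in Corollaries~\ref{corollary:consistency aggregate} and~\ref{corollary:consistency timescaled}: block averages of $k$ consecutive observations, and a time axis stretched by the factor $k$. Details such as the time stamp attached to each block are our own conventions.
\begin{verbatim}
import numpy as np

def aggregate(ts, Y, k):
    # average each block of k consecutive observations;
    # averaging k i.i.d. noise terms yields covariance Sigma / k
    m = (len(ts) // k) * k
    Y_agg = Y[:m].reshape(-1, k, Y.shape[1]).mean(axis=1)
    t_agg = ts[:m].reshape(-1, k)[:, 0]   # block-start times (a convention)
    return t_agg, Y_agg

def time_scale(ts, Y, k):
    # same values, time stamps multiplied by k (so A becomes A / k)
    return k * ts, Y
\end{verbatim}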
\subsubsection{Simulation results from aggregated observations}\label{simulation:aggregated}
In the following, we show the simulation results for aggregated observations with $k=5,10, \text{and } 20$, respectively. Moreover, to compare the results from the aggregated and the original observations, we also present the simulation results from the original observations in Table~\ref{aggregated_k1}. We use $n$ and $\tilde{n}$ to denote the sample size of the original observations and the aggregated observations, respectively.
\begin{table}[hbt!]
\caption{Simulation results from original observations}
\label{aggregated_k1}
\centering
\begin{tabular}{lllllllllllll}
\toprule
\multicolumn{2}{c}{Sample Size} & \multirow{2}{*}{MSE} & \multirow{2}{*}{CR }
& \multicolumn{3}{c}{Type I Error Rate$\%$} & \multicolumn{6}{c}{Type II Error Rate$\%$} \\
\cmidrule(r){1-2} \cmidrule(r){5-7} \cmidrule{8-13}
$n$ & $\tilde{n}$ & & Rate$\%$ & $a_{12}$ & $a_{22}$ & $a_{32}$ & $a_{11}$ & $a_{13}$ & $a_{21}$ & $a_{23}$ & $a_{31}$ & $a_{33}$\\
\midrule
100 & - & 0.480 & 94 & 3 & 2 & 5.5 & 0 & 0 & 0 & 0 & 0 & 82.5 \\
200 & - & 0.243 & 97.5 & 5.5 & 4.5 & 4 & 0 & 0 & 0 & 0 & 0 & 75.5 \\
500 & - & 0.093 & 97 & 3.5 & 4 & 2.5 & 0 & 0 & 0 & 0 & 0 & 38.5 \\
1000 & - & 0.045 & 95 & 5 & 4.5 & 2 & 0 & 0 & 0 & 0 & 0 & 8.5 \\
2000 & - & 0.023 & 98 & 7 & 4 & 3.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[hbt!]
\caption{Simulation results from aggregated observations with $k=5$}
\label{aggregated_k5}
\centering
\begin{tabular}{lllllllllllll}
\toprule
\multicolumn{2}{c}{Sample Size} & \multirow{2}{*}{MSE} & \multirow{2}{*}{CR }
& \multicolumn{3}{c}{Type I Error Rate$\%$} & \multicolumn{6}{c}{Type II Error Rate$\%$} \\
\cmidrule(r){1-2} \cmidrule(r){5-7} \cmidrule{8-13}
$n$ & $\tilde{n}$ & & Rate$\%$ & $a_{12}$ & $a_{22}$ & $a_{32}$ & $a_{11}$ & $a_{13}$ & $a_{21}$ & $a_{23}$ & $a_{31}$ & $a_{33}$\\
\midrule
100 & 20 & 0.498 & 95.5 & 1.5 & 2 & 2.5 & 0 & 0 & 0 & 0 & 0 & 88.5 \\
200 & 40 & 0.247 & 99 & 4.5 & 3.5 & 3 & 0 & 0 & 0 & 0 & 0 & 81 \\
500 & 100 & 0.092 & 97.5 & 3.5 & 4 & 2.5 & 0 & 0 & 0 & 0 & 0 & 42 \\
1000 & 200 & 0.045 & 95.5 & 5 & 4.5 & 2 & 0 & 0 & 0 & 0 & 0 & 9.5 \\
2000 & 400 & 0.023 & 98.5 & 6.5 & 4 & 3.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[hbt!]
\caption{Simulation results from aggregated observations with $k=10$}
\label{aggregated_k10}
\centering
\begin{tabular}{lllllllllllll}
\toprule
\multicolumn{2}{c}{Sample Size} & \multirow{2}{*}{MSE} & \multirow{2}{*}{CR }
& \multicolumn{3}{c}{Type I Error Rate$\%$} & \multicolumn{6}{c}{Type II Error Rate$\%$} \\
\cmidrule(r){1-2} \cmidrule(r){5-7} \cmidrule{8-13}
$n$ & $\tilde{n}$ & & Rate$\%$ & $a_{12}$ & $a_{22}$ & $a_{32}$ & $a_{11}$ & $a_{13}$ & $a_{21}$ & $a_{23}$ & $a_{31}$ & $a_{33}$\\
\midrule
100 & 10 & 0.536 & 84 & 1 & 0.5 & 2.5 & 0 & 0 & 0 & 0 & 0 & 94.5 \\
200 & 20 & 0.262 & 98 & 3 & 2.5 & 2.5 & 0 & 0 & 0 & 0 & 0 & 85.5 \\
500 & 50 & 0.093 & 98 & 3 & 3 & 1.5 & 0 & 0 & 0 & 0 & 0 & 47 \\
1000 & 100 & 0.045 & 96 & 5 & 4.5 & 2 & 0 & 0 & 0 & 0 & 0 & 8.5 \\
2000 & 200 & 0.023 & 98.5 & 6.5 & 3.5 & 3.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[hbt!]
\caption{Simulation results from aggregated observations with $k=20$}
\label{aggregated_k20}
\centering
\begin{tabular}{lllllllllllll}
\toprule
\multicolumn{2}{c}{Sample Size} & \multirow{2}{*}{MSE} & \multirow{2}{*}{CR }
& \multicolumn{3}{c}{Type I Error Rate$\%$} & \multicolumn{6}{c}{Type II Error Rate$\%$} \\
\cmidrule(r){1-2} \cmidrule(r){5-7} \cmidrule{8-13}
$n$ & $\tilde{n}$ & & Rate$\%$ & $a_{12}$ & $a_{22}$ & $a_{32}$ & $a_{11}$ & $a_{13}$ & $a_{21}$ & $a_{23}$ & $a_{31}$ & $a_{33}$\\
\midrule
100 & 5 & 0.858 & 30.5 & 0.5 & 1 & 0.5 & 0 & 0 & 0 & 0 & 3.5 & 99 \\
200 & 10 & 0.301 & 86 & 0.5 & 3 & 1.5 & 0 & 0 & 0 & 0 & 0 & 91 \\
500 & 25 & 0.093 & 98 & 3.5 & 2.5 & 1 & 0 & 0 & 0 & 0 & 0 & 53.5 \\
1000 & 50 & 0.045 & 96.5 & 4 & 3.5 & 2 & 0 & 0 & 0 & 0 & 0 & 11.5 \\
2000 & 100 & 0.023 & 99 & 6 & 4 & 3.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{table}
The results show that for all three cases where $k=5,10 \text{ and } 20$, MSE decreases and approaches zero with the increase of sample size $\tilde{n}$, which indicates the consistency of the estimators. As can be seen from the tables, the within CR rate is around $95\%$ and the type I error rate for each of the zero entries in $A$ is around $5\%$ when the sample size $\tilde{n}$ is large enough in all $k=5,10 \text{ and } 20$ cases. In addition, the type II error rate reduces to zero as the sample size $\tilde{n}$ increases. This result implies the correctness of our asymptotic normality theory and indicates the test of causal structure inference for the ODE system is powerful.
As can be seen from the tables, the type II error rate from the aggregated observations tends to be slightly greater than that from the original observations for each $n$; specifically, the greater $k$ is, the greater the type II error rate. This is reasonable because the sample size of the aggregated observations ($\tilde{n}$) is much smaller than that of the original ones ($n$), which reduces the accuracy of the parameter estimates when $\tilde{n}$ is not large enough, thus leading to a higher MSE and a higher type II error rate.
However, it can be seen from the last two rows of each of the four tables that the MSEs and type II error rates are almost the same in each case, which implies that when the sample size of the aggregated observations $\tilde{n}$ is large enough, the parameter estimates from the aggregated observations can reach the same level of accuracy as those from the original observations. In addition, the power of inferring the causal structure of the ODE system from aggregated observations can be as good as that from the original observations. The reason why the aggregated observations with a much smaller sample size $\tilde{n} = n/k$ can still perform as well as the original observations with the corresponding size $n$ is that the variance of the aggregated noise term $\tilde{\boldsymbol{\epsilon}}_i$ becomes $k$ times smaller than that of the original one $\boldsymbol{\epsilon}_i$, that is,
\begin{equation*}
\tilde{\Sigma} = \Sigma/k
\end{equation*}
based on the generation rules of the aggregated observations. Therefore, with a much smaller noise variance, the parameter estimates from aggregated observations can reach the same level of accuracy as that of the original observations with a much smaller sample size.
\subsubsection{Simulation results from time-scaled observations}
In the following, we show the simulation results for time-scaled observations with $k=0.01, 0.1, 1, 10, \text{ and }100$. Since all metrics except the MSE are identical across these cases, we present the results in a single table. The MSEs for different $k$ agree up to $10^{-5}$; the differences are therefore negligible, and one can safely conclude that the simulation results for time-scaled observations with various $k$ are the same as those of the original observations (i.e., $k=1$). This confirms the correctness of our theoretical results for time-scaled observations established in Section~\ref{section:large_sample}.
\begin{table}[hbt!]
\caption{Simulation results from time-scaled observations with $k=0.01, 0.1, 1, 10, 100$}
\label{time-scaled}
\centering
\begin{tabular}{lllllllllllll}
\toprule
\multicolumn{2}{c}{Sample Size} & \multirow{2}{*}{MSE} & \multirow{2}{*}{CR }
& \multicolumn{3}{c}{Type I Error Rate$\%$} & \multicolumn{6}{c}{Type II Error Rate$\%$} \\
\cmidrule(r){1-2} \cmidrule(r){5-7} \cmidrule{8-13}
$n$ & $\tilde{n}$ & & Rate$\%$ & $a_{12}$ & $a_{22}$ & $a_{32}$ & $a_{11}$ & $a_{13}$ & $a_{21}$ & $a_{23}$ & $a_{31}$ & $a_{33}$\\
\midrule
100 & 100 & 0.419 & 96 & 2 & 2.5 & 2 & 0 & 0 & 0 & 0 & 0 & 85.5 \\
200 & 200 & 0.269 & 94 & 7 & 6 & 4.5 & 0 & 0 & 0 & 0 & 0 & 72.5 \\
500 & 500 & 0.104 & 91.5 & 5.5 & 5.5 & 6 & 0 & 0 & 0 & 0 & 0 & 38.5 \\
1000 & 1000 & 0.050 & 92.5 & 5 & 6 & 3 & 0 & 0 & 0 & 0 & 0 & 9 \\
2000 & 2000 & 0.022 & 95 & 3.5 & 4.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 1 \\
\bottomrule
\end{tabular}
\end{table}
\section{Related Work}
In this section, we introduce the related work from three closely related aspects.
\subsection{Identifiability analysis of linear ODE systems}
Most current studies on identifiability analysis of parameters in linear dynamical systems are in control theory \citep{bellman1970structural, gargash1980necessary,glover1974parametrizations,grewal1976identifiability,rosenbrock1974structural}. In the applied mathematics area, \citep{stanhope2014identifiability,qiu2022identifiability} provided systematic studies of the identifiability analysis of linear ODE systems from a \textbf{single} trajectory. The authors of \citet{stanhope2014identifiability} also discussed using equally-spaced error-free observations to calculate the system parameters explicitly. Our work's main distinction is that we build the identifiability condition entirely on the parameters of interest, i.e., $(A, \boldsymbol{x}_0)$, without a further linear-independence assumption on the observations. With the help of our identifiability condition, the parameter estimator's asymptotic properties can be established under mild assumptions. Specifically, the covariance matrix of the asymptotic normal distribution can be explicitly expressed.
Thus, we can perform statistical inference for the unknown parameters. Moreover, we treat the initial condition $\boldsymbol{x}_0$ as a parameter while $\boldsymbol{x}_0$ is a given fixed value in their setting.
In the most recent work \citet{qiu2022identifiability}, the authors proposed several quantitative scores for identifiability analysis of linear ODEs in practice.
\subsection{Parameter estimation methods for ODE systems}
The NLS method is applied to estimate parameters in ODE systems in \citep{bock1983recent,biegler1986nonlinear,xue2010sieve}. Xue et al. \citep{xue2010sieve} studied asymptotic properties of the NLS estimator based on approximating the ODEs' solutions by the Runge-Kutta algorithm \citep{dormand1980family}. However, their work requires strong assumptions, which are complex and tedious to verify. In addition to the NLS method, the two-stage smoothing-based estimation method is also widely used for parameter estimation in ODE systems, and its asymptotic properties have been extensively explored \citep{varah1982spline,chen2008efficient,chen2008estimation,wu2014sparse,brunton2016discovering}. This method usually applies smoothing approaches such as penalized splines to estimate the state variables and their derivatives at the first stage; thus a large number of observations are needed to ensure the estimates' accuracy.
Principal differential analysis \citep{ramsay1996principal,heckman2000penalized,poyton2006parameter,qi2010asymptotic} and Bayesian approaches \citep{ghosh2021variational} were also proposed to estimate unknown parameters in ODE systems. In recent years, several neural-network-based parameter estimation methods for ODE systems have been proposed~\citep{rubanova2019latent,lu2021learning,qin2019data}. In these works, the authors use multiple (usually a large number of) trajectories instead of a single trajectory to train the neural network model, and aim at trajectory prediction. No identifiability is guaranteed.
\subsection{Connection between causality and differential equations}
As suggested in~\citet{aalen2012causality}, differential equations allow for a natural interpretation of causality in dynamic systems. The authors of~\citep{mooij2013ordinary, rubenstein2016deterministic, bongers2018random}
built an explicit bridge between differential equations and causal models by establishing the relationship between ODEs/Random Differential Equations (RDEs) and structural causal models. Hansen and Sokol provided a causal interpretation of Stochastic Differential Equations (SDEs)~\citep{hansen2014causal}. The authors of~\citet{bellot2021neural} proposed a method to consistently discover the causal structure of SDE systems based on penalized neural ODEs~\citep{chen2018neural}. These works build the theoretical connection between causality and differential equations in various ways, while our work proposes a method to infer the causal structure of ODEs from a statistical perspective.
\section{Conclusion}\label{conclusion}
In this paper, we derived a sufficient condition for identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations. Specifically, the observations lie on a single trajectory. Furthermore, we studied the consistency and asymptotic normality of the NLS estimator. The inference of unknown parameters based on the established theoretical results was also investigated. In particular, we proposed a new method to infer the causal structure of the ODE system. Finally, we extended the identifiability and asymptotic properties results to cases with aggregated and time-scaled observations.
In practice, a time series is most commonly collected at equally-spaced time points, which motivates us to focus on equally-spaced observations from a single trajectory in this work. However, as pointed out in~\citet{voelkle2013continuous}, using irregularly-spaced observations can be advantageous for obtaining more information about the dynamical system. Therefore, extending the study to irregularly-spaced observations from a single trajectory is a possible direction for future work.
\newpage
\section{Introduction}
Deep Convolutional Neural Networks (CNNs) have achieved substantial advances in a wide range of vision tasks, such as object detection and recognition \cite{alexnet,vggnet,googlenet,resnet,rcnn,faster-rcnn}, depth perception \cite{unsupervised-cnn-depth,monocular-depth}, visual relation detection \cite{zhang2017relation-1,zhang2017relation-2}, face tracking and alignment \cite{facial-detection1,facial-localization2,facial-alignment,wu-face-tracking-iccv,wu-face-tracking-pr}, object tracking \cite{luo2018end}, etc.
However, the superior performance of CNNs usually requires powerful hardware with abundant computing and memory resources, for example, high-end Graphics Processing Units (GPUs).
Meanwhile, there are growing demands to run vision tasks, such as augmented reality and intelligent navigation, on mobile hand-held devices and small drones. Most mobile devices are equipped with neither a powerful GPU nor an adequate amount of memory to run and store an expensive CNN model.
Consequently, the high demand for computation and memory becomes the bottleneck of deploying powerful CNNs on most mobile devices.
In general, there are three major approaches to alleviate this limitation. The first is to reduce the number of weights, as in Sparse CNN \cite{sparsecnn}. The second is to quantize the weights (\eg, QNN \cite{qnn} and DoReFa Net \cite{dorefanet}). The third is to quantize both weights and activations, with the extreme case of both weights and activations being binary.
In this work, we study the extreme case of the third approach, {\it i.e.}, binary CNNs.
They are also called 1-bit CNNs, as each weight parameter and activation can be represented by a single bit.
As demonstrated in \cite{xnornet}, up to $32 \times$ memory saving and $58 \times$ speedup on CPUs have been achieved for a 1-bit convolution layer, in which the computationally heavy matrix multiplication operations become light-weight bitwise XNOR operations and bit-count operations. Current binarization methods achieve accuracy comparable to real-valued networks on small datasets ({\it e.g.}, CIFAR-10 and MNIST). However, on large-scale datasets ({\it e.g.}, ImageNet), the binarization method based on AlexNet in \cite{binarynet} encounters severe accuracy degradation, {\it i.e.}, from $56.6\%$ to $27.9\%$ \cite{xnornet}. This reveals that the capability of conventional 1-bit CNNs is not sufficient to cover the great diversity of large-scale datasets like ImageNet. Another binary network called XNOR-Net \cite{xnornet} was proposed to enhance the performance of 1-bit CNNs by utilizing the absolute mean of weights and activations.
The objective of this study is to further improve 1-bit CNNs, as we believe its potential has not been fully explored.
One important observation is that, during the inference process, a 1-bit convolution layer generates integer outputs, due to the bit-count operations. The integer outputs become real values if there is a BatchNorm \cite{batchnorm} layer. But these real-valued activations are then binarized to $-1$ or $+1$ through the consecutive sign function, as shown in Fig. \ref{fig:shortcut_or_not}(a).
Obviously, compared to binary activations, these integers or real activations contain more information, which is lost in the conventional 1-bit CNNs \cite{binarynet}.
Inspired by this observation, we propose to keep these real activations via adding a simple yet effective shortcut, dubbed Bi-Real net. As shown in Fig. \ref{fig:shortcut_or_not}(b), the shortcut connects the real activations to an addition operator with the real-valued activations of the next block. By doing so, the representational capability of the proposed model is much higher than that of the original 1-bit CNNs, with only a negligible computational cost incurred by the extra element-wise addition and without any additional memory cost.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figure1.png}
\caption{Network with intermediate feature visualization, yellow lines denote value propagated inside the path being real while blue lines denote binary values. (a) 1-bit CNN without shortcut (b) proposed Bi-Real net with shortcut propagating the real-valued features. }
\label{fig:shortcut_or_not}
\end{figure}
Moreover, we further propose a novel training algorithm for 1-bit CNNs including three technical novelties:
\begin{itemize}
\item \textbf{Approximation to the derivative of the sign function with respect to activations.} As the sign function binarizing the activation is non-differentiable, we propose to approximate its derivative by a piecewise linear function in the backward pass, derived from the piecewise polynomial function that is a second-order approximation of the sign function. In contrast, the approximated derivative using a step function (\ie, $1_{|x|<1}$) proposed in \cite{binarynet} is derived from the clip function (\ie, clip(-1,x,1)), which is also an approximation to the sign function. We show that the piecewise polynomial function is a closer approximation to the sign function than the clip function. Hence, its derivative is more effective than the derivative of the clip function.
\item \textbf{Magnitude-aware gradient with respect to weights.} As the gradient of loss with respect to the binary weight is not large enough to change the sign of the binary weight, the binary weight cannot be directly updated using the standard gradient descent algorithm. In BinaryNet \cite{binarynet}, the real-valued weight is first updated using gradient descent, and the new binary weight is then obtained through taking the sign of the updated real weight. However, we find that the gradient with respect to the real weight is only related to the sign of the current real weight, while independent of its magnitude. To derive a more effective gradient, we propose to use a magnitude-aware sign function during training, then the gradient with respect to the real weight depends on both the sign and the magnitude of the current real weight. After convergence, the binary weight (\ie, -1 or +1) is obtained through the sign function of the final real weight for inference.
\item \textbf{Initialization.} As a highly non-convex optimization problem, the training of 1-bit CNNs is likely to be sensitive to initialization. In [17], the 1-bit CNN model is initialized using the real-valued CNN model with the ReLU function pre-trained on ImageNet. We propose to replace ReLU by the clip function in pre-training, as the activation of the clip function is closer to the binary activation than that of ReLU.
\end{itemize}
Experiments on ImageNet show that the above three ideas are useful to train 1-bit CNNs, including both Bi-Real net and other network structures. Specifically, their respective contributions to the improvements of top-1 accuracy are up to 12\%, 23\% and 13\% for an 18-layer Bi-Real net.
With the dedicatedly-designed shortcut and the proposed optimization techniques, our Bi-Real net, with only binary weights and activations inside each 1-bit convolution layer, achieves 56.4\% and 62.2\% top-1 accuracy with 18-layer and 34-layer structures, respectively, with up to 16.0$\times$ memory saving and 19.0$\times$ computational cost reduction compared to the full-precision CNN. Compared with the state-of-the-art model (\eg, XNOR-Net), Bi-Real net achieves 10\% higher top-1 accuracy on the 18-layer network.
\section{Related Work}
\textbf{Reducing the number of parameters.} Several methods have been proposed to compress neural networks by reducing the number of parameters and neural connections. For instance, He et al. \cite{resnet} proposed a bottleneck structure which consists of three convolution layers of filter size 1$\times$1, 3$\times$3 and 1$\times$1 with a shortcut connection as a preliminary building block to reduce the number of parameters and to speed up training. In SqueezeNet \cite{squeezenet}, some 3$\times$3 convolutions are replaced with 1$\times$1 convolutions, resulting in a 50$\times$ reduction in the number of parameters. FitNets \cite{fitnets} imitates the soft output of a large teacher network using a thin and deep student network, and in turn yields 10.4$\times$ fewer parameters and similar accuracy to a large teacher network on the CIFAR-10 dataset. In Sparse CNN \cite{sparsecnn}, a sparse matrix multiplication operation is employed to zero out more than 90\% of parameters to accelerate the learning process. Motivated by the Sparse CNN, Han et al. proposed Deep Compression \cite{deepcompression} which employs connection pruning, quantization with retraining and Huffman coding to reduce the number of neural connections, thus, in turn, reduces the memory usage.
\noindent\textbf{Parameter quantization.} The previous study \cite{fwfa} demonstrated that real-valued deep neural networks such as AlexNet \cite{alexnet}, GoogLeNet \cite{googlenet} and VGG-16 \cite{vggnet} only encounter marginal accuracy degradation when quantizing 32-bit parameters to 8-bit. In Incremental Network Quantization, Zhou et al. \cite{incremental} quantize the parameter incrementally and show that it is even possible to further reduce the weight precision to 2-5 bits with slightly higher accuracy than a full-precision network on the ImageNet dataset. In BinaryConnect \cite{binaryconnect}, Courbariaux et al. employ 1-bit precision weights (1 and -1) while maintaining sufficiently high accuracy on the MNIST, CIFAR10 and SVHN datasets.
Quantizing weights properly can achieve considerable memory savings with little accuracy degradation. However, acceleration via weight quantization is limited due to the real-valued activations (\ie, the input to convolution layers).
Several recent studies have been conducted to explore new network structures and/or training techniques for quantizing both weights and activations while minimizing accuracy degradation. Successful attempts include DoReFa-Net \cite{dorefanet} and QNN \cite{qnn}, which explore neural networks trained with 1-bit weights and 2-bit activations; their accuracy drops by 6.1\% and 4.9\%, respectively, on the ImageNet dataset compared to the real-valued AlexNet. Additionally, BinaryNet \cite{binarynet} uses only 1-bit weights and 1-bit activations in a neural network and achieves accuracy comparable to full-precision neural networks on the MNIST and CIFAR-10 datasets. In XNOR-Net \cite{xnornet}, Rastegari et al. further improve BinaryNet by multiplying the absolute mean of the weight filter and activation with the 1-bit weight and activation to improve the accuracy. ABC-Net \cite{dji} proposes to enhance the accuracy by using more weight bases and activation bases. The results of these studies are encouraging, but admittedly, due to the loss of precision in weights and activations, the number of filters in the network (and thus the algorithm complexity) has to grow in order to maintain high accuracy, which offsets the memory saving and speedup of binarizing the network.
In this study, we aim to design 1-bit CNNs aided by a real-valued shortcut to compensate for the accuracy loss of binarization. Optimization strategies for overcoming the gradient mismatch problem and the discrete optimization difficulty in 1-bit CNNs, along with a customized initialization method, are proposed to fully explore the potential of 1-bit CNNs within their limited resolution.
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{bitcount.png}
\caption{The mechanism of xnor operation and bit-counting inside the 1-bit CNNs presented in \cite{xnornet}.}
\label{fig:bitcount}
\end{figure}
\section{Methodology}
\subsection{Standard 1-bit CNNs and Its Representational Capability}
1-bit convolutional neural networks (CNNs) refer to CNN models with binary weight parameters and binary activations in the intermediate convolution layers.
Specifically, the binary activation and weight are obtained through a sign function,
\begin{align}
a_b = {\rm Sign}(a_r) = \left\{
\begin{array}{lr}
- 1 & {\rm if} \ \ a_r <0 \\
+ 1 & {\rm otherwise}
\end{array}
\right.
,
\quad \quad
w_b = {\rm Sign}(w_r) = \left\{
\begin{array}{lr}
- 1 & {\rm if} \ w_r <0 \\
+ 1 & {\rm otherwise}
\end{array}
\right.
,
\label{eq: sign_a and sign_w}
\end{align}
where $a_r$ and $w_r$ indicate the real activation and the real weight, respectively.
$a_r$ exists in both the training and inference processes of the 1-bit CNN, due to the convolution and batch normalization (if used). As shown in Fig. \ref{fig:bitcount}, given a binary activation map and a binary $3\times 3$ weight kernel, the output activation can be any odd integer from $-9$ to $9$. If a batch normalization layer follows, as shown in Fig. \ref{fig:reps}, the integer activations will be transformed into real values.
The real weight will be used to update the binary weights in the training process, which will be introduced later.
Compared to real-valued CNN models with 32-bit weight parameters, 1-bit CNNs obtain up to $32\times$ memory saving.
Moreover, as the activation is also binary, the convolution operation can be implemented by a bitwise XNOR operation and a bit-count operation \cite{xnornet}.
One simple example of this bitwise operation is shown in Fig. \ref{fig:bitcount}.
In contrast, the convolution operation in real-valued CNNs is implemented by expensive real-valued multiplications.
Consequently, 1-bit CNNs can obtain up to 64$\times$ computation saving.
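To make the mechanism concrete, a small Python sketch of the XNOR--bit-count inner product is given below. Encoding $+1$ as bit 1 and $-1$ as bit 0 and packing the bits into plain integers is our own illustrative convention, not the optimized implementation of \cite{xnornet}.
\begin{verbatim}
def binary_dot(a_bits, w_bits, N):
    # a_bits, w_bits: N-bit integers encoding +1 as bit 1 and -1 as bit 0
    mask = (1 << N) - 1
    xnor = ~(a_bits ^ w_bits) & mask   # 1 wherever the two signs agree
    agree = bin(xnor).count("1")       # bit-count (popcount)
    return 2 * agree - N               # (#agree) - (#disagree)

# example: a = (+1,-1,+1), w = (+1,+1,+1)  ->  dot product = 1
print(binary_dot(0b101, 0b111, 3))     # prints 1
\end{verbatim}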
However, it has been demonstrated in \cite{binarynet} that the classification performance of the 1-bit CNNs is much worse than that of the real-valued CNN models on large-scale datasets like ImageNet.
We believe that the poor performance of 1-bit CNNs is caused by their low representational capability.
We denote $\mathbb{R}(\mathbf{x})$ as the representational capability of $\mathbf{x}$, \ie, the number of all possible configurations of $\mathbf{x}$, where $\mathbf{x}$ could be a scalar, vector, matrix or tensor.
For example, the representational capability of 32 channels of a binary $14 \times 14$ feature map $\mathbf{A}$ is $\mathbb{R}(\mathbf{A}) = 2^{14 \times 14 \times 32} = 2^{6272}$.
Given a $3 \times 3 \times 32$ binary weight kernel $\mathbf{W}$, each entry of $\mathbf{A} \otimes \mathbf{W}$ (\ie, the bitwise convolution output) can take any even value from $-288$ to $288$, as shown in Fig. \ref{fig:reps}. Thus, $\mathbb{R}(\mathbf{A} \otimes \mathbf{W})$ = $289^{6272}$. Note that since the BatchNorm layer is a one-to-one mapping, it does not increase the number of distinct values but maps the range $(-288,288)$ to particular real values. If a sign function is added after the output, each entry in the feature map is binarized, and the representational capability shrinks to $2^{6272}$ again.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{representation2.png}
\caption{The representational capability ($\mathbb{R}$) of each layer in (a) 1-bit CNNs without shortcut (b) 1-bit CNNs with shortcut. $\mathbf{A}_b^l$ indicates the output of the Sign function; $\mathbf{A}_m^l$ denotes the output of the 1-bit convolution layer; $\mathbf{A}_r^{l+1}$ represents the output of the BatchNorm layer; The superscript $l$ indicates the block index.}
\label{fig:reps}
\end{figure}
\subsection{Bi-Real Net Model and Its Representational Capability}
As shown in Fig. \ref{fig:reps}, the representational capability of the standard 1-bit CNN drops significantly from $289^{6272}$ to $2^{6272}$ after the sign function, leading to significant information loss.
We therefore propose to preserve the real activations before the sign function, so as to increase the representational capability of the 1-bit CNN, through a simple shortcut.
Specifically, as shown in Fig. \ref{fig:reps}(b), one block indicates the structure that
``Sign $\rightarrow$ 1-bit convolution $\rightarrow$ batch normalization $\rightarrow$ addition operator".
The shortcut connects the input activations of the sign function in the current block to the output activations of the batch normalization in the same block; these two activations are added through an addition operator, and the combined activations are then fed into the sign function of the next block.
The representational capability of each entry in the added activations is $289^2$.
Consequently, the representational capability of each block in the 1-bit CNN with the above shortcut becomes $(289^2)^{6272}$.
As both real and binary activations are kept, we call the proposed model Bi-Real net.
The representational capability of each block in the 1-bit CNN is significantly enhanced due to the simple identity shortcut.
The only additional computation cost is the addition of two real activations, which already exist in the standard 1-bit CNN (\ie, without shortcuts). Moreover, as the activations are computed on the fly, no additional memory is needed.
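For concreteness, a minimal PyTorch-style sketch of one Bi-Real block is given below; this is our schematic rendering of Fig. \ref{fig:reps}(b) (the module and its names are ours), with the binarization shown as a plain \texttt{sign} for readability, whereas training would use the approximations described later:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiRealBlock(nn.Module):
    """Sign -> 1-bit convolution -> BatchNorm -> shortcut addition."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3))
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, a_real):
        a_bin = torch.sign(a_real)       # binary activations in {-1, +1}
        w_bin = torch.sign(self.weight)  # binary weights in {-1, +1}
        a_mid = F.conv2d(a_bin, w_bin, padding=1)
        # The identity shortcut preserves the real-valued activations.
        return a_real + self.bn(a_mid)
\end{verbatim}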
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{training.png}
\caption{A graphical illustration of the training process of the 1-bit CNNs, with $A$ being the activation, $W$ being the weight, and the superscript $l$ denoting the $l^{\textit{th}}$ block consisting of Sign, 1-bit Convolution, and BatchNorm. The subscript $r$ denotes a real value, $b$ denotes a binary value, and $m$ denotes the intermediate output before the BatchNorm layer.}
\label{fig:training}
\end{figure}
\subsection{Training Bi-Real Net}
As both the activations and weight parameters are binary, continuous optimization methods, \ie, stochastic gradient descent (SGD), cannot be directly adopted to train the 1-bit CNN.
There are two major challenges. One is how to compute the gradient of the sign function on activations, which is non-differentiable. The other is that the gradient of the loss with respect to the binary weight is too small to change the weight's sign.
The authors of \cite{binarynet} proposed to adjust the standard SGD algorithm to approximately train the 1-bit CNN. Specifically, the gradient of the sign function on activations is approximated by the gradient of the piecewise linear function, as shown in Fig. \ref{fig:activation_back}(b). To tackle the second challenge, the method proposed in \cite{binarynet} updates the real-valued weights by the gradient computed with regard to the binary weight and obtains the binary weight by taking the sign of the real weights.
As the identity shortcut will not add difficulty for training, the training algorithm proposed in \cite{binarynet} can also be adopted to train the Bi-Real net model.
However, we propose a novel training algorithm to tackle the above two major challenges, which is more suitable for the Bi-Real net model as well as other 1-bit CNNs.
Besides, we also propose a novel initialization method.
We present a graphical illustration of the training of Bi-Real net in Fig. \ref{fig:training}. The identity shortcut is omitted in the graph for clarity, as it will not change the main part of the training algorithm.
\subsubsection{Approximation to the derivative of the sign function with respect to activations.}
As shown in Fig. \ref{fig:activation_back}(a), the derivative of the sign function is an impulse function, which cannot be utilized in training.
Instead, we approximate the gradient of the loss $\mathcal{L}$ with respect to the real activation $\mathbf{A}_r^{l,t}$ by
\begin{flalign}
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_r^{l,t}}
=
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_b^{l,t}}
\frac{\partial \mathbf{A}_b^{l,t}}{\partial \mathbf{A}_r^{l,t}}
=
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_b^{l,t}}
\frac{\partial Sign(\mathbf{A}_r^{l,t})}{\partial \mathbf{A}_r^{l,t}}
\approx
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_b^{l,t}}
\frac{\partial F(\mathbf{A}_r^{l,t})}{\partial \mathbf{A}_r^{l,t}},
\label{eq: derivative wrt A_r}
\end{flalign}
where $F(\mathbf{A}_r^{l,t})$ is a differentiable approximation of the non-differentiable $Sign(\mathbf{A}_r^{l,t})$.
In \cite{binarynet}, $F(\mathbf{A}_r^{l,t})$ is set as the clip function, leading to a step-function derivative (see Fig. \ref{fig:activation_back}(b)).
In this work, we instead utilize a piecewise polynomial function (see Fig. \ref{fig:activation_back}(c)) as the approximation function, given in the left part of Eq. \eqref{eq4}.
\begin{align}
\label{eq4}
F(a_r) = \left\{
\begin{array}{lr}
- 1 & {\rm if} \ a_r < -1 \\
2a_r+a_r^2 \ \ &{\rm if} -1 \leqslant a_r < 0 \\
2a_r-a_r^2 &{\rm if} \ 0 \leqslant a_r < 1 \\
1 & {\rm otherwise}
\end{array}
\right.
,
\quad
\frac{\partial F(a_r)}{\partial a_r} = \left\{
\begin{array}{lr}
2+2a_r \ \ &{\rm if} -1 \leqslant a_r < 0 \\
2-2a_r &{\rm if} \ 0 \leqslant a_r < 1 \\
0 & {\rm otherwise}
\end{array}
\right.
,
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{activations.png}
\caption{(a) Sign function and its derivative, (b) Clip function and its derivative for approximating the derivative of the sign function, proposed in \cite{binarynet}, (c) Proposed differentiable piecewise polynomial function and its triangle-shaped derivative for approximating the derivative of the sign function in gradients computation.}
\label{fig:activation_back}
\end{figure}
\noindent
As shown in Fig. \ref{fig:activation_back}, the shaded areas with blue slashes reflect the difference between the sign function and its approximation. The shaded area corresponding to the clip function is $1$, while that corresponding to the left part of Eq. \eqref{eq4} is $\frac{2}{3}$.
We conclude that the left part of Eq. \eqref{eq4} is a closer approximation to the sign function than the clip function.
Its derivative is formulated as the right part of Eq. \eqref{eq4},
which is a piecewise linear function.
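A minimal sketch (ours) of how this approximation can be wired into backpropagation is given below, as a custom autograd function that outputs the sign in the forward pass and the triangle-shaped derivative of Eq. \eqref{eq4} in the backward pass:
\begin{verbatim}
import torch

class ApproxSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a_r):
        ctx.save_for_backward(a_r)
        return torch.sign(a_r)

    @staticmethod
    def backward(ctx, grad_out):
        (a_r,) = ctx.saved_tensors
        # dF/da_r equals 2 - 2|a_r| on [-1, 1] and 0 elsewhere.
        grad = (2 - 2 * a_r.abs()) * (a_r.abs() <= 1).float()
        return grad_out * grad

# Usage: a_b = ApproxSign.apply(a_r)
\end{verbatim}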
\subsubsection{Magnitude-aware gradient with respect to weights.}
Here we present how to update the binary weight parameter in the $l^{\textit{th}}$ block, \ie, $\mathbf{W}_b^l \in \{-1, +1\}$, as shown in Fig. \ref{fig:training}.
For clarity, we assume that there is only one weight kernel, \ie, $\mathbf{W}_b^l$ is a matrix.
The standard gradient descent algorithm cannot be directly applied, as the gradient is not large enough to change the binary weights.
To tackle this problem, the method of \cite{binarynet} introduces a real-valued weight
$\mathbf{W}_r^l$ and a sign function during training, so that the binary weight parameter can be seen as the output of the sign function, \ie, $\mathbf{W}_b^l = Sign(\mathbf{W}_r^l)$, as shown in the upper sub-figure in Fig. \ref{fig:training}.
Consequently, $\mathbf{W}_r^l$ is updated using gradient descent in the backward pass,
as follows
\begin{flalign}
\mathbf{W}_r^{l, t+1} =
\mathbf{W}_r^{l,t} - \eta \frac{\partial \mathcal{L}}{\partial \mathbf{W}_r^{l,t} }
=
\mathbf{W}_r^{l,t} - \eta \frac{\partial \mathcal{L}}{\partial \mathbf{W}_b^{l,t}} \frac{\partial \mathbf{W}_b^{l,t}}{\partial \mathbf{W}_r^{l,t} }.
\label{eq: update for W_r}
\end{flalign}
Note that $\frac{\partial \mathbf{W}_b^{l,t}}{\partial \mathbf{W}_r^{l,t} }$ indicates the element-wise derivative, leading to a matrix.
In \cite{binarynet}, $\frac{\partial \mathbf{W}_b^{l,t}(i,j)}{\partial \mathbf{W}_r^{l,t}(i,j) }$ is set to $1$ if $\mathbf{W}_r^{l,t}(i,j) \in [-1,1]$, otherwise $0$.
The derivative $\frac{\partial \mathcal{L}}{\partial \mathbf{W}_b^{l,t}}$ is derived from the chain rule, as follows
\begin{flalign}
\frac{\partial \mathcal{L}}{\partial \mathbf{W}_b^{l,t}}
=
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_r^{l+1,t}}
\frac{\partial \mathbf{A}_r^{l+1,t}}{\partial \mathbf{A}_m^{l,t}}
\frac{\partial \mathbf{A}_m^{l,t}}{\partial \mathbf{W}_b^{l,t}}
=
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_r^{l+1,t}}
\theta^{l,t}
\mathbf{A}_b^l,
\label{eq: derivative wrt W_b}
\end{flalign}
where $\theta^{l,t} = \frac{\partial \mathbf{A}_r^{l+1,t}}{\partial \mathbf{A}_m^{l,t}}$ denotes the derivative of the BatchNorm layer (see Fig. \ref{fig:training}) and has a negative correlation to $\mathbf{W}_b^{l,t}$. As $\mathbf{W}_b^{l,t} \in \{-1, +1 \}$, the gradient $\frac{\partial \mathcal{L}}{\partial \mathbf{W}_r^{l,t} }$ is only related to the sign of $\mathbf{W}_r^{l,t}$, but is independent of its magnitude.
Based on this observation, we propose to replace the above sign function by a magnitude-aware function, as follows:
\begin{flalign}
\overline{\mathbf{W}}_b^{l,t} = \frac{ \parallel \mathbf{W}_r^{l,t} \parallel_{1,1}}{|\mathbf{W}_r^{l,t}|} Sign(\mathbf{W}_r^{l,t}),
\label{eq: bar_W_b}
\end{flalign}
where $|\mathbf{W}_r^{l,t}|$ denotes the number of entries in $\mathbf{W}_r^{l,t}$.
Consequently, the update of $\mathbf{W}_r^l$ becomes
\begin{flalign}
\mathbf{W}_r^{l, t+1}
=
\mathbf{W}_r^{l,t} - \eta \frac{\partial \mathcal{L}}{\partial \overline{\mathbf{W}}_b^{l,t}} \frac{\partial \overline{\mathbf{W}}_b^{l,t}}{\partial \mathbf{W}_r^{l,t} }
=
\mathbf{W}_r^{l,t} - \eta
\frac{\partial \mathcal{L}}{\partial \mathbf{A}_r^{l+1,t}}
\overline{\theta}^{l,t}
\mathbf{A}_b^l
\frac{\partial \overline{\mathbf{W}}_b^{l,t}}{\partial \mathbf{W}_r^{l,t} },
\label{eq: new update for W_r}
\end{flalign}
where $\frac{\partial \overline{\mathbf{W}}_b^{l,t}}{\partial \mathbf{W}_r^{l,t} } \approx \frac{\parallel \mathbf{W}_r^{l,t} \parallel_{1,1}}{|\mathbf{W}_r^{l,t}|} \cdot \frac{\partial Sign(\mathbf{W}_r^{l,t})}{\partial \mathbf{W}_r^{l,t}} \approx \mathbf{1}_{|\mathbf{W}_r^{l,t}|<1}$, and $\overline{\theta}^{l,t}$ is associated with the magnitude of $\mathbf{W}_r^{l,t}$.
Thus, the gradient $\frac{\partial \mathcal{L}}{\partial \mathbf{W}_r^{l,t} }$ is related to both the sign and magnitude of $\mathbf{W}_r^{l,t}$.
After training converges, we still use $Sign(\mathbf{W}_r^l)$ to obtain the binary weights $\mathbf{W}_b^l$ (\ie, $-1$ or $+1$), and let $\theta^{l}$ absorb the factor $\frac{ \parallel \mathbf{W}_r^{l} \parallel_{1,1}}{|\mathbf{W}_r^{l}|}$, so that it accounts for the magnitude of $\mathbf{W}_b^{l}$ used for inference.
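The following short sketch (ours) summarizes the magnitude-aware binarization of Eq. \eqref{eq: bar_W_b} together with the gradient mask used in Eq. \eqref{eq: new update for W_r}; the gradient of the scaling factor itself is omitted, matching the approximation above:
\begin{verbatim}
import torch

def magnitude_aware_binarize(w_r):
    # bar_W_b: scale the sign by the mean absolute value of W_r,
    # i.e., ||W_r||_{1,1} / |W_r| (number of entries).
    return w_r.abs().mean() * torch.sign(w_r)

def weight_grad_mask(w_r):
    # Straight-through estimate: d(bar_W_b)/d(W_r) ~ 1_{|W_r| < 1}.
    return (w_r.abs() < 1).float()
\end{verbatim}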
\subsubsection{Initialization.}
In \cite{dji}, the initial weights of the 1-bit CNNs are derived from the corresponding real-valued CNN model (\ie, the same model structure as the 1-bit CNNs, except that the sign function on activations is replaced by ReLU) pre-trained on ImageNet. However, the activation of ReLU is non-negative, while that of Sign is $-1$ or $+1$. Due to this difference, the weights of the real-valued CNNs with ReLU may not provide a suitable initial point for training the 1-bit CNNs. Instead, we propose to replace ReLU with $\text{clip}(-1,x,1)$ when pre-training the real-valued CNN model, as the activation of the clip function is closer to the sign function than that of ReLU. The efficacy of this new initialization is evaluated in the experiments.
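In PyTorch terms, $\text{clip}(-1,x,1)$ coincides with the standard hard-tanh nonlinearity, so the proposed pre-training amounts to a one-line substitution (a sketch, assuming the real-valued model exposes a swappable activation module):
\begin{verbatim}
import torch.nn as nn

# Pre-train the real-valued network with clip(-1, x, 1) in place of
# ReLU, then reuse its weights to initialize the 1-bit model.
activation = nn.Hardtanh(min_val=-1.0, max_val=1.0)  # == clip(-1, x, 1)
\end{verbatim}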
\section{Experiments}
In this section, we first introduce the dataset for experiments and the implementation details in Sec. \ref{sec:dataset_implementation}. Then we conduct an ablation study in Sec. \ref{sec:ablation_study} to investigate the effectiveness of the proposed techniques. This is followed by a comparison of our Bi-Real net with other state-of-the-art binary networks regarding accuracy in Sec. \ref{sec:accuracy_comparison}. Sec. \ref{sec:efficiency_comparison} reports the memory usage and computation cost in comparison with other networks.
\subsection{Dataset and Implementation Details}
\label{sec:dataset_implementation}
The experiments are carried out on the ILSVRC12 ImageNet classification dataset \cite{imagenet}. ImageNet is a large-scale dataset with 1000 classes and 1.2 million training images and 50k validation images. Compared to other datasets like CIFAR-10 \cite{cifar10} or MNIST \cite{mnist}, ImageNet is more challenging due to its large scale and great diversity. The study on this dataset will validate the superiority of the proposed Bi-Real network structure and the effectiveness of three training methods for 1-bit CNNs. In our comparison, we report both the top-1 and top-5 accuracies.
For each image in the ImageNet dataset, the smaller dimension of the image is rescaled to 256 while keeping the aspect ratio intact. For \textit{training}, a random crop of size 224 $\times$ 224 is selected. Note that, in contrast to XNOR-Net and the full-precision ResNet, we do not use the operation of random resize, which might improve the performance further. For \textit{inference}, we employ the 224 $\times$ 224 center crop from images.
\noindent\textbf{Training:} We train two instances of the Bi-Real net, an \textit{18-layer Bi-Real net} and a \textit{34-layer Bi-Real net}. Their training consists of two steps: training the 1-bit convolution layers and retraining the BatchNorm layers. In the first step, the weights in the 1-bit convolution layers are binarized to the sign of the real-valued weights multiplied by the absolute mean of each kernel. We use the SGD solver with a momentum of 0.9 and set the weight decay to 0, which means we no longer encourage the weights to be close to 0. For the 18-layer Bi-Real net, we run the training algorithm for 20 epochs with a batch size of 128. The learning rate starts from 0.01 and is decayed twice, by multiplying by 0.1 at the 10\textit{th} and the 15\textit{th} epoch. For the 34-layer Bi-Real net, the training process includes 40 epochs and the batch size is set to 1024. The learning rate starts from 0.08 and is multiplied by 0.1 at the 20\textit{th} and the 30\textit{th} epoch, respectively. In the second step, we constrain the weights to $-1$ and $1$, set the learning rate in all convolution layers to 0, and retrain the BatchNorm layer for 1 epoch to absorb the scaling factor.
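A sketch (ours) of the second step, freezing everything except the BatchNorm layers before the one-epoch retraining; \texttt{model} stands for a hypothetical Bi-Real net module:
\begin{verbatim}
import torch.nn as nn

def freeze_all_but_bn(model: nn.Module):
    # Retrain only BatchNorm, so it absorbs the weight-scaling factors.
    for m in model.modules():
        is_bn = isinstance(m, nn.BatchNorm2d)
        for p in m.parameters(recurse=False):
            p.requires_grad = is_bn
\end{verbatim}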
\noindent\textbf{Inference:} We use the trained model with binary weights and binary activations in the 1-bit convolution layers for inference.
\begin{figure}[t]
\centering
\includegraphics[height=2.5cm]{experiment_figure.png}
\caption{Three networks differing in their shortcut design for connecting the blocks shown in (a): (a) conjoint layers of Sign, 1-bit Convolution, and BatchNorm; (b) Bi-Real net, with a shortcut bypassing every block; (c) ResNet, with a shortcut bypassing two blocks, which corresponds to the ReLU-only pre-activation proposed in \cite{identity_mapping}; and (d) Plain net, without shortcuts. The structures shown in (b), (c) and (d) have the same number of weights.}
\label{fig:compare_structure}
\end{figure}
\subsection{Ablation Study}
\label{sec:ablation_study}
\noindent\textbf{Three building blocks.} The shortcut in our Bi-Real net transfers real-valued representations without additional memory cost, which plays an important role in improving its capability. To verify its importance, we implemented a Plain-Net structure without shortcuts, as shown in Fig. \ref{fig:compare_structure}(d), for comparison. At the same time, as our network structure employs the same number of weight filters and layers as the standard ResNet, we also compare with the standard ResNet shown in Fig. \ref{fig:compare_structure}(c). For a fair comparison, we adopt the ReLU-only pre-activation ResNet structure of \cite{identity_mapping}, which differs from Bi-Real net only in having two layers per block instead of one layer per block. The layer order and shortcut design in Fig. \ref{fig:compare_structure}(c) are also applicable to 1-bit CNNs. The comparison justifies the benefit of implementing our Bi-Real net by replacing each 2-conv-layer-per-block ResNet structure with two 1-conv-layer-per-block Bi-Real structures.
As discussed in Sec. 3, we proposed to overcome the optimization challenges induced by discrete weights and activations by 1) approximation to the derivative of the sign function with respect to activations, 2) magnitude-aware gradient with respect to weights and 3) clip initialization. To study how these proposals benefit the 1-bit CNNs individually and collectively, we train the 18-layer and the 34-layer structures with all combinations of these techniques on the ImageNet dataset. This yields $2 \times 2 \times 2 \times 3 \times 2 = 48$ pairs of top-1 and top-5 accuracies (two initializations, two weight-update rules, two activation backward approximations, three network structures, and two depths), which are presented in Table \ref{table:ablation study}.
\setlength{\tabcolsep}{1pt}
\begin{table}[t]
\scriptsize
\begin{center}
\caption{Top-1 and top-5 accuracies (in percentage) of different combinations of the three proposed techniques on three different network structures, Bi-Real net, ResNet and Plain Net, shown in Fig.\ref{fig:compare_structure}.}
\label{table:ablation study}
\begin{tabular}{lllllllllllllll}
\hline
\noalign{\smallskip}
Initiali- & Weight &Activation & \multicolumn{2}{c}{Bi-Real-18} & \multicolumn{2}{c}{Res-18} & \multicolumn{2}{c}{Plain-18} &\multicolumn{2}{c}{Bi-Real-34} &\multicolumn{2}{c}{Res-34} &\multicolumn{2}{c}{Plain-34} \\
zation&update&backward&top-1&top-5&top-1&top-5&top-1&top-5&top-1&top-5&top-1&top-5&top-1&top-5\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{5}{*}{ReLU} & \multirow{2}{*}{Original} &Original&\cellcolor{Gray}32.9& \cellcolor{Gray}56.7& 27.8& 50.5& 3.3& 9.5& \cellcolor{Gray}53.1& \cellcolor{Gray}76.9& 27.5& 49.9& 1.4& 4.8\\
\cline{3-15}
\noalign{\smallskip}
& &Proposed&\cellcolor{Gray}36.8& \cellcolor{Gray}60.8& 32.2& 56.0& 4.7& 13.7& \cellcolor{Gray}58.0& \cellcolor{Gray}81.0& 33.9& 57.9& 1.6& 5.3\\
\cline{2-15}
\noalign{\smallskip}
&\multirow{2}{*}{Proposed} &Original&\cellcolor{Gray}40.5& \cellcolor{Gray}65.1& 33.9& 58.1& 4.3& 12.2& \cellcolor{Gray}59.9& \cellcolor{Gray}82.0& 33.6& 57.9& 1.8& 6.1\\
\cline{3-15}
\noalign{\smallskip}
&&Proposed&\cellcolor{Gray}47.5& \cellcolor{Gray}71.9& 41.6& 66.4& 8.5& 21.5& \cellcolor{Gray}61.4& \cellcolor{Gray}83.3& 47.5& 72.0& 2.1& 6.8\\
\cline{2-15}
\noalign{\smallskip}
&\multicolumn{2}{l}{Real-valued Net} &\cellcolor{Gray}68.5& \cellcolor{Gray}88.3& 67.8& 87.8& 67.5& 87.5& \cellcolor{Gray}70.4& \cellcolor{Gray}89.3& 69.1& 88.3& 66.8& 86.8\\
\hline
\noalign{\smallskip}
\multirow{5}{*}{Clip} & \multirow{2}{*}{Original} &Original&\cellcolor{Gray}37.4& \cellcolor{Gray}62.4& 32.8& 56.7& 3.2& 9.4& \cellcolor{Gray}55.9& \cellcolor{Gray}79.1& 35.0& 59.2& 2.2& 6.9\\
\cline{3-15}
\noalign{\smallskip}
& &Proposed&\cellcolor{Gray}38.1& \cellcolor{Gray}62.7& 34.3& 58.4& 4.9& 14.3& \cellcolor{Gray}58.1& \cellcolor{Gray}81.0& 38.2& 62.6& 2.3& 7.5\\
\cline{2-15}
\noalign{\smallskip}
&\multirow{2}{*}{Proposed} &Original&\cellcolor{Gray}53.6& \cellcolor{Gray}77.5& 42.4& 67.3& 6.7& 17.1& \cellcolor{Gray}60.8& \cellcolor{Gray}82.9& 43.9& 68.7& 2.5& 7.9\\
\cline{3-15}
\noalign{\smallskip}
&&Proposed&\cellcolor{Gray}\textbf{56.4}& \cellcolor{Gray}\textbf{79.5}& 45.7& 70.3& 12.1& 27.7& \cellcolor{Gray}\textbf{62.2}& \cellcolor{Gray}\textbf{83.9}& 49.0& 73.6& 2.6& 8.3\\
\cline{2-15}
\noalign{\smallskip}
&\multicolumn{2}{l}{Real-valued Net} &\cellcolor{Gray}68.0& \cellcolor{Gray}88.1& 67.5& 87.6& 64.2& 85.3& \cellcolor{Gray}69.7& \cellcolor{Gray}89.1& 67.9& 87.8& 57.1& 79.9\\
\hline
\noalign{\smallskip}
\multicolumn{5}{l}{Full-precision original ResNet\cite{resnet}}& 69.3& 89.2&&&&& 73.3 &91.3\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Based on Table \ref{table:ablation study}, we can evaluate the individual contribution of each technique, as well as the collective contribution of each unique combination of these techniques, towards the final accuracy.
1) Comparing the $4^{\textit{th}}-7^{\textit{th}}$ columns with the $8^{\textit{th}}-9^{\textit{th}}$ columns, both the proposed Bi-Real net and the binarized standard ResNet outperform their plain counterparts by a significant margin, which validates the effectiveness of the shortcut and the disadvantage of directly concatenating the 1-bit convolution layers. As Plain-18 has a thin and deep structure with the same weight filters but no shortcut, binarizing it results in very limited representational capacity in the last convolution layer, and it thus can hardly achieve good accuracy.
2) Comparing the $4^{\textit{th}}-5^{\textit{th}}$ and $6^{\textit{th}}-7^{\textit{th}}$ columns, the 18-layer Bi-Real net structure improves the accuracy of the binarized standard ResNet-18 by about 18\%. This validates the conjecture that the Bi-Real net structure, with more shortcuts, further enhances the network capacity compared to the standard ResNet structure. Replacing the 2-conv-layer-per-block structure employed in ResNet with the 1-conv-layer-per-block structure adopted by Bi-Real net could even benefit a real-valued network.
3) All proposed techniques for initialization, weight update and activation backward improve the accuracy to various degrees. For the 18-layer Bi-Real net structure, the improvement from the weight update (about 23\%, by comparing the $2^{\textit{nd}}$ and $4^{\textit{th}}$ rows) is greater than the improvement from the activation backward (about 12\%, by comparing the $2^{\textit{nd}}$ and $3^{\textit{rd}}$ rows) and the improvement from replacing ReLU with Clip for initialization (about 13\%, by comparing the $2^{\textit{nd}}$ and $7^{\textit{th}}$ rows). These three proposed training mechanisms are independent and function collaboratively towards enhancing the final accuracy.
4) The proposed training methods improve the final accuracy of all three networks in comparison with the original training method, which implies that the three proposed training methods are universally suitable for various network structures.
5) The two implemented Bi-Real nets (\ie, the 18-layer and 34-layer structures), together with the proposed training methods, achieve approximately 83\% and 89\% of the accuracy level of their corresponding full-precision networks, but with large savings in memory and computation.
\textit{In short}, the shortcut enhances the network representational capability, and the proposed training methods help the network to approach the accuracy upper bound.
\setlength{\tabcolsep}{1pt}
\begin{table}[t]
\begin{center}
\caption{This table compares both the top-1 and top-5 accuracies of our Bi-Real net with other state-of-the-art binarization methods: BinaryNet \cite{binarynet}, XNOR-Net \cite{xnornet} and ABC-Net \cite{dji}, on both the 18-layer and 34-layer ResNets \cite{resnet}. Bi-Real net outperforms the other methods by a considerable margin.}
\label{table:accuracy_comparison}
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
& & Bi-Real net & BinaryNet & ABC-Net & XNOR-Net & Full-precision \\
\noalign{\smallskip}
\hline
\multirow{2}{*}{18-layer} & \ Top-1 & 56.4\% & 42.2\% & 42.7\% & 51.2\% & 69.3\% \\
& \ Top-5 & 79.5\% & 67.1\% & 67.6\% & 73.2\% & 89.2\% \\
\hline
\multirow{2}{*}{34-layer} & \ Top-1 & 62.2\% & -- & 52.4\% & -- &73.3\% \\
& \ Top-5 & 83.9\% & -- & 76.5\% & -- &91.3\% \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Accuracy Comparison With State-of-The-Art}
\label{sec:accuracy_comparison}
While the ablation study demonstrates the effectiveness of our 1-layer-per-block structure and the proposed techniques for optimal training, it is also necessary to compare with other state-of-the-art methods to evaluate Bi-Real net's overall performance. To this end, we carry out a comparative study with three methods: BinaryNet \cite{binarynet}, XNOR-Net \cite{xnornet} and ABC-Net \cite{dji}. These three networks are representative methods of binarizing both weights and activations for CNNs and achieve state-of-the-art results. Note that, for a fair comparison, our Bi-Real net contains the same number of weight filters as the corresponding ResNet that these methods attempt to binarize, differing only in the shortcut design.
Table \ref{table:accuracy_comparison} shows the results. The results of the three networks are quoted directly from the corresponding references, except that the result of BinaryNet is quoted from ABC-Net \cite{dji}.
The comparison clearly indicates that the proposed Bi-Real net outperforms the three networks by a considerable margin in terms of both the top-1 and top-5 accuracies. Specifically, the 18-layer Bi-Real net outperforms its 18-layer counterparts BinaryNet and ABC-Net by a relative margin of roughly 33\%, and achieves a roughly 10\% relative improvement over XNOR-Net. Similar improvements can be observed for the 34-layer Bi-Real net. In short, our Bi-Real net is more competitive than the state-of-the-art binary networks.
\subsection{Efficiency and Memory Usage Analysis}
\label{sec:efficiency_comparison}
In this section, we analyze the memory savings and computational speedup of Bi-Real net, in comparison with XNOR-Net \cite{xnornet} and the corresponding full-precision network.
The memory usage is computed as the summation of 32 bits times the number of real-valued parameters and 1 bit times the number of binary parameters in the network. For the efficiency comparison, we use FLOPs to measure the total real-valued multiplication computation in the Bi-Real net, following the calculation method in \cite{resnet}. As the bitwise XNOR and bit-count operations can be performed 64 at a time in parallel by the current generation of CPUs, the FLOPs are calculated as the number of real-valued floating-point multiplications plus $1/64$ of the number of 1-bit multiplications.
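As an illustration of this accounting rule, the following minimal sketch (ours; the operation counts shown are hypothetical) tallies the FLOPs of a mixed-precision network:
\begin{verbatim}
def flops(real_mults, binary_mults):
    # Real multiplications count fully; 64 binary ops run per cycle.
    return real_mults + binary_mults / 64.0

# Example: real-valued first/last layers plus binarized convolutions.
print(flops(real_mults=1.2e8, binary_mults=2.6e9))
\end{verbatim}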
\setlength{\tabcolsep}{1pt}
\begin{table}[t]
\begin{center}
\caption{Memory usage and FLOPs calculation in Bi-Real net.}
\label{table:memoy_n_flop}
\begin{tabular}{ccccccc}
\hline\noalign{\smallskip}
& & Memory usage \ & \ Memory saving & \ FLOPs & \ Speedup \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{3}{*}{18-layer} & \ Bi-Real net & 33.6 Mbit & 11.14 $\times$ &1.63 $\times 10^8$ & 11.06 $\times$ \\
& \ XNOR-Net & 33.7 Mbit & 11.10 $\times$ &1.67 $\times 10^8$ & 10.86 $\times$ \\
& \ Full-precision Res-Net & 374.1 Mbit & -- &1.81 $\times 10^9$ & --\\
\hline
\multirow{3}{*}{34-layer} & \ Bi-Real net & 43.7 Mbit & 15.97 $\times$ &1.93 $\times 10^8$ & 18.99 $\times$ \\
& \ XNOR-Net & 43.9 Mbit & 15.88 $\times$ &1.98 $\times 10^8$ & 18.47 $\times$ \\
& \ Full-precision Res-Net & 697.3 Mbit & -- &3.66 $\times 10^9$ & --\\
\hline
\end{tabular}
\vspace{-0.5cm}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
We follow the suggestion in XNOR-Net \cite{xnornet} to keep the weights and activations in the first convolution layer and the last fully-connected layer real-valued. We also adopt the same real-valued 1x1 convolution in the Type B short-cut \cite{resnet} as implemented in XNOR-Net. Note that this 1x1 convolution handles the transition between two stages of ResNet, and thus all information should be preserved there. As the number of weights in those three kinds of layers accounts for only a very small proportion of the total number of weights, the limited memory saving from binarizing them would not justify the performance degradation caused by the information loss.
For both the 18-layer and the 34-layer networks, the proposed Bi-Real net reduces memory usage by 11.1 times and 16.0 times, respectively, and achieves computational reductions of about 11.1 times and 19.0 times, in comparison with the full-precision network.
Without using real-valued weights and activations for scaling binary ones during inference time, our Bi-Real net requires fewer FLOPs and uses less memory than XNOR-Net and is also much easier to implement.
\section{Conclusion}
In this work, we have proposed a novel 1-bit CNN model, dubbed Bi-Real net. Compared with the standard 1-bit CNNs, Bi-Real net utilizes a simple short-cut to significantly enhance its representational capability. Furthermore, an advanced training algorithm is specifically designed for training 1-bit CNNs (including Bi-Real net), including a tighter approximation of the derivative of the sign function with respect to the activations, a magnitude-aware gradient with respect to the weights, as well as a novel initialization.
Extensive experimental results demonstrate that the proposed Bi-Real net and the novel training algorithm show superiority over the state-of-the-art methods.
In the future, we will explore other advanced integer programming algorithms (\eg, Lp-Box ADMM \cite{wu2018lp}) to train Bi-Real net.
\bibliographystyle{splncs04}
\section{Introduction}
Our objective in this paper is to derive sharp Carleman estimates for wave operators with critically singular potentials, that is, with potentials that scale like the principal part of the operator.
More specifically, we are interested in the case of potentials that diverge as an inverse square on a convex hypersurface.
For the present paper, we consider the model operator
\begin{equation}\label{operator}
\Box_\kappa := \square + \frac{ \kappa ( 1 - \kappa ) }{(1-|x|)^2} \text{,}
\end{equation}
where $\square:=-\partial_{tt}+\Delta$ is the wave operator, the spatial domain is the unit ball $B_1$ of $\R^n$, and the constant parameter $\kappa \in \R$ measures the strength of the potential.
\subsection{Background}
To understand why we say ``sharp'', let us consider the Cauchy problem associated with this operator,
\begin{align}\label{Cauchyprob}
\begin{split}
\Box_\kappa u = 0 \quad &\text{in } ( -T, T ) \times B_1 \text{,} \\
u(0,x)=u_0(x) \text{,} \qquad &\partial_tu(0,x)=u_1(x) \text{.}
\end{split}
\end{align}
In spherical coordinates, the equation reads as
\[
- \partial_{tt} u + \partial_{rr} u + \frac{n-1}{r} \partial_r u + \frac{ \kappa ( 1 - \kappa ) }{ (1-r)^2 } u + \frac{1}{r^2} \Delta_{ \Sph^{n-1} } u=0 \text{,}
\]
where $\Delta_{\Sph^{n-1}}$ denotes the Laplacian on the unit sphere. The potential is critically singular at $r=1$, where, according to the classical theory of Frobenius for ODEs, the characteristic exponents of this equation are $\kappa$ and $1-\kappa$.
Therefore, if $\kappa$ is not a half-integer (which ensures that logarithmic branches will not appear), solutions to the equation are expected to behave either like $(1-r)^{\kappa}$ or $(1-r)^{1-\kappa}$ as $r\nearrow1$.
As one can infer by plugging these powers into the energy associated with \eqref{Cauchyprob},
\begin{equation}\label{energy1}
\int_{ \{ t \} \times B_1 } \left\{ ( \partial_t u )^2+ (1-r)^{2\kappa} \left| \nabla_{ x } [ (1-r)^{-\kappa} u ] \right|^2 \right\} \text{,}
\end{equation}
the equation admits exactly one finite-energy solution when $\kappa\leq-\frac12$, no finite-energy
solutions when $\kappa\geq \frac12$, and infinitely many
finite-energy solutions when
\begin{equation}\label{interval}
-\frac12<\kappa<\frac12\,.
\end{equation}
In this range \eqref{interval} of the parameter, which we consider in this paper, one must impose a (Dirichlet, Neumann, or Robin) boundary condition on $( -T, T ) \times \partial B_1$.
This is constructed in terms of the natural Dirichlet and Neumann traces, which now include weights and are defined as the limits
\begin{equation}\label{BCs}
\mc{D}_\kappa u := (1-r)^{-\kappa} u|_{r=1} \text{,} \qquad \mc{N}_\kappa u := -(1-r)^{2\kappa} \partial_r [ (1-r)^{-\kappa} u ] |_{r=1} \text{.}
\end{equation}
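To illustrate these traces, consider for concreteness a function with the boundary expansion suggested by the above Frobenius analysis (a model computation that we include for illustration),
\[
u = A(t, \omega) \, (1-r)^{\kappa} + B(t, \omega) \, (1-r)^{1-\kappa} + \dots \text{,}
\]
where $A, B$ depend only on $t$ and the angular variables $\omega$. Then, $(1-r)^{-\kappa} u = A + B (1-r)^{1-2\kappa} + \dots$, and a direct computation gives
\[
\mc{D}_\kappa u = A \text{,} \qquad \mc{N}_\kappa u = (1 - 2\kappa) B \text{,}
\]
so the weighted traces read off precisely the coefficients of the two branches.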
Notice that singular weights depending on~$\kappa$ appear everywhere in
this problem, and that all the associated quantities reduce to the standard ones in the absence of the
singular potential, i.e., when $\kappa=0$.
A more detailed discussion of the boundary asymptotics of solutions to \eqref{Cauchyprob} is given in the next section.
The Carleman estimates that we will derive in this paper are sharp, in that the weights that appear capture both the optimal decay rate of the
solutions near the boundary, as well as the natural energy~\eqref{energy1} that appears in the well-posedness theory for the equation.
As we will see, this property is not only desirable but also
essential for applications such as boundary observability.
\subsection{Some Existing Results}
The dispersive properties of wave equations with potentials that
diverge as an inverse square at one point~\cite{Vega,Burq2} or on a
(timelike) hypersurface~\cite{Bachelot} have been thoroughly
studied, as critically singular potentials are notoriously difficult to analyze.
Moreover, a well-posedness theory for a diverse family of boundary conditions was developed for the range \eqref{interval} in \cite{Warnick}.
In the case of one spatial dimension, the observability and controllability of wave equations with critically singular potentials have also received considerable attention in the guise of the degenerate wave equation
$$
\partial_{tt}v-\partial_z (z^\alpha \partial_zv)=0\,,
$$
where the variable $z$ takes values in the positive half-line and the
parameter~$\alpha$ ranges over the interval~$(0,1)$ (see~\cite{Gueye} and the references therein). Indeed, it is not
difficult to show that one can relate equations in this form with the
operator $\Box_\kappa$ in one dimension through a suitable change of
variables, with the parameter $\kappa$ being now some function of the
power $\alpha$. The methods employed in those references, which rely on the
spectral analysis of a one-dimensional Bessel-type operator, provide a
very precise controllability result.
On the other hand, no related Carleman estimates that are applicable to observability results have been found.
This manifests itself in two important limitations: first, the available inequalities are not robust under
perturbations of the coefficients of the equation; second, the method of proof cannot be extended to higher-dimensional situations.
Recent results for different notions of observability for parabolic equations with inverse square potentials, which are based on Carleman and multiplier methods, can be found,
e.g., in~\cite{BZuazua,VZuazua}.
Related questions for wave equations with singularities all over the boundary have been presented as very challenging in the open problems section of~\cite{BZuazua}.
As stressed there, the boundary singularity makes the multiplier
approach extremely tricky.
In general, one would not expect Carleman estimates to behave well with singular potentials such as $\kappa(1-\kappa) (1-r)^{-2}$.
Since the singularity in the potential scales just as $\Box$ does, there is no hope of absorbing it into the estimates by means of a perturbative argument.
Indeed, Carleman estimates generally assume~\cite{Tataru,DosSantos} that the potential is at least in $L^{ (n + 1)/2}_{\mathrm{loc}}$, but this condition is not satisfied
here.
Finally, let us mention that a setting which is closely related to ours is that of linear wave equations on asymptotically anti-de Sitter spacetimes, which are conformally equivalent to analogues of \eqref{operator} on curved backgrounds.
It is worth mentioning that waves on anti-de Sitter spaces have attracted
considerable attention in recent years due to their connection to
cosmology, see e.g.~\cite{Bachelot, Vergara, Enciso,HS, Warnick} and the
references therein.
Carleman estimates for linear waves were established in this asymptotically anti-de Sitter setting in \cite{HS, HS2}, for the purposes of studying their unique continuation properties from the conformal boundary.
In particular, these estimates capture the natural Dirichlet and Neumann data (i.e., the analogues of \eqref{BCs}).
On the other hand, the Carleman estimates in \cite{HS, HS2} are local in nature and apply only to a neighborhood of the conformal boundary, and they do not capture the naturally associated $H^1$-energy.
As a result, these estimates would not translate into corresponding observability results.
\subsection{The Carleman Estimates}
The main result of the present paper is a novel family of Carleman inequalities for the operator~\eqref{operator} that capture both the natural boundary weights and the natural $H^1$-energy described above.
To the best of our knowledge, these are the first available Carleman estimates for an operator with such a strongly singular potential that also capture the natural boundary data and energy.
Moreover, our estimates hold in all spatial dimensions, except for $n = 2$.
A simplified version of our main estimates can be stated as follows:
\begin{theorem}\label{T.Carleman0}
Let $B_1$ denote the unit ball in $\R^n$, with $n \neq 2$, and fix $- \frac{1}{2} < \kappa < 0$.
Moreover, let $u: ( -T, T ) \times B_1 \rightarrow \R$ be a smooth function, and assume:
\begin{enumerate}[i)]
\item The Dirichlet trace $\mc{D}_\kappa u$ of $u$ vanishes.
\item $u$ ``has the boundary asymptotics of a sufficiently regular, finite energy solution of \eqref{Cauchyprob}".
In particular, the Neumann trace $\mc{N}_\kappa u$ of $u$ exists and is finite.
\item There exists $\delta > 0$ such that $u (t) = 0$ for all $T - \delta \leq |t| < T$.
\end{enumerate}
Then, for $\lambda \gg 1$ large enough, independently of $u$, the following inequality holds:
\begin{align} \label{Carleman0}
&\lambda \int_{ ( -T, T ) \times \partial B_1 } e^{ 2 \lambda f } ( \mc{N}_\kappa u)^2 + \int_{ ( -T, T ) \times B_1}
e^{2\lambda f} (\Box_\kappa u )^2 \\
\notag &\quad \gtrsim \lambda \int_{ ( -T,T ) \times B_1 } e^{ 2 \lambda f}
\Big[ (\partial_t u)^2+(1-|x|)^{2\kappa}\,\big| \nabla_{ x } [ (1-|x|)^{-\kappa} u ] \big|^2 \Big]\\
\notag &\quad\qquad + | \kappa | \lambda^3 \int_{ ( -T,T ) \times B_1 } e^{ 2 \lambda f } ( 1 - |x| )^{6\kappa-1} u^2 \text{,}
\end{align}
where $f$ is the weight
\begin{equation} \label{weight}
f(t, x) := -\frac{1}{1+2\kappa}(1-|x|)^{1+2\kappa}-ct^2 \text{,}
\end{equation}
with a suitably chosen positive constant $c$.
\end{theorem}
A more precise, and slightly stronger, statement of our main Carleman estimates is given further below in Theorem \ref{T.Carleman}.
\begin{remark}
Note that in Theorem \ref{T.Carleman0}, we restricted our strength parameter $\kappa$ to the range $- \frac{1}{2} < \kappa < 0$.
This was imposed for several reasons:
\begin{enumerate}[i)]
\item First, a restriction to the values \eqref{interval} was needed, as this is the range for which a robust well-posedness theory exists \cite{Warnick} for the equation \eqref{Cauchyprob}.
\item The case $\kappa = 0$ is simply the standard free wave equation, for which the existence of Carleman and observability estimates is well-known.
\item On the other hand, the aforementioned spectral results \cite{Gueye} in the $(1 + 1)$-dimensional setting suggest that the analogue of \eqref{Carleman0} is false when $\kappa > 0$.
\end{enumerate}
\end{remark}
\begin{remark}
The constant $c$ in \eqref{weight} is closely connected to the total timespan needed for an observability estimate to hold; see Theorem \ref{T.Observability0} below.
In Theorem~\ref{T.Carleman0}, this $c$ depends on $n$, as well as on $\kappa$ when $n = 3$.
\end{remark}
\begin{remark}
The precise formulation of $u$ in Theorem \ref{T.Carleman0} having the ``expected boundary asymptotics of a solution of \eqref{Cauchyprob}" is given in Definition \ref{admissible} and is briefly justified in the discussion following Definition \ref{admissible}.
\end{remark}
\begin{remark}
One can further strengthen \eqref{Carleman0} to include additional positive terms on the right-hand side that depend on $n$; see Theorem \ref{T.Carleman}.
\end{remark}
\subsection{Ideas of the Proof}
We now discuss the main ideas behind the proof of Theorem \ref{T.Carleman0} (as well as the more precise Theorem \ref{T.Carleman}).
In particular, the proof is primarily based on three ingredients.
The first ingredient is to adopt derivative operations that are well-adapted to our operator $\Box_\kappa$.
In particular, we make use of the ``twisted" derivatives that were pioneered in \cite{Warnick}.
The main observation here is that $\Box_\kappa$ can be written as
\[
\Box_\kappa = - \bar{D} D + \text{l.o.t.} \text{,}
\]
where $D$ is the conjugated (spacetime) derivative operator,
\[
D = D_{ t, x } = ( 1 - | x | )^\kappa \nabla_{ t, x } ( 1 - | x | )^{ - \kappa } \text{,}
\]
where $-\bar{D}$ is the ($L^2$-)adjoint of $D$, and where ``l.o.t." represents lower-order terms that can be controlled by more standard means.
As a result, we can view $D$ as the natural derivative operation for $\Box_\kappa$.
For instance, the twisted $H^1$-energy \eqref{energy1} associated with the Cauchy problem \eqref{Cauchyprob} is best expressed purely in terms
of $D$ (in fact, this energy is conserved for the equation $\bar{D} D u = 0$).
Similarly, in our Carleman estimates \eqref{Carleman0} and their proofs, we will always work with $D$-derivatives, rather than the usual derivatives, of $u$.
This helps us to better exploit the structure of $\Box_\kappa$.
The second main ingredient in the proof of Theorem \ref{T.Carleman0} is the classical Morawetz multiplier estimate for the wave equation.
This estimate was originally developed in \cite{Morawetz} in order to establish integral decay properties for waves in $3$ spatial dimensions.
Analogous estimates hold in higher dimensions as well; see \cite{Tao}, as well as \cite{Keith} and references therein for more recent extensions of Morawetz estimates.
At the heart of the proof of Theorem \ref{T.Carleman0} lies a generalization of the classical Morawetz estimate from $\Box$ to $\Box_\kappa$.
In keeping with the preceding ingredient, we derive this inequality by using the aforementioned twisted derivatives in the place of the usual derivatives.
This produces a number of additional singular terms, which we must arrange so that they have the required positivity.
Finally, our generalized Morawetz bound is encapsulated within a larger Carleman estimate, which is proved using geometric multiplier arguments (see, e.g., \cite{AS, HS, HS2, IK, LTX}).
Again, we adopt twisted derivatives throughout this process, and we must obtain positivity for many additional singular terms that now appear.
Recall that in the standard Carleman-based proofs of observability for wave equations, one employs Carleman weights of the form
\[
f_\ast ( t, x ) = | x |^2 - c t^2 \text{,} \qquad 0 < c < 1 \text{.}
\]
For our present estimates, we make use of a novel Carleman weight \eqref{weight} that is especially adapted to the operator $\Box_\kappa$.
In particular, the $( 1 - | x | )^{ 1 + 2 \kappa }$-term in \eqref{weight}, which has rather singular derivatives at $r = 1$, is needed precisely in order to capture the Neumann boundary data in the left-hand side of \eqref{Carleman0}.
\begin{remark}
That Theorem \ref{T.Carleman0} fails to hold for $n = 2$ can be traced to the fact that the classical Morawetz estimate breaks down for $n = 2$.
In this case, the usual multiplier computations yield a boundary term at $r = 0$ that is divergent.
\end{remark}
\begin{remark}
Both the Carleman estimates \eqref{Carleman0} and the underlying Morawetz estimates can be viewed as ``centered about the origin", and both estimates crucially depend on the domain being spherically symmetric.
As a result, Theorem \ref{T.Carleman0} only holds when the spatial domain is an open ball.
We defer questions of whether Theorem \ref{T.Carleman0} is extendible to more general spatial domains to future papers.
\end{remark}
\subsection{Observability}
The breadth of applications of Carleman estimates to a wide range of PDEs \cite{DZZ, Tataru1} is remarkable.
Examples include unique continuation, control theory, inverse problems, as well as showing the absence of embedded eigenvalues in the continuous spectrum of Schr\"{o}dinger operators.
In this paper, we demonstrate one particular consequence of Theorem~\ref{T.Carleman0}: the boundary observability of linear waves involving a critically singular potential.
Roughly speaking, a boundary observability estimate shows that the energy of a wave confined in a bounded region can be estimated quantitatively by measuring only its boundary data over a sufficiently long time interval.
The key point is again that our Carleman estimates \eqref{Carleman0} capture the natural boundary data and energy associated with our singular wave operator.
As a result of this, Theorem \ref{T.Carleman0} can be combined with
standard arguments in order to prove the following rough
statement:~solutions to the wave equation with a critically singular
potential on the boundary of a cylindrical domain satisfy boundary observability estimates, provided that the observation is made over a large enough timespan.
A rigorous statement of this observability property is given in the subsequent theorem.
Notice that, due to energy estimates that we will show later, it is
enough to control the twisted $H^1$-norm of the solution at time zero:
\begin{theorem}\label{T.Observability0}
Let $B_1$, $n$, and $\kappa$ be as in Theorem \ref{T.Carleman0}.
Moreover, let $u$ be a smooth and real-valued solution of the wave equation
\begin{equation}
\label{Obs_wave_0} \Box_\kappa u = X \cdot D u + V u
\end{equation}
on the cylinder $( -T, T ) \times B_1$, where $X$ is a bounded (spacetime) vector field, and where $V$ is a bounded scalar potential.
Furthermore, suppose $u$ satisfies:
\begin{enumerate}[i)]
\item $\mc{D}_\kappa u = 0$.
\item $u$ ``has the boundary asymptotics of a sufficiently regular, finite energy solution of \eqref{Obs_wave_0}".
In particular, the Neumann trace $\mc{N}_\kappa u$ of $u$ exists and is finite.
\end{enumerate}
Then, for sufficiently large $T$, the following observability estimate holds for $u$:
\begin{equation}\label{Obs0}
\int_{ ( -T,T ) \times \partial B_1 } ( \mc{N}_\kappa u )^2 \gtrsim \int_{ \{ 0 \} \times B_1 }\Big[ ( \partial_t u )^{2} + | (1 - |x| )^\kappa \nabla_x [ (1-|x|)^{-\kappa} u ] |^2 + u^2 \Big] \,.
\end{equation}
\end{theorem}
Again, a more precise (and slightly more general) statement of the observability property can be found in Theorem \ref{T.Observability}.
\begin{remark}
The required timespan $2 T$ in Theorem \ref{T.Observability0} can be shown to depend on $n$, as well as on $\kappa$ when $n = 3$.
This is in direct parallel to the dependence of $c$ in Theorem \ref{T.Carleman0}.
See Theorem \ref{T.Observability} for more precise statements.
\end{remark}
\begin{remark}
Once again, a precise statement of the expected boundary asymptotics for $u$ in Theorem \ref{T.Observability0} is given in Definition \ref{admissible}.
\end{remark}
\begin{remark}
If $\Box_\kappa$ in Theorem \ref{T.Observability0} is replaced by $\Box$ (that is, we consider non-singular wave equations), then observability holds for any $T > 1$.
This can be deduced from either the geometric control condition of \cite{BLR} (see also \cite{BG, Macia}) or from standard Carleman estimates \cite{BBE, LTX, Zhang}.
To our knowledge, the optimal timespan for the observability result in Theorem \ref{T.Observability0} is not known.
\end{remark}
\begin{remark}
For non-singular wave equations, standard observability results also involve observation regions that contain only part of the boundary \cite{BLR, BG, LTX, Lions1}.
On the other hand, as our Carleman estimates \eqref{Carleman0} are centered about the origin, they only yield observability results from the entire boundary.
Whether partial boundary observability results also hold for the singular wave equation in Theorem~\ref{T.Observability0} is a topic of further investigation.
\end{remark}
\subsection{Outline of the Paper}
In Section~\ref{S.asymptotics}, we list some definitions that will be pertinent to our setting, and we establish some general properties that will be useful later on.
Section~\ref{S.multipliers} is devoted to the multiplier inequalities that are fundamental to our main Theorem~\ref{T.Carleman0}.
In particular, these generalize the classical Morawetz estimates to wave equations with critically singular potentials.
In Section~\ref{S.Carleman}, we give a precise statement and a proof of our main Carleman estimates (see Theorem \ref{T.Carleman}).
Finally, our main boundary observability result (see Theorem \ref{T.Observability}) is stated and proved in Section \ref{S.Observability}.
\section{Preliminaries} \label{S.asymptotics}
In this section, we record some basic definitions, and we establish the notations that we will use in the rest of the paper.
In particular, we define weights that capture the boundary behavior of solutions to wave equations governed by~$\Box_\kappa$.
We also define twisted derivatives constructed using the above weights, and we recall their basic properties.
Furthermore, we prove pointwise inequalities in terms of these twisted derivatives that will later lead to Hardy-type estimates.
\subsection{The Geometric Setting}
Our background setting is the spacetime $\R^{ 1 + n }$.
As usual, we let $t$ and $x$ denote the projections to the first and the last $n$ components of $\R^{ 1 + n }$, respectively, and we let $r := | x |$ denote the radial coordinate.
In addition, we let $g$ denote the Minkowski metric on $\R^{ 1 + n }$.
Recall that with respect to polar coordinates, we have that
\[
g = - dt^2 + dr^2 + r^2 g_{ \Sph^{n-1} } \text{,}
\]
where $g_{ \Sph^{n-1} }$ denotes the metric of the $(n-1)$-dimensional unit sphere.
Henceforth, we use the symbol $\nabla$ to denote the $g$-covariant derivative, while we use $\slashed{\nabla}$ to represent the induced angular covariant derivative on level spheres of $( t, r )$.
As before, the wave operator (with respect to $g$) is defined as
\[
\Box = g^{ \alpha \beta } \nabla_{ \alpha \beta } \text{.}
\]
As is customary, we use lowercase Greek letters for spacetime indices over $\R^{ n + 1 }$ (ranging from $0$ to $n$), lowercase Latin letters for spatial indices over $\R^n$ (ranging from $1$ to $n$), and uppercase Latin letters for angular indices over $\Sph^{ n - 1 }$ (ranging from $1$ to $n - 1$).
We always raise and lower indices using $g$, and we use the Einstein summation convention for repeated indices.
As in the previous section, we use $B_1$ to denote the open unit ball in $\R^n$, representing the spatial domain for our wave equations.
We also set
\begin{equation}
\label{domain} \mc{C} := ( -T, T ) \times B_1 \text{,} \qquad T > 0 \text{,}
\end{equation}
corresponding to the cylindrical spacetime domain.
In addition, we let
\begin{equation}
\label{domain_tbdry} \Gamma := ( -T, T ) \times \partial B_1
\end{equation}
denote the timelike boundary of $\mc{C}$.
To capture singular boundary behavior, we will make use of weights depending on the radial distance from $\partial B_1$.
Toward this end, we define the function
\begin{equation}
\label{y} y: \R^{ 1 + n } \rightarrow \R \text{,} \qquad y := 1 - r \text{.}
\end{equation}
From direct computations, we obtain the following identities for $y$:
\begin{align}
\label{y_id} \nabla^\alpha y \nabla_\alpha y = 1 \text{,} &\qquad \nabla^{ \alpha \beta } y \nabla_\alpha y \nabla_\beta y = 0 \text{,} \\
\notag \Box y = - ( n - 1 ) r^{-1} \text{,} &\qquad \nabla^\alpha y \nabla_\alpha ( \Box y ) = - ( n - 1 ) r^{-2} \text{,} \\
\notag \Box^2 y = ( n - 1 ) ( n - 3 ) r^{-3} \text{,} &\qquad \nabla^{ \alpha \beta } y \nabla_{ \alpha \beta } y = ( n - 1 ) r^{-2} \text{.}
\end{align}
\subsection{Twisted Derivatives}
From here on, let us fix a constant
\begin{equation}
\label{kappa} - \frac{1}{2} < \kappa < 0 \text{,}
\end{equation}
and let us define the twisted derivative operators
\begin{align}
\label{twisted} D \Phi &:= y^\kappa \nabla ( y^{ - \kappa } \Phi ) = \nabla \Phi - \frac{ \kappa }{ y } \nabla y \cdot \Phi \text{,} \\
\notag \bar{D} \Phi &:= y^{ - \kappa } \nabla ( y^\kappa \Phi ) = \nabla \Phi + \frac{ \kappa }{ y } \nabla y \cdot \Phi \text{,}
\end{align}
where $\Phi$ is any spacetime tensor field.
Observe that $- \bar{D}$ is the formal ($L^2$-)adjoint of $D$.
Moreover, the following (tensorial) product rules hold for $D$ and $\bar{D}$:
\begin{equation}
\label{prod_rule} D ( \Phi \otimes \Psi ) = \nabla \Phi \otimes \Psi + \Phi \otimes D \Psi \text{,} \qquad \bar{D} ( \Phi \otimes \Psi ) = \nabla \Phi \otimes \Psi + \Phi \otimes \bar{D} \Psi \text{.}
\end{equation}
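To see why these twisted operators are natural in our setting, note the following short computation (included here for illustration): by \eqref{twisted},
\[
D_\alpha ( y^\kappa ) = \kappa y^{ \kappa - 1 } \nabla_\alpha y - \frac{ \kappa }{ y } \nabla_\alpha y \cdot y^\kappa = 0 \text{,}
\]
so $D$ annihilates the leading boundary profile $y^\kappa$, and $D u = y^\kappa \nabla ( y^{-\kappa} u )$ remains less singular than $\nabla u$ whenever $y^{-\kappa} u$ is regular up to the boundary.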
In addition, let $\Box_y$ denote the $y$-twisted wave operator:
\begin{equation}
\label{Box_y} \Box_y := g^{ \alpha \beta } \bar{D}_\alpha D_\beta \text{.}
\end{equation}
A direct computation shows that $\Box_y$ differs from the singular wave operator $\Box_\kappa$ from \eqref{operator} by only a lower-order term.
More specifically, by \eqref{y_id} and \eqref{twisted},
\begin{align}
\label{Box_y_kappa} \Box_y &= \Box + \frac{ \kappa ( 1 - \kappa ) \cdot \nabla^\alpha y \nabla_\alpha y }{ y^2 } - \frac{ \kappa \cdot \Box y }{ y } \\
\notag &= \Box_\kappa + \frac{ ( n - 1 )\kappa }{ r y } \text{.}
\end{align}
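(Indeed, for a scalar $\phi$, expanding via \eqref{twisted} gives
\[
g^{ \alpha \beta } \bar{D}_\alpha D_\beta \phi = \Box \phi - \kappa \nabla^\alpha \left( \frac{ \nabla_\alpha y }{ y } \phi \right) + \frac{ \kappa }{ y } \nabla^\alpha y \nabla_\alpha \phi - \frac{ \kappa^2 }{ y^2 } \nabla^\alpha y \nabla_\alpha y \cdot \phi \text{,}
\]
and expanding the second term on the right-hand side, in which the first-order contributions cancel, yields the first line of \eqref{Box_y_kappa}.)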
In particular, \eqref{Box_y_kappa} shows that, up to a lower-order correction term, $\Box_y$ and $\Box_\kappa$ can be used interchangeably.
In practice, the derivation of our estimates will be carried out in terms of $\Box_y$, as it is better adapted to the twisted operators.
Finally, we remark that since $y$ is purely radial,
\[
D_t \phi = \nabla_t \phi = \partial_t \phi \text{,} \qquad D_A \phi = \slashed{\nabla}_A \phi = \partial_A \phi
\]
for scalar functions $\phi$.
Thus, we will use the above notations interchangeably whenever
convenient and whenever there is no risk of confusion.
Moreover, we will write
\[
D_X \phi = X^\alpha D_\alpha \phi
\]
to denote derivatives along a vector field $X$.
\subsection{Pointwise Hardy Inequalities}
Next, we establish a family of pointwise Hardy-type inequalities in terms of the twisted derivative operator $D$:
\begin{proposition}\label{P.Hardy}
For any $q \in \R$ and any $u \in C^1 ( \mc{C} )$, the following holds:
\begin{align}
\label{hardy} y^{ q - 1 } ( D_r u )^2 &\geq \frac{1}{4} ( 2 \kappa + q - 2 )^2 y^{ q - 3 } \cdot u^2 - ( n - 1 ) \left( \kappa + \frac{ q - 2 }{2} \right) y^{ q - 2 } r^{-1} \cdot u^2 \\
\notag &\qquad - \nabla^\beta \left[ \left( \kappa + \frac{ q - 2 }{2} \right) y^{ q - 2 } \nabla_\beta y \cdot u^2 \right] \text{.}
\end{align}
\end{proposition}
\begin{proof}
First, for any $p, b \in \R$, we have the inequality
\begin{align*}
0 &\leq ( y^p \cdot \nabla^\alpha y D_\alpha u + b y^{ p - 1 } \cdot u )^2 \\
&= y^{ 2 p } \cdot ( \nabla^\alpha y D_\alpha u )^2 + b^2 y^{ 2 p - 2 } \cdot u^2 + 2 b y^{ 2 p - 1 } \cdot u \nabla^\alpha y D_\alpha u \\
&= y^{ 2 p } \cdot ( D_r u )^2 + b ( b - 2 \kappa - 2 p + 1 ) y^{ 2 p - 2 } \cdot u^2 \\
&\qquad - b y^{ 2 p - 1 } \Box y \cdot u^2 + \nabla^\beta ( b y^{ 2 p - 1 } \nabla_\beta y \cdot u^2 ) \text{,}
\end{align*}
where we used \eqref{twisted} in the last step.
Setting $2 p = q - 1$, the above becomes
\begin{align*}
y^{ q - 1 } ( D_r u )^2 &\geq - b ( b - 2 \kappa - q + 2 ) y^{ q - 3 } \cdot u^2 + b y^{ q - 2 } \Box y \cdot u^2 \\
&\qquad - \nabla^\beta ( b y^{ q - 2 } \nabla_\beta y \cdot u^2 ) \text{.}
\end{align*}
Taking $b = \kappa + \frac{ q - 2 }{2}$ (which maximizes the coefficient $- b ( b - 2 \kappa - q + 2 )$ of the first term on the right-hand side) yields \eqref{hardy}.
\end{proof}
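\begin{remark}
In terms of the untwisted quantity $w := y^{ - \kappa } u$, the estimate \eqref{hardy} is a classical pointwise Hardy inequality in disguise: since $D_r u = y^\kappa \partial_r w$, setting $s := q + 2 \kappa$ recasts \eqref{hardy} as
\[
y^{ s - 1 } ( \partial_r w )^2 \geq \frac{1}{4} ( s - 2 )^2 y^{ s - 3 } \cdot w^2 - \frac{ ( n - 1 ) ( s - 2 ) }{ 2 } y^{ s - 2 } r^{-1} \cdot w^2 - \nabla^\beta \left[ \frac{ s - 2 }{2} y^{ s - 2 } \nabla_\beta y \cdot w^2 \right] \text{,}
\]
so that $\kappa$ enters only through a shift of the weight exponent.
\end{remark}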
\subsection{Boundary Asymptotics}
We conclude this section by discussing the precise boundary limits for our main results.
First, given $u \in C^1 ( \mc{C} )$, we define its Dirichlet and Neumann traces on $\Gamma$ with respect to $\Box_y$ (or equivalently, $\Box_\kappa$) by
\begin{align}
\label{bddconds} \mc{D}_\kappa u: \Gamma \rightarrow \R \text{,} &\qquad \mc{D}_\kappa u := \lim_{ r \nearrow 1 } ( y^{ -\kappa } u ) \text{,} \\
\notag \mc{N}_\kappa u: \Gamma \rightarrow \R \text{,} &\qquad \mc{N}_\kappa u := \lim_{ r \nearrow 1 } y^{ 2 \kappa } \partial_r ( y^{ - \kappa } u ) \text{.}
\end{align}
Note in particular that the formulas \eqref{bddconds} are directly inspired by \eqref{BCs}.
Now, the subsequent definition lists the main assumptions we will impose on boundary limits in our Carleman estimates and observability results:
\begin{definition} \label{admissible}
A function $u \in C^1 ( \mc{C} )$ is called \emph{boundary admissible} with respect to $\Box_y$ (or $\Box_\kappa$) when the following conditions hold:
\begin{enumerate}[i)]
\item $\mc{N}_\kappa u$ exists and is finite.
\item The following Dirichlet limits hold for $u$:
\begin{equation}
\label{super_dirichlet} ( 1 - 2 \kappa ) \mc{D}_\kappa ( y^{ - 1 + 2 \kappa } u ) = - \mc{N}_\kappa u \text{,} \qquad \mc{D}_\kappa ( y^{ 2 \kappa } \partial_t u ) = 0 \text{.}
\end{equation}
\end{enumerate}
Here, the Dirichlet and Neumann limits are in an $L^2$-sense on $( -T, T ) \times \Sph^{ n - 1 }$.
\end{definition}
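To illustrate these conditions, consider, near $\Gamma$, the model profile $u := \varphi \cdot y^{ 1 - \kappa }$, where $\varphi = \varphi ( t, \omega )$ is smooth.
A direct computation gives
\[
\mc{D}_\kappa u = \lim_{ r \nearrow 1 } ( \varphi \cdot y^{ 1 - 2 \kappa } ) = 0 \text{,} \qquad \mc{N}_\kappa u = \lim_{ r \nearrow 1 } y^{ 2 \kappa } \partial_r ( \varphi \cdot y^{ 1 - 2 \kappa } ) = - ( 1 - 2 \kappa ) \varphi \text{,}
\]
while
\[
( 1 - 2 \kappa ) \mc{D}_\kappa ( y^{ - 1 + 2 \kappa } u ) = ( 1 - 2 \kappa ) \lim_{ r \nearrow 1 } ( \varphi \cdot y^0 ) = ( 1 - 2 \kappa ) \varphi = - \mc{N}_\kappa u \text{,} \qquad \mc{D}_\kappa ( y^{ 2 \kappa } \partial_t u ) = \lim_{ r \nearrow 1 } ( y \cdot \partial_t \varphi ) = 0 \text{,}
\]
so that $u$ indeed satisfies \eqref{super_dirichlet}.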
The main motivation for Definition \ref{admissible} is that \emph{it captures the expected boundary asymptotics for solutions of the equation $\Box_y u = 0$ that have vanishing Dirichlet data}.
(In particular, note that $u$ being boundary admissible implies $\mc{D}_\kappa u = 0$.)
To justify this statement, we must first recall some results from \cite{Warnick}.
For $u \in C^1 ( \mc{C} )$ and $\tau \in ( -T, T )$, we define the following twisted $H^1$-norms:
\begin{align}
\label{E1} E_1 [ u ] ( \tau ) &:= \int_{ \mc{C} \cap \{ t = \tau \} } ( | \partial_t u |^2 + | D_r u |^2 + | \slashed{\nabla} u |^2 + u^2 ) \text{,} \\
\notag \bar{E}_1 [ u ] ( \tau ) &:= \int_{ \mc{C} \cap \{ t = \tau \} } ( | \partial_t u |^2 + | \bar{D}_r u |^2 + | \slashed{\nabla} u |^2 + u^2 ) \text{.}
\end{align}
Moreover, if $u \in C^2 ( \mc{C} )$ as well, then we define the twisted $H^2$-norm,
\begin{equation}
\label{E2} E_2 [ u ] ( \tau ) := \bar{E}_1 [ D_r u ] ( \tau ) + E_1 [ \partial_t u ] ( \tau ) + E_1 [ \slashed{\nabla} u ] ( \tau ) + E_1 [ u ] ( \tau ) \text{.}
\end{equation}
The results of \cite{Warnick} show that both $E_1 [ u ]$ and $E_2 [ u ]$ are natural energies associated with the operator $\Box_y$, in that their boundedness is propagated in time for solutions of $\Box_y u = 0$ with Dirichlet boundary conditions.
The following proposition shows that functions with uniformly bounded $E_2$-energy are boundary admissible, in the sense of Definition \ref{admissible}.
In particular, the preceding discussion then implies that boundary admissibility is achieved by sufficiently regular (in a twisted $H^2$-sense) solutions of the singular wave equation $\Box_y u = 0$, with Dirichlet boundary conditions.
\begin{proposition} \label{B.asymp}
Let $u \in C^2 ( \mc{C} )$, and assume that:
\begin{enumerate}[i)]
\item $\mc{D}_\kappa u = 0$.
\item $E_2 [ u ] ( \tau )$ is uniformly bounded for all $\tau \in ( -T, T )$.
\end{enumerate}
Then, $u$ is boundary admissible with respect to $\Box_y$, in the sense of Definition \ref{admissible}.
\end{proposition}
\begin{proof}
Fix $\tau \in ( -T, T )$ and $\omega \in \Sph^{ n - 1 }$, and let $0 < y_1 < y_0 \ll 1$.
Applying the fundamental theorem of calculus and integrating in $y$ yields
\begin{align*}
y^{ 2 \kappa } \partial_r ( y^{ - \kappa } u ) |_{ ( \tau, 1 - y_1, \omega ) } - y^{ 2 \kappa } \partial_r ( y^{ - \kappa } u ) |_{ ( \tau, 1 - y_0, \omega ) }
&= \int_{ y_1 }^{ y_0 } y^\kappa \bar{D}_r ( D_r u ) |_{ ( \tau, 1 - y, \omega ) } dy \text{,}
\end{align*}
where we have described points in $\bar{\mc{C}}$ using polar $( t, r, \omega )$-coordinates.
We now integrate the above over $\Gamma = ( -T, T ) \times \Sph^{ n - 1 }$, and we let $y_1 \searrow 0$.
In particular, observe that for $\mc{N}_\kappa u$ to exist (in the above $L^2$-sense) and be finite, it suffices to show that
\[
I := \int_\Gamma \left[ \int_0^{ y_0 } y^\kappa \bar{D}_r ( D_r u ) |_{ ( \tau, 1 - y, \omega ) } dy \right]^2 d \tau d \omega < \infty \text{.}
\]
However, by H\"older's inequality and \eqref{kappa}, we have
\[
I \leq \int_\Gamma \left[ \int_0^{ y_0 } y^{ 2 \kappa } dy \int_0^{ y_0 } | \bar{D}_r ( D_r u ) |^2 |_{ ( \tau, 1 - y, \omega ) } dy \right] d \tau d \omega \lesssim \int_{ -T }^T E_2 [u] ( \tau ) \, d \tau \text{.}
\]
Thus, the assumptions of the proposition imply that $I$, and hence $\mc{N}_\kappa u$, is finite.
Next, to prove the first limit in \eqref{super_dirichlet}, it suffices to show that
\begin{equation}
\label{B.asymp_1} J_{ y_0 } := \int_\Gamma \left( y^{ -1 + \kappa } u |_{ ( \tau, 1 - y_0, \omega ) } + \frac{1}{ 1 - 2 \kappa } \mc{N}_\kappa u |_{ ( \tau, \omega ) } \right)^2 d \tau d \omega \rightarrow 0 \text{,}
\end{equation}
as $y_0 \searrow 0$.
Since $\mc{D}_\kappa u = 0$, the fundamental theorem of calculus implies
\begin{align*}
J_{ y_0 } &= \int_\Gamma \left[ - y_0^{ -1 + 2 \kappa } \int_0^{ y_0 } y^{ - 2 \kappa } y^{ 2 \kappa } \partial_r ( y^{ -\kappa } u ) |_{ ( \tau, 1 - y, \omega ) } dy + \frac{1}{ 1 - 2 \kappa } \mc{N}_\kappa u |_{ ( \tau, \omega ) } \right]^2 d \tau d \omega \\
&= \int_\Gamma \left\{ y_0^{ -1 + 2 \kappa } \int_0^{ y_0 } y^{ - 2 \kappa } [ y^{ 2 \kappa } \partial_r ( y^{ -\kappa } u ) |_{ ( \tau, 1 - y, \omega ) } - \mc{N}_\kappa u |_{ ( \tau, \omega ) } ] dy \right\}^2 d \tau d \omega \text{.}
\end{align*}
Moreover, the Minkowski integral inequality yields
\begin{align*}
\sqrt{ J_{ y_0 } } &\leq y_0^{ -1 + 2 \kappa } \int_0^{ y_0 } y^{ -2 \kappa } \left\{ \int_\Gamma [ y^{ 2 \kappa } \partial_r ( y^{ -\kappa } u ) |_{ ( \tau, 1 - y, \omega ) } - \mc{N}_\kappa u |_{ ( \tau, \omega ) } ]^2 d \tau d \omega \right\}^\frac{1}{2} dy \\
&\lesssim \sup_{ 0 < y < y_0 } \left\{ \int_\Gamma [ y^{ 2 \kappa } \partial_r ( y^{ -\kappa } u ) |_{ ( \tau, 1 - y, \omega ) } - \mc{N}_\kappa u |_{ ( \tau, \omega ) } ]^2 d \tau d \omega \right\}^\frac{1}{2} \text{.}
\end{align*}
By the definition of $\mc{N}_\kappa u$, the right-hand side of the above converges to $0$ when $y_0 \searrow 0$.
This implies \eqref{B.asymp_1}, and hence the first part of \eqref{super_dirichlet}.
For the remaining limit in \eqref{super_dirichlet}, we first claim that $\mc{D}_\kappa ( \partial_t u )$ exists and is finite.
This argument is analogous to the first part of the proof.
Note that since
\[
y^{ - \kappa } \partial_t u |_{ ( \tau, 1 - y_1, \omega ) } - y^{ - \kappa } \partial_t u |_{ ( \tau, 1 - y_0, \omega ) } = \int_{ y_1 }^{ y_0 } y^{ -\kappa } D_r \partial_t u |_{ ( \tau, 1 - y, \omega ) } dy \text{,}
\]
then the claim immediately follows from the fact that
\[
\int_\Gamma \left[ \int_0^{ y_0 } y^{ - \kappa } D_r \partial_t u |_{ ( \tau, 1 - y, \omega ) } dy \right]^2 d \tau d \omega \lesssim \int_{ -T }^T E_2 [u] ( \tau ) \, d \tau < \infty \text{.}
\]
Moreover, to determine $\mc{D}_\kappa ( \partial_t u )$, we see that
for any test function $\varphi \in C^\infty_0 ( \Gamma )$,
\[
\int_\Gamma \mc{D}_\kappa ( \partial_t u ) \cdot \varphi = - \lim_{ y \searrow 0 } \int_\Gamma y^{ - \kappa } u |_{ r = 1 - y } \cdot \partial_t \varphi = - \int_\Gamma \mc{D}_\kappa u \cdot \partial_t \varphi = 0 \text{.}
\]
It then follows that $\mc{D}_\kappa ( \partial_t u ) = 0$.
Finally, to prove the second limit of \eqref{super_dirichlet}, it suffices to show
\begin{equation}
\label{B.asymp_2} K_{ y_0 } := \int_\Gamma ( y^{ - \frac{1}{2} } \partial_t u )^2 |_{ ( \tau, 1 - y_0, \omega ) } d \tau d \omega \rightarrow 0 \text{,} \qquad y_0 \searrow 0 \text{.}
\end{equation}
Using that $\mc{D}_\kappa ( \partial_t u ) = 0$ along with the fundamental theorem of calculus yields
\begin{align*}
K_{ y_0 } &= \int_\Gamma \left[ y_0^{ -\frac{1}{2} + \kappa } \int_0^{ y_0 } y^{ - \kappa } D_r \partial_t u |_{ ( \tau, 1 - y, \omega ) } dy \right]^2 d \tau d \omega \\
&\leq y_0^{ -1 + 2 \kappa } \int_\Gamma \left[ \int_0^{ y_0 } y^{ - 2 \kappa } d y \int_0^{ y_0 } ( D_r \partial_t u )^2 |_{ ( \tau, 1 - y, \omega ) } dy \right] d \tau d \omega \\
&\lesssim \int_0^{ y_0 } \int_\Gamma ( D_r \partial_t u )^2 |_{ ( \tau, 1 - y, \omega ) } d \tau d \omega dy \text{.}
\end{align*}
The integral on the right-hand side is bounded by (the time integral of) $E_2 [ u ] ( \tau )$, restricted to the region $1 - y_0 < r < 1$.
Since $E_2 [ u ] ( \tau )$ is uniformly bounded, the time integral $\int_{ -T }^T E_2 [ u ] ( \tau ) \, d \tau$ is finite, and hence its contribution from the shrinking region $1 - y_0 < r < 1$ vanishes as $y_0 \searrow 0$.
It follows that $K_{ y_0 }$ indeed converges to zero, completing the proof.
\end{proof}
\begin{remark}
Based on the intuition from \cite{Gueye}, one may conjecture that Proposition \ref{B.asymp} could be further strengthened, with the boundedness assumption on $E_2 [ u ]$ replaced by a sharp boundedness condition on an appropriate fractional $H^{ 1 + \kappa }$-norm.
However, we will not pursue this question in the present paper.
\end{remark}
\section{Multiplier Inequalities} \label{S.multipliers}
In this section, we derive some multiplier identities and inequalities, which form the foundations of the proof of the main Carleman estimates, Theorem \ref{T.Carleman}.
As mentioned before, these can be viewed as extensions to singular wave operators of the classical Morawetz inequality for wave equations.
In what follows, we fix $0 < \varepsilon \ll 1$, and we define the cylindrical region
\begin{equation}
\label{C_eps} \mc{C}_\varepsilon := ( -T, T ) \times \{ \varepsilon < r < 1 - \varepsilon \} \text{.}
\end{equation}
Moreover, let $\Gamma_\varepsilon$ denote the timelike boundary of $\mc{C}_\varepsilon$:
\begin{equation}
\label{Gamma_eps} \Gamma_\varepsilon := \Gamma_\varepsilon^- \cup \Gamma_\varepsilon^+ := [ ( -T, T ) \times \{ r = \varepsilon \} ] \cup [ ( -T, T ) \times\{ r = 1 - \varepsilon \} ] \text{.}
\end{equation}
We also let $\nu$ denote the unit outward-pointing ($g$-)normal vector field on $\Gamma_\varepsilon$.
Finally, we fix a constant $c > 0$, and we define the functions
\begin{equation}
\label{f,z} f := - \frac{1}{ 1 + 2 \kappa } \cdot y^{ 1 + 2\kappa } - c t^2 \text{,} \qquad z := - 4 c \text{,}
\end{equation}
which will be used to construct the multiplier for our upcoming inequalities.
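For later reference, we note that the gradient of $f$ is given by
\[
\nabla^\alpha f \, \partial_\alpha = y^{ 2 \kappa } \partial_r + 2 c t \, \partial_t \text{,} \qquad \nabla^\alpha f \nabla_\alpha f = y^{ 4 \kappa } - 4 c^2 t^2 \text{,}
\]
so that the multiplier constructed below differentiates along an almost radial vector field that degenerates toward $\Gamma$ at the rate $y^{ 2 \kappa }$.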
\subsection{A Preliminary Identity}
We begin by deriving a preliminary form of our multiplier identity, for which the multiplier is defined using $f$ and $z$:
\begin{proposition} \label{T.mult_general}
Let $u \in C^\infty ( \mc{C} )$, and assume $u$ is supported on $\mc{C} \cap \{ |t| < T - \delta \}$ for some $0 < \delta \ll 1$.
Then, we have the identity,
\begin{align}
\label{gralmult} - \int_{ \mc{C}_\varepsilon } \Box_y u \cdot S_{ f, z } u &= \int_{ \mc{C}_\varepsilon } ( \nabla^{ \alpha \beta } f + z \cdot g^{ \alpha \beta } ) D_\alpha u D_\beta u + \int_{ \mc{C}_\varepsilon } \mc{A}_{ f, z } \cdot u^2 \\
\notag &\qquad - \int_{ \Gamma_\varepsilon } S_{ f, z } u \cdot D_\nu u + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta u
D^\beta u \\
\notag &\qquad + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot u^2 \text{,}
\end{align}
for any $0 < \varepsilon \ll 1$, where
\begin{align}
\label{gral_wAS} w_{ f, z } &:= \frac{1}{2} \left( \Box f + \frac{ 2 \kappa }{y} \nabla_\alpha y \nabla^\alpha f \right) + z \text{,} \\
\notag \mc{A}_{ f, z } &:= - \frac{1}{2} \left( \Box w_{ f, z } + \frac{ 2 \kappa }{y} \nabla_\alpha y \nabla^\alpha w_{ f, z } \right) \text{,} \\
\notag S_{ f, z } &:= \nabla^\alpha f \cdot D_\alpha + w_{ f, z } \text{.}
\end{align}
\end{proposition}
\begin{proof}
Integrating the left-hand side of \eqref{gralmult} by parts twice reveals that
\begin{align*}
- \int_{ \mc{C}_\varepsilon } \Box_y u \cdot \nabla^\alpha f D_\alpha u &= \int_{ \mc{C}_\varepsilon } D_\beta u \cdot D^\beta ( \nabla^\alpha f D_\alpha u ) - \int_{ \Gamma_\varepsilon } \nabla^\alpha f D_\alpha u \cdot D_\nu u \\
&= \int_{ \mc{C}_\varepsilon } \nabla^{ \alpha \beta } f \cdot
D_\alpha u D_\beta u + \int_{ \mc{C}_\varepsilon } \nabla^\alpha f \cdot D_\beta u D_\alpha{}^\beta u \\
&\qquad - \int_{ \Gamma_\varepsilon } \nabla^\alpha f D_\alpha u \cdot D_\nu u \\
&= \int_{ \mc{C}_\varepsilon } \nabla^{ \alpha \beta } f \cdot D_\alpha u D_\beta u + \frac{1}{2} \int_{ \mc{C}_\varepsilon } \nabla^\alpha f \cdot \nabla_\alpha ( D_\beta u D^\beta u ) \\
&\qquad - \int_{ \mc{C}_\varepsilon } \frac{ \kappa }{y} \nabla_\alpha y \nabla^\alpha f \cdot D_\beta u D^\beta u - \int_{ \Gamma_\varepsilon } \nabla^\alpha f D_\alpha u \cdot D_\nu u \\
&= \int_{ \mc{C}_\varepsilon } \left[ \nabla^{ \alpha \beta } f - \frac{1}{2}\left( \Box f + \frac{ 2 \kappa }{y} \nabla_\alpha y \nabla^\alpha f \right) g^{\alpha\beta} \right] \cdot D_\alpha u D_\beta u \\
&\qquad - \int_{ \Gamma_\varepsilon } \nabla^\alpha f D_\alpha u \cdot D_\nu u + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta u D^\beta u \text{,}
\end{align*}
where in the above steps, we also applied the identities \eqref{twisted}, \eqref{prod_rule}, \eqref{Box_y}, as well as the observation that $\bar{D}$ is the adjoint of $D$.
A similar set of computations also yields
\begin{align*}
- \int_{ \mc{C}_\varepsilon } \Box_y u \cdot w_{ f, z } u &= \int_{ \mc{C}_\varepsilon } D^\alpha u D_\alpha ( w_{ f, z } u ) - \int_{ \Gamma_\varepsilon } w_{ f, z } \cdot u D_\nu u \\
&= \int_{ \mc{C}_\varepsilon } \nabla_\alpha w_{ f, z } \cdot u D^\alpha u + \int_{ \mc{C}_\varepsilon } w_{ f, z } \cdot D^\alpha u D_\alpha u - \int_{ \Gamma_\varepsilon } w_{ f, z } \cdot u D_\nu u \\
&= \int_{ \mc{C}_\varepsilon } w_{ f, z } \cdot D^\alpha u D_\alpha u + \frac{1}{2} \int_{ \mc{C}_\varepsilon } \nabla_\alpha w_{ f, z } \cdot \nabla^\alpha ( u^2 ) \\
&\qquad - \int_{ \mc{C}_\varepsilon } \frac{ \kappa }{y} \nabla^\alpha y \nabla_\alpha w_{ f, z } \cdot u^2 - \int_{ \Gamma_\varepsilon } w_{ f, z } \cdot u D_\nu u \\
&= \int_{ \mc{C}_\varepsilon } w_{ f, z } \cdot D^\alpha u D_\alpha u - \frac{1}{2} \int_{ \mc{C}_\varepsilon } \left( \Box w_{ f, z } + \frac{ 2 \kappa }{y} \nabla^\alpha y \nabla_\alpha w_{ f, z } \right) \cdot u^2 \\
&\qquad - \int_{ \Gamma_\varepsilon } w_{ f, z } \cdot u D_\nu u + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot u^2 \text{.}
\end{align*}
Adding the above two identities results in \eqref{gralmult}.
\end{proof}
\subsection{Computations for $f$ and $z$}
In the following proposition, we collect some computations involving the functions $f$ and $z$ that will be useful later on.
\begin{proposition} \label{T.f,z}
$f$, $w_{ f, z }$, and $\mc{A}_{ f, z }$ (defined as in \eqref{f,z} and \eqref{gral_wAS}) satisfy
\begin{align}
\label{wAS} \nabla_{ \alpha \beta } f &= y^{ 2 \kappa } \cdot \nabla_{ \alpha \beta } r - 2 \kappa y^{ 2 \kappa - 1 } \cdot \nabla_\alpha r \nabla_\beta r - 2 c \cdot \nabla_\alpha t \nabla_\beta t \text{,} \\
\notag w_{ f, z } &= - 2 \kappa \cdot y^{ 2 \kappa - 1 } + \frac{1}{2} ( n - 1 ) \cdot y^{ 2 \kappa } r^{-1} - 3 c \,,\\
\notag \mc{A}_{ f, z } &= 2 \kappa ( 2 \kappa - 1 )^2 \cdot y^{ 2 \kappa - 3 } - \frac{1}{2} ( n - 1 ) \kappa ( 8 \kappa - 3 ) \cdot y^{ 2 \kappa - 2 } r^{-1} \\
\notag &\qquad + \frac{1}{2} ( n - 1 ) ( n - 4 ) \kappa \cdot y^{ 2 \kappa -
1 } r^{-2} + \frac{1}{4} ( n - 1 ) ( n - 3 ) \cdot y^{ 2 \kappa } r^{-3} \text{.}
\end{align}
\end{proposition}
\begin{proof}
First, we fix $q \in \R \setminus \{ -1 \}$, and we let
\begin{equation}
\label{fq} f_q := - \frac{ y^{ 1 + q } }{ 1 + q } \text{.}
\end{equation}
Note that $f_q$ satisfies
\begin{align}
\label{fq_deriv} \nabla_\alpha f_q &= - y^q \cdot \nabla_\alpha y \text{,} \\
\notag \nabla_{ \alpha \beta } f_q &= - y^q \cdot \nabla_{ \alpha \beta } y - q y^{ q - 1 } \cdot \nabla_\alpha y \nabla_\beta y \text{,} \\
\notag \Box f_q &= - y^q \cdot \Box y - q y^{ q - 1 } \cdot \nabla^\alpha y \nabla_\alpha y \text{,} \\
\notag \frac{ 2 \kappa }{y} \cdot \nabla^\alpha y \nabla_\alpha f_q &= - 2 \kappa y^{ q - 1 } \cdot \nabla^\alpha y \nabla_\alpha y \text{.}
\end{align}
Next, using the notations from \eqref{gral_wAS}, along with \eqref{y_id} and \eqref{fq_deriv}, we have
\begin{align}
\label{wq} w_{ f_q, 0 } &= - \frac{1}{2} y^q \cdot \Box y - \left( \kappa + \frac{ q }{2} \right) y^{ q - 1 } \cdot \nabla^\alpha y \nabla_\alpha y \\
\notag &= - \left( \kappa + \frac{ q }{2} \right) \cdot y^{ q - 1 } + \frac{ n - 1 }{2} \cdot y^q r^{-1} \text{.}
\end{align}
Moreover, further differentiating \eqref{wq} and again using \eqref{y_id}, we see that
\begin{align*}
\Box w_{ f_q, 0 } &= - \frac{1}{2} ( q + 2 \kappa ) ( q - 1 ) ( q - 2 ) y^{ q - 3 } \cdot ( \nabla^\alpha y\nabla_\alpha y )^2 \\
&\qquad - ( q - 1 ) [( q + \kappa ) \Box y \nabla^\alpha y \nabla_\alpha y + 2 ( q + 2 \kappa ) \nabla^{ \alpha \beta } y \nabla_\alpha y \nabla_\beta y]\cdot y^{ q - 2 } \\
&\qquad - 2 ( q + \kappa ) y^{ q - 1 } \cdot \nabla^\alpha y \nabla_\alpha ( \Box y ) - ( q + 2 \kappa ) y^{ q - 1 } \cdot \nabla^{ \alpha \beta } y \nabla_{ \alpha \beta } y \\
&\qquad - \frac{1}{2} q y^{ q - 1 } \cdot ( \Box y)^2 - \frac{1}{2} y^q \cdot \Box^2 y \text{,} \\
\frac{ 2 \kappa }{y} \nabla^\alpha y \nabla_\alpha w_{ f_q, 0 } &= - \kappa ( q + 2 \kappa ) ( q - 1 ) y^{ q - 3 } \cdot ( \nabla^\alpha y \nabla_\alpha y )^2 - \kappa q y^{ q - 2 } \cdot \Box y \nabla^\alpha y \nabla_\alpha y \\
&\qquad - 2 \kappa ( q + 2 \kappa ) y^{ q - 2 } \cdot \nabla^{ \alpha \beta } y \nabla_\alpha y \nabla_\beta y - \kappa y^{ q - 1 } \cdot \nabla^\alpha y \nabla_\alpha ( \Box y ) \text{.}
\end{align*}
We can then use the above to compute the coefficient $\mc{A}_{ f_q, 0 }$:
\begin{align}
\label{Aq} \mc{A}_{ f_q, 0 } &= \frac{1}{4} ( q + 2 \kappa ) ( q + 2 \kappa - 2 ) ( q - 1 ) y^{ q - 3 } \cdot ( \nabla^\alpha y \nabla_\alpha y )^2 \\
\notag &\qquad + \frac{1}{2} ( q^2 - q + 2 \kappa q - \kappa ) y^{ q - 2 } \cdot \Box y \nabla^\alpha y \nabla_\alpha y \\
\notag &\qquad + ( q + 2 \kappa ) ( q + \kappa - 1 ) y^{ q - 2 } \cdot \nabla^{ \alpha \beta } y \nabla_\alpha y \nabla_\beta y \\
\notag &\qquad + \frac{1}{2} ( 2 q + 3 \kappa ) y^{ q - 1 } \cdot \nabla^\alpha y \nabla_\alpha ( \Box y ) + \frac{1}{2} ( q + 2 \kappa ) y^{ q - 1 } \cdot \nabla^{ \alpha \beta } y \nabla_{ \alpha \beta } y \\
\notag &\qquad + \frac{1}{4} q y^{ q - 1 } \cdot ( \Box y )^2 + \frac{1}{4} y^q \cdot \Box^2 y \\
\notag &= \frac{1}{4} ( q + 2 \kappa ) ( q + 2 \kappa - 2 ) ( q - 1 ) \cdot y^{ q - 3 } \\
\notag &\qquad - \frac{1}{2} ( n - 1 ) ( q^2 - q + 2 \kappa q - \kappa ) \cdot y^{ q - 2 } r^{-1} \\
\notag &\qquad + \frac{1}{4} ( n - 1 ) [ q ( n - 3 ) - 2 \kappa ] \cdot y^{ q - 1 } r^{-2} + \frac{1}{4} ( n - 1 ) ( n - 3 ) \cdot y^q r^{-3} \text{.}
\end{align}
Notice from \eqref{f,z} and \eqref{fq} that we can write
\[
f = f_{ 2 \kappa } - c t^2 \text{.}
\]
Thus, substituting $q = 2 \kappa$ into \eqref{fq_deriv}, we see that the Hessian of $f$ satisfies
\begin{align*}
\nabla_{ \alpha \beta } f &= \nabla_{ \alpha \beta } f_{ 2 \kappa } - c \nabla_{ \alpha \beta } t^2 \\
&= y^{ 2 \kappa } \cdot \nabla_{ \alpha \beta } r - 2 \kappa y^{ 2 \kappa - 1 } \cdot \nabla_\alpha r \nabla_\beta r - 2 c \nabla_\alpha t \nabla_\beta t \text{,}
\end{align*}
which is precisely the first part of \eqref{wAS}.
Moreover, noting that
\[
w_{ - c t^2, 0 } = c \text{,}
\]
then we also have
\begin{align*}
w_{ f, z } &= w_{ f_{ 2 \kappa }, 0 } + w_{ - c t^2, 0 } + z \\
&= - 2 \kappa \cdot y^{ 2 \kappa - 1 } + \frac{1}{2} ( n - 1 ) \cdot y^{ 2 \kappa } r^{-1} - 3 c \,,
\end{align*}
which gives the second equation in \eqref{wAS}.
Finally, noting that
\[
\mc{A}_{ - c t^2, 0 } = 0 \text{,} \qquad - \frac{1}{2} \left( \Box z + \frac{ 2 \kappa }{y} \cdot \nabla^\alpha y \nabla_\alpha z \right) = 0 \text{,}
\]
we obtain, with the help of \eqref{y_id}, the last equation of \eqref{wAS}:
\begin{align*}
\mc{A}_{ f, z } &= \mc{A}_{ f_{ 2 \kappa }, 0 } + \mc{A}_{ - c t^2, 0 } - \frac{1}{2} \left( \Box z + \frac{ 2 \kappa }{y} \cdot \nabla^\alpha y \nabla_\alpha z \right) \\
&= 2 \kappa ( 2 \kappa - 1 )^2 y^{ 2 \kappa - 3 } \cdot ( \nabla^\alpha y \nabla_\alpha y )^2 + \frac{1}{2} \kappa ( 8 \kappa - 3 ) y^{ 2 \kappa - 2 } \cdot \Box y \nabla^\alpha y \nabla_\alpha y \\
&\qquad + 4 \kappa ( 3 \kappa - 1 ) y^{ 2 \kappa - 2 } \cdot \nabla^{ \alpha \beta } y \nabla_\alpha y \nabla_\beta y + \frac{7}{2} \kappa y^{ 2 \kappa - 1 } \cdot \nabla^\alpha y \nabla_\alpha ( \Box y ) \\
&\qquad + 2 \kappa y^{ 2 \kappa - 1 } \cdot \nabla^{ \alpha \beta } y \nabla_{ \alpha \beta } y + \frac{1}{2} \kappa y^{ 2 \kappa - 1 } \cdot ( \Box y )^2 + \frac{1}{4} y^{ 2 \kappa } \cdot \Box^2 y \\
&= 2 \kappa ( 2 \kappa - 1 )^2 \cdot y^{ 2 \kappa - 3 } - \frac{1}{2} ( n - 1 ) \kappa ( 8 \kappa - 3 ) \cdot y^{ 2 \kappa - 2 } r^{-1} \\
&\qquad + \frac{1}{2} ( n - 1 ) ( n - 4 ) \kappa \cdot y^{ 2 \kappa - 1 } r^{-2} + \frac{1}{4} ( n - 1 ) ( n - 3 ) \cdot y^{ 2 \kappa } r^{-3} \text{.} \qedhere
\end{align*}
\end{proof}
\subsection{The Main Inequality}
We conclude this section with the multiplier inequality that will be used to prove our main Carleman estimate:
\begin{proposition} \label{T.multineq}
Let $f$ and $z$ be as in \eqref{f,z}, and let $u \in C^\infty ( \mc{C} )$ be supported on $\mc{C} \cap \{ |t| < T - \delta \}$ for some $0 < \delta \ll 1$.
Then, we have the inequality
\begin{align}
\label{multineq} - \int_{ \mc{C}_\varepsilon } \Box_y u \cdot S_{ f, z } u &\geq \int_{ \mc{C}_\varepsilon } [ ( 1 - 4 c ) \cdot | \slashed{\nabla} u |^2 + 2 c \cdot ( \partial_t u )^2 - 4 c \cdot ( D_r u )^2 ] \\
\notag &\qquad - \frac{1}{2} ( n - 1 ) \kappa \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 2 } r^{-2} [ r - ( n - 4 ) y ] \cdot u^2 \\
\notag &\qquad + \frac{1}{4} ( n - 1 ) ( n - 3 ) \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa } r^{-3} \cdot u^2 - \int_{ \Gamma_\varepsilon } S_{ f,z } u \cdot D_\nu u \\
\notag &\qquad + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta u D^\beta u + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot u^2 \\
\notag &\qquad + 2 \kappa ( 2 \kappa - 1 ) \int_{ \Gamma_\varepsilon } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot u^2 \text{,}
\end{align}
for any $0 < \varepsilon \ll 1$, where $w_{ f, z }$ and $S_{ f, z }$ are defined as in \eqref{gral_wAS}.
\end{proposition}
\begin{proof}
Applying the multiplier identity \eqref{gralmult}, with $f$ and $z$ from \eqref{f,z}, and recalling the formulas \eqref{wAS} for $\nabla^2 f$, $w_{ f, z }, $ and $\mc{A}_{ f, z }$, we obtain that
\[
I := - \int_{ \mc{C}_\varepsilon } \Box_y u \cdot S_{ f, z } u
\]
satisfies the identity
\begin{align}
\label{multineq_1} I &= \int_{\mc{C}_\varepsilon} ( y^{ 2 \kappa } \nabla^{ \alpha \beta } r - 2 \kappa y^{ -1 + 2 \kappa } \nabla^\alpha r \nabla^\beta r - 2 c \nabla^\alpha t \nabla^\beta t - 4 c g^{\alpha \beta } ) D_\alpha u D_\beta u \\
\notag &\qquad + 2 \kappa ( 2 \kappa - 1 )^2 \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 3 } u^2 - \frac{1}{2} ( n - 1 ) \kappa ( 8 \kappa - 3 ) \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 2 } r^{-1} u^2 \\
\notag &\qquad + \frac{1}{2} ( n - 1 ) ( n - 4 ) \kappa \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 1 } r^{-2} u^2 + \frac{1}{4} ( n - 1 ) ( n - 3 ) \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa } r^{-3} u^2 \\
\notag &\qquad - \int_{ \Gamma_\varepsilon } S_{ f, z } u \cdot D_\nu u + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta u D^\beta u + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot u^2 \text{.}
\end{align}
For the first-order terms in the multiplier identity, we notice that
\[
\nabla^{ \alpha \beta } r \cdot D_\alpha u D_\beta u = r^{-1} |\slashed{\nabla} u |^2 \text{,} \qquad | \slashed{\nabla} u |^2 = g^{AB} \slashed{\nabla}_A u \slashed{\nabla}_B u \text{,}
\]
and we hence expand
\begin{align}
\label{multineq_2} &( y^{ 2 \kappa } \cdot \nabla^{ \alpha \beta } r - 2 \kappa y^{ -1 + 2 \kappa } \nabla^\alpha r \nabla^\beta r - 2 c \cdot \nabla^\alpha t \nabla^\beta t - 4 c \cdot g^{ \alpha \beta } ) D_\alpha u D_\beta u \\
\notag &\quad \geq - 2 \kappa y^{ - 1 + 2 \kappa } ( D_r u )^2 + ( y^{ 2 \kappa } r^{-1} - 4 c ) | \slashed{\nabla} u |^2 + 2 c ( \partial_t u )^2 - 4 c ( D_r u )^2 \\
\notag &\quad \geq - 2 \kappa y^{ - 1 + 2 \kappa } ( D_r u )^2 + ( 1 - 4 c ) | \slashed{\nabla} u |^2 + 2 c ( \partial_t u )^2 - 4 c ( D_r u )^2 \text{.}
\end{align}
Moreover, applying the Hardy inequality \eqref{hardy}, with $q = 2 \kappa$, yields
\begin{align}
\label{multineq_3} - 2 \kappa y^{ 2 \kappa - 1 } ( D_r u )^2 &\geq - 2 \kappa ( 2 \kappa - 1 )^2 y^{ 2 \kappa - 3 } u^2 + ( n - 1 ) 2 \kappa ( 2 \kappa - 1 ) y^{ 2 \kappa - 2 } r^{-1} u^2 \\
\notag &\qquad + 2 \kappa ( 2 \kappa - 1 )\nabla^\beta (y^{ 2 \kappa - 2 } \nabla_\beta y \cdot u^2 ) \text{.}
\end{align}
The desired inequality \eqref{multineq} now follows by combining \eqref{multineq_1}--\eqref{multineq_3} and applying the divergence theorem to the last term in \eqref{multineq_3}.
\end{proof}
\section{The Carleman Estimates} \label{S.Carleman}
In this section, we apply the preceding multiplier inequality to obtain our main Carleman estimates.
The precise statement of our estimates is the following:
\begin{theorem} \label{T.Carleman}
Assume $n \neq 2$, and fix $-\frac{1}{2} < \kappa < 0$.
Also, let $u \in C^\infty ( \mc{C} )$ satisfy:
\begin{enumerate}[i)]
\item $u$ is boundary admissible (see Definition \ref{admissible}).
\item $u$ is supported on $\mc{C} \cap \{ |t| < T - \delta \}$ for some $\delta > 0$.
\end{enumerate}
Then, there exists some sufficiently large $\lambda_0 > 0$, depending only on $n$ and $\kappa$, such that the following Carleman inequality holds for all $\lambda \geq \lambda_0$:
\begin{align}
\label{Carleman} &\lambda \int_{\Gamma} e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 + \int_{ \mc{C} } e^{ 2 \lambda f } ( \Box_\kappa u )^2 \\
\notag &\quad \geq C_0 \lambda \int_{\mc{C} } e^{ 2 \lambda f } [ ( \partial_t u )^2 + | \slashed{\nabla} u |^2 + ( D_r u )^2 ] + C_0 \lambda^3 \int_{ \mc{C} } e^{ 2 \lambda f } y^{ 6 \kappa - 1 } u^2 \\
\notag &\quad\qquad + C_0 \lambda \cdot \begin{cases}
\int_{ \mc{C} } e^{ 2 \lambda f } y^{ 2 \kappa - 2 } r^{-3} \cdot u^2 & \quad n \geq 4 \\
\int_{ \mc{C} } e^{ 2 \lambda f } y^{ 2 \kappa - 2 } r^{-2} \cdot u^2 & \quad n = 3 \\
0 & \quad n = 1
\end{cases} \text{,}
\end{align}
where the constant $C_0 > 0$ depends on $n$ and $\kappa$, where
\[
f = - \frac{1}{ 1 + 2 \kappa } \cdot y^{ 1 + 2 \kappa } - c t^2 \,,
\]
as in~\eqref{f,z},
and where the constant $c$ satisfies
\begin{equation}
\label{eqc} 0 < c < \frac{1}{5} \text{,} \qquad
\begin{cases}
c \leq \frac{ 1 }{ 4 \sqrt{3} \cdot T} & \quad n \geq 4 \\
c \leq \min \left\{ \frac{1}{4 \sqrt{15} \cdot T}, \frac{|\kappa|}{120} \right\} & \quad n = 3 \\
c \leq \frac{ 1}{ 4 \sqrt{15} \cdot T} & \quad n = 1
\end{cases} \text{.}
\end{equation}
\end{theorem}
The proof of Theorem~\ref{T.Carleman} is carried out in the remainder of this section.
\begin{remark}
We note that parts of this proof will treat the cases $n = 1$, $n = 3$, and $n \geq 4$ separately.
This accounts for the difference in the assumptions for $c$ in \eqref{eqc}, which will affect the required timespan in our upcoming observability inequalities.
\end{remark}
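\begin{remark}
We also note that the Carleman weight in \eqref{Carleman} is uniformly comparable to $1$: since $0 \leq y^{ 1 + 2 \kappa } \leq 1$ on $\bar{B}_1$ and $0 \leq c t^2 \leq c T^2$, we have
\[
e^{ - 2 \lambda [ ( 1 + 2 \kappa )^{-1} + c T^2 ] } \leq e^{ 2 \lambda f } \leq 1 \text{.}
\]
In particular, once $\lambda$ is fixed, the weights in \eqref{Carleman} can be removed, at the cost of constants depending on $\lambda$, $\kappa$, and $c T^2$.
\end{remark}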
\subsection{The Conjugated Inequality}
From here on, let us assume the hypotheses of Theorem \ref{T.Carleman}.
Let us also suppose that $\lambda_0$ is sufficiently large, with its precise value depending only on $n$ and $\kappa$.
In addition, we define the following:
\begin{equation}
\label{eqv} v := e^{ \lambda f } u \text{,} \qquad \mc{L} v := e^{ \lambda f } \Box_y ( e^{ - \lambda f } v ) \text{.}
\end{equation}
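Note that, by \eqref{eqv}, we simply have
\[
\mc{L} v = e^{ \lambda f } \Box_y u \text{,}
\]
so that bounds for $\mc{L} v$ translate directly into weighted bounds for $\Box_y u$, and hence, via \eqref{Box_y_kappa}, for $\Box_\kappa u$.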
The objective of this subsection is to establish the following inequality for $v$:
\begin{lemma} \label{L.conjest}
For any $\lambda \geq \lambda_0$, we have the inequality
\begin{align}
\label{conjest} \frac{1}{ 4 \lambda } \int_{ \mc{C}_\varepsilon } ( \mc{L} v )^2 &\geq \frac c 2 \int_{ \mc{C}_\varepsilon } \left[( \partial_t v )^2+ | \slashed{\nabla} v |^2 + ( D_r v )^2 \right] - \frac{1}{2} \kappa \lambda^2 \int_{ \mc{C}_\varepsilon } y^{ 6 \kappa - 1 } v^2 \\
\notag &\qquad + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta v D^\beta v - \int_{ \Gamma_\varepsilon } S_{ f, z } v \cdot D_\nu v \\
\notag &\qquad - \frac{1}{2} \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) -8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot v^2 + 2 \kappa ( 2 \kappa - 1 ) \int_{ \Gamma_\varepsilon } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \\
\notag &\qquad + \begin{cases}
c_1 \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 2 } r^{-3} \cdot v^2 & \quad n \geq 4 \\
c_1 \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 2 } r^{-2} \cdot v^2 + c_2 \int_{ \Gamma_\varepsilon } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 & \quad n = 3 \\
c_2 \int_{ \Gamma_\varepsilon } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 & \quad n = 1
\end{cases} \,,
\end{align}
where $S_{ f, z }$ and $w_{ f, z }$ are defined as in \eqref{gral_wAS} and \eqref{wAS}, where the constant $c_1 > 0$ depends on $n$ and $\kappa$, and where the constant $c_2 > 0$ depends on $n$.
\end{lemma}
\begin{proof}
First, observe that by \eqref{twisted}--\eqref{Box_y}, we can expand $\mc{L} v$ as follows:
\begin{align}
\label{conjest_01} \mc{L} v &= e^{ \lambda f } \bar{D}^\alpha D_\alpha ( e^{ - \lambda f } v ) \\
\notag &= e^{ \lambda f } \bar{D}^\alpha ( e^{ - \lambda f } D_\alpha v ) - \lambda e^{ \lambda f } \bar{D}^\alpha ( e^{ - \lambda f } \nabla_\alpha f \cdot v ) \\
\notag &= \Box_y v - \lambda \nabla^\alpha f ( D_\alpha v + \bar{D}_\alpha v ) - \lambda \Box f \cdot v + \lambda^2 \nabla^\alpha f \nabla_\alpha f \cdot v \\
\notag &= \Box_y v - 2 \lambda S_{ f, z } v + \mc{A}_0 v \text{,}
\end{align}
where $\mc{A}_0$ is given by
\begin{equation}
\label{conjest_A0} \mc{A}_0 := \lambda^2 \nabla^\alpha f \nabla_\alpha f + 2 \lambda z = \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c \lambda \text{.}
\end{equation}
Multiplying \eqref{conjest_01} by $S_{ f, z } v$ yields
\begin{equation}
\label{conjest_10} - \mc{L} v S_{ f, z } v = - \Box_y v S_{ f, z } v + 2 \lambda ( S_{ f, z } v )^2 - \mc{A}_0 \cdot v S_{ f, z } v \text{.}
\end{equation}
For the last term, we apply \eqref{twisted} and the product rule:
\begin{align}
\label{conjest_11} - \mc{A}_0 \cdot v S_{ f, z } v &= - \mc{A}_0 \cdot v ( \nabla^\alpha f D_\alpha v + w_{ f, z } v ) \\
\notag &= - \mc{A}_0 \cdot \left[ \frac{1}{2} \nabla^\alpha f \nabla_\alpha ( v^2 ) - \frac{ \kappa }{y} \nabla^\alpha f \nabla_\alpha y \cdot v^2 + w_{ f, z } v^2 \right] \\
\notag &= - \nabla^\alpha \left( \frac{1}{2} \mc{A}_0 \nabla_\alpha f \cdot v^2 \right) + \frac{1}{2} \nabla^\alpha f \nabla_\alpha \mc{A}_0 \cdot v^2 - z \mc{A}_0 \cdot v^2 \text{.}
\end{align}
Moreover, recalling \eqref{f,z} and \eqref{conjest_A0} yields
\begin{align}
\label{conjest_12} - z \mc{A}_0 &= 4 c \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 32\lambda c^2 \text{,} \\
\notag \frac{1}{2} \nabla^\alpha f \nabla_\alpha \mc{A}_0 &= \lambda^2 ( - 2 \kappa y^{ 6 \kappa - 1 } - 8 c^3 t^2 ) \text{.}
\end{align}
Combining \eqref{conjest_10}--\eqref{conjest_12} results in the identity
\begin{equation}
\label{conjest_20} - \mc{L} v S_{ f, z } v = - \Box_y v S_{ f, z } v + 2 \lambda ( S_{ f, z } v )^2 + \mc{A}_\lambda \cdot v^2 - \nabla^\alpha \left( \frac{1}{2} \mc{A}_0 \nabla_\alpha f \cdot v^2 \right) \text{,}
\end{equation}
where the coefficient $\mc{A}_\lambda$ (which is distinct from $\mc{A}_{ f, z }$ in \eqref{gral_wAS}) is given by
\begin{align}
\label{conjest_A} \mc{A}_\lambda &:= \frac{1}{2} \nabla^\alpha f \nabla_\alpha \mc{A}_0 - z \mc{A}_0 \\
\notag &= \lambda^2 ( - 2 \kappa y^{ 6 \kappa - 1 } + 4 c y^{ 4 \kappa } - 24 c^3 t^2 ) - 32 \lambda c^2 \text{.}
\end{align}
Integrating \eqref{conjest_20} over $\mc{C}_\varepsilon$ and recalling \eqref{conjest_A} then yields
\begin{align}
\label{conjest_21} - \int_{ \mc{C}_\varepsilon } \mc{L} v S_{ f, z } v &= - \int_{ \mc{C}_\varepsilon } \Box_y v S_{ f, z } v + 2 \lambda \int_{ \mc{C}_\varepsilon } ( S_{ f, z } v )^2 \\
\notag &\qquad + \int_{ \mc{C}_\varepsilon } [ \lambda^2 ( - 2 \kappa y^{ 6 \kappa - 1 } + 4 c y^{ 4 \kappa } - 24 c^3 t^2 ) - 32 \lambda c^2 ] \cdot v^2 \\
\notag &\qquad - \frac{1}{2} \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c \lambda ] \nabla_\nu f \cdot v^2 \text{.}
\end{align}
Notice that the bound \eqref{eqc} for $c$ implies (for all values of $n$)
\begin{equation}
\label{conjest_T1} 48 c^2 t^2 \leq 48 c^2 T^2 \leq 1 \leq y^{ 4 \kappa } \text{.}
\end{equation}
Then, with large enough $\lambda_0$ (depending on $n$ and $\kappa$), we obtain
\begin{align}
\label{conjest_22} \lambda^2 ( - 2 \kappa y^{ 6 \kappa - 1 } + 4 c y^{ 4 \kappa } - 24 c^3 t^2 ) - 32\lambda c^2 &\geq - 2 \kappa \lambda^2 \cdot y^{ 6 \kappa - 1 } - 32\lambda c^2 \\
\notag &\geq - \kappa \lambda^2 \cdot y^{ 6 \kappa - 1 } \text{.}
\end{align}
Noting in addition that
\[
| \mc{L} v S_{ f, z } v | \leq \frac{1}{ 4 \lambda } ( \mc{L} v )^2 + \lambda ( S_{ f, z } v )^2 \text{,}
\]
then~\eqref{conjest_21} and~\eqref{conjest_22} together imply
\begin{align}
\label{conjest_30} \frac{1}{ 4 \lambda } \int_{ \mc{C}_\varepsilon } ( \mc{L} v )^2 &\geq - \int_{ \mc{C}_\varepsilon } \Box_y v S_{ f, z } v + \lambda \int_{ \mc{C}_\varepsilon } ( S_{ f, z } v )^2 - \kappa \lambda^2 \int_{ \mc{C}_\varepsilon } y^{ 6 \kappa - 1 } \cdot v^2 \\
\notag &\qquad - \frac{1}{2} \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \text{.}
\end{align}
At this point, the proof splits into different cases, depending on $n$.
\vspace{0.6pc}
\noindent
\emph{Case 1: $n \geq 4$.}
First, note that for large $\lambda_0$, we have
\begin{align}
\label{conjest_32} \frac{1}{9} \lambda ( S_{ f, z } v )^2 &\geq c y^{ - 4 \kappa } ( S_{ f, z } v )^2 \\
\notag &\geq c ( D_r v )^2 + c ( 2 c t y^{ - 2 \kappa } \cdot \partial_t v + y^{ - 2 \kappa } w_{ f, z } \cdot v )^2 \\
\notag &\qquad + 2 c ( D_r v ) ( 2 c t y^{ - 2 \kappa } \cdot \partial_t v + y^{ - 2 \kappa } w_{ f, z } \cdot v ) \\
\notag &\geq \frac{1}{2} c ( D_r v )^2 - c ( 2 c t y^{ - 2 \kappa } \cdot \partial_t v+ y^{ - 2 \kappa } w_{ f, z } \cdot v )^2 \\
\notag &\geq \frac{1}{2} c ( D_r v )^2 - 8 c^3 t^2 y^{ - 4 \kappa } \cdot ( \partial_t v)^2 - 2 c y^{ - 4 \kappa } w_{ f, z }^2 \cdot v^2 \\
\notag &\geq \frac{1}{2} c ( D_r v )^2 - \frac{1}{6} c \cdot ( \partial_t v )^2 - 2 c y^{ - 4 \kappa } w_{ f, z }^2 \cdot v^2 \text{,}
\end{align}
where we also recalled \eqref{conjest_T1} and the definitions \eqref{f,z} and \eqref{gral_wAS} of $f$, $z$, and $S_{ f, z }$.
Moreover, recalling the formula \eqref{wAS} for $w_{ f, z }$, we obtain that
\begin{equation}
\label{conjest_33} - 18 c y^{ - 4 \kappa } w_{ f, z }^2 \cdot v^2 \geq -C( y^{-2} +r^{-2} ) \cdot v^2 \text{,}
\end{equation}
for some constant $C > 0$, depending on $n$ and $\kappa$.
Thus, for sufficiently large $\lambda_0$, it follows from \eqref{conjest_32} and \eqref{conjest_33} that
\begin{equation}
\label{conjest_34} \lambda ( S_{ f, z } v )^2 \geq \frac{9}{2} c ( D_r v )^2 - \frac{3}{2} c \cdot ( \partial_t v )^2 - C( y^{-2} + r^{-2} ) \cdot v^2 \text{.}
\end{equation}
Combining \eqref{conjest_30} with \eqref{conjest_34}, we obtain
\begin{align}
\label{conjest_35} \frac{1}{ 4 \lambda } \int_{ \mc{C}_\varepsilon } ( \mc{L} v )^2 &\geq - \int_{ \mc{C}_\varepsilon } \Box_y v S_{ f, z } v + \frac{9}{2} c \int_{ \mc{C}_\varepsilon } ( D_r v )^2 - \frac{3}{2} c \int_{ \mc{C}_\varepsilon } ( \partial_t v )^2 \\
\notag &\qquad - \kappa \lambda^2 \int_{ \mc{C}_\varepsilon } y^{ 6 \kappa - 1 } \cdot v^2 - C \int_{ \mc{C}_\varepsilon } ( y^{-2} + r^{-2} ) \cdot v^2 \\
\notag &\qquad - \frac{1}{2} \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \text{.}
\end{align}
Applying the multiplier inequality \eqref{multineq} to \eqref{conjest_35} then results in the bound
\begin{align}
\label{conjest_40} \frac{1}{ 4 \lambda } \int_{ \mc{C}_\varepsilon } ( \mc{L} v )^2 &\geq \int_{ \mc{C}_\varepsilon } \left[ ( 1 - 4 c ) \cdot | \slashed{\nabla} v |^2 + \frac{1}{2} c \cdot ( \partial_tv )^2 + \frac{1}{2} c \cdot ( D_r v )^2 \right] \\
\notag &\qquad - \kappa \lambda^2 \int_{ \mc{C}_\varepsilon } y^{ 6 \kappa - 1 } \cdot v^2 - \frac{1}{2} ( n - 1 ) \kappa \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 2 } r^{-1} \cdot v^2 \\
\notag &\qquad + \frac{1}{4} ( n - 1 ) ( n - 3 ) \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa } r^{-3} \cdot v^2 \\
\notag &\qquad - C \int_{ \mc{C}_\varepsilon } ( y^{-2} + y^{ 2 \kappa - 1 } r^{-2} ) \cdot v^2 - \int_{ \Gamma_\varepsilon } S_{ f, z } v \cdot D_\nu v \\
\notag &\qquad + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta v D^\beta v + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot v^2 \\
\notag &\qquad - \frac{1}{2} \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + 2 \kappa ( 2 \kappa - 1 ) \int_{ \Gamma_\varepsilon } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \text{.}
\end{align}
(Here, $C$ may differ from previous lines, but still depends only on $n$ and $\kappa$.)
Let $d > 0$, and now define the (positive) quantities
\begin{align}
\label{conjest_J} J := d y^{ 2 \kappa - 2 } r^{-3} + C ( y^{-2} + y^{ 2 \kappa - 1 } r^{-2} ) \text{,} &\qquad J_0 := - \kappa \lambda^2 y^{ 6 \kappa - 1 } \text{,} \\
\notag J_1 := - \frac{1}{2} ( n - 1 ) \kappa y^{ 2 \kappa - 2 } r^{-1} \text{,} &\qquad J_2 := \frac{1}{4} ( n - 1 ) ( n - 3 ) y^{ 2 \kappa } r^{-3} \text{.}
\end{align}
Observe that for sufficiently small $d$ (depending on $n$ and $\kappa$), there is some $0 < \delta_0 \ll 1$ (also depending on $n$ and $\kappa$) such that:
\begin{enumerate}[i)]
\item $J \leq J_2$ whenever $0 < r < \delta_0$.
\item $J \leq J_1$ whenever $1 - \delta_0 < r < 1$.
\item For sufficiently large $\lambda_0$, we have that $J \leq \frac{1}{2} J_0$ whenever $\delta_0 \leq r \leq 1 - \delta_0$.
\end{enumerate}
\end{enumerate}
Combining the above with \eqref{conjest_40} yields the desired bound \eqref{conjest}, in the case $n \geq 4$.
\vspace{0.6pc}
\noindent
\emph{Case 2: $n \leq 3$.}
For the cases $n = 1$ and $n = 3$, we first note that \eqref{eqc} implies
\begin{equation}
\label{conjest_T2} 240 c^2 t^2 \leq 240 c^2 T^2 \leq 1 \leq y^{ 4 \kappa } \text{.}
\end{equation}
In this setting, we must deal with $( S_{ f, z } v )^2$ a bit differently.
To this end, we use \eqref{gral_wAS}, the fact that $\lambda_0$ is sufficiently large, and the elementary inequality
\[
( A + B )^2 \geq ( 1 - 2 \theta ) A^2 - \frac{1}{ 2 \theta } ( 1 - 2 \theta ) B^2 \text{,} \qquad 0 < \theta < \frac{1}{2}
\]
(which follows from $2 A B \geq - 2 \theta A^2 - \frac{1}{ 2 \theta } B^2$), with the values $\theta := \frac{1}{3}$, $A := y^{2 \kappa} D_r v$, and $B := 2 c t (\partial_t v) + w_{ f, z} v$, in order to obtain
\begin{equation}
\label{conjest_41} \lambda ( S_{ f, z } v )^2 \geq 60c \left[ \frac{1}{3} y^{ 4 \kappa } ( D_r v )^2 - 4 c^2 t^2 ( \partial_t v )^2 - w_{ f, z }^2 v^2 \right] \text{.}
\end{equation}
Moreover, expanding $w_{ f, z }^2$ using \eqref{wAS} and excluding terms with favorable sign yields
\begin{align}
\label{conjest_42} \lambda ( S_{ f, z } v )^2 &\geq 20 c y^{4\kappa} ( D_r v )^2 - 240 c^3 t^2 ( \partial_t v )^2 - 540 c^3 v^2 \\
\notag &\qquad - 60 c \left[ 4 \kappa^2 y^{ 4 \kappa - 2 } + \frac{ (n-1)^2 }{ 4 r^2 } y^{4\kappa} - \frac{ 2 \kappa (n-1) }{r} y^{4 \kappa - 1 } \right] v^2 \text{.}
\end{align}
The pointwise Hardy inequality \eqref{hardy}, with $q := 4 \kappa + 1$, yields
\begin{align*}
y^{ 4 \kappa } ( D_r v )^2 &\geq \frac{1}{4} ( 1 - 6 \kappa )^2 y^{ 4 \kappa - 2 } \cdot v^2 + \frac{ (1 - 6 \kappa) ( n - 1 ) }{ 2 r } y^{ 4 \kappa - 1 } \cdot v^2 \\
&\qquad + \nabla^\beta \left[ \frac{ ( 1 - 6 \kappa ) }{2} y^{ 4 \kappa - 1 } \nabla_\beta y \cdot v^2 \right] \text{.}
\end{align*}
Combining the above with \eqref{conjest_T2} and \eqref{conjest_42}, and noting that
\[
\frac{15}{4} ( 1 - 6 \kappa )^2 > 240 \kappa^2 \text{,}
\]
we then obtain the bound
\begin{align}
\label{conjest_43} \lambda ( S_{ f, z } v )^2 &\geq 5c ( D_r v )^2 - c ( \partial_t v )^2 - 540 c^3 v^2 - 15 c (n-1)^2 y^{4\kappa} r^{-2} v^2 \\
\notag &\qquad - C (n-1) y^{4\kappa-1} r^{-1} v^2 + \nabla^\beta \left[ \frac{ 15 c ( 1 - 6 \kappa ) }{2} y^{ 4\kappa - 1 } \nabla_\beta y \cdot v^2 \right] \text{,}
\end{align}
where $C > 0$ depends on $n$ and $\kappa$.
Now, applying the multiplier inequality \eqref{multineq} and \eqref{conjest_43} to \eqref{conjest_30}, we see that
\begin{align}
\label{conjest_50} \frac{1}{ 4 \lambda } \int_{ \mc{C}_\varepsilon } ( \mc{L} v )^2 &\geq \int_{ \mc{C}_\varepsilon } \left[ ( 1 - 4 c ) | \slashed{\nabla} v |^2 + c ( \partial_tv )^2 + c ( D_r v )^2 \right] - 540 c^3 \int_{ \mc{C}_\varepsilon } v^2 \\
\notag &\qquad - \kappa \lambda^2 \int_{ \mc{C}_\varepsilon } y^{ 6 \kappa - 1 } \cdot v^2 - \frac{1}{2} ( n - 1 ) \kappa \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 2 } r^{-1} \cdot v^2 \\
\notag &\qquad + \frac{1}{2} ( n - 1 ) ( n - 4 ) \kappa \int_{ \mc{C}_\varepsilon } y^{ 2 \kappa - 1 } r^{-2} \cdot v^2 \\
\notag &\qquad - 15 c (n-1)^2 \int_{ \mc{C}_\varepsilon } y^{ 4 \kappa } r^{-2} \cdot v^2 \\
\notag &\qquad - C ( n - 1 ) \int_{ \mc{C}_\varepsilon } y^{ 4 \kappa - 1 } r^{-1} \cdot v^2 \\
\notag &\qquad - \int_{ \Gamma_\varepsilon } S_{ f, z } v \cdot D_\nu v + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta v D^\beta v \\
\notag &\qquad + \frac{1}{2} \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot v^2 + 2 \kappa ( 2 \kappa - 1 ) \int_{ \Gamma_\varepsilon } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \\
\notag &\qquad - \frac{1}{2} \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c \lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + c_2 \int_{ \Gamma_\varepsilon } y^{ 4\kappa - 1 } \nabla_\nu y \cdot v^2 \text{.}
\end{align}
For $n = 1$, since $540 c^3 \leq - \frac{1}{2} \kappa \lambda^2 y^{ 6 \kappa - 1 }$ once $\lambda_0$ is sufficiently large (recall that $y^{ 6 \kappa - 1 } \geq 1$), the bound \eqref{conjest_50} implies \eqref{conjest}.
For the remaining case $n = 3$, we also note from \eqref{eqc} that
\begin{equation}
\label{conjest_51} \frac{1}{2} ( n - 1 ) ( n - 4 ) \kappa y^{ 2 \kappa - 1 } r^{-2} - 15 c ( n - 1 )^2 y^{ 4 \kappa } r^{-2} \geq - \frac{1}{2} \kappa y^{ 2 \kappa - 1 } r^{-2} \text{.}
\end{equation}
To control the remaining bulk terms $- C ( n - 1 ) y^{ 4 \kappa - 1 } r^{-1} \cdot v^2$ and $- 540 c^3 v^2$, we define
\begin{align}
\label{conjest_K} K := d y^{ 2 \kappa - 2 } r^{-2} + C ( n - 1 ) y^{ 4 \kappa - 1 } r^{-1} + 540 c^3 \text{,} &\qquad K_0 := - \kappa \lambda^2 y^{ 6 \kappa - 1 } \text{,} \\
\notag K_1 := - \frac{1}{2} ( n - 1 ) \kappa y^{ 2 \kappa - 2 } r^{-1} \text{,} &\qquad K_2 := - \frac{1}{2} \kappa y^{ 2 \kappa - 1 } r^{-2} \text{.}
\end{align}
As in the case $n \geq 4$, as long as $d$ is sufficiently small (depending on $n$ and $\kappa$), there exists $0 < \delta_0 \ll 1$ (depending on $n$ and $\kappa$) such that:
\begin{enumerate}[i)]
\item $K \leq K_2$ whenever $0 < r < \delta_0$.
\item $K \leq K_1$ whenever $1 - \delta_0 < r < 1$.
\item For large enough $\lambda_0$, we have that $K \leq \frac{1}{2} K_0$ whenever $\delta_0 \leq r \leq 1 - \delta_0$.
\end{enumerate}
Combining the above with \eqref{conjest_50} and \eqref{conjest_51} yields \eqref{conjest} for $n = 3$.
\end{proof}
\subsection{Boundary Limits}
In this subsection, we derive and control the limits of the boundary terms in \eqref{conjest} as $\varepsilon \searrow 0$.
More specifically, we show the following:
\begin{lemma} \label{L.bdrylim}
Let $\Gamma_\varepsilon^\pm$ be as in \eqref{Gamma_eps}.
Then, for $\lambda \geq \lambda_0$,
\begin{align}
\label{bdrylim_outer} - c_3 \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 &\leq \liminf_{ \varepsilon \searrow 0 } \left[ \int_{ \Gamma_\varepsilon^+ } \nabla_\nu f \cdot D_\beta v D^\beta v - 2 \int_{ \Gamma_\varepsilon^+ } S_{ f, z } v D_\nu v \right] \\
\notag &\qquad - \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } \nabla_\nu w_{ f, z } \cdot v^2 \\
\notag &\qquad + 4 \kappa ( 2 \kappa - 1 ) \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \text{,} \\
\notag 0 &= \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 \text{,}
\end{align}
where the constant $c_3 > 0$ depends on $\kappa$.
In addition, for $\lambda \geq \lambda_0$,
\begin{align}
\label{bdrylim_inner} 0 &\leq \lim_{ \varepsilon \searrow 0 } \left[ \int_{ \Gamma_\varepsilon^- } \nabla_\nu f \cdot D_\beta v D^\beta v - 2 \int_{ \Gamma_\varepsilon^- } S_{ f, z } v D_\nu v \right] \\
\notag &\qquad - \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } \nabla_\nu w_{ f, z } \cdot v^2 + 4 \kappa ( 2 \kappa - 1 ) \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \text{,} \\
\notag 0 &\leq \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 \text{.}
\end{align}
\end{lemma}
\begin{proof}
First, note that on $\Gamma_\varepsilon^\pm$, we have
\begin{equation}
\label{bdrylim_0} \nu |_{ \Gamma_\varepsilon^\pm } = \pm \partial_r \text{,} \qquad \nabla_\nu y |_{ \Gamma_\varepsilon^\pm } = \mp 1 \text{,} \qquad \nabla_\nu f |_{ \Gamma_\varepsilon^\pm } = \pm y^{ 2 \kappa } |_{ \Gamma_\varepsilon^\pm } \text{.}
\end{equation}
Moreover, note that \eqref{f,z} and \eqref{gral_wAS} imply
\begin{equation}
\label{bdrylim_1} S_{ f, z } v = y^{ 2 \kappa } D_r v + 2 c t \cdot \partial_t v + w_{ f, z } \cdot v \text{.}
\end{equation}
We begin with the outer limits \eqref{bdrylim_outer}.
The main observation is that by \eqref{f,z} and by the assumption that $u$ is boundary admissible (see Definition \ref{admissible}), we have
\begin{align}
\label{bdrylim_oa} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa } ( \partial_t v )^2 &= 0 \text{,} \\
\notag \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa } ( D_r v )^2 &= \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 \text{,} \\
\notag \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ - 2 + 2 \kappa } v^2 &= ( 1 - 2 \kappa )^{-2} \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 \text{.}
\end{align}
We also recall that we have assumed $- \frac{1}{2} < \kappa < 0$.
For the first boundary term, we apply \eqref{bdrylim_0} and \eqref{bdrylim_oa} to obtain
\begin{align}
\label{bdrylim_o1} \liminf_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } \nabla_\nu f \cdot D_\beta v D^\beta v &\geq \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa } [ - ( \partial_t v )^2 + ( D_r v )^2 ] \\
\notag &= \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 \text{.}
\end{align}
Next, expanding $S_{ f, z } v$ using \eqref{bdrylim_1}, noting from \eqref{wAS} that the leading-order behavior of $w_{ f, z }$ near $\Gamma$ is $- 2 \kappa \cdot y^{ 2 \kappa - 1 }$, and applying \eqref{bdrylim_oa}, we obtain that
\begin{align}
\label{bdrylim_o2} - 2 \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } S_{ f, z } v D_\nu v &= - 2 \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } [ y^{ 2 \kappa } ( D_r v )^2 + 2 c t \partial_t v D_r v + w_{ f, z } v D_r v ] \\
\notag &= - 2 \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 + 4 \kappa \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa - 1 } v D_r v \\
\notag &= \left( - 2 - \frac{ 4 \kappa }{ 1 - 2 \kappa } \right) \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 \text{.}
\end{align}
(In the last step, we used that $y^{ \kappa - 1 } v \rightarrow - ( 1 - 2 \kappa )^{-1} e^{ \lambda f } \mc{N}_\kappa u$ in the $L^2$-sense, by the first part of \eqref{super_dirichlet}.)
The remaining outer boundary terms are treated similarly.
By \eqref{bdrylim_0} and \eqref{bdrylim_oa},
\begin{align}
\label{bdrylim_o3} - \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 &= - \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 6 \kappa } v^2 = 0 \text{,} \\
\notag 4 \kappa ( 2 \kappa - 1 ) \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 &= \frac{ 4 \kappa }{ 1 - 2 \kappa } \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 \text{.}
\end{align}
Moreover, by \eqref{wAS} and \eqref{bdrylim_0}, we see that the leading-order behavior of $\partial_r w_{ f, z }$ is given by $- 2 \kappa ( 1 - 2 \kappa ) y^{ 2 \kappa - 2 }$.
Combining this with \eqref{bdrylim_0} and \eqref{bdrylim_oa} yields
\begin{align}
\label{bdrylim_o4} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } \nabla_\nu w_{ f, z } \cdot v^2 &= - 2 \kappa ( 1 - 2 \kappa ) \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ } y^{ 2 \kappa - 2 } v^2 \\
\notag &= - \frac{ 2 \kappa }{ 1 - 2 \kappa } \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 \text{.}
\end{align}
Summing \eqref{bdrylim_o1}--\eqref{bdrylim_o4} yields the first part of \eqref{bdrylim_outer}.
The second part of \eqref{bdrylim_outer} similarly follows by applying \eqref{bdrylim_0} and \eqref{bdrylim_oa}.
Next, for the interior limits \eqref{bdrylim_inner}, we split into two cases:
\vspace{0.6pc}
\noindent
\emph{Case 1: $n \geq 3$.}
In this case, we begin by noting that the volume of $\Gamma_\varepsilon^-$ satisfies
\begin{equation}
\label{bdrylim_ia} | \Gamma_\varepsilon^- | \lesssim_{ T, n } \varepsilon^{ n - 1 } \text{.}
\end{equation}
Furthermore, since $u$ is smooth on $\mc{C}$, then \eqref{f,z} and \eqref{eqv} imply that $\partial_t v$, $\slashed{\nabla} v$, $D_r v$, and $v$ are all uniformly bounded whenever $r$ is sufficiently small.
Combining the above with \eqref{wAS}, \eqref{bdrylim_0}, \eqref{bdrylim_1}, we obtain that the following limits vanish:
\begin{align}
\label{bdrylim_i1} 0 &= \lim_{ \varepsilon \searrow 0 } \left[ \int_{ \Gamma_\varepsilon^- } \nabla_\nu f \cdot D_\beta v D^\beta v - 2 \int_{ \Gamma_\varepsilon^- } S_{ f, z } v D_\nu v \right] \\
\notag &\qquad - \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + 4 \kappa ( 2 \kappa - 1 ) \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \text{,} \\
\notag 0 &= \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 \text{.}
\end{align}
This leaves only one remaining limit in \eqref{bdrylim_inner}; for this, we note, from \eqref{wAS}, that the leading-order behavior of $- \partial_r w_{ f, z }$ near $r = 0$ is $\frac{1}{2} ( n - 1 ) r^{-2} y^{ 2 \kappa }$.
As a result,
\begin{align}
\label{bdrylim_i2} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } \nabla_\nu w_{ f, z } \cdot v^2 &= \frac{ n - 1 }{2} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } r^{-2} y^{ 2 \kappa } v^2 \\
\notag &= \begin{cases} 0 & \quad n > 3 \\ C \int_{ -T }^T | v ( t, 0 ) |^2 dt & \quad n = 3 \end{cases} \text{,}
\end{align}
where the last integral is over the line $r = 0$, and where the constant $C$ depends only on $n$.
Combining \eqref{bdrylim_i1} and \eqref{bdrylim_i2} yields \eqref{bdrylim_inner} in this case.
\vspace{0.6pc}
\noindent
\emph{Case 2: $n = 1$.}
Here, we can no longer rely on \eqref{bdrylim_ia} to force most limits to vanish, so we must examine all the terms more carefully.
First, from \eqref{wAS}, \eqref{bdrylim_0}, \eqref{bdrylim_1}, we have that
\begin{align*}
&\int_{ \Gamma_\varepsilon^- } \nabla_\nu f \cdot D_\beta v D^\beta v - 2 \int_{ \Gamma_\varepsilon^- } S_{ f, z } v D_\nu v \\
\notag &\quad = \int_{ \Gamma_\varepsilon^- } y^{ 2 \kappa } [ ( \partial_t v )^2 + ( D_r v )^2 ] + \int_{ \Gamma_\varepsilon^- } [ 4 c t \cdot \partial_t v D_r v - 4 \kappa y^{ 2 \kappa - 1 } v D_r v - 6 c \cdot v D_r v ] \text{.}
\end{align*}
Recalling also our assumption \eqref{eqc} for $c$, we conclude from the above that
\begin{align}
\label{bdrylim_a10} \lim_{ \varepsilon \searrow 0 } \left[ \int_{ \Gamma_\varepsilon^- } \nabla_\nu f \cdot D_\beta v D^\beta v - 2 \int_{ \Gamma_\varepsilon^- } S_{ f, z } v D_\nu v \right] &\geq - C \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 2 \kappa - 2 } v^2 \\
\notag &= - C \int_{-T}^T | v ( t, 0 ) |^2 dt \text{,}
\end{align}
where the last integral is over the line $r = 0$, and where $C$ depends only on $\kappa$.
Moreover, letting $\lambda_0$ be sufficiently large and recalling \eqref{eqc} and \eqref{bdrylim_0}, we obtain
\begin{equation}
\label{bdrylim_a11} - \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \geq \tilde{C} \lambda^2 \int_{-T}^T | v ( t, 0 ) |^2 dt \text{,}
\end{equation}
for some constant $\tilde{C} > 0$.
Next, applying \eqref{wAS} and \eqref{bdrylim_0} in a similar manner as before, we obtain inequalities for the remaining limits in the right-hand side of \eqref{bdrylim_inner}:
\begin{align}
\label{bdrylim_a12} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } \nabla_\nu w_{ f, z } \cdot v^2 &\geq - C \int_{-T}^T | v ( t, 0 ) |^2 dt \text{,} \\
\notag 4 \kappa ( 2 \kappa - 1 ) \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 &\geq - C \int_{-T}^T | v ( t, 0 ) |^2 dt \text{,} \\
\notag \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 &= 2 \int_{-T}^T | v ( t, 0 ) |^2 dt \text{.}
\end{align}
Here, $C$ denotes various positive constants that depend on $\kappa$.
Finally, combining \eqref{bdrylim_a10}--\eqref{bdrylim_a12} and taking $\lambda_0$ to be sufficiently large results in \eqref{bdrylim_inner}.
\end{proof}
\subsection{Completion of the Proof}
We are now in position to complete the proof of Theorem \ref{T.Carleman}.
First, recalling the definitions \eqref{f,z} and \eqref{eqv} of $f$ and $v$ and the fact that $c^2 t^2 \lesssim 1$ by our assumption \eqref{eqc}, we have that
\begin{align}
\label{conjest_58} e^{ 2 \lambda f } ( \partial_t u )^2 &\lesssim ( \partial_t v )^2 + \lambda^2 c^2 t^2 v^2 \lesssim ( \partial_t v )^2 + \lambda^2 y^{ 6 \kappa - 1 } v^2 \text{,} \\
\notag e^{ 2 \lambda f } ( D_r u )^2 &\lesssim ( D_r v )^2 + \lambda^2 y^{ 4 \kappa } v^2 \lesssim ( D_r v )^2 + \lambda^2 y^{ 6 \kappa - 1 } v^2 \text{,} \\
\notag e^{ 2 \lambda f } | \slashed{\nabla} u |^2 &= |\slashed{\nabla} v|^2 \text{.}
\end{align}
Furthermore, by \eqref{Box_y_kappa} and \eqref{eqv}, we observe that
\begin{equation}
\label{conjest_59} ( \mc{L} v )^2 \leq 2 e^{ 2 \lambda f } [ (\Box_\kappa u )^2 + \kappa ( n - 1 ) y^{-2} r^{-2} \cdot u^2 ] \text{.}
\end{equation}
Therefore, using these bounds in Lemma~\ref{L.conjest}, it follows that
\begin{align}
\label{conjest_60} &2 \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } ( \Box_\kappa u )^2 + 2 \kappa ( n - 1 ) \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } y^{-1} r^{-1} \cdot u^2 \\
\notag &\quad \geq C \lambda \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } [ ( \partial_t u )^2 + | \slashed{\nabla} u |^2 + ( D_r u )^2 ] + C \lambda^3 \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } y^{ 6 \kappa - 1 } u^2 \\
\notag &\quad\qquad + 2 \lambda \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta v D^\beta v - 4 \lambda \int_{ \Gamma_\varepsilon } S_{ f, z } v \cdot D_\nu v \\
\notag &\quad\qquad - 2 \lambda \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\quad\qquad + 2 \lambda \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot v^2 + 8\lambda \kappa ( 2 \kappa - 1 ) \int_{ \Gamma_\varepsilon } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \\
\notag &\quad\qquad + \begin{cases}
C \lambda \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } y^{ 2 \kappa - 2 } r^{-3} \cdot u^2 & \quad n \geq 4 \\
C \lambda \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } y^{ 2 \kappa - 2 } r^{-2} \cdot u^2 + 4 c_2 \lambda \int_{ \Gamma_\varepsilon } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 & \quad n = 3 \\
4 c_2 \lambda \int_{ \Gamma_\varepsilon } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 & \quad n = 1
\end{cases} \,,
\end{align}
for some constant $C > 0$ depending on $n$ and $\kappa$.
Note that if $\lambda_0$ is sufficiently large, then the last term on the left-hand side of \eqref{conjest_60} can be absorbed into the last term on the right-hand side of \eqref{conjest_60} (for all values of $n$).
From this, we obtain
\begin{align}
\label{conjest_70} \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } ( \Box_\kappa u )^2 &\geq C \lambda \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } [ ( \partial_t u )^2 + | \slashed{\nabla} u |^2 + ( D_r u )^2 + \lambda^2 y^{ 6 \kappa - 1 } u^2 ] \\
\notag &\qquad + \begin{cases}
C \lambda \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } y^{ 2 \kappa - 2 } r^{-3} \cdot u^2 & \quad n \geq 4 \\
C \lambda \int_{ \mc{C}_\varepsilon } e^{ 2 \lambda f } y^{ 2 \kappa - 2 } r^{-2} \cdot u^2 & \quad n = 3 \\
0 & \quad n = 1
\end{cases} \\
\notag &\qquad + \lambda \int_{ \Gamma_\varepsilon } \nabla_\nu f \cdot D_\beta v D^\beta v - 2 \lambda \int_{ \Gamma_\varepsilon } S_{ f, z } v \cdot D_\nu v \\
\notag &\qquad - \lambda \int_{ \Gamma_\varepsilon } [ \lambda^2 ( y^{ 4 \kappa } - 4 c^2 t^2 ) - 8 c\lambda ] \nabla_\nu f \cdot v^2 \\
\notag &\qquad + \lambda \int_{ \Gamma_\varepsilon } \nabla_\nu w_{ f, z } \cdot v^2 + 4 \lambda \kappa ( 2 \kappa - 1 ) \int_{ \Gamma_\varepsilon } y^{ 2 \kappa - 2 } \nabla_\nu y \cdot v^2 \\
\notag &\qquad + \begin{cases}
0 & \quad n \geq 4 \\
2 c_2 \lambda \int_{ \Gamma_\varepsilon } y^{ 4 \kappa - 1 } \nabla_\nu y \cdot v^2 & \quad n \leq 3
\end{cases} \text{.}
\end{align}
Finally, the desired inequality \eqref{Carleman} follows by taking the limit $\varepsilon \searrow 0$ in \eqref{conjest_70} and applying all the inequalities from Lemma \ref{L.bdrylim}.
\section{Observability} \label{S.Observability}
Our aim in this section is to show that the Carleman estimates of Theorem \ref{T.Carleman} imply a boundary observability property for solutions to wave equations on the cylindrical spacetime $\mc{C}$ containing potentials that are critically singular at the boundary $\Gamma$.
More specifically, we establish the following result, which is a precise and slightly stronger version of the result stated in Theorem \ref{T.Observability0}.
\begin{theorem}\label{T.Observability}
Assume $n \neq 2$, and fix $-\frac{1}{2} < \kappa < 0$.
Let $u$ be a solution to
\begin{equation}
\label{weqn} \Box_\kappa u = D_X u + V u \text{,}
\end{equation}
on $\bar{\mc{C}}$, where the vector field $X: \mc{C} \rightarrow \R^{1+n}$ and the potential $V: \mc{C} \rightarrow \R$ satisfy
\begin{equation}
\label{hypXV} |X| \lesssim 1\text{,} \qquad | V | \lesssim \frac{1}{y} + \frac{ n - 1 }{r} \text{.}
\end{equation}
In addition, assume that:
\begin{enumerate}[i)]
\item $u$ is boundary admissible (in the sense of Definition \ref{admissible}).
\item $u$ has finite twisted $H^1$-energy for any $\tau \in ( -T, T )$:
\begin{equation}
\label{H1} E_1 [u] ( \tau ) = \int_{ \mc{C} \cap \{ t = \tau \} } ( ( \partial_t u )^2 + ( D_r u )^2 + | \slashed{\nabla} u |^2 + u^2 ) < \infty \text{.}
\end{equation}
\end{enumerate}
Then, for sufficiently large observation time $T$ satisfying
\begin{equation} \label{obstime}
\begin{cases}
T > \frac{ 4 \sqrt{3} }{ 1 + 2 \kappa } & \quad n \geq 4 \\
T > \max \left\{ \frac{ 4 \sqrt{15} }{ 1 + 2 \kappa }, \frac{ 2 \sqrt{30} }{ \sqrt{ | \kappa | ( 1 + 2 \kappa ) } } \right\} & \quad n = 3 \\
T > \frac{ 4 \sqrt{15} }{ 1 + 2 \kappa } & \quad n = 1
\end{cases} \text{,}
\end{equation}
we have the boundary observability inequality
\begin{equation}
\label{Observability} \int_\Gamma (\mc{N}_\kappa u)^2 \gtrsim E_1 [u] (0) \text{,}
\end{equation}
where the constant of the inequality depends on $n$, $\kappa$, $T$, $X$, and $V$.
\end{theorem}
\subsection{Preliminary Estimates}
In order to prove Theorem \ref{T.Observability}, we require preliminary estimates.
The first is a Hardy estimate to control singular integrands:
\begin{lemma} \label{L.hardy}
Assume the hypotheses of Theorem \ref{T.Observability}.
Then,
\begin{equation}
\label{hardy_int} \int_{ \mc{C} \cap \{ t_0 < t < t_1 \} } \left( \frac{1}{ y^2 } + \frac{ n - 1 }{ r^2 } \right) u^2 \lesssim \int_{ \mc{C} \cap \{ t_0 < t < t_1 \} } (D_r u )^2 \text{,}
\end{equation}
for any $-T \leq t_0 < t_1 \leq T$, where the constant depends only on $n$ and $\kappa$.
\end{lemma}
\begin{proof}
The inequality \eqref{hardy}, with $q=1$, yields
\[
( D_r u )^2 \geq \frac{1}{8} ( 1 - 2 \kappa )^2 \frac{ u^2 }{ y^2 } + \frac{ (n-1) }{9} \frac{ u^2 }{ r^2 } + \frac{ ( 1 - 2 \kappa ) }{2} \nab^\beta ( \nab_\beta y \cdot y^{-1} u^2 ) \text{.}
\]
Letting $0 < \varepsilon \ll 1$ and integrating the above over $\mc{C} \cap \{ t_0 < t < t_1 \}$ yields
\begin{align*}
\int_{ \mc{C}_\varepsilon \cap \{ t_0 < t < t_1 \} } ( D_r u )^2 &\geq C \int_{ \mc{C}_\varepsilon \cap \{ t_0 < t < t_1 \} } \left( \frac{1}{ y^2 } + \frac{ n - 1 }{ r^2 } \right) u^2 \\
&\qquad - \frac{ ( 1 - 2 \kappa ) }{2} \int_{ \Gamma_\varepsilon^+ \cap \{ t_0 < t < t_1 \} } y^{-1} u^2 \\
&\qquad + \frac{ ( 1 - 2 \kappa ) }{2} \int_{ \Gamma_\varepsilon^- \cap \{ t_0 < t < t_1 \} } y^{-1} u^2 \\
&\geq C \int_{ \mc{C}_\varepsilon \cap \{ t_0 < t < t_1 \} } \left( \frac{1}{ y^2 } + \frac{ n - 1 }{ r^2 } \right) u^2 \\
&\qquad - \frac{ ( 1 - 2 \kappa ) }{2} \int_{ \Gamma_\varepsilon^+ \cap \{ t_0 < t < t_1 \} } y^{-1} u^2 \text{.}
\end{align*}
(Here, we have also made use of the identities \eqref{bdrylim_0}.)
Letting $\varepsilon \searrow 0$ and recalling that $u$ is boundary admissible results in the estimate \eqref{hardy_int}.
\end{proof}
We will also need the following energy estimate for solutions to \eqref{weqn}:
\begin{lemma} \label{L.energy}
Assume the hypotheses of Theorem \ref{T.Observability}.
Then,
\begin{equation}
\label{energyineq}
E_1 [u] ( t_1 ) \leq e^{ M | t_1 - t_0 |} E_1 [u] ( t_0 ) \text{,} \qquad t_0, t_1 \in ( -T, T ) \text{,}
\end{equation}
where the constant $M$ depends on $n$, $\kappa$, $X$, and $V$.
\end{lemma}
\begin{proof}
We assume for convenience that $t_0 < t_1$; the opposite case can be proved analogously.
By a standard density argument, we can assume $u$ is smooth within $\mc{C}$.
Fix now a sufficiently small $0 < \varepsilon \ll 1$, and define
\begin{equation}
\label{energy_0} E_{ 1, \varepsilon } [u] ( \tau ) = \int_{ \mc{C}_\varepsilon \cap \{ t = \tau \} } ( ( \partial_t u )^2 + ( D_r u )^2 + | \slashed{\nabla} u |^2 + u^2 ) \text{.}
\end{equation}
Differentiating $E_{ 1, \varepsilon } [u]$ and integrating by parts, we obtain, for any $\tau \in ( -T, T )$,
\begin{align}
\label{energy_1} \frac{d}{ d \tau } E_{ 1, \varepsilon } [u] (\tau) &= 2 \int_{ \mc{C}_\varepsilon \cap \{ t = \tau \} } ( \partial_{tt} u \partial_t u + D^j u D_j \partial_t u + u \partial_t u ) \\
\notag &= - 2 \int_{ \mc{C}_\varepsilon \cap \{ t = \tau \} } \partial_t u ( \Box_y u - u ) + 2 \int_{ \Gamma_\varepsilon \cap \{ t = \tau \} } \partial_t u D_\nu u \text{.}
\end{align}
Note that \eqref{Box_y_kappa}, \eqref{weqn}, and \eqref{hypXV} imply
\begin{align*}
| \Box_y u | &\lesssim \left| D_X u + V u + \frac{ ( n - 1 ) \kappa }{ r y } u \right| \\
&\lesssim | \partial_t u | + | \slashed{\nabla} u | + | D_r u | + \left( \frac{1}{y} + \frac{n-1}{r} \right) |u| \text{.}
\end{align*}
Combining the above with \eqref{energy_1} yields
\begin{align*}
\frac{d}{ d \tau } E_{ 1, \varepsilon } [u] (\tau) &\leq C \cdot E_1 [u] (\tau) + C \cdot E_1^{ \frac{1}{2} } [u] (\tau) \left[ \int_{ \mc{C} \cap \{ t = \tau \} } \left( \frac{1}{ y^2 } + \frac{ n - 1 }{ r^2 } \right) u^2 \right]^{ \frac{1}{2} } \\
\notag &\qquad + 2 \int_{ \Gamma_\varepsilon \cap \{ t = \tau \} } \partial_t u D_\nu u \text{.}
\end{align*}
Next, integrating the above in $\tau$ and applying Lemma \ref{L.hardy}, we obtain
\begin{equation}
\label{energy_2} E_{ 1, \varepsilon } [u] ( t_1 ) \leq E_1 [u] ( t_0 ) + C \int_{ t_0 }^{ t_1 } E_1 [u] (\tau) \, d \tau + 2 \int_{ \Gamma_\varepsilon \cap \{ t_0 < t < t_1 \} } \partial_t u D_\nu u \text{.}
\end{equation}
Since $u$ is boundary admissible, it follows that
\begin{equation}
\label{energy_3} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^+ \cap \{ t_0 < t < t_1 \} } \partial_t u D_\nu u = 0 \text{.}
\end{equation}
Moreover, since $\nu$ points radially along $\Gamma_\varepsilon^-$, symmetry yields
\begin{equation}
\label{energy_4} \lim_{ \varepsilon \searrow 0 } \int_{ \Gamma_\varepsilon^- \cap \{ t_0 < t < t_1 \} } \partial_t u D_\nu u = 0 \text{.}
\end{equation}
(Alternatively, when $n > 1$, we can also use \eqref{bdrylim_ia}.)
Letting $\varepsilon \searrow 0$ in \eqref{energy_2} and applying \eqref{energy_3}--\eqref{energy_4}, we conclude that
\[
E_1 [u] ( t_1 ) \leq E_1 [u] ( t_0 ) + C \int_{ t_0 }^{ t_1 } E_1 [u] (\tau) \, d \tau \text{.}
\]
The estimate \eqref{energyineq} now follows from the Gr\"onwall inequality.
\end{proof}
\subsection{Proof of Theorem \ref{T.Observability}}
Assume the hypotheses of Theorem \ref{T.Observability}, and set
\begin{equation}
\label{estobs_c} c = \begin{cases}
\frac{ 1 }{ 4 \sqrt{3} \cdot T} & \quad n \geq 4 \\
\min \left\{ \frac{1}{4 \sqrt{15} \cdot T}, \frac{|\kappa|}{120} \right\} & \quad n = 3 \\
\frac{ 1}{ 4 \sqrt{15} \cdot T} & \quad n = 1
\end{cases} \text{.}
\end{equation}
Note, in particular, that \eqref{estobs_c} and \eqref{obstime} imply that the conditions \eqref{eqc} hold.
Moreover, we define the function $f$ as in the statement of Theorem \ref{T.Carleman}, with $c$ as in \eqref{estobs_c}.
Then, direct computations, along with \eqref{obstime}, imply that
\[
\inf_{ \mc{C} \cap \{ t = 0 \} } f \geq - ( 1 + 2 \kappa )^{-1} \text{,} \qquad \sup_{ \mc{C} \cap \{ t = \pm T \} } f < - ( 1 + 2 \kappa )^{-1} \text{.}
\]
Hence, one can find constants $0 < \delta \ll T$ and $\mu_\kappa > ( 1 + 2 \kappa )^{-1}$ such that
\begin{equation} \label{estobs_f}
\begin{cases}
f \leq - \mu_\kappa & \quad \text{when } t \in ( -T, -T + \delta ) \cup ( T - \delta, T ) \\
f \geq - \mu_\kappa & \quad \text{when } t \in ( -\delta, \delta )
\end{cases} \text{.}
\end{equation}
In addition, we define the shorthands
\begin{equation}
\label{estobs_IJ} I_\delta = [ -T + \delta, T - \delta ] \text{,} \qquad J_\delta = ( -T, -T +\delta ) \cup ( T - \delta, T ) \text{.}
\end{equation}
We also let $\xi \in C^\infty ( \bar{\mc{C}} )$ be a cutoff function satisfying:
\begin{enumerate}[i)]
\item $\xi$ depends only on $t$.
\item $\xi = 1$ when $t \in I_\delta$.
\item $\xi = 0$ near $t = \pm T$.
\end{enumerate}
We can then apply the Carleman inequality in Theorem \ref{T.Carleman}, with our above choice \eqref{estobs_c} of $c$, to the function $\xi u$, in order to obtain
\begin{align}
\label{estobs1} &\lambda \int_\Gamma e^{ 2 \lambda f } \xi^2 ( \mc{N}_\kappa u )^2 + \int_{ \mc{C} } e^{ 2 \lambda f } | \Box_\kappa ( \xi u ) |^2 \\
\notag &\quad \gtrsim \lambda \int_{ \mc{C} } e^{ 2 \lambda f } [ | \partial_t ( \xi u ) |^2 + \xi^2 | \slashed{\nabla} u |^2 + \xi^2 ( D_r u )^2 + \lambda^2 y^{ -1 + 6 \kappa } \xi^2 u^2 ] \\
\notag &\quad \gtrsim \lambda \int_{ I_\delta \times B_1 } e^{ 2 \lambda f } [ ( \partial_t u )^2 + | \slashed{\nabla} u |^2 + ( D_r u )^2 + \lambda^2 y^{ -1 + 6 \kappa } u^2 ] \text{.}
\end{align}
Moreover, noting that
\begin{align*}
| \Box_\kappa ( \xi u ) | &\lesssim | \xi \Box_\kappa u | + | \partial_t \xi | | \partial_t u | + | \partial_t^2 \xi | | u | \\
&\lesssim | \Box_\kappa u | + | \partial_t u | + | u | \text{,}
\end{align*}
and recalling \eqref{hypXV} and \eqref{estobs_f}, we derive that
\begin{align*}
\int_{ \mc{C} } e^{ 2 \lambda f } | \Box_\kappa ( \xi u ) |^2 &\lesssim \int_{ I_\delta \times B_1 } e^{ 2 \lambda f } | \Box_\kappa u |^2 + \int_{ J_\delta \times B_1 } e^{ 2 \lambda f } ( | \Box_\kappa u | + | \partial_t u | + | u | )^2 \\
&\lesssim \int_{ I_\delta \times B_1 } e^{ 2 \lambda f } ( | \partial_t u |^2 + | D_r u |^2 + | \slashed{\nabla} u |^2 ) \\
&\qquad + \int_{ I_\delta \times B_1 } \left( \frac{1}{ y^2 } + \frac{ n - 1 }{ r^2 } \right) ( e^{ \lambda f } u )^2 \\
&\qquad + e^{ - 2 \lambda \mu_\kappa } \int_{ J_\delta \times B_1 } ( | \partial_t u |^2 + | D_r u |^2 + | \slashed{\nabla} u |^2 ) \\
&\qquad + e^{ - 2 \lambda \mu_\kappa } \int_{ J_\delta \times B_1 } \left( \frac{1}{ y^2 } + \frac{ n - 1 }{ r^2 } \right) u^2 \text{,}
\end{align*}
where the implicit constants of the inequalities depend also on $X$ and $V$.
Applying Lemma \ref{L.hardy} and recalling the definition of $f$, the above becomes
\begin{align}
\label{estobs2} \int_{ \mc{C} } e^{ 2 \lambda f } | \Box_\kappa ( \xi u ) |^2 &\lesssim \int_{ I_\delta \times B_1 } [ e^{ 2 \lambda f } ( | \partial_t u |^2 + | D_r u |^2 + | \slashed{\nabla} u |^2 ) + | D_r ( e^{ \lambda f } u ) |^2 ] \\
\notag &\qquad + e^{ - 2 \lambda \mu_\kappa } \int_{ J_\delta \times B_1 } ( | \partial_t u |^2 + | D_r u |^2 + | \slashed{\nabla} u |^2 ) \\
\notag &\lesssim \int_{ I_\delta \times B_1 } e^{ 2 \lambda f } ( | \partial_t u |^2 + | D_r u |^2 + | \slashed{\nabla} u |^2 + \lambda^2 y^{ 4 \kappa } u^2 ) \\
\notag &\qquad + e^{ - 2 \lambda \mu_\kappa } \int_{ J_\delta } E_1 [ u ] ( \tau ) \, d \tau \text{.}
\end{align}
Combining the inequalities \eqref{estobs1} and \eqref{estobs2} and letting $\lambda$ be sufficiently large (depending also on $X$ and $V$), we then arrive at the bound
\begin{align*}
&\lambda \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 + e^{ -2 \lambda \mu_\kappa } \int_{ J_\delta } E_1 [u] ( \tau ) \, d \tau \\
\notag &\quad \gtrsim \lambda \int_{ I_\delta \times B_1 } e^{ 2 \lambda f } ( | \partial_t u |^2 + | \slashed{\nabla} u |^2 + | D_r u |^2 + \lambda^2 y^{ 6 \kappa - 1 } u^2 ) \text{.}
\end{align*}
Further restricting the domain of the integral in the right-hand side to $( - \delta, \delta ) \times B_1$ and recalling the lower bound in \eqref{estobs_f}, the above becomes
\begin{equation}
\label{obsest3} \lambda \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 + e^{ -2 \lambda \mu_\kappa } \int_{ J_\delta } E_1 [u] ( \tau ) \, d \tau \gtrsim \lambda e^{ - 2 \lambda \mu_\kappa } \int_{ - \delta }^\delta E_1 [ u ] ( \tau ) \, d \tau \text{.}
\end{equation}
Finally, the energy estimate \eqref{energyineq} implies
\[
e^{-M T} E_1 [u] (0)\leq E_1[u](t)\leq e^{M T}E_1[u](0) \text{,}
\]
which, when combined with \eqref{obsest3}, yields
\begin{equation}
\label{obsest4} \lambda \int_\Gamma e^{ 2 \lambda f } ( \mc{N}_\kappa u )^2 + \delta e^{ -2 \lambda \mu_\kappa } e^{ M T } \cdot E_1 [u] ( 0 ) \gtrsim \lambda \delta e^{ - 2 \lambda \mu_\kappa } e^{ - M T } \cdot E_1 [ u ] ( 0 ) \text{.}
\end{equation}
Taking $\lambda$ in \eqref{obsest4} large enough such that $e^{ 2 M T } \ll \lambda$ results in \eqref{Observability}.
\section*{Acknowledgments}
A.E.\ and B.V.\ are supported by the ERC Starting Grant~633152 and by the
ICMAT--Severo Ochoa grant SEV--2015--0554.
A.S.\ is supported by the EPSRC grant EP/R011982/1.
\raggedbottom
\section{Introduction}
The fundamental theorem of calculus states that
$\int_a^b \partial_x f(x)\,dx = f(b) - f(a)$, where $f$ is some smooth function defined on the compact interval $[a,b]$. This theorem is critical in many applications, including computational sciences \cite{israel-theorem,thomson-theorem}. For example, if the function $f$ is strongly oscillatory, a numerical quadrature on the left-hand side would require many points and much computation to obtain accurate results. Nevertheless, the fundamental theorem guarantees that the positive and negative parts of the derivative of this oscillatory function largely cancel each other out. Indeed, one can simply compute the right-hand side directly, without truncation error.
As a generalization of the fundamental theorem, consider the integral of $\partial_x f$ over the same domain under some Lebesgue measure $m$, which is an antiderivative of the density function $\rho$ (a.k.a. the Radon-Nikodym derivative \cite{nagy-density}), i.e., $dm(x) = \rho(x)\,dx$. In the classical version of the theorem, as mentioned in the first paragraph, the density $\rho$ is constant and equals $1/(b-a)$ everywhere on the domain. If this is not the case, however, the integration by parts of $\partial_x f$ involves the derivative of $\rho$,
\begin{equation}
\label{eqn:intro1}
\int_a^b \partial_x f(x)\,dm(x) = f\rho\Big|_a^b - \int_a^b f(x)\,\partial_x\rho(x)\,dx = f\rho\Big|_a^b - \int_a^b f(x)\,\frac{\partial_x\rho}{\rho}(x)\,dm(x).
\end{equation}
The integral in Eq. \ref{eqn:intro1} can be approximated using a Monte Carlo integration scheme if a set of realizations of $x$, $\{x^{1},x^{2},...,x^{N}\}$, distributed according to $m$, is given. However, if $f$ is a strongly-oscillatory function with large magnitude, the Monte Carlo method applied directly to the integral on the left-hand side (LHS) of Eq. \ref{eqn:intro1} would require a large amount of data to obtain an approximation with a reasonably small error \cite{olver-oscillatory,makri-oscillatory}. Alternatively, one can consider the right-hand side (RHS) of the same equation, which requires the function $f$ itself, not its derivative. Assuming the density $\rho$ is a well-behaved function, the variance of the integrand on the RHS is significantly smaller and, therefore, considerably less data is needed to obtain an accurate result. However, extra computational effort must be expended to evaluate $\partial_x\rho/\rho = \partial_x\log\rho$. The computation of that function, which we denote by $g$ and call the {\em density gradient}, is the main focus of this paper.
Lebesgue integrals involving functions with high fluctuations are critical in the field of sensitivity analysis of chaotic dynamical systems. Ruelle \cite{ruelle-original,ruelle-corrections} derived a closed-form expression, known as the {\em linear response} formula, for the parametric derivative of the mean of a quantity of interest $J$. The linear response formula includes Lebesgue integrals of directional derivatives of a strongly oscillatory $J$ over the manifold of a chaotic system. A regularized version of Ruelle's formula, known as the space-split sensitivity (S3), was obtained through the integration by parts of the original formulation \cite{chandramoorthy-s3}. The S3 algorithm was successfully applied in various low-dimensional systems in the computation \cite{sliwiak-1d} and assessment of existence \cite{sliwiak-differentiability} of parametric derivatives of statistical quantities describing chaos. The crux of the computation of the regularized Ruelle's formula is the {\em SRB density gradient}, defined as a directional derivative of the logarithm of the SRB density \cite{young-srb,crimmins-srb} along the unstable manifold. While an efficient numerical procedure for the approximation of the SRB density gradient specialized to systems with one-dimensional unstable manifolds is available \cite{chandramoorthy-s3,sliwiak-1d,sliwiak-differentiability, chandramoorthy-clv}, we still lack a generalizable algorithm applicable to arbitrary higher-dimensional chaotic systems.
The main purpose of this work is to derive a general formula for the density gradient $g$, defined on a differentiable $m$-dimensional manifold $M$ immersed in the Euclidean space $\mathbb{R}^n$, $m\leq n$. In our analysis, we parameterize $M$ using the chart $x(\xi):\mathbb{R}^m\to \mathbb{R}^n$. Here, the $g$ function is an $m$-element vector, where the $i$-th component equals a directional derivative of $\log\rho$, in the direction of a unit vector $s_{i}$, i.e., $g_{i} = \partial_{s_i}\rho/\rho = (\nabla_x\rho\cdot s_i)/\rho$. The scalar function $\rho$ is the density implied by $x(\xi)$. Without loss of generality, we assume that the $i$-th directional derivative is computed along the isoparametric line in the direction of increasing $i$-th component of $\xi$. Analogously to Eq. \ref{eqn:intro1}, the Lebesgue integral of the directional derivative of $J$ over $M$ with measure $m$ can be written using $g_i$,
\begin{equation}
\label{eqn:intro2}
\int_{M} \nabla_{x}J(x)\cdot s_{i}(x)\,dm(x) = - \int_{M} J(x)\,g_{i}(x)\,dm(x),
\end{equation}
where $J$ is assumed to vanish on the boundary of $M$. For the reasons indicated above, it is computationally efficient to apply the Monte Carlo method to the RHS of Eq. \ref{eqn:intro2}. Analogous integration by parts is required to regularize the linear response \cite{chandramoorthy-s3}. Thus, the derivation of a computable expression for the density gradient defined on higher-dimensional smooth manifolds is a milestone in constructing algorithms for differentiating SRB measures. In addition, an explicit formula for $g$ might serve as a valuable tool in general numerical procedures involving integrals over geometrically complex domains.
The structure of this paper is the following. First, in Section \ref{sec:1D}, we derive a computable expression for the density gradient defined on one-dimensional manifolds (straight lines and curves). We also demonstrate a numerical example of Monte Carlo integration of a highly-oscillatory function, and show the advantage of using the density gradient in computing integrals of this type. In Section \ref{sec:general}, we extend all the concepts introduced in Section \ref{sec:1D} to higher-dimensional manifolds. Section \ref{sec:recursion} focuses on a recursive algorithm for the density gradient defined on a sequence of evolving manifolds under a differentiable map $\varphi$. Sections \ref{sec:1D}-\ref{sec:recursion} include examples of $x(\xi)$ defined by popular dynamical systems, as well as numerical results validating the derived expressions. Finally, Section \ref{sec:conclusion} concludes the paper.
\section{Computing $g$ on one-dimensional manifolds}\label{sec:1D}
In this section, we focus on the computation of the density gradient $g$ in the simplest topological setting. In particular, we consider one-dimensional manifolds, which can be described using a single parameter $\xi\in[0,1]$. Such a manifold is a curve $\mathcal{C}$ immersed in the Euclidean $\mathbb{R}^n$ space. We assume there exists a one-to-one map $x(\xi)\in\mathcal{C}\subset \mathbb{R}^n$, which is at least twice differentiable with respect to $\xi$, i.e., $x(\xi)\in C^2[0,1]$. In this case, the density gradient function is a scalar quantity defined as a directional derivative along $\mathcal{C}$ of the logarithmic density, $g = \partial_s\log\rho$, where $\rho:\mathcal{C}\to[0,1]$ is a density function implied by $x(\xi)$. If we think of $\xi$ as a realization of a random variable uniformly distributed in $[0,1]$, then $x(\xi)$ is in fact the inverse cumulative distribution function (inverse CDF, a.k.a. the quantile function). Intuitively, $x(\xi)$ tells us that $100\xi\;\%$ of all points mapped from the uniformly distributed set are located on the curve segment between $x(0)$ and $x(\xi)$. On the other hand, the density function $\rho$ indicates the density of points mapped on $\mathcal{C}$ per unit curve length. Therefore, $\rho$ is inversely proportional to the magnitude of the first derivative of $x(\xi)$.
In the following three subsections, we analytically derive the expression for $g$ in terms of the inverse CDF $x(\xi)$ for simple line manifolds, $n=1$ (Section \ref{sec:lines}), and general curves, $n\geq 1$ (Section \ref{sec:curves}), and demonstrate its importance in a numerical integration experiment (Section \ref{sec:integral}). We illustrate all relevant concepts using a certain $x(\xi)$ associated with the Van der Pol equation,
\begin{equation}
\label{eqn:van-der-pol}
\frac{d^2u}{dt^2} = 2(1-u^2)\frac{du}{dt} - u, \;\;u(0) = -a,\;\;\frac{du}{dt}(0) = 0,
\end{equation}
which describes the coordinates of a 2D non-conservative oscillator with non-linear damping \cite{ginoux-vanderpol}. In our numerical examples, we choose $a = 2.0199$, in which case the solution $[u(t), du/dt(t)]^T$ approximately lies on the limit cycle with period $T = 2T_{1/2}\approx 7.638$ and $u(t)\in[-a,a]$ for all $t\geq 0$. Figure \ref{fig:van-der-pol} illustrates the limit cycle of Eq. \ref{eqn:van-der-pol}, which has been computed using the second-order Runge-Kutta (midpoint) method with time step $\Delta t = 0.0001$.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/vdp_cycle.png}
\caption{Trajectory of the Van der Pol oscillator (Eq. \ref{eqn:van-der-pol}). The red dot represents the initial condition, as well as the solution after time $T$, while the blue dot indicates the solution after time $T_{1/2}$. The vertical dashed lines correspond to $u = -a$ and $u = a$ (boundaries of the range of $u$). At the green dots, the solution satisfies $du/dt+d^3u/dt^3 = 0$, while the zero acceleration state, $d^2u/dt^2=0$, is represented by orange dots.}
\label{fig:van-der-pol}
\end{figure}
\subsection{Lines: $\mathcal{C}\subset\mathbb{R}$}\label{sec:lines}
We start from the simplest case, i.e., when $\mathcal{C}$ is a bounded line segment in $\mathbb{R}$, between $a$ and $b$. The corresponding inverse CDF $x(\xi)$ differentiably maps $[0,1]$ to $[a,b]$ and is related to the density function by the following expression,
\begin{equation}
\label{eqn:line1}
\xi(x) = \int_{a}^{x} \rho(y)\;dy \;\;\;\forall x \in [a,b].
\end{equation}
Since $\xi\in[0,1]$, $\rho(x)$ is in fact the probability density function (PDF) corresponding to the CDF $\xi(x)$, which satisfies $d\xi=\rho(x)\; dx$. Using the inverse function theorem, which asserts $f'(f^{-1}(c)) = 1/[(f^{-1})'(c)]$ for any differentiable one-to-one function $f$ at any $c$ such that $(f^{-1})'(c)\neq0$, we conclude that
\begin{equation}
\label{eqn:line2}
\frac{dx}{d\xi}(\xi)\;\rho(x(\xi)) = 1.
\end{equation}
Eq. \ref{eqn:line2} indicates that at any point $x(\xi)$ on the manifold, the product of the PDF and derivative of the inverse CDF is constant. Thus, by differentiating Eq. \ref{eqn:line2} with respect to $\xi$ and reshuffling terms, we obtain a direct expression for $g$ at each point on the manifold,
\begin{equation}
\label{eqn:line3}
g(x(\xi)) = \partial_{x}\log\rho(x(\xi)) = \frac{\partial_x\rho(x(\xi))}{\rho(x(\xi))} = -\frac{\frac{d^2 x}{d\xi^2}(\xi)}{\Big(\frac{d x}{d\xi}(\xi)\Big)^2}.
\end{equation}
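As a quick sanity check of Eq. \ref{eqn:line3}, take $x(\xi) = \xi^2$, which maps $[0,1]$ to itself. Eq. \ref{eqn:line2} yields $\rho(x(\xi)) = 1/(2\xi) = \tfrac{1}{2}x^{-1/2}$, hence $\partial_x\log\rho(x) = -1/(2x)$, in agreement with the right-hand side of Eq. \ref{eqn:line3}, $-2/(2\xi)^2 = -1/(2x)$.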
To illustrate these functions and their relation, we will consider the solution to Eq. \ref{eqn:van-der-pol}, $u(t)$, for $t\in[0,T_{1/2}]$, where $T_{1/2} \approx 3.819$. Based on Figure \ref{fig:van-der-pol}, it is evident that $u(t)$ is a one-to-one smooth function and $du/dt \geq 0$ in that time interval. In fact, we can apply the linear transformation $t\to \xi$ to notice that
\begin{equation}
\label{eqn:line4}
x(\xi) = u\left(\xi T_{1/2}\right)
\end{equation}
is a representation of the inverse CDF. Next, we compute the first and second derivative of Eq. \ref{eqn:line4} with respect to $\xi$ and plug them to Eq. \ref{eqn:line3} to obtain the following formula for $g$ along the trajectory,
\begin{equation}
\label{eqn:line5}
g(u(t)) = -\frac{\frac{d^2u}{dt^2}(t)}{\left(\frac{du}{dt}(t)\right)^2} \stackrel{\text{Eq. \ref{eqn:van-der-pol}}}{=} -\frac{2(1-u^2(t))\frac{du}{dt}(t) - u(t)}{\left(\frac{du}{dt}(t)\right)^2}.
\end{equation}
We observe that the density gradient is invariant to any linear change of variables, i.e., when $d\xi/dt$ is constant. Given a numerical solution to Eq. \ref{eqn:van-der-pol}, the density can be directly computed from $\rho(u(t)) = (T_{1/2}\; \frac{du}{dt}(t))^{-1}$, which follows from Eq. \ref{eqn:line2}, whereas the density gradient function can be evaluated using Eq. \ref{eqn:line5}.
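For concreteness, this computation can be sketched in a few lines of Python; the snippet below is a minimal illustration under the stated parameter values, not the code used to produce the figures, and the exclusion of the endpoints (where $du/dt = 0$) is a choice of ours.
\begin{verbatim}
# Integrate the Van der Pol oscillator with the midpoint (RK2) rule and
# evaluate rho and g along u(t) for t in (0, T_half); both quantities
# blow up at the endpoints, where du/dt = 0.
import numpy as np

a, T_half, dt = 2.0199, 3.819, 1.0e-4

def f(y):                                # y = [u, du/dt]
    u, v = y
    return np.array([v, 2.0*(1.0 - u**2)*v - u])

N = int(round(T_half / dt))
y = np.array([-a, 0.0])
us, vs = np.empty(N + 1), np.empty(N + 1)
us[0], vs[0] = y
for k in range(N):
    y = y + dt*f(y + 0.5*dt*f(y))        # second-order midpoint step
    us[k + 1], vs[k + 1] = y

u, v = us[1:-1], vs[1:-1]                # interior points only
rho = 1.0/(T_half*v)                     # rho * dx/dxi = 1
g = -(2.0*(1.0 - u**2)*v - u)/v**2       # g = -u''/(u')^2
\end{verbatim}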
Figure \ref{fig:line} illustrates the inverse CDF $x(\xi)$, defined by Eq. \ref{eqn:line4}, as well as the corresponding density and density gradient. We clearly observe that both $\rho$ and $g$ are undefined at the endpoints, i.e., at $\xi = 0$ and $\xi = 1$, which is a consequence of zero slope of $x(\xi)$. Moreover, the larger the rate of change of $x$, the smaller the value of $\rho$, which confirms our previous intuitive explanation of the density function. We also notice that the density gradient is zero at the point corresponding to a local extremum of $\rho$ and the inflection point of $x(\xi)$.
\begin{figure}
\includegraphics[width=1.\textwidth]{figures/vdp_half_xrg.png}
\caption{The inverse CDF function $x(\xi)$ defined by the solution to the Van der Pol equation, such that $x(\xi(t)) = u(t)$ for all $t\in [0, T_{1/2}]$ (red), and the corresponding density (blue) and density gradient function (green). We used data presented in Figure \ref{fig:van-der-pol} to compute all the three functions.}
\label{fig:line}
\end{figure}
\subsection{Approximating integrals of a highly oscillatory function}\label{sec:integral}
We now demonstrate the use of the density gradient function in the numerical integration of a highly oscillatory function. Consider the following Lebesgue integral,
\begin{equation}
\label{eqn:line-integral}
I = \int_{-a}^{a} \partial_xf(x)\;d\xi(x),
\end{equation}
where $\xi(x)$ denotes a Lebesgue measure defined by Eq. \ref{eqn:line1}, while $f$ is a function whose first derivative is integrable and bounded. It is, of course, assumed that the above integral converges. Indeed, a sufficient condition for the convergence of $I$ in this case is Lebesgue-integrability of the density gradient with respect to the density $\rho$ \cite{sliwiak-differentiability}, i.e., $g\in L^1(\rho)$. However, the necessary and sufficient condition imposes extra requirements on the $f$ function itself, i.e., $\partial_x f\in L^1(\rho)$ or, equivalently, $\partial_x f\,\rho \in L^1[-a,a]$. In our experiment, the function $f$ has the following form,
\begin{equation}
\label{eqn:line-function}
f(x) = \left((x-a)(x+a)\sin(Kx^2)\right)^2,
\end{equation}
with some positive number $K$. We use Eq. \ref{eqn:line1} to rewrite the above integral, and then integrate it by parts. There exist a few scenarios in which the resulting boundary term vanishes. One option is that the product $\partial_x f\,\rho$ is periodic and integrable on $[-a,a]$. Another possibility is that both $\partial_x f$ and $\rho$ are bounded and at least one of them vanishes at the domain boundaries. In any case, two new versions of $I$, alternative to the original form (in Eq. \ref{eqn:line-integral}), are available,
\begin{equation}
\label{eqn:line-integral2}
\int_{-a}^{a} \partial_xf(x)\;\rho(x)\;dx = I =
- \int_{-a}^{a} f(x)\;g(x)\;d\xi(x).
\end{equation}
To numerically approximate the integral $I$, we apply three distinct approaches. The integral in Eq. \ref{eqn:line-integral} and the RHS of Eq. \ref{eqn:line-integral2} can be estimated using a Monte Carlo method, which requires generating a random sequence $\{x^1,x^2,...,x^N\}$ distributed according to the measure $\xi$. If such a sequence is available, then the integral of any Lebesgue-integrable function $h(x)$ can be approximated as follows,
\begin{equation}
\label{eqn:line-integral3}
\int_{-a}^{a} h(x)\;d\xi(x) \approx \frac{1}{N}\sum_{i=1}^N h(x^{i}),
\end{equation}
since $\xi\in[0,1]$. Finally, the integral on the LHS of Eq. \ref{eqn:line-integral2} is evaluated using a standard trapezoidal rule with a uniform $N$-element grid for $x$ between $-a$ and $a$. In the calculation, we take
\begin{equation}
\label{eqn:line-integral4}
x^{i}=u\left(\frac{i-1}{N-1}T_{1/2}\right).
\end{equation}
It can be numerically verified that for this particular choice of the sequence, $g\notin L^1(\rho)$, but $\partial_x f\in L^1(\rho)$. This means that the integral $I$ converges despite the blow-up of $\rho$ and $g$ at the boundaries of $[-a,a]$. To assess the Lebesgue-integrability of these functions, we applied the procedure described in Section 4 of \cite{sliwiak-differentiability}. This algorithm approximates the slope of the distribution tail of any function in the logarithmic scale.
In order to compare the performance of these three integration methods, we proceed as follows. First, we generate the sequence $\{x^1,x^2,...,x^{N}\}$, $N = 10^5$ (time step is chosen such that $\Delta t = T_{1/2}/(N-1)$) and, using Eq. \ref{eqn:line2} and Eq. \ref{eqn:line5}, we directly evaluate $\rho$ and $g$ at all points from that sequence. Subsequently, both the density and density gradient functions are linearly interpolated everywhere between $-a$ and $a$. We use these interpolators to approximate the two functions at any point of a uniform grid (trapezoidal rule) or sequence defined by Eq. \ref{eqn:line-integral4} (Monte Carlo) for an arbitrary value of $N$. If $K$ is sufficiently small, then the approximation error of the trapezoidal rule is expected to be bounded above by $\mathcal{O}(1/N)$, because the integrand, $\partial_x f\,\rho$, is Lebesgue-integrable \cite{cruz-traprule}. According to the Nyquist-Shannon sampling theorem, however, the discrete representation of the integrand may not be captured properly if $K$ is very large, in which case the trapezoidal rule's error decays as in a typical Monte Carlo method. Figure \ref{fig:convergence} shows the behavior of the relative error of the approximation of $I$ obtained using these three methods. The error is computed with respect to the reference solution obtained through the trapezoidal rule using $N=10^8$ points.
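The three estimators can be sketched as follows (an illustrative Python fragment, not the production code; the arrays \texttt{x}, \texttt{rho\_x}, and \texttt{g\_x} passed to the function are assumed to hold the sorted trajectory samples and the interpolated values of $\rho$ and $g$ at those samples):
\begin{verbatim}
import numpy as np

a, K = 2.0199, 1.0e5

def f(x):
    return ((x - a)*(x + a)*np.sin(K*x**2))**2

def df(x):                       # derivative of f by the product rule
    return 2*(x - a)*(x + a)*np.sin(K*x**2)*(
        2*x*np.sin(K*x**2) + 2*K*x*(x - a)*(x + a)*np.cos(K*x**2))

def estimators(x, rho_x, g_x):
    I_mc_lhs = np.mean(df(x))            # Monte Carlo, original integrand
    I_mc_ibp = -np.mean(f(x)*g_x)        # Monte Carlo, integrated by parts
    xg = np.linspace(-a, a, x.size)      # trapezoidal rule on a uniform
    rho_g = np.interp(xg, x, rho_x)      # grid, with rho interpolated
    I_trap = np.trapz(df(xg)*rho_g, xg)
    return I_mc_lhs, I_mc_ibp, I_trap
\end{verbatim}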
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/convergence_10.png}
\includegraphics[width=0.49\textwidth]{figures/convergence_100000.png}
\caption{Relative error of the approximation of $I$ for $K = 10$ (left) and $K = 100000$ (right) obtained using three methods: Monte Carlo integration applied to Eq. \ref{eqn:line-integral} (blue), Monte Carlo integration applied to the RHS of Eq. \ref{eqn:line-integral2} (red), and trapezoidal rule applied to the LHS of Eq. \ref{eqn:line-integral2} (green). Black and orange dashed lines are reference lines representing functions proportional to $N^{-1/2}$ and $N^{-1}$, respectively. In each of these plots, we computed the relative error with respect to the approximation of $I$ obtained using the trapezoidal rule with $N = 10^8$ samples.}
\label{fig:convergence}
\end{figure}
We observe that for a moderately-oscillatory integrand ($K = 10$), the relative error of the trapezoidal rule (green curve) decays as $\mathcal{O}(1/N)$, which confirms the theoretical estimates. In this case, the performance of both Monte Carlo approximations (blue and red curves) does not differ much from the trapezoidal rule's. The Monte Carlo approximation clearly converges to a solution slightly different from the reference solution, which is a consequence of the fact that the latter was generated using the trapezoidal rule for a linearly interpolated function. This example indicates that there is no reason to perform integration by parts and compute $g$ to approximate integrals of low- or moderately-oscillatory functions. The right-hand side plot of Figure \ref{fig:convergence} corresponds to a different scenario, i.e., when $f$ is highly-oscillatory ($K=10^5$). The error now decays as $\mathcal{O}(1/\sqrt{N})$ for $N\in[10^1,5\cdot10^5]$, regardless of the integration method. The trapezoidal rule requires almost $10^6$ samples to guarantee satisfactory accuracy. Note $\partial_x f$ has a magnitude proportional to $K$, and thus the variance of the sequence $\{\partial_x f(x^1),\partial_x f(x^2),...,\partial_x f(x^N)\}$ is of the order of $K^2$. Therefore, the Monte Carlo approach applied to Eq. \ref{eqn:line-integral} requires $\mathcal{O}(10^{10})$ samples to secure an error of order one. A similar error can be achieved if we perform integration by parts, compute $g$, and generate only $\mathcal{O}(1)$ samples, since the variance is reduced $10^{10}$ times.
In conclusion, the computational cost of the Monte Carlo method can be dramatically reduced using the generalized fundamental theorem of calculus. In the case of the $f$ function, the regularization of the integral in Eq. \ref{eqn:line-integral} may decrease the cost by a factor of up to $K^2$. This result is significant specifically in the context of strongly fluctuating functions.
\subsection{General curves: $\mathcal{C}\subset \mathbb{R}^{n}$}\label{sec:curves}
We extend the concepts introduced in Section \ref{sec:lines} to the case in which $x(\xi)$ differentiably maps $[0,1]$ to $\mathcal{C}\subset \mathbb{R}^{n}$, where $n$ is some positive integer. Geometrically, $x(\xi)$ represents a curve embedded in the $n$-dimensional Euclidean space. The measure $\xi(x)$ can now be expressed as an integral of the density, $\rho: \mathcal{C}\to [0,1]$, along $\mathcal{C}$ with respect to the arc length $s$,
\begin{equation}
\label{eqn:curve1}
\xi(x) = \int_{\mathcal{C}[x(0),\,x(\xi)]} \rho(x)\;ds,
\end{equation}
where $\mathcal{C}[x(0),x(\xi)]$ denotes a segment of $\mathcal{C}$ between the points indicated in the square bracket. Due to the parameterization $x(\xi)$, the length of the curve $\mathcal{C}$ equals $\int_{\mathcal{C}}ds$, while the arc length differential $ds$ is related to $d\xi$ by $ds = \|dx/d\xi\|\;d\xi$. Using this relation and Eq. \ref{eqn:curve1}, we obtain the following identity,
\begin{equation}
\label{eqn:curve2}
\rho(x(\xi))\;\left\|\frac{dx}{d\xi}(\xi)\right\| = 1.
\end{equation}
We now differentiate Eq. \ref{eqn:curve2} with respect to $\xi$, apply the chain rule and reshuffle terms,
\begin{equation}
\label{eqn:curve3}
g(x(\xi))= \partial_s\log(\rho(x(\xi))) =\frac{\partial_{s}\rho}{\rho}(x(\xi)) = -\frac{\frac{d x}{d\xi }(\xi)\cdot \frac{d^2 x}{d\xi^2}(\xi)}{\| \frac{d x}{d\xi }(\xi)\|^3},
\end{equation}
where $\partial_s$ denotes the directional derivative along the curve $\mathcal{C}$ in the direction of increasing $\xi$. Note the expression for $g$ in Eq. \ref{eqn:curve3} reduces to Eq. \ref{eqn:line3} if $x(\xi)$ represents a line manifold, i.e., $\mathcal{C}\subset\mathbb{R}^1$.
As an example, we re-consider the Van der Pol oscillator (Eq. \ref{eqn:van-der-pol}). This time, however, $x(\xi)$ represents a curve embedded in $\mathbb{R}^2$. In particular, $x(\xi)$ describes a two-dimensional loop such that
\begin{equation}
\label{eqn:curve4}
x(\xi) = \begin{bmatrix}u(\xi T)\\\frac{du}{dt}(\xi T)\end{bmatrix}
\end{equation}
(see Figure \ref{fig:van-der-pol} for an illustration of the loop). If a numerical solution to Eq. \ref{eqn:van-der-pol} is available, one can combine Eq. \ref{eqn:curve2} with Eq. \ref{eqn:curve4} to directly evaluate the density function. Similarly, by plugging Eq. \ref{eqn:curve4} into Eq. \ref{eqn:curve3}, it is possible to compute the density gradient, analogously to the procedure described in Section \ref{sec:lines}. Consequently, on the RHS of Eq. \ref{eqn:curve3}, $dx/d\xi$ can be replaced with $du/dt$, and $d^2x/d\xi^2$ with $d^2u/dt^2$. We can do so because the density gradient is invariant to any linear transformation of variables.
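A pointwise evaluation of the right-hand side of Eq. \ref{eqn:curve3} can then be sketched as follows (an illustration of ours, exploiting the invariance just mentioned to work directly with $t$-derivatives):
\begin{verbatim}
import numpy as np

def f(y):                       # Van der Pol vector field, y = [u, du/dt]
    u, v = y
    return np.array([v, 2.0*(1.0 - u**2)*v - u])

def Df(y):                      # its Jacobian
    u, v = y
    return np.array([[0.0, 1.0],
                     [-4.0*u*v - 1.0, 2.0*(1.0 - u**2)]])

def g_curve(y):
    dx  = f(y)                  # dx/dt along the loop
    ddx = Df(y) @ dx            # d2x/dt2 via the chain rule
    return -np.dot(dx, ddx)/np.linalg.norm(dx)**3
\end{verbatim}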
Figure \ref{fig:curve-density} illustrates the density function $\rho$, as well as the length of the curve segment $ \mathcal{C}[x(0),x(\xi)]$, versus the parameter $\xi$. We observe $\rho$ is large if the slope of the length function is small, and vice versa, which is analogous to the $x\--\rho$ relation in Figure \ref{fig:line}. In this particular case, $\rho(\xi)$ is clearly a periodic function with period 0.5. This property is manifested in Figure \ref{fig:van-der-pol}. Indeed, one can notice that the relation between $du/dt$ and $u$ at $t\in[0,T_{1/2}]$ is the same as that between $-du/dt$ and $-u$ at $t\in[T_{1/2},T]$, where $T_{1/2}$ corresponds to $\xi = 0.5$.
Figure \ref{fig:curve-gradient} shows the density gradient $g$ computed in two distinct ways: through a direct evaluation via Eq. \ref{eqn:curve3} and through a finite difference scheme (see the caption for more details). The two approaches provide visibly identical solutions, which confirms the correctness of Eq. \ref{eqn:curve3}. Clearly, the density gradient inherits the periodic behavior of $\rho$. We notice that the density gradient vanishes if the numerator of Eq. \ref{eqn:curve3} is zero, which can happen if $d^2u/dt^2 = 0$ (at the two orange dots in Figure \ref{fig:van-der-pol}) and/or $du/dt + d^3u/dt^3 = 0$ (at the six green dots in Figure \ref{fig:van-der-pol}). These two cases coincide with the local extrema of the density function (i.e., $d\rho/d\xi = 0$ if at least one of these equations is satisfied). However, zero density gradient does not imply an inflection point ($d^2u/dt^2 = 0$), in contrast to the line manifold case (see Section \ref{sec:lines}).
\begin{figure}
\includegraphics[width=1.05\textwidth]{figures/vdp_dens_length.png}
\caption{The density $\rho$ (blue) and length of the curve segment $\mathcal{C}[x(0),x(\xi)]$ (red) associated with the map $x(\xi)$ defined by Eq. \ref{eqn:curve4}. The former is computed using the analytical expression in Eq. \ref{eqn:curve2}, while the latter is approximated by summing the length of consecutive linear segments connecting the points in the sequence $\{x(0), x(\Delta t/T), x(2\Delta t/T),...,x(\xi)\}$, obtained in the numerical integration of Eq. \ref{eqn:van-der-pol}.}
\label{fig:curve-density}
\end{figure}
\begin{figure}
\includegraphics[width=1.\textwidth]{figures/vdp_g.png}
\caption{The density gradient function $g$ computed directly (using Eq. \ref{eqn:curve3}) and through a finite difference method. In the latter approach, we note $\partial_s \rho = \partial_{\xi}\rho / \partial_{\xi} s$. Both the numerator and the denominator are approximated using the central finite difference scheme on a uniform grid using data presented in Figure \ref{fig:curve-density}.}
\label{fig:curve-gradient}
\end{figure}
\section{Computing $g$ on general smooth manifolds}\label{sec:general}
The purpose of this section is to generalize the concept of the density gradient and derive a formula for $g$ defined on higher-dimensional manifolds. Here, we consider a smooth invertible map $x(\xi):U\to M$, where $U\subset \mathbb{R}^m$, $M\subset \mathbb{R}^n$, $m\leq n$, $x = [x_1,...,x_n]^T$ and $\xi = [\xi_1,...,\xi_m]^T$. $U$ is an $m$-orthotope (hyperrectangle), which is defined as the Cartesian product of $m$ 1D line manifolds (i.e., intervals of the real line). We no longer assume that these elementary sets only involve numbers between 0 and 1. $M$ is an oriented differentiable manifold, whose shape is defined by the chart $x$. For example, if $m=2$ and $n=3$, then $M$ represents a smooth surface. The density gradient $g$ is now defined as the vector of directional derivatives of the logarithm of the density function $\rho:M\to[0,1]$ implied by the chart $x(\xi)$. In particular, $g = \nabla_s\log\rho$, $\nabla_s := [\partial_{s_1}, \partial_{s_2}, ..., \partial_{s_m}]^T$, where $\partial_{s_i}$, $i=1,...,m$, denote directional derivatives along the corresponding {\em isoparametric curves}. The $i$-th component of $g$ is the rate of change of $\log\rho$ along the curve whose preimage consists of vectors $\xi\in U$ with all coordinates held constant except $\xi_i$. If $\log\rho$ is differentiable with respect to all the coordinates of $x$ and $\nabla_x :=[\partial_{x_1}, \partial_{x_2}, ..., \partial_{x_n}]^T$, then $\partial_{s_{i}} \log\rho = \nabla_x\log\rho\cdot s_i$, where $s_{i}$ denotes the unit vector that is tangent to the corresponding isoparametric curve and points in the direction of increasing $\xi_i$. In Section \ref{sec:general-formula}, we derive a generic formula for $g$, while Section \ref{sec:general-example} provides a specific example of a two-dimensional smooth manifold embedded in $\mathbb{R}^3$ (with $m=2$ and $n=3$).
\subsection{Derivation of the general formula}\label{sec:general-formula}
Recall $x(\xi):U\to M$ is an invertible and differentiable map, where $U\subset \mathbb{R}^m$, $M\subset \mathbb{R}^n$, and $m\leq n$, while $\rho(x):M\to [0,1]$ is the density function implied by that chart. Let $\omega(x)$ be the natural volume form defined on $M$. Therefore, the Lebesgue measure $m$ of any subset $V\subset U$, mapped by $x$ to $N\subset{M}$, equals
\begin{equation}
m(V) = \int_{N}\rho(x)\;d\omega(x),
\end{equation}
which implies that the volume element $dm$ defined on $U$ can be expressed in terms of $\rho$ and the volume element defined on $M$, at every point $x(\xi)$,
\begin{equation}
\label{eqn:general-measure}
dm = d\xi_1\wedge d\xi_2\wedge ... \wedge d\xi_m = \rho(x)\;d\omega(x) \quad (\text{with } d\omega = dx_1\wedge dx_2\wedge ... \wedge dx_n \text{ when } m = n).
\end{equation}
The wedge symbol ($\wedge$) denotes the exterior product, while $d\xi_{i}$, $i=1,...,m$ and $dx_{i}$, $i=1,...,n$ represent covectors (1-forms) associated with the corresponding coordinate directions. Intuitively, these 1-forms measure small displacements in the direction of one coordinate. The volume element on $M$, $d\omega$, can be expressed in terms of $\xi$,
\begin{equation}
\label{eqn:general-substitution}
d\omega(x(\xi)) = \sqrt{\det C(x(\xi))}\;d\xi_1\wedge d\xi_2\wedge ... \wedge d\xi_m,
\end{equation}
where $C$ represents the $m\times m$ metric tensor of the coordinate transformation $\xi \to x$, defined as
\begin{equation}
\label{eqn:general-metric}
C(x(\xi)) = [\nabla_{\xi} x(\xi)]^T\;\nabla_{\xi} x(\xi),
\end{equation}
or, componentwise,
\begin{equation}
C_{ij}(x(\xi)) = \partial_{\xi_i} x(\xi) \cdot \partial_{\xi_j} x(\xi).
\end{equation}
The vector gradient $\nabla_{\xi} x(\xi)$ is represented by an $n\times m$ matrix, in which the $j$-th column contains the derivative of $x$ with respect to $\xi_j$, i.e., $[\nabla_{\xi}x(\xi)]_{ij} = \partial_{\xi_{j}}x_{i}(\xi)$.
Combining Eq. \ref{eqn:general-measure} and \ref{eqn:general-substitution}, we conclude that the relation between the density function $\rho$ and metric tensor $C$, at any point $x(\xi)\in{M}$, can be written in the following way,
\begin{equation}
\label{eqn:general1}
\rho(x(\xi))\;\sqrt{\det C(x(\xi))} = 1,
\end{equation}
which is a generalization of Eq. \ref{eqn:curve2}.
Let us now QR-factorize the vector gradient $\nabla_{\xi} x(\xi)$,
\begin{equation}
\label{eqn:general-qr}
\nabla_{\xi} x(\xi) = Q(x(\xi))\;R(x(\xi)),
\end{equation}
where $Q$ is an $n\times m$ matrix, whose columns form an orthonormal basis for the column space of $\nabla_{\xi} x(\xi)$, while $R$ is an $m\times m$ upper-triangular matrix. Note $Q^TQ = I$ everywhere on $M$. Using this property, we immediately notice that $C = R^T R$ and, therefore, Eq. \ref{eqn:general1} reduces to
\begin{equation}
\label{eqn:general2}
\rho(x(\xi))\;|\det R(x(\xi))| = 1.
\end{equation}
For any invertible matrix $A(s)$, which depends differentiably on a scalar $s$, the following identity holds,
\begin{equation}
\label{eqn:general-indentity}
\frac{\partial \det A(s)}{\partial s} = \det A(s)\; \mathrm{tr}\left(A^{-1}(s)\;\frac{\partial A(s)}{\partial s}\right).
\end{equation}
Differentiating Eq. \ref{eqn:general2} with respect to $\xi_{i}$, applying the chain rule and Eq. \ref{eqn:general-indentity}, we obtain the following expression for the $i$-th component of the density gradient,
\begin{equation}
\label{eqn:general-g1}
g_i(x(\xi)) = \frac{\partial_{s_i}\rho(x(\xi))}{\rho(x(\xi))} = - \frac{\partial_{s_i} \det R(x(\xi))}{\det R(x(\xi))} = - \frac{\partial_{\xi_i} \det R(x(\xi))}{\det R(x(\xi))\|\partial_{\xi_i}x(\xi)\|}.
\end{equation}
Eq. \ref{eqn:general-g1} is computationally inconvenient, as it involves evaluating the determinant of $R$ and its directional derivative. Our goal is to rewrite the RHS of that equation such that only first and second parametric derivatives of $x(\xi)$, as well as $Q$ and $R$ factors, are involved.
Since $R$ is an upper-triangular matrix, we notice that
\begin{equation}
\label{eqn:general-R}
\frac{\partial \det R}{\det R} = \frac{\partial \left(\prod_{k=1}^m R_{kk}\right)}{\prod_{k=1}^m R_{kk}} = \sum_{k=1}^{m}\frac{(\partial R)_{kk}}{R_{kk}} = \mathrm{tr}(\partial R\;R^{-1}).
\end{equation}
Now, differentiating Eq. \ref{eqn:general-qr} with respect to $\xi_i$, and then left- and right-multiplying the resulting expression by $Q^T$ and $R^{-1}$, respectively, we obtain
\begin{equation}
\label{eqn:general-aux1}
Q^T(x(\xi)) \;\partial_{\xi_i}\nabla_{\xi}x(\xi)\;R^{-1}(x(\xi)) = Q^T(x(\xi))\;\partial_{\xi_i}Q(x(\xi)) + \partial_{\xi_i}R(x(\xi))\;R^{-1}(x(\xi)).
\end{equation}
Note that since $Q^TQ = I$, the matrix $Q^T\;\partial_{\xi_i} Q$ is anti-symmetric, which means its trace vanishes. Therefore, the following equality
\begin{equation}
\label{eqn:general-treq}
\mathrm{tr}\left(Q^T(x(\xi)) \;\partial_{\xi_i}\nabla_{\xi}x(\xi)\;R^{-1}(x(\xi))\right) = \mathrm{tr}\left(\partial_{\xi_i}R(x(\xi))\;R^{-1}(x(\xi))\right)
\end{equation}
holds everywhere on $M$. Finally, by combining Eq. \ref{eqn:general-g1}, \ref{eqn:general-R} and \ref{eqn:general-treq}, we obtain the general formula for $g_i$,
\begin{equation}
\label{eqn:general-g2}
g_i(x(\xi)) = \partial_{s_{i}}\log\rho(x(\xi)) = -\frac{\mathrm{tr}\left(Q^T(x(\xi)) \;\partial_{\xi_i}\nabla_{\xi}x(\xi)\;R^{-1}(x(\xi))\right)}{\|\partial_{\xi_i}x(\xi)\|},
\end{equation}
which holds everywhere on $M$ for $i = 1,...,m$. Using Einstein's summation convention, Eq. \ref{eqn:general-g2} can be rewritten as
\begin{equation}
\label{eqn:general-g2-einstein}
g_i(x(\xi)) = -\frac{q_j(x(\xi))\cdot\partial_{\xi_i} \partial_{\xi_k} x(\xi)\;R_{kj}^{-1}(x(\xi))}{\|\partial_{\xi_i}x(\xi)\|},
\end{equation}
where $q_{j}(x(\xi))$ denotes the $j$-th column of $Q(x(\xi))$. Thus, to directly compute the density gradient at any point on a manifold, all first and second derivatives of the chart $x(\xi)$ must be found. In addition, QR factorization of the vector gradient $\nabla_\xi x$ and inversion of the $R$ matrix must be performed. In practice, inverting the triangular matrix $R$ means solving a linear system using the backward substitution method, which requires $\mathcal{O}(m^2)$ operations. Note Eq. \ref{eqn:general-g2-einstein} reduces to Eq. \ref{eqn:curve3} if $m=1$. In the following section, we present an example illustrating some of these quantities. Although Eq. \ref{eqn:general-g2-einstein} is a formula for the derivative in the direction of an isoparametric curve, we can compute derivatives of $\log\rho$ in an arbitrary direction using the distributive law of the dot product.
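A direct implementation of Eq. \ref{eqn:general-g2-einstein} is short. The following Python sketch (a minimal illustration; the array layout is an assumption of ours) computes all $m$ components of $g$ at a single point from the first and second parametric derivatives of the chart; by Eq. \ref{eqn:general2}, the same factorization also yields $\rho = 1/|\det R| = 1/\prod_k |R_{kk}|$.
\begin{verbatim}
import numpy as np
from scipy.linalg import qr, solve_triangular

def density_gradient(J, H):
    # J: n x m vector gradient of x with respect to xi;
    # H: n x m x m array with H[:, i, k] = d^2 x / (d xi_i d xi_k).
    n, m = J.shape
    Q, R = qr(J, mode='economic')      # J = Q R, with Q^T Q = I
    g = np.empty(m)
    for i in range(m):
        A = Q.T @ H[:, i, :]           # Q^T d_i(grad_xi x), m x m
        X = solve_triangular(R, A)     # X = R^{-1} A by back substitution
        g[i] = -np.trace(X)/np.linalg.norm(J[:, i])
    return g
\end{verbatim}
Note that $\mathrm{tr}\left(Q^T \partial_{\xi_i}\nabla_\xi x \, R^{-1}\right) = \mathrm{tr}\left(R^{-1} Q^T \partial_{\xi_i}\nabla_\xi x\right)$ by the cyclic property of the trace, which is what the sketch exploits to avoid forming $R^{-1}$ explicitly.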
\subsection{Example: a surface manifold}\label{sec:general-example}
As an example of a surface manifold (with $m=2$ and $n=3$), let us consider $x(\xi) = u(\xi) = [u_1(\xi),u_2(\xi),u_3(\xi)]^T$, where $\xi= [c,t]^T$, $-5 \leq c \leq 5$, $0 \leq t \leq 0.4$, $\left.u(\xi)\right|_{t=0} = [c,c,28]^T$, and $\partial_t u(\xi) = f(u(\xi))$, where $f$ is defined as follows,
\begin{equation}
\label{eqn:general-lorenz}
\begin{split}
& \partial_t u_1(\xi) = 10\;(u_2(\xi)-u_1(\xi)),\\& \partial_t u_2(\xi) = u_1(\xi)\,(28 - u_3(\xi)) - u_2(\xi), \\ & \partial_t u_3(\xi) = u_1(\xi)\,u_2(\xi) - \frac{8}{3}u_3(\xi).
\end{split}
\end{equation}
System \ref{eqn:general-lorenz} represents the Lorenz '63 oscillator, which is a mathematical model used for atmospheric convection \cite{lorenz-climate}. This system is known to exhibit chaotic behavior. However, we are interested in the solution in a short time interval, such that the trajectories do not intersect and the resulting surface is orientable. In particular, we compute $x(\xi)$ by numerically integrating System \ref{eqn:general-lorenz} in time for different values of $c\in[-5,5]$, using the second-order Runge-Kutta scheme with $\Delta t = 0.002$. There are two reasons we have chosen this particular $x(\xi)$. First, it serves as a perfect example of a problem in which the smooth one-to-one solution, $x(\xi)$, cannot be found analytically. Thus, the computation of $g$ should be performed numerically using the closed-form relations derived in Section \ref{sec:general-formula}. Second, the surface described by the chart $x(\xi)$ can be obtained as an evolution of 1D manifolds. This observation is utilized in Section \ref{sec:recursion}, where we derive expressions for evolving manifolds. To evaluate $\rho$ and $g$, we directly use Eq. \ref{eqn:general2} and Eq. \ref{eqn:general-g2}, respectively. To find these quantities, the vector gradient $\nabla_{\xi}x(\xi)=[\partial_c x(\xi), \partial_t x(\xi)]$, as well as the following second derivatives: $\partial_c^2 x(\xi), \partial_t^2 x(\xi), \partial_c\partial_t x(\xi)$, must be found at every point on the manifold. The time derivative, $\partial_t x(\xi) = f(x(\xi))$, is obtained automatically as we integrate System \ref{eqn:general-lorenz} in time. The second derivative of $x$ with respect to $t$ is obtained using the chain rule, $\partial_t^2 x(\xi) = \partial_t f(x(\xi)) = Df(x(\xi))\,f(x(\xi))$, where $Df$ denotes the Jacobian of System \ref{eqn:general-lorenz}. Thus, from the computational point of view, we need to solve a tangent equation to find $\partial_t x(\xi)$ at every point of the trajectory defined by System \ref{eqn:general-lorenz}. Using this approach, one can analogously find derivatives with respect to $c$. Let $v(\xi) = \partial_c x(\xi)$ and $w(\xi) = \partial^2_c x(\xi)$. Using the chain rule, we conclude that $\partial_t v(\xi) = Df(x(\xi))\,v(\xi)$, $\left.v(\xi)\right|_{t=0} = [1,1,0]^T$ and, by differentiating again, $\partial_t w(\xi) = D^2f(x(\xi))(v(\xi), v(\xi)) + Df(x(\xi))\,w(\xi)$, $\left.w(\xi)\right|_{t=0} = [0,0,0]^T$, where $D^2 f$ denotes the Hessian of $f$. Using Einstein's summation convention, the $i$-th component of the bilinear form $D^2f(x(\xi))(v(\xi), v(\xi))$ can be written as $\partial_{x_k}\partial_{x_{l}}f_i\,v_k\,v_l$. Finally, the mixed derivative $\partial_c\partial_t x(\xi) = \partial_t v(\xi)$ is a byproduct of the numerical integration of the tangent equation for $v$. We solve all of these tangent equations using the same time integrator as the one mentioned above. Since $m=2$, the $2\times 2$ $R$ matrix is inverted analytically at every point on the trajectory.
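The coupled primal/tangent computation described above admits a compact sketch (again illustrative; the value of $c$ and the loop structure are ours, while the integrator and step size mirror the settings stated earlier):
\begin{verbatim}
import numpy as np

def f(x):                              # Lorenz '63 vector field
    return np.array([10.0*(x[1] - x[0]),
                     x[0]*(28.0 - x[2]) - x[1],
                     x[0]*x[1] - (8.0/3.0)*x[2]])

def Df(x):                             # its Jacobian
    return np.array([[-10.0, 10.0, 0.0],
                     [28.0 - x[2], -1.0, -x[0]],
                     [x[1], x[0], -8.0/3.0]])

def D2f_vv(v):                         # D^2 f (v, v); the field is
    return np.array([0.0,              # quadratic, so D^2 f is constant
                     -2.0*v[0]*v[2],
                     2.0*v[0]*v[1]])

def rhs(z):                            # x' = f, v' = Df v,
    x, v, w = z[:3], z[3:6], z[6:]     # w' = D2f(v, v) + Df w
    return np.concatenate([f(x), Df(x) @ v, D2f_vv(v) + Df(x) @ w])

c, dt, nsteps = -2.5, 0.002, 200       # integrate up to t = 0.4
z = np.array([c, c, 28.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(nsteps):                # midpoint (RK2) steps
    z = z + dt*rhs(z + 0.5*dt*rhs(z))
x, v, w = z[:3], z[3:6], z[6:]         # x, dx/dc, d2x/dc2 at (c, 0.4)
\end{verbatim}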
In this case, the $U$ space, which is the domain (preimage) of $x$, is in fact a Cartesian product of $[-5,5]$ and $[0,0.4]$. The upper plot in Figure \ref{fig:general-mesh} graphically represents $U$, while the lower plot illustrates the $u_1\--u_3$ projection of $M$, obtained through the mapping $x(\xi)$. For completeness, in Figure \ref{fig:general-mesh-ext}, we also include the $u_1\--u_2$ and $u_2\--u_3$ projection of the deformed mesh. It is clear that the deformation is symmetric with respect to $c = 0$. We also observe that fibers (isoparametric lines) corresponding to larger values of $t$ are subject to greater stretching than those at smaller $t$. These features are reflected by the distribution of the density function $\rho$, plotted in Figure \ref{fig:general-density}. The smaller the area of each distorted quadrilateral of the mesh, the larger the value of the density function. Indeed, the smallest values of the density distribution are located around $t=0.4$. This region coincides with the most stretched quadrilaterals.
\begin{figure}
\includegraphics[width=1.0\textwidth]{figures/surface_reference.png}
\includegraphics[width=1.0\textwidth]{figures/surface_deformed_1_3.png}
\caption{Upper plot: a structured mesh representing the domain $U=\{(c, t)\,|\,c\in[-5,5],\, t\in[0,0.4]\}$. The black lines correspond to fixed values of $t$, while the red lines illustrate $\xi$ with a fixed value of $c$. The red and black dashed lines represent $c=5$ and $t=0.4$, while the red and black bold lines refer to $c = -2.5$ and $t=0.2$, respectively. Lower plot: $u_1\--u_3$ projection of the image of the structured mesh obtained through the mapping $x(\xi)$.}
\label{fig:general-mesh}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\textwidth]{figures/surface_deformed_1_2.png}
\includegraphics[width=1.0\textwidth]{figures/surface_deformed_2_3.png}
\caption{Extension of Figure \ref{fig:general-mesh}. $u_1\--u_2$ (upper plot) and $u_2\--u_3$ (lower plot) projection of the image of the structured mesh obtained through the mapping $x(\xi)$.}
\label{fig:general-mesh-ext}
\end{figure}
\begin{figure}
\includegraphics[width=1.\textwidth]{figures/density_xz.png}
\caption{$u_1\--u_3$ projection of the density function $\rho$.}
\label{fig:general-density}
\end{figure}
Figure \ref{fig:general-density-gradient} shows the two components of the density gradient $g:=[g_c,g_t]^T = [\partial_{s_1}\log\rho,\partial_{s_2}\log\rho]^T $, corresponding respectively to the $c$- and $t$-directions. The distribution of $g_{c}$ is clearly symmetric with respect to the reflection points on the isoparametric line $c=0$, which is a manifestation of the fact that the density is symmetric and the directional derivative is computed in the direction of increasing $c$. The symmetry of $g_t$ is a direct consequence of the definition $g_t:=\partial_{s_2}\log\rho$, where $\log\rho$ itself is symmetric. Note the largest-in-magnitude values of $g_c$ concentrate around the boundaries of the range of $c$, i.e., at $c=\pm 5$ and, in the case of $g_t$, around $u_1 = 0$. This reflects the fact that the density gradient measures the relative rate of change of the density. In particular, its value becomes large if the rate of change of the density is large and/or the density itself is small. Figure \ref{fig:general-direct-fd} illustrates the density gradient along the bold isoparametric curves from Figure \ref{fig:general-mesh}, computed using Eq. \ref{eqn:general-g2-einstein} directly and through finite differencing. In the case of both $g_c$ and $g_t$, we observe good agreement between the solution computed directly and the finite-difference approximation, which validates our derivation of Eq. \ref{eqn:general-g2-einstein}.
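For reference, the finite-difference validation along a single isoparametric curve can be sketched as follows (a Python sketch; the curve samples and the associated densities are assumed to be given as arrays, and $s$ is approximated by the cumulative chord length):
\begin{verbatim}
import numpy as np

def fd_log_density_gradient(points, rho):
    # central finite-difference estimate of d(log rho)/ds along a discretized curve,
    # where `points` is an (N, 3) array of samples and `rho` the density at each sample
    ds = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(ds)))           # arclength coordinate
    logr = np.log(rho)
    g = np.empty_like(logr)
    g[1:-1] = (logr[2:] - logr[:-2]) / (s[2:] - s[:-2])  # central differences
    g[0] = (logr[1] - logr[0]) / (s[1] - s[0])           # one-sided at the ends
    g[-1] = (logr[-1] - logr[-2]) / (s[-1] - s[-2])
    return g
\end{verbatim}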
\begin{figure}
\includegraphics[width=1.\textwidth]{figures/ga_xz.png}
\includegraphics[width=1.\textwidth]{figures/gt_xz.png}
\caption{$u_1\--u_3$ projection of the directional derivative of $\log\rho$, in the $c$-direction, $g_1:=g_c$ (upper plot), and $t$-direction, $g_2 := g_t$ (lower plot).}
\label{fig:general-density-gradient}
\end{figure}
\begin{figure}
\includegraphics[width=1.\textwidth]{figures/fd_vs_exact_a_t.png}
\caption{The first and second components of the density gradient function $g=[g_c,g_t]$, respectively at $t=0.2$ (black solid line on Figure \ref{fig:general-mesh}) and $c=-2.5$ (red solid line on Figure \ref{fig:general-mesh}), computed directly using Eq. \ref{eqn:general-g2-einstein} and through a finite difference method. In the latter approach, we note $\partial_{s_{i}} \rho = \partial_{\xi_{i}}\rho / \partial_{\xi_i} s_i$, $i = 1,2$, $\xi_1=c$, $\xi_2 = t$, where $s_i$ denotes the length of the isoparametric curve associated with $\xi_i$. Both the numerator and denominator are approximated using the central finite difference scheme on a uniform grid. The relation $s_{i}(\xi_i)$ is found in a way analogous to the one described in Section \ref{sec:curves}.}
\label{fig:general-direct-fd}
\end{figure}
\section{Recursive algorithm for $g$ along trajectories defined by diffeomorphism $\varphi$}\label{sec:recursion}
Using the results presented in Section \ref{sec:1D} and \ref{sec:general}, we now propose an iterative method for the density gradient along trajectories defined by a $C^2$ diffeomorphism $\varphi:M^k\to M^{k+1}$, $k\in\mathbb{Z}$, where both $M^k$ and $M^{k+1}$ represent differentiable manifolds of the same dimension embedded in $\mathbb{R}^n$, $n\in \mathbb{Z}^{+}$. Let us consider two different charts, $x^{k}(\xi)\in N^k\subset M^k$ and $x^{k+1}(\xi)\in N^{k+1}\subset M^{k+1}$, such that
\begin{equation}
\label{eqn:recursion-map}
x^{k+1}(\xi) = \varphi(x^k(\xi))
\end{equation}
for all $\xi\in V\subset{U}\subset\mathbb{R}^m$, $1\leq m\leq n$, $k\in\mathbb{Z}$. Let $\omega^k$ and $\omega^{k+1}$ be the natural volume forms in $M^k$ and $M^{k+1}$, respectively, where $\omega^{k+1}$ is the pushforward of $\omega^k$ under $\varphi$. Therefore, for all $k\in\mathbb{Z}$, the Lebesgue measure $\lambda$ of the subspace $V$ (we reserve $m$ for the manifold dimension) can be expressed as follows,
\begin{equation}
\label{eqn:recursion-lebesgue}
\lambda(V) = \int_{N^k}\rho^{k}(x)\;d\omega^k(x) = \int_{N^{k+1}}\rho^{k+1}(x)\;d\omega^{k+1}(x),
\end{equation}
where $\rho^k$ and $\rho^{k+1}$ are densities implied by $x^k(\xi)$ and $x^{k+1}(\xi)$, respectively. Following the procedure involving Eq. \ref{eqn:general-measure}-\ref{eqn:general-metric}, it is possible to find the relation between $\rho^k$, $\rho^{k+1}$, and the metric tensors of the two transformations: $\xi\to x^k$ and $\xi\to x^{k+1}$. Thus, by applying the chain rule, we find a relation between the parametric derivatives of $x^k(\xi)$ and $x^{k+1}(\xi)$, thanks to which a general recursive formula for the density gradient along the trajectory defined by $\varphi$ can be inferred. The $g^k$ function should be understood as the directional derivative of the (logarithmic) density implied by the chart $x^k(\xi)$. In Section \ref{sec:recursion-derivation}, we derive an iterative procedure for $g^k$, while Section \ref{sec:recursion-example} presents the use of the proposed algorithm by revisiting the Lorenz '63 oscillator. Throughout this section, repeated indices in the subscript of any term imply summation (Einstein's notation), unless otherwise stated.
\subsection{A generic recursive procedure for $g^k$}\label{sec:recursion-derivation}
As pointed out above, the first step is to find a relation between the parametric gradients of $x^k$ and $x^{k+1}$. Applying the definition of $\varphi$ from Eq. \ref{eqn:recursion-map} and the chain rule, we can expand $\nabla_{\xi}x^{k+1}$ in the following way,
\begin{equation}
\label{eqn:recursion-first-deriv}
\nabla_{\xi}x^{k+1}(\xi) = D\varphi(x^k(\xi))\;\nabla_{\xi}x^{k}(\xi),
\end{equation}
or, equivalently,
\begin{equation}
\label{eqn:recursion-first-deriv-comp}
\partial_{\xi_i}x^{k+1}(\xi) = D\varphi(x^k(\xi))\;\partial_{\xi_i}x^{k}(\xi),
\end{equation}
where $D\varphi$ denotes the $n \times n$ Jacobian matrix of $\varphi$, i.e., $(D\varphi)_{ij} = \partial_{x_j}\varphi_i$.
By differentiating Eq. \ref{eqn:recursion-first-deriv-comp} once more, with respect to $\xi_{j}$, we obtain
\begin{equation}
\label{eqn:recursion-second-deriv-comp}
\partial_{\xi_i}\partial_{\xi_j}x^{k+1}(\xi) = D^2\varphi(x^k(\xi))\left(\partial_{\xi_i}x^{k}(\xi),\partial_{\xi_j}x^{k}(\xi)\right) + D\varphi(x^k(\xi))\;\partial_{\xi_i}\partial_{\xi_j}x^{k}(\xi),
\end{equation}
where $D^2\varphi$ is the Hessian of $\varphi$, which is in fact a third-order $n\times n\times n$ tensor. Analogously to the example presented in Section \ref{sec:general-example}, the first term in the RHS of Eq. \ref{eqn:recursion-second-deriv-comp} is a bilinear form that outputs an $n$-element vector. In this case, the $r$-th component of that vector equals $\partial_{x_p}\partial_{x_q}\varphi_r(x^k(\xi))\;\partial_{\xi_i}x_p^k(\xi)\;\partial_{\xi_j}x_q^k(\xi)$.
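In code, this contraction is a single \texttt{einsum} call; the sketch below assumes the full Hessian is stored as an $n\times n\times n$ array \texttt{H} with \texttt{H[r,p,q]}$\,=\partial_{x_p}\partial_{x_q}\varphi_r$ (array layout and names are ours):
\begin{verbatim}
import numpy as np

def hessian_bilinear(H, ei, ej):
    # r-th component: sum over p, q of H[r, p, q] * ei[p] * ej[q]
    return np.einsum('rpq,p,q->r', H, ei, ej)
\end{verbatim}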
In the second step, we directly use the formula for the density gradient derived in Section \ref{sec:general-formula}. Let $f^k:=f(x^k(\xi))$ be a shorthand notation for any function $f$ defined at $x^k(\xi)$, and $e_i(x^k(\xi)):= \partial_{\xi_i}x^k(\xi)$, $a_{ij}(x^k(\xi)) := \partial_{\xi_i}\partial_{\xi_j}x^{k}(\xi)$. Thus, by combining Eq. \ref{eqn:recursion-first-deriv-comp}, \ref{eqn:recursion-second-deriv-comp} with Eq. \ref{eqn:general-g2-einstein} derived for a generic chart $x(\xi)$, we conclude that
\begin{equation}
\label{eqn:recursion-g}
g_i^k = - \frac{(R^{-1}_{lj})^k}{\|e_i^k\|}\,q_j^k\cdot a_{il}^k ,
\end{equation}
\begin{equation}
\label{eqn:recursion-qr}
(\nabla_\xi x)^k = [e_1^k\,e_2^k\dotsm e^k_m] = Q^k\,R^k = [q_1^k\,q_2^k\dotsm q^k_m]\,R^k,
\end{equation}
\begin{equation}
\label{eqn:recursion-e}
e^{k+1}_{i} = D\varphi^k e^k_i,
\end{equation}
\begin{equation}
\label{eqn:recursion-a}
a^{k+1}_{ij} = D^2\varphi^k (e_i^k,e_j^k) + D\varphi^k a^k_{ij},
\end{equation}
hold for any $\xi\in{V}\subset U$.
To summarize, if a map $\varphi$ relating two consecutive points on the trajectory, $x^{k}(\xi)$ and $x^{k+1}(\xi)$, is available, then the density gradient at one point can be computed using information associated with the other point. In particular, according to Eq. \ref{eqn:recursion-g}, the $i$-th component of $g$ requires knowledge of $e_j$, $j = 1,...,m$ and $a_{pq}$, $p, q = 1,...,m$ at the same point. Thus, to compute one component of the density gradient at $x^k(\xi)$ for some $\xi$, we need to apply the recursion in Eq. \ref{eqn:recursion-e} $km$ times and, analogously, the recursion in Eq. \ref{eqn:recursion-a} $1/2\,k m^2$ times. The $1/2$ factor is a consequence of the fact that $a_{ij}(\xi) = a_{ji}(\xi)$ for any admissible $\xi$, because $x^{k}$ is assumed to be twice differentiable for any $k\in\mathbb{Z}$. In addition, at every step $k$, the QR factorization of $(\nabla_{\xi} x)^k = [e_1^k\,e_2^k\dotsm e^k_m]$ and inversion (either direct if $m$ is small or through solving a linear system) of the resulting $m\times m$ $R^k$ matrix must be performed. We assume $x^0(\xi)$ is given, from which we directly compute initial conditions for the recursions in Eq. \ref{eqn:recursion-e} and \ref{eqn:recursion-a}.
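A schematic Python version of one step of this procedure is given below; $D\varphi$ and $D^2\varphi$ are assumed to be supplied by the caller (e.g., from tangent solvers), and the function and argument names are ours:
\begin{verbatim}
import numpy as np

def density_gradient(E, A):
    # Eq. (recursion-g): E is the n x m matrix [e_1 ... e_m]; A[i][j] holds a_ij
    Q, R = np.linalg.qr(E)                 # thin QR factorization, Eq. (recursion-qr)
    Rinv = np.linalg.inv(R)                # m is small, so direct inversion is fine
    m = E.shape[1]
    g = np.zeros(m)
    for i in range(m):
        for l in range(m):
            for j in range(m):
                g[i] -= Rinv[l, j] * (Q[:, j] @ A[i][l]) / np.linalg.norm(E[:, i])
    return g

def advance(E, A, Dphi, D2phi):
    # Eqs. (recursion-e) and (recursion-a): push e_i and a_ij through phi,
    # with D2phi an n x n x n array, D2phi[r, p, q] = d^2 phi_r / dx_p dx_q
    m = E.shape[1]
    E_new = Dphi @ E
    A_new = [[np.einsum('rpq,p,q->r', D2phi, E[:, i], E[:, j]) + Dphi @ A[i][j]
              for j in range(m)] for i in range(m)]
    return E_new, A_new
\end{verbatim}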
The recursion involving Eq. \ref{eqn:recursion-g}-\ref{eqn:recursion-a} can be used to devise algorithms for differentiating the invariant, physical SRB measure $m_{\mathrm{SRB}}$, which is guaranteed to exist in uniformly hyperbolic systems. In general, $m_{\mathrm{SRB}}$ is not absolutely continuous everywhere on the manifold; only its conditional measures along unstable manifolds are absolutely continuous. The SRB density gradient $g_{\mathrm{SRB}}$, defined as a directional derivative of the conditional SRB density on the unstable manifold, is a byproduct of the integration by parts (analogous to Eq. \ref{eqn:intro2}), preceded by the disintegration of $m_{\mathrm{SRB}}$ \cite{chandramoorthy-s3,sliwiak-1d}. Thus, if a direction of the unstable manifold is given, the recursive formula presented in this section might be further developed to compute the SRB density gradient, defined on a manifold of any dimension, along a trajectory initiated at an $m_{\mathrm{SRB}}$-typical point.
\subsection{Example: evolution of a 1D manifold}\label{sec:recursion-example}
In this section, we demonstrate the application of the recursive scheme for the density gradient $g^k$. For this purpose, let us re-consider the Lorenz '63 oscillator, defined by System \ref{eqn:general-lorenz}. In particular, we define $\varphi$ such that it represents numerical time integration of System \ref{eqn:general-lorenz} for a period of $\Delta t$, i.e., $u(t+\Delta t) = \varphi(u(t))$ with $u(t)$ being the solution of the system at time $t$. Let us consider a 1D smooth manifold embedded in $\mathbb{R}^3$ described by the following chart $x^0(c) = [c,c,28]^T$, $-5\leq c\leq 5$. Note $x^0(c)$ coincides with the black solid boundary of the surface depicted in Figures \ref{fig:general-mesh}-\ref{fig:general-mesh-ext}. The next step is to numerically compute a sequence of charts $\{x^0(c), x^1(c), x^2(c), ...\}$ by applying $\varphi$ recursively, i.e., $x^{k+1}(c) = \varphi(x^{k}(c))$. Our aim is to compute $g^k = \partial_s\log\rho^k$, where $\rho^k$ is the density implied by the chart $x^k(c)$. The operator $\partial_s$ denotes a generic directional derivative along the curve in the direction of increasing $c$. The formulas derived in Section \ref{sec:recursion-derivation} give us all the necessary tools to compute $g^k$ along the trajectory defined by $\varphi$. In this example, however, we consider the simplest case, $m=1$. Eq. \ref{eqn:recursion-g}-\ref{eqn:recursion-a} can be dramatically simplified, because $\nabla_{\xi} x = dx/dc$ is just a vector, and thus the QR factorization is equivalent to normalizing that vector. Let $e = dx/dc = \|dx/dc\|\,q$ and $a = d^2x/dc^2$; therefore,
\begin{equation}
\label{eqn:recursion-g-1D}
g^k = - \frac{q^k\cdot a^k}{\|e^k\|^2},
\end{equation}
\begin{equation}
\label{eqn:recursion-e-1D}
e^{k+1} = D\varphi^k\,e^k,\;\;\;q^k = \frac{e^k}{\|e^k\|},
\end{equation}
\begin{equation}
\label{eqn:recursion-a-1D}
a^{k+1} = D^2\varphi^k(e^k,e^k) + D\varphi^k\,a^k.
\end{equation}
Note Eq. \ref{eqn:recursion-g-1D}-\ref{eqn:recursion-a-1D} can be derived directly using Eq. \ref{eqn:curve3} and the chain rule for parametric derivatives.
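To make the $m=1$ case concrete, the sketch below evolves $e$, $a$, and $g^k$ along the curve, reusing the Lorenz helpers \texttt{f}, \texttt{Df}, and \texttt{D2f\_vv} from the earlier sketch. For simplicity of differentiation we take $\varphi$ to be a single forward-Euler step, so that $D\varphi = I + \Delta t\, Df$ and $D^2\varphi = \Delta t\, D^2f$ hold exactly; the second-order Runge-Kutta map used in the text can be differentiated in the same manner.
\begin{verbatim}
import numpy as np

DT = 0.002
I3 = np.eye(3)

def phi_step(x, e, a):
    # forward-Euler map phi(u) = u + dt f(u); Jacobian I + dt Df(x), Hessian
    # bilinear form dt D2f(x)(e, e), cf. Eqs. (recursion-e-1D) and (recursion-a-1D)
    Dphi = I3 + DT * Df(x)
    e_new = Dphi @ e
    a_new = DT * D2f_vv(e) + Dphi @ a
    return x + DT * f(x), e_new, a_new

def g_1d(e, a):
    # Eq. (recursion-g-1D): g = -(q . a)/||e||^2 with q = e/||e||
    return -(e @ a) / np.linalg.norm(e) ** 3

# march from x^0(c) = [c, c, 28]^T with e^0 = [1, 1, 0]^T and a^0 = 0,
# recording g_1d(e, a) at the desired time steps k
\end{verbatim}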
How does this example differ from the one presented in Section \ref{sec:general-example}? There, we used a chart $x_s(\xi):\mathbb{R}^2\to\mathbb{R}^3$, $\xi=[c,t]^T$, which defined a two-dimensional manifold. The rate of change of $x(\xi)$ in the $t$-direction was determined by the Lorenz '63 oscillator (System \ref{eqn:general-lorenz}). Here, using the iterative procedure, we generate a family of 1D manifolds $x^k(c)$. The evolution of these curves (in the geometric sense) is determined by $\varphi$, which is in fact a discrete version of System \ref{eqn:general-lorenz}. Thus, if we generate infinitely many such curves and $\Delta t\to 0$, we effectively obtain the same surface as the one shown in Figure \ref{fig:general-mesh}. Intuitively, the density $\rho_s$ implied by $x_s(\xi)$ measures the number of points mapped from a uniform distribution per unit surface area. Likewise, the density $\rho^k$, implied by the chart $x^k(c)$, measures the number of points mapped from a uniform distribution per unit curve length. Since we use the same discretization scheme to integrate the differential equations, the localization of points obtained in the computation of the surface from Section \ref{sec:general-example} and, here, in the evolution of curves is exactly the same. However, the density $\rho^k$ does not equal the marginal distribution of $\rho_s$ at $t = k\,\Delta t$ (assuming uniform discretization of time). In the case of the surface example, the value of the density function reflects the densification of points, mapped from a uniform distribution, in both the $t$ and $c$ directions. In the latter example, the density is determined only by the localization of points along the evolving curve. Figure \ref{fig:recursion-fd} illustrates the density gradient $g^k$ along the evolving curve, recorded at three different time steps $k$. We observe $g^k=0$ at $k=0$, which is a consequence of the choice of the uniformly distributed initial condition. Due to the symmetric geometry of $M^k$, defined by the Lorenz '63 oscillator at $t\in[0,0.4]$, the density gradient features symmetric behavior with respect to the origin of the $g^k(c)$-vs.-$c$ relation.
\begin{figure}
\centering
\includegraphics[width=1.\textwidth]{figures/iterative_fd_vs_exact.png}
\caption{Density gradient function computed using the recursion involving Eq. \ref{eqn:recursion-g-1D}-\ref{eqn:recursion-a-1D} at three different time steps $k = t/\Delta t$. The finite difference approximation is generated using the approach described in Section \ref{sec:curves}.}
\label{fig:recursion-fd}
\end{figure}
\section{Conclusions and future work}\label{sec:conclusion}
A Monte Carlo integration scheme applied to a highly oscillatory function might be remarkably expensive. The computational cost, however, can be dramatically reduced by integrating the original formulation by parts. Such treatment gives rise to a new quantity, i.e., a directional derivative of the logarithm of the density implied by a chart describing the integration domain. The computation of that derivative, which we call the {\em density gradient}, requires knowledge of the first and second derivatives of the chart with respect to the domain parameterization. If the domain manifold evolves according to some diffeomorphism $\varphi$, the calculation of the density gradient along a trajectory requires solving a collection of first- and second-order tangent equations, involving both the Jacobian and Hessian of $\varphi$. The number of these equations is respectively proportional to $m$ and $m^2$, where $m$ is the dimension of the manifold.
The formulas derived in this paper are an important step toward constructing generalizable algorithms for the {\em SRB density gradient}. This quantity plays a major role in the sensitivity analysis of uniformly hyperbolic systems, including many popular chaotic equations. Using the recursive formula for the density gradient along a trajectory defined by $\varphi$ and the definition of the SRB measure, one can potentially devise a trajectory-driven procedure for the SRB density gradient. This is in fact the subject of the authors' ongoing investigation.
\section*{Acknowledgments}
This work was supported by Air Force Office of Scientific Research Grant No. FA8650-19-C-2207 and U.S. Department of Energy Grant No. DE-FOA-0002068-0018.
\bibliographystyle{elsarticle-num-names}
|
2,869,038,156,703 | arxiv | \section{Introduction}
The Galactic Center (GC) is an arena for astrophysical phenomena unlike any other in our galaxy. The inner parsec alone is packed with gas streams, dark matter, a puzzling young massive stellar population, and remnants of a long history of star formation, all encircled by a dusty circumnuclear disk \citep{Genzel2010}. In the middle of this is a supermassive black hole (hereafter Sgr~A$^*$) that typically emits well below Eddington, though with flares that occasionally reach at least into hard X-rays \citep{Barriere2014}.
It is easy to imagine substantial concentrations of photon emission and energetic particles within this region. Much of the bolometric luminosity has now been identified as originating from the aforementioned massive stars that in turn power the infrared output from dust in the circumnuclear disk (e.g., \citealt{Davidson1992,Krabbe1995,Genzel2010}). The many matters of central import left to be resolved span the spectrum of photons, cosmic rays, and neutrinos.
Our purpose is to address aspects related to those mysteries that plausibly involve very energetic particles. These include the origin of the gamut of gamma rays reaching to $>\,$10~TeV \citep{Nolan2012,Aharonian2004,Aharonian2009,Albert2006,Archer2014,Archer2016,Ahnen2016} and bright X-ray emission with non-thermal characteristics. Such photons, if only for their prime location at the center of the Galactic halo, are of great interest; a prime example is the significant excess of gamma rays at $\sim\! 10$~GeV that has produced considerable excitement (e.g., \citealt{Abazajian2014,Daylan2014,Calore2014,Ajello2016}).
Our focus here is not on providing yet another explanation for such anomalies (not entirely anyway), but rather to better understand the behavior of high-energy particles starting at the Center --- electrons and gamma rays in particular --- via an improved description of the relevant conditions in this unique environment. For example, recent high spatial resolution infrared data have revealed structures within the central parsec. These imply a photon background much denser than typically encountered in the Galaxy, with variations in the amplitude of each component throughout this region.
A population of electrons, even if their velocity distribution is isotropic, will thus encounter anisotropic photon backgrounds. Since head-to-head scatterings result in more energy transfer, the resulting inverse Compton spectrum thus depends on the direction to the observer. Moreover, gamma rays produced via this or other processes can in turn be attenuated by interacting with a background photon to produce an electron-positron pair, the probability of which is dependent on the path taken to the telescope.
We construct a phenomenological energy and angle dependent photon field in the central parsec based on recent infrared data to achieve a basic agreement with the measured broadband spectrum and morphology of the various emissions. This is used to better describe the inverse Compton scattering and $\gamma \gamma \!\rightarrow\! e^+ e^-$ extinction, which have a similar dependence on the geometry of the photon background.
We couple these with a convenient method for calculating time-evolved electron spectra in examining several topics of recent interest. These include the diffuse hard X-ray emission extending to $>\,$40~keV discovered by {\it NuSTAR} throughout this region that cannot be simply extrapolated from sources prevalent at lower energies \citep{Perez2015,Mori2015}. We discuss possible attributions, including synchrotron radiation from $\gtrsim\,$100~TeV electrons, and connections to gamma rays.
We also consider contributions from pulsar electrons to the GeV signal seen from the Galactic Center by {\it Fermi} \citep{Acero2015}. \citet{Kistler2015} extends these techniques in detailing potential TeV gamma-ray signatures of the pulsar wind nebula (PWN) G359.95--0.04 situated at a projected distance of 0.3~pc from Sgr~A$^*$ \citep{Wang2006,Muno2008}.
\begin{figure}[t!]\vspace*{-0.15cm}
\hspace*{-0.1cm}\vspace*{-0.15cm}
\includegraphics[width=1.02\columnwidth,clip=true]{GCpc3dp}
\caption{An illustration of the geometry of photon emission components from the inner parsec of the Milky Way used in constructing the photon field used throughout this paper, labelled by temperature corresponding to Table~\ref{tab:params}. The line-of-sight position of PWN G359 is referenced with the blue cone.\\
\label{GCpc}}
\end{figure}
\section{A Portrait of Galactic Center Backgrounds}
The cluster of massive stars at the Galactic Center provides $\gtrsim\! 10^7 \, L_\odot$ of UV photons that drive emission over a broad range of wavelengths. While UV radiation can be effectively upscattered by GeV electrons, for TeV electrons scattering is suppressed due to the energy dependence of the Klein-Nishina cross section so that infrared emission is their most relevant inverse Compton (IC) target. Since this cross section depends on the angle between electron and photon, with head-on scattering resulting in a photon with higher energy \citep{Jones1968}, it is of interest to understand the directional variation of the photon field beyond the integrated intensity.
Constructing a first principles model of the energy/angle dependent photon field in the GC would itself be a tremendous achievement. We rather content ourselves with a satisfactory phenomenological background based on the most recent data. For easy reference, the component parameters are summarized in Table~\ref{tab:params} and the layout illustrated in Fig.~\ref{GCpc}.
{\it Herschel} has now resolved cold dust in the circumnuclear disk (CND) in the FIR from $70\!-\!500\,\mu$m \citep{Etxaluze2011,Goicoechea2013}. \citet{Etxaluze2011} also utilized {\it ISO}-LWS data from $46\!-\!180\,\mu$m, which has less angular resolution, to fill in flux from warmer dust. {\it SOFIA}, with shorter wavelength coverage ($19.7\!-\!37.1\,\mu$m) and sharper resolution, was used to resolve warmer locations of the inner CND in greater detail by \citet{Lau2013}.
We describe these data using two separate rings: one with $T\!=\!90\,$K, $L_{90} \!=\! 2 \times 10^6~L_\odot$, a major radius of $R_{90} \!=\! 1.4\,$pc, and minor radius of $r_{90} \!=\! 0.2\,$pc; the other with $T \!=\! 40\,$K, $L_{40} \!=\! 2 \!\times\! 10^5~L_\odot$, $R_{40} \!=\! 1.7\,$pc, and $r_{40} \!=\! 0.3\,$pc. The inclination follows the orientation derived in \citet{Lau2013}. We assume optically thin emission that is uniform throughout the volume with a blackbody spectrum
\begin{equation}
\frac{dN_i}{d\epsilon_\gamma} = \frac{1}{\pi^2 (\hbar c)^3} \frac{\epsilon_\gamma^2}{e^{\epsilon_\gamma/k_B T_i}-1}
\,.
\label{BB}
\end{equation}
This is not formally correct, since optically thin dust has a modified blackbody form with an emissivity $\propto\!\nu^\beta$ and $\beta \!\lesssim\! 2$ that results in a steeper long wavelength tail (e.g., \citealt{Draine2003}). However, we compensate for this by choosing values for $T$ and $L$ to match the spectral peak for dust of a given temperature (and typically another component becomes more important in the tails).
{\it SOFIA} images display warmer emission nearer the GC \citep{Lau2013}, mostly coinciding with the ionized gas streamers seen in radio (e.g., \citealt{Zhao2010}). In principle, one can begin from the \citet{Zhao2009} model of Keplerian gas stream orbits to construct a more elaborate model accounting for heating from the central cluster. We here assume emission with $T\!=\!120\,$K and $L_{120} \!=\! 1.5\!\times\!10^6~L_\odot$ and approximate the multiple streams with a uniform sphere of radius $R_{120} \!=\! 0.75\,$pc. The extinction corrected {\it ISO}-SWS spectrum from \citet{Fritz2011}, extending from $2.6\!-\!26\,\mu$m and covering an extended inner portion of the central parsec, as well as radio line measurements (e.g., \citealt{RequenaTorres2012,Mills2013,Smith2014}) also suggest a warmer component that we ascribe to the same volume with $T\!=\!250\,$K and $L_{250} \!=\! 2\!\times\!10^6~L_\odot$.
\begin{deluxetable}{rrcc}[t!]
\tabletypesize{\scriptsize}
\tablecaption{\label{tab:params}}
\tablewidth{\columnwidth}
\tablehead{\colhead{$T [K]$} & \colhead{$L~[L_\odot]$} & \colhead{$R$ [pc]} & \colhead{$r$ [pc]} }
\startdata
\hline \vspace{-0.2cm}\\
35000 & $20\!\times\!10^6$ & 0.25 & --- \\
3500 & $30\!\times\!10^6$ & --- & --- \\
250 & $2\!\times\!10^6$ & 0.75 & --- \\
120 & $1.5\!\times\!10^6$ & 0.75 & --- \\
90 & $2\!\times\!10^6$ & 1.4 & 0.2 \\
40 & $0.2\!\times\!10^6$ & 1.7 & 0.3 \\
2.73 & CMB & --- & --- \\
\hline\vspace{-0.3cm}
\enddata
\tablecomments{Properties of the GC radiation components used here. $R$ refers to the radius of a sphere or major radius of a ring, $r$ to a ring minor radius.\\}
\end{deluxetable}
The IR data are consistent with reprocessing of a fraction of the incident UV flux from a $T \!\approx\! 35000\,$K, $L_{35000} \!\approx\! 2\!\times\!10^7\,L_\odot$ cluster of massive stars at the GC. \citet{Stostad2015} and \citet{FeldmeierKrause2015} infer a cutoff in the surface brightness by $\sim\!0.5\,$pc for this population, which we approximate with a sphere of $R_{35000} \!=\! 0.25\,$pc. \citet{Fritz2011} concludes that little of the line of sight extinction towards the GC arises from within the central parsec, which we will assume to hold for sight-lines not passing through the major dust structures. We also include a contribution from the much more extended old GC stellar component, using the radial profile from \citet{Fritz2014}, with a 3500~K spectrum normalized to $3 \!\times\! 10^7\,L_\odot$ within 100 arcsec, along with the uniform 2.73~K cosmic microwave background (CMB).
\section{Geometry of Emission}
Assuming uniform emissivity, the flux arriving from a given direction can be calculated using ray tracing techniques. For instance, we take an equation for a torus in Euclidean space, $f = (x^2 + y^2 + z^2 - r_1^2 - R_1^2)^2 + 4 R_1^2 (z^2 - r_1^2)$, insert the components for a ray ${\bold x}(t)$ starting from the electron position ${\bold r}_e$ and travelling in direction ${\bold p}$, ${\bold x}(t) = {\bold r}_e + {\bold p}\, t$, and solve for the roots to find the length through ring 1, $\ell_1(\theta,\phi)$. This involves solving a quartic equation, which can be done fairly quickly numerically. The procedure for spherical regions, whether the starting point is interior or exterior, is similar.
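As an illustration, the quartic for a ring can be assembled and solved numerically as in the Python sketch below, which assumes a torus with symmetry axis $z$, a unit direction vector, and a starting point outside the ring (so that the real roots pair up into chords):
\begin{verbatim}
import numpy as np

def chord_length_torus(r_e, p, R, r):
    # length of the ray x(t) = r_e + t p (||p|| = 1) inside a z-axis torus of
    # major radius R and minor radius r: substitute the ray into
    # (x^2 + y^2 + z^2 - r^2 - R^2)^2 + 4 R^2 (z^2 - r^2) = 0, a quartic in t
    xx, xp = r_e @ r_e, r_e @ p
    k = xx - r ** 2 - R ** 2
    coeffs = [1.0,
              4.0 * xp,
              2.0 * k + 4.0 * xp ** 2 + 4.0 * R ** 2 * p[2] ** 2,
              4.0 * k * xp + 8.0 * R ** 2 * r_e[2] * p[2],
              k ** 2 + 4.0 * R ** 2 * (r_e[2] ** 2 - r ** 2)]
    t = np.roots(coeffs)
    t = np.sort(t[np.abs(t.imag) < 1e-9].real)   # keep (numerically) real roots
    t = t[t > 0.0]                               # forward intersections only
    # for an exterior origin the crossings pair up into one or two chords
    return sum(t[i + 1] - t[i] for i in range(0, len(t) - 1, 2))
\end{verbatim}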
To arrive at the energy density at a given position from each component, $u_i$, we do this many times en route to integrating over all angles
\begin{equation}
u_i = \frac{L_{i}}{4\pi c\, V_i} \int d\Omega\, \ell_i(\theta,\phi)
\,,
\label{ui}
\end{equation}
with $V_i$ the component volume. Each spectral energy distribution is shown in Fig.~\ref{SED} along with the CMB (at $\sim\!10^{-3}\,$eV).
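A Monte Carlo rendering of this angular integral is compact; the sketch below draws isotropic directions and averages a user-supplied chord-length function \texttt{ell} of the direction unit vector (e.g., a closure around \texttt{chord\_length\_torus} above). With $L_i$ in erg~s$^{-1}$, $V_i$ in cm$^3$, and $\ell$ in cm, $u_i$ comes out in erg~cm$^{-3}$:
\begin{verbatim}
import numpy as np

def energy_density(L_i, V_i, ell, n_dir=20000, seed=0):
    # Eq. (ui): u_i = L_i/(4 pi c V_i) * Int ell dOmega, estimated as
    # 4 pi <ell> over directions drawn uniformly on the unit sphere
    c = 2.998e10                                   # cm s^-1
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(n_dir, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # isotropic unit vectors
    mean_ell = np.mean([ell(di) for di in d])
    return L_i * mean_ell / (c * V_i)              # the 4 pi factors cancel
\end{verbatim}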
In Fig.~\ref{SED} we also compare to the oft-used modeled interstellar radiation field at the GC from \citet{Porter2006}. Since this model is constructed from stellar contributions over larger scales, it is indicative of contributions within the central parsec from outside. We see that our FIR energy density is larger by a factor of $\sim\! 10^3$ and so should remain dominant out to $\sim\!30\,$pc, corresponding to $\sim\!0.25^\circ$ (not accounting for any additional absorption). Other more explicit contributions include the Arches and Quintuplet stellar clusters, which have luminosities comparable to the central cluster \citep{Figer2008}, but are relatively distant. We thus assume these to be small in comparison to the local emission in what follows.
\begin{figure}[t!]
\includegraphics[width=1.01\columnwidth,clip=true]{GCbackHc}
\caption{Energy spectrum of background photons from our photon field. Shown are the components of Table~\ref{tab:params} at a distance from Sgr~A$^*$ of 0.3~pc and their sum ({\it solid line}). The {\it dashed line} shows the total background at 1~pc (in front and behind Sgr~A$^*$ are similar). The GC background of \citet{Porter2006}, designed to be valid over larger scales, is also shown (PMS; {\it dotted}).\\
\label{SED}}
\end{figure}
\section{Gamma-ray Attenuation}
Our first application is to the attenuation of gamma rays due to $\gamma \gamma \rightarrow e^+ e^-$ interactions on intervening photon backgrounds. The cross section depends on the relative angle with a gamma ray of energy $E_\gamma$ through $s \!=\! 2 E_\gamma \epsilon_\gamma (1-\cos{\theta})$ via $q \!=\! \sqrt{1-(2 m_e c^2)^2/s}$ as
\begin{equation}
\sigma_{\gamma \gamma}(s) \!=\! \frac{3}{4} \sigma_T \frac{(m_e c^2)^2}{s} \left[ (3\!-\! q^4) \ln \!\frac{1\!+\! q}{1\!-\! q} \!-\! 2q (2\!-\! q^2) \right]
\!,
\label{sigmapair}
\end{equation}
with $\sigma_T$ the Thomson cross section.
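For reference, Eq. \ref{eqn:sigmapair} translates directly into a short routine (a Python sketch, with $s$ in GeV$^2$ and our own constant names):
\begin{verbatim}
import numpy as np

SIGMA_T = 6.652e-25          # Thomson cross section, cm^2
ME_C2 = 0.511e-3             # electron rest energy, GeV

def sigma_gg(s):
    # Eq. (sigmapair); returns 0 below the pair threshold s = (2 m_e c^2)^2
    if s <= (2.0 * ME_C2) ** 2:
        return 0.0
    q = np.sqrt(1.0 - (2.0 * ME_C2) ** 2 / s)
    return 0.75 * SIGMA_T * ME_C2 ** 2 / s * (
        (3.0 - q ** 4) * np.log((1.0 + q) / (1.0 - q)) - 2.0 * q * (2.0 - q ** 2))
\end{verbatim}
The attenuation along a path then follows from folding this cross section with the angle-dependent photon number density, weighting each photon pair by the same $(1-\cos{\theta})$ factor that enters $s$.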
In Fig.~\ref{opa}, we show the result of integrating over two paths: one from the GC and a longer beam through the line of sight to PWN G359 (as denoted in Fig.~\ref{GCpc}) to 1~pc behind the GC. Considering photon number density above the pair threshold, the 90~K, 120~K, and 250~K fields are the most important targets. These are displayed for the latter case. To obtain the total extinction, we add the GC attenuation curve from \citet{Moskalenko2006}, which is based on an interstellar radiation field model describing the galaxy on larger scales (plus the CMB), so double counting relative to our curves should be minimal.
We also display for comparison attenuation from within the inner accretion flow of Sgr~A$^*$. We use spectra from \citet{Dexter2013} spanning from radio to IR (model 915h), with the simplifying assumption that this is spherical within a distance from the black hole of $3\, r_g$, with $r_g \!\simeq\! 6 \times 10^{11}$~cm, comparable to the IR emitting regions. We see that this can be more important for any TeV gamma rays arising from within this limited volume around the black hole.
\section{Electron Energy Loss Simply Stated}
We turn our attention to describing populations of electrons in the central parsec that can upscatter the above photon field into gamma rays. We focus on energy spectra, not attempting to fully describe source morphology (though we remark on this later), evolving an injection spectrum with synchrotron and inverse Compton losses over a specified duration. As far as X-rays from synchrotron are concerned, we will see that only the past few decades are relevant due to rapid cooling.
\begin{deluxetable}{rccc}[t!]
\tabletypesize{\scriptsize}
\tablecaption{\label{tab:params2}}
\tablewidth{\columnwidth}
\tablehead{\colhead{$T [K]$} & \colhead{$u_{\rm BB}$ [$10^{-9}\,$GeV~cm$^{-3}$]} & \colhead{$u_{i}/u_{\rm BB}$ (0.3~pc)} & \colhead{$u_{i}/u_{\rm BB}$ (1~pc)} }
\startdata
\hline \vspace{-0.2cm}\\
35000 & $7.1\!\times\!10^{15}$ & $2.5\!\times\!10^{-11}$ & $1.9\!\times\!10^{-12}$ \\
3500 & $7.1\!\times\!10^{11}$ & $2.1\!\times\!10^{-8}$ & $1.3\!\times\!10^{-8}$ \\
250 & $1.8\!\times\!10^{7}$ & $3.6\!\times\!10^{-4}$ & $8.3\!\times\!10^{-5}$ \\
120 & $9.8\!\times\!10^{5}$ & $5.1\!\times\!10^{-3}$ & $1.2\!\times\!10^{-3}$ \\
90 & $3.1\!\times\!10^{5}$ & $2.3\!\times\!10^{-3}$ & $4.0\!\times\!10^{-3}$ \\
40 & $1.2\!\times\!10^{4}$ & $3.9\!\times\!10^{-3}$ & $2.0\!\times\!10^{-3}$ \\
2.73 & $0.26$ & 1 & 1 \\
\hline\vspace{-0.3cm}
\enddata
\tablecomments{Energy density normalizations of the GC radiation components.\\}
\end{deluxetable}
\begin{figure}[b!]
\hspace*{-0.3cm}
\includegraphics[width=1.05\columnwidth,clip=true]{opaGCs}
\caption{Gamma-ray attenuation due to our background components from a location 1~pc behind the GC ({\it dashed}), the Galactic model of \citet{Moskalenko2006} (MPS; {\it dotted}), and their combination ({\it thick solid}). Also shown is the combined total from the GC position ({\it thin solid}), compared to attenuation within the inner accretion flow of Sgr~A$^*$ due to mm--IR emission ({\it dotted}).
\label{opa}}
\end{figure}
Use of blackbody spectra allows for standard inverse Compton loss methods (dusty spectra will be examined elsewhere). This can be done more or less exactly, although the resulting solution is rather cumbersome. We rather examine first the form of the energy loss rate in the Thomson limit
\begin{equation}
\left.\frac{dE_e}{dt}\right\vert_{\rm T} = - \frac{4}{3} \, \sigma_T \, c \left(\frac{E_e}{m_e c^2}\right)^{\!2} u_{\rm BB}
\,,
\label{dEt}
\end{equation}
where $E_e$ is the electron energy, $m_e$ the electron mass, and $u_{\rm BB}$ the blackbody energy density for a given $T$, while in the extreme Klein-Nishina regime \citep{Blumenthal1970},
\begin{equation}
\left.\frac{dE_e}{dt}\right\vert_{\rm KN} \! \!=\! - \frac{\sigma_T}{16} \frac{(m_e k_B T c)^2}{\hbar^3} \left( \ln 4 \kappa_e \!-\! 1.981 \right) \!,
\label{dEkn}
\end{equation}
where $\kappa_e \!=\! E_e k_B T/(m_e c^2)^2$. To obtain $dE_e/dt\vert_{\rm IC}$ over the entire energy range, we find a convenient interpolation valid to $\sim\!1$\% below the KN limit,
\begin{equation}
\! \!\! \left.\frac{dE_e}{dt}\right\vert_{\rm H} \!=\! -b_{\rm H} \kappa_e
\left[\left(\frac{\kappa_e}{\kappa_1} \right)^{\!\!A \xi} \!\!+\! \left(\frac{\kappa_e}{\kappa_1} \right)^{\!\!B \xi}
\!\!+\! \left(\frac{\kappa_2}{\kappa_1} \right)^{\!\!B \xi} \!\! \left(\frac{\kappa_e}{\kappa_2} \right)^{\!\!C \xi} \right]^{1/\xi} \!\!\!,
\label{hasanian}
\end{equation}
with $b_{\rm H} \!=\! 3.87 \!\times\! 10^{19}(k_B T)^2 \,$GeV$^{-1}$s$^{-1}$, $A \!=\! 1$, $B \!=\! -0.063$, $C \!=\! -0.855$, $\kappa_{1} \!=\! 0.065$, $\kappa_{2} \!=\! 4.16$, $\xi \!=\! -0.815$, and in which energy is given in terms of GeV.
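In code, the interpolation reads as follows (energies in GeV throughout; the routine is meant only below the extreme-KN regime, where Eq. \ref{eqn:dEic} switches to Eq. \ref{eqn:dEkn}):
\begin{verbatim}
import numpy as np

A, B, C = 1.0, -0.063, -0.855          # fit constants of Eq. (hasanian)
K1, K2, XI = 0.065, 4.16, -0.815
ME_C2 = 0.511e-3                       # GeV

def dEdt_ic_bb(E_e, kT):
    # interpolated blackbody IC loss rate (GeV/s), kappa_e = E_e kT/(m_e c^2)^2;
    # reduces to the Thomson rate, Eq. (dEt), for kappa_e << 1
    kappa = E_e * kT / ME_C2 ** 2
    b_h = 3.87e19 * kT ** 2            # GeV^-1 s^-1
    bracket = ((kappa / K1) ** (A * XI) + (kappa / K1) ** (B * XI)
               + (K2 / K1) ** (B * XI) * (kappa / K2) ** (C * XI))
    return -b_h * kappa * bracket ** (1.0 / XI)
\end{verbatim}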
Now we combine Eqs.~(\ref{dEkn}) and (\ref{hasanian}) as
\begin{equation}
\!\! \left.\frac{dE_e}{dt}\right\vert_{\rm IC} \! \!= \!
\left.\frac{dE_e}{dt}\right\vert_{\rm H} \Theta[10^{3.3} \!-\! \kappa_e]
+\! \left.\frac{dE_e}{dt}\right\vert_{\rm KN} \Theta[\kappa_e \!-\! 10^{3.3}]
,
\label{dEic}
\end{equation}
where $\Theta$ are step functions (cf., \citealt{Delahaye2010}). This is to be evaluated for each distinct background component.
We also need to consider the rate of energy loss due to synchrotron radiation,
\begin{equation}
\left.\frac{dE_e}{dt}\right\vert_{\rm sync} = - \frac{4}{3} \, \sigma_T \, c \left(\frac{E_e}{m_e c^2}\right)^2 u_B
\,,
\label{dEtsync}
\end{equation}
for magnetic field energy density $u_B \!=\! B^2/8\pi$. Adding this to the sum of the IC loss terms, we arrive at the total losses
\begin{equation}
b_e(E_e) = - \left.\frac{dE_e}{dt}\right\vert_{\rm sync} - \sum\limits_i \frac{u_i}{u_{\rm BB}} \left.\frac{dE_e}{dt}\right\vert_{{\rm IC,}\,i}
\,,
\label{dEtot}
\end{equation}
where each IC term is scaled by the ratio of the energy density of the photon background $u_i$ to the energy density of a pure blackbody $u_{\rm BB}$ for each $T_i$ (see Table~\ref{tab:params2}).
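Assembling the pieces, a sketch of Eq. \ref{eqn:dEtot} using the IC routine \texttt{dEdt\_ic\_bb} above might read (with \texttt{components} a list of $(k_B T_i,\, u_i/u_{\rm BB})$ pairs taken from Table \ref{tab:params2}):
\begin{verbatim}
import numpy as np

SIGMA_T, C_LIGHT = 6.652e-25, 2.998e10   # cm^2, cm s^-1
GEV_PER_ERG = 624.15

def b_e(E_e, B_gauss, components):
    # Eq. (dEtot): total (positive) loss rate in GeV/s
    u_B = B_gauss ** 2 / (8.0 * np.pi) * GEV_PER_ERG              # GeV cm^-3
    sync = (4.0 / 3.0) * SIGMA_T * C_LIGHT * (E_e / ME_C2) ** 2 * u_B
    ic = sum(-dilution * dEdt_ic_bb(E_e, kT) for kT, dilution in components)
    return sync + ic
\end{verbatim}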
In Fig.~\ref{bEfig}, we show the cooling rate, $b_e(E_e)/E_e$, for each component of the photon field at a distance of 1~pc from Sgr~A$^*$ and for two different choices of $B$ within the range discussed in \citet{Kistler2015} related to observations of the GC magnetar SGR J1745-29 \citep{Eatough2013}. These demonstrate the change in relative importance of IC versus synchrotron as $B$ is varied as well as the KN suppression via the downturn in the IC curves, e.g., for $E_e \!\gtrsim\! 10$~TeV even the CMB is more relevant than the UV background.
\begin{figure}[t!]\vspace*{-0.2cm}
\hspace*{-0.2cm}
\includegraphics[width=1.02\columnwidth,clip=true]{GCEbH}\vspace*{-0.15cm}
\caption{Rate of electron cooling due to synchrotron radiation for two field strengths ({\it dashed lines}), summed with inverse Compton losses on each background component of Fig.~\ref{SED} at a distance from Sgr~A$^*$ of 1~pc ({\it thin solid lines}; as labeled) to give the total loss rates ({\it thick solid lines}).\\
\label{bEfig}}
\end{figure}
\section{Evolving the Electron Spectrum}
We are interested in the present population of electrons, which requires evolving the spectrum injected over all time. To do so, we first determine the time it takes for an electron with initial energy $E_i$ to reach a final energy $E_f$ as
\begin{equation}
t_l(E_i,E_f) = \int_{E_i}^{E_f} -\frac{dE}{b_e(E)}
\,.
\label{tloss}
\end{equation}
In practice, we take a very high energy, $E_h \!=\! 10^8\,$GeV, and evaluate $t_h(E_f) \!=\! t_l(E_h,E_f)$. We then construct the inverse function $E_t[t_h(E_f)]$ numerically. This provides a convenient way to relate initial and final energies through the difference of their values of $t_h$. Now we integrate the source injection spectrum $dN_e/dE$ from a time $\tau$ up to today
\begin{equation}
\frac{dN_e}{dE_0} = \int_{0}^{\tau} dt\, \left.\frac{dN_e}{dE\,dt}\right\vert_{E_t[t_h(E)-t]} \frac{b_e(E_t[t_h(E)-t])}{b_e(E)}
\,.
\label{spec}
\end{equation}
This maps the source spectrum at each $t$ to the present time accounting for all relevant energy losses.
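Numerically, Eqs. \ref{eqn:tloss} and \ref{eqn:spec} reduce to one quadrature for the cooling clock $t_h$ and an interpolated inversion. A compact sketch (energies in GeV, times in seconds, with a vectorized injection spectrum \texttt{inj} and loss rate \texttt{b\_e} supplied by the caller) is:
\begin{verbatim}
import numpy as np

def evolved_spectrum(E, inj, b_e, tau, E_h=1e8, n_grid=4000, n_t=400):
    # Eqs. (tloss) and (spec): present-day spectrum dN/dE_0 on the energies E
    # for a time-independent injection inj(E) active for a duration tau
    Eg = np.geomspace(0.99 * E.min(), E_h, n_grid)
    inv_b = 1.0 / b_e(Eg)
    I = np.concatenate(([0.0],
        np.cumsum(0.5 * (inv_b[:-1] + inv_b[1:]) * np.diff(Eg))))
    t_h = I[-1] - I                                  # time to cool from E_h to Eg
    E_of_th = lambda t: np.interp(t, t_h[::-1], Eg[::-1])   # numerical inverse E_t
    th_of_E = lambda e: np.interp(e, Eg, t_h)
    dNdE = np.zeros_like(E, dtype=float)
    for k, Ek in enumerate(E):
        t = np.linspace(0.0, min(tau, 0.999 * th_of_E(Ek)), n_t)  # keep E_i < E_h
        Ei = E_of_th(th_of_E(Ek) - t)                # injection energies E_t[t_h(E)-t]
        dNdE[k] = np.trapz(inj(Ei) * b_e(Ei) / b_e(Ek), t)
    return dNdE
\end{verbatim}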
\section{Synchrotron and Inverse-Compton Production}
We will consider a few illustrative problems of current interest, both in limits where synchrotron is dominant and where inverse Compton losses are clearly more important. In evaluating the expected spectra of synchrotron and inverse Compton photons to compare with data, we assume that the electron population has a locally isotropic velocity distribution and that relativistic beaming effects are not relevant, though we do consider scattering off of anisotropic photon backgrounds.
Synchrotron can be elegantly calculated in the textbook manner using Bessel functions (e.g., \citealt{Rybicki1979}). Since we are interested in an isotropic electron distribution, we instead follow the simpler approach in \citet{Aharonian2010}, with
\begin{equation}
\frac{dN_\gamma}{dE_\gamma} = \frac{\sqrt{3}}{2 \pi} \frac{e^3 B}{m_e c^2 \hbar E_\gamma} G(x) \,e^{-x} \,,
\label{synch}
\end{equation}
where $x \!=\! E_\gamma/E_c$, with $E_c \!=\! 3 e \hbar B E_e^2/(2 m_e^3 c^5)$ the critical synchrotron energy, and $G(x)$ is an interpolation close to the exact solution and faster to compute. This is convolved with the present electron spectrum $dN_e/dE_0$.
Inverse Compton scattering becomes more involved, since we aim to examine the bulk angular dependence of central parsec photon backgrounds rather than assuming isotropy. \citet{Khangulyan2014} provides a treatment convenient for this purpose (see also \citealt{Jones1968,Moskalenko2000,Zdziarski2013}). For a mono-directional, blackbody photon distribution
\begin{equation}
\frac{dN_{\rm ani}}{dE_\gamma} \!=\! \frac{3 \sigma_T \,m_e^2 c}{4 \pi^2 (\hbar c)^3} \frac{(k_B T)^2}{E_e^2} \! \left[ \frac{z^2}{2(1-z)} F_1(y) \!+\! F_2(y) \right] \!,
\label{ani}
\end{equation}
with $z \!=\! E_\gamma/E_e$, $y \!=\! z (m_e c^2)^2/[2 (1-z) E_e k_B T (1 \!-\! \cos{\theta})]$. Here the photons arrive at an angle $\theta$ to the electron, with the gamma ray departing in the electron direction. Formulas for $F_1$ and $F_2$ are given in \citet{Khangulyan2014}, along with similar fitting equations $F_3$ and $F_4$ in case one is interested in using this technique to find the emission from an isotropic photon background, $dN_{\rm iso}/dE_\gamma$, e.g., the CMB.
\begin{figure*}[t!
\hspace*{-0.2cm}
\includegraphics[width=0.6\columnwidth,clip=true]{sickdb}\hspace*{-0.5cm}
\includegraphics[width=1.6\columnwidth,clip=true]{GC1pcNuS}
\caption{{\it Left:} Projected distribution of electrons with 100~TeV initial energies continuously injected for 10~yr propagating in a 0.1~mG random magnetic field.
{\it Right:} Synchrotron and inverse Compton spectra from hard electron models in a 0.1~mG field with exponential ({\it solid lines}) and power-law ({\it dotted}) spectral breaks. We show an approximate {\it NuSTAR} band and GC source TeV data from HESS \citep{Abramowski2016} and VERITAS \citep{Archer2016} for scale.\\
\label{stream}}
\end{figure*}
Using each angular-dependent photon field, we integrate from the source vantage point over angles with respect to the direction pointing at Earth to obtain the IC spectrum as
\begin{equation}
\frac{dN_i}{dE_\gamma} = \mathcal{E}_i \int_{E_\gamma}^{E_{\rm max}} dE_e \frac{dN_e}{dE_0} \int d\Omega\, \frac{dN_{\rm ani}}{dE_\gamma} \ell_i(\theta,\phi)
\,,
\label{ICspec}
\end{equation}
where $\mathcal{E}_i \!=\! L_i / (4\pi c\, u_{\rm BB} V_i)$. One notable difference from assuming a central source is that the scattering on FIR emission from the rings is seen to vary much less in space. We obtain fluxes $\varphi_i(E_\gamma)$ using a GC distance $d_{\rm GC} \!=\! 8.5\,$kpc.
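Schematically, Eq. \ref{eqn:ICspec} is a double quadrature over the electron spectrum and the sky as seen from the source. The sketch below leaves the kernel of Eq. \ref{eqn:ani} and the chord-length function as user-supplied callables and omits the overall normalization $\mathcal{E}_i$; the kernel is assumed to broadcast over the angular grid:
\begin{verbatim}
import numpy as np

def ic_spectrum(E_g, E_e, dNe_dE, kernel, ell, n_th=64, n_ph=64):
    # Eq. (ICspec) up to the normalization calE_i:
    # Int dE_e dNe/dE Int dOmega kernel(E_g, E_e, theta) ell(theta, phi)
    th = np.linspace(0.0, np.pi, n_th)
    ph = np.linspace(0.0, 2.0 * np.pi, n_ph)
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    L = np.vectorize(ell)(TH, PH) * np.sin(TH)    # chord length times dOmega weight
    out = np.zeros_like(E_g, dtype=float)
    for a, Eg in enumerate(E_g):
        good = E_e > Eg        # the scattered photon cannot outrun the electron
        ang = [np.trapz(np.trapz(kernel(Eg, Ee, TH) * L, ph, axis=1), th)
               for Ee in E_e[good]]
        out[a] = np.trapz(dNe_dE[good] * np.asarray(ang), E_e[good])
    return out
\end{verbatim}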
\section{Hard X-ray synchrotron and NuSTAR}
{\it NuSTAR} has recently discovered a diffuse hard X-ray flux reaching to $\gtrsim\!40\,$keV pervading the central parsecs \citep{Perez2015,Mori2015}. While this could very well be due to some new class of sources endemic to the GC, synchrotron radiation is a well understood means of photon production; while these X-ray energies are somewhat extreme, they are not terribly so, and GC magnetic fields are unusually strong.
As Fig.~\ref{bEfig} shows, at sufficiently high energies synchrotron dominates over IC. Examining the characteristic energy of synchrotron emission,
\begin{equation}
E_\gamma \sim 20\, \left(\frac{E_e}{20\,{\rm TeV}}\right)^{\!2} \left(\frac{B}{{\rm mG}}\right) \, {\rm keV}
\,,
\label{Echar}
\end{equation}
we see that hard X-rays can be the main product at these energies and field strengths.
Now, Fig.~\ref{bEfig} also shows that the cooling time (taking the inverse of the cooling rate) becomes quite short in this range, so one might expect X-ray emission to be limited to a small region around any such electron source.
However, as \citet{Giacinti2012} and \citet{Kistler2012} note, particles tend to propagate anisotropically at early times after injection from a fixed location, i.e., more quickly along the direction of the local magnetic field. Following the arguments in \citet{Kistler2012}, if the cooling time is shorter than the characteristic timescale to reach isotropic diffusion, we would expect synchrotron emission to illuminate a path dependent on the local field structure since the particles only possess large energies for a limited duration. If such a population is present in the GC, their bulk trajectories might be traceable by a hard X-ray telescope like {\it NuSTAR}.
To examine the plausibility of extended hard X-ray emission arising from electrons escaping a discrete source, we first consider the behavior of a population with initial energies of $\sim\!100\,$TeV. Using the methods described in \citet{Kistler2012} and \citet{Yuksel2012}, we show in Fig.~\ref{stream} ({\it left}) an example of a possible realization of this scenario. Here, we have injected 100~TeV electrons over a 0.1~pc radius volume in an isotropic random field configuration scaled to $B_{\rm rms} \!=\! 0.1\,$mG with a coherence length $l_c \!\approx\! 4\,$pc. We inject continuously for $\sim\! 10\,$yr, over which time the energy can decrease to $\sim\! 50$~TeV.
While more elaborate simulations are possible, accounting for a spectrum of injected particles and energy dependence of propagation, this serves to illustrate the basic picture if high-energy electrons are not confined and free to propagate with only the local field guiding them, which may well be predominantly along the Galactic plane. Alternatively, extended emission could arise from jet-like structures as seen reaching from some pulsars (e.g., IGR J11014--6103; \citealt{Pavan2014}).
\begin{figure*}[t!]
\hspace*{-0.2cm}
\includegraphics[width=2.12\columnwidth,clip=true]{GC1pcF}
\caption{Models in a 0.1~mG field to mimic a PWN relativistic Maxwellian, with $E_c \!=\! 25\,$GeV or $E_c \!=\! 250\,$GeV and durations $\tau \!=\! 10^3\,$yr and $10^4\,$yr yielding synchrotron (far left lines) and inverse Compton gamma rays ({\it darker lines:} isotropic IC at 1~pc; {\it lighter lines:} anisotropic IC 1~pc behind the GC). We include here the {\it Fermi} GC source 3FGL J1745.6--2859c \citep{Acero2015}.\\
\label{hardx}}
\end{figure*}
If electrons are capable of retaining high energies over such distances near the GC, we can consider the emission from a single source. We now calculate possible X-ray and gamma-ray fluxes using the methods described above. \citet{Kistler2012} claims that while a bulk anisotropy may be present, the local velocity distribution can still be fairly isotropic. In order to account for the hard X-ray spectrum seen by {\it NuSTAR}, a hard electron spectrum may be needed. We use a smoothly-broken power law with an exponential cutoff to describe the source spectrum
\begin{equation}
\frac{dN_e}{dE} = f_e
\left[\left(E/E_1\right)^{\alpha \eta} + \left(E/E_1\right)^{\beta \eta} \right]^{1/\eta} e^{-E/E_c}\,,
\label{fit}
\end{equation}
with $\alpha$ and $\beta$ the slopes, a break at $E_1$, cutoff energy $E_c$, and using $\eta \!=\! -10$ to give a sharp break. We assume a constant injection spectrum and luminosity over a duration $\tau \!=\! 1000\,$yr.
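For completeness, Eq. \ref{eqn:fit} in code form (a one-line sketch):
\begin{verbatim}
import numpy as np

def dNe_dE(E, f_e, alpha, beta, E1, Ec, eta=-10.0):
    # Eq. (fit): smoothly broken power law with an exponential cutoff
    return f_e * ((E / E1) ** (alpha * eta)
                  + (E / E1) ** (beta * eta)) ** (1.0 / eta) * np.exp(-E / Ec)
\end{verbatim}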
We assume $\alpha \!=\! -1$ here. Spectra as hard as this have been displayed recently in reconnection simulations (e.g., \citealt{Sironi2014}, \citealt{Guo2014}, \citealt{Werner2014}). This is also representative of any harder spectra, since an equilibrium $\sim\! E_e^{-2}$ electron spectrum would generically result from continuous injection and cooling, leading to an X-ray spectrum of $\sim\! E_\gamma^{-1.5}$. The spectral cutoff at low energies is not relevant here. We consider cases where the high-energy break is due to an exponential cutoff alone at $E_c \!=\! 1000\,$TeV, with luminosity $\mathcal{L}_e \!=\! 2 \!\times\! 10^{35}\,$erg~s$^{-1}$, or a change in index to $\beta \!=\! -2.5$ at $E_1 \!=\! 200\,$TeV with $E_c \!=\! 2000\,$TeV and $\mathcal{L}_e \!=\! 10^{35}\,$erg~s$^{-1}$.
Fig.~\ref{stream} ({\it right}) shows the X-ray and gamma-ray fluxes for a uniform 0.1~mG field compared to an approximated band for {\it NuSTAR}. For these hard spectra, there need not be bright radio emission. We also see that the KN suppression leads to gamma rays principally from electrons with energies lower than that yielding synchrotron in the {\it NuSTAR} range. In a weaker field, electrons would retain their energy longer. They would more easily travel large distances and, for the same photon field, emit more gamma rays. However, for a location beyond the central parsec, the photon background would be lower and the emission could remain below the HESS data.
As for where the electrons arise, the most likely culprit is a pulsar associated with G359 or some heretofore unknown young pulsar with a velocity too low or local conditions otherwise unfavorable to yielding a prominent cometary nebula (see \citealt{Kistler2015}). For this scenario we have assumed that the highest energy electrons are able to escape and freely propagate. The physical conditions that might permit this would depend on the nature of the source, whether one or both of a linear accelerator set up by magnetic reconnection or Fermi shock acceleration is operating, and the magnetic field structure. Excesses could be present close to the pulsar, where the field should be larger, or where a coherent PWN flow ends. This basic setup can also be applied to a population of hard X-ray sources, such as fainter PWNe due to a large number of active pulsars near the GC \citep{OLeary2015,OLeary2016}, which we defer to elsewhere.
\section{Pulsar Exhaust and Fermi}
Often one simply imposes a sharp break in the electron injection spectrum at some low energy (as we just assumed above). However, depending upon prevailing conditions, models that place the acceleration of particles at the termination shock in the pulsar wind can imply thermalization into a relativistic Maxwellian spectrum based on the bulk Lorentz factor of particles in the wind, with the shock energizing only some fraction of these into a power law component (e.g., \citealt{Amato2006,Sironi2013}).
If such an exhaust from the electron acceleration process is produced and goes somewhere, though, it should be emitting. We examine two possible outcomes using unbroken $\alpha \!=\! 2$ spectra, cut off at $E_c \!=\! 25\,$GeV (corresponding to a bulk pre-shock wind Lorentz factor $\Gamma \!\sim\!5 \times 10^4$ and pair multiplicity $\mathcal{M} \!\sim\! 10^5$) with present luminosity $\mathcal{L}_0 \!=\! 10^{36}\,$erg~s$^{-1}$, or at $E_c \!=\! 250\,$GeV ($\Gamma \!\sim\!5 \times 10^5$, $\mathcal{M} \!\sim\! 10^3$, and $\mathcal{L}_0 \!=\! 10^{35}\,$erg~s$^{-1}$).
Assuming a continuous luminosity, the equilibrium electron spectrum from this hard injected population will again tend toward $\sim\! E_e^{-2}$. While a fixed $\mathcal{L}_e$ is reasonable for X-rays and TeV gamma rays due to the short cooling times of the emitting particles, at lower energies the accumulated spectrum may be enhanced by the pulsar spin down history. We consider
\begin{equation}
\mathcal{L}_e(t) = \mathcal{L}_0 \left[\frac{1+(\tau-t)/\tau_p}{1+\tau/\tau_p} \right]^{-\frac{n+1}{n-1}} ,
\label{spindown}
\end{equation}
where $\tau$ is the pulsar age, $\tau_p$ is a characteristic spin down time, and we use the canonical dipole $n \!=\! 3$ \citep{Gaensler2006}, although measured values for very young pulsars are often less than this (e.g., \citealt{Livingstone2011}) which would imply a different evolutionary history. Our choices of $\tau_p \!=\! 10^3\,$yr and $\tau$, as well as $E_c$, are motivated to illustrate relations to gamma-ray data.
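The spin-down weighting of Eq. \ref{eqn:spindown} is equally brief; here $t=0$ is the endpoint at which $\mathcal{L}_e = \mathcal{L}_0$, the present value:
\begin{verbatim}
def L_e(t, L0, tau, tau_p, n=3.0):
    # Eq. (spindown): injected luminosity, normalized so that L_e(0) = L0;
    # n = 3 is the canonical dipole braking index
    return L0 * ((1.0 + (tau - t) / tau_p)
                 / (1.0 + tau / tau_p)) ** (-(n + 1.0) / (n - 1.0))
\end{verbatim}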
Fig.~\ref{hardx} shows the gamma-ray and synchrotron spectra assuming injection has occurred for $\tau \!=\! 10^3\,$yr or $10^4\,$yr. The distance is fixed to 1~pc with a 0.1~mG field, although we note that the lower loss rates at these energies would likely result in most gamma rays being produced beyond a nominal PWN. Increasing the injection duration has the effect of accumulating GeV electrons and pushing the sub-GeV gamma-ray flux upwards. The lighter IC lines show an enhancement due to assuming anisotropic IC from 1~pc behind Sgr~A$^*$.
This flux is compared to the {\it Fermi} source coincident with the Galactic Center, 3FGL J1745.6--2859c, using data points from the 3FGL source catalog \citep{Acero2015}, which roughly split the previous 2FGL GC source (\citealt{Nolan2012}; cf., \citealt{Chernyakova2011,Abazajian2014}) into two distinct sources. We also show the TeV data for scale, though one must keep in mind that there is a possible mismatch of spatial scales between the gamma-ray data sets.
Using a larger $\tau \!=\! 10^5\,$yr would decrease the energy at which particles accumulate to $\sim\,$100~MeV. The IC flux could be increased into the {\it NuSTAR} range and continued to {\it INTEGRAL} energies \citep{Belanger2006}. This is, though, a rather long duration to expect a high luminosity from a lone pulsar. To go farther back would also, considering typical densities in this region (e.g., \citealt{Ferriere2012,Linden2012,YusefZadeh2013}), necessitate accounting for ionization/Coulomb losses, which should overtake IC at some point and deplete the electron population at lower energies (see, e.g., Fig.~1 of \citealt{Hinton2007}).
The large number of massive stars in the GC implies an enhanced supernova rate and can lead to a typical interval between pulsar births of $\sim\!10^4$--$10^5\,$yr \citep{Dexter2014,Eatough2015,OLeary2015,OLeary2016}. Comparing the fluxes with varying injection periods shows the general behavior for a pulsar population. Relic electrons from inactive pulsars will no longer contribute gamma rays since high-energy particles have lost energy. For fixed luminosity, a higher spectral cutoff means fewer particles accumulating at lower energies with time (compare the two $E_c$ sets in Fig.~\ref{hardx}). So while the high-energy range is more sensitive to a combination of cutoff and age, the flux of softer gamma rays depends less on age than the total number of electrons injected.
One might hope to use synchrotron to track these GeV particles and constrain the morphology. Though the details will again depend upon the ambient magnetic field, as well as the spin down history of the pulsar, we can make a few rough estimates. An isotropic diffusion coefficient of $D \!\sim\! 10^{26}\,$cm$^2\,$s$^{-1}$ implies a distance scale of $(2D \tau)^{1/2} \!\sim\! 1\,$pc for $\tau \!=\! 10^3\,$yr. Generally, both $D$ and $\mathcal{L}_{\rm sync}$ depend on $B$, with the morphology of the emission depending upon the magnetic field configuration and the photon field geometry. We defer detailed examination of such variations to elsewhere.
\section{Discussion and Conclusions}
\label{concl}
The properties of the Galactic Center can lead to unusual phenomena. Consider if you will our two examples. The former examines extremely high-energy electrons, yet results mostly in photons emitted with much lower energies than those from our latter example that considers much lower energy electrons. The nominal setup and parameter values appear physically plausible while leading to fluxes of hard X-rays and GeV gamma rays near the observed levels, illustrating the additional lengths yet required to understand this zone and its surroundings at high energies.
We have used a simplified model for the photon field of this complicated region as a starting point for better understanding the production of gamma rays from energetic electrons. This also helps in determining the fate of the gamma rays, produced by whatever process imagined, that may be attenuated by the same photon backgrounds. This explores a tractable middle ground between assuming isotropic backgrounds and following photons from the level of known stars to the heating and emission of dust. The latter course is perhaps difficult, but not impossible (e.g., \citealt{Shcherbakov2014} considered the expected starlight background near the G2 object), and would aid in addressing the following additional implications.
\subsection{Electrons and TeV gamma rays}
In the hard X-ray range there is a paucity of backgrounds as compared to lower energies, so that emission might be attributable to synchrotron radiation even in a complex region. Evidence for electron acceleration to extremely high energies by pulsars includes the $\gtrsim\,$100~MeV flares from the Crab nebula ascribed to synchrotron from PeV electrons \citep{Abdo2011,Tavani2011,Arons2012,Cerutti2013} and signatures of multi-TeV electrons from pulsars in the solar neighborhood (see, e.g., \citealt{Yuksel2009,Kistler2009}). The pulsar wind nebula G359.95--0.04 \citep{Wang2006,Muno2008} suggests such processes are active near the GC, which may also be quite relevant to TeV gamma-ray data (see \citealt{Kistler2015} for greater detail).
Our photon field model also allows for examination of another distinct scenario involving TeV gamma rays. While the gamma-ray opacity along the sight lines examined in Fig.~\ref{opa} ended up not being overwhelming, this did not have to be the case. A larger young stellar flux and/or a larger fraction of dust reprocessing, as may have been present in the past or in more active extragalactic central parsec regions, could easily lead to a more substantial suppression \citep{Kistler2015b}.
Any TeV gamma-ray source in this region produces an extended distribution of electrons and positrons due to $\gamma \gamma \rightarrow e^+ e^-$ on the photon field. Comparing the {\it NuSTAR} and TeV energetics in Fig.~\ref{stream}, we see that these roughly coincide. Although our result suggests that such a process is not currently efficient in our GC, a sufficiently recent outburst of TeV gamma rays would have left an $e^\pm$ detritus still emitting synchrotron.
\subsection{More on Galactic Center Hard X-rays}
As a more general point regarding hard X-rays, while absorption is largely irrelevant \citep{Wilms2000}, the unusual gas streams in the central parsec could possess column depths sufficient to cause appreciable Thomson scattering. If so, models of the gas density can be compared to X-ray maps to examine variations in intensity to determine the relative geometry of the X-ray emission and estimate the gas column. This would help to clear up uncertainties over the nature of the CND, between high \citep{Christopher2005,Montero2009} and low \citep{RequenaTorres2012,Harada2015} inferred masses.
We also note that {\it NuSTAR} has detected non-thermal hard X-ray emission from the radio filament Sgr~A--E, suggesting that the spectrum could be accounted for via injection of electrons from an unknown PWN \citep{Zhang2014}. Comparing to the better resolved radio images of Sgr~A--E \citep{Ho1985,YusefZadeh1987,Morris2014}, we see that the tail of PWN G359 extrapolates back to this general vicinity. If related, this would imply a coherent structure of $\sim\! 10\,$pc, not unprecedented in the Milky Way (e.g., \citealt{Pavan2014}), just not obviously realizable near the GC. This would require a rather low field strength for electrons to retain their energy until they reach the larger fields in the filament.
\subsection{Moving Beyond the Center, Dark Matter, and Neutrinos}
We have focused on positions within the central parsec, since at larger distances the benefit of bright, compact infrared emission potentially producing an unusually large amount of IC losses in a small volume is lost. The rather generic pulsar wind parameters used lead to a flux within range of {\it Fermi} data and allow further room for accommodation. For instance, there may well be other pulsars in this area yielding GeV gamma rays, either pulsed or from a wind. For a local supernova rate of $\sim\! 10^{-4}\,$yr$^{-1}$ these lead to overlapping contributions in the {\it Fermi} range, with the burn-off of electrons due to the steep rate of losses simplifying matters at higher energies.
Beyond the incentives to understand the novel astrophysics at the Galactic Center, there are also the quests for dark matter and neutrinos. Of recent interest are claims of a significant excess of gamma rays at $\sim\! 1\!-\!10\,$GeV. This may or may not be related to dark matter, but it does seem to originate at the Center, so the IC scattering of electrons from annihilation or decay (cf., \citealt{Cholis2014}) is a direct application. Improved understanding of the mechanism behind GC gamma rays will directly affect the expected flux of neutrinos (e.g., \citealt{Crocker2005,Kistler2006}) and whether the PeV neutrino seen from the vicinity of the GC by IceCube \citep{Aartsen2013} has a Galactic origin \citep{Kistler2015c}.
On larger scales, there should also be energetic electrons present from these and other processes. While the concentrated UV emission most relevant in the central parsec will drop off rapidly, the old stellar component falls off less steeply, so its contribution to IC will become relatively more important and may show up at lower gamma-ray energies (cf., \citealt{Abazajian2015}). One can also consider the aforementioned Arches and Quintuplet stellar clusters, although these are rather young and lack the longer history of star formation present in the central parsec, possibly leading to fewer young pulsars. Notably, they would also not contain a supermassive black hole. Along with the central parsec, these could provide useful checks to discriminate between dark matter, pulsars, and diffuse cosmic-ray background contributions, details of which we will explore elsewhere.
\acknowledgments
We thank John Beacom, Jason Dexter, Ryan O'Leary, Troy Porter, and Hasan Yuksel for useful discussions and the hospitality of Brandt-Leland during the completion of this paper.
MDK acknowledges support provided by Department of Energy contract DE-AC02-76SF00515, and the KIPAC Kavli Fellowship made possible by The Kavli Foundation.
\section{Introduction}
The study of periodicity
in the spatial distribution of quasars started
with \citet{Tifft1973,Tifft1980,Tifft1995}.
A recent analysis by \citet{Bell2006},
in which 46\,400 quasars were processed,
quotes a periodicity near $\Delta z = 0.7$.
The study of periodicity in
the spatial distribution of galaxies
started with \citet{broadhurst},
where the data from four distinct surveys at the north and
south Galactic poles were processed.
He found an apparent
regularity in the galaxy distribution
with a characteristic scale
of 128 Mpc.
Recently, \citet{Hartnett2009a,Hartnett2009b} quoted
peaks in the distribution of galaxies
with a periodicity near $\Delta z = 0.01$.
More precisely, he found
regular real-space radial
distance spacings of 31.7 Mpc, 73.4 Mpc, and 127 Mpc.
From a theoretical point of view,
the periodicity is not easy to explain.
A framework which explains the periodicity
is given by the cellular universe
in which the galaxies are situated on the faces
of irregular polyhedrons.
A reasonable model for the cellular universe
is the Poissonian Voronoi
Tessellation (PVT);
another is the non-Poissonian Voronoi Tessellation (NPVT).
Some properties of the PVT can be deduced by introducing
the averaged radius of a polyhedron, $\bar{R}$.
The astronomical counterpart is the averaged
radius of the cosmic voids as given by
the Sloan Digital Sky Survey (SDSS)
DR7, which is $\bar{R}= \frac{18.23}{h}\,\mathrm{Mpc}$,
see \citet{Vogeley2012}.
The number of intercepted voids, $n_v$,
along a line will be $n_v= \frac{L}{\bar{l}} $,
where $L$ is the considered length and $\bar{l}$ the average
chord.
On assuming $\bar{l} =\frac{4}{3} \bar{R} $,
we obtain $n_v = \frac{3L }{4 \bar{R}}$.
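As a numerical illustration (the line length is chosen here only for the purpose of the example), a line of length $L = 500\,h^{-1}$~Mpc crossing voids with $\bar{R}= 18.23\,h^{-1}$~Mpc intercepts $n_v = \frac{3 \times 500}{4 \times 18.23} \approx 21$ voids, i.e. roughly one void every $24\,h^{-1}$~Mpc.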
The astronomical counterpart of the line is the pencil
beam catalog, a cone characterized by a narrow
solid angle, see \citet{szalay1993}.
The number of galaxies intercepted on the faces
of the PVT will follow the photometric rules
as presented in \citet{zaninetti2010a},
and Figure \ref{introduction} reports the theoretical
number
of galaxies for a pencil beam
characterized by the solid angle $\Omega$ of
60 deg$^2$.
\begin{figure*}
\begin{center}
\includegraphics[width=7cm]{f01.pdf}
\end {center}
\caption{
The number of galaxies as a function
of the distance for a pencil beam
catalog of $\Omega = 60$ deg$^2$.
The curve was calibrated on the data of the
2dF Galaxy Redshift Survey (2dFGRS)
which has $\Omega=1500$ deg$^2$.
Adapted from Figure 4 in \citet{zaninetti2010a}.
}
\label{introduction}%
\end{figure*}
According to the cellular structure of the local universe,
the number of galaxies as a function of distance
will increase or decrease discontinuously
rather than continuously.
We can therefore raise the following questions.
\begin{itemize}
\item
Can we find an analytical expression
for the chord length distribution for lines
which intersect many PVT or NPVT polyhedrons?
\item
Can we compare the observed periodicity
with the theoretical ones?
\end{itemize}
In order to answer these questions,
Section~\ref{basic} briefly reviews
the existing knowledge of
the chord's length in the presence of a given distribution
of spheres.
Section \ref{voronoichords} derives two new equations
for the chord's distribution in a PVT or NPVT environment.
Section \ref{astrophysicalsec} is devoted to
a test of the new formulas against a real astronomical slice
and against a simulated slice.
\section{The basic equations}
\label{basic}
This Section reviews
a first example
of deducing the average chord of spheres
having the same diameter, and
the general formula for
the chord's length when a distribution
for the spheres' diameters is given.
\subsection{The simplest example}
We calculate the average length of all chords
of spheres having the same radius $R$.
Figure \ref{chord_simple}
reports a section of a sphere
having radius $R$ and chord length $l$.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f02.pdf}
\end {center}
\caption
{
The
section of an intersected sphere of unit radius.
The chord is drawn with the thicker line and
marked with $l$.
}
\label{chord_simple}
\end{figure*}
The Pythagorean theorem gives
\begin{equation}
l= \sqrt{2(1+y)} R
\quad ,
\end{equation}
and the average chord length is
\begin{equation}
<l> =
\frac{1}{2} \int_{-1}^{+1} \sqrt{2(1+y)} R dy = \frac{4}{3} R
\label{monogeometrical}
\quad .
\end{equation}
\subsection{The probabilistic approach}
The starting point is a probability density function
(PDF) for the diameter of the voids,
$F(x)$, where $x$ indicates the diameter.
The probability, $G(x)dx$,
that a sphere having diameter between
$x$ and $x+dx$ intersects a random line is
proportional to its cross section
\begin{equation}
G(x) dx = \frac { \frac{\pi}{4} x^2 F(x) dx }
{ \int_0 ^{\infty}
\frac{\pi}{4} x^2 F(x) dx}
=
\frac { x^2 F(x) dx }
{ < x^2>}
\quad .
\end{equation}
Given a line which intersects a sphere of diameter
$x$, the probability that the distance
from the center lies in the range
$r,r+dr$ is
\begin{equation}
p(r) = \frac{2 \pi r dr }{\frac{\pi}{4} x^2 }
\quad ,
\end{equation}
and the chord length is
\begin{equation}
l = \sqrt { x^2 - 4r^2}
\quad ,
\end{equation}
see Figure \ref{chord_statistics}.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f03.pdf}
\end {center}
\caption
{
The
section having diameter $x$
of the intersected sphere.
The chord is drawn with the thicker line and
marked with $l$;
the distance between chord
and center is $r$.
}
\label{chord_statistics}
\end{figure*}
The probability that spheres in the
range $(x,x+dx)$ are intersected to produce
chords with lengths in the range
$(l,l+dl)$ is
\begin{equation}
G(x)\, dx \frac{2l\,dl}{x^2}
=
\frac{2l \, dl} { <x^2>} F(x) dx
\quad .
\end{equation}
The probability of having a chord
with length between $(l,l+dl)$ is
\begin{equation}
g(l)
=
\frac{2l} { <x^2>} \int_l^{\infty} F(x) dx
\quad .
\label{fundamental}
\end{equation}
This integral will be called {\it fundamental}
and the previous demonstration has been adapted
from \citet{Ruan1988}.
A first test of the previous integral can be done
inserting as a distribution for the diameters
a Dirac delta function
\begin{equation}
F(x)=\delta (x-2\,R)
\quad .
\end{equation}
As a consequence, the following PDF for chords
is obtained:
\begin{equation}
g(l) = \frac{1}{2} \frac{l}{R^2}
\quad ,
\end{equation}
which has an average value
\begin{equation}
<l> =
\frac {4}{3} R
\quad .
\end{equation}
We have therefore obtained, in the framework
of the probabilistic approach, the same result
deduced with elementary methods,
see Equation (\ref{monogeometrical}).
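The monodispersed case can also be checked numerically. The following minimal sketch (ours, written only for this check) samples random lines through spheres of constant radius and recovers the average chord:
\begin{verbatim}
import numpy as np

# Monte Carlo check of the case F(x) = delta(x - 2R): a line
# hitting a sphere of radius R has impact parameter r with
# PDF p(r) = 2r/R^2 on [0,R]; the chord is l = sqrt(x^2 - 4 r^2).
R, n = 1.0, 10**6
r = R * np.sqrt(np.random.rand(n))       # inverse-CDF sampling
l = np.sqrt((2.0 * R)**2 - 4.0 * r**2)   # chord lengths
print(l.mean())                          # ~ 1.333 = (4/3) R
\end{verbatim}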
\section{Voronoi diagrams}
\label{voronoichords}
This section reviews
the distribution of spheres which approximates
the volume distribution for
PVT and NPVT, explains how to generate NPVT seeds, and derives
two new formulas for the distributions of the chords.
\subsection{PVT volume distribution}
We analyze the gamma variate $H (x ;c )$ \citep{kiang}
\begin{equation}
H (x ;c ) = \frac {c} {\Gamma (c)} (cx )^{c-1} \exp(-cx),
\label{kiang}
\end{equation}
where $ 0 \leq x < \infty $, $ c~>0$,
and $\Gamma$ is the gamma function.
The Kiang PDF has a mean of
\begin{equation}
\mu = 1,
\end{equation}
and variance
\begin{equation}
\sigma^2 = \frac{1}{c}.
\end{equation}
A new PDF due to \citet{Ferenc_2007}
models the normalized area/volume
in 2D/3D PVT
\begin{equation}
FN(x;d) = C \times x^{\frac {3d-1}{2} } \exp{(-(3d+1)x/2)},
\label{rumeni}
\end{equation}
where $C$ is the normalization constant,
\begin{equation}
C =
\frac
{
\left( \frac{3\,d+1}{2} \right)^{\frac{3d+1}{2}}
}
{
\Gamma \left( \frac{3\,d+1}{2} \right)
},
\end{equation}
and $d$ $(d=1,2,3)$ is the
dimension of the space under consideration.
We will call this
function the Ferenc--Neda PDF;
it has a mean of
\begin{equation}
\mu = 1,
\end{equation}
and variance
\begin{equation}
\sigma^2 = \frac{2}{3d+1}.
\end{equation}
The Ferenc--Neda PDF can be obtained from the Kiang function
\citep{kiang} by the transformation
\begin{equation}
c =\frac{3d+1}{2},
\label{kiangrumeni}
\end{equation}
and as an example $d=3$ means $c=5$.
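As a check, inserting $d=3$ in Eq.~(\ref{rumeni}) gives
\begin{equation}
FN(x;3) = \frac{3125}{24}\, x^{4} \exp(-5x) = H(x;5) \quad ,
\end{equation}
which coincides with the Kiang function of Equation (\ref{kiang}) evaluated at $c=5$.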
\subsection{NPVT volume distribution}
The seeds most commonly used to produce the tessellation
are the so-called Poissonian seeds.
In this, the most explored case, the volumes are
modeled in 3D by a Kiang function,
Equation (\ref{kiang}), with $c=5$.
An increase of the value of $c$ of the Kiang
function produces more ordered structures
and a decrease, less ordered structures.
A careful analysis of the distribution
in effective radius of the Sloan Digital Sky Survey (SDSS) DR7
indicates $c \approx 2$, see \citet{zaninetti2012e}.
Therefore the normalized distribution in volumes
which models the voids between galaxies is
\begin{equation}
H (x ;2 ) =4\,x{{\rm e}^{-2\,x}}
\quad .
\label{kiangc2}
\end{equation}
\subsection{NPVT seeds}
\label {secnewseeds}
The 3D set of seeds which generate a distribution in volumes
with $c \approx 2$ for the Kiang function (\ref{kiangc2}) is produced
with the following algorithm.
A given number, $N_{H}$, of forbidden spheres having radius
$R_H$ are generated in a 3D box having side $L$.
Random seeds are produced on the three spatial coordinates; those
that fall inside the forbidden spheres are rejected.
The volume forbidden to the seeds occupies the following
fraction, $f$, of the total available volume
\begin{equation}
f = \frac{N_H \frac{4}{3} \pi R_H^3 } {L^3}
\quad .
\end{equation}
The value of $c \approx 2$ for the Kiang function is found
by increasing progressively $N_{H}$ and $R_H$.
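A minimal numerical sketch of this rejection scheme follows; the values of $N_{H}$, $R_H$ and the number of seeds are purely illustrative, and would be tuned, as described above, until the fitted Kiang index reaches $c \approx 2$.
\begin{verbatim}
import numpy as np

# Sketch of the NPVT seed generator: Poissonian candidates are
# rejected when they fall inside one of N_H forbidden spheres.
# Illustrative values; sphere overlaps and periodic boundary
# conditions are ignored in this sketch.
rng = np.random.default_rng(0)
L, N_H, R_H, N_seeds = 1.0, 30, 0.108, 2000
centers = rng.random((N_H, 3)) * L       # forbidden-sphere centers
seeds = []
while len(seeds) < N_seeds:
    p = rng.random(3) * L
    if np.all(np.linalg.norm(centers - p, axis=1) > R_H):
        seeds.append(p)                  # candidate accepted
f = N_H * (4.0 / 3.0) * np.pi * R_H**3 / L**3
print(f"forbidden volume fraction f = {f:.2f}")  # ~ 0.16
\end{verbatim}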
\subsection{PVT chords}
The distribution in volumes of the 3D PVT
can be modeled by the modern
Ferenc--Neda PDF (\ref{rumeni}) by inserting $d=3$.
The corresponding distribution in diameters
can be found
using the following substitution
\begin{equation}
x = \frac{4}{3} \pi (\frac{y}{2})^3
\quad ,
\end{equation}
where $y$ represents
the diameter of the spheres
which model the volumes.
Therefore the PVT distribution in diameters is
\begin{equation}
F(y)= {\frac {3125}{62208}}\,{\pi }^{5}{{\it y}}^{14}{{\rm e}^{-5/6\,\pi \,
{{\it y}}^{3}}}
\quad .
\end{equation}
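This follows from the change of variable
\begin{equation}
F(y)= FN\left(x(y);3\right)\,\frac{dx}{dy}
= \frac{3125}{24} \left( \frac{\pi y^{3}}{6} \right)^{4}
{\rm e}^{-\frac{5}{6}\pi y^{3}}\; \frac{\pi y^{2}}{2}
\quad .
\end{equation}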
Figure~\ref{fxdiametri} displays this PDF of the diameters.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f04.pdf}
\end {center}
\caption{
PDF $F(y) $ for the PVT diameters.
}
\label{fxdiametri}
\end{figure*}
We are now ready to insert in the fundamental
Equation (\ref{fundamental}) for the chord length
a PDF for the diameters.
The resulting integral is
\begin{equation}
g(l)=
\frac{243}{1540}\,
\frac{{5}^{2/3}\sqrt [3]{6}\,{\pi }^{2/3}}{\Gamma \left( 2/3 \right)}\,
l\,{\rm e}^{-\frac{5}{6}\pi\,l^{3}}
\sum_{k=0}^{4} \frac{1}{k!} \left( \frac{5\,\pi\,l^{3}}{6} \right)^{k}
\quad .
\end{equation}
This first result should be corrected to account for the fact that the PDF
of the diameters, see Figure \ref{fxdiametri}, is $\approx$ 0 up
to a diameter of $\approx$ 0.563. We therefore introduce the
following translation
\begin{equation}
U = L - a \quad ,
\end{equation}
where $L$ is the random chord and $a$ the amount of the
translation. The new translated PDF, $g_{1}(u;a)$, takes
values $u$ in the interval $[-a, (6-a)]$. Due to the fact that
only positive chords are defined in the interval $[0, (6-a)]$, a
new constant of normalization should be introduced
\begin {eqnarray}
g_{2}(u;a)= C\,g_1(u;a) \quad \mathrm{where} \\
C=\frac{1}{\int_0^{6-a}g_1(u;a)\, du}
\quad .
\end{eqnarray}
The last transformation is a change of scale,
\begin{equation}
R = b\,U \quad ,
\end{equation}
and the definitive PDF for chords is
\begin{equation}
g_3(r;a,b)=
\frac{g_2(\frac{r}{b};a)}{b} \quad .
\label{GLBPOISSONIAN}
\end{equation}
The resulting distribution function will be
\begin{equation}
DF_{1,3}(r:a,b) =\int_0^r g_3(r';a,b)\, dr' \quad .
\label{dfpoisson}
\end{equation}
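The translation, renormalization and change of scale can be cross-checked numerically; the following sketch (ours; the grid size and the simple trapezoidal integration are arbitrary choices) reproduces $<r> \simeq 1$ for $b=2.452$ and $a=0.563$.
\begin{verbatim}
import numpy as np
from math import gamma, factorial, pi

def g(l):
    # PVT chord PDF g(l) in the compact form given above
    pref = (243/1540) * 5**(2/3) * 6**(1/3) * pi**(2/3) / gamma(2/3)
    s = sum((5*pi*l**3/6)**k / factorial(k) for k in range(5))
    return pref * l * np.exp(-5*pi*l**3/6) * s

def trapz(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

a, b = 0.563, 2.452
u = np.linspace(0.0, 6.0 - a, 4001)
g1 = g(u + a)                    # translation U = L - a
C = 1.0 / trapz(g1, u)           # renormalization on [0, 6-a]
r, g3 = b * u, C * g1 / b        # change of scale R = bU
print("mean <r> =", trapz(r * g3, r))   # ~ 1
\end{verbatim}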
We are now ready for a comparison with the distribution function
$F_{L_{1,3}}$ for the chord length $L_{1,3}$ in $V_p(1,3)$, see
formula (5.7.6) and Table 5.7.4 in \citet{Okabe2000}.
Table~\ref{table_parameters} shows the mean,
variance, skewness, and kurtosis of the derived
$ g_3 (r;a,b) $.
The parameter $b$
should match the average value of the PDF in
Table 5.7.4 of \citet{Okabe2000}.
\begin{table}
\caption[]
{
The parameters of \lowercase{$g_3 (r;a,b) $},
Eq.~(\ref{GLBPOISSONIAN}), relative to
the PVT case when
\lowercase{$b=1.624$} and \lowercase{$a=0.563$}.
}
\label{table_parameters}
\[
\begin{array}{ll}
\hline
Parameter & value \\ \noalign{\smallskip}
\hline
\noalign{\smallskip}
Mean & 0.662 \\
\noalign{\smallskip}
\hline
Variance & 0.144 \\
\noalign{\smallskip}
\hline
Skewness & 0.274 \\
\hline
Kurtosis & -0.581 \\
\hline
\end{array}
\]
\end {table}
The behavior of $g_3(r;a,b)$
is shown in Figure \ref{corda_pdf}.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f05.pdf}
\end {center}
\caption
{
PDF $g_3 (r;a,b) $ (full line)
for chord length as a function of $r$
when $b=2.452$, $a=0.563$, which
means $<r> =1$, and the mathematical PDF (dashed line)
as extracted from Table 5.7.4 in Okabe et al. (2000),
PVT case.
}
\label{corda_pdf}
\end{figure*}
The behavior of $DF_{1,3}(r:a,b) $
is shown in Figure \ref{corda_df}.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f06.pdf}
\end {center}
\caption{
Distribution function
$ DF_{1,3}(r:a,b) $ (full line)
for chord length
as a function of $r$ when
$b=2.452$, $a=0.563$, which
means $<r> =1$,
and the mathematical DF (dashed line)
as extracted from Table 5.7.4 in Okabe et al. (2000),
PVT case.
}
\label{corda_df}
\end{figure*}
Consider a 3D PVT and suppose
it intersects a randomly oriented line $\gamma$:
the theoretical distribution function
$DF_{1,3}(r:a,b)$ as given by Eq. (\ref{dfpoisson})
and the results
of a numerical simulation can be compared.
We start from $900 000$ 3D
PVT cells and we process
$9947$ chords which
were obtained by adding together the results of
$40$ triples of mutually perpendicular lines.
The numerical distribution of Voronoi lines is
shown in Figure \ref{corda_poisson} with
the display of $DF_{1,3}(r:a,b)$.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f07.pdf}
\end {center}
\caption
{
Comparison between data
(empty circles) and the theoretical curve $ DF_{1,3}(r:a,b) $
(continuous line)
of the chord length
distribution when
$b=2.452$, $a=0.563$, which
means $<r>$ =1; PVT case.
The maximum distance between the two curves
is $d_{max}=0.01$.
}
\label{corda_poisson}
\end{figure*}
\subsection{NPVT chords}
In the case analyzed here, see the PDF in volumes as
given by Equation (\ref{kiangc2}), the NPVT distribution
in diameters which corresponds to $c=2$ in volumes is
\begin{equation}
F(y)=
1/3\,{\pi }^{2}{{\it y}}^{5}{{\rm e}^{-1/3\,\pi \,{{\it y}}^{3}}}
\quad ,
\end{equation}
where $y$ represents the diameter of the volumes
modeled by spheres.
We now insert in the fundamental
Equation (\ref{fundamental}) for the chord length
a PDF for the diameters as given by the previous equation.
The resulting integral, which
models the NPVT chords, is
\begin{equation}
g_{NPVT}(l)=
\frac{\sqrt [3]{3}\,{\pi }^{2/3}}{5\,\Gamma \left( 2/3 \right)}\,
l \left( 3+\pi \,{l}^{3} \right) {\rm e}^{-\frac{\pi\,l^{3}}{3}}
\quad .
\label{GLBNONPOISSONIAN}
\end{equation}
We now apply the same translation,
\begin{equation}
U = L - a \quad .
\end{equation}
The new translated PDF is $g_{NPVT,1}(u;a)$
and the
new constant of normalization is
\begin {eqnarray}
g_{NPVT,2}(u;a)= C_{NPVT}\,g_{NPVT,1} (u;a) \quad \mathrm{where} \\
C_{NPVT}=\frac{1}{\int_0^{6-a}g_{NPVT,1}(u;a)\, du}
\quad .
\end{eqnarray}
The change of scale is
\begin{equation}
R = b\,U \quad ,
\end{equation}
and the definitive PDF for NPVT chords is
\begin{equation}
g_{NPVT,3}(r;a,b) =
\frac{g_{NPVT,2} (\frac{r}{b};a)}{b} \quad .
\label{GLBNONPOISSONIANSHIFT}
\end{equation}
The behavior of the NPVT $g_{NPVT,3} (r;a,b) $ is shown in
Figure \ref{corda_pdf_c2}.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f08.pdf}
\end {center}
\caption{
PVT PDF $g_3 (r;a,b) $ for chord length as a function of
$r$, when $b=2.452$, $a=0.563$
(full line),
and NPVT PDF $g_{NPVT,3} (r;a,b) $
when $b=1.741$, $a=0.36$
(dashed line).
In both cases $<r>$=1.
}
\label{corda_pdf_c2}
\end{figure*}
Table~\ref{table_parametersnpvt}
shows the mean, variance,
skewness, and kurtosis of the
derived NPVT $g_{NPVT,3}(r;a,b) $.
\begin{table}
\caption[]
{
The parameters of \lowercase{$g_{NPVT,3} (r;a,b)$},
Eq.~(\ref{GLBNONPOISSONIANSHIFT}), relative to
the NPVT case when \lowercase{$b= 1.153, a=0.36$}.
}
\label{table_parametersnpvt}
\[
\begin{array}{ll}
\hline
Parameter & value \\ \noalign{\smallskip}
\hline
\noalign{\smallskip}
Mean & 0.662 \\
\noalign{\smallskip}
\hline
Variance & 1.153 \\
\noalign{\smallskip}
\hline
Skewness & 0.324 \\
\hline
Kurtosis & -0.442 \\
\hline
\end{array}
\]
\end {table}
The resulting distribution function with scale will be
\begin{equation}
DF_{NPVT}(r:a,b) =\int_0^r g_{NPVT,3} (r';a,b)\, dr' \quad .
\label{dfnonpoisson}
\end{equation}
Also in this case we produce $900\,000$ 3D
NPVT cells and we process
$9947$ chords which
were obtained by adding together the results of
$40$ triples of mutually perpendicular lines.
The numerical distribution of Voronoi lines is
shown in Figure \ref{corda_df_npvt} with
the display of $DF_{NPVT}(r:a,b)$.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f09.pdf}
\end {center}
\caption
{
Comparison between data
(empty circles) and theoretical curve
(continuous line)
of the chord length distribution
when
$b=1.741$, $a=0.36 $, which
means $<r>$ =1; NPVT case.
The maximum distance between the two curves
is $d_{max}=0.03$.
The fraction of volume forbidden to the NPVT seeds
is $f= 16\%$.
}
\label{corda_df_npvt}
\end{figure*}
\section{Astrophysical applications}
\label{astrophysicalsec}
At the time of writing, it is not easy to check
the PDFs for the chord distribution derived above
against the local Universe.
This is because research has been focused
on the intersection of the maximal sphere
of voids between galaxies with the mid-plane
of a slab of galaxies; as an example,
see Figure 1 in \citet{Vogeley2012}.
Conversely, the organization of the observed patterns
in slices of galaxies into irregular pentagons
or hexagons having the properties
of a tessellation has not yet been developed.
We briefly recall that, in order to compute
the length in a given direction across a slice of galaxies,
the boundary between one region and the next
should be clearly determined in a digital way.
In order to find the value of the scaling, $b$, which models
the astrophysical chords, some approximate methods are suggested below.
A {\it first} method starts from the average chord
for a monodispersed bubble size distribution (BSD),
i.e. bubbles of constant radius $R$,
see (\ref{monogeometrical}), and approximates it
with
\begin{equation}
<l> = \frac{4}{3} <R>
\quad ,
\label{quattroterzi}
\end{equation}
where $R$ has been replaced by $<R>$.
We continue by inserting $<R>=18.23 h^{-1}$\ Mpc,
which is the effective radius in SDSS DR7, see Table 6
in \citet{zaninetti2012e};
in this case $b=42.32 h^{-1}$\ Mpc.
A comparison should be made with the scaling
of the
probability of
obtaining a cross-section of radius $r$, which is $\frac{31.33}{h}$\ Mpc,
see \citet{zaninetti2012e}.
The PDF of the chords, conversely, has the average value
given by the previous equation (\ref{quattroterzi}), and therefore
$b$ is a factor $4/3$ larger.
We now report a pattern of NPVT chords superposed
on an astronomical slice, and
a simulation of an astronomical slice in the NPVT approximation.
\subsection{The ESP}
As an example, we fix our attention on the ESO Slice
Project (ESP),
which covers a strip of 22 (RA) $\times$ 1 (DE) square degrees,
see \citet{Vettolani1998}.
On the ESP we superpose two lines along which a random sequence
of chords follows the NPVT PDF of equation (\ref{GLBNONPOISSONIANSHIFT})
with $b=42.32 h^{-1}$\ Mpc
and $a=0.36$,
see Figure \ref{esoslice}.
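A minimal sketch of how such a sequence of chords can be drawn (numerical inverse-CDF sampling; the grid resolution and the line length are arbitrary choices of ours):
\begin{verbatim}
import numpy as np
from math import gamma, pi

# Sample sequential chords along a line from the scaled NPVT PDF.
a, b = 0.36, 42.32                  # b in Mpc (h = 1)

def g_npvt(l):
    return (3**(1/3) * pi**(2/3) / (5 * gamma(2/3))
            * l * (3 + pi * l**3) * np.exp(-pi * l**3 / 3))

u = np.linspace(0.0, 6.0 - a, 4000)
pdf = g_npvt(u + a)                 # translation U = L - a
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
rng = np.random.default_rng(2)
chords, total, L_line = [], 0.0, 600.0    # line length in Mpc
while total < L_line:
    c = b * np.interp(rng.random(), cdf, u)   # scale R = bU
    chords.append(c)
    total += c
print(len(chords), "chords, mean =", round(np.mean(chords), 1))
\end{verbatim}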
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f10.pdf}
\end {center}
\caption
{
Cone diagram of ESP galaxies when the $X$ and $Y$ axes
are expressed in Mpc and
$H_{0}=100 $ $ \mathrm{\ km\ s}^{-1}\mathrm{\ Mpc}^{-1}$
(the Hubble constant),
which means $h$=1.
Two lines in different directions are drawn
with the lengths of the chords randomly generated
with NPVT PDF $g_{NPVT,3} (r;a,b)$
when $b=42.32 h^{-1}$\ Mpc and $a=0.36$.
Each chord has a different color,
red and green,
and different line styles,
full and dotted.
}
\label{esoslice}
\end{figure*}
\subsection{A simulated slice}
In order to simulate the 2dF Galaxy Redshift Survey (2dFGRS),
we simulate a 2D cut through a 3D NPVT network organized
in two strips
about $75^{\circ}$ long,
see Figure \ref{cut_middle_color}.
\begin{figure*}
\begin{center}
\includegraphics[width=10cm]{f11.pdf}
\end {center}
\caption
{
Portion of the NPVT $V_p(2,3)$;
cut on the X-Y plane when two strips of
$75^{\circ}$ are considered.
The parameters of the simulation,
see \citet{zaninetti2010a},
are pixels = 1500,
side = 131\,908 km s$^{-1}$
and amplify = 1.2.
Along a line which crosses the center, the chords between a face
and the next one are drawn.
Each chord has a different color, red and green,
and different line styles, full and dotted.
The fraction of volume forbidden to the NPVT seeds
is $f= 16\%$ and the number of the seeds, 77 730, is chosen in order to
have $b=42.32 h^{-1}$\ Mpc.
}
\label{cut_middle_color}%
\end{figure*}
Figure \ref{corda_taglio} shows a superposition
of the numerical frequencies
in chord lengths
of the 3D NPVT simulation with
the curve of the theoretical PDF of NPVT chords,
$g_{NPVT,3}(r;a,b)$, as given by Eq.~(\ref{GLBNONPOISSONIANSHIFT}).
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{f12.pdf}
\end {center}
\caption
{
Histogram of
the
frequencies in chord lengths
along 31 lines (1617 sequential chords) with a superposition of the
theoretical NPVT PDF
as represented by Eq.~(\ref{GLBNONPOISSONIANSHIFT}).
The number of bins is 30, the reduced $\chi^2$ is 9.21, and
$b=42.32 h^{-1}$\ Mpc.
}
\label{corda_taglio}
\end{figure}
The distribution of the galaxies as
given by the NPVT
is reported in Figure~\ref{voro_2df_cones},
whereas Figure 15 in
\citet{zaninetti2010a} was produced with the PVT.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{f13.pdf}
\end {center}
\caption
{
Polar plot
of the pixels belonging to a
slice $75^{\circ}$~long and $3^{\circ}$
wide generated by NPVT seeds.
This plot contains 40\,000
galaxies;
the maximum frequency of theoretical
galaxies is at $z=0.056$.
In this plot, $\mathcal{M_{\sun}}$ = 5.33 and $h$=1.
}
\label{voro_2df_cones}
\end{figure}
\section{Conclusions}
A mathematical method was developed
for the chord length distribution, starting
from the distribution function of the diameters of the spheres
which approximate the volumes of the PVT.
The new chord length distribution, as represented
by the PVT formula (\ref{GLBPOISSONIAN}),
gives mathematical support
to the periodicity found along a line, or a cone
characterized by a small solid angle.
At the same time, a previous analysis has shown
that the best fit to the effective radius of the cosmic
voids as deduced from
the catalog SDSS DR7 is represented by a Kiang function
with $c \approx 2$.
Also in this case
we found an expression for the chord length
given by the 3D NPVT formula
(\ref{GLBNONPOISSONIAN}).
In order to produce this kind of NPVT volumes, a new type of seed
has been introduced, see Section \ref{secnewseeds}.
These new seeds are also used to simulate the intersection
between a plane and the NPVT network.
A careful choice of the number of seeds allows matching the simulated
value of the scaling $b$ with the desired value of
$b=42.32 h^{-1}$\ Mpc, see Figure \ref{corda_taglio}.
\section{Introduction}
The observational tracers of star formation range from the near-UV
(Boissier \it et al. \rm 2008) to the far-IR (Bigiel \it et al. \rm 2008) and, while each
wavelength region has its advantages and disadvantages, LSB galaxies are
difficult to observe outside the traditional optical bandpasses. The most
visible feature of star formation in LSB galaxies is the H$\alpha$ line,
produced by young, massive stars that compose the upper end of the IMF.
The UV photons emitted by these stars will, in turn, ionize the surrounding
gas to form a HII region. While the total H$\alpha$ luminosity of a galaxy
measures its global star formation history, these HII regions map the
amount and location of local star formation, providing a window into the
details of the star formation process.
Studying HII regions in galaxies allows one to 1) investigate star
formation both globally and locally, 2) examine the upper mass limit of the
stellar mass function and 3) map the structure of the ISM. While UV and
far-IR emission may provide a more nuanced view of total star formation,
the size, location and luminosity of HII regions display the local
variation of star formation directly and can be used to resolve stellar
population questions. In addition, the size and luminosity of HII regions
provide information on the number of ionizing stars and the mass of the
underlying stellar associations.
Previous work on HII regions in galaxies focused on high surface brightness
spirals and irregular galaxies (e.g., Caldwell \it et al. \rm 1991, Kennicutt, Edgar
\& Hodge 1989, Youngblood \& Hunter 1999). These studies found that
the number of HII regions in a galaxy increases with later Hubble type, in
correlation with the total star formation rate (SFR), and found various
differences in the HII luminosity function as a function of galaxy
properties. However, very little work has been completed on H$\alpha$
emission in low surface brightness (LSB) galaxies due to the technical
difficulty in measuring narrow band fluxes for objects so close to the
brightness of the night sky. Studies by Schombert \it et al. \rm (1992), McGaugh,
Schombert \& Bothun (1995) and recent work by Kim (2007) represent the
deepest H$\alpha$ studies in LSB galaxies.
The results from these previous works can be summarized as follows: LSB
galaxies have 1) small regions of H$\alpha$ emission (assumed to be low in
luminosity, although these early data were not flux calibrated), 2) H$\alpha$
emission only weakly correlated with regions of enhanced surface brightness
and 3) no coherent patterns indicative of density wave scenarios. Small and
weak HII regions are consistent with the low SFR's for LSB galaxies as a
class of objects, and agree with the hypothesis that these galaxies are
quiescent and inhibited in their star formation histories.
This paper, the second in our series on optical observations of PSS-II LSB
galaxies, presents the H$\alpha$ spatial results which map the size,
location and luminosities of HII regions in our sample galaxies. With this
information, our goal is to compare the style of star formation in LSB galaxies
with spirals and irregulars to detect any global differences in their star
formation histories. The characteristics of importance to the star
formation history of a galaxy are the number of HII regions, the luminosity
of the brightest HII regions, the shape of the HII region luminosity
function and the spatial positions of HII regions with respect to the
optical distribution of light. Lastly, we examine the optical colors of the
HII regions in the hope of resolving the color dilemma of LSB galaxies:
their unusually blue colors, yet low total SFR's.
\section{Analysis}
Observations, reduction techniques and the characteristics of the sample
are described in Paper I (Schombert, Maciel \& McGaugh 2011). Our final
sample contains 58 LSB galaxies selected from the PSS-II LSB catalog
(Schombert, Pildis \& Eder 1997) with deep $B$, $V$ and H$\alpha$ imaging
from the KPNO 2.1 meter. H$\alpha$ emission was detected in 54 of the 58
galaxies. All detected galaxies had at least one distinct HII region,
although diffuse emission accounts for approximately 50\% of the total
H$\alpha$ emission in most LSB galaxies.
The sample galaxies all have irregular morphology with some suggestions of
a bulge and disk for a handful. They range in size from 0.5 to 10 kpc and
central surface brightnesses from 22 to 24 mag arcsecs$^{-2}$. Their total
luminosities range from $-$14 to $-$19 $V$ mags, which maps into stellar
masses from 10$^7$ to 10$^9$ $M_{\sun}$. The gas fractions for the sample
are between 0.5 and 0.9, so the amount of HI gas covers a similar range.
Identification of a HII region followed a slightly different prescription
from previous studies. In our case, we have identified an H$\alpha$ knot
to be a HII region if it 1) is distinct, i.e. not a filament or diffuse
region, 2) has rough circular symmetry (where spatial resolution limits
this determination), 3) has a clear peak in H$\alpha$ emission, and 4)
falls off uniformly around the peak. Due to resolution limits, any
particular region may include several HII complexes for more distant
galaxies in the sample. However, even for the most distant galaxies, one
arcsec corresponds to 400 pc, which is sufficient to resolve the high
luminosity HII regions into smaller components. There was no correlation
between the number of HII regions and distance (see \S4), which implies
that confusion is not a factor in our sample.
Identification was made by visually guiding a threshold algorithm applied
to smoothed H$\alpha$ images. The center of confirmed HII knots were
determined and the luminosity of each selected region was determined by a
circular aperture. The radius of the aperture is determined to be the
point where the flux falls to 25\% of the peak emission. This value is
used for the size of the HII region, regardless of any indication of
non-circularity.
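The measurement step can be sketched as follows; this is our illustrative reconstruction, not the actual reduction code, and the smoothing scale, detection threshold and sky noise are assumed values.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def measure_hii_regions(halpha, sigma_smooth=2.0, nsigma=5.0,
                        sky_rms=1.0):
    # threshold a smoothed H-alpha image into candidate knots
    smooth = ndimage.gaussian_filter(halpha, sigma_smooth)
    labels, nknots = ndimage.label(smooth > nsigma * sky_rms)
    yy, xx = np.indices(halpha.shape)
    regions = []
    for k in range(1, nknots + 1):
        ys, xs = np.where(labels == k)
        ipk = np.argmax(smooth[ys, xs])          # peak pixel
        y0, x0 = ys[ipk], xs[ipk]
        fpk = smooth[y0, x0]
        rad = np.hypot(yy - y0, xx - x0)
        # grow the aperture until the azimuthal mean flux
        # drops to 25% of the peak
        r = 1
        while r < halpha.shape[0] // 2:
            ring = (rad >= r - 0.5) & (rad < r + 0.5)
            if smooth[ring].mean() < 0.25 * fpk:
                break
            r += 1
        flux = halpha[rad <= r].sum()            # aperture flux
        regions.append((x0, y0, r, flux))
    return regions
\end{verbatim}
In practice the thresholded image would be inspected visually, as described above, before the apertures are accepted.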
Four examples of our HII region selection process are shown in Figure
\ref{hii_apertures}, where the selected HII regions are shown inside red
circles. A five kpc scale is indicated in each frame.
Continuum images (Johnson $V$) can be found at our data
website (http://abyss.uoregon.edu/$\sim$js/lsb), as well as all the
information on individual HII regions plus color and surface brightness
data on the sample. The four examples in Figure \ref{hii_apertures} were
selected to illustrate several key points about the HII regions in LSB
galaxies.
Galaxy D500-3 (upper left) displays two bright regions near the galaxy core
and a number of fainter regions surrounding the core. None of the HII
regions are evident as higher continuum surface brightness regions from $V$
frames. Even though the brightest two regions are relatively high in
H$\alpha$ luminosity (38.24 and 38.17 log L$_{H\alpha}$, approximately 20
Orion complexes), their stellar populations have no effect on the optical
structure of the nearby region of the galaxy. The 5 kpc bar is indicated
in the upper right of the frame, where the larger HII regions are 100 to
150 pc in size, ranging down to 25 pc for the fainter regions.
Galaxy D572-5 (upper right) exhibits a more luminous set of HII regions
than most other LSB galaxies: again, several bright regions in the core and
a few fainter HII regions in the outer regions.
diffuse H$\alpha$ emission in the outer disk, but insufficient to warrant
inclusion by our selection algorithm. The brighter HII regions are just
visible in the continuum $V$ frames as distinct blue knots.
Galaxy D646-11 (lower left) displays more scattered H$\alpha$ emission.
The selected HII regions are not centrally concentrated. In fact, the
brightest region (more of a shell or bubble than a star complex) is located
in the outer disk. There are several filaments and diffuse H$\alpha$
regions in the core that were not selected as HII regions. The brighter
HII regions are associated with bluer continuum colors, but this is not
always the case for LSB galaxies as a whole (Pildis, Schombert \& Eder 1997).
Galaxy F750-V1 (lower right) is a smaller, nearby LSB galaxy. While seven HII
regions were selected, most of its H$\alpha$ emission is diffuse. The
selection of any knot in the core region is a subjective determination. There is
no signature from the HII regions in the continuum images; however, there
are enhanced blue stellar colors in the diffuse regions.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.3,angle=0]{hii_apertures.pdf}
\caption{\small H$\alpha$ maps for four galaxies in our sample. The
selected HII regions are indicated using our criteria of distinctiveness and symmetry.
The blue bar indicates a spatial scale of 5 kpc.
}
\label{hii_apertures}
\end{figure}
Criteria similar to those used to identify H$\alpha$ knots were applied to
identify surface brightness knots in the $V$ frames.
(based on ellipse fits) are subtracted from the raw image. This subtracted
image is threshold searched for optical knots. As with the H$\alpha$
knots, these regions are marked and measured with circular apertures
defined by the 25\% width. In the final analysis, 492 HII regions were
identified in 54 LSB galaxies and two DDO objects (154 and 168). In
addition, 271 optical knots were identified in the $V$ frames. Of the
492 HII regions, 207 had no distinct optical counterpart. Of the 271 $V$ knots,
only 49 had no detectable H$\alpha$ emission. The properties of these
regions will be discussed in \S7.
\section{HII Regions Sizes and Luminosities}
In our total LSB sample, 54 (93\%) galaxies had at least one identifiable
HII region. The four galaxies undetected by our H$\alpha$ imaging had the
four lowest gas fractions (less than 0.4). A histogram of the number of
HII regions per galaxy is shown in Figure \ref{num_hii_regions}. The
typical number of HII regions per galaxy is between 3 and 10, which is quite
low for late-type galaxies with irregular morphology (Caldwell \it et al. \rm 1991) but
consistent with values from early studies of H$\alpha$ emission in LSB
galaxies (McGaugh, Schombert \& Bothun 1995). We note that these mean
values are much less than the numbers found by Youngblood \& Hunter (1999)
for HII regions in dIrr's. That number is usually above 20 HII regions per
galaxy; but, this is due in part to our different selection schemes and the
intrinsic nature of rich, star-forming dIrr's. We have two galaxies in
common, DDO154 and DDO168. Youngblood \& Hunter find 74 and 58 HII
regions, respectively, whereas we only find 14 and 25 for the same systems.
While this might suggest that our HII region selection is incomplete,
the total H$\alpha$ fluxes are in agreement and the difference in number
simply reflects our more stringent selection criteria in defining clear,
isolated HII regions, rather than H$\alpha$ filaments.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.7,angle=0]{num_hii_regions.pdf}
\caption{\small A histogram of the number of HII regions found in each galaxy. The
number we find, per galaxy, is typically much lower than other studies due
to our more stringent selection criteria, with most LSB galaxies having less
than 10 HII regions. However, LSB galaxies still
display much lower numbers of HII regions than other star-forming galaxy
types, in line with their low total SFR's.
}
\label{num_hii_regions}
\end{figure}
The H$\alpha$ luminosities for all the HII regions in our sample are shown
in Figure \ref{dist_flux} (note we distinguish the total H$\alpha$ of a
galaxy, $L_{H\alpha}$, versus the H$\alpha$ luminosity of an individual HII
region, $L_{HII}$). The HII region luminosities range from $5 \times
10^{36}$ ergs s$^{-1}$ for the faintest regions to 10$^{39.5}$
for the brightest regions. A single O7V star results in a HII region of
log $L_{HII} = 37.0$ (Werk \it et al. \rm 2008), although HII regions powered by single B0
stars are found in the LMC with $L_{HII} = 36.0$ to 36.2 (Zastrow, Oey \&
Pellegrini 2013). Thus, the faintest regions are difficult to explain
under the observation that very few O or B stars are born in isolation (Chu \&
Gruendl 2008) or may be the result of PN ionization (Walterbos \& Braun
1992). The brighter regions correspond to 30 Doradus sized complexes and
would contain $10^6$ $M_{\sun}$ of $H_2$ gas; however,
even these individual regions would not be detected in CO surveys of LSB
galaxies (Schombert \it et al. \rm 1990).
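As a worked example using the calibration above, a region of log $L_{HII}$ = 39.5 corresponds to roughly $10^{39.5}/10^{37.0} \approx 300$ O7V-star equivalents, while one at log $L_{HII}$ = 37.0 could in principle be powered by a single O7V star.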
In some ways, the distribution of HII region luminosities in LSB galaxies
is similar to the distribution in early-type spirals rather than in irregulars. In
early-type spirals, there are more low luminosity HII regions relative to
the brightest ones (Kennicutt, Edgar \& Hodge 1989), with fewer of the
massive star forming regions found in dwarf irregulars. On the other hand,
LSB galaxies with HII regions of log $L_{HII} > 38$ do exist, but
HII regions of this size are not found in Sa spirals (Caldwell \it et al. \rm
1991). Thus, it seems the HII regions in LSB galaxies follow more closely
the pattern of other galaxies with irregular morphologies; unfortunately,
we lack sufficient statistics to construct a HII luminosity function for
individual galaxies in order to rigorously examine this effect.
Flux completeness is a greater concern for our HII region selection, since
we explore a larger volume of the Universe than other samples: the original
PSS-II catalog was surface brightness selected with an angular size limit,
and is not luminosity limited.
luminosities are shown in Figure \ref{dist_flux} as a function of galaxy
distance. As can be seen in this Figure, the brightest HII regions are
found in the most distant galaxies (which are also the most
massive/brightest galaxies). In addition, the galaxies farther than 40 Mpc
are deficient in HII regions fainter than log $L_{H\alpha}$ = 38.
Interestingly, the 40 Mpc limit is the same limiting distance found by
Kennicutt, Edgar \& Hodge (1989) based on resolution experiments with their
H$\alpha$ imaging study.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{dist_flux.pdf}
\caption{\small The H$\alpha$ luminosity of individual HII regions
($L_{HII}$) as a
function of galaxy distance. Fainter HII regions are missing from the
sample of galaxies farther than 40 Mpc due to decreasing spatial/luminosity resolution
(an $L \propto D^2$ cutoff is shown). The brightest HII regions are found
in the more distant galaxies, indicating that 30 Doradus sized star forming
complexes are rare in LSB galaxies, and a larger volume of the Universe
must be searched to locate them.
}
\label{dist_flux}
\end{figure}
The lack of fainter HII regions for the more distant galaxies is probably
due to a lack of spatial resolution to distinguish a HII complex from
diffuse H$\alpha$ emission. To test this hypothesis, we selected a subset
of galaxies between 20 and 30 Mpc and degraded (convolved and rebinned) their H$\alpha$ images to
simulate their appearance at 80 to 120 Mpc. As expected, the fainter HII
regions (log $L_{HII} < 38$) dropped below the threshold of detection.
However, due to the typically wide spacing of HII regions in LSB galaxies,
there was no significant increase in the brightness of the remaining HII
regions due to blending. We conclude that our sample will severely
undersample low luminosity HII regions for objects greater than 40 Mpc in
distance and, thus, any discussion of a HII region luminosity function must
take this bias into account.
Even for the more complete nearby portion of our sample ($D < 40$ Mpc) the
ratio of $L_{HII}/L_{H\alpha}$ is dramatically different from that found
by Youngblood \& Hunter. Their distribution (their Figure 10) displays
very few galaxies with ratios less than 80\%, such that a majority
of H$\alpha$ emission comes from distinct star forming regions, although
the determination method differs from our calculations in the sense that
they assign HII regions to complexes then compare the amount of H$\alpha$
flux from complexes versus their total fluxes. For our sample, a
significant amount of H$\alpha$ emission in LSB galaxies (typically 50\%)
arises from a warm, diffuse component, rather than directly from HII
complexes, in agreement with the dwarf galaxies studied by van Zee (2000).
The ionizing source of this diffuse component is difficult to determine
(Hoopes \it et al. \rm 2001). Although this conclusion is strongly dependent on
whether one can isolate small, weak HII regions in the diffuse component,
objects that our more stringent selection criteria would miss.
Another concern is that the brightest HII regions are found in the most
distant galaxies. This may be due to confusion, where the HII regions
selected by this study are, in fact, blends of fainter HII regions blurred
by distance. While this may be true for some individual cases, the number
of HII regions as a function of distance does not show a decreasing trend
with distance, a relationship one would expect if a number of fainter HII
regions are being mistakenly grouped together as one complex. The more
likely trend is that fainter HII regions are simply indistinct and confused
with diffuse H$\alpha$ emission, therefore, not selected by our criteria.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{lmax_dist.pdf}
\caption{\small The luminosity of the brightest HII region in a galaxy and
baryon mass as a function of distance. The brightest HII regions and most
massive galaxies are also the most distant objects in our sample, i.e. a
larger volume of the Universe must be sampled to find the largest LSB
galaxies.
}
\label{lmax_dist}
\end{figure}
Figure \ref{lmax_dist} displays the luminosity of the brightest HII region
($L_{max}$) and the baryon mass of a galaxy (stellar mass plus gas mass) as
a function of distance. As noted in Paper I, the most massive LSB galaxies
in our sample are at the largest distances. The brightness of the
brightest HII region also increases with distance, in step with the
baryon mass (see \S 6). We conclude that the brightest HII regions in LSB
galaxies are found in the most distant galaxies because of a
volume selection effect. The more distant objects in our sample are the
brightest by luminosity (and the largest in baryon mass) and are also
galaxies with the highest H$\alpha$ fluxes in the sample. The low mass,
low H$\alpha$ luminosity galaxies in the sample would not be found at large
distances due to the angular size limit to the PSS-II catalog. There is no
reason to believe that Malmquist bias plays a role in our sample, as it was
not selected by total or H$\alpha$ luminosity. The brighter HII regions in
distant galaxies simply reflect the diversity of LSB galaxies, where LSB
galaxies with bright 30 Doradus sized star forming complexes are rare.
However, due to the loss of fainter HII regions with distance in the
sample, in our following discussions we will distinguish between the
distant sample ($D > 40$ Mpc) and the more complete nearby sample.
For the sample as a whole, about 50\% of the imaged galaxies have between
75 to 200 pc/pixel resolution and 25\% have a resolution less than 50
pc/pixel; the radius of a HII region is estimated by the point where
the flux falls to 25\% of the peak flux. A plot of HII region radius ($r$, in
pc's) versus their H$\alpha$ luminosity is shown in Figure
\ref{flux_radius}. The slope of the relationship is consistent with
$L_{HII} \propto r^{2}$, meaning that we detect all the H$\alpha$ photons
produced in the complexes. Foreground extinction by dust is very small in
LSB galaxies compared to spirals, in agreement with the lack of far-IR
detection for LSB galaxies and their low mean metallicities (Kuzio de
Naray, McGaugh \& de Blok 2004). Hence, we make no corrections for
internal extinction in any of our quoted flux values.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{flux_radius.pdf}
\caption{\small The size of a HII region in parsecs versus the H$\alpha$
luminosity of the same region. A linear fit (blue line) is consistent with
a relation of $L_{H\alpha} \propto r^{2}$, meaning that there is little
extinction by dust in LSB galaxy HII regions.
}
\label{flux_radius}
\end{figure}
\section{HII Region Numbers}
The number of HII regions as a function of galaxy mass is shown in Figure
\ref{num_mass}. There is a relation between the number of HII regions and
galaxy mass similar to that found by Youngblood \& Hunter (1999) (blue line in
Figure \ref{num_mass}). Again, the distant galaxies in our sample fail to
display any relationship due to the under counting of fainter HII regions.
The nearby sample displays the same slope as Youngblood \& Hunter, although
our more stringent detection criteria shifts our number counts to lower
values.
The relationship between number and galaxy mass may simply reflect the
statistical effect that more gas material in a larger volume results in more
star formation events. As star formation is driven by local density
(Helmboldt \it et al. \rm 2005), more volume will produce more individual star
forming regions. There is also a trend of brightest HII region flux with
the number of HII regions; but, again, this reflects the statistical
behavior that a larger volume provides a greater chance of a larger star
formation event.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{num_mass.pdf}
\caption{\small The number of HII regions per galaxy as a function of
baryon mass (gas plus stellar mass, see McGaugh \& de Blok 1997). The line
is from Youngblood \& Hunter (1999) for dwarf irregulars. Our nearby
sample follows their relationship (albeit with lower total numbers due to
our more stringent selection criteria); however, the distant galaxies are
deficient in low luminosity HII regions.
}
\label{num_mass}
\end{figure}
The number density of HII regions has a weak trend of
decreasing density with increasing galaxy mass, where the typical number
density (for the $D < 40$ Mpc sample) is between 0.1 and 1 HII regions per
kpc$^{2}$ with a mean of 0.3. This is similar to the mean value for Sm/Im
type galaxies from Kennicutt, Edgar \& Hodge (1989). There is no strong trend
of number density with galaxy mass/size in this nearby sample; however, the
$D < 40$ Mpc sample has a limited dynamic range in galaxy size and mass.
\section{HII Regions Locations}
The relationship between the H$\alpha$ luminosity of each HII region and
its distance from the galaxy center is shown in Figure \ref{central_dist}.
While the absolute distance, in kpcs, displays a trend that the
brightest HII regions are found in the outer regions (left panel), this is
an artifact of the effect that the largest (brightest) galaxies in the
sample have the brightest HII regions. When the distance from the galaxy
core is displayed in terms of the scale length of the galaxy ($\alpha$,
from exponential fits to the $V$ frames), the relationship disappears
(right panel). The lack of radial correlation in Figure \ref{central_dist}
is reinforced by the fact that the locations of the brightest HII regions
are also independent of their distance from the galaxy center.
Our interpretation of the lack of correlation between HII region luminosity
and distance from the galaxy center is that it reflects the underlying
gas distribution in LSB galaxies. In general, the HI gas density in LSB
galaxies is much more extended than the optical image and the density
levels are flat out to several optical scale lengths (de Blok, McGaugh \&
van der Hulst 1996). While it is the molecular gas, not neutral hydrogen,
that drives star formation (Scoville 2012), the distribution of H$_2$ gas
in LSB galaxies is not directly known (Matthews \it et al. \rm 2005) and HI serves
as a necessary proxy. However, since the density of HI gas in LSB galaxies
is low (as are their stellar densities) and typically constant with radius
(stellar surface brightness profiles are also very shallow exponentials),
the lack of a radial trend in decreasing gas density with radius means that
star formation will be dominated by local density enhancements rather than
global processes. And, as concluded by other studies, it is clear that the
spatial distribution of star formation in LSB galaxies differs from the
global patterns found in spirals (Bigiel \it et al. \rm 2008, O'Neil, Oey \& Bothun
2007).
Presumably, star formation will halt when the molecular gas surface density drops
below a critical value, but an estimate of where that radius occurs
requires more HI information than is available for our sample. However,
there are numerous examples of HII regions at very low surface brightnesses
in LSB galaxies (see Figure \ref{compare_aps} for an example where
H$\alpha$ emission is found beyond 5 scale lengths). Over 1/3 of the HII
regions in our sample occur in regions where the surface brightness is
below 25 $V$ mag arcsecs$^{-2}$ (which corresponds to less than 4
$L_{\sun}$ pc$^{-2}$) and 1/2 the HII regions have no optical signature (an
optical knot or surface brightness enhancement) even at such low surface
brightnesses (indicating a very low cluster mass). This is an important
observation with respect to LSB galaxies as star formation has always been
assumed to be inhibited in low density environments, but not non-existent.
Star formation, as traced by H$\alpha$ is loosely correlated with optical
surface brightness in LSB galaxies, in the sense that for HII regions
without detectable optical knots there is the trend that the brightest HII
regions are located in regions of the galaxy with higher surface
brightness. However, the trend is by no means exact and there exist many
examples of strong HII regions in areas of very low stellar density.
Gravitational instability models suggest a threshold for star formation
where the gas density falls below a critical value (Kennicutt 1998) and
star formation efficiency in HSB galaxies generally follows stellar
densities more strongly than gas densities (Leroy \it et al. \rm 2008). But, star
formation in the low surface brightness regions of our sample suggests some
other mechanism allows the formation of the cold phase of the ISM without the
gravitational pull from stellar mass (see also Thornley, Braine \& Gardan
2006) and that gravitational instability from stellar density does not play
a dominant role.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{central_dist.pdf}
\caption{\small HII region luminosity as a function of distance from the
galaxy center in terms of absolute kpc's and normalized scale lengths. The
artificial relation between kpc and luminosity in the left panel is due to
the fact that the largest galaxies have the brightest HII regions. When
galactic distance is normalized by galaxy scale length ($\alpha$), the
relationship disappears. Since the HI densities of LSB galaxies are relatively
constant (de Blok, McGaugh \& van der Hulst 1996), this diagram simply
reflects the fact that local density drives star formation in LSB galaxies
rather than global patterns found in spirals.
}
\label{central_dist}
\end{figure}
HII regions tend to avoid the cores of LSB galaxies, as can be seen in
the scale length panel of Figure \ref{central_dist}. While the central peak
of stellar luminosity in LSB galaxies is ill-defined, due to their
irregular morphology, their outer isophotes are usually fairly regular and
can be used to define a center of stellar mass. The fact that HII regions
tend to be found in regions outside the core may simply reflect the lumpy
distribution of stars and gas in LSB's (Pildis, Schombert \& Eder 1997)
rather than formation effects (i.e., spiral bulges). LSB galaxies rarely
have the central concentrations, bulges or even AGN behavior that would
indicate present, or past, nuclear star formation that is common in many
starburst and spiral galaxies (Schombert 1998).
\section{Brightest HII Regions}
One area where completeness is not an issue is the characteristics of the
brightest HII region in each galaxy. This region represents
the largest site of star formation in each galaxy and, presumably, the
largest concentration of ionizing O stars. While the HII region luminosity
function predicts the number of bright HII regions in a galaxy, there is no
particular model or framework for understanding the relationship between
the luminosity/mass of the brightest region and global characteristics of a
galaxy (Leroy \it et al. \rm 2008).
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{lmax_mass.pdf}
\caption{\small The relationship between the H$\alpha$ luminosity of the
brightest HII region in each galaxy, versus the total stellar and gas mass
of the galaxy. While a larger available gas supply would seem necessary
(if not sufficient) for a large HII region, the correlation with stellar
mass implies a longer evolutionary connection between the star formation
history of an LSB galaxy and its current SFR.
}
\label{lmax_mass}
\end{figure}
There are clear, distinct correlations between the luminosity of the
brightest HII region ($L_{max}$) and galaxy luminosity (i.e., a proxy for
stellar mass), gas mass and the total H$\alpha$ luminosity of the
galaxy. The first two correlations are shown in Figure \ref{lmax_mass},
where stellar luminosity is converted to stellar mass following the
prescription of McGaugh \& de Blok (1997) and gas mass corrected from HI
mass for metallicity and molecular contributions. The correlation with
total H$\alpha$ luminosity is shown in Figure \ref{lmax_lha}.
If the amount of local star formation is determined by a random process of
gas collection (e.g., cloud-cloud collisions), then the correlations with
galaxy mass would simply reflect the statistical nature of more star
formation with larger gas mass and a higher chance of building a large,
bright HII complex with more available gas. In that scenario, the
correlations should be stronger with gas mass versus stellar mass (as the
available gas reservoir is the fuel for star formation, not stellar mass),
and the fact that there is no significant difference may signify a strong
evolutionary connection between the formation of stellar mass and the
available gas supply. At the very least, the current SFR in an LSB galaxy
has a strong evolutionary connection with its past as defined by stellar
mass build-up, even if a significant fraction of the current SF is
occurring in low stellar density regions (perhaps future HSB regions).
The statistical nature can be understood better in terms of comparing the
total star formation rate of a galaxy (as given by the total $L_{H\alpha}$)
and the luminosity of the brightest HII region ($L_{max}$). The star
formation rates of LSB galaxies are low compared to other irregular
galaxies (Schombert, Maciel \& McGaugh 2011). However, if the distribution
of HII region luminosities follows the same luminosity function as other
galaxies, then a lower total $L_{H\alpha}$ simply corresponds to a smaller
number of HII regions.  Thus, the probability of finding an HII region of a
given luminosity decreases with increasing HII region luminosity.
The correlation between total galaxy H$\alpha$ luminosity and the
luminosity of the brightest HII region is found in Figure \ref{lmax_lha}
(top panel), along with the ratio of the brightest HII region luminosity
and the total H$\alpha$ luminosity (bottom panel). The brightest HII
regions correspond to approximately 200 O7V stars (Werk \it et al. \rm 2008), yet as
the total SFR increases for the sample, they contribute only 10 to 20\% to
the total H$\alpha$ luminosity. The diffuse component means that this
value will never be above 0.5 in LSB galaxies.
In order to test the idea that the properties of the observed HII regions
are simply the result of small number statistics, we constructed a simple
Monte Carlo simulation by randomly selecting HII region luminosities from
the luminosity function as given in Youngblood \& Hunter (1999) for dwarf
irregulars. The HII region luminosities were randomly selected by their
luminosity function probability, then added until the total set matched a
given $L_{H\alpha}$ value. The luminosity of the brightest HII region was
then output. After running 10,000 simulations per luminosity bin, the mean
brightest HII region luminosity was determined as a function of
$L_{H\alpha}$. The results from these simulations are shown as the blue
lines in Figure \ref{lmax_lha}.
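The logic of a single realization is simple enough to sketch in a few
lines of Python.  In the sketch below the luminosity function slope and
limits are illustrative placeholders, not the Youngblood \& Hunter
(1999) fit used for the actual simulations:
\begin{verbatim}
import numpy as np

def sample_lf(rng, alpha=2.0, logL=(36.0, 40.0)):
    # Inverse-transform draw from dN/dL ~ L^-alpha; the slope and
    # limits here are placeholders for the adopted LF.
    lo, hi = 10.0**logL[0], 10.0**logL[1]
    u = rng.random()
    return (lo**(1-alpha) + u*(hi**(1-alpha) - lo**(1-alpha)))**(1.0/(1-alpha))

def mean_brightest(logL_total, n_trials=10000, seed=42):
    rng = np.random.default_rng(seed)
    target, maxima = 10.0**logL_total, []
    for _ in range(n_trials):
        total = lmax = 0.0
        while total < target:       # add regions until the galaxy's
            l = sample_lf(rng)      # total H-alpha flux is matched
            total += l
            lmax = max(lmax, l)
        maxima.append(lmax)
    return np.log10(np.mean(maxima))
\end{verbatim}
Evaluating \verb|mean_brightest| over a grid of $L_{H\alpha}$ bins
yields curves analogous to the blue lines in Figure \ref{lmax_lha}.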
The agreement between the $L_{max}$ simulation and the data (top panel in
Figure \ref{lmax_lha}) is excellent and demonstrates that, despite previous
claims of truncated LF's in LSB galaxies (O'Neil, Oey \& Bothun 2007,
Helmboldt \it et al. \rm 2005), the luminosity of the brightest HII regions is
consistent with the same pattern of HII regions for dwarf irregulars. The
ratio of $L_{max}$ and the total H$\alpha$ luminosity is also in agreement
with the simulations, where a lack of $L_{max}/L_{H\alpha}$ near unity
simply reflects the statistical improbability of finding a single HII
region that contains all the H$\alpha$ flux of a galaxy including any
diffuse emission.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{lmax_lha.pdf}
\caption{\small The relationship between brightest HII region H$\alpha$
luminosity ($L_{max}$) and total galaxy H$\alpha$ luminosity
($L_{H\alpha}$) and the fraction of the brightest HII region to the total
galaxy flux. The brightest HII regions (log $L_{max} >$ 39) correspond to
a cluster of several hundred O7V stars. Yet, the fractional contribution
to the total galaxy H$\alpha$ luminosity decreases to less than 20\% for
the brightest galaxies. The blue lines display the result of a Monte Carlo
simulation that selects HII regions from the luminosity function defined
for dwarf irregulars by Youngblood \& Hunter (1999). There is no
indication that the HII regions in LSB galaxies display any difference from
HII regions in other irregular galaxies.
}
\label{lmax_lha}
\end{figure}
\section{Optical Colors and HII regions}
The identification of HII regions in the H$\alpha$ images allows us to use
the same apertures on the $B$ and $V$ images to extract continuum
luminosities and $B-V$ colors. As described in \S2, we have divided the
sample of identified optical and H$\alpha$ knots into three types; 1) those
regions with H$\alpha$ emission, but no enhanced optical flux above the
mean surface brightness of the local isophote value, 2) knots with
both H$\alpha$ and optical emission and 3) knots only visible in $V$ images
without detectable H$\alpha$ emission.  These three regions would,
presumably, correspond to a low luminosity HII region (no
visible stars), a young HII region with some blowout and visible stars
(Orion type HII region) and an evolved stellar cluster or association sufficiently
old to be free of any remaining hot gas. The regions of the first type
(no optical enhancement) are slightly redder than those HII regions with an
optical knot, but display no extra reddening compared to the regions
surrounding them. They may, in fact, simply represent regions where the
luminosity of the underlying star cluster is small compared to the local
galaxy light, although this is a problematic interpretation due to the low
surface brightness nature of these regions.
An example of H$\alpha$ versus optical knots is shown in Figure
\ref{compare_aps}. In this Figure, the H$\alpha$ and $V$
frames for F608-1 are plotted at the same scale (150 arcsecs
to a side). The HII regions are marked in both panels by red circles, as
determined from the H$\alpha$ image. There are several examples of
H$\alpha$ knots with no visible optical emission (the two HII regions
farthest to the right and topmost). There are also several examples of an
HII region with a distinct optical knot in the $V$ image (e.g., the three
brightest H$\alpha$ regions). The faintest HII regions correspond to log
$L_{H\alpha}$ between 36.2 and 36.5. The brightest three HII regions are
log $L_{H\alpha}$ of 36.8, 36.9 and 37.0, comparable to a cluster of
stellar mass between $3\times10^3$ and $7\times10^3 M_{\sun}$ ionized by a
dozen O stars.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.30,angle=0]{compare_aps.pdf}
\caption{\small The $V$ and H$\alpha$ images for LSB galaxy F608-1. Each
frame is 150 arcsecs to a side. There are several examples of H$\alpha$
knots with no visible optical emission (the two HII regions farthest to the
right and topmost). There are also several examples of an HII region with
a distinct optical knot in the $V$ image (e.g., the three brightest
H$\alpha$ regions). The faintest HII regions correspond to log
$L_{H\alpha}$ between 36.2 and 36.5. The brightest three HII regions are
log $L_{H\alpha}$ of 36.8, 36.9 and 37.0, comparable to a cluster of
stellar mass between $3\times10^3$ and $7\times10^3 M_{\sun}$ ionized by a
dozen O stars.
}
\label{compare_aps}
\end{figure}
For the 429 regions with H$\alpha$ emission, we have plotted their
H$\alpha$ luminosities versus their $B-V$ colors (determined through the
same apertures as the H$\alpha$ fluxes) in Figure \ref{lhii_BV}.  No internal
extinction corrections have been applied, although gas and dust are
probably available in sufficient quantities to alter the colors.  More
importantly, no effort was made to subtract out the underlying galaxy light
(see below) which is necessary to compare to regions without any obvious
optical emission.
Figure \ref{lhii_BV} displays a very weak trend for bluer optical colors
with increasing H$\alpha$ luminosity. This trend is as expected with
greater H$\alpha$ flux implying a larger number of ionizing O stars per HII
region and, therefore, greater blue flux (see Caldwell \it et al. \rm 1991).
However, the weakness of the relationship only emphasizes the rich color
structure found in LSB galaxies, where star forming regions are often
associated with blue shells and filaments, alongside color features
uncorrelated with star forming regions (to be studied in a later paper).  It is worth noting that
the color-H$\alpha$ trend is not as blue as HII regions in early-type
spirals (Caldwell \it et al. \rm 1991). In that sample, HII regions with log
$L_{H\alpha} = 38.5$ have $B-V$ colors less than zero. Many of the regions
with optical emission have much bluer colors (see below) and the colors for
low luminosity HII regions are correlated with the nearby galaxy colors. We
anticipate that the underlying colors will be less than $B-V=0.0$ once the galaxy
light is subtracted.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{lhii_BV.pdf}
\caption{\small A contour density plot of the H$\alpha$ luminosity versus
HII region $B-V$ color for 349 HII regions. There is a weak trend for
bluer optical colors with increasing H$\alpha$ luminosity, consistent with
more blue ionizing stars in the brighter HII regions. These colors are for
all HII regions with and without optical emission (star clusters) without
subtraction of the underlying galaxy light. Regions with blue optical
enhancement will have much bluer $B-V$ colors when the surrounding galaxy
is subtracted.
}
\label{lhii_BV}
\end{figure}
A comparative histogram of $B-V$ colors within the various H$\alpha$ and
$V$ knots is shown in Figure \ref{color_hist}. These colors were
calculated by subtracting the local galaxy isophote from those HII regions
with optical knots, leaving only the luminosity above the underlying galaxy
luminosity density. For HII regions without optical knots, the local
galaxy color is used. This technique does not bias the calculated color
for the optical knots, but it was unsurprising to find the majority of them
have $B-V$ colors bluer than the local galaxy color, as was noted in the
two color maps from Paper I.
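In practice the subtraction amounts to the following minimal sketch
(the names are illustrative, and the $B$ and $V$ aperture fluxes are
assumed to be on a common zero-point scale, so the photometric
zero-point difference is omitted):
\begin{verbatim}
import numpy as np

def knot_BV(fB_ap, fV_ap, muB_local, muV_local, area_pix, bv_galaxy):
    # Subtract the local galaxy isophote (flux per pixel times the
    # aperture area) to keep only the light of the knot itself.
    fB = fB_ap - muB_local * area_pix
    fV = fV_ap - muV_local * area_pix
    if fB <= 0.0 or fV <= 0.0:
        # No optical enhancement above the isophote: fall back to
        # the local galaxy color, as done for pure H-alpha knots.
        return bv_galaxy
    return -2.5 * np.log10(fB / fV)   # plus any B,V zero-point term
\end{verbatim}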
Here, the reddest colors are found for the H$\alpha$ knots without any
optical signature. It should be noted that these H$\alpha$ only knots also
typically have the lowest H$\alpha$ luminosities. In other words, these are
regions that are ionized by a single or a very small number of O or B stars.
Their mean $B-V$ color is 0.45, which basically confirms that these regions
have little effect on the surface brightness or local color as these values
conform to the mean total color of LSB galaxies.  The underlying stellar
associations lack sufficient luminosity to alter the galaxy's isophotes and
colors, even at these low surface brightness regimes ($L_{\sun}$ pc$^{-2}$
= 1 to 4).
Regions which display HII emission and an optical enhancement tend to be
bluer than sole H$\alpha$ knots (mean $B-V$ = 0.25) and are also brighter
in H$\alpha$ luminosity, with values that correspond to between tens and
hundreds of O stars per region.  This trend of optical detection correlated
with H$\alpha$ emission was also seen in early-type spirals by Caldwell
\it et al. \rm (1991). The bluest knots agree well with the bluest regions for
spirals ($B-V = -0.2$). Lastly, the optical knots without H$\alpha$
emission span a full range of $B-V$ colors, although with a mean color
slightly redder than the optical knots with H$\alpha$ emission. The
slightly redder colors probably indicate an evolutionary effect, i.e., as a
cluster ages and the ionizing stars die off, the HII region dissipates and
the cluster fades and reddens (see below).
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8,angle=0]{color_hist.pdf}
\caption{\small Normalized absolute $V$ luminosity and color histogram for
all 318 knots with $B-V$ colors. HII knots are regions with only H$\alpha$
emission and no visible optical enhancement above the local isophote. $V$
and HII knots are regions with a distinct knot in both the H$\alpha$ and
$V$ images (typically brighter in H$\alpha$ than sole H$\alpha$ knots).
$V$ knots have optical emission but no detectable H$\alpha$ emission.
}
\label{color_hist}
\end{figure}
To examine this evolutionary process in greater detail, we plot in Figure
\ref{lha_mag} the HII region H$\alpha$ luminosity versus the absolute $V$
magnitude of the underlying knot (presumably a stellar association),
$M_{cluster}$.  The absolute $V$ magnitude of the stellar association or
cluster ranges from values of $-8$ to $-14$, which would correspond to a
range of cluster masses from open clusters to globular sized if age were
not a factor. However, assigning a cluster mass to the $V$ magnitude is a
difficult procedure, for while the $L_{HII}$ luminosity reflects the number
of ionizing O stars per region, the total number of stars (as given by the
IMF) will be extremely sensitive to the age of the HII region (Leitherer
\it et al. \rm 2010). For example, a 10 Myr $10^4 M_{\sun}$ cluster has the same
$V$ magnitude as a 100 Myr $1.5 \times 10^5 M_{\sun}$ cluster and a 500 Myr
$10^6 M_{\sun}$ cluster (Bruzual \& Charlot 2003).
We have divided the sample into red ($B-V > 0.3$) and blue ($B-V < 0.3$)
clusters.  The division of the sample by color is clear: the blue clusters
have higher $L_{HII}$ values than red clusters at constant $V$ cluster
luminosity.  The inverse interpretation, that red clusters have brighter
$V$ magnitudes at a constant H$\alpha$ value, is opposite to what one would
expect from spectroevolutionary models, where an aging cluster will redden
by 0.3 in $B-V$ over 500 Myrs while the luminosity of the underlying cluster
decreases by 3 magnitudes.  A more plausible scenario is that age
is the defining factor in the difference between red and blue clusters in
Figure \ref{lha_mag}. The blue clusters are younger and have more ionizing
stars per unit cluster mass producing higher H$\alpha$ luminosities. Over
100 Myrs, the number of ionizing stars decreases by a factor of 3
(Werk \it et al. \rm 2008) while the $B-V$ color has reddened by 0.2. This is
consistent with the trend seen in Figure \ref{lha_mag}.
In order to test this hypothesis, we have constructed a series of stellar
population models taking the population colors and luminosities from
Bruzual \& Charlot (2003) for low metallicity ([Fe/H] = $-$0.4) tracks.
Starting with a given stellar mass, we apply the IMF from Kroupa \it et al. \rm
(2011) to determine the number of stars with ionizing photons. We then
apply the ionization Q curves from Martins \it et al. \rm (2005) to determine the H$\alpha$
luminosity of the cluster as a function of age. Each zero age model is
then aged using a standard stellar lifetime as a function of mass, the Q
values are recalculated and new cluster luminosities are determined. The
resulting tracks are shown in Figure \ref{lha_mag}.
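The structure of these models can be summarized in the following
minimal sketch, where the single-slope IMF, the lifetime scaling and
the ionizing-photon fit are rough placeholders for the Kroupa IMF, the
stellar lifetimes and the Martins \it et al. \rm (2005) Q values used
in practice:
\begin{verbatim}
import numpy as np

def draw_cluster(m_cluster, rng, slope=2.3, m_lo=0.1, m_hi=100.0):
    # Single-slope approximation to the upper Kroupa IMF, sampled by
    # inverse transform until the target cluster mass is reached.
    masses, total, g = [], 0.0, 1.0 - slope
    while total < m_cluster:
        u = rng.random()
        m = (m_lo**g + u*(m_hi**g - m_lo**g))**(1.0/g)
        masses.append(m); total += m
    return np.array(masses)

def lha_at_age(masses, age_myr):
    # Schematic lifetimes and ionizing outputs; only massive stars
    # that are still alive at age_myr contribute ionizing photons.
    alive = masses[30.0*(masses/10.0)**-1.5 > age_myr]
    ionizing = alive[alive > 15.0]
    logq = 48.7 + 3.5*np.log10(ionizing/30.0)
    return 1.37e-12 * np.sum(10.0**logq)   # Case-B Q -> L(H-alpha)
\end{verbatim}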
As can be seen in Figure \ref{lha_mag}, the star forming regions in LSB
galaxies range in stellar mass from globular cluster sized ($10^6
M_{\sun}$), such as 30 Doradus, to small associations ($10^3 M_{\sun}$),
such as the California nebula and the Taurus cloud in our own Galaxy. In
addition, HII regions vary in age from 2 to 15 Myrs, although a
majority of the detected HII regions have ages between 10 and 15 Myrs. We
note that the position of the model tracks with respect to H$\alpha$
luminosity are extremely sensitive to the shape of the upper end of the
IMF. However, the top edge of our sample agrees well with the zero age
line from our models, indicating that the upper end of the Kroupa IMF
closely represents the IMF in LSB galaxies.
Low luminosity HII regions, lacking any optical signature, would presumably
fall to the bottom left of this diagram. For comparison, we have plotted
the data from Zastrow, Oey \& Pellegrini (2013) for single O or B clusters in
the LMC (black symbols in Figure \ref{lha_mag}).  Also shown are
ionization curves for single stars of mass 10 to 50 $M_{\sun}$.  HII
regions with log $L_{H\alpha}$ less than 36.5 would fall in this region,
and have visual luminosities and mean surface brightnesses below detection
levels (a $10^3 M_{\sun}$ cluster within a 100pc pixel would only increase
the surface brightness of that pixel by 1\%).
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.9,angle=0]{lha_mag.pdf}
\caption{\small For HII regions with optical signatures in the $V$ frames,
the magnitude of the underlying cluster is plotted versus the HII region
H$\alpha$ luminosity. The correlation with brighter clusters for
increasing H$\alpha$ luminosity is clear. Dividing the sample by a $B-V$
color of 0.3 displays a trend for the bluer clusters to be brighter in
H$\alpha$ luminosity than red clusters. Single star HII regions from the
LMC (Zastrow, Oey \& Pellegrini 2012) are shown as black symbols. Stellar
population models are shown as dotted tracks for cluster masses from $10^3$
to $10^6 M_{\sun}$. Model ages are indicated in Myrs. Typical data errors are shown in
the bottom right.
}
\label{lha_mag}
\end{figure}
\section{Conclusions}
LSB galaxies typically have low total SFR's and, thus, fewer HII regions to
study compared to spirals and irregulars. We have attempted
to overcome this deficiency by observing a larger sample over a greater
volume of the local Universe. Our sample of 54 LSB galaxies produced 429
HII regions for study, most having sufficient S/N in their optical images
to compare broadband luminosities and colors. Four galaxies in our sample
were undetected in H$\alpha$ and have the lowest gas mass fractions of the
sample, suggesting their lower gas supply is responsible for their lack of
star formation.
We summarize our results as the following:
\begin{description}
\item{(1)} LSB galaxies typically have fewer HII regions per galaxy than
other irregular galaxies; however, LSB galaxies display a full range of HII
region luminosities, from regions powered by a single O or B star (log
$L_{H\alpha} < 36.5$) to 30 Doradus sized complexes with log
$L_{H\alpha} > 40$.  The correlation between HII region luminosity and size
is well defined with a slope of two, indicating that we are observing all of
the photons from the ionized gas.
\item{(2)} LSB galaxies have a wide range, from 10 to 90\%, in the fraction
of the total $L_{H\alpha}$ luminosity contributed by HII regions.  This
fraction has no correlation with galaxy baryon mass.
\item{(3)} There is no correlation between the HII region luminosity and
spatial position in a galaxy. The brightest HII regions do not
preferentially appear at any particular radius as normalized by disk scale
length.
\item{(4)} Roughly 1/2 of the HII regions have a distinct optical
enhancement above the surrounding isophote. This is interpreted to be
stellar mass produced by the star formation event (which is confirmed by
their bluer colors compared to the surrounding galaxy color).  HII regions
without enhancement are still, loosely, associated with local stellar
density (i.e., surface brightness) in proportion to their $L_{H\alpha}$.
However, there are numerous examples of bright HII regions in faint galaxy
regions.
\item{(5)} The luminosity of the brightest HII region in each galaxy is
correlated with the galaxy's stellar mass, gas mass and total star
formation rate. Monte Carlo simulations confirm that these correlations
are replicated by an underlying HII region luminosity function that matches
that for star forming irregulars. In other words, there is no evidence
that the distribution of HII region luminosities in LSB galaxies differs from
that of star forming HSB galaxies, and the underlying star formation
mechanisms appear to be the same.
\item{(6)} As observed in spiral galaxies, there is a weak correlation
between the color of a HII region and its H$\alpha$ luminosity. And, while
regions with H$\alpha$ emission are bluer with increasing H$\alpha$
luminosity, there are blue regions in a LSB galaxy without H$\alpha$
emission.
\item{(7)} Comparison with stellar population models indicates that the HII
regions in LSB galaxies range in mass from a few $10^3 M_{\sun}$ to
globular cluster sized systems. Their ages are consistent with clusters
from 2 to 15 Myrs old.  The faintest HII regions are also similar
to single O or B star associations seen in the LMC. Thus, star formation in
LSB galaxies covers the full range of stellar cluster mass and age.
\end{description}
The hope in studying LSB galaxies was to reveal, perhaps, a new realm of
star formation processes or conditions.  While the class of LSB galaxies
differs from HSB galaxies in terms of bluer colors, lower stellar
densities and higher gas fractions, there is nothing particularly
unusual about the individual sites of star formation under more detailed
examination. The local process of star formation, cluster size and mass,
IMF and gas physics, all are consistent with the style of star formation
found in HII regions in spirals and irregulars. With respect to their
global properties, the HII regions in LSB galaxies are more similar to
other irregular galaxies, again reflecting the dominance of a sporadic gas
distribution over coherent kinematic processes (i.e., spiral patterns).
\acknowledgements We gratefully acknowledge KPNO/NOAO for the telescope
time to complete this project. Software for this project was developed
under NASA's AIRS and ADP Programs.
\newcommand{\Section}[2]{\section{#2}\label{#1}}
\newcommand{\Bibitem}[1]{\bibitem{#1}}
\newcommand{\Label}[1]{\label{#1}}
\newcommand{\eea}{\end{eqnarray}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bdm}{\begin{displaymath}}
\newcommand{\edm}{\end{displaymath}}
\newcommand{\dpsty}{\displaystyle}
\newcommand{\bc}{\begin{center}}
\newcommand{\ec}{\end{center}}
\newcommand{\ba}{\begin{array}}
\newcommand{\ea}{\end{array}}
\newcommand{\bab}{\begin{abstract}}
\newcommand{\eab}{\end{abstract}}
\newcommand{\btab}{\begin{tabular}}
\newcommand{\etab}{\end{tabular}}
\newcommand{\bit}{\begin{itemize}}
\newcommand{\eit}{\end{itemize}}
\newcommand{\ben}{\begin{enumerate}}
\newcommand{\een}{\end{enumerate}}
\newcommand{\bfig}{\begin{figure}}
\newcommand{\efig}{\end{figure}}
\newcommand{\arreq}{&\!=\!&}
\newcommand{\arrmi}{&\!-\!&}
\newcommand{\arrpl}{&\!+\!&}
\newcommand{\arrap}{&\!\!\!\approx\!\!\!&}
\newcommand{\non}{\nonumber}
\newcommand{\align}{\!\!\!\!\!\!\!\!&&}
\def\lsim{\; \raise0.3ex\hbox{$<$\kern-0.75em
\raise-1.1ex\hbox{$\sim$}}\; }
\def\gsim{\; \raise0.3ex\hbox{$>$\kern-0.75em
\raise-1.1ex\hbox{$\sim$}}\; }
\newcommand{\DOT}{\hspace{-0.08in}{\bf .}\hspace{0.1in}}
\newcommand{\Laada}{\hbox {$\sqcap$ \kern -1em $\sqcup$}}
\newcommand\loota{{\scriptstyle\sqcap\kern-0.55em\hbox{$\scriptstyle\sqcup$}}}
\newcommand\Loota{{\sqcap\kern-0.65em\hbox{$\sqcup$}}}
\newcommand\laada{\Loota}
\newcommand{\qed}{\hskip 3em \hbox{\BOX} \vskip 2ex}
\def{\rm Re}\hskip2pt{{\rm Re}\hskip2pt}
\def{\rm Im}\hskip2pt{{\rm Im}\hskip2pt}
\newcommand{\real}{{\rm I \! R}}
\newcommand{\Z}{{\sf Z \!\!\! Z}}
\newcommand{\complex}{{\rm C\!\!\! {\sf I}\,\,}}
\def\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}}
\def\leavevmode\hbox{\small1\kern-3.3pt\normalsize1}{\leavevmode\hbox{\small1\kern-3.3pt\normalsize1}}
\newcommand{\slask}{\!\!\!/}
\newcommand{\bis}{{\prime\prime}}
\newcommand{\pa}{\partial}
\newcommand{\na}{\nabla}
\newcommand{\ra}{\rangle}
\newcommand{\la}{\langle}
\newcommand{\goto}{\rightarrow}
\newcommand{\swap}{\leftrightarrow}
\newcommand{\EE}[1]{ \mbox{$\cdot10^{#1}$} }
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\at}[2]{\left.#1\right|_{#2}}
\newcommand{\norm}[1]{\|#1\|}
\newcommand{\abscut}[2]{\Abs{#1}_{\scriptscriptstyle#2}}
\newcommand{\vek}[1]{{\rm\bf #1}}
\newcommand{\integral}[2]{\int\limits_{#1}^{#2}}
\newcommand{\inv}[1]{\frac{1}{#1}}
\newcommand{\dd}[2]{{{\partial #1}\over{\partial #2}}}
\newcommand{\ddd}[2]{{{{\partial}^2 #1}\over{\partial {#2}^2}}}
\newcommand{\dddd}[3]{{{{\partial}^2 #1}\over
{\partial #2 \partial #3}}}
\newcommand{\dder}[2]{{{d #1}\over{d #2}}}
\newcommand{\ddder}[2]{{{d^2 #1}\over{d {#2}^2}}}
\newcommand{\dddder}[3]{{d^2 #1}\over
{d #2 d #3}}
\newcommand{\dx}[1]{d\,^{#1}x}
\newcommand{\dy}[1]{d\,^{#1}y}
\newcommand{\dz}[1]{d\,^{#1}z}
\newcommand{\dl}[1]{\frac{d\,^{#1}l}{(2\pi)^{#1}}}
\newcommand{\dk}[1]{\frac{d\,^{#1}k}{(2\pi)^{#1}}}
\newcommand{\dq}[1]{\frac{d\,^{#1}q}{(2\pi)^{#1}}}
\newcommand{\bfT}{{\bf T }}
\def{\rm GeV}{{\rm GeV}}
\def{\rm\ MeV}{{\rm\ MeV}}
\def{\rm\ keV}{{\rm\ keV}}
\def{\rm\ TeV}{{\rm\ TeV}}
\newcommand{\cA}{{\cal A}}
\newcommand{\cB}{{\cal B}}
\newcommand{\cD}{{\cal D}}
\newcommand{\cE}{{\cal E}}
\newcommand{\cG}{{\cal G}}
\newcommand{\cH}{{\cal H}}
\newcommand{\cL}{{\cal L}}
\newcommand{\cO}{{\cal O}}
\newcommand{\cT}{{\cal T}}
\newcommand{\cN}{{\cal N}}
\newcommand{\cR}{{\cal R}}
\newcommand{\rvac}[1]{|{\cal O}#1\rangle}
\newcommand{\lvac}[1]{\langle{\cal O}#1|}
\newcommand{\rvacb}[1]{|{\cal O}_\beta #1\rangle}
\newcommand{\lvacb}[1]{\langle{\cal O}_\beta #1 |}
\newcommand{\bb}{\bar{\beta}}
\newcommand{\bt}{\tilde{\beta}}
\newcommand{\ctH}{\tilde{\cal H}}
\newcommand{\chH}{\hat{\cal H}}
\newcommand{\1}{\aa}
\newcommand{\2}{\"{a}}
\newcommand{\3}{\"{o}}
\newcommand{\4}{\AA}
\newcommand{\5}{\"{A}}
\newcommand{\6}{\"{O}}
\newcommand{\al}{\alpha}
\newcommand{\g}{\gamma}
\newcommand{\Del}{\Delta}
\newcommand{\e}{\textrm{e}}
\newcommand{\eps}{\epsilon}
\newcommand{\lam}{\lambda}
\newcommand{\Om}{\Omega}
\newcommand{\ve}{\varepsilon}
\newcommand{\mn}{{\mu\nu}}
\newcommand{\vp}{\varphi}
\newcommand{\rf}[1]{(\ref{#1})}
\newcommand{\nn}{\nonumber \\*}
\newcommand{\bfB}{\bf{B}}
\newcommand{\bfv}{\bf{v}}
\newcommand{\bfx}{\bf{x}}
\newcommand{\bfy}{\bf{y}}
\newcommand{\vx}{\vec{x}}
\newcommand{\vy}{\vec{y}}
\newcommand{\oB}{\overline{B}}
\newcommand{\oI}{\overline{I}}
\newcommand{\oR}{\overline{R}}
\newcommand{\rar}{\rightarrow}
\newcommand{\ti}{\times}
\newcommand{\slsh}{\hskip-5pt/}
\newcommand{\sm}{Standard~Model~}
\newcommand{\MP}{M_{\rm Pl}}
\newcommand{\mpl}{M_{\rm Pl}}
\newcommand{\tp}{t_{\rm Pl}}
\def\mes#1{{d^3{#1}\over (2\pi )^3}}
\newcommand{\pmin}{p_{\rm min}}
\newcommand{\pmax}{p_{\rm max}}
\newcommand{\fo}{f_0}
\newcommand{\foi}{f_{0,i}\,}
\newcommand{\fop}{f_0^P}
\newcommand{\fou}{f_0^U}
\def\rule{14cm}{0pt}\and{\rule{14cm}{0pt}\and}
\newcommand{\eff}{{\rm eff}}
\newcommand{\MT}{M_{\rm T}}
\newcommand{\ML}{M_{\rm L}}
\newcommand{\kk}{\vek{k}}
\newcommand{\pp}{{\rm p}}
\newcommand{\pt}{\partial_t}
\newcommand{\half}{{1\over 2}}
\newcommand{\w}{\omega}
\newcommand{\uhat}{\hat{U}_\w}
\newcommand{\etal}{\mbox{\it et al.}}
\newcommand{\ie}{{\it i.e. }}
\newcommand{\eg}{{\it e.g. }}
\newcommand{\trh}{T_{\rm RH}}
\newcommand{\ad}{{a'\over a}}
\newcommand{\bd}{{b'\over b}}
\newcommand{\Rd}{{R'\over R}}
\newcommand{\diag}{{\textrm{diag}}}
\newcommand{\mato}[1]{\tilde{#1}}
\newcommand{\sech}{\textrm{sech}}
\newcommand{\I}{\textrm{I}}
\newcommand{\II}{\textrm{II}}
\newcommand{\III}{\textrm{III}}
\newcommand{\vev}[1]{\langle #1 \rangle}
\newcommand{\hyp}{\,\; F_{1{\hskip -16pt}2}{\hskip 11pt}}
\newcommand{\brhom}{\overline{\rho}_M}
\newcommand{\brho}{\overline{\rho}}
\newcommand{\rhob}{\overline{\rho}}
\newcommand{\Pb}{\overline{P}}
\newcommand{\bH}{\overline{H}}
\newcommand{\ep}{{1+4\eps}}
\newcommand{\lcdm}{$\Lambda$CDM}
\def\smiley{\hbox{\large$\bigcirc$\hspace{-.80em}%
\raise.2ex\hbox{$\cdot\cdot$}\kern-.61em
\lower.2ex\hbox{\scriptsize$\smile$}}\ }
\def\frowney{\hbox{\large$\bigcirc$\hspace{-.80em}%
\raise.2ex\hbox{$\cdot\cdot$}\kern-.635em
\lower.2ex\hbox{\scriptsize$\frown$}}\ }
\begin{document}
\title{Constraining Newtonian stellar configurations in $f(R)$ theories of gravity}
\author{T. Multam\"aki}
\email{[email protected]}
\author{I. Vilja}
\email{[email protected]}
\affiliation{Department of Physics, University of Turku, FIN-20014 Turku, FINLAND}
\date{}
\begin{abstract}
We consider general metric $f(R)$ theories of gravity by solving the field equations in the presence of a
spherical static mass distribution by analytical perturbative means. Expanding the field equations systematically in $\cO(G)$,
we solve the resulting set of equations and show that $f(R)$ theories which attempt to solve the dark energy problem
very generally lead to $\gamma_{PPN}=1/2$ in the solar system. This excludes a large class of theories as
possible explanations of dark energy. We also present
the first order correction to $\gamma_{PPN}$ and show that it cannot have a significant effect.
\end{abstract}
\maketitle
\section{Introduction}
The dark energy problem remains central in modern day cosmology. Since a matter-only,
homogeneous universe within the framework of general relativity is in conflict with
cosmological observations, the assumptions behind this model have been questioned. The
most popular modification is to consider a universe filled with other, more exotic forms of
matter, the cosmological constant being the leading natural candidate. Other ways to
tackle the dark energy problem are then to relax the assumption of homogeneity or modify the
theory of gravity.
In recent years, a particular class of modifications of gravity, the $f(R)$ gravity models, which
replace the Einstein-Hilbert action of general relativity (GR) with an arbitrary function of
the curvature scalar
(see \eg \cite{turner,turner2,allemandi,meng,nojiri3,nojiri2,cappo1,woodard,odintsov}
and references therein), have been extensively studied. Naive modification of the
gravitational action is not without challenges, however, and obstacles including
cosmological constraints (see \eg \cite{new2,Nojiri:2007as,Starobinsky:2007hu} and references
therein),
instabilities \cite{dolgov,soussa,faraoni}, solar system constraints (see {\it e.g.}
\cite{chiba,confprobs,Clifton,Hu2007} and references therein) and the evolution of large scale
perturbations \cite{Bean:2006up,Song:2006ej,Song2} need to be overcome. In addition, a number of
consistency requirements need to be satisfied (see \eg \cite{Sawicki2007,Appleby} and
references therein).
One of the most direct and strictest constraints on any modification of gravity comes from
observations of our nearby space-time, \ie the solar system. This is often done by
conformally transforming the theory to a scalar-tensor theory and then considering the
Parameterized Post-Newtonian (PPN) limit \cite{damour,magnano} (see also \cite{olmo, ppnok}
for a discussion). The question of the validity of the solar system constraints on $f(R)$ theories
has been extensively discussed in the literature, and not without controversy.
The opinions on the viability of $f(R)$ theories have been divided from more or less skeptical
\cite{Erickcek:2006vf,Chiba2,Jin:2006if, Faulkner:2006ub} to approving
\cite{Olmo:2006eh, Faraoni:2006hx} depending on the point of view of the author.
The essence of the discussion has been the question of validity of the Schwarzschild-de Sitter
(SdS) metric as the correct metric in the solar system. The SdS metric is a vacuum
solution to a large class of $f(R)$ theories of gravity but due to the higher-derivative
nature of metric $f(R)$ theories, it is not unique. Other solutions can also be
constructed in empty space, in the presence of matter and in a cosmological setting (see
{\it eg.} \cite{cognola,Multamaki2,Multamaki}).
In light of recent literature \cite{Erickcek:2006vf,Chiba2,Jin:2006if}, the validity of
the solar system constraints has become clear and it is now understood that the
equivalent scalar-tensor theory results are valid in a particular limit that
corresponds to the limit of a light effective scalar in the scalar-tensor description.
In terms of the $f(R)$ theory, this is equivalent to requiring that one can approximate
the trace of the field equations by Laplace's equation \cite{Chiba2}.
As a result, the often considered $R-\mu^4/R$ theory \cite{turner} (the CDTT model) is not
consistent with the Solar System constraints in this limit, if the $1/R$ term is to
drive late time cosmological acceleration.
In \cite{Erickcek:2006vf} the CDTT model was considered by linearizing around a static de Sitter spacetime
and solving the trace equation in terms of $R(r)$, resulting in a spacetime outside the star where $\gamma=1/2$.
This result was then generalized for a general $f(R)$ theory in \cite{Chiba2} by studying the space-time outside
a spherical mass distribution and expanding $f(R)$ in terms of a perturbation in $R$. Again solving the
trace equation leads to an outside solution with $\gamma=1/2$ as long as the effective scalar mass is light.
A somewhat different approach was followed
in \cite{kimmo}, where the trace equation was first written in terms $F(r)\equiv df/dR$
in the perturbative expansion. Solving the trace equation then leads to $\gamma=1/2$ outside the star.
In this paper we follow the latter approach by viewing $F$ along with the metric as independent functions.
By expanding all quantities in $G$ and solving the resulting equations inside and outside the star for a
general $f(R)$ theory, we find that generally, $\gamma_{PPN}=1/2+\cO(G)$ outside the star and
the scalar curvature is $\cO(G^2)$ everywhere.
We also identify the first order correction to $\gamma_{PPN}$ and show that it cannot have a significant effect.
Only if initial conditions inside the star are fine-tuned such that the scalar curvature follows the matter density
like in GR \cite{Hu2007, kimmo} can these bounds be evaded.
\section{$f(R)$ gravity formalism}
The action for $f(R)$ gravity is ($c=1$)
\be{action}
S = \int{d^4x\,\sqrt{-g}\Big(\frac 1{16 \pi G} f(R)+{\cal{L}}_{m}\Big)}.
\ee
The field equations in the so-called metric approach are
obtained by varying the action with respect to $g_{\mu\nu}$:
\be{eequs}
F(R) R_{\mu\nu}-\frac 12 f(R) g_{\mu\nu}-\nabla_\mu\nabla_\nu F(R)+g_{\mu\nu}\Box F(R)=8
\pi G T^m_{\mu\nu},
\ee
where $T_{\mu\nu}^m$ is the standard minimally coupled stress-energy tensor
and $F(R)\equiv df/dR$.
Contracting the field equations and assuming that we can describe the
stress-energy tensor with a perfect fluid, we get
\be{contra}
F(R)R-2 f(R)+3\Box F(R)=8\pi G(\rho-3p).
\ee
In this letter we consider spherically symmetric static fluid configurations and adopt a metric,
which reads in spherically symmetric coordinates as
\be{sphersym}
ds^2=B(r)dt^2-A(r)dr^2-r^2d\theta^2-r^2\sin^2\theta d\varphi^2.
\ee
By taking suitable linear combinations of the field equations they can be
written in the following form:
\begin{widetext}
\bea{fieldequs0}
\frac{F\,A'}{r\,A} +
\frac{F\,B'}{r\,B} + \frac{A'\,F'}{2\,A} +
\frac{B'\,F'}{2\,B} - F''
& = & 8\,G\,\pi \,A\,(\rho +p)\\
- \frac{F}{r^2} + \frac{A\,F}{r^2} + \frac{F\,A'}{2\,r\,A} +
\frac{F\,B'}{2\,r\,B} -
\frac{F\,A'\,B'}{4\,A\,B} -
\frac{F\,{B'}^2}{4\,{B}^2} - \frac{F'}{r} +
\frac{B'\,F'}{2\,B} + \frac{F\,B''}{2\,B}
& = & 8\,G\,\pi \,A\,(\rho +p)\\
A\,(2f(R)-R\,F(R))+
\frac{6\,F'}{r} - \frac{3\,A'\,F'}{2\,A} +
\frac{3\,B'\,F'}{2\,B} + 3\,F''
& = & -8\,G\,\pi \,A\,(\rho - 3 p),
\eea
\end{widetext}
where a prime indicates a derivative with respect to $r$, $'\equiv d/dr$,
and we have written $f$ and $F$ as functions of the radial coordinate $r$,
except in the combination $2f(R)-R\,F(R)$, which we will expand in terms of the curvature $R$.
The corresponding equation of continuity is
\be{cont}
\frac{p'(r)}{\rho(r)+p(r)}=-\frac 12 \frac{B'(r)}{B(r)}.
\ee
When pressure is negligible, it is easy to see that $B$ must be a constant. This is,
however, not acceptable and therefore an adequate perturbation expansion is needed.
\section{Perturbative expansion and its solutions}
We expand the metric as well as $F$ with $G$ as an expansion parameter:
\bea{metexp}
A(r) & = & 1+G\, A_1(r)+ \cO(G^2),\nonumber\\
B(r) & = & B_0+G\, B_1(r)+ \cO(G^2),\nonumber\\
F(r) & = & F_0+G\, F_1(r)+ \cO(G^2),\nonumber\\
p(r) & = & p_0+G\, p_1(r)+ \cO(G^2).\nonumber
\eea
Note that we consider the density profile $\rho(r)$ to be a fixed function and
also that $B_0$ and $F_0$ are constants.
From the expansion of $A$ and $B$, one can also read out an expansion for $R$:
\be{Rexp}
R = R_0 + G R_1 + \cO(G^2).
\ee
From the equation of continuity we see that at $\cO(G^0)$ pressure is constant
and exactly zero, $p_0=0$, simply because it vanishes in empty space. Therefore pressure effects
are always $\cO (G^2)$ and do not contribute to the $\cO(G^1)$ expansion.
The $2f-FR$ term in the third field equation is crucial in determining the behaviour of
the solution. In general, for an $f(R)$ dark energy model, this term is negligible and can
be omitted, at least in the first order approximation. This is demonstrated explicitly
for the CDTT model and discussed more generally in \cite{kimmo}, where it is argued that
the non-linear term is completely negligible, barring fine tuning.
This argument is easily understandable in a general model since in the vacuum
$2f-FR\sim G\rho_{DE}\ll G\rho$ for any stellar matter configuration. Note that this will
in general be true also outside a stellar configuration as the dark matter will
completely dominate over the cosmological term. Hence, in the trace equation, the
non-linear terms can be dropped, unless the initial conditions are fine-tuned. We will
return to the fine-tuned condition, or the Palatini limit \cite{Hu2007, kimmo}, later.
More formally, the same conclusion can be confirmed by using an expansion in $G$ for the
non-linear terms as well:
\be{nonlinexp}
2f(R)-F(R)R = 2f(R_0)-F(R_0)R_0 + \left(F(R_0)- F'(R_0)R_0\right)G\,R_1+ \cO (G^2).
\ee
Evidently, the expansion point $R_0$ has to be such that it corresponds to the correct
background of the theory, \ie $2f(R_0)-F(R_0)R_0=0$. Then,
expanding up to first order in $G$, the field equations are
\bea{fieldequsG}
\frac{F_0\,A_1'}{r} + \frac{F_0\,B_1'}{B_0\,r} - F_1'' & = & 8\,\pi \,\rho,\nonumber\\
\frac{F_0\,A_1}{r^2}-\frac{F_0\,A_1'}{2\,r} + \frac{F_0\,B_1'}{2\,B_0\,r} - \frac{F_1'}{r}
+ \frac{F_0\,B_1''}{2\,B_0} & = & 8\,\pi \,\rho,\\
I_1 R_1+\frac{6\,F_1'}{r} + 3\,F_1'' & = & -8\,\pi \,\rho\nonumber,
\eea
where $I_1=F(R_0)- F'(R_0)R_0$ is a constant and $F_0= F(R_0)$.
The set of equations (\ref{fieldequsG}) can be straightforwardly solved leading to $\cO(G)$
functions:
\bea{solG}
F(r) & = & F_0-\frac 23 G\int_0^r \frac{m(r)}{r^2}\, dr\\
A(r) & = & 1+\frac{4G}{3F_0}\frac{m(r)}{r}\\
B(r) & = & B_0\Big(1+\frac{8 G}{3F_0}\int_0^r \frac{m(r)}{r^2}\, dr\Big),
\eea
where
\be{mdef}
m(r)\equiv \int_0^r 4\pi r^2\rho\, dr.
\ee
Inserting this solution back into the expression for the curvature scalar, we find that
$R_1 =0$, {\it i.e.}, $R$ is $\cO(G^2)$. It is crucial that in deriving the solution (\ref{solG}),
we have assumed that $A,\ B,\ F$ are regular at the origin.
The PPN-parameter is now straightforwardly calculable:
\be{PPNG}
\gamma_{PPN}=\frac 12(1-r \frac{m'}{m}) + \frac{2 G}{3 F_0}
\frac{\left( 2\,\int_0^r \frac{m}{r^2}\,dr + m' \right) \,
\left( m - r\,m' \right)}{m}.
\ee
It is easy to see that at the boundary of the star, $\gamma_{PPN}\rar 1/2+\cO(G)$.
This behaviour was also observed in numerical studies \cite{kimmo,henttunen}.
From the first order correction one can furthermore conclude that if one wishes corrections
to be effective at zeroth order, $F_0$ needs to be of order $G$.
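As a quick numerical illustration of Eq.~(\ref{PPNG}), one can evaluate
$\gamma_{PPN}$ for a uniform density star; the following is a minimal
sketch in illustrative units (the parameter values carry no physical
significance):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

G, F0, Rs, rho0 = 6.7e-8, 4.0/3.0, 1.0, 1.0     # illustrative units
r   = np.linspace(1e-3, 3.0*Rs, 3000)
rho = np.where(r < Rs, rho0, 0.0)               # uniform-density star
m   = cumulative_trapezoid(4.0*np.pi*r**2*rho, r, initial=0.0)
m[0] = m[1]                                     # avoid 0/0 at r -> 0
mp  = np.gradient(m, r)                         # m'(r)
I   = cumulative_trapezoid(m/r**2, r, initial=0.0)
gamma = 0.5*(1.0 - r*mp/m) \
        + (2.0*G/(3.0*F0))*(2.0*I + mp)*(m - r*mp)/m
print(gamma[-1])                                # -> 1/2 outside the star
\end{verbatim}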
However, looking at the continuity equation, Eq. (\ref{cont}), we find that
\be{contG}
-r^2p'=\frac 43\frac{G}{F_0}\rho\,m(r)+\cO(G^2).
\ee
Comparing this with the Newtonian result, $-r^2 p'=G\rho\, m(r)$, we see that if
$F_0\sim \cO(G)$, the effective Newton's constant is
orders of magnitude larger than the one in Newton's theory (or GR), resulting in stars with
a completely different mass-to-radius relationship than
the one observed. Furthermore, from the continuity equation, we can read that unless
$F_0\approx 4/3$ to a high precision, a star with the same
density profile, and hence total mass, will have a different radius than in GR. This
behaviour was already observed in \cite{henttunen}.
In general, the results described in this section will apply even when $2f-FR\sim R$, \ie
when $f(R)=R+c_1 R^2$. Because in the approximation described
above $R\sim \cO(G^2)$, it is easy to see that this term will play no role in
the trace equation. The same holds for higher order terms
in $R$. One can avoid the constraint only if $F$ has no order $G$ correction. In this case,
the $\Box F$ term is negligible in the trace equation and we recover the GR results, or the
Palatini limit \cite{Hu2007, kimmo}. Alternatively, if one relaxes the regularity constraint of
the metric at the origin,
one can also avoid the constraint as demonstrated in \cite{henttunen} for the CDTT model.
\subsection{Recovering general relativity}
In the Palatini limit, where the trace equation is similar to that in the Palatini formalism,
the theory is fine-tuned so that $2f-FR\approx R\approx - 8\pi G\rho$
throughout (see \cite{kimmo} for a numerical example). This is the mechanism that allows one
to construct
solutions that are consistent with solar system observations \cite{Hu2007}.
In the Palatini limit, the solutions of the field equations read
\bea{solP}
F(r) & \simeq & 1\\
A(r) & \simeq & 1+\frac{2Gm(r)}{r}\\
B(r) & \simeq & B_0\Big(1+2 G \int_0^r \frac{m(r)}{r^2}\, dr\Big).
\eea
The $\gamma_{PPN}$ parameter is easily calculable:
\be{PPNP}
\gamma_{PPN} \simeq 1-r \frac{m'}{m}+\cO(G).
\ee
Therefore, in this limit $\gamma_{PPN}\rar 1$ at the surface of the star. However, as shown in
\cite{kimmo} for the CDTT model, this limit can be unstable in time, leading to the Dolgov-Kawasaki
instability \cite{dolgov}. Stable theories are considered in \cite{Hu2007} and
studied analytically in \cite{new1}.
\section{Discussion and Conclusions}
In this letter we have considered a general metric $f(R)$ theory in the presence of matter by
analyzing the field equations perturbatively to linear order in Newton's constant $G$.
We have shown explicitly that for a typical star, any modification of gravity from GR will
naturally lead to the physically unacceptable value $\gamma_{PPN}=1/2$. This places a very strong
constraint on any $f(R)$ theory, in particular when acting as a dark energy candidate.
Furthermore, even if the gravity theory
is not motivated by cosmology, but by other arguments, such as quantum gravity, the
presence of non-linear terms can still lead to a space-time inconsistent with observations.
In this order of perturbation theory we can recover the observationally acceptable space-time
only when $F=df/dR$ has no order $G$ correction. Such a constraint indicates
fine-tuning in the initial values of the solution so that one remains in the high curvature limit,
$R\sim G\rho$ throughout. However, the stability of such a fine-tuned solution may be problematic
\cite{kimmo}, although possible to obtain \cite{Hu2007,new1,new2}.
Since our analysis is of order $\cO(G^1)$, further study on the system, in
particular second order perturbations in $G$, may affect the conclusions. Indeed, our
analysis shows that the first order perturbation theory is essentially independent of
the details of the underlying $f(R)$ theory. The only piece of information used was the
knowledge that there are higher order derivatives in the equations of motion, \ie
that the theory is not GR. New effects may appear in higher order perturbation theory, where
finally the dependence on the functional form of $f(R)$ should become evident. However, our results suggest that
unless the solution is fine-tuned so that $R\sim G\rho$ throughout the mass distribution,
a naive modification where a small correction is added to the Einstein-Hilbert action to solve the dark energy problem
is not likely to pass the solar system constraints.
\acknowledgments
TM is supported by the Academy of Finland.
\section{INTRODUCTION}
Nowadays, much of the daily human activity is constantly being
recorded by means of modern technologies, such as Internet, GPS,
mobile phones, blue-tooth and other electronic devices. The gathering
and analysis of large amount of activity data have allowed the
exploration of statistical features of people's behavior. In
particular, recent studies have revealed interesting properties about
how humans interact with each other, either by having conversations
\cite{Catt_01}, or sexual contacts \cite{Fox_06}, or by means of
mobile devices \cite{Karagiannis_07} or wireless communications
\cite{Scherrer_08}. Despite the fact that the types of interactions in
these studies are quite different, they all reveal a common interaction
pattern, namely, that human contacts are very heterogeneous. This is
consistent with a broad distribution of different magnitudes that
quantify the timing of contacts, such as their duration, frequency and
gaps. This diversity could eventually have an impact on propagation
processes that involve human contact, such as the spreading of rumors
or diseases.
In this article we explore epidemic spreading on a population with
heterogeneous interaction intensities. We use a distribution of
intensities extracted from the pattern of contacts between
participants of a conference \cite{People_05}, obtained in recent
face-to-face experiments \cite{Catt_01}. For the spreading process we
use the Susceptible-Infected-Susceptible (SIS) \cite{Bailey_75}
dynamics on Erd\H{o}s-R\'enyi (ER) networks \cite{Erd_01}, with
infection rates across links that are proportional to the intensity of
encounters.
As the rate of infection increases, the original SIS model \cite{Bailey_75}
exhibits a transition from an absorbing (disease-free) phase where
the infection dies exponentially fast to an active (endemic) phase
where the infection spreads over a large fraction of the population
and becomes persistent. We find that the heterogeneity in the
intensity of contacts introduces an intermediate absorbing region, in
which the epidemic dies very slowly, as a stretched exponential or a
power law in time. We experiment with other rate distributions and
show that this slow approach to epidemic extinction is caused by the
presence of small clusters composed of links with high infection
rates, which remain infected for very long times. We also discuss
analogies with the effects observed in models with quenched disorder
\cite{Vojta_06}.
While our results are mainly concerned with the decay of the infection
in the epidemic-free phase, some related models
\cite{Stehle_11,Yang_12,Kars_01} have focused, instead, on the disease
prevalence within the endemic phase, or study the spreading power of a
given node \cite{Garas_10} using the susceptible-infected-recovered
dynamics. Other studies have introduced heterogeneity at the
individual level, by assigning power law intertime events
\cite{A_Vaz_01,Min_11}, node-dependent infection
rates \cite{Munoz_04}, or topology-dependent weight patterns
\cite{Odor_12,Odor_13,Odor_13-1}. In our model heterogeneity is at the
interaction level, by means of link-dependent infection rates which
are not correlated with the topology of the network.
\section{SIS DYNAMICS WITH FACE-TO-FACE DISORDER}
In the SIS model
\cite{Bailey_75}, each individual of a population can be either
susceptible (healthy) or infected. Infected individuals transmit the
disease to their susceptible neighbors in the network at a rate $\nu$
and return to the susceptible state at a rate $\gamma$. The dynamics
is controlled by the rescaled infection rate $\lambda = \nu/\gamma$.
For $\lambda$ above a critical value $\lambda_c$, even a small initial
fraction of infected nodes is able to propagate the disease through
the entire network (active phase), while for $\lambda < \lambda_c$ the
disease quickly dies out (absorbing phase), following an exponential
decay in the number of infected nodes.
This model describes disease spreading in an ideal population where
transmission rates between individuals are all the same. However, in
real populations we expect interactions to be heterogeneous, having a
broad range of intensities, as recently measured by analyzing mobile
phone data \cite{Karagiannis_07} and by means of person-to-person
experiments \cite{Catt_01}. In order to explore how the behavior of
the SIS model is affected by the heterogeneity of interactions, we run
simulations of the dynamics on ER networks with infection rates
distributed according to the weight distribution $P(w)$ of
face-to-face experiments \cite{Catt_01,People_05} (see
Fig.~\ref{fig1}). In these experiments, participants of a three-day
conference were asked to wear a {\it radio frequency identification}
device on their chest, so that when two persons were close and facing
each other a relation of face-to-face proximity was registered. The
weights $w$ of Fig.~\ref{fig1} are defined as the total number of
packets exchanged (or total contact time) between pairs of
participants during the three days.
\begin{figure}
\includegraphics[width=75mm]{fig1.eps}
\caption{Probability distribution of face-to-face contacts intensities
(weights $w$) of the 25th Chaos Communication Conference in Berlin, on
a log-log scale. Intensity is defined as the total number of
packages exchanged between two attendees, which is proportional to
the contact duration. The large dispersion of the data reflects the
large heterogeneity in the duration of contacts. Weights are
rescaled to the interval $[0,1]$ for a better comparison with the
theoretical distribution $P(w)=1/a w$ in the interval $[e^{-a},1]$,
as shown in the inset for $a=6$ (squares) and $a=3$ (diamonds).}
\label{fig1}
\end{figure}
We are assuming that infection rates are proportional to the total
time individuals are in contact with each other, as the likelihood of
transmission increases with exposure time -longer contacts imply a
higher risk of infection-. Therefore, we assign an \emph{effective
rate of infection} $\beta_{ij}=\lambda w_{ij}$ between two
individuals $i$ and $j$ that are connected by a link of weight
$w_{ij}$, where $\lambda$ is a free parameter that acts as a
transformation scale of contact intensities into infection rates.
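For concreteness, a single realization of this dynamics can be sketched
as follows.  This is a minimal discrete-time Python sketch with
recovery rate $\gamma=1$; the uniform random weights are a placeholder
to be replaced by draws from the empirical $P(w)$ of Fig.~\ref{fig1}:
\begin{verbatim}
import numpy as np
import networkx as nx

def sis_run(N=10**4, kmean=4.0, lam=1.0, rho0=0.01, dt=0.01,
            tmax=50.0, weights=None, seed=1):
    rng = np.random.default_rng(seed)
    g = nx.gnp_random_graph(N, kmean/(N - 1), seed=seed)
    e = np.array(list(g.edges()))
    w = rng.random(len(e)) if weights is None else np.asarray(weights)
    i, j, beta = e[:, 0], e[:, 1], lam * w      # beta_ij = lambda*w_ij
    inf = rng.random(N) < rho0                  # initial infected seeds
    rho = []
    for _ in range(int(tmax / dt)):
        fire = rng.random(len(e)) < beta * dt   # per-link transmission
        new = np.zeros(N, dtype=bool)
        new[i[fire & inf[j]]] = True            # j infects i
        new[j[fire & inf[i]]] = True            # i infects j
        rec = rng.random(N) < dt                # recovery at rate 1
        inf = (inf & ~rec) | new
        rho.append(inf.mean())
    return np.array(rho)
\end{verbatim}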
In Fig.~\ref{fig2} we show simulation results of the time evolution of
the average density of infected individuals $\rho$, over many
realizations of the SIS dynamics, starting from a configuration where
a small fraction of nodes have been randomly infected, and with
infection rates following the distribution of Fig.~\ref{fig1}. All
simulations in this article correspond to ER networks of mean degree
$\langle k \rangle=4$ and $N=10^5$ nodes. We understand that ER
networks are an oversimplification of the complex topology of
interaction between attendees at the Berlin's conference, which is known
to have a broad degree distribution peaked at an intermediate value
(similar to a Poissonian) as well as topological and temporal correlations
due to the intricate pattern of contacts \cite{isella_01}. However, ER
networks, which have a Poisson degree
distribution but are uncorrelated, are simple enough to allow for an
exploration of the effects of the heterogeneity in the interaction
strengths, avoiding other possible effects due to the specific
degree distribution and correlations of the real contact network.
We found that, besides the typical behavior observed in the active and
exponential phases of the \emph{classic} SIS model (all infection rates
are the same), there is an intermediate region between $\lambda =
0.25$ and $3.7$ with very slow relaxation to the absorbing state. The
region $2.0 \lesssim \lambda \lesssim 3.7$ is characterized by a
power-law decay with a continuously varying exponent, while in the
region $0.25 \lesssim \lambda \lesssim 2.0$ the decay is faster than a
power law but slower than exponential (see the inset of
Fig.~\ref{fig2}), and can be fitted by a stretched exponential.
\begin{figure}
\includegraphics[width=75mm]{fig2.eps}
\caption{Average density of infected nodes $\rho$ vs time $t$, on a
log-log scale, under the SIS dynamics on ER networks with $\langle
k \rangle=4$ and $N=10^5$ nodes. The infection rate distribution
$P(\beta)$ corresponds to the weight distribution $P(w)$ of
Fig.~\ref{fig1}, with $\beta = \lambda w$, for values of the
control parameter $\lambda=3.9, 3.7, 3.6, 3.5, 3.4, 3.3, 3.0, 2.5$
(main plot) and $\lambda=1.5, 1.0, 0.5, 0.2$ (inset), from top to
bottom. $\rho$ decays as a power-law for $2.0 \lesssim \lambda
\lesssim 3.7$, as a stretched exponential for $0.25 \lesssim
\lambda \lesssim 2.0$, and as an exponential for $\lambda \lesssim
0.25$, as shown in the inset on a linear-log scale.}
\label{fig2}
\end{figure}
\section{SIS DYNAMICS WITH VARIABLE DISORDER STRENGTH}
In order to understand this phenomenon we explore the dynamics for
different distributions of weights. We assign to each link $ij$ a
weight $w_{ij}= e^{-ar_{ij}}$, where $r_{ij}$ is a random number taken
from a uniform distribution in the interval $[0,1]$, and $a$ is a
parameter that sets the range of $w_{ij}$ in $[e^{-a},1]$. This
method generates a power-law distribution $P(w)=1/a w$. The parameter
$a$ controls the width of the distribution, and measures the
heterogeneity or strength of disorder. In the inset of
Fig.~\ref{fig1} we plot $P(w)$ for $a=6$ and $a=3$, which is intended
to mimic the broad distribution of face-to-face contacts, even though
the decay exponents are different.
The distribution of infection rates is given by
\begin{equation}
P(\beta)=\frac{1}{a \beta},~~~\mbox{with $\beta$ in $[\lambda
e^{-a},\lambda ]$}.
\label{Pbeta}
\end{equation}
Notice that when $a \to 0$ we recover the classic model where
$\beta_{ij}=\lambda$ for all $ij$. This kind of disorder was already
used in several works on complex networks \cite{bra02,bra03,Buo_01}. We
expect that high-weight links facilitate the spreading of infections
in our model, while low-weight links hinder the spreading.
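A quick numerical check (sketch below) confirms the range of $\beta$
given in Eq.~(\ref{Pbeta}) and the flatness of the distribution in
$\ln w$, which is equivalent to $P(w) \propto 1/w$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, lam = 6.0, 1.0
w = np.exp(-a * rng.random(10**6))     # w = exp(-a r), r ~ U(0,1)
beta = lam * w                         # effective infection rates
print(beta.min() >= lam*np.exp(-a), beta.max() <= lam)   # range check
h, _ = np.histogram(np.log(w), bins=50, density=True)
print(h.std() / h.mean())   # ~0: ln(w) uniform <=> P(w) = 1/(a w)
\end{verbatim}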
\begin{figure}[th]
\includegraphics[width=75mm]{fig3.eps}
\caption{(Color online) SIS dynamics on ER networks with $\langle k
\rangle = 4$, $N=10^5$ and distribution of infection rates
$P(\beta)=1/a\,\beta$, with $a=20$ and $\beta$ in $[\lambda
e^{-a},\lambda]$. (a) $\rho$ vs time on a double logarithmic
scale, for $\lambda=6.0, 5.8, \lambda_c \simeq 5.56, 5.4, 5.2, 5.0,
4.5, 4.0, 3.5$ and $3.0$ (from top to bottom). $\rho$ decays as a
power law for $2.0 \lesssim \lambda \lesssim \lambda_c$. (b) $\rho$
vs time on a linear-log scale, for $\lambda=0.2, 0.5, 1.0$ and
$1.5$ (circles). Straight lines are best fittings using the
function $A \,e^{-\alpha t^b}$ with $A=0.0009735$,
$\alpha=0.94731$, $b=1.0$ for $\lambda=0.2$; $A=0.0016$,
$\alpha=1.21$, $b=0.86$ for $\lambda=0.5$; $A=0.0028$,
$\alpha=1.61$, $b=0.69$ for $\lambda= 1.0$; and $A=0.0045$,
$\alpha=2.03$, $b=0.54$ for $\lambda=1.5$. Only $\lambda=0.2$ is
in the exponential region $\lambda<0.25$, while the other curves
are stretched exponentials. (c) $\rho$ vs $\ln t$ on a log-log
scale, showing the extremely slow decay $\rho \sim (\ln
t)^{-\beta}$ at the active-absorbing transition point $\lambda_c$.
(d) The stretched exponential behavior for $\lambda = 0.5, 1.0$ and
$1.5$ is shown as a straight line by plotting $\ln (A/\rho)$ vs
time on a log-log scale.}
\label{fig3}
\end{figure}
The behavior of $\rho$ under the theoretical disorder given by
Eq.~(\ref{Pbeta}) is very similar to the one observed in
Fig.~\ref{fig2} for the face-to-face disorder, showing a slow relaxation to the
absorbing state, as we can see in Figs.~\ref{fig3}(a) and
\ref{fig3}(b). This suggests that the effect of disorder is quite
robust, since results seem to be independent of the power law exponent
of the distribution of weights. In the $a-\lambda$ phase diagram of
Fig.~\ref{fig4} we
summarize the different types of behaviors. Above the numerical
transition line $\lambda_c^{\mbox{\tiny num}}(a)$ denoted by the red
circles we find the \emph{active phase} (white), where $\rho$ reaches
a stationary value larger than zero, and below we find the
\emph{absorbing phase} where $\rho$ decays to zero. The transition
line corresponds to the value of $\lambda$ for which the decay is
algebraic in the logarithm of time, $\rho \sim (\ln t)^{-\beta}$
\cite{Vojta_06}, as shown in Fig.~\ref{fig3}(c). The
absorbing phase is divided into three regions. The \emph{exponential
region} (green), which appears for $\lambda < \lambda_c^0$,
characterized by the decay $\rho \sim e^{-\alpha t}$ of the classic
model [see Figs.~\ref{fig3}(b) and \ref{fig3}(d)], the \emph{weak
effects region} (yellow) where we observe a stretched exponential
behavior $\rho \sim A \,e^{-\alpha t^b}$ ($b<1.0$) [see
Figs.~\ref{fig3}(b) and \ref{fig3}(d)], and the \emph{strong effects region} (orange),
with a power law decay $\rho \sim t^{-\gamma}$ [see
Fig.~\ref{fig3}(a)]. Exponents $\alpha$, $b$ and $\gamma$ vary
continuously with $\lambda$ and $a$. Along the line separating the
weak and strong effects regions, we observe a crossover between the
pure stretched exponential and power law decays.
\begin{figure}[th]
\includegraphics[width=75mm]{fig4.eps}
\caption{Phase diagram of the SIS model with infection rate
distribution $P(\beta)=1/a\,\beta$, and $\beta$ in $[\lambda
e^{-a},\lambda]$. Colored regions correspond to the absorbing
phase: orange and yellow for the strong and weak effects regions,
and green for the exponential decay region. The dashed and solid
lines are the MF [Eq.~(\ref{lambdacMF})] and percolation
[Eq.~(\ref{lambdacperc})] approximations, respectively, for the
transition between the active and absorbing phases.}
\label{fig4}
\end{figure}
\subsection{Active-absorbing transition line: mean-field and
percolation approaches}
In order to gain insight into the dynamics of the model we
develop in this section a theoretical estimation of the transition
line between the active and absorbing phases of Fig.~\ref{fig4}.
Within a mean-field (MF) approximation, $\rho$ evolves according to
$\dot{\rho}=-\rho+\lambda \langle w \rangle \langle k \rangle \rho
(1-\rho)$, where $\lambda \langle w \rangle = \lambda (1-e^{-a})/a$ is
the average infection rate, and $\langle k \rangle (1-\rho)$ is the
average number of susceptible neighbors of an infected node. The
stationary solutions $\rho=0$ and $\rho=\left( \lambda -
\lambda_c^{\mbox{\tiny MF}} \right)/\lambda$ correspond to the
absorbing and active phases, respectively, with the transition point at
\begin{equation}
\lambda_{c}^{\mbox{\tiny MF}}(a)=\frac{a}{\langle k\rangle (1-e^{-a})}.
\label{lambdacMF}
\end{equation}
In the limit $a \to 0$ we recover the \emph{classic} transition point
$\lambda_c^0 \equiv \lambda_c(a=0)=1/\langle k \rangle = 0.25$ of the
classic model. \emph{Impurities}, in the form of low-weight links, locally reduce
infection rates, thus the transition happens at a value
$\lambda_{c}^{\mbox{\tiny MF}}(a) > \lambda_{c}^0$.
Expression~(\ref{lambdacMF}) (dashed line in Fig.~\ref{fig4}) is a
very good estimate of $\lambda_c^{\mbox{\tiny num}}(a)$ for $a\lesssim
14$ (weak disorder), but systematic deviations appear as $a$
increases. Discrepancies arise because MF assumes that \emph{all}
links can spread the disease but, when $a$ is large, a fraction of
links have such small rates (inactive links) that infection never
passes through them during the epidemic's lifetime, and thus the
\emph{effective} network for the spreading dynamics is diluted with
respect to the original network. When dilution is large enough the
effective network gets fragmented into many small disconnected
components and, as the disease cannot spread out of these components,
the active state is never reached. Therefore, the active-absorbing
transition point for $a$ large (strong disorder) corresponds to the
\emph{percolation threshold}. This occurs when the fraction of
inactive nodes $q$ (nodes attached only to inactive links) exceeds the
critical value $q_c$. For ER networks with Poisson degree
distribution $P_k=e^{-\langle k\rangle} \frac{\langle k \rangle
^{k}}{k!}$, this fraction is
\begin{eqnarray}
q = \sum_k l_{\mbox{\tiny I}}^k P_k = e^{-\langle k \rangle
(1-l_{\mbox{\tiny I}})},~~~~\mbox{where} \\ l_{\mbox{\tiny I}}
\equiv \int_{\lambda e^{-a}}^{\beta_{m}} P(\beta) \,d\beta =
\frac{1}{a} \ln\left(\frac{\beta_{m} e^a}{\lambda} \right)
\label{eq5}
\end{eqnarray}
is the fraction of inactive links, and $\beta_m$ is the largest
infection rate that does not allow the transmission of the disease.
At the percolation threshold $\langle k\rangle=(1-q_c)^{-1}$ in the
$N\to\infty$ limit, thus $q_c = \exp \Big\{(1-q_c)^{-1} \left[ \ln
\left( \beta_m \,e^a / \lambda_c^{\mbox{\tiny perc}} \right) /a -1
\right] \Big\}$, from where the percolation transition line is
\begin{equation}
\lambda_c^{\mbox{\tiny perc}}(a) = \beta_m \; q_c ^{-a (1-q_c)}.
\label{lambdacperc}
\end{equation}
Using $q_c=0.7443$ for a network of size $N=10^5$ \cite{Wu_01},
expression~(\ref{lambdacperc}) with $\beta_{m} \simeq 1.2457$ is in
excellent agreement with $\lambda_c^{\mbox{\tiny num}}(a)$ for $a
\gtrsim 14$ (solid line in Fig.~\ref{fig4}). The value of $\beta_m$
is estimated from the crossover conditions $\lambda_{c}^{\mbox{ \tiny
MF}}(a)=\lambda_c^{\mbox{\tiny perc}}(a)$ and $\partial
\lambda_{c}^{\mbox{ \tiny MF}}(a)/\partial a=\partial
\lambda_{c}^{\mbox{ \tiny perc}}(a)/\partial a$ between the MF and
percolation lines at the weak-strong disorder crossing point $a=a^*$.
We obtain $\beta_{m}=-[e \ln q_c]^{-1}$ and $a^*=-[(1-q_c) \ln q_c
]^{-1}$ with $a^* \simeq 13.243$ for the network used here.
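Both approximations and the crossover constants quoted above are straightforward to evaluate numerically; a short sketch (our own code, using the constants quoted in the text):
\begin{verbatim}
import numpy as np

k_mean, q_c = 4.0, 0.7443                    # ER network, N = 10^5
beta_m = -1.0 / (np.e * np.log(q_c))         # ~ 1.2457
a_star = -1.0 / ((1 - q_c) * np.log(q_c))    # ~ 13.243

def lambda_c_mf(a):      # MF line, Eq. (lambdacMF)
    return a / (k_mean * (1.0 - np.exp(-a)))

def lambda_c_perc(a):    # percolation line, Eq. (lambdacperc)
    return beta_m * q_c ** (-a * (1.0 - q_c))

# the two lines are tangent near a* within the approximation
# <k> ~ (1 - q_c)^(-1) used at the percolation threshold
for a in (5.0, 10.0, a_star, 20.0):
    print(a, lambda_c_mf(a), lambda_c_perc(a))
\end{verbatim}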
In the next section we analyze in more detail the dynamics in the
absorbing phase and provide an explanation about the origin of slow
relaxations.
\subsection{Anomalous behavior in the absorbing phase}
We have seen that the heterogeneity in infection rates induces a large new
region inside the absorbing phase, in which the temporal evolution of
$\rho$ exhibits an anomalous slow decay. This is caused by the
presence of exponentially small isolated regions in the network where
the system is locally active, that is, with infection rates
$\beta_{ij} > \lambda_c^0$, which are able to sustain the activity for
very long times. To check this, we calculated the size distribution
of clusters composed only of infected nodes, $n_I(s)$, at a fixed large
time. Results are shown in Fig.~\ref{fig5} for $a=6$. Inside the weak and
strong effects regions ($\lambda=0.75$ and $1.4$), $n_I(s)$ is close
to an exponential, and the size of the largest cluster $s_{\mbox{\tiny
max}}$ is much smaller than the network size $N=10^5$ (see inset
of Fig.~\ref{fig5}). Also, the values $0.27$ and $0.70$ of the
average infection rates inside these clusters for $\lambda=0.75$ and
$1.4$, respectively, show that the long-time activity is located
inside \emph{active clusters}, in which the average rate of infection
$\langle \beta \rangle > \lambda_c^0$. For comparison, in the active
phase ($\lambda=2.0$) we find $7600 \lesssim s_{\mbox{\tiny max}} \lesssim
9500$, indicating the spreading of the disease over a large fraction
of the network.
\begin{figure}
\includegraphics[width=75mm]{fig5.eps}
\caption{Cluster size distribution of infected nodes $n_I$ for
$a=6$ and values of $\lambda$ inside three different regions of
Fig.~\ref{fig4}: $\lambda=0.75$ (circles) and $1.4$ (squares) in
the weak and strong effects regions, and $2.0$ (diamonds) in the
active phase. Distributions correspond to snapshots of the network
at fixed large times. The average infection rates inside each
cluster are $\langle \beta \rangle \simeq 0.27$, $0.70$ and $0.75$
for $\lambda=0.75$, $1.4$ and $2.0$, respectively. Insets: size
distribution of the largest cluster $n_{\mbox{\tiny max}}$ showing
the appearance of a large component in the active phase.}
\label{fig5}
\end{figure}
Similar anomalous behaviors are found in models with disorder, giving
rise to the so-called Griffiths phases (GP)
\cite{Vojta_06,Munoz_04,Odor_12,Odor_13,Odor_13-1,Vaz_03,Mar_12}. The
combination of exponentially rare regions in space that survive for
exponentially long times results in an overall slowing down of the
dynamics, as we show below. The long-time contribution of active
clusters to $\rho$ is estimated as
\begin{equation}
\rho \sim \int ds \, s \, P(s) \, e^{-t/\tau(s)},
\end{equation}
where $P(s) \sim e^{-\tilde p s}$ \cite{New_03} is the fraction of
active clusters of size $s$ and $\tau(s)$ is the mean decay time of
those clusters. By doing a saddle-point analysis, and using the
finite-size scaling $\tau(s) \sim e^{c s}$, one arrives at the
power-law decay $\rho \sim t^{-\tilde p/c}$ (with logarithmic
corrections) observed in the strong effects region of Fig.~\ref{fig4}.
The size of active clusters is of order one for $\lambda$ just above
$\lambda_c^0$, leading to exponentially weak effects of the form $\rho
\sim e^{-\alpha t^b}$ \cite{Vojta_06}.
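The saddle-point step can be sketched explicitly (a standard heuristic estimate, not a rigorous bound): with $P(s) \sim e^{-\tilde p s}$ and $\tau(s) \sim e^{c s}$,
\begin{equation*}
\rho \sim \int ds \, s \, e^{-\tilde p s - t e^{-c s}}, \qquad
\frac{d}{ds}\left( -\tilde p s - t e^{-c s} \right) = 0
\;\Rightarrow\;
s^* = \frac{1}{c} \ln \frac{c t}{\tilde p},
\end{equation*}
and evaluating the integrand at $s^*$ gives $\rho \sim e^{-\tilde p s^*} \sim t^{-\tilde p / c}$, up to prefactors and logarithmic corrections.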
\section{Discussion and Conclusions}
In summary, the heterogeneity in the intensity of contacts between
individuals induces a regime with extremely slow (power-law or
stretched exponential) relaxation to epidemic extinction, akin to the
slowing down found in systems with quenched disorder. This effect is
very robust, as it was observed using an empirical distribution of
contact durations in face-to-face experiments, as well as a
theoretical distribution with variable width. Given that both are
long tailed distributions but with different exponents, we suspect
that the anomalous relaxation is observed in general for broad weight
distributions. To check this concept we run simulations (not shown)
using a bimodal distribution
$P(\beta)=p\,\delta(\beta-\beta_1)+(1-p)\,\delta(\beta-\beta_2)$, with
$0 \le p \le 1$ and $\beta_2 > \beta_1$ \cite{Vojta_06}. We observed
slow relaxations for $\beta_2 > \lambda_c^0 > \beta_1$, that is, when
there are finite fractions of links with infection rates above and
below the classic transition point $\lambda_c^0$.
In order to explore whether these effects depend on the specific
topology of interactions, we have done some tests with scale-free
networks. We found that the active-absorbing transition line on the
phase diagram of Fig.~\ref{fig4} is shifted down to very small values, but we
could not clearly identify a finite region with slow decay.
Therefore, we suspect that rare-region effects are not present in
networks with heterogeneous degree distributions. This is probably
because weights are randomly distributed over the network, thus
high-degree nodes always spread the disease (it is very unlikely that
all links attached to hubs have very low weights). Instead, assigning
weights according to the topology of the network may induce
rare-region effects, as it was shown in
\cite{Odor_12,Odor_13,Odor_13-1} using Barabasi-Albert trees with
disassortative weighting. It would be worthwhile to perform a deeper
analysis to study how relaxations are affected by other properties of real
contact networks, such as topological and temporal correlations.
While \emph{temporal} heterogeneity, causality and bursty activity were
found to hinder spreading \cite{Kars_01,Min_11}, we showed here that
\emph{spatial} heterogeneity has the opposite effect, making
the epidemic more persistent by slowing down its extinction. Once a
group of highly interacting individuals gets infected, they are able
to continuously reinfect each other at a high rate, keeping the
infection inside the group for very long times. Our findings can be
used to design efficient mitigation strategies for the disease. For
instance, moderating the activity of highly interacting people could
dramatically speed up the final stage of the epidemic.
This work was financially supported by UNMdP and FONCyT (Pict
0293/2008). The authors thank Lucas D. Valdez for useful comments and
discussions.
\section{Introduction}
A neural network block is a function $F$ that maps an input vector $x \in \mathcal{X} \subset \mathbb{R}^p$ to output vector $F(x,\theta) \in \mathbb{R}^{p'}$, and is parameterized by a weight vector $\theta$. We require that $F$ is almost everywhere differentiable
with respect to both of its arguments, allowing the use of gradient methods for tuning $\theta$ based on training data and an optimization criterion, and for passing the gradient to preceding network blocks.
One type of neural building blocks that has received attention in recent years is a {\em residual block} \cite{he2016deep}, where $F(x,\theta)=x+f(x,\theta)$, with $f$ being some differentiable, nonlinear, possibly multi-layer transformation. Input and output dimensionality of a residual block are the same, $p$, and such blocks are usually stacked in a sequence, a ResNet, $x_{t+1}=x_t+f_t(x_t,\theta_t)$.
Often, the functional form of $f_t$ is the same for all blocks $t \in \left\{1,...T\right\}$ in the sequence. Then, we can represent the sequence through $x_{t+1} - x_t = f_\Theta(x_t,t)$, where $\Theta$ consists of trainable parameters for all blocks in the sequence; the second argument, $t$, allows us to pick the proper subset of parameters, $\theta_t$.
If we allow arbitrary $f_\Theta$, for example a neural network with input and output dimensionality $p$ but with many hidden layers of dimensionality higher than $p$, a sequence of residual blocks can, in principle, model arbitrary mappings $x \rightarrow \phi_T(x)$, where we define $\phi_T(x_0)=x_T$ to be the result of applying the sequence of $T$ residual blocks to the initial input $x_0$. For example, a linear layer preceded by a deep sequence of residual blocks is a universal approximator for Lebesgue integrable functions $\mathbb{R}^p \rightarrow \mathbb{R}$ \cite{lin2018resnet}.
Recently, models arising from residual blocks have gained attention as a means to construct invertible networks; that is, training a network results in mappings $x_0 \rightarrow x_T$ for which an inverse mapping $x_T \rightarrow x_0$ exists. The ability to train a mapping that is guaranteed to be invertible has practical applications; for example, such mappings give rise to normalizing flows \cite{deco1995nonlinear,rezende2015variational}, which allow for sampling from a complicated, multi-modal probability distribution by generating samples from a simple one and transforming them through an invertible mapping. For any given architecture for invertible neural networks, it is important to know whether it can be trained to approximate arbitrary invertible mappings, or whether its approximation capabilities are limited.
\subsection{Invertible Models}
We focus our attention on two invertible neural network architectures: i-ResNet, a constrained ResNet, and Neural ODE, a continuous generalization of a ResNet.
\paragraph{Invertible Residual Networks}
While ResNets refer to arbitrary networks with any residual blocks $x_{t+1} = x_t+f_\Theta(x_t,t)$, that is, can have any residual mapping $f_\Theta(x_t,t)$, i-ResNets \cite{behrmann2018invertible}, and their improved variant, Residual Flows \cite{chen2019residual}, are built from blocks in which $f_\Theta$ is Lipschitz-continuous with constant lower than 1 as a function of $x_t$ for fixed $t$, which we denote by $\mathrm{Lip}(f_\Theta) < 1$. This simple constraint is sufficient \cite{behrmann2018invertible} to guarantee invertibility of the residual network, that is, to make $x_t \rightarrow x_{t+1}$ a one-to-one mapping.
Given the constraint on the Lipschitz constant, an invertible mapping $x \rightarrow 2x$ cannot be performed by a single i-ResNet layer. But a stack of two layers, each of the form $x \rightarrow x+(\sqrt{2}-1)x$ and thus Lipschitz-continuous with constant lower than 1, yields the desired mapping. A single i-ResNet layer $x_{t+1} = (I +f_\Theta)(x_t,t)$, where $I$ is the identity mapping, has $\mathrm{Lip}(I +f_\Theta) = k < 2$, and a composition of $T$ such layers has Lipschitz constant of at most $K=k^T$. Thus, for any finite $K$, it might be possible to approximate any invertible mapping $h$ with $\mathrm{Lip}(h) \leq K$ by a series of i-ResNet layers, with the number of layers depending on $K$. However, the question whether the possibility outlined above is true, that is, whether i-ResNets have universal approximation capability within the class of invertible continuous mappings, has not been considered thus far.
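The two-layer construction for $x \rightarrow 2x$ is easy to verify numerically; a minimal sketch (our own illustration, not taken from the i-ResNet codebase):
\begin{verbatim}
import numpy as np

def resnet_forward(x0, residuals):
    # apply a sequence of residual blocks x_{t+1} = x_t + f_t(x_t)
    x = x0
    for f in residuals:
        x = x + f(x)
    return x

# each layer x -> x + (sqrt(2)-1) x has Lip(f) = sqrt(2)-1 < 1;
# two of them compose to x -> 2x
double = [lambda x: (np.sqrt(2.0) - 1.0) * x] * 2
print(resnet_forward(np.array([1.0, -3.0]), double))  # [ 2. -6.]
\end{verbatim}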
\paragraph{Neural Ordinary Differential Equations} Neural ODEs (ODE-Nets) \cite{chen2018neural} are a recently proposed class of differentiable neural network building blocks.
ODE-Nets were formulated by observing that processing an initial input vector $x_0$ through a sequence of residual blocks can be seen as evolution of $x_t$ in time $t \in \left\{1,...T\right\}$. Then, a residual block (eq. \ref{eq:ResBlock})
is a discretization of a continuous-time system of ordinary differential equations (eq. \ref{eq:ODE})
\begin{align}
x_{t+1} - x_t &= f_\Theta(x_t,t), \label{eq:ResBlock} \\
\frac{\mpartial x_t}{\mpartial t} = \lim_{\delta_t \rightarrow 0} \frac{x_{t+\delta_t} - x_t }{\delta_t} &=f_\Theta(x_t,t). \label{eq:ODE}
\end{align}
The transformation $\phi_T: \mathcal{X} \rightarrow \mathcal{X}$ taking $x_0$ into $x_T$ realized by an ODE-Net for some chosen, fixed time $T\in \mathbb{R}$ is not specified directly through a functional relationship $x \rightarrow f(x)$ for some neural network $f$, but indirectly, through the solutions to the initial value problem (IVP) of the ODE
\begin{equation}
x_T = \phi_T(x_0) = x_0 + \int_0^T f_\Theta(x_t,t) \diff t \label{eq:IVP}
\end{equation}
involving some underlying neural network $f_\Theta(x_t,t)$ with trainable parameters $\Theta$. By a {\em $p$-ODE-Net} we denote an ODE-Net that takes a $p$-dimensional sample vector on input, and produces a $p$-dimensional vector on output. The underlying network $f_\Theta$ must match those dimensions on its input and output, but in principle can have arbitrary internal architecture, including multiple layers of much higher dimensionality.
By the properties of ODEs, ODE-Nets are always invertible; we can just reverse the limits of integration, or alternatively integrate $-f_\Theta(x_t,t)$. The adjoint sensitivity method \cite{pontryagin1962mathematical} based on reverse-time integration of an expanded ODE allows for finding gradients of the IVP solutions $\phi_T(x_0)$ with respect to parameters $\Theta$ and the initial values $x_0$. This allows training ODE-Nets using gradient descent, as well as combining them with other neural network blocks.
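In code, the forward map and its inverse can be obtained from the same network by reversing the integration limits; a sketch using the \texttt{torchdiffeq} interface of \cite{chen2018neural} (the MLP architecture of $f_\Theta$ below is our own placeholder):
\begin{verbatim}
import torch
from torchdiffeq import odeint

class F(torch.nn.Module):
    # underlying network f_Theta(x_t, t): a small MLP with the
    # scalar time appended to the state on input
    def __init__(self, p, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(p + 1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, p))
    def forward(self, t, x):
        tcol = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, tcol], dim=1))

f, x0 = F(p=2), torch.randn(5, 2)
xT = odeint(f, x0, torch.tensor([0.0, 1.0]))[-1]      # phi_T(x0)
x0_rec = odeint(f, xT, torch.tensor([1.0, 0.0]))[-1]  # inverse map
print(torch.allclose(x0, x0_rec, atol=1e-4))
\end{verbatim}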
Since their introduction, ODE-Nets have seen improved implementations \cite{rackauckas2019diffeqflux} and enhancements in training and stability \cite{gholami2019anode,zhang2019anodev2}.
Unlike an unconstrained residual block, a Neural ODE on its own does not have universal approximation capability. Consider a continuous, differentiable, invertible function $f(x)=-x$ on $\mathcal{X}=\mathbb{R}$. There is no ODE defined on $\mathbb{R}$ that would result in $x_T = \phi_T(x_0)=-x_0$. Informally, in ODEs, paths $(x_t,t)$ between the initial value $(x_0,0)$ and final value $(x_T,T)$ have to be continuous and cannot intersect in $\mathcal{X} \times \mathbb{R}$ for two different initial values, and paths corresponding to $x \rightarrow -x$ and $0 \rightarrow 0$ would need to intersect. By contrast, in an unconstrained residual block sequence, a discrete dynamical system on $\mathcal{X}$, we do not have continuous paths, only points at unit-time intervals, with an arbitrary transformation between points; finding an unconstrained ResNet for $x \rightarrow -x$ is easy. While ODE-Nets used out-of-the-box have limited modeling capability, some evidence exists that this limitation can be overcome by changing the way ODE-Nets are applied. Yet, the question whether they can be turned into universal approximators remains open.
\subsection{Our Contribution}
We analyze the approximation capabilities of ODE-Nets and i-ResNets. The results most closely related to ours have been recently provided by the authors of ANODE \cite{dupont2019augmented}, who focus on a $p$-ODE-Net followed by a linear layer. They provide counterexamples showing that such an architecture is not a universal approximator of $\mathbb{R}^p \rightarrow \mathbb{R}$ functions. However, they show empirical evidence indicating that expanding the dimensionality and using $q$-ODE-Net for $q>p$ instead of a $p$-ODE-Net has positive impact on training of the model and on its generalization capabilities. The authors of i-ResNet \cite{behrmann2018invertible} also use expanded dimensionality in their experiments, observing that it leads to a modest increase in model's accuracy.
Here, we prove that setting $q=p+1$ is enough to turn a Neural ODE followed by a linear layer into a universal approximator for $\mathbb{R}^p \rightarrow \mathbb{R}$. We show a similar result for i-ResNets. Our main focus is on modeling invertible functions -- homeomorphisms -- by exploring pure ODE-Nets and i-ResNets, not capped by a linear layer. We show a class of $\mathcal{X} \rightarrow \mathcal{X}$ invertible mappings that cannot be expressed by these modeling approaches when they operate within $\mathcal{X}$. We then prove that any homeomorphism $\mathcal{X} \rightarrow \mathcal{X}$, for $\mathcal{X} \subset \mathbb{R}^p$, can be modeled by a Neural ODE / i-ResNet operating on a Euclidean space of dimensionality $2p$ that embeds $\mathcal{X}$ as a linear subspace.
\section{Background on ODEs, Flows, and Embeddings}
This section provides background on invertible mappings and ODEs; we recapitulate standard material, for details see \cite{utz1981embedding,lee2001introduction,brin2002introduction,younes2010shapes}.
\subsection{Flows }
A mapping $h: \mathcal{X} \rightarrow \mathcal{X}$ is a {\em homeomorphism} if $h$ is a one-to-one mapping of $\mathcal{X}$ onto itself, and both $h$ and its inverse $h^{-1}$ are continuous. Here, we will assume that $\mathcal{X} \subset \mathbb{R}^p$ for some $p$, and we will use the term $p$-homeomorphism where dimensionality matters.
A {\em topological transformation group} or a {\em flow} \cite{utz1981embedding} is an ordered triple $(\mathcal{X}, \mathbb{G}, \Phi)$ involving an additive group $\mathbb{G}$ with neutral element 0, and a mapping $\Phi: \mathcal{X} \times \mathbb{G} \rightarrow \mathcal{X}$ such that $\Phi(x,0)=x$ and $\Phi(\Phi(x,s),t)=\Phi(x,s+t)$ for all $x \in \mathcal{X}$, all $s,t \in \mathbb{G}$. Further, mapping $\Phi(x,t)$ is assumed to be continuous with respect to the first argument.
The mapping $\Phi$ gives rise to a parametric family of homeomorphisms $\phi_t: \mathcal{X} \rightarrow \mathcal{X}$ defined as $\phi_t(x)=\Phi(x,t)$, with the inverse being $\phi_t^{-1}=\phi_{-t}$.
Given a flow, an {\em orbit} or a {\em trajectory} associated with $x\in \mathcal{X}$ is a subspace $G(x)=\left\{\Phi(x,t): t\in \mathbb{G} \right\}$. Given $x,y \in \mathcal{X}$, either $G(x)=G(y)$ or $G(x)\cap G(y)=\emptyset$; two orbits are either identical or disjoint, they never intersect. A point $x \in \mathcal{X}$ is a {\em fixed point} if $G(x)=\left\{x \right\}$. A {\em path} is a part of the trajectory defined by a specific starting and end points. A path is a subset of $\mathcal{X}$; we will also consider a {\em space-time path} composed of points $(x_t,t)$ if we need to make the time evolution explicit.
A {\em discrete flow} is defined by setting $\mathbb{G}=\mathbb{Z}$. For arbitrary homeomorphism $h$ of $\mathcal{X}$ onto itself, we easily get a corresponding discrete flow, an iterated discrete dynamical system, $\phi_0(x)=x$, $\phi_{t+1}=h(\phi_t(x))$, $\phi_{t-1}(x)=h^{-1}(\phi_t(x))$. Setting $f(x)=h(x)-x$ gives us a ResNet $x_{t+1}=x_t+f(x_t)$ corresponding to $h$, though not necessarily an i-ResNet, since there is no $\mathrm{Lip}(f)<1$ constraint.
For Neural ODEs, the type of flow that is relevant is a {\em continuous flow}, defined by setting $\mathbb{G}=\mathbb{R}$, and adding an assumption that the family of homeomorphisms, the function $\Phi: \mathcal{X} \times \mathbb{R} \rightarrow \mathcal{X}$, is differentiable with respect to its second argument, $t$, with continuous $\diff \Phi / \diff t$. The key difference compared to a discrete flow is that the {\em flow at time $t$}, $\phi_t(x)$, is now defined for arbitrary $t \in \mathbb{R}$, not just for integers. We will use the term $p$-flow to indicate that $\mathcal{X} \subset \mathbb{R}^p$.
Informally, in a continuous flow the orbits are continuous, and the property that orbits never intersect has consequences for what homeomorphisms $\phi_t$ can result from a flow. Unlike in the discrete case, for a given homeomorphism $h$ there may not be a continuous flow such that $\phi_T=h$ for some $T$. We cannot just set $\phi_T=h$; what is required is a continuous family of homeomorphisms $\phi_t$ such that $\phi_T=h$ and $\phi_0$ is identity, and such a family may not exist for some $h$. In such a case, a Neural ODE would not be able to model $h$. While i-ResNets are discrete, the way they are constructed may also limit the space of mappings they can model to a subset of all homeomorphisms, even if each residual mapping is made arbitrarily complex within the Lipschitz constraint.
\subsection{Continuous Flows and ODEs}
Given a continuous flow $(\mathcal{X}, \mathbb{R}, \Phi)$ one can define a corresponding ODE on $\mathcal{X}$ by defining a vector $V(x) \in \mathbb{R}^p$ for every $x\in \mathcal{X} \subset \mathbb{R}^p$ such that $V(x) = \left. \mpartial \Phi(x,t) / \mpartial t \right|_{t=0}$. Then, the ODE $\mpartial x/ \mpartial t =V(x)$
corresponds to continuous flow $(\mathcal{X}, \mathbb{R}, \Phi)$. Indeed, $\Phi(x_0,T)=x_0 + \int_0^T V(x_t) \diff t$, $\phi_0$ is identity, and $\phi_{(S+T)}(x_0)=\phi_T(\phi_S(x_0))$ for time-independent $V$. Thus, for any homeomorphism family $\Phi$ defining a continuous flow, there is a corresponding ODE that, integrated for time $T$, models the flow at time $T$, $\phi_T(x)$.
The vectors of derivatives $V(x) \in \mathbb{R}^p$ for all $x \in \mathcal{X}$ are continuous over $\mathcal{X}$ and are constant in time, and define a {\em continuous vector field} over $\mathbb{R}^p$. The ODEs evolving according to such a time-invariant vector field, where the right-hand side of eq. \ref{eq:ODE} depends on $x_t$ but not directly on time $t$, are called {\em autonomous ODEs}, and take the form of $\mpartial x / \mpartial t = f_\Theta(x_t)$.
Any {\em time-dependent ODE} (eq. \ref{eq:ODE}) can be transformed into an autonomous ODE by removing time $t$ from being a separate argument of $f_\Theta(x_t,t)$, and adding it as part of the vector $x_t$.
Specifically, we add an additional dimension\footnote{To avoid confusion with $x_t$ indicating time, we use $x[i]$ to denote $i$-th component of vector $x$. } $x[\tau]$ to vector $x$, with $\tau=p+1$. We equate it with time, $x[\tau]=t$, by including $\diff x[\tau] / \diff t = 1$ in the definition of how $f_\Theta$ acts on $x_t$, and including $x_0[\tau] = 0$ in the initial value $x_0$. In defining $f_\Theta$, explicit use of $t$ as a variable is being replaced by using the component $x[\tau]$ of vector $x_t$. The result is an autonomous ODE.
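In code, the augmentation amounts to appending one coordinate with unit derivative; a sketch (our own, assuming $f$ takes the state and time as arguments):
\begin{verbatim}
import numpy as np

def autonomize(f, p):
    # turn dx/dt = f(x, t) on R^p into dz/dt = g(z) on R^{p+1},
    # where z = [x, tau] and tau tracks time
    def g(z):
        x, tau = z[:p], z[p]
        return np.concatenate([f(x, tau), [1.0]])  # d tau/dt = 1
    return g

x0 = np.array([0.3, -1.2])
z0 = np.concatenate([x0, [0.0]])   # initial value: tau_0 = 0
g = autonomize(lambda x, t: -x * np.sin(t), p=2)
print(g(z0))                       # last component is always 1
\end{verbatim}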
Given time $T$ and an ODE defined by $f_\Theta$, $\phi_T$, the flow at time $T$, may not be well defined, for example if $f_\Theta$ diverges to infinity along the way. However, if $f_\Theta$ is well behaved, the flow will exist at least locally around the initial value.
Specifically, the Picard-Lindel{\"o}f theorem states that if an ODE is defined by a Lipschitz-continuous function $f_\Theta(x_t)$, then there exists $\varepsilon > 0$ such that the flow at time $T$, $\phi_T$, is well-defined and unique for $-\varepsilon < T < \varepsilon$. If it exists, $\phi_T$ is a homeomorphism, since the inverse exists and is continuous; simply, $\phi_{-T}$ is the inverse of $\phi_T$.
\subsection{Flow Embedding Problem for Homeomorphisms}
\label{sec:emb}
Given a $p$-flow, we can always find a corresponding ODE. Given an ODE, under mild conditions, we can find a corresponding flow at time $T$, $\phi_T$, and it necessarily is a homeomorphism.
Is the class of $p$-flows equivalent to the class of $p$-homeomorphisms, or only to its subset? That is, given a homeomorphism $h$, does a $p$-flow such that $\phi_T=h$ exist? This question is referred to as the problem of embedding the homeomorphism into a flow.
For a homeomorphism $h: \mathcal{X} \rightarrow \mathcal{X}$, its {\em restricted embedding into a flow} is a flow $(\mathcal{X}, \mathbb{R}, \Phi)$ such that $h(x) = \Phi(x,T)$ for some $T$; the flow is restricted to be on the same domain as the homeomorphism. Studies of homeomorphisms on simple domains such as a 1D segment \cite{fort1955embedding} or a 2D plane \cite{andrea1965homeomorphisms} showed that a restricted embedding does not always exist.
An {\em unrestricted embedding into a flow} \cite{utz1981embedding} is a flow $(\mathcal{Y}, \mathbb{R}, \Phi)$ on some space $\mathcal{Y}$ of dimensionality higher than $p$. It involves a homeomorphism $g: \mathcal{X} \rightarrow \mathcal{Z}$ that maps $\mathcal{X}$ into some subset $\mathcal{Z} \subset \mathcal{Y}$, such that the flow on $\mathcal{Y}$ results in mappings on $\mathcal{Z}$ that are equivalent to $h$ on $\mathcal{X}$ for some $T$, that is, $g(h(x)) = \Phi(g(x),T)$. While a solution to the unrestricted embedding problem always exists, it involves a smooth, non-Euclidean manifold $\mathcal{Y}$. For a homeomorphism $h: \mathcal{X} \rightarrow \mathcal{X}$, the manifold $\mathcal{Y}$, variously referred to as the twisted cylinder \cite{utz1981embedding}, or a suspension under a ceiling function \cite{brin2002introduction}, or a mapping torus \cite{browder1966manifolds}, is a quotient space $\mathcal{Y} = \mathcal{X} \times [0,1] / \sim$ defined through the equivalence relation $(x,1) \sim (h(x),0)$.
The flow that maps $x$ at $t=0$ to $h(x)$ at $t=1$ and $h(h(x))$ at $t=2$ involves trajectories in $\mathcal{X} \times [0,1] / \sim$ in the following way: for $t$ going from 0 to 1, the trajectory tracks in a straight line from $(x,0)$ to $(x,1)$; in the quotient space $(x,1)$ is equivalent to $(h(x),0)$. Then, for $t$ going from 1 to 2, the trajectory proceeds from $(h(x),0)$ to $(h(x),1) \sim (h(h(x)),0)$.
The fact that the solution to the unrestricted embedding problem involves a flow on a non-Euclidean manifold makes applying it in the context of gradient-trained ODE-Nets difficult.
\section{Approximation of Homeomorphisms by Neural ODEs}
In exploring the approximation capabilities of Neural ODEs for $p$-homeomorphisms, we will assume that the neural network $f_\Theta(x_t)$ on the right hand side of the ODE is a universal approximator and, if needed, can be made large enough to closely approximate any desired function. Thus, our concern is with what flows can be modeled by a $q$-ODE-Net assuming that $f_\Theta(x_t)$ can have arbitrary internal architecture, including depth and dimensionality, as long as its input-output dimensionality remains fixed at $q$. We consider two scenarios, $q=p$, and $q>p$.
\subsection{Restricting the Dimensionality Limits Capabilities of Neural ODEs}
We show a class of functions that a Neural ODE cannot model, a class that generalizes the $x \rightarrow -x$ one-dimensional example.
\begin{theorem}
\label{thm:ODEnogoND}
Let $\mathcal{X}=\mathbb{R}^p$, and let $\mathcal{Z} \subset \mathcal{X}$ be a set that partitions $\mathcal{X}$ into two or more disjoint, connected subsets $C_i$, for $i=1,\ldots,m$. Consider a mapping $h: \mathcal{X} \rightarrow \mathcal{X}$ that
\begin{itemize}
\item is an identity transformation on $\mathcal{Z}$, that is, $\forall z \in \mathcal{Z}, h(z)=z$,
\item maps some $x \in C_i$ into $h(x) \in C_j$, for $i \neq j$.
\end{itemize}
Then, no $p$-ODE-Net can model $h$.
\begin{proof}
A $p$-ODE-Net can model $h$ if a restricted flow embedding of $h$ exists. Suppose that it does, that is, a continuous flow $(\mathcal{X}, \mathbb{R}, \Phi)$ can be found for $h$ such that the trajectory of $\Phi(x,t)$ is continuous on $t \in [0,T]$ with $\Phi(x,0)=x$ and $\Phi(x,T) = h(x)$ for some $T \in \mathbb{R}$, for all $x \in \mathcal{X}$.
If $h$ maps some $x \in C_i$ into $h(x) \in C_j$, for $i \neq j$, the trajectory from $\Phi(x,0) = x \in C_i$ to $\Phi(x,T) = h(x) \in C_j$ crosses $\mathcal{Z}$ -- there is $z \in \mathcal{Z}$ such that $\Phi(x,\tau)=z$ for some $\tau \in (0,T)$. From uniqueness and reversibility of ODE trajectories, we then have $\Phi(z,-\tau)=x$. From additive property of flows, we have $\Phi(z,T-\tau)=h(x)$.
Since $h$ is identity over $\mathcal{Z}$ and $z \in \mathcal{Z}$, we have $h(z) = \Phi(z,T) = \Phi(z,0) = z$. That is, the trajectory over time $T$ is a closed curve starting and ending at $z$, and $\Phi(z,t)=\Phi(z,T+t)$ for any $t \in \mathbb{R}$. Specifically, $\Phi(z,T-\tau)=\Phi(z,-\tau)=x$. Thus, $h(x)=x$. We arrive at a contradiction with the assumption that $x$ and $h(x)$ are in two disjoint subsets of $\mathbb{R}^p$ separated by $\mathcal{Z}$. Thus, no $p$-ODE-Net can model $h$.
\end{proof}
\end{theorem}
The result above shows that Neural ODEs applied in the most natural way, with $q=p$, are severely restricted in the way distinct regions of the input space can be rearranged in order to learn and generalize from the training set, and the restrictions go well beyond requiring invertibility and continuity.
\subsection{Neural ODEs with Extra Dimensions are Universal Approximators for Homeomorphisms}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{fig1}
\caption{Trajectories in $\mathbb{R}^{2p}$ that embed an $\mathbb{R}^p \rightarrow \mathbb{R}^p$ homeomorphism, using $f(\tau)=(1-\cos \pi \tau)/2$ and $g(\tau)=(1-\cos 2 \pi \tau)$. Three examples for $p=1$ are shown, including the mapping $h(x)=-x$ that cannot be modeled by Neural ODE on $\mathbb{R}^p$, but can in $\mathbb{R}^{2p}$.}
\label{fig:torus}
\end{figure*}
If we allow the Neural ODE to operate on a Euclidean space of dimensionality $q>p$, we can approximate an arbitrary $p$-homeomorphism $\mathcal{X} \rightarrow \mathcal{X}$, as long as $q$ is high enough. Here, we show that it suffices to take $q=2p$.
We construct a mapping from the original problem space, $\mathcal{X} \subset \mathbb{R}^p$, into $\mathbb{R}^{2p}$ that \footnote{We use the superscript $x^{(p)}$ to denote the dimensionality of vectors; that is, $0^{(p)}\in \mathbb{R}^p$.}
\begin{itemize}
\item preserves $\mathcal{X}$ as a $p$-dimensional linear subspace consisting of vectors $[x,0^{(p)}]$,
\item leads to an ODE that maps $[x,0^{(p)}] \rightarrow [h(x),0^{(p)}]$.
\end{itemize}
Thus, we provide a solution with a structure that is convenient for out-of-the-box training and inference using Neural ODEs -- it is sufficient to add $p$ zeros to input vectors.
\begin{theorem}
\label{thm:main}
For any homeomorphism $h: \mathcal{X} \rightarrow \mathcal{X}$, $\mathcal{X} \subset \mathbb{R}^p$, there exists a $2p$-ODE-Net $\phi_T: \mathbb{R}^{2p} \rightarrow \mathbb{R}^{2p}$ for $T=1$ such that $\phi_T([x,0^{(p)}]) = [h(x),0^{(p)}]$ for any $x \in \mathcal{X}$.
\begin{proof}
We prove the existence in a constructive way, by showing a vector field in $\mathbb{R}^{2p}$, and thus an ODE, with the desired properties.
We start with the extended space $(x,\tau)$ with a variable $\tau$ corresponding to time added as the last dimension, as in the construction of an autonomous ODE from time-dependent ODE. We then define a mapping $y(x,\tau): \mathbb{R}^p \times \mathbb{R} \rightarrow \mathbb{R}^{2p}$ that will represent paths starting from $x$ at time $\tau=0$. For $\tau \in [0,1]$, the mapping (see Fig. \ref{fig:torus}) is defined through
\begin{align}
y(x,\tau)&=\left[ x + f(\tau)\delta_x, \delta_x g(\tau) \right]. \label{eq:mapping}
\end{align}
For each $x$, let $\delta_x \in \mathbb{R}^p$ be defined as $\delta_x = h(x)-x$. The functions $f,g: \mathbb{R} \rightarrow \mathbb{R}$ are required to have continuous first derivative, and have $f(0)=0$, $f(1)=1$, $g(\tau)=0$ iff $\tau \in \mathbb{Z}$, and the derivatives $\mpartial f / \mpartial \tau$ and $\mpartial g / \mpartial \tau$ are null at $\tau \in \mathbb{Z}$ and only there.
The mapping indeed just adds $p$ dimensions of 0 to $x$ at time $\tau=0$, and at time $\tau=1$ it gives the result of the homeomorphism applied to $x$, again with $p$ dimensions of 0
\begin{align*}
y(x,0)&=[x , 0^{(p)}], \\
y(x,1)&=[x +\delta_x, 0^{(p)}]=[h(x), 0^{(p)}]= y(h(x),0).
\end{align*}
For the purpose of constructing an ODE-Net with universal approximation capabilities, $\tau \in [0,1]$ suffices. However, more generally we can define the mapping for $\tau \notin [0,1]$, by setting $y(x,\tau)=y(h^{(\floor{\tau})}(x),\tau - \floor{\tau})$; for example, $y(x,-1.75)=y(h^{-1}(h^{-1}(x)),0.25)$. Intuitively, the mapping $y(x,\tau)$ will provide the position in $\mathbb{R}^{2p}$ of the time evolution for duration $\tau$ of an ODE on $\mathbb{R}^{2p}$ starting from a position corresponding to $x$.
For two distinct $x,x' \in \mathbb{R}^p$, the paths in $\mathbb{R}^{2p}$ given by eq. \ref{eq:mapping} do not intersect at the same position at the same point in time. First, consider the case where $\delta_x$ is not parallel to $\delta_{x'}$. Then, the second set of $p$ variables is equal only if $g(\tau)=0$, that is, only at integer $\tau$. At those time points, the first set of $p$ variables takes iterates of $h$, that is, $...,h^{-1}(x), x, h(x),h(h(x)),...$, which are different for different $x$. Second, consider $x,x'$ such that $\delta_{x'} = c \delta_x$ for some $c$. Then, either $c \neq 1$ and thus $g(\tau) \neq c g(\tau)$ for all non-integer $\tau$, that is, the second set of $p$ variables are always different except at $\tau$ corresponding to iterates of $h$, which are distinct; or $c=1$, and the second set of $p$ variables are always the same. In the latter case, also $f(\tau)\delta_x = f(\tau)\delta_{x'}$, hence the first set of variables, $x+f(\tau)\delta_x$, is only equal to $x'+f(\tau)\delta_{x'}$ if $x=x'$. Thus, in $\mathbb{R}^{2p}$, paths starting from two distinct points do not intersect at the same point in time. Intuitively, we have added enough dimensions to the original space so that we can reroute all trajectories without intersections.
We have $\tau$ correspond directly to time, that is, $\mpartial \tau / \mpartial t = 1$.
The mapping $y$ has continuous derivative with respect to $t$, defining a vector field over the image of $y$, a subset of $\mathbb{R}^{2p}$
\begin{align*}
\frac{\mpartial y}{\mpartial t}&=\left[ \frac{\mpartial f}{\mpartial t} \delta_x, \frac{\mpartial g}{\mpartial t} \delta_x \right].
\end{align*}
From the conditions on $f,g$, we can verify that this time-dependent vector field defined through derivatives of $y(x,\tau)$ with respect to time has the same values for $\tau=0$ and $\tau=1$ for any $x$
\begin{align*}
\frac{\mpartial y}{\mpartial t} \left(x,0\right)=[ 0^{(p)}, 0^{(p)}]=\frac{\mpartial y}{\mpartial t} \left(x,1\right)=\frac{\mpartial y}{\mpartial t} \left(h(x),0\right)
\end{align*}
Thus, the vector field is well-behaved at $y(x,1)=y(h(x),0)$; it is
continuous over the whole image of $y$.
%
The vector field above is defined over the image of $y$, a closed subset of $\mathbb{R}^{2p}$, and can be (see \cite{lee2001introduction}, Lemma 8.6) extended to the whole $\mathbb{R}^{2p}$. A $(2p)$-ODE-Net with a universal approximator network $f_\Theta$ on the right hand side can be designed to approximate the vector field arbitrarily well. The resulting ODE-Net approximately maps $[x,0^{(p)}]$ to $[h(x),0^{(p)}]$.
\end{proof}
\end{theorem}
Based on the above result, we now have a simple method for training a Neural ODE to approximate a given continuous, invertible mapping $h$ and, for free, obtain also its continuous inverse $h^{-1}$. On input, each sample $x$ is augmented with $p$ zeros. For a given $x$, the output of the ODE-Net is split into two parts. The first $p$ dimensions are connected to a loss function that penalizes deviation from $h(x)$. The remaining $p$ dimensions are connected to a loss function that penalizes for any deviation from 0. Once the network is trained, we can get $h^{-1}$ by using an ODE-Net with $-f_\Theta$ instead of $f_\Theta$ used in the trained ODE-Net.
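The construction from the proof of Theorem~\ref{thm:main} can be traced numerically for $h(x)=-x$ with $p=1$, using the choices $f(\tau)=(1-\cos \pi\tau)/2$ and $g(\tau)=1-\cos 2\pi\tau$ of Fig.~\ref{fig:torus}; a sketch (outputs hold up to floating-point rounding):
\begin{verbatim}
import numpy as np

def y(x, tau, h=lambda x: -x):
    # the path of eq. (mapping) in R^{2p}, here with p = 1
    delta = h(x) - x
    f = (1.0 - np.cos(np.pi * tau)) / 2.0
    g = 1.0 - np.cos(2.0 * np.pi * tau)
    return np.array([x + f * delta, g * delta])

print(y(1.0, 0.0))  # [ 1.  0.]: the embedded input [x, 0]
print(y(1.0, 0.5))  # [ 0. -4.]: intermediate point, off the subspace
print(y(1.0, 1.0))  # [-1.  0.]: the output [h(x), 0]
\end{verbatim}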
\section{Approximation of Homeomorphisms by i-ResNets}
\subsection{Restricting the Dimensionality Limits Capabilities of i-ResNets}
We show that, similarly to Neural ODEs, i-ResNets cannot model the simple homeomorphism $x \rightarrow -x$ on $\mathbb{R}$, indicating that their approximation capabilities are limited.
\begin{theorem}
\label{thm:iResNetMinusX}
Let $F_n(x) = (I+f_n)\circ (I+f_{n-1}) \circ \cdots \circ (I+f_{1}) (x)$ be an $n$-layer i-ResNet, and let $x_0 = x$ and $x_n=F_n(x_0)$. If $\mathrm{Lip}(f_i) < 1$ for all $i = 1,...,n$, then there is no number $n \geq 1$ and no functions $f_i$, $i = 1,...,n$, such that $x_n = -x_0$ for all $x_0 \in \mathbb{R}$.
\begin{proof}
Consider $a_0 \in \mathbb{R}$ and $b_0=a_0+\delta_0$. Then, $a_1 = a_0 + f_1(a_0)$ and $b_1=a_0+\delta_0 + f_1(a_0+\delta_0)$. From $\mathrm{Lip}(f_1) < 1$ we have that $|f_1(a_0+\delta_0)- f_1(a_0)| < |\delta_0|$. Let $\delta_1 =b_1 - a_1$. Then, we have
\begin{align*}
\delta_1 &= a_0 + \delta_0 + f_1(a_0+\delta_0) - a_0 - f_1(a_0) \\
& = \delta_0 + f_1(a_0+\delta_0) - f_1(a_0), \\
\delta_1 & > \delta_0 - |\delta_0|, \\
\delta_1 & < \delta_0 + |\delta_0|.
\end{align*}
That is, $\delta_1$ has the same sign as $\delta_0$. Thus, applying the reasoning to arbitrary $i$, $i+1$ instead of $0,1$, if $a_i < b_i$, then $a_{i+1} < b_{i+1}$, and if $a_i > b_i$, then $a_{i+1} > b_{i+1}$, for all $i= 0,...,n-1$. For $F_n$ to map every $x$ into $-x$ we would need $F_n(0)=0$; but then $F_n(x) > 0$ for any $x>0$, so $F_n$ cannot map $x$ into $-x$.
\end{proof}
\end{theorem}
The result above leads to a more general observation about paths in spaces of dimensionality higher than one. As with ODE-Nets, we will use $p$-i-ResNet to denote an i-ResNet operating on $\mathbb{R}^p$.
\begin{corollary}
\label{col:ResNetPaths}
Let the straight line connecting $x_t \in \mathbb{R}^p$ to $x_{t+1}=x_t+f(x_t) \in \mathbb{R}^p$ be called an {\em extended path} $x_t \rightarrow x_{t+1}$ of a time-discrete topological transformation group on $\mathcal{X} \subset \mathbb{R}^p$. In a $p$-i-ResNet, for $x_t \neq x'_t$, extended paths $x_t \rightarrow x_{t+1}$ and $x'_t \rightarrow x'_{t+1}=x'_t+f(x'_t)$ do not intersect.
\begin{proof}
For two extended paths to intersect, vectors $x_t,x'_t, x_{t+1}, x'_{t+1}$ have to be co-planar. If we restrict attention to dynamics starting from $x_t,x'_t$, we can view it as a one-dimensional system, with the space axis parallel to the line through $x_t$ and $x'_t$, and the time axis orthogonal to it.
First, consider the situation where the line connecting $x_t,x'_t$ is parallel, in the original $\mathbb{R}^p$ space, to the line connecting $x_{t+1}, x'_{t+1}$. Applying Theorem \ref{thm:iResNetMinusX} shows that if $x'_t$ is above $x_t$, then $x'_{t+1}$ is above $x_{t+1}$; extended paths do not intersect in this case.
In the more general case, without loss of generality assume that $x'_{t+1}$ is the one extending farther from the line through $x_t$ and $x'_t$ than $x_{t+1}$. We can construct another i-ResNet, keeping $x_t \rightarrow x_t + f(x_t)$ but with $x'_t \rightarrow x'_t + cf(x'_t)$, for $c<1$ such that we arrive at the parallel case above; $\mathrm{Lip}(cf)<1$. The intersection of extended paths for both i-ResNets, if it exists, is at the same position. But the second i-ResNet is the parallel case analyzed above, with no intersection.
\end{proof}
\end{corollary}
The result allows us to show that i-ResNets face a similar constraint on their capabilities as Neural ODEs do.
\begin{theorem}
Let $\mathcal{X}=\mathbb{R}^p$, and let $\mathcal{Z} \subset \mathcal{X}$ and $h: \mathcal{X} \rightarrow \mathcal{X}$ be the same as in Theorem \ref{thm:ODEnogoND}. Then, no $p$-i-ResNet can model $h$.
\begin{proof}
Consider a $T$-layered i-ResNet on $\mathcal{X}$, giving rise to extended space-time paths in $\mathcal{X} \times [0,T]$, with integer $t \in [0,T]$ corresponding to activations in subsequent layers. For any $x \in \mathcal{Z}$, the extended path in $\mathcal{X} \times [0,T]$ starts at $(x,0)$ and ends at $(x,T)$. Since i-ResNet layers are continuous transformations, the union of all extended paths arising from $\mathcal{Z}$ is a simply connected subset of $\mathcal{X} \times [0,T]$; it has no holes and partitions $\mathcal{X} \times [0,T]$ into separate regions. Since extended paths cannot intersect, $(x,T)$ remains in the same region as $(x,0)$, which is in contradiction with the mapping $h$.
\end{proof}
\end{theorem}
The proof shows that the limitation in capabilities of the two architectures for invertible mappings analyzed here arises from the fact that paths in invertible mappings constructed through NeuralODEs and i-ResNets are not allowed to intersect and from continuity in $\mathcal{X}$.
\subsection{i-ResNets with Extra Dimensions are Universal Approximators for Homeomorphisms}
Similarly to Neural ODEs, expanding the dimensionality of the i-ResNet from $p$ to $2p$ by adding zeros on input guarantees that any $p$-homeomorphism can be approximated, as long as its Lipschitz constant is finite and an upper bound on it is known during i-ResNet architecture construction.
\begin{theorem}
\label{thm:mainResNet}
For any homeomorphism $h: \mathcal{X} \rightarrow \mathcal{X}$, $\mathcal{X} \subset \mathbb{R}^p$ with $\mathrm{Lip}(h) \leq k$, there exists a $2p$-i-ResNet $\phi: \mathbb{R}^{2p} \rightarrow \mathbb{R}^{2p}$ with $\lfloor k+4 \rfloor$ residual layers such that $\phi([x,0^{(p)}]) = [h(x),0^{(p)}]$ for any $x \in \mathcal{X}$.
\begin{proof}
Given the homeomorphism $h$ with $\mathrm{Lip}(h) \leq k$, define a possibly non-invertible mapping $\delta(x) = (h(x)-x)/T$, where $T=\lfloor k+1 \rfloor$; we have $\mathrm{Lip}(\delta(x)) < 1$. An i-ResNet that models $h$ using $T+3$ layers $\phi_i$ for $i=0,...,T+2$ can be constructed in the following way:
\begin{align*}
\phi_0([x,0]) &\rightarrow [x,0]+[0, \delta(x) ], \\
\phi_{i}([z,y]) &\rightarrow [z,y]+[yT/(T+1), 0] \;\; i=1,...,T+1, \\
\phi_{T+2}([h(x),\delta(x) ]) &\rightarrow [h(x),\delta(x) ]+[0, -\delta(x)]
\end{align*}
The first layer maps $x$ into $\delta(x)$ and stores it in the second set of $p$ activations. The subsequent $T+1$ layers progress in a straight line from $[x,\delta(x)]$ to $[h(x),\delta(x)]$ in $T+1$ constant-length steps, and the last layer restores null values in the second set of $p$ activations.
All layers are continuous mappings.
The residual part of the first layer has Lipschitz constant below one, since $\mathrm{Lip}(\delta(x)) < 1$.
The middle layers have residual part constant in $z$ and contractive in $y$, with Lipschitz constant below one.
The residual part of the last layer is a mapping of the form $[h,\delta] \rightarrow [0,-\delta]$.
For a pair $x,x' \in \mathcal{X}$, let $h=h(x), h'=h(x')$, $\delta=\delta(x), \delta'=\delta(x')$. We have $\norm{[0,-\delta]-[0,-\delta']} = \norm{\delta-\delta'} \leq \norm{[h,\delta]-[h',\delta']}$, with equality only if $h=h'$. From invertibility of $h(x)$ we have that $h=h'$ implies $x=x'$ and thus $\delta=\delta'$; hence, the residual part of the last layer also has Lipschitz constant below one.
\end{proof}
\end{theorem}
The theoretical construction above suggests that while on the order of $k$ layers may be needed to approximate an arbitrary homeomorphism $h(x)$ with $\mathrm{Lip}(h) \leq k$, only the first and last layers depend on $h(x)$ and need to be trained; the middle layers are simple, fixed linear layers. The last layer for $x \rightarrow h(x)$ is the same as the first layer of a network for $h(x) \rightarrow x$ would be, that is, the inverse of the first layer; but since i-ResNets construct invertible mappings $x \rightarrow x + f(x)$ using possibly non-invertible $f(x)$, it has to be trained along with the first layer.
The construction for i-ResNets is similar to that for NeuralODEs, except one does not need to enforce differentiability in the time domain, hence we do not need smooth accumulation and removal of $\delta(x)$ in the second set of $p$ activations, and the movement from $x$ to $h(x)$ in the original $p$ dimensions does not need to be smooth. In both cases, the transition from $x$ to $h(x)$ progresses along a straight line in the first $p$ dimensions, with the direction of movement stored in the second set of $p$ variables.
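The layer construction from the proof of Theorem~\ref{thm:mainResNet} can also be traced numerically. A sketch for $h(x)=-x$ on $\mathbb{R}$ (here we take $T=3$ rather than $\lfloor k+1 \rfloor = 2$ so that $\mathrm{Lip}(\delta)=2/3$ stays strictly below one for this particular $h$):
\begin{verbatim}
import numpy as np

h    = lambda x: -x      # target homeomorphism, Lip(h) = 1
hinv = lambda z: -z
T    = 3
delta = lambda x: (h(x) - x) / T     # Lip(delta) = 2/3 < 1

def iresnet(x):
    z, y = float(x), 0.0
    y += delta(z)                    # phi_0: store the direction
    for _ in range(T + 1):           # phi_1..phi_{T+1}: equal steps
        z += y * T / (T + 1)
    y -= delta(hinv(z))              # phi_{T+2}: restore the nulls
    return z, y

print(iresnet(1.0))  # ~(-1.0, 0.0) up to rounding: [x,0] -> [h(x),0]
\end{verbatim}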
\section{Invertible Networks capped by a Linear Layer are Universal Approximators}
We show that a Neural ODE or an i-ResNet followed by a single linear layer can approximate functions, including non-invertible functions, as well as any traditional feed-forward neural network can. Since networks with shallow-but-wide fully-connected architecture \cite{cybenko1989approximation,hornik1991approximation}, or narrow-but-deep unconstrained ResNet-based architecture \cite{lin2018resnet}, are universal approximators, so are linear layer-capped ODE-Nets and i-ResNets. Consider a function $\mathbb{R}^p \rightarrow \mathbb{R}^r$. For any $(x,y)$ such that $y=f(x)$, the mapping $[x,0]\rightarrow [x,y]$ is a $(p+r)$-homeomorphism, and as we have shown, can be approximated by a $2(p+r)$-ODE-Net or $2(p+r)$-i-ResNet; $y$ can be extracted from the result by a simple linear layer. Through a simple construction, we show that actually using just $p+r$ dimensions is sufficient.
\begin{theorem}
Consider a neural network $F: \mathbb{R}^p \rightarrow \mathbb{R}^r$ that approximates function $f: \mathcal{X} \rightarrow \mathbb{R}^r$ that is Lebesgue integrable for each of the $r$ output dimensions, with $\mathcal{X} \subset \mathbb{R}^p$ being a compact subset. For $q= p+r$, there exists a linear layer-capped $q$-ODE-Net that can perform the mapping $F$. If $f$ is Lipschitz, there also is a linear layer-capped $q$-i-ResNet for $F$.
\begin{proof}
Let $G$ be a neural network that takes input vectors $x^{(q)}=[x^{(p)},x^{(r)}]$ and produces $q$-dimensional output vectors $y^{(q)}=[y^{(p)},y^{(r)}]$, where $y^{(r)}=F(x^{(p)})$ is the desired transformation. $G$ is constructed as follows: use $F$ to produce $y^{(r)}=F(x^{(p)})$, ignore $x^{(r)}$, and always output $y^{(p)}=0$.
%
Consider a $q$-ODE-Net defined through $\mpartial x / \mpartial t = G(x_t)=[0^{(p)},F(x_t^{(p)})]$. Let the initial value be $x_0 = [x^{(p)},0^{(r)}]$. The ODE will not alter the first $p$ dimensions throughout time, hence for any $t$, $F(x_t^{(p)})= y^{(r)}$. After time $T=1$, we will have
\begin{align*}
x_T &= x_0 + \int_0^1 G(x_t) \diff t = [x^{(p)},0^{(r)}] + \int_0^1 [0^{(p)},y^{(r)}] \diff t
\\&= [x^{(p)},F(x^{(p)})].
\end{align*}
Thus, for any $x \in \mathbb{R}^p$, the output $F(x)$ can be recovered from the output of the ODE-Net by a simple, sparse linear layer that ignores all dimensions except the last $r$, which it returns. A similar construction can be used for defining layers of i-ResNet. We define $k$ residual layers, each with residual mapping $[x^{(p)},...] \rightarrow [x^{(p)},...] + [0^{(p)},F(x^{(p)})/k]$. If $\mathrm{Lip}(F) < k$, then the residual mapping $[x^{(p)},...] \rightarrow [0^{(p)},F(x^{(p)})/k]$ has Lipschitz constant below 1.
\end{proof}
\end{theorem}
\section{Experimental Results}
\subsection{i-ResNets}
We tested whether an i-ResNet operating in one dimension can learn to perform the $x \rightarrow -x$ mapping, and whether adding one more dimension has an impact on the ability to learn the mapping. To this end, we constructed a network with five residual blocks. In each block, the residual mapping is a single linear transformation, that is, the residual block is $x_{t+1}=x_t + Wx_t$. We used the official i-ResNet PyTorch package \cite{behrmann2018invertible} that relies on spectral normalization \cite{miyato2018spectral} to limit the Lipschitz constant to less than unity. We trained the network on a set of 10,000 randomly generated values of $x$ uniformly distributed in $[-10,10]$ for 100 epochs, and used an independent test set of 2,000 samples generated similarly.
For the one-dimensional $x \rightarrow -x$ and the two-dimensional $[x,0] \rightarrow [-x,0]$ target mappings, we used MSE as the loss. Adding one extra dimension results in successful learning of the mapping, confirming Theorem \ref{thm:mainResNet}. The test MSE on each output is below $10^{-10}$; the network learned to negate $x$, and to bring the additional dimension back to null, allowing for invertibility of the model. For the i-ResNet operating in the original, one-dimensional space, learning is not successful (MSE of 33.39); the network learned a mapping $x \rightarrow cx$ for a small positive $c$, that is, the mapping closest to negation of $x$ that can be achieved while keeping non-intersecting paths, confirming experimentally Corollary \ref{col:ResNetPaths}.
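For reference, the setup can be reproduced schematically in plain PyTorch (a sketch only: the official package applies additional rescaling to push the Lipschitz constant strictly below one, and hyperparameters differ):
\begin{verbatim}
import torch
from torch import nn

class LinearResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # spectral_norm keeps the spectral norm of W near 1
        self.lin = nn.utils.spectral_norm(
            nn.Linear(dim, dim, bias=False))
    def forward(self, x):
        return x + self.lin(x)

dim = 2                                  # [x, 0] -> [-x, 0]
net = nn.Sequential(*[LinearResidualBlock(dim) for _ in range(5)])
x = torch.rand(10000, 1) * 20 - 10       # x uniform in [-10, 10]
inp = torch.cat([x, torch.zeros_like(x)], dim=1)
target = torch.cat([-x, torch.zeros_like(x)], dim=1)
loss = nn.functional.mse_loss(net(inp), target)  # minimize with Adam
\end{verbatim}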
\begin{figure*}[t]
\centering
\includegraphics[width=0.3\textwidth]{fig2a.png}
\includegraphics[width=0.3\textwidth]{fig2b.png}
\includegraphics[width=0.3\textwidth]{fig2c.png}
\caption{{\bf Left and center:} test set cross-entropy loss, for increasing number $d$ of null channels added to RGB images. For each $d$, the input images have dimensionality $32\times 32\times (3+d)$. Left: ODE-Net with $k$=64 convolutional filters; center: $k$=128 filters. {\bf Right:} Minimum of test set cross-entropy loss across all epochs as a function of $d$, the number of null channels added to input images, for ODE-Nets with different number of convolutional filters, $k$.}
\label{fig:teL}
\end{figure*}
\subsection{Neural ODEs}
We performed experiments to validate if the $q\geq 2p$-dimensions threshold beyond which any $p$-homeomorphism can be approximated by a $q$-ODE-Net can be observed empirically in a practical classification problem. We used the CIFAR10 dataset \cite{krizhevsky2009learning} that consists of 32 x 32 RGB images, that is, each input image has dimensionality of $p=32\times 32\times 3$. We constructed a series of $q$-ODE-Nets with dimensionality $q\geq p$, and for each measured the cross-entropy loss for the problem of classifying CIFAR10 images into one of ten classes. We used the default split of the dataset into 50,000 training and 10,000 test images.
In designing the architecture of the neural network underlying the ODE we followed ANODE \cite{dupont2019augmented}. Briefly, the network is composed of three 2D convolutional layers. The first two convolutional layers use $k$ filters, and the last one uses the number of input channels as the number of filters, to ensure that the dimensionalities of the input and output of the network match. The convolution stack is followed by a ReLU activation function. A linear layer, with softmax activation and cross-entropy loss, operates on top of the ODE block. We used the torchdiffeq package \cite{chen2018neural} and trained on a single NVIDIA Tesla V100 GPU card.
To extend the dimensionality of the space in which the ODE operates, we introduce additional null channels on input, that is, we use input images of the form $32\times 32\times (3+d)$. Then, to achieve $q=2p$, we need $d=3$. We tested $d \in \left\{ 0,...,7\right\}$. To analyze how the capacity of the network interplays with the increases in input dimensionality, we also experimented with varying the number of convolutional filters, $k$, in the layers inside the ODE block.
The results in Fig. \ref{fig:teL} show that networks with small capacity, below 64 filters, behave differently from networks with 64 or more filters. Once the network capacity is high enough, at 64 filters or more, adding dimensions beyond 3, that is, beyond $2p$, results in a slower decrease in test set loss. To quantify whether this slowdown is likely to arise by chance, we calculated the change in test set loss $\ell_d$ as the dimensionality $d$ increases by one, $\delta_d=\ell_d-\ell_{d-1}$, for $d$=1,...,7. We pooled the results from experiments with 64 convolutional filters or more. A two-tailed nonparametric Mann-Whitney U test between $\delta_1,...,\delta_3$ and $\delta_4,...,\delta_7$ shows the change of trend is significant ($p$=.002).
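The test itself is a one-liner in SciPy; a sketch (the loss values below are placeholders for illustration, not our measured losses):
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

# l[d] = min test loss with d added channels, pooled over runs
# with >= 64 filters; placeholder values for illustration only
l = np.array([1.20, 1.10, 1.02, 0.95, 0.93, 0.92, 0.91, 0.90])
d = np.diff(l)                    # delta_d = l_d - l_{d-1}, d=1..7
u, p = mannwhitneyu(d[:3], d[3:], alternative='two-sided')
print(p)
\end{verbatim}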
\section*{Acknowledgments}
T.A. is supported by NSF grant IIS-1453658.
\bibliographystyle{alpha}
\newcommand{\etalchar}[1]{$^{#1}$}
\newcommand{\noopsort}[1]{}
\section{Introduction}
\label{sec:intro}
Pattern recognition is a task primates are generally very good at
while machines are much less so. Examples are the recognition of human
faces or the recognition of handwritten characters. The scientific
disciplines of machine learning and computational learning theory have
taken on the challenge of pattern recognition since the early days of
modern computer science. A wide variety of very sophisticated and
powerful algorithms and tools currently exist \citep{bishop06}. In
this paper we are going back to some of the roots and address the
challenge of learning with networks of simple Boolean logic gates. To
the best of our knowledge, Alan Turing was the first person to explore
the possibility of learning with simple NAND gates in his long
forgotten 1948 paper, which was published much later
\citep{turing48,teuscher01:conn}. One of the earliest attempts to
classify patterns by machine came from \cite{selfridge:58}, and
\cite{neisser60:sciam}. Later, many have explored random logical nets
made up from Boolean or threshold (McCulloch-Pitts) neurons:
\citep{rozonoer69,amari71,aleksander1984,aleksander98,aleksander1973}.
\cite{martland87} showed that it is possible to predict the activity
of a boolean network with randomly connected inputs, if the
characteristics of the boolean neurons can be described
probabilistically. In a second paper, \cite{martland87b} illustrated
how the boolean networks are used to store and retrieve patterns and
even pattern sequences auto-associatively. Seminal contributions on
random Boolean networks came from
\cite{kauffman69,kauffman93,kauffman84} and Weisbuch
\cite{weisbuch89,weisbuch91}.
In 1987, Carnevali and Patarnello
\citep{patarnello87:europhys,carnevali87:europhys}, used {\em
simulated annealing} and in 1989 also {\em genetic algorithms}
\citep{patarnello89b:aleksander} as a global stochastic optimization
technique to train feedforward Boolean networks to solve computational
tasks. They showed that such networks can indeed be trained to
recognize and generalize patterns. \cite{broeck90:physicalreview}
also investigated the learning process in feedforward Boolean networks
and discovered their amazing ability to generalize.
\cite{teuscher07:ddaysrbn} presented
preliminary results that true RBNs, i.e., Boolean networks with
recurrent connections, can also be trained to learn and generalize
computational tasks. They further hypothesized that the performance is
best around the critical connectivity $K = 2$.
In the current paper, we extend and generalize Patarnello and
Carnevali's results to random Boolean networks (RBNs) and use genetic
algorithms to evolve both the network topology and the node transfer
functions to solve a simple task. Our work is mainly motivated by the
application of RBNs in the context of emerging nanoscale electronics
\citep{teuscher09:ijnmc}. Such networks are particularly appealing for
that application because of their simplicity. However, what is lacking
is a solid approach that allows one to train such systems to perform
specific operations. Similar ideas have been explored with non-RBN
building blocks by \cite{tour02} and by \cite{lawson06}. One of the
broader goals we have is to systematically explore the relationship
between generalization and learning (or memorization) as a function of
the system size, the connectivity $K$, the size of the input space,
the size of the training sample, and the type of the problem to be
solved. In the current paper, we restrict ourselves to look at the
influence of the system size $N$ and of connectivity $K$ on the
learning and generalization capabilities. In the case of emerging
electronics, such as, for example, self-assembled nanowire networks used
to compute simple functions, we are interested in finding the smallest
network with the lowest connectivity that can learn how to solve the
task with the least number of patterns presented.
\section{Random Boolean Networks}
\label{sec:rbn}
A {\sl random Boolean network} (RBN)
\cite{kauffman69,kauffman84,kauffman93} is a discrete dynamical system
composed of $N$ nodes, also called {\sl automata}, {\sl elements} or
{\sl cells}. Each automaton is a Boolean variable with two possible
states: $\{0,1\}$, and the dynamics is such that
\begin{equation}
{\bf F}:\{0,1\}^N\mapsto \{0,1\}^N,
\label{globalmap}
\end{equation}
where ${\bf F}=(f_1,...,f_i,...,f_N)$, and each $f_i$ is represented
by a look-up table of $K_i$ inputs randomly chosen from the set of $N$
nodes. Initially, $K_i$ neighbors and a look-up table are assigned to
each node at random. Note that $K_i$ (i.e., the fan-in) can refer to
the {\sl exact} or to the {\sl average} number of incoming connections
per node. In this paper we use $K$ to refer to the average
connectivity.
A node state $ \sigma_i^t \in \{0,1\}$ is updated using its
corresponding Boolean function:
\begin{equation}
\sigma_i^{t+1} = f_i(\sigma_{i_1}^t,\sigma_{i_2}^t, ... ,\sigma_{i_{K_i}}^t).
\label{update}
\end{equation}
These Boolean functions are commonly represented by {\sl look-up
tables} (LUTs), which associate a $1$-bit output (the node's future
state) to each possible $K$-bit input configuration. The table's
out-column is called the {\sl rule} of the node. Note that even though
the LUTs of a RBN map well on an FPGA or other memory-based
architectures, the random interconnect in general does not.
We randomly initialize the states of the nodes (initial condition of
the RBN). The nodes are updated synchronously using their
corresponding Boolean functions. Other updating schemes exist, see for
example \citep{gershenson2003:alife} for an overview. Synchronous
random Boolean networks as introduced by Kauffman are commonly called
$NK$ {\sl networks} or {\sl models}.
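A minimal sketch of one synchronous update step, following Equation~\ref{update}, is given below; the data layout (index arrays for the neighbors and flat rule tables for the LUTs) is our assumption.
\begin{verbatim}
import numpy as np

def rbn_step(state, neighbors, luts):
    # neighbors[i]: indices of the K_i inputs of node i
    # luts[i]: length-2**K_i rule table of node i
    new = np.empty_like(state)
    for i in range(len(state)):
        bits = state[neighbors[i]]
        idx = int(''.join(map(str, bits)), 2) if len(bits) else 0
        new[i] = luts[i][idx]
    return new
\end{verbatim}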
\begin{figure}
\centering \includegraphics[width=.95\textwidth]{network_example}
\caption{Illustration of an $18$-node RBN with 3 input nodes (node
IDs 1, 2, and 3, colored in blue) and 1 output node (node ID 18,
colored in red). The average connectivity is $K=2.5$. The node
rules are commonly represented by {\sl lookup-tables} (LUTs),
which associate a $1$-bit output (the node's future state) to each
possible $K$-bit input configuration. The table's out-column is
commonly called the {\sl rule} of the node.}
\label{fig:network_example}
\end{figure}
The ``classical'' RBN is a closed system without explicit inputs and
outputs. In order to solve tasks that involve inputs and outputs, we
modify the classical model and add $I$ input nodes and designate $O$
nodes as output nodes. The input nodes have no logical function and
simply serve to distribute the input signals to any number of
{randomly chosen} nodes in the network. On the other
hand, the output nodes are just like any other network node, i.e.,
with a Boolean transfer function, except that their state can be read
from outside the network. {The network is constructed
in a random unbiased process in which we pick $L=N\times K$ pairs of
source and destination nodes from the network and connect them with
probability $p=0.5$. This construction results in a binomial
in-degree distribution in the initial network population \citep{Erdos:1959p1849}. The source
nodes can be any of the input nodes, compute nodes, or the output
nodes and the destination nodes can be chosen only from the compute
nodes and the output nodes.} Figure \ref{fig:network_example} shows
an $18$-node RBN with $3$ input nodes and $1$ output node.
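A sketch of this wiring process is given below; it follows the text literally (so that with $p=0.5$ only about half of the $L$ candidate pairs are actually connected), and the node-indexing convention with the $I$ input nodes first is our assumption.
\begin{verbatim}
import numpy as np

def build_links(N, K, I, rng):
    # nodes 0..I-1 are inputs: valid sources, never destinations
    links = []
    for _ in range(int(N * K)):
        if rng.random() < 0.5:          # connect with p = 0.5
            src = rng.integers(0, N)
            dst = rng.integers(I, N)
            links.append((src, dst))
    return links

links = build_links(18, 2.5, 3, np.random.default_rng(0))
\end{verbatim}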
\section{Functional Entropy}
\label{sec:functionalentropy}
Any given network in the space of all possible networks processes
information and realizes a particular function. Naturally, the task
of GAs (or any other search technique) is to only search in the space
of possible networks and to find networks that realize a desired
function, such as for example the even-odd task. Therefore, the
learning capability, with respect to the entire class of functions,
can be interpreted as the frequency of the realization of all
possible functions. In our case, that means the class of Boolean
functions with three inputs, realized by a class of ``computers,'' i.e.,
the Boolean networks. \cite{broeck90:physicalreview} and
\cite{Amirikian:1994p42} investigated the {\em phase volume} of a
function, which they defined as the number of networks that realize a
given function. Thus, the entropy of the functions realized by all
possible networks is an indicator of the richness of the
computational power of the networks. We extend this concept to the
class of random Automata networks $G(N,K)$ characterized using two
parameters: the size of the network $N$ and the average connectivity
$K$. We call this the {\sl functional entropy} of the $NK$ landscape.
Figure~\ref{fig:entropyn20n100} shows the landscape of the functional
entropy for networks of $N=20$ and $N=100$ with an average
connectivity of $0.5\le K\le 8$. To calculate the functional entropy,
we create $10,000$ networks with a given $N$ and $K$. We then
simulate the networks to determine the function each of the networks
is computing. The entropy can then be simply calculated using:
\begin{equation}
S_{G(N,K)}=-\sum_{i} p_i \log_2 p_i.
\end{equation}
Here, $p_i$ is the probability of the function $i$ being realized by
the network of $N$ nodes and $K$ connectivity. For $I=3$, there are
$256$ different Boolean functions. Thus, the maximum entropy of the
space is $8$. This maximum entropy is achievable only if all $256$
functions are realized with equal probability. This is, however, not
the case because the distribution of the functions is not uniform in
general. Also, the space of possible networks cannot be adequately
represented in $10,000$ samples. However, our sampling is good enough
to estimate a comparative richness of the functional entropy of
different classes of networks. For example for $N=20$, the peak of
the entropy in the space of Boolean functions with three inputs lies
at $K=3.5$, whereas for the class of five-input functions, this peak
is at $K=5.5$ (Figures~\ref{fig:eni3n20} and \ref{fig:eni5n20}.) For
$N=100$, the peak of the entropy for three-input and five-input
functions is at $K=2.5$ and $K=3.0$ respectively
(Figures~\ref{fig:eni3n100} and \ref{fig:eni5n100}.) The lower $K$
values of the maximum entropy for larger networks suggest that as $N$
increases, the networks will have their highest capacity in a lower
connectivity range.
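The estimate itself is straightforward; in the sketch below, \texttt{realized\_function} is a placeholder that simulates a sampled network on all $2^I$ input patterns and returns the resulting truth table.
\begin{verbatim}
from collections import Counter
import numpy as np

def functional_entropy(sampled_networks):
    # map each network to the function it realizes
    funcs = [realized_function(net) for net in sampled_networks]
    n = np.array(list(Counter(funcs).values()), dtype=float)
    p = n / n.sum()
    return -np.sum(p * np.log2(p))
\end{verbatim}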
\begin{figure}
\subfigure[$N=20$, $I=3$]{
\includegraphics[width=.495\textwidth]{entropy_i3n20}
\label{fig:eni3n20}
}
\subfigure[$N=20$, $I=5$]{
\includegraphics[width=.495\textwidth]{entropy_i5n20}
\label{fig:eni5n20}
}
\subfigure[$N=100$, $I=3$]{
\includegraphics[width=.495\textwidth]{entropy_I3N100}
\label{fig:eni3n100}
}
\subfigure[$N=100$, $I=5$]{
\includegraphics[width=.495\textwidth]{entropy_I5N100}
\label{fig:eni5n100}
}
\caption{Entropy landscape of $N=20$ and $N=100$ networks. The entropy
is calculated by simulating $10,000$ networks for each $K$. The
maximum entropy for $I=3$ and $I=5$ is $8$ bits and $32$ bits
respectively. Due to the exponential probability distribution and the
inadequacy of sampling over the space of networks, the actual values
are much lower than the theoretical values. However, the position of
the maximum empirical entropy as a function of $K$ is valid due to
unbiased sampling of the space.}
\label{fig:entropyn20n100}
\end{figure}
To study how the maximum attainable functional entropy changes as a
function of $N$, we created networks with sizes $5\le N\le 2000$ and
$0.5\le K\le 8.0$ and determined the maximum of the entropy landscape
as a function of $K$. Figures~\ref{fig:linear_ijaacs} and
\ref{fig:log-log-2_ijaacs} show the scaling of the maximum functional
entropy as a function of $K$ on linear and log-log scales
respectively. As one can see, the data points from the simulations
follow a power-law of the form:
\begin{equation}
K=aN^b+c,
\label{eq:power-law}
\end{equation}
where, $a=14.06$, $b=-0.83$, and $c=2.32$. The solid line in the plots
shows the fitted power-law equation. In
Figure~\ref{fig:log-log-2_ijaacs}, the straight line is the result of
subtracting $c=2.32$ from the equation and from the data points.
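The fit can be reproduced with a standard nonlinear least-squares routine; the sketch below uses placeholder data in place of the measured $(N,K)$ pairs of the maximum-entropy ensembles.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def power_law(N, a, b, c):
    return a * N ** b + c

# placeholder data; substitute the measured maxima
N_vals = np.array([5., 10., 20., 50., 100., 500., 2000.])
K_max = power_law(N_vals, 14.06, -0.83, 2.32)
(a, b, c), _ = curve_fit(power_law, N_vals, K_max,
                         p0=(10.0, -1.0, 2.0))
\end{verbatim}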
\begin{figure}
\centering
\subfigure[Maximum entropy scaling on a linear scale.]{
\includegraphics[width=.69\textwidth]{linear_ijaacs}
\label{fig:linear_ijaacs}
}\\
\subfigure[Maximum entropy scaling on a log-log scale.]{
\includegraphics[width=.69\textwidth]{log-log-2_ijaacs}
\label{fig:log-log-2_ijaacs}
}
\caption{\subref{fig:linear_ijaacs} The connectivity of the maximum
entropy networks scales as a power-law of the system size $N$
according to
Equation~\ref{eq:power-law}. \subref{fig:log-log-2_ijaacs} is
generated by subtracting $c=2.32$ from the data points.}
\label{fig:maxentropyscaling}
\end{figure}
{Studying the functional entropy of the network
ensembles reveals features of the network fitness landscape in the
context of task solving. In Section~\ref{sec:cummeasures}, we will
see how functional entropy explains the result of the cumulative
performance measures.}
\section{Experimental Setup}
\label{sec:expsetup}
We use {\em genetic algorithms} (GAs) to train the RBNs to solve the
even-odd {task}, the mapping task, and the bitwise
AND task. The {\em even-odd task} consists of determining if an
$l-$bit input has an even or an odd number of $1$s in the input. If
the number of $1$s is an odd number, the output of the network must be
$1$, {and} $0$ otherwise. This task is admittedly
rather trivial if one allows for counting the number of $1$s. Also, if
enough links are assigned to a single RBN node, the task can be solved
with a single node since all the combinations can be enumerated in the
look-up table. However, we are not interested to find such trivial
solutions, instead, we look for networks that are able to generalize
well if only a subset of the input patterns is presented during the
training phase. In Section \ref{sec:exp1} we also use the {\em bitwise
AND task}, which does exactly what its name suggests, i.e., form the
logical AND operation bit by bit with two $l-$bit inputs and one
$l-$bit output. The {\em mapping task} is used in Section
\ref{sec:exp2} and consists of a $l-$bit input and an $l-$bit
output. The output must have the same number of $l-$bits as the input,
but not necessarily in the same order. Throughout the rest of the
paper, we use $I$ to refer to the total number of input bits to the
network. For example, the bitwise AND for two $3$-bit inputs is a
problem with $I=6$ inputs.
To apply GAs, we encode the network into a bit-stream that consists of
both the network's adjacency matrix and the Boolean transfer functions
for each node. {We represent the adjacency matrix as a
list of source and destination node IDs for each link. We then append
this list with the look-up tables for each node's transfer
function. Note that the index to the beginning and the end of the
look-up table for each node can be calculated by knowing the node
index and node in-degree.} The genetic operators consist of a
mutation and a one-point crossover operator that are applied to the
genotypes in the network population. The mutation operator picks a
random location in the genome and performs either of the following two
operations, depending on the content of that location:
\begin{enumerate}
\item If the location points to a source or a destination node of a
link, we randomly replace it with a pointer to a new node in the
network.
\item If the location contains a bit in the LUT, we flip that bit.
\end{enumerate}
We perform crossover by choosing a random location in the two genomes
and then exchange the contents of the two genomes split at that
point. We further define a fitness function $f$ and a generalization
function $g$. For an input space $M'$ of size $m'$ and an input sample
$M$ of size $m$ we write: $E_M = \frac{1}{m}\sum_{j\in M}{d(j)}$ with
$f=1-E_M$, where $d(j)$ is the Hamming distance between the network
output for the $j^{th}$ input in the random sample from the input
space and the expected network output for that input. Similarly, we
write: $E_{M'} = \frac{1}{m'}\sum_{j\in M'}{d(j)}$ with $g=1-E_{M'}$,
where $d(j)$ is the Hamming distance between the network output for
the $j^{th}$ input from the entire input space and the expected
network output for that input.
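These definitions translate directly into code; in the sketch below, \texttt{network\_output} is a placeholder for the simulation of the network, and for the single-output tasks $d(j)\in\{0,1\}$.
\begin{verbatim}
import numpy as np

def score(net, inputs, targets):
    # mean Hamming distance E, so f (or g) = 1 - E
    d = [np.sum(network_output(net, x) != y)
         for x, y in zip(inputs, targets)]
    return 1.0 - np.mean(d)
\end{verbatim}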
The simple genetic algorithm we use is as follows (a sketch of the loop is given after the list):
\begin{enumerate}
\item Create a random initial population of $S$ networks.
\item Evaluate the performance of the networks on a random sample of
the input space.
\item Apply the genetic operators to obtain a new population.
\item {For the selection, we use a deterministic
tournament in which pairs of individuals are selected randomly and
the better of the two will make it into the offspring population.}
\item Continue with steps 2 to 4 until at least one of the networks
  achieves a perfect fitness or until $G_{max}$ generations are
  reached.
\end{enumerate}
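The sketch below summarizes the loop; \texttt{random\_network}, \texttt{fitness}, \texttt{crossover}, and \texttt{mutate} are placeholders for the operators described above, and the pairing of tournament winners for crossover is a simplification of our implementation.
\begin{verbatim}
import numpy as np

def evolve(sample, S=50, G_max=500):
    rng = np.random.default_rng()
    pop = [random_network() for _ in range(S)]
    for _ in range(G_max):
        fit = [fitness(net, sample) for net in pop]
        if max(fit) == 1.0:        # perfect training score
            break
        # deterministic binary tournament selection
        parents = [pop[a] if fit[a] >= fit[b] else pop[b]
                   for a, b in rng.integers(0, S, (S, 2))]
        pop = [mutate(crossover(parents[i],
                                parents[(i + 1) % S]))
               for i in range(S)]
    return pop
\end{verbatim}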
To optimize feedforward networks (see Section \ref{sec:exp1}), we have
to make sure that the mutation and crossover operators do not violate
the feedforward topology of the network. We add an order attribute to
each node of the network, and nodes accept connections only from
lower-order nodes.
Since RBNs have recurrent connections, their rich dynamics need to be
taken into account when solving tasks, and in particular interpreting
output signals. Their finite and deterministic behavior guarantees
that a network will fall into a (periodic or fixed point) attractor
after a finite number of steps. The transient length depends on the
network's average connectivity $K$ and the network size $N$
\citep{kauffman93}. For our simulations, we run the networks long
enough until they reach an attractor. Based on \citep{kauffman93}, we
run our networks (with $K<5$) for $2N$ time steps to reach an
attractor. However, due to potentially ambiguous outputs on periodic
attractors, we further calculate the average activation of the output
nodes over a number of time steps equal to the size $N$ of the network
and consider the activity level as $1$ if at least half of the time
the output is $1$, otherwise the activity will be $0$. A similar
technique was used successfully in \citep{teuscher01:conn}.
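In code, this output interpretation reads as follows; the sketch builds on the \texttt{rbn\_step} function given in Section~\ref{sec:rbn}, and the attribute names are our assumptions.
\begin{verbatim}
import numpy as np

def read_output(net, state, N):
    for _ in range(2 * N):       # run through the transient
        state = rbn_step(state, net.neighbors, net.luts)
    activity = np.zeros(len(net.outputs))
    for _ in range(N):           # average on the attractor
        state = rbn_step(state, net.neighbors, net.luts)
        activity += state[net.outputs]
    return (activity / N >= 0.5).astype(int)
\end{verbatim}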
\section{Training and Network Performance Definitions}
\label{sec:definitions}
\cite{patarnello87:europhys} introduced the
notion of {\em learning probability} as a way of describing the
learning and generalization capability of their feedforward
networks. They defined the learning probability as the probability of
the training process yielding a network with perfect generalization,
given that the training achieves perfect fitness on a sample of the
input space.
The learning probability is expressed as a function of the fraction of
the input space, $s=\frac{m}{m'}$, used during the training. To
calculate this measure in a robust way, we run the training process
$r$ times and store both the fitness $f$ and the generalization $g$
values. We define the learning probability as a function of $s$,
$\delta(s)= Pr(g=1|f=1)=\frac{\alpha'(s)}{\alpha(s)}$, where
$\alpha(s)=Pr(f=1)$ is the {\textit{perfect training
likelihood}, i.e., the} probability of achieving a perfect fitness
{($f=1$)} after training, and $\alpha'(s)=Pr(g=1)$ is
the probability of obtaining a perfect fitness in generalization,
{($g=1$)}. In the following sections, we will define
new measures to evaluate the network performance more effectively.
One can say that the probabilistic measures, such as the learning
probability described above, only focus on the perfect cases and hence
describe the performance of the training process rather than the
effect of the training on the network performance. Thus, we define
the {\em mean training score} as $\beta(s)=\frac{1}{r}\sum_r
f_{final}$ and the {\em mean generalization score} as
$\beta'(s)=\frac{1}{r}\sum_r g_{final}$, where {$r$
is the number of evolutionary runs,} and $f_{final}$ and $g_{final}$
are the training fitness and the generalization fitness of the best
networks respectively at the end of training.
To compare the overall network performance for different training
sample sizes, we introduce a {\em cumulative measure} for all four
measures as defined above. The cumulative measure is obtained by a
simple trapezoidal integration \citep{wittaker69} to calculate the area
under the curve for the learning probability, the perfect training
likelihood, the mean generalization score, and the mean training score.
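For each measure this amounts to a one-line numerical integration; for example, with \texttt{s} the sampled fractions of the input space and \texttt{curve} one of the four measures evaluated at those fractions (placeholder arrays below):
\begin{verbatim}
import numpy as np
s = np.linspace(0.1, 1.0, 10)    # training fractions
curve = np.sqrt(s)               # placeholder measure values
cumulative = np.trapz(curve, s)  # area under the curve
\end{verbatim}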
\section{Learning in Feedforward Boolean Networks}
\label{sec:exp1}
The goal of this first experiment was to simply replicate the results that were reported in \citep{patarnello89b:aleksander} with
feedforward Boolean networks. Figure \ref{fig:fig7and8} shows the
learning probability of such networks on the even-odd (RIGHT) and the
bitwise AND task (LEFT) for $K = 2$ networks. We observe that as the
size $I$ of the input space increases, the training process requires a
smaller number of training examples to achieve a perfect learning
probability. For $I=3$, some of the networks can solve a significant
number of patterns without training because the task is too easy. We
have initially determined the GA parameters (see figure legends), such
as the mutation rate and the maximum number of generations,
experimentally, based on how quickly we achieved perfect fitness
on average. We have found the GA to be very robust against parameter
variations for our tasks. These results, shown in Figure
\ref{fig:fig7and8}, directly confirm Patarnello and Carnevali's
\citep{patarnello89b:aleksander} experiments.
\begin{figure}
\centering \includegraphics[width=.495\textwidth]{figure7}
\centering \includegraphics[width=.495\textwidth]{figure8}
\caption{LEFT: The learning probability of feedforward networks on
the bitwise AND task for different input sizes
$I$. $s=\frac{m}{m'}$ is the fraction of the input space used in
training. As $I$ increases, the learning process requires a
smaller fraction of the input space during the training to achieve
a perfect learning probability. RIGHT: The learning probability of
feedforward networks on the even-odd task for various input sizes
$I$. As $I$ increases, the learning process requires a smaller
fraction of the input space during the training to achieve a
perfect learning probability. For $I=3$, some of the networks can
correctly classify a significant number of patterns without
training because the task is too easy. For both plots: $N = 50$,
$K = 2$, $G_{max} = 3000$, initial population size $= 50$,
crossover rate $= 0.6$, mutation rate $= 0.3$. The GA was repeated
over $700$ runs.}
\label{fig:fig7and8}
\end{figure}
\section{Learning in RBNs}
\label{sec:exp2}
Next, we trained recurrent RBNs for the even-odd and the mapping
tasks. Figure \ref{fig:evenodd_1} (LEFT) shows the learning
probability of the {$N=20$ and $K=2.0$} networks on
the even-odd task with different input sizes $I$. While the problem
size increases exponentially with $I$, we observe that despite this
state-space explosion, a higher number of inputs $I$ requires a
smaller fraction of the input space for training the networks to
achieve a high learning probability. Figure \ref{fig:evenodd_1}
(RIGHT) shows the same behavior for the mapping task, however, since
the task is more difficult, we observe a worse generalization
behavior. Also, compared to Figure \ref{fig:fig7and8}, we observe in
both cases that the generalization for recurrent networks is not as
good as for feedforward Boolean networks. In fact, for the studied
input sizes, none of the networks reaches a learning probability of
$1$ without training it on all the patterns. The lower learning
probability in RBNs is mainly due to the larger search space and the
recurrent connections, which lead to long transients and bistable
outputs that need to be interpreted in a particular way. Nevertheless,
studying adaptation and learning in RBNs, i.e., with no constraints on
the network connectivity, keeps our approach as generic as possible.
\begin{figure}
\includegraphics[width=.495\textwidth]{figure1}
\includegraphics[width=.495\textwidth]{figure2}
\caption{LEFT: The learning probability of RBNs on the even-odd task
for different problem sizes: $I=3,4,5,7$. With increasing $I$, the
training process requires a smaller fraction of input space in
order to reach a higher learning probability. $N=20$,
{$K=2.0$}, $G_{max} = 500$, init. population $=
50$, crossover rate $= 0.7$, mutation rate $= 0.0$. We calculate
the data over $400$ runs for all $I$s. RIGHT: The learning
probability of RBNs on the mapping task for $I=3,4,5$. We observe
the same behavior, but the networks generalize even worse because
the task is more difficult. $N=40$, same GA parameters.}
\label{fig:evenodd_1}
\end{figure}
To investigate the effect of the average connectivity $K$ on the
learning probability, we repeat the even-odd task for networks with
$K \in \{1.0, 1.5, 2.0, 2.5, 3.0\}$. The network size was held
constant at $N=20$. In order to describe the training performance, we
defined the {\em perfect training likelihood} measure $\alpha(s)$ as
the probability for the algorithm to be able to train the network
with the given fraction $s$ of the input space (see Section
\ref{sec:definitions} for the definition).
Considering the perfect training likelihood, the results in Figure
\ref{fig:evenodd_2} (RIGHT) show that for networks with subcritical
connectivity $K < 2$, the patterns are harder to learn than with
supercritical connectivity $K>2$. Close to the ``edge of chaos'',
i.e., for $K = 2$ and $K = 2.5$, we see an interesting behavior:
for sample sizes above $40\%$ of the patterns, the perfect training
likelihood increases again. This transition may be related to the
changes in information capacity of the network at $K=2$ and needs
further investigation with different tasks.
The significant difference between the learning probability and the
perfect training likelihood for $s<0.5$ in Figure \ref{fig:evenodd_2}
is due to the small sample size. It is thus very easy for the network
to solve the task correctly, but over all $r$ runs of the experiment,
there is no network that can generalize successfully despite achieving
a perfect training score. Also, according to the definitions in
Section \ref{sec:definitions}, it is not surprising that for a
fraction $s=1$ of the input space, i.e., all patterns are presented,
the learning probability and the perfect training likelihood are
different. Out of $r$ runs, the GA did not find perfect networks for
the task in every run, but if a network solves the training
inputs perfectly, it will also generalize perfectly because in this
case, the training sample includes all possible patterns.
\begin{figure}
\includegraphics[width=.495\textwidth]{figure3_left}
\includegraphics[width=.495\textwidth]{figure3_right}
\caption{LEFT: The learning probability of networks with size $N=15$
and $K \in \{1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.5, 5.0\}$ for the
even-odd task of size $I=5$. Networks with connectivity $K = 1.5,
2, 2.5$ have a higher learning probability. RIGHT: The perfect
training likelihood for the same networks and the same task. For $K \geq 2$, the training patterns are easier to learn.
\label{fig:evenodd_2}
\end{figure}
\section{Mean Generalization and Training Score}
\label{sec:exp4}
Figure \ref{fig:Fig14} shows the learning probability (LEFT) and the
perfect training likelihood (RIGHT) measured as Patarnello and
Carnevali did, i.e., they only counted the number of networks with
perfect generalization scores (see Section
\ref{sec:definitions}). Thus, if a network generalizes only $90\%$ of
the patterns, it is not counted in their score. That means that the
probabilistic measures of performance that we used so far have the
drawback of describing the fitness landscape of the space of possible
networks rather than the performance of a particular network, which we
are more interested in. To address this issue, we introduce a new way
of measuring both the learning and the generalization capability. We
define both of these measures as the average of the generalization and
learning fitness over $r$ runs (see section \ref{sec:definitions}).
Figure \ref{fig:Fig15} shows the generalization (LEFT) and the
training score (RIGHT) with this new measure. As opposed to Carnevali
and Patarnello's work, where higher $K$ led to a lower learning
probability, our results with the new measures for higher $K$ lead to
a higher performance with a better generalization and training score.
Our measures therefore better represent the performance of the
networks with regards to a given task because they also include
networks that can partially solve the task.
\begin{figure}
\includegraphics[width=.495\textwidth]{Fig14_left}
\includegraphics[width=.495\textwidth]{Fig14_right}
\caption{Learning probability (LEFT) and perfect training likelihood
(RIGHT). $I=3, N=15$, even-odd task. Compared to the learning
probability for $I=5$, there is not much difference between the
learning probability of networks with various $K$ for $I=3$
because of the small input space. However, the perfect training
likelihood still increases with $K$. $K$ ranges from 1.0 to 4.9
with 0.1 increments.}
\label{fig:Fig14}
\end{figure}
\begin{figure}
\includegraphics[width=.495\textwidth]{Fig15_left}
\includegraphics[width=.495\textwidth]{Fig15_right}
\caption{The new generalization (LEFT) and training score (RIGHT),
which better reflects the performance of the networks with regards
to a given task. $I=3, N=15$, even-odd task. $K$ ranges from 1.0
to 4.9 with 0.1 increments.}
\label{fig:Fig15}
\end{figure}
\section{Cumulative Measures}
\label{sec:cummeasures}
In all the previous generalization figures, the question arises which
networks are ``better'' than others, in particular if they do not
reach a maximal generalization score when less than $100\%$ of the
patterns are presented. This behavior can be observed in Figure
\ref{fig:evenodd_2} (LEFT) for the even-odd task.
Figure \ref{fig:Fig12} shows the {\em cumulative learning probability}
(LEFT) and the {\em cumulative training likelihood} (RIGHT) determined
by integrating numerically (see Section \ref{sec:definitions} for
definitions) the area under the curves of Figure
\ref{fig:Fig14}. Figure \ref{fig:Fig12} (LEFT) shows that $K$ has no
effect on the generalization and that the generalization capability is
very low. Figure \ref{fig:Fig12} (RIGHT) shows that higher $K$
increases the chance of perfect training, i.e., the network can be
trained to memorize all training patterns. Each cluster of
connectivities in Figure \ref{fig:Fig14} (RIGHT) corresponds to a
``step'' in the curves of Figure \ref{fig:Fig12} (RIGHT).
\begin{figure}
\includegraphics[width=.495\textwidth]{cumLRP}
\includegraphics[width=.495\textwidth]{cumTRP}
\caption{LEFT: Cumulative learning probability. RIGHT: Cumulative
training likelihood. Each data point in this figure corresponds to
the area under the curves shown in Figure \ref{fig:Fig14}. $N=15$
in both figures, even-odd task. As one can see, perfect
memorization is more likely with higher $K$, but perfect
generalization is more likely for near-critical connectivity
$1.5\leq K\leq 3$. The cumulative learning probability and the
perfect training likelihood represent the area under the learning
probability and perfect training likelihood curves respectively
(see Figure \ref{fig:evenodd_2} and \ref{fig:Fig14}, and Section
\ref{sec:definitions}).}
\label{fig:Fig12}
\end{figure}
Figure \ref{fig:Fig13} shows the {\em cumulative generalization score}
(LEFT) and the {\em cumulative training score} (RIGHT) based on the
new measure as introduced in Section \ref{sec:exp4}. We have used the
even-odd task for two input sizes, $I=3$ and $I=5$. We observe that
$K$ has now a significant effect on the generalization score. The
higher $K$, the better the generalization. Moreover, different
intervals of $K$ result in a step-wise generalization score
increase. Figure \ref{fig:Fig13} (RIGHT) shows that the cumulative
training score for higher $K$ increases the chance of perfect
training, i.e., the network can be trained to memorize all training
patterns. Also, the higher the input size $I$, the better the
generalization, which was already observed by Patarnello and Carnevali
(see also Section \ref{sec:exp1}).
{In Section~\ref{sec:functionalentropy} we introduced
the functional entropy as a measure of the computational richness of
a network ensemble. Higher functional entropy
implies that the probability of functions being realized by a
network ensemble is more evenly distributed. Consequently, even if
the target function for the training has a very low probability of
realization in an ensemble with high functional entropy, the
evolutionary process can easily find functions close to the
target function. Therefore, higher functional entropy lends itself
to a higher generalization score. This fact is observable by comparing
Figures~\ref{fig:Fig13} and \ref{fig:entropyn20n100}.}
In summary, we have seen so far that according to our new measures,
higher $K$ networks both generalize and memorize better, but they
achieve perfect generalization less often. The picture is a bit more
complicated, however. Our data also shows that for networks around
$K=1.5$, there are more networks in the space of all possible networks
that can generalize perfectly. For $K>1.5$, the networks have a higher
generalization score on average, but there is a lower number of
networks with perfect generalization. That is because the fraction of
networks with perfect generalization is too small with respect to the
space of all the networks. For $K<1.5$, the networks are hard to train,
but if we manage to do so, they also generalize well.
\begin{figure}
\includegraphics[width=.495\textwidth]{cumMGR}
\includegraphics[width=.495\textwidth]{cumMTR}
\caption{LEFT: Cumulative generalization score. RIGHT: Cumulative
training score. $N=15$ in both figures, even-odd task, $I=3$ and
$I=5$. As one can see, both the network's generalization and the
memorization capacity increase with $K$. The cumulative
generalization and training score represent the area under the
mean generalization and training score curves respectively (see
Figure \ref{fig:Fig15} and Section \ref{sec:definitions}).}
\label{fig:Fig13}
\end{figure}
Figure \ref{fig:Fig10} shows the complete cumulative learning
probability (LEFT) and cumulative training likelihood (RIGHT)
landscapes as a function of $K$ and $N$. We observe that according to
these measures, neither the system size nor the connectivity affects
the learning probability. Also, the networks have a very low learning
probability, as seen in Figure \ref{fig:Fig12}. That means that the
performance of the training method does not depend on the system size
and the connectivity and confirms our hypothesis that Carnevali and
Patarnello's measure is more about the method than the network's
performance.
\begin{figure}
\includegraphics[width=.495\textwidth]{Fig10_left}
\includegraphics[width=.495\textwidth]{Fig10_right}
\caption{LEFT: Cumulative learning probability. RIGHT: Cumulative
training likelihood. Even-odd task.}
\label{fig:Fig10}
\end{figure}
Finally, Figure \ref{fig:Fig11} shows the same data as presented in
Figure \ref{fig:Fig10} but with our own score measures. For both the
cumulative generalization score and the cumulative training score, the
network size $N$ has no effect on the generalization and the training,
at least for this task. However, we see that for the cumulative
generalization score, the higher $K$, the higher the generalization
score. The same applies to the cumulative training score. This
contrasts with what we have seen in Figure \ref{fig:Fig10}.
\begin{figure}
\includegraphics[width=.495\textwidth]{Fig11_left}
\includegraphics[width=.495\textwidth]{Fig11_right}
\caption{LEFT: Cumulative generalization score. RIGHT: Cumulative
training score. Even-odd task.}
\label{fig:Fig11}
\end{figure}
\section{Discussion}
We have seen that Patarnello and Carnevali's measure quantifies the
fitness landscape of the networks rather than the network's
performance. Our newly defined measures applied to RBNs have shown
that higher $K$ networks both generalize and memorize better. However,
our results suggest that for large input spaces and for $K<1.5$ and
$K>3$ networks, the space of the possible networks changes in a way
that makes it difficult to find perfect networks (see Figures
\ref{fig:evenodd_2} and \ref{fig:Fig14}). On the other hand, for
$1.5\leq K <3$, finding the perfect networks is significantly
easier. This is a direct result of the change in the number of
possible networks and the number of networks that realize a particular
task as a function of $K$.
In \citep{lizier08_alife}, Lizier {\em et al.} investigated
information theoretical aspects of phase transitions in RBNs and
concluded that subcritical networks ($K<2$) are more suitable for
computational tasks that require more of an information storage, while
supercritical networks ($K>2$) are more suitable for computations that
require more of an information transfer. The networks at critical
connectivity ($K=2$) showed a balance between information transfer and
information storage. This finding is purely information theoretic and
considers neither inputs and outputs nor actual computational
tasks. In our case, solving the tasks depends on the stable network
states and their interpretations. The results in \citep{lizier08_alife}
do not apply directly to the performance of our networks, but we
believe there is a way to link the findings in future work. Compared
to Lizier {\em et al.}, our experiments show that supercritical
networks do a better job at both memorizing and generalizing. However,
from the point of view of the learning probability, we also observe
that for networks with $1.5\leq K < 3$, we are more likely to find
perfect networks for our specific computational tasks.
{We measured the computational richness of a network
ensemble by using its functional entropy
(Section~\ref{sec:functionalentropy}). In
Section~\ref{sec:cummeasures}, we explained how higher functional
entropy for a network ensemble results in a higher generalization
score. In addition, higher functional
entropy improves the performance of an evolutionary search because
it naturally results in higher fitness diversity in the evolving
population (Figure~\ref{fig:fitstdev}). With a more evenly distributed
probability of functions realized by the individual networks in the
ensemble, it is more likely that individuals in the population
realize different functions, thus diversifying the fitness of the
population. This fitness diversity creates higher gradients that
increase the rate of fitness improvement during the evolution
\citep{PRICE:1972p2444}.}
\begin{figure}
\centering
\includegraphics[width=.50\textwidth]{mean_stdev_fitness}
\caption{{Mean fitness standard deviation of the
population for $I=3$ and $I=5$. $N=20$, $1.0\le K\le 3.0$. An
increase in the functional entropy of the network increases the
diversity of the fitness of the population and creates gradients to guide
the search process (cf. figures~\ref{fig:entropyn20n100} and \ref{fig:Fig11}).}}
\label{fig:fitstdev}
\end{figure}
\section{Conclusion}
In this paper we empirically showed that random Boolean networks can
be evolved to solve simple computational tasks. We have investigated
the learning and generalization capabilities of such networks as a
function of the system size $N$, the average connectivity $K$, problem
size $I$, and the task. We have seen
that the learning probability measure used by
\cite{patarnello87:europhys} was of limited use and have thus
introduced new measures, which better describe what the networks are
doing during the training and generalization phase. The results
presented in this paper are invariant to the training parameters and
are intrinsic to both the learning capability of dynamical automata
networks and the complexity of the computational task. Future work
will focus on the understanding of the Boolean function space, in
particular on the function bias.
\section*{Acknowledgments}
This work was partly funded by NSF grant \# 1028120.
\bibliographystyle{plainnat}
|
2,869,038,156,710 | arxiv | \section{Introduction}
Stars form in molecular clouds with a variety of masses, ranging from high-mass ($>$ 8 M$_\odot$)
down to the low-mass limit of $\sim$ 0.08 M$_\odot$, and most stars emerge as clusters (Lada \& Lada 2003). Despite several observational and theoretical works focusing on the processes involved in the formation and evolution of stars and star clusters,
a number of issues persist. As young star clusters possess young stars of a diverse mass range, formed from the same molecular cloud, they
are particularly suited to enhance our understanding of the physical processes related to star formation, such as whether star
formation is a fast or slow process, what the shape of the initial mass function (IMF) is towards the low-mass end, and what the
total star formation efficiencies are.
However, the presence of massive stars in such systems may significantly influence
the evolution of low-mass stars and subsequent star formation. As soon as the massive stars form,
they tend to ionize the natal cloud and create an expanding H{\sc ii} region. The expanding edge of the
H{\sc ii} region (I-front) interacts with the surrounding cloud and may trigger star formation via
various processes. Thus, young star clusters associated with the H{\sc ii} regions and young stellar objects (YSOs) are ideal sites to study the
influence of massive stars on the formation and evolution of low-mass stars and the processes involved
in triggering star formation.
Observations of the early stages of star formation, i.e., Class~{\sc 0}/{\sc i}/{\sc ii} YSOs, show that these are associated with disk accretion, jets, and outflows.
However, jets and outflows are only present when a young star possesses an accretion disk. The strength of a jet depends on the evolutionary status of the driving source, and a less evolved source
(with a thick circumstellar disk) has more powerful outflows. Hence, the presence of jets and outflows in a star-forming region can be used to identify young stars
that are deeply embedded in the molecular clouds.
As YSOs possess circumstellar accretion disks during their earlier stages of formation,
they also exhibit excess emission in the longer (particularly IR) wavelengths. However, the amount of excess emission depends on different evolutionary classes, i.e., Class~{\sc 0}, Class {\sc i}, Class {\sc ii}, and
Class {\sc iii} (Andr{\'e} 1995). Hence, in general, this property of the young stars is used to identify and classify them
(Lada et al. 2006; Luhman 2012).
As Class~{\sc iii} objects are disk anemic or possess thin disks, they present little or no excess emission in the infrared (IR). Therefore, IR observations are less sensitive for the identification of Class {\sc iii} objects. However, X-ray observations can be utilized to obtain a complete census of young stars,
as both Class {\sc ii} and Class {\sc iii} objects are generally
more luminous at X-ray wavelengths than their main-sequence (MS) counterparts (e.g., Feigelson \& Montmerle 1999; Getman et al. 2012).
Hence, IR and X-ray observations are widely used to identify and characterize a complete census of YSOs in star-forming regions.
{\hspace{-0.65cm}\bf Our target: NGC~1893}\\
NGC~1893 is a young star cluster located in the Auriga OB2 association.
It is associated with the H{\sc ii} region W8 (or Sh2-236) and contains about five `O'-type stars and
several `B'-type stars (Marco \& Negueruela 2002). Recent studies place the cluster at a moderate distance ranging from $\sim$ 3.2 - 3.6 kpc.
Reddening, E($B$ - $V$), toward the cluster direction ranges from 0.4 to 0.6 mag (Sharma et al. 2007, Prisinzano et al. 2011, Lim et al. 2014).
The radial extent of the cluster is found to be $\sim$ 6$^\prime$ (Sharma et al. 2007).
Using different color excess ratios over a wide wavelength range, Lim et al. (2014) found a normal reddening law (R$_V$ = 3.1) towards the cluster.
CO (1-0) observations of the H{\sc ii} region/ cluster complex show that it still consists of molecular gas having V$_{LSR}$ = ($-$7.2 $\pm$ 0.5) km $s^{-1}$ (Blitz et al. 1982).
As NGC~1893 is located in the outer Galaxy and probably formed
in a low-metallicity environment, it is interesting to probe how such a rich star cluster formed there despite
the expected unfavourable conditions for star formation in the outer Galaxy.
The region is associated with two tadpole nebulae, Sim 129 and Sim 130, and several YSOs (Maheswar et al. 2007, Caramazza et al. 2008, 2012, Prisinzano et al. 2011, Lim et al. 2014).
The spatial distribution of the YSOs shows that they have an elongated and aligned
distribution from the cluster center to Sim 129 and Sim 130. An age gradient from the massive stars of the cluster
toward the nebulae is also observed (Pandey et al. 2013, Kuhn, Getman \& Feigelson 2015).
Lata et al. (2012) identified 53 young stellar variables based on $VI$-band time-series photometric observations.
They found
that the rotational period of young stars decreases with stellar age and mass, and the amplitude
in the light curves also declines with the same physical quantities. Based on a cumulative age distribution of
the classical T-Tauri stars (CTTSs) and weak-line T-Tauri stars (WTTSs), Pandey et al. (2013) found
that these are coeval. Hence, the cluster/ H{\sc ii} region complex is an active site of star formation and an ideal target to study the
formation and evolution of the young stellar population.
Though the cluster and associated H{\sc ii} region have been targets of various optical photometric studies, most of these were shallow
(V$\sim$ 21-22 mag) and complete only up to $\sim$ 1 M$_\odot$ (Lim et al. 2014). As low-mass stars outnumber high-mass stars in a star cluster,
information on the low-mass stellar population of the young clusters is essential to infer the cluster properties, star formation histories
and mass function (MF). Deep photometric observations are important tools to probe the faint low-mass stars and study the properties of young clusters.
To characterize the low-mass stars and study star formation in the cluster region, we carried out deep $VI$ band
observations of the region with the 4K$\times$4K CCD $IMAGER$ mounted on the 3.6-m Devasthal Optical Telescope (DOT). Our analysis shows that
the present optical data are $\sim$ 3 mag deeper than those of the previous studies (e.g., Sharma et al. 2007, Lim et al. 2014).
\begin{figure*}
\centering
\includegraphics[scale = 0.68, trim = 0 0 0 10, clip]{fig1_crop.pdf}
\caption{Left panel: DSS2-R band image of the NGC 1893 complex. Diamond symbols represent the locations of `O' type stars
in the region. Right panel: A color-composite view of the central portion ($\sim$ 6$^\prime$.5 $\times$ 6$^\prime$.5 FOV) of the cluster constructed using the DSS2-R (blue), 4K$\times$4K CCD IMAGER V-band (green) and I-band (red) images.}
\label{fig1}
\end{figure*}
\section{Observations \& Data Reduction}
The $VI$-band photometric observations of the central portion of the cluster NGC~1893 were carried out on 2020 October 15 using the
4K$\times$4K CCD $IMAGER$ mounted at the axial ports of the 3.6m DOT (Pandey et al. 2018).
With a plate scale of $\sim$ 0.095 arcsec/ pixel, the CCD covers a field of view of 6$^\prime$.5 $\times$ 6$^\prime$.5.
The observations were carried out in a 2$\times$2 binning mode to enhance the signal to noise ratio.
The images were taken in readout noise and gain mode of 10 e$^{-}$/s and 5 e$^{-}$/ADU, respectively.
The sky conditions were excellent throughout the night, and during the observations the average full-width at half-maximum (FWHM) of the point sources was $\sim$ 0$^{\prime\prime}$.5.
To avoid saturation and contamination of the stellar fluxes by nearby bright stars, we took 12 exposures of
300 s and 210 s in the $V$ and $I$ bands, respectively. Thus, the total integration times for the $V$ and $I$ band images were 1 hour and 42 minutes,
respectively. Along with the object frames, several bias and flat frames were also taken during the same night.
Pre-processing of the object frames (i.e., bias subtraction, flat-fielding, etc.) was done using the $IRAF$
data reduction package. We combined the $V$ and $I$ band images separately using the $imcombine$ task of $IRAF$.
We used the $DAOPHOT - II$ software package (Stetson 1987) for the source detection and the photometric measurements of the detected sources
in the combined images. The point-spread function (PSF) was obtained for each frame using several
uncontaminated stars, and the $ALLSTAR$ task was used to obtain the instrumental magnitudes of the stars in each frame.
The instrumental magnitudes were calibrated using the photometric measurements of the stars in the cluster NGC 1893 by Sharma et al. (2007). For calibration, we restricted the Sharma et al. (2007) catalog to 18$<$V mag$<$20 to avoid the effects of large
photometric errors and saturation of bright stars towards the fainter and brighter ends, respectively. In total, we detected about 1750 stars in both $V$ and $I$ bands. Out of these, about 1510 stars have magnitude uncertainty $<$ 0.1 mag in both $V$ and $I$ bands.
Prisinzano et al. (2011) also carried out optical and infrared (IR) observations of the young cluster NGC~1893 using Device Optimized for LOw RESolution,
a 2K $\times$ 2K CCD camera with a plate scale of 0$^{\prime\prime}$.25 /pixel, mounted on the 3.6-m Telescopio Nazionale Galileo (TNG) at La Palma.
They have reported seeing of $\sim$ 1$^{\prime\prime}$ during the observations.
Though our observations were performed with a telescope of similar size, the excellent atmospheric conditions (FWHM $\sim$ 0$^{\prime\prime}$.5) and the better spatial
resolution of the 4K$\times$4K CCD $IMAGER$ are advantages. We detected sources below $V$ $\sim$ 25 mag; however, considering a magnitude uncertainty of $<$ 0.1 mag in both
the $V$ and $I$ bands, our photometry is limited to $V$ $\sim$ 24 mag. These values are comparable to the $VI$ photometry of the same region
taken with the TNG (see Figure 4 and Figure 8 of Prisinzano et al. 2011). Here, we note that during the observations, the reflectivity of the primary mirror of the 3.6m DOT was $\sim$ 60\%. Hence, with 90\% reflectivity, we expect to reach about 0.5 mag deeper than the present observations.
Fig. 1 (left panel) shows the DSS2-R band image of the cluster NGC~1893 along with the Sim 129 and Sim 130 (toward the north-east).
Diamond symbols represent the locations of massive `O'-type stars taken from SIMBAD. The square represents the area covered with the
4K$\times$4K CCD $IMAGER$ observations (shown in right panel). A color-composite image of the central portion of the cluster (FOV $\sim$ 6$^\prime$.5 $\times$ 6$^\prime$.5) constructed using the DSS2-R (blue), 4K$\times$4K CCD $IMAGER$ $V$(green) and $I$-band (red) images is shown in the right panel of Fig. 1.
\section{Results \& Discussion}
\subsection{Young stellar population in the region}
The spatial distribution of young stars traces the sites of recent/ongoing star formation. Hence, the identification and characterization of young stars in a region are
essential to infer the star formation histories and the physical properties of star clusters and H{\sc ii} regions.
Caramazza et al. (2008) identified 359 YSOs in the cluster NGC 1893 using {\it Spitzer}-IRAC observations. Based on the IRAC color-color
diagram and X-ray emission, these YSOs were characterized as Class~0/{\sc i}, Class~{\sc ii} or Class~{\sc iii} sources. Out of 359 YSOs,
seven sources were classified as Class~0/{\sc i}, 242 Class~{\sc ii} and 110 Class~{\sc iii}. Caramazza et al. (2008)
also reported $\sim$ 460 candidate members that were not detected at 5.8 and 8.0 $\mu$m
and have
[3.6] - [4.5] $>$ 0.3 mag. Therefore, they could not categorize these candidate members as YSOs.
Prisinzano et al. (2011) identified 1034 disk bearing (Class {\sc 0}/{\sc i}/{\sc ii}) sources and about 442 diskless (Class~{\sc iii})
sources in the young cluster NGC 1893 by applying the Q-index method to the data of Caramazza et al. (2008). However, some of their YSO candidates
($\sim$ 170 sources) appeared older than the majority of the young population, which they attributed to edge-on disks and accretion activity. Caramazza et al. (2012)
studied the coronal properties of the young stars in the NGC~1893 region using the $Chandra$ X-ray observations.
\subsection{Optical counterparts of the YSOs candidates}
As young stars possess disks in their early stages and show excess emission at longer wavelengths, the estimation of the physical properties
of young stars using longer-wavelength observations can be problematic. Since optical observations mostly trace the
photospheric emission from young stars, they are best suited to study physical properties, such as age and mass, of the young stars.
In the
present work, we have used the catalog of Caramazza et al. (2008) to guide our analysis and, in particular, to identify the young stellar
population in the central portion of the cluster.
We looked for the optical counterparts of the candidate members (Table 4 of Caramazza et al. 2008) in our optical data using a matching radius of 0.6 arcsec. We found optical counterparts of 226 member stars within the central portion ($\sim$ 6$^\prime$.5 $\times$ 6$^\prime$.5) of the cluster.
Out of these, one source is Class~{\sc 0}/{\sc i} candidate, whereas 106 sources are Class~{\sc ii} and 42 sources are Class~{\sc iii} in nature.
Though the rest of the 77 sources were designated as candidate members by Caramazza et al. (2008), these were detected in only 3.6 $\mu$m
and 4.5 $\mu$m $IRAC$ bands
and hence were not identified as YSOs based on the IRAC color-color diagram. Accordingly, the nature of these
sources remained ambiguous.
We looked for these 77 candidates in the YSO catalog of Prisinzano et al. (2011) and found that 55 of them
were not present in their YSO catalog either. Since we are using deep optical $VI$ photometry of the cluster in the present work, it will be helpful to examine the nature of these 55 candidate members of Caramazza et al. (2008).
We also used the catalog of X-ray sources of Caramazza et al. (2012) to select additional YSO candidates in the cluster. Using a matching radius of
0.6 arcsec, we found optical counterparts of 336 X-ray sources. Out of these, 109 sources were common to those in the catalog of Caramazza et al. (2008). In summary, we found that the evolutionary status of $\sim$ 123 X-ray sources and unclassified candidate members was not previously known.
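Such positional cross-matches can be performed with a short script; a sketch, assuming catalogs with RA/Dec columns in degrees (the variable and column names are hypothetical):
\begin{verbatim}
from astropy.coordinates import SkyCoord
import astropy.units as u

c_opt = SkyCoord(optical['ra'], optical['dec'], unit='deg')
c_x = SkyCoord(xray['ra'], xray['dec'], unit='deg')
idx, sep, _ = c_x.match_to_catalog_sky(c_opt)
matched = sep < 0.6 * u.arcsec  # counterparts within 0.6 arcsec
\end{verbatim}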
\subsection{Kinematic members of the cluster NGC~1893 with Gaia EDR3}
The Gaia mission has revolutionized the kinematic studies of the stars and star clusters by providing rich astrometric
and photometric data. In the present work, we used the Gaia Early Data Release 3 (EDR3; Gaia Collaboration et al. 2020)
catalog to elucidate the kinematics of the stars in the young star cluster NGC~1893. We use Gaia EDR3 data for the sources within the cluster region (radius $\le$ 6 arcmin).
We used the proper motion in RA ($\mu$$_\alpha$$^\star$) and proper motion in declination ($\mu$$_\delta$) for the stars within the cluster radius
to generate the vector point diagram (VPD), where $\mu$$_\alpha$$^\star$ $\equiv$ ${\mu}_{\alpha}\cos(\delta)$.
We retained only those sources with proper motion uncertainties less than 0.5 mas yr$^{-1}$ and G-band magnitude uncertainties less than 0.1 mag.
The VPD for those sources is shown in Fig. 2. The over-density of sources can be easily noticed in Fig. 2. The proper motion of the stars in the cluster region
peaks at $\mu$$_\alpha$$^\star$,$\mu$$_\delta$ $\sim$ -0.30, -1.43 mas yr$^{-1}$.
A circular area of radius 0.7 mas yr$^{-1}$ around the peak of the over-density in the VPD is used to select the probable cluster members, and
the remaining sources in the VPD are considered as field stars.
To determine the membership probability of the stars in the cluster region, we used the approach discussed in Pandey et al. (2020).
Assuming a distance of $\sim$ 3.25 kpc (Sharma et al. 2007) and a radial velocity dispersion of 1 km s$^{-1}$ for open clusters (Girard et al. 1989),
a dispersion ($\sigma$$_{c}$) of $\sim$ 0.06 mas yr$^{-1}$ in the PMs of the cluster can be expected. We calculated $\mu$$_{xf}$ = 0.84 mas yr$^{-1}$,
$\mu$$_{yf}$ = -2.70 mas yr$^{-1}$, $\sigma$$_{xf}$ = 2.29 mas yr$^{-1}$ and $\sigma$$_{yf}$ = 3.76 mas yr$^{-1}$ for the probable field members. These values are further used to construct
the frequency distributions of the cluster stars (${\phi }_{c}^{\nu }$) and field stars (${\phi }_{f}^{\nu }$) by using the equation given in Yadav et al. (2013)
and then the value of membership probability for the $i$$^{th}$ star is calculated using the equation given below:
\begin{equation}
P_\mu(i) = {{n_c\times\phi^\nu_c(i)}\over{n_c\times\phi^\nu_c(i)+n_f\times\phi^\nu_f(i)}}
\end{equation}
\begin{figure}
\centering
\includegraphics[scale = 0.67, trim = 0 0 0 0, clip]{pm1_crop.pdf}
\caption{Proper motion vector-point diagram for the Gaia sources located within the cluster radius. The red triangles show the young stellar candidates having $VI$
photometry in the present work.}
\label{fig2}
\end{figure}
\begin{figure}
\includegraphics[scale = 0.5, trim = 0 0 0 0, clip]{mag_pmu_par_crop.pdf}
\caption{Top Panel: Membership probability for the stars located within the cluster area (radius $\sim$ 6$^{\prime}$) plotted as a function
of G-band magnitude. Bottom panel: The blue dots show the distribution of
parallax values for the Gaia sources within the cluster as a function of G magnitude. The error bars show respective uncertainties in the parallax values.
The red circles represent the probable cluster members (P$_\mu$ $>$ 80\%). The green squares represent the probable cluster members with
good parallax estimates ($\varpi$/$\sigma$$_\varpi$ $>$ 5).}
\label{fig3}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale = 0.8, trim = 0 0 0 0, clip]{cmd_gaia_crop.pdf}
\includegraphics[scale = 0.8, trim = 0 0 0 0, clip]{yso_cmd_crop.pdf}
\caption{Top Panel: $V$/($V$-$I$) color-magnitude diagram for the stars in the central portion of the cluster ($\sim$ 6$^{\prime}$.5 $\times$ 6$^{\prime}$.5) obtained using the 4K$\times$4K CCD $IMAGER$ observations. The open circles represent Gaia sources with P$_\mu$$>$80\%.
Bottom panel: $V$/($V$ - $I$) color-magnitude diagram for the young stars in the cluster region. The red filled circle represents the
Class~{\sc i} object, magenta squares represent the Class {\sc ii} objects, and blue triangles
represent the Class {\sc iii} objects from Caramazza et al. (2008). Cross symbols represent the optical counterparts of the X-ray sources detected by the $Chandra$
observations (Caramazza et al. 2012) that were not listed in Caramazza et al. (2008). Star symbols represent
the candidate members of the cluster from Caramazza et al. (2008) that were detected in only two $IRAC$ bands and do not exhibit
X-ray emission. The zero-age main sequence (Girardi et al. 2002) and 1, 10 Myr pre-main sequence isochrones (Siess
et al. 2000) corrected for the adopted distance and reddening are also plotted.}
\label{fig4}
\end{figure*}
where n$_c$ and n$_f$ are the normalized numbers of probable cluster members and field stars, respectively.
In Fig. 3 (top panel), we have plotted the estimated membership probability for all the Gaia sources within the cluster radius as a function of G-band magnitude (blue dots).
Gaia sources with high membership probability (P$_\mu$ $>$ 80\%) are shown with red circles. There seems to be a clear separation between the cluster members and field stars toward the brighter part, supporting the effectiveness of this technique. A high membership probability extends down to G $\sim$ 19 mag, whereas toward the fainter limits, the probability gradually decreases.
A majority of the stars with high membership probability follow a tight distribution in the VPD. From the above analysis, we calculated the membership probability of
$\sim$ 950 stars in the cluster region.
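
For concreteness, a minimal sketch of this membership computation is given below. It assumes uncorrelated bivariate Gaussian frequency functions for the cluster and the field, with the peaks and dispersions quoted above; the individual measurement errors (which in practice are added in quadrature to the dispersions) are neglected, and the normalized numbers $n_c$ and $n_f$ are placeholder values.
\begin{verbatim}
# Illustrative sketch (not the exact pipeline): membership probability
# from Gaussian cluster/field frequency distributions in proper-motion
# space, following the form of the equations in Yadav et al. (2013).
import numpy as np

def gauss2d(mux, muy, cx, cy, sx, sy):
    # bivariate Gaussian, no correlation term
    z = ((mux - cx) / sx) ** 2 + ((muy - cy) / sy) ** 2
    return np.exp(-0.5 * z) / (2.0 * np.pi * sx * sy)

def membership_probability(mux, muy, nc=0.3, nf=0.7):
    # cluster: peak and dispersion quoted in the text
    phi_c = gauss2d(mux, muy, -0.30, -1.43, 0.06, 0.06)
    # field: mean and dispersion quoted in the text
    phi_f = gauss2d(mux, muy, 0.84, -2.70, 2.29, 3.76)
    return nc * phi_c / (nc * phi_c + nf * phi_f)

print(membership_probability(-0.3, -1.4))  # near the peak -> ~1
print(membership_probability(3.0, -6.0))   # far from the peak -> ~0
\end{verbatim}
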
We also plotted the parallax for all the Gaia sources as a function of G-band magnitude (dots in Fig. 3, bottom panel). The respective uncertainties in the parallax values are shown with the error bars. The red circles represent sources having membership probability
P$_\mu$$>$ 80\%. We also estimated the cluster distance using the parallax values of the cluster members having high membership probability (P$_\mu$$>$ 80\%) and
good parallax accuracies ($\varpi/\sigma_{\varpi} > 5$). These sources are shown with the square symbols in Fig. 3 (bottom panel). The median parallax value of these sources
is 0.318 $\pm$ 0.054 mas. We estimate the cluster distance after correcting the median parallax value for the known parallax offset of $\sim$ $-0.015$ mas (Stassun \& Torres 2021).
The distance estimate for the cluster using Gaia data comes out to be $\sim$ 3.30 $\pm$ 0.54 kpc, which is in agreement with that reported by Sharma et al. (2007).
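
A minimal sketch of this distance estimate follows; the sign of the zero-point correction is applied so as to reproduce the distance quoted above, and the uncertainty is propagated only to first order.
\begin{verbatim}
plx_med, plx_err = 0.318, 0.054   # median parallax (mas)
offset = -0.015                   # zero-point offset (mas)
plx_corr = plx_med + offset       # corrected parallax, ~0.303 mas
d_kpc = 1.0 / plx_corr            # ~3.30 kpc
d_err = plx_err / plx_corr**2     # first-order propagation, ~0.6 kpc
print(f"d = {d_kpc:.2f} +/- {d_err:.2f} kpc")
\end{verbatim}
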
We also cross-matched the YSOs that have counterparts in our optical photometry to sources in the Gaia catalog. Using a matching radius of $\sim$ 1.2 arcsec, we found $\sim$ 135 Gaia counterparts. These sources are shown with the red triangles in the VPD. Except for a few contaminants, most of them follow the distribution of probable cluster members in the VPD, suggesting that these YSO candidates
are the cluster members.
From the above analysis, it is clear that using the Gaia EDR3 data, our sample of kinematic members of the cluster is restricted to only relatively bright members (G$<$ 19 mag). Therefore, to select the candidate members towards the low-mass
end, we adopted the procedure discussed in the following section.
\subsection{Color-Magnitude Diagram: Constraining Young Nature of the Candidate Members}
The color-magnitude diagram (CMD) is often used to study the mass distribution and evolutionary stages of the stars of a cluster.
In Fig. 4 (top panel), we show the $V$ / ($V$ - $I$) CMD for all the stars in our optical catalog within the cluster region.
The red circles represent the cluster members with high membership probabilities (P$_\mu$ $>$ 80\%). We have also plotted a zero-age main sequence (ZAMS) isochrone (thin blue curve) from Girardi et al. (2002) and 1, 10 Myr pre-main sequence (PMS) isochrones (dashed and thick curves, respectively) from Siess et al. (2000). In the present work, we adopted a distance of $\sim$ 3.25 kpc
and reddening E($B$ - $V$) of 0.4 mag for the cluster (Sharma et al. 2007). All the isochrones are corrected for the
adopted distance and reddening. The positions of the probable cluster members (shown with red circles in the top panel of Fig. 4) in the CMD indicate that most of them are young. As can be seen, there is a clear color gap in the cluster CMD (along the 10 Myr isochrone) that seems to separate a young stellar population
(toward the right side in the CMD) from the likely field stars toward the cluster direction. A similar
separation of PMS population on the CMD is also observed by Panwar et al. (2018) for the young cluster
Berkeley 59.
There may be significant contamination by the field population along the line of sight (e.g., Sung \& Bessell 2010; Sung et al. 2013) that can also contaminate our CMD. Therefore, to further examine the distribution of our proposed cluster population, we used the young stars selected based on the X-ray and IR emission properties from Caramazza et al. (2008).
We constructed the $V$ / ($V$ - $I$) CMD for the {\it Spitzer}-IRAC and Chandra X-ray identified stars (see Fig. 4, bottom panel). In Fig. 4 (bottom panel), the red filled circle shows the location of the Class~{\sc i} object, magenta squares represent the Class {\sc ii} objects, and blue triangles
represent the Class {\sc iii} objects from Caramazza et al. (2008).
By examining the locations of these YSOs in the CMD, we found that most of the YSOs are
located to the right of the 10 Myr PMS isochrone of Siess et al. (2000),
confirming that the contamination due to field stars in the PMS population of the cluster region is insignificant.
A majority of the
YSOs selected based on the IR and X-ray properties by Caramazza et al. (2008) are young low-mass stars.
To examine the evolutionary status of the ambiguous candidate members (without 5.8 and 8.0 $\mu$m photometry) of the
Caramazza et al. (2008) and X-ray sources from Caramazza et al. (2012), we placed these sources on the optical CMD.
In Fig. 4 (bottom panel), cross symbols represent the optical counterparts of the X-ray sources detected by the $Chandra$
observations (Caramazza et al. 2012) that were not cataloged by Caramazza et al. (2008) as YSOs. In contrast, the star symbols represent the ambiguous candidate members of NGC~1893 cataloged
in Table 4 by Caramazza et al. (2008). The positions of these sources on the optical CMD suggest that a majority of them are young
low-mass stars. The positions of these young stars, along with their $V$ and $I$ band magnitudes, are listed in Table 1.
Comparing the distribution of all stars with the {\it Spitzer} and X-ray identified
YSOs in the $V$/($V$ - $I$) CMD, we suggest that the stars located toward the right of the 10 Myr isochrone may belong to the PMS population of the cluster NGC~1893.
However, deep optical observations of a nearby control field or spectroscopic observations of the PMS population are necessary for the confirmation.
\begin{figure}
\centering
\includegraphics[scale = 0.64, trim = 0 0 0 0, clip]{yso_mf_crop.pdf}
\caption{The MF of the YSOs in the mass range $0.2 \le M/M_\odot \le 2.5$, derived from the optical data. The error bars represent $\pm\sqrt{N}$ errors. The dashed line shows the least-squares fit to the mass range described in the text. The value of the slope obtained is given in the figure.}
\label{fig5}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale = 0.56, trim = 0 0 0 0, clip]{mass_dis_crop.pdf}
\includegraphics[scale = 0.56, trim = 0 0 0 0, clip]{cumu_crop.pdf}
\caption{Left panel: Spatial distribution of the YSO candidates within the 3 arcmin radius around the cluster center (shown with green star symbol). Right panel: Cumulative radial distribution of the young stars
in two mass bins.}
\label{fig6}
\end{figure*}
\subsection{Mass Function of the Young Stars}
The distribution of stellar masses in a star cluster at the time of its formation is termed the IMF of that star cluster.
Young star clusters (age $<$ 10 Myr) are particularly suited for IMF studies: they are too young to lose a significant number of members
either to dynamical evolution or to stellar evolution. Hence, their MFs can be considered IMFs. The variation of the MF
is an important tool that gives clues
to the physical conditions of star formation processes (e.g., Bate 2009). The MF is defined as the number of stars per unit logarithmic mass interval,
and is generally represented by a power law with a slope,
$\Gamma = d\log N(\log m)/d\log m$,
where $N(\log m)$ is the number of stars per unit logarithmic mass interval. Observational results for most of the star clusters in the solar neighbourhood suggest MF slopes similar to that
given by Salpeter (1955), that is, $\Gamma = -1.35$.
Here, we used the optical CMD to count the number of stars in different mass bins, which is shown in the lower panel of Fig. 4 along with isochrones and evolutionary tracks.
Since our YSO sample is complete down to 0.2 M$_\odot$, for the MF study we have taken only those YSOs that have masses in the range of
$0.2 \le M/M_\odot \le 2.5$. The mass distribution of our YSO sample has a best-fitting slope, $\Gamma = -1.43 \pm 0.15$ (see Fig. 5), similar to the Salpeter value.
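
As an illustration of the fitting procedure, the following sketch recovers a Salpeter-like slope from a synthetic mass sample; the mass range mirrors the one adopted above, while the binning and the input masses are placeholders for the actual CMD-based values.
\begin{verbatim}
import numpy as np

def mf_slope(masses, nbins=6, mmin=0.2, mmax=2.5):
    edges = np.logspace(np.log10(mmin), np.log10(mmax), nbins + 1)
    counts, _ = np.histogram(masses, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    keep = counts > 0
    # Gamma = d log N(log m) / d log m from a least-squares line
    slope, _ = np.polyfit(np.log10(centers[keep]),
                          np.log10(counts[keep]), 1)
    return slope

rng = np.random.default_rng(0)
u = rng.uniform(size=2000)
# inverse-CDF draw from a Salpeter-like power law, illustration only
masses = (0.2**-1.35 + u * (2.5**-1.35 - 0.2**-1.35))**(-1 / 1.35)
print(mf_slope(masses))   # should recover ~ -1.35
\end{verbatim}
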
Our YSO MF appears consistent with those reported for other active star-forming regions, for example, the MF of a YSO sample
(with masses $>$0.2 M$_\odot$) derived by Erickson et al. (2011), the YSO MF slopes of the star-forming regions W51 A
($\Gamma$ = -1.17 $\pm$ 0.26) and W51 B ($\Gamma$ = -1.32 $\pm$ 0.26) by Kang et al. (2009)
and the YSO MF slope of the young cluster IC~1805 ($\Gamma$ = -1.23 $\pm$ 0.14) derived by Panwar et al. (2017).
The MF slope below $\sim$1 M$_\odot$ shows a flattening. It has been pointed out by many groups
(see e.g., Kroupa 2002; Chabrier 2003, 2005; Ojha et al. 2009) that for the masses above $\sim$1 M$_\odot$, the MF can generally be approximated by a declining power law with a slope similar to the Salpeter, whereas for the masses below $\sim$1 M$_\odot$ the distribution becomes flatter,
and turns down at the lowest stellar masses.
\subsection{Indication for the Mass Segregation}
Although there are extensive studies on mass segregation in star clusters, only a few have focused on mass segregation among low-mass stars (e.g., Andersen et al. 2011; Panwar et al. 2018). Sharma et al. (2007) studied the mass segregation in the young cluster NGC 1893 in the mass range $5.5 \le M/M_\odot \le 17.7$
and found that the high-mass stars are more centrally concentrated than their lower-mass siblings. In the present work, we investigated the mass segregation in low-mass stars
by dividing our low-mass young stars located within the 3 arcmin radius around the cluster center into two mass groups, $0.2 \le M/M_\odot \le 0.5$ and
$0.5 \le M/M_\odot \le 2.5$. Fig. 6 (left panel) shows the spatial distribution of the stars from these two groups. Toward the center of the cluster, most of the
YSOs seem to be relatively massive. The cumulative distribution of the young stars as a function of radial distance from the cluster center in the two mass groups (see Fig. 6, right panel)
also suggests that the more massive stars ($0.5 \le M/M_\odot \le 2.5$) tend to lie toward the cluster center, which indicates mass segregation in the central portion of the cluster region. As the estimated relaxation time for the cluster is very large compared to its age, the
observed mass segregation in the cluster may be primordial in nature.
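
The visual impression from Fig. 6 can be quantified with a two-sample Kolmogorov--Smirnov test on the radial distributions of the two mass groups; a minimal sketch, with synthetic radii standing in for the measured ones, is given below.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# placeholders: a centrally concentrated high-mass group and a more
# extended low-mass group, both within 3 arcmin
r_hi = 3.0 * rng.power(1.5, 200)
r_lo = 3.0 * rng.power(2.5, 200)
stat, pval = ks_2samp(r_hi, r_lo)
print(stat, pval)   # a small p-value -> the two distributions differ
\end{verbatim}
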
\begin{table*}
\caption{Positions and photometric magnitudes of new YSO candidates.}
\small
\begin{tabular}{ccccc}
\hline
$Id$ &$\alpha_{(2000)}$ & $\delta_{(2000)}$ & V $\pm$ eV& I $\pm$ eI\\
\hline
1 & 80.65629 & 33.36719 & 24.45 $\pm$ 0.18 & 20.76 $\pm$ 0.04\\
2 & 80.73150 & 33.37947 & 23.53 $\pm$ 0.08 & 20.52 $\pm$ 0.04\\
3 & 80.65462 & 33.38117 & 24.93 $\pm$ 0.31 & 21.38 $\pm$ 0.11\\
4 & 80.65742 & 33.37886 & 23.75 $\pm$ 0.10 & 20.46 $\pm$ 0.03\\
5 & 80.73238 & 33.42503 & 23.08 $\pm$ 0.06 & 19.87 $\pm$ 0.02\\
6 & 80.64283 & 33.38447 & 22.43 $\pm$ 0.03 & 19.07 $\pm$ 0.02\\
7 & 80.75638 & 33.42536 & 22.43 $\pm$ 0.03 & 19.37 $\pm$ 0.02\\
8 & 80.69583 & 33.39625 & 21.77 $\pm$ 0.02 & 18.95 $\pm$ 0.01\\
9 & 80.65854 & 33.40050 & 23.63 $\pm$ 0.09 & 20.19 $\pm$ 0.03\\
10 & 80.74296 & 33.37753 & 22.15 $\pm$ 0.03 & 19.74 $\pm$ 0.02\\
11 & 80.69575 & 33.43133 & 22.75 $\pm$ 0.05 & 19.63 $\pm$ 0.02\\
12 & 80.70321 & 33.42994 & 22.88 $\pm$ 0.04 & 19.74 $\pm$ 0.02\\
13 & 80.64454 & 33.43739 & 21.73 $\pm$ 0.02 & 19.20 $\pm$ 0.02\\
14 & 80.71604 & 33.44136 & 24.32 $\pm$ 0.15 & 20.67 $\pm$ 0.03\\
15 & 80.72604 & 33.42392 & 23.23 $\pm$ 0.06 & 20.16 $\pm$ 0.02\\
16 & 80.74096 & 33.45031 & 23.83 $\pm$ 0.10 & 20.55 $\pm$ 0.04\\
17 & 80.69533 & 33.36542 & 22.09 $\pm$ 0.02 & 19.37 $\pm$ 0.02\\
18 & 80.70000 & 33.44533 & 20.81 $\pm$ 0.01 & 18.57 $\pm$ 0.01\\
19 & 80.69396 & 33.44772 & 22.76 $\pm$ 0.04 & 19.55 $\pm$ 0.02\\
20 & 80.68996 & 33.44886 & 21.20 $\pm$ 0.02 & 18.72 $\pm$ 0.01\\
21 & 80.73087 & 33.44939 & 24.11 $\pm$ 0.13 & 20.48 $\pm$ 0.03\\
22 & 80.72146 & 33.45006 & 22.24 $\pm$ 0.03 & 19.38 $\pm$ 0.02\\
23 & 80.65863 & 33.45064 & 23.40 $\pm$ 0.08 & 20.58 $\pm$ 0.04\\
24 & 80.75554 & 33.46075 & 22.32 $\pm$ 0.03 & 19.85 $\pm$ 0.02\\
25 & 80.67600 & 33.45364 & 23.09 $\pm$ 0.06 & 19.93 $\pm$ 0.03\\
26 & 80.72067 & 33.46458 & 23.43 $\pm$ 0.07 & 20.06 $\pm$ 0.03\\
27 & 80.68438 & 33.41661 & 20.44 $\pm$ 0.01 & 18.11 $\pm$ 0.01\\
28 & 80.69267 & 33.45094 & 23.02 $\pm$ 0.06 & 19.63 $\pm$ 0.02\\
29 & 80.72354 & 33.42625 & 23.71 $\pm$ 0.10 & 20.21 $\pm$ 0.03\\
30 & 80.71304 & 33.46053 & 23.90 $\pm$ 0.14 & 20.32 $\pm$ 0.03\\
31 & 80.63804 & 33.46564 & 21.61 $\pm$ 0.03 & 19.20 $\pm$ 0.02\\
32 & 80.69742 & 33.46828 & 23.49 $\pm$ 0.11 & 20.16 $\pm$ 0.03\\
33 & 80.68633 & 33.38081 & 24.43 $\pm$ 0.18 & 21.27 $\pm$ 0.07\\
34 & 80.72071 & 33.38917 & 23.47 $\pm$ 0.08 & 20.05 $\pm$ 0.03\\
35 & 80.68033 & 33.40683 & 24.32 $\pm$ 0.19 & 20.39 $\pm$ 0.03\\
36 & 80.71533 & 33.40808 & 24.06 $\pm$ 0.13 & 20.41 $\pm$ 0.03\\
37 & 80.73371 & 33.41489 & 22.19 $\pm$ 0.02 & 19.08 $\pm$ 0.01\\
38 & 80.70925 & 33.40958 & 23.03 $\pm$ 0.06 & 19.89 $\pm$ 0.02\\
39 & 80.71037 & 33.41331 & 23.66 $\pm$ 0.10 & 20.55 $\pm$ 0.03\\
40 & 80.67242 & 33.42939 & 21.02 $\pm$ 0.02 & 18.50 $\pm$ 0.01\\
41 & 80.69050 & 33.43528 & 23.31 $\pm$ 0.07 & 20.49 $\pm$ 0.04\\
42 & 80.68492 & 33.43072 & 23.11 $\pm$ 0.07 & 19.88 $\pm$ 0.02\\
43 & 80.73079 & 33.44306 & 23.13 $\pm$ 0.06 & 19.94 $\pm$ 0.02\\
44 & 80.70250 & 33.44439 & 21.86 $\pm$ 0.02 & 19.06 $\pm$ 0.01\\
45 & 80.65404 & 33.37956 & 23.95 $\pm$ 0.14 & 20.82 $\pm$ 0.05\\
46 & 80.71221 & 33.36967 & 19.96 $\pm$ 0.01 & 17.82 $\pm$ 0.01\\
47 & 80.70875 & 33.44600 & 21.99 $\pm$ 0.06 & 19.00 $\pm$ 0.02\\
48 & 80.70008 & 33.36347 & 18.88 $\pm$ 0.01 & 16.98 $\pm$ 0.01\\
49 & 80.70225 & 33.40778 & 19.80 $\pm$ 0.01 & 17.59 $\pm$ 0.01\\
50 & 80.67783 & 33.41528 & 22.42 $\pm$ 0.04 & 19.57 $\pm$ 0.02\\
51 & 80.73717 & 33.44658 & 22.15 $\pm$ 0.03 & 19.26 $\pm$ 0.02\\
52 & 80.68675 & 33.41978 & 22.42 $\pm$ 0.04 & 19.73 $\pm$ 0.02\\
53 & 80.68450 & 33.41511 & 21.30 $\pm$ 0.02 & 18.81 $\pm$ 0.01\\
54 & 80.69863 & 33.44711 & 20.77 $\pm$ 0.01 & 18.13 $\pm$ 0.01\\
55 & 80.74375 & 33.44139 & 23.61 $\pm$ 0.08 & 20.66 $\pm$ 0.04\\
\hline
\end{tabular}
\begin{tabular}{ccccc}
\hline
$Id$ &$\alpha_{(2000)}$ & $\delta_{(2000)}$ & V $\pm$ eV& I $\pm$ eI\\
\hline
56 & 80.69458 & 33.39606 & 22.87 $\pm$ 0.05 & 19.85 $\pm$ 0.02\\
57 & 80.71696 & 33.44383 & 23.10 $\pm$ 0.05 & 20.21 $\pm$ 0.03\\
58 & 80.72688 & 33.43981 & 22.68 $\pm$ 0.05 & 20.06 $\pm$ 0.03\\
59 & 80.70908 & 33.44658 & 18.97 $\pm$ 0.01 & 17.41 $\pm$ 0.01\\
60 & 80.73842 & 33.44344 & 21.17 $\pm$ 0.02 & 18.72 $\pm$ 0.01\\
61 & 80.68525 & 33.42400 & 23.06 $\pm$ 0.05 & 20.10 $\pm$ 0.03\\
62 & 80.68050 & 33.44478 & 22.40 $\pm$ 0.03 & 19.52 $\pm$ 0.02\\
63 & 80.68983 & 33.41750 & 21.68 $\pm$ 0.02 & 18.75 $\pm$ 0.01\\
64 & 80.69846 & 33.41500 & 21.70 $\pm$ 0.02 & 18.97 $\pm$ 0.01\\
65 & 80.70829 & 33.41067 & 20.86 $\pm$ 0.02 & 18.37 $\pm$ 0.01\\
66 & 80.70017 & 33.41553 & 20.31 $\pm$ 0.01 & 18.14 $\pm$ 0.01\\
67 & 80.69579 & 33.42458 & 20.04 $\pm$ 0.01 & 17.94 $\pm$ 0.01\\
68 & 80.70346 & 33.42611 & 19.88 $\pm$ 0.01 & 17.81 $\pm$ 0.01\\
69 & 80.71450 & 33.41242 & 20.53 $\pm$ 0.01 & 18.42 $\pm$ 0.01\\
70 & 80.68837 & 33.40383 & 22.59 $\pm$ 0.07 & 19.41 $\pm$ 0.02\\
71 & 80.69200 & 33.43078 & 19.43 $\pm$ 0.02 & 17.53 $\pm$ 0.01\\
72 & 80.71504 & 33.40528 & 21.51 $\pm$ 0.02 & 19.08 $\pm$ 0.01\\
73 & 80.70096 & 33.41422 & 22.14 $\pm$ 0.03 & 19.11 $\pm$ 0.01\\
74 & 80.71096 & 33.44517 & 20.65 $\pm$ 0.01 & 18.41 $\pm$ 0.01\\
75 & 80.72900 & 33.42025 & 20.06 $\pm$ 0.01 & 17.91 $\pm$ 0.01\\
76 & 80.69771 & 33.42211 & 19.71 $\pm$ 0.01 & 17.65 $\pm$ 0.01\\
77 & 80.68375 & 33.39275 & 22.75 $\pm$ 0.04 & 20.14 $\pm$ 0.03\\
78 & 80.70804 & 33.42825 & 17.89 $\pm$ 0.01 & 16.35 $\pm$ 0.01\\
79 & 80.70304 & 33.37025 & 21.33 $\pm$ 0.02 & 18.85 $\pm$ 0.01\\
80 & 80.70808 & 33.44975 & 19.61 $\pm$ 0.02 & 17.50 $\pm$ 0.01\\
81 & 80.75825 & 33.45269 & 23.20 $\pm$ 0.07 & 20.34 $\pm$ 0.03\\
82 & 80.70363 & 33.46756 & 22.68 $\pm$ 0.05 & 19.91 $\pm$ 0.02\\
83 & 80.68200 & 33.37367 & 23.22 $\pm$ 0.07 & 19.90 $\pm$ 0.02\\
84 & 80.68908 & 33.39406 & 23.13 $\pm$ 0.06 & 20.14 $\pm$ 0.03\\
85 & 80.69333 & 33.39747 & 21.93 $\pm$ 0.02 & 18.92 $\pm$ 0.01\\
86 & 80.74046 & 33.45139 & 24.93 $\pm$ 0.27 & 21.40 $\pm$ 0.07\\
87 & 80.68050 & 33.45406 & 22.89 $\pm$ 0.05 & 20.09 $\pm$ 0.03\\
88 & 80.69779 & 33.45925 & 22.97 $\pm$ 0.06 & 20.06 $\pm$ 0.02\\
89 & 80.74229 & 33.44472 & 18.74 $\pm$ 0.04 & 17.00 $\pm$ 0.01\\
90 & 80.69917 & 33.43533 & 22.44 $\pm$ 0.04 & 19.44 $\pm$ 0.05\\
91 & 80.71362 & 33.41425 & 22.92 $\pm$ 0.05 & 20.22 $\pm$ 0.03\\
92 & 80.74192 & 33.43458 & 21.00 $\pm$ 0.02 & 18.70 $\pm$ 0.02\\
93 & 80.69258 & 33.42667 & 20.86 $\pm$ 0.04 & 18.57 $\pm$ 0.01\\
94 & 80.71742 & 33.43439 & 22.40 $\pm$ 0.04 & 19.73 $\pm$ 0.02\\
95 & 80.71613 & 33.42244 & 22.28 $\pm$ 0.03 & 19.74 $\pm$ 0.02\\
96 & 80.70333 & 33.43089 & 20.87 $\pm$ 0.01 & 18.27 $\pm$ 0.01\\
97 & 80.73067 & 33.43503 & 21.46 $\pm$ 0.02 & 18.96 $\pm$ 0.01\\
98 & 80.67933 & 33.44217 & 19.81 $\pm$ 0.01 & 17.87 $\pm$ 0.01\\
99 & 80.74229 & 33.44506 & 20.02 $\pm$ 0.02 & 17.86 $\pm$ 0.01\\
100 & 80.69483 & 33.43578 & 23.22 $\pm$ 0.06 & 20.21 $\pm$ 0.03\\
101 & 80.69554 & 33.44658 & 24.25 $\pm$ 0.19 & 20.62 $\pm$ 0.04\\
102 & 80.67579 & 33.46319 & 23.33 $\pm$ 0.09 & 20.53 $\pm$ 0.04\\
103 & 80.65292 & 33.43331 & 22.72 $\pm$ 0.05 & 19.25 $\pm$ 0.02\\
104 & 80.69171 & 33.42394 & 18.96 $\pm$ 0.02 & 16.90 $\pm$ 0.01\\
105 & 80.68162 & 33.44603 & 20.76 $\pm$ 0.01 & 18.30 $\pm$ 0.01\\
106 & 80.68975 & 33.42769 & 18.88 $\pm$ 0.01 & 16.98 $\pm$ 0.01\\
107 & 80.71637 & 33.44664 & 22.21 $\pm$ 0.04 & 19.52 $\pm$ 0.02\\
108 & 80.71179 & 33.42675 & 21.53 $\pm$ 0.05 & 18.89 $\pm$ 0.01\\
109 & 80.69796 & 33.42717 & 22.94 $\pm$ 0.05 & 19.77 $\pm$ 0.02\\
110 & 80.69467 & 33.42525 & 20.74 $\pm$ 0.02 & 18.24 $\pm$ 0.01\\
\hline
\end{tabular}
\label{tab1}
\end{table*}
\section{Summary \& conclusions}
Here, we present the results of our deep optical ($VI$) observations of the central portion of the cluster NGC~1893. Thanks to the excellent observing conditions and the spatial resolution of the 4K$\times$4K CCD $IMAGER$,
the present data are $\sim$ 3 mag deeper than those of most previous studies.
\begin{itemize}
\item Considering a distance of $\sim$ 3.25 kpc and reddening E($B$ - $V$)
of $\sim$ 0.4 mag for the cluster, our optical data are deep enough to reveal the stars below $\sim$ 0.2 M$_\odot$.
\item We found counterparts of $\sim$ 450 YSOs, candidate members and X-ray sources in our optical catalog.
\item We estimated the membership probability of the stars in the cluster region using the Gaia EDR3. The parallax values of the stars with high membership probability and good parallax measurements
($\varpi/\sigma_{\varpi} > 5$) are used to estimate the cluster distance. The estimated distance of $\sim$ 3.30 kpc is in good agreement with the photometric distance estimate for the cluster reported by Sharma et al. (2007).
\item The locations of the young stellar candidates in the
$V$/($V$ - $I$) CMD show that most of them have ages $<$ 10 Myr. Comparing the CMDs for all stars and young members in the cluster region,
we suggest that the contribution of field stars in the PMS zone of the cluster CMD is insignificant and that the young stellar population of the cluster region can be identified using the CMD.
\item We also found that most of the candidate members having no classification flag in Caramazza et al. (2008) and X-ray sources in Caramazza et al. (2012) also occupy the locations
of PMS stars in the CMD.
\item Based on the present optical observations, we identified $\sim$ 425 young stars in the central portion of the cluster NGC~1893, of which $\sim$ 110 are new. Our results also demonstrate that the present optical $VI$ observations with the 4K$\times$4K CCD $IMAGER$ reveal faint low-mass stars ($V$ $\sim$ 25 mag at 60\% reflectivity of the primary mirror) in the cluster NGC~1893.
\item The MF of our YSO sample has a power-law index
of $-1.43 \pm 0.15$, close to the Salpeter value ($-1.35$) and to those reported for other star-forming regions. The spatial distribution of the YSOs as a function of mass suggests that,
toward the cluster center, most of the stars are relatively massive.
\end{itemize}
\section*{Acknowledgements}
We thank the referee for valuable comments that significantly improved the manuscript. We are thankful to the DTAC and staff of the 3.6-m DOT which is operated by ARIES.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (https://www.cosmos.esa.int/gaia), processed by the {\it Gaia}
Data Processing and Analysis Consortium. Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
\vspace{-1em}
\begin{theunbibliography}{}
\vspace{-1.5em}
\bibitem{latexcompanion}
Andr{\'e}, P. 1995, Ap\&SS, 224, 29
\bibitem{latexcompanion}
Andersen M., Meyer M. R., Robberto M., Bergeron L. E. \& Reid N. 2011, A\&A,534, A10
\bibitem{latexcompanion}
Bate M. R. 2009, MNRAS, 392, 1363
\bibitem{latexcompanion}
Blitz L., Fich M. \& Stark A. A. 1982, ApJS, 49, 183
\bibitem{latexcompanion}
Caramazza M., Micela G., Prisinzano L., Rebull L., Sciortino S., Stauffer J. R. 2008, A\&A, 488, 211
\bibitem{latexcompanion}
Caramazza M. et al., 2012, A\&A, 539, A74
\bibitem{latexcompanion}
Chabrier G. 2003, PASP, 115, 763
\bibitem{latexcompanion}
Chabrier G. 2005, in Corbelli E., Palla F., Zinnecker H., eds, Astrophysics and Space Science Library, Vol. 327, The Initial Mass Function 50 Years Later. Springer, Dordrecht, p. 41
\bibitem{latexcompanion}
Feigelson E. D. \& Montmerle T. 1999, ARA\&A, 37, 363
\bibitem{latexcompanion}
Gaia Collaboration 2020, VizieR Online Data Catalog, I/350
\bibitem{latexcompanion}
Getman K. V., Feigelson E. D., Sicilia-Aguilar A., Broos P. S., Kuhn M. A. \& Garmire G. P. 2012, MNRAS, 426, 2917
\bibitem{latexcompanion}
Girard T. M., Grundy W. M., Lopez C. E. \& Van Altena W. F. 1989, AJ, 98, 227
\bibitem{latexcompanion}
Girardi L. et al. 2002, A\&A, 391, 195
\bibitem{latexcompanion}
Kroupa P. 2002, Science, 295, 82
\bibitem{latexcompanion}
Kuhn M. A., Getman K. V., Feigelson E. D. 2015, ApJ, 802, 60
\bibitem{latexcompanion}
Lada C. J. \& Lada E. A. 2003, ARA\&A, 41, 57
\bibitem{latexcompanion}
Lada C. J., Muench A. A., Luhman, K. L., et al. 2006, AJ, 131, 1574
\bibitem{latexcompanion}
Lata et al. 2014, MNRAS, 442, 273
\bibitem{latexcompanion}
Lim B., Sung H., Kim J. S., Bessell M. S., Park B. G. 2014, MNRAS, 443, 454
\bibitem{latexcompanion}
Luhman K. L. 2012, ARA\&A, 50, 65
\bibitem{latexcompanion}
Maheswar G., Sharma S., Biman J. M., Pandey A. K., \& Bhatt, H. C. 2007, MNRAS, 379, 123
\bibitem{latexcompanion}
Marco A. \& Negueruela I. 2002, A\&A, 393, 195
\bibitem{latexcompanion}
Ojha D. K., Tamura M., Nakajima Y. et al. 2009, ApJ, 693, 634
\bibitem{latexcompanion}
Pandey A. K., Samal M. R., Chauhan N., Eswaraiah C., Pandey J. C., Chen W. P., Ojha D. K. 2013, New Astron., 19, 1
\bibitem{latexcompanion}
Pandey R., Sharma S., Panwar N. et al. 2020, ApJ, 891, 81
\bibitem{latexcompanion}
Pandey S. B., Yadav R. K. S., Nanjappa N., Yadav S., Krishna Reddy B., Sahu S., Srinivasan R. 2018, Bull. Soc. Roy. Sci. Liege (BSRSL),
87, 42
\bibitem{latexcompanion}
Panwar N., Samal M. R., Pandey A. K., Jose J., Chen W. P. et al. 2017, MNRAS, 468, 2684
\bibitem{latexcompanion}
Panwar N., Pandey A. K., Samal M. R. et al. 2018, AJ, 155, 44
\bibitem{latexcompanion}
Prisinzano L., Sanz-Forcada J., Micela G., Caramazza M., Guarcello M. G., Sciortino S., Testi L. 2011, A\&A, 527, 19
\bibitem{latexcompanion}
Salpeter E. E. 1955, ApJ, 121, 161
\bibitem{latexcompanion}
Stassun K. G. \& Torres G. 2021, ApJ, 907, L33
\bibitem{latexcompanion}
Sharma S., Pandey A. K., Ojha D. K., Chen W. P., Ghosh S. K., Bhatt B. C.,Maheswar G., Sagar R. 2007, MNRAS, 380, 1141
\bibitem{latexcompanion}
Siess L., Dufour E., Forestini M. 2000, A\&A, 358, 593
\bibitem{latexcompanion}
Yadav R. K. S., Sariya D. P. \& Sagar R. 2013, MNRAS, 430, 3350
\end{theunbibliography}
\end{document}
\section{Preliminaries}\label{sec:maths}
\subfile{sections/02_maths_basic}
\subfile{sections/03_maths_adp}
\subfile{sections/04_maths_safetymod}
\subfile{sections/05_maths_actorcritic}
\subfile{sections/06_maths_ident}
\subfile{sections/07_maths_stability}
\subfile{sections/08_sim}
\subfile{sections/09_conclusion}
\addtolength{\textheight}{-12cm}
\bibliographystyle{jabbrv_IEEEtran}
\section{Simulation Results}%
\label{sec:sim}
To test the efficacy of the proposed control law, we perform a simulation study
on a class of nonlinear Euler-Lagrangian systems
\begin{equation}
\label{eq:ELsys}
\begin{aligned}
M(q)\ddot{q} + C_{m}(q,\dot{q}) \dot{q} + G(q)+F_{d}(\dot{q}) = \tau(t)
\end{aligned}
\end{equation}
Specifically, we consider the safe, optimal control problem for a two-link robot
manipulator system
\begin{equation*}
\begin{aligned}
M(q) &= \begin{bmatrix}
p_1 + 2 p_3 c_2 & p_2 + p_3 c_2 \\
p_2 + p_3 c_2 & p_2 \\
\end{bmatrix} \; , \;
F_{d}(\dot{q}) = \begin{bmatrix}
f_{d_1} \dot{q}_1 \\ f_{d_2} \dot{q}_2
\end{bmatrix} \\
C_{m}(q,\dot{q}) &= \begin{bmatrix}
-p_3 s_2 \dot{q}_2 & -p_3 s_2 (\dot{q}_1 + \dot{q}_2) \\
p_3 s_2 \dot{q}_1 & 0
\end{bmatrix} \; ,\; G(q) = 0_{2 \times 1}
\end{aligned}
\end{equation*}
where the signals $q_{1}(t),q_{2}(t) \in \Re$ denote the angular position of
the two link joints in radians. The parameters used for the simulation are
$p_1 = \SI{3.473}{kg.m} $,
$p_2 = \SI{0.196}{kg.m} $,
$p_3 = \SI{0.242}{kg.m} $,
$f_{d_{1}} = \SI{5.3}{N.s} $,
$f_{d_{2}} = \SI{1.1}{N.s} $.
The system is then reformulated to the control affine form given in
\eqref{eq:systemDynamics} by defining the system state as
$x = [q_{1},q_{2},\dot{q}_{1},\dot{q}_{2}]^{T}$ and the control action as $u = \tau$. We seek to solve the optimal
control problem (Problem \ref{prob:constrained}) with the
cost function components
$Q(x) = x^{T}x$ and $R = \mathbb{I}_{2\times2}$, and the state constraint set
$\mathcal{C} = \{x \in \Re^{4}: |x_{i}|< a_{i}\;\forall i \in \{1,2,3,4\} \}$
\footnote{We consider rectangular constraints for the ease of visualization.
The proposed method can be easily extended to consider other types of state constraints.}.
We consider the following candidate Barrier
Lyapunov Function: $B_{f} = \sum_{i = 1}^{n} \log{\frac{a_{i}^{2}}{a_{i}^{2} - x_{i}^{2}}}$.
For the given two-link robot manipulator system, we have considered
$a_{i} = 5 \;\forall i \in \{1,2,3,4\}$.
We observe that there exists a $\gamma = 5$ that satisfies the condition
$\gamma \norm{\nabla B_f} > B_{f}$.
The Critic NN and the Identifier NN were considered to be two-layer
NNs with sigmoidal activation function and hidden layer consisting of 30 and
5 neurons respectively.
The gains for the actor-critic components were chosen as $\eta_{c} = 2$ ,
$\eta_{a1} = 1$ and $\eta_{a2} = 50$. The forgetting factor $\beta = 0.001$ and
the multiplier $\nu = 5$. The Lagrangian multiplier $\lambda$ was set to 100 to
ensure that the value of the bound $\overline{B}_{d}$ is of a reasonable magnitude. For identifier, we chose the gains
$\Gamma_{wf}= 10 \mathbb{I}_{l\times l}$ ,
$\Gamma_{vf}= 10 \mathbb{I}_{n \times n}$. The
identifier feedback gain was set to $k = 10$.
The covariance matrix was initialized to $\Gamma(0) = \mathbb{I}_{p \times p}$
and all the NN weights were initialized in the range of $[-1,1]$
with a uniform probability distribution.
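
For reference, a minimal sketch of the control-affine reformulation used here is given below; it evaluates $\dot{x} = f(x) + g(x)u$ with $f = [\dot{q};\, -M^{-1}(C_{m}\dot{q} + F_{d})]$ and $g = [0_{2\times2};\, M^{-1}]$ (stacked block notation) for the parameter values listed above.
\begin{verbatim}
import numpy as np

p1, p2, p3 = 3.473, 0.196, 0.242
fd1, fd2 = 5.3, 1.1

def dynamics(x, u):
    # x = [q1, q2, dq1, dq2], u = tau
    q2, dq = x[1], x[2:]
    c2, s2 = np.cos(q2), np.sin(q2)
    M = np.array([[p1 + 2*p3*c2, p2 + p3*c2],
                  [p2 + p3*c2,   p2        ]])
    C = np.array([[-p3*s2*dq[1], -p3*s2*(dq[0] + dq[1])],
                  [ p3*s2*dq[0],  0.0                  ]])
    F = np.array([fd1*dq[0], fd2*dq[1]])
    Minv = np.linalg.inv(M)
    f = np.concatenate([dq, -Minv @ (C @ dq + F)])
    g = np.vstack([np.zeros((2, 2)), Minv])
    return f + g @ u
\end{verbatim}
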
\begin{figure}[htpb]
\centering
\subfloat[State Trajectory\label{fig:state}]{\ifig{0.5}{state}}
\hfill
\subfloat[Control Effort\label{fig:u}]{\ifig{0.5}{u}} \\
\subfloat[Identifier Estimation Error\label{fig:error}]{\ifig{0.5}{error}}
\hfill
\subfloat[Comparison with Unconstrained ACI\cite{bhasin2013Automatica}\label{fig:comp}]{\ifig{0.5}{comp}}
\caption{Simulation Plots of our proposed method}
\end{figure}
Fig. \ref{fig:state} shows the state trajectory of the system under the influence
of the proposed control law. We observe that all of the states
are inside the prescribed limit shown in red dotted lines. Additionally, the state remains uniformly
ultimately bounded. Fig. \ref{fig:u} shows the control effort imposed by the
controller. Fig. \ref{fig:error} shows the estimation error of the identifier.
It can be seen that the estimation error converges very close to zero.
Fig. \ref{fig:comp} compares the performance of the proposed method with that
of the ACI method \cite{bhasin2013Automatica} during the initial 2 seconds of
training. The hyper-parameters for the algorithm outlined in
\cite{bhasin2013Automatica} were taken in orders of magnitude similar to those
detailed in that article, so that the two results can be compared fairly.
We observe that while the ACI method initially violates the safety criterion,
the proposed method manages to keep the states well within the boundaries of the
safe set, highlighting the transient safety guarantees of the proposed method.
\end{document}
\subsection{Approximate Dynamic Programming}%
\label{subsec:dp}
In the theory of Dynamic Programming, the optimal value function is defined
as
\begin{align}
V^{*}(x(t)) = \min_{\substack{u(\tau) \\ t \le \tau < \infty}} \int_{t}^{\infty} r(x(s),u(s)) ds
\end{align}
The Hamiltonian of the system is defined as follows
\begin{equation}
H(x,u,\nabla V) \triangleq r(x,u) + \nabla V^{T} (f(x) + g(x) u)
\label{eq:perfectHamiltonian}
\end{equation}
We obtain the optimal control law $u^{*}(x)$ for the unconstrained optimal control problem
in \eqref{eq:objective} by minimizing the Hamiltonian w.r.t. the control action $u$
\begin{equation}
u^{*}(x) = \argmin_{u} H(x,u,\nabla V^{*}) = - \frac{1}{2} R^{-1} g^{T} \nabla V^{*}
\label{eq:optimalControlLaw}
\end{equation}
Under the optimal control law in \eqref{eq:optimalControlLaw}, the value of the
Hamiltonian is identically zero, leading to the
Hamilton-Jacobi-Bellman (HJB) equation
\begin{equation}
H(x,u^{*},\nabla V^{*}) = 0
\label{eq:perfecthjb}
\end{equation}
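
As a quick sanity check (not used in the subsequent development), consider the linear-quadratic special case $f(x) = Ax$, $g(x) = B$, $Q(x) = x^{T}Qx$, for which $V^{*}(x) = x^{T}Px$; then \eqref{eq:optimalControlLaw} and \eqref{eq:perfecthjb} reduce to the familiar linear state feedback and algebraic Riccati equation
\begin{equation*}
u^{*}(x) = -R^{-1}B^{T}Px, \qquad A^{T}P + PA - PBR^{-1}B^{T}P + Q = 0
\end{equation*}
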
The Hamiltonian in \eqref{eq:perfectHamiltonian} can be approximated by replacing $u^{*}$, $V^{*}$, $f(x)$ with
their corresponding estimates $\hat{u}$ (\textbf{Actor}), $\hat{V}$ (\textbf{Critic}) and
$\hat{f}(x)$ (\textbf{Identifier}).
\begin{equation}
\hat{H}(x,\hat{u},\nabla \hat{V}) \triangleq r(x,\hat{u}) + \nabla \hat{V}^{T} (\hat{f}(x) + g(x) \hat{u})
\label{eq:estimatedHJB}
\end{equation}
The Bellman Residual error is defined as
\begin{equation}
\begin{aligned}
\delta_{hjb} \triangleq \hat{H}(x,\hat{u},\nabla \hat{V}) - H(x,u^{*},\nabla V^{*})
\end{aligned}
\end{equation}
We parameterize the value function via a single-layer
neural network (NN)
\begin{equation}
V^{*}(x) = W^{T} \phi(x) + \epsilon_v(x)
\label{eq:perfectValueFunction}
\end{equation}
where $\phi: \Re^{n}\rightarrow \Re^{p}$ denotes the basis function chosen to
approximate the value function, satisfying $\phi(0) = 0$. The parameter
$W \in \Re^{p}$ denotes the true NN weight and
$\epsilon_v: \Re^{n}\rightarrow \Re$ denotes the function approximation error.
\begin{assm}
The value function approximation error $\epsilon_v$ and its derivative w.r.t. state
are bounded as
$\norm{\epsilon_{v}(x)} \le \overline{\epsilon},\norm{\nabla \epsilon_{v}(x)} \le \overline{\epsilon}_{d}$.
Additionally, these bounds approach 0 as the number of neurons approaches infinity.
\label{assm:actorCriticNNerrorbounded}
\end{assm}
Since the NN weight $W$ is unknown in \eqref{eq:perfectValueFunction}, we maintain two
estimates $\hat{W}_a \in \Re^{p}$ and $\hat{W}_c \in \Re^{p}$ for the control law and the value
function estimate, respectively.
\end{document}
\subsection{Identifier Design}%
\label{subsec:identifier}
We represent the system drift dynamics $f(x)$ in \eqref{eq:systemDynamics} via a
two-layer NN parameterized by $W_f \in \R{l}{n}$ and
$V_f \in \R{n}{l}$. We represent the activation function of the NN by
$\sigma: \Re^{l}\rightarrow \Re^{l} $. The dynamics of the system can be written as
\begin{align}
\dot{x} = W_f^T \sigma(\VfT x) + \epsilon_f + g(x) \tau
\end{align}
The following state estimator is designed using the estimates $\WfHat$ and
$\VfHat$ of $W_{f}$ and $V_{f}$, respectively
\begin{equation}
\dot{\hat{x}} = \hat{W}_f^T \sigma(\VfHatT x) + g(x) \tau + k \tilde{x}
\label{eq:identifierderivative}
\end{equation}
where $\tilde{x} \triangleq x- \hat{x}$ denotes the state estimation error and
$k \in \Re_{>0}$ is a feedback gain.
\begin{assm}
The parameters $W_{f}, V_{f}$ are assumed to be bounded and
$\|\sigma(\cdot)\|<\overline{\sigma},\|\nabla \sigma(\cdot)\| < \overline{\sigma}_{d} \; \; \forall x \in \mathcal{C}$.\label{assm:identifierbound}
\end{assm}
Based on a Lyapunov analysis (omitted here in the interest of space), we
design the following adaptive laws for the NN parameters
\begin{equation}
\label{eq:identifierAdaptiveLaws}
\dot{\hat{W}}_{f} = \proj(\Gamma_{wf}\hat{\sigma}\tilde{x}^{T}) \;,\;
\dot{\hat{V}}_{f} = \proj(\Gamma_{vf}x \tilde{x} ^{T} \hat{W}_f^T \nabla \hat{\sigma})
\end{equation}
where $\Gamma_{wf} \in \R{l}{l}$ and $\Gamma_{vf} \in \R{n}{n}$ are
positive definite gain matrices. We define $\tilde{W}_f \triangleq W_{f} - \WfHat$
and $\tilde{V}_f \triangleq V_{f} - \VfHat$.
\begin{thm}
\label{thm:ident}
Under the identifier update laws given by \eqref{eq:identifierderivative}, \eqref{eq:identifierAdaptiveLaws} and Assumption \ref{assm:identifierbound},
the state identification error ($\tilde{x}(t)$) and the errors in the NN
parameters ($\tilde{W}_{f}(t)$ and $\tilde{V}_{f}(t)$) are Uniformly Ultimately
Bounded (UUB).
\end{thm}
\begin{proof}
(Sketch) We define an auxiliary state
$\zeta \triangleq { [\tilde{x}^{T},\vect(\tilde{W}_f)^{T}, \vect(\tilde{V}_f)^{T}] }^{T}$.
Consider the following Lyapunov function
\begin{equation*}
V_{1}(\zeta) = \frac{1}{2}\tilde{x}^{T}\tilde{x} +
\frac{1}{2}tr(\WfTilde^T \Gamma_{wf}^{-1} \WfTilde) +
\frac{1}{2}tr(\VfTilde^T \Gamma_{vf}^{-1} \VfTilde)
\end{equation*}
One can show that $\dot{V}_{1}(\cdot)$ is negative whenever $\zeta$ lies outside
the compact set $\Omega_{\zeta} \triangleq \{\zeta : \norm{\tilde{x}} \le \frac{\overline{\sigma}^{2} \norm{\tilde{W}_f}^{2}}{4k^{2}} + \frac{1}{k}\overline{\chi}
\}$, where $\overline{\chi}$ is the computable upper bound of $\epsilon_f$ and higher
order terms originating from Taylor's approximation of $\sigma$. Hence the state $\zeta$ is UUB.
\end{proof}
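
A minimal simulation sketch of the identifier (forward-Euler discretization, sigmoidal $\sigma$, projection omitted for brevity) is given below; the shapes follow the definitions above, with $x \in \Re^{n}$, $W_f \in \R{l}{n}$ and $V_f \in \R{n}{l}$.
\begin{verbatim}
import numpy as np

def identifier_step(x, xhat, Wf, Vf, g, tau, k, Gw, Gv, dt):
    xt = x - xhat                              # estimation error
    s = 1.0 / (1.0 + np.exp(-(Vf.T @ x)))      # sigmoid activation
    ds = np.diag(s * (1.0 - s))                # gradient of the sigmoid
    xhat_dot = Wf.T @ s + g @ tau + k * xt
    Wf_dot = Gw @ np.outer(s, xt)              # adaptive law for W_f
    Vf_dot = Gv @ np.outer(x, xt) @ Wf.T @ ds  # adaptive law for V_f
    return (xhat + dt * xhat_dot,
            Wf + dt * Wf_dot,
            Vf + dt * Vf_dot)
\end{verbatim}
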
A block diagram of the resulting system is shown in Fig. \ref{fig:blockDiag}.
\end{document}
\subsection{BLF-based Constrained Optimal Control Problem}%
\label{subsec:mod}
A positive-definite, differentiable function $B_{f}: \mathcal{C} \rightarrow \Re$
satisfying the following properties is called a Barrier Lyapunov function (BLF),
provided its time derivative along the system trajectories is negative
semi-definite, i.e., $\dot{B}_{f}(x) \le 0$:
\begin{equation*}
B_{f}(0) = 0, \; \;
B_{f}(x) > 0 \;\;\forall x \in \mathcal{C} \setminus \{0\}, \;\;
\lim_{x\rightarrow \partial \mathcal{C}} B_{f}(x) = \infty
\end{equation*}
The existence of a BLF over $\mathcal{C}$ implies the forward invariance of $\mathcal{C}$
\cite[Lemma 1]{tee2009Automatica}.
\begin{construction}
$B_{f}(x)$ is constructed in a way such that
$\exists \; \gamma \in \Re_{>0}$ satisfying $\gamma \norm{\nabla B_f} \ge B_{f}\; \forall x \in \mathcal{C}$.\label{construction:gradBfupperBound}
\end{construction}
\begin{examp}
For $x \in \Re$ and $\mathcal{C} = [-1,1]$ a candidate BLF
$B_{f}(x) = \log(\frac{1}{1-x^{2}})$ with $\gamma=0.5$ satisfies the condition in
Construction \ref{construction:gradBfupperBound}.
\end{examp}
\begin{rem}
The constant $\gamma$ would be used to compute the largest attracting subset
of $\mathcal{C}$.
\end{rem}
Problem \ref{prob:constrained} can be reformulated in terms of BLF as
\begin{prob}
\label{prob:equivalent}
\begin{subequations}
\begin{align}
\min_{u(s)\;\forall s \in \Re_{\ge 0}} \hspace{8pt} & H(x,u,\nabla V^{*}) \\
\text{s.t.} \hspace{20pt} & \frac{d B_{f}}{dt} \Big|_{\dot{x} = f(x) + g(x)u} \le 0 \;\;\;\; \label{eq:barrierConstraint}\\
& B_{f}(x(0)) < \infty
\label{eq:modInitCond}
\end{align}
\end{subequations}
\end{prob}
The constraint in \eqref{eq:barrierConstraint} can be rewritten as
\begin{equation}
\label{eq:barrierConstraintReformulated}
\nabla B_f(x)^{T} [f(x) + g(x) u] \le 0
\end{equation}
We observe that the constraint in \eqref{eq:barrierConstraintReformulated} is
affine in the decision variable $u$. This, combined with the fact that the
Hamiltonian in \eqref{eq:perfectHamiltonian} is convex in $u$, makes Problem
\ref{prob:equivalent} a convex optimization problem.
To find an analytical solution, we define the Lagrangian as
\begin{equation}
L(x,u,\nabla V^{*},\lambda) = H(x,u,\nabla V^{*}) + \lambda \nabla B_f^{T} (f(x) + g(x)u)
\end{equation}
where $\lambda \in \Re_{\ge 0}$ is the Lagrange multiplier. The control law can
be obtained by minimizing the Lagrangian
\begin{equation}
u^{*}_{safe}(x,\lambda) = -\frac{1}{2} R^{-1}g^{T}(x)[\nabla V^{*}(x)+\lambda \nabla B_f(x) ]
\label{eq:uStarModified}
\end{equation}
\begin{rem}
The Lagrange multiplier $\lambda$ provides a way to reformulate a constrained
optimization problem into a weighted unconstrained optimization problem.
Typically, the expression for Lagrange multipliers are obtained from the KKT
conditions\cite{almubarak2021CDC}. For simplification of analysis, we
approximate the optimal Lagrange multiplier with a user-defined constant
$\lambda$, resulting in a suboptimal solution.
\end{rem}
The estimated safe control law is given by
\begin{equation}
\hat{u}(x,\lambda) = -\frac{1}{2} R^{-1}g^{T}(x)[ \nabla \phi(x)^{T} \hat{W}_a + \lambda \nabla B_f(x) ]
\label{eq:finalControlLaw}
\end{equation}
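
A minimal sketch of evaluating \eqref{eq:finalControlLaw} is given below; it assumes the rectangular constraint set and the logarithmic BLF $B_{f} = \sum_{i} \log\frac{a_{i}^{2}}{a_{i}^{2}-x_{i}^{2}}$ used later in the simulations, for which $\nabla B_f$ has the closed form coded here, and \texttt{phi\_grad} denotes the Jacobian $\nabla \phi$ of the user-chosen basis.
\begin{verbatim}
import numpy as np

def u_safe(x, Wa, g, Rinv, lam, a, phi_grad):
    # grad of sum_i log(a_i^2/(a_i^2 - x_i^2)): blows up at |x_i| = a_i,
    # so the barrier term repels the state from the box boundary
    gradB = 2.0 * x / (a**2 - x**2)
    return -0.5 * Rinv @ g.T @ (phi_grad(x).T @ Wa + lam * gradB)
\end{verbatim}
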
\begin{thm}
\label{thm:safety}
Under the control law in \eqref{eq:finalControlLaw} and provided Assumptions
\ref{assm:lipschitz}-\ref{assm:actorCriticNNerrorbounded} hold, the set $\mathcal{C}$ is forward invariant for the
system in \eqref{eq:systemDynamics} if $x(0) \in \mathcal{C}$.
\end{thm}
\begin{proof}
Consider the candidate Lyapunov function as $B_{f}(x) : \mathcal{C} \rightarrow \Re$.
The time derivative of $B_{f}(x)$ along the trajectories of
$\dot{x} = f(x) + g(x)\hat{u}$ is given by
\begin{equation}
\dot{B}_{f} = \nabla B_{f}^{T} (f(x) + g(x) \hat{u})
\end{equation}
Substituting the control law from \eqref{eq:finalControlLaw}, we have
\begin{equation}
\dot{B}_{f} = \nabla B_{f}^{T} f - \frac{1}{2} \nabla B_f^{T} R_{g}
\nabla \phi^{T} \hat{W}_a - \frac{\lambda}{2} \nabla B_f^{T} R_{g} \nabla B_f
\label{eq:Bdot}
\end{equation}
where we define $R_{g}(x) \triangleq g(x)R^{-1}g^{T}(x)$ and
$R_{s}(x) \triangleq \nabla \phi(x) R_{g}(x) \nabla \phi^{T}(x)$. Under Assumption
\ref{assm:fullrank}, $R_{g}(x)$ is positive-definite. Additionally, $R_{g}$ is
bounded as $\norm{R_{g}(x)} \le \overline{R}_{g} \;\forall x \in \mathcal{C}$. Since
$f(x)$ and $\nabla \phi(x)$ are continuous functions over compact set $\mathcal{C}$,
$\norm{f(x)} \le \overline{f}, \norm{\nabla \phi(x)} \le \overline{\phi}_{d}\;\forall x \in \mathcal{C}$.
We can upper bound the right hand side of \eqref{eq:Bdot} by
\begin{equation}
\dot{B}_{f} \le (\overline{f} + \frac{1}{2} \overline{\phi}_{d}
\overline{W}_{a} \overline{R}_{g} ) \norm{\nabla B_f} - \frac{\lambda}{2} \lambda_{\min}(R_{g})\norm{\nabla B_f}^{2}
\end{equation}
where $\overline{W}_{a} \in \Re_{>0}$ is the bound on the true NN weight $W$
which is subsequently enforced on $\hat{W}_a$ via a projection operator\cite{lavretsky2011Arxiv}. We observe that $\dot{B}_{f}$ is negative outside the compact set $\Omega
= \{ x \in \Re^{n} :\norm{\nabla B_f} \le \overline{B}_{d}
\}$, where $\overline{B}_{d} \triangleq \frac{\overline{f}
+ \frac{1}{2} \overline{\phi}_{d}
\overline{W}_{a} \overline{R}_{g}}{\frac{\lambda}{2}\lambda_{\min}(R_{g})}$
is a computable finite positive constant. Under the condition in Construction
\ref{construction:gradBfupperBound} we can upper bound the value of Barrier function as
\begin{equation}
\label{eq:4}
B_{f}(x(t)) \le \max\big(B_{f}(x(0)), \gamma \overline{B}_{d} \big)
\end{equation}
Since $x(0) \in \mathcal{C}$, $B_{f}(x(0))$ is finite. Thus,
$B_{f}(x(t)) \in \mathcal{L}_{\infty}$. Since the value of the Barrier function along the system
trajectory is bounded, then by the definition of $B_{f}(x)$, at no point in
time, the state trajectory intersects the boundary of the safe set $\partial \mathcal{C}$
\cite[Lemma 1]{tee2009Automatica}. Thus the state
$x(t) \in \mathcal{C} \;\;\forall t \in \Re_{\ge 0}$ and the system is forward invariant.
Since the BLF is continuously differentiable in $x$, $\nabla B_f(x)$ is a
continuous function over the compact set $\Omega$. Thus,
$\norm{\nabla B_f} \in \mathcal{L}_{\infty}$. Since all constituents of the control
law in \eqref{eq:finalControlLaw} are bounded, we can conclude that $\hat{u}(t)\in \mathcal{L}_{\infty}$.
\end{proof}
\begin{rem}
Theorem \ref{thm:safety} proves that the control policy in
\eqref{eq:finalControlLaw} guarantees safety for all time. Further, the control
policy does not switch between a stabilizing backup policy and the RL policy,
which is a distinct advantage over approaches that rely on an elusive backup policy.
\end{rem}
\end{document}
\subsection{Actor-Critic Design}%
\label{subsec:actorCriticMaths}
The actor NN weight $\hat{W}_a$ and the critic NN weight $\hat{W}_c$ are updated to
minimize the norm of the estimation errors $\tilde{W}_c \triangleq W - \hat{W}_c$ and
$\tilde{W}_a \triangleq W - \hat{W}_a$.
A least-squares update law for the critic can be obtained from the consideration
of the integral squared Bellman error
\cite{bhasin2013Automatica} as follows
\begin{equation}
\label{eq:integralBellmanError}
E_{c} = \int_{0}^{t} \delta_{hjb}^{2}(\tau) d\tau
\end{equation}
Defining $\omega \triangleq \frac{\partial \delta_{hjb}}{\partial
\hat{W}_c}$, the update law for the critic is given as
\begin{equation}
\label{eq:criticUpdateLaw}
\dot{\hat{W}}_{c} = \eta_{c} \Gamma \frac{\omega}{1+\nu \omega^{T}\Gamma\omega}\delta_{hjb}
\end{equation}
where the learning rate $\eta_{c}$ and normalizing factor $\nu$
are positive user-defined constants.
The positive-definite covariance matrix $\Gamma \in \R{p}{p}$ is updated via the update law
\begin{equation}
\label{eq:gammaUpdateLaw}
\dot{\Gamma} = \beta \Gamma - \eta_{c} \Gamma \frac{\omega \omega^{T}}{1+\nu \omega^{T}\Gamma\omega} \Gamma
\end{equation}
Under the aforementioned update law of the covariance matrix, the
following bounds can be established
\begin{equation}
\label{eq:gammaBounds}
\varphi_{1}I_{p} \preccurlyeq \Gamma(t) \preccurlyeq \varphi_{0} I_{p} \;\;\; \forall t \ge 0
\end{equation}
where $\preccurlyeq$ denotes the semi-definite ordering and
$\varphi_{0} > \varphi_{1}$ are positive constants.
The update law for the actor is obtained by the gradient descent of the cost
function in \eqref{eq:integralBellmanError}
\begin{equation}
\label{eq:actorUpdateLaw}
\begin{aligned}
\dot{\hat{W}}_{a} &= \proj\Big[ -\frac{\eta_{a1}}{\sqrt{1+\omega^{T}\omega}} R_{s}(\hat{W}_a - \hat{W}_c)\delta_{hjb} \\
&\;\;-\eta_{a2}(\hat{W}_a - \hat{W}_c) - \frac{1}{2}\lambda \nabla \phi R_{g}\nabla B_f \Big]
\end{aligned}
\end{equation}
where the projection operator $\proj(\cdot)$ \cite{lavretsky2011Arxiv} is used to keep the
estimates of the actor parameter bounded. The positive constants
$\eta_{a1}, \eta_{a2} \in \Re_{> 0}$ are user-defined gains. The last two terms in
the argument of the projection operator are attributed to the subsequent
Lyapunov analysis in Subsection \ref{subsec:lyap}.
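
A minimal sketch of one forward-Euler step of these update laws (projection omitted for brevity) is given below; the argument \texttt{dPhiRgdB} stands for the vector $\nabla \phi R_{g} \nabla B_f$ appearing in the last term of \eqref{eq:actorUpdateLaw}.
\begin{verbatim}
import numpy as np

def ac_step(Wc, Wa, Gamma, omega, delta, Rs, dPhiRgdB,
            eta_c, eta_a1, eta_a2, beta, nu, lam, dt):
    m = 1.0 + nu * omega @ Gamma @ omega
    Wc_dot = eta_c * Gamma @ omega * delta / m
    G_dot = (beta * Gamma
             - eta_c * Gamma @ np.outer(omega, omega) @ Gamma / m)
    Wa_dot = (-eta_a1 / np.sqrt(1.0 + omega @ omega)
              * Rs @ (Wa - Wc) * delta
              - eta_a2 * (Wa - Wc) - 0.5 * lam * dPhiRgdB)
    return Wc + dt * Wc_dot, Wa + dt * Wa_dot, Gamma + dt * G_dot
\end{verbatim}
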
\subfile{blockdiag}
\end{document}
\section{Introduction}
\label{sec:intro}
The reinforcement learning (RL) framework has seen reasonable success in solving
optimal control problems under uncertain system dynamics. However, most RL-based
methods need to explore the state-action spaces during the initial phases of
training. Consequently, they tend to apply control inputs that may
be detrimental to real-time safety-critical systems. This fundamental challenge
of RL algorithms precludes their use in real-world systems lest they endanger
the safety of humans and property. Therefore, researchers actively seek to
bolster RL algorithms with provable safety guarantees. Formally, the notion of
safety of dynamical systems is the certification of forward
invariance\cite{blanchini1999Automatica} of state and actuation constraint sets.
Under this definition of safety, the safe RL problem is the
mathematical construct to solve optimal control problems under user-defined
state and actuation constraints.
In literature, various methods are proposed to ensure the safety of RL
algorithms. One school of
thought is to exploit model predictive control (MPC) to buttress
RL algorithms with safety guarantees\cite{zanon2020TAC,wabersich2021TAC,li2018ACC}. While
these algorithms provide a unified approach to handling state and actuation
constraints, they solve an optimization routine at each
time step of the controller run and thus are computationally expensive.
Another class of methods in the safe RL literature employs control
barrier functions (CBF)\cite{ames2016TAC,ames2019ECC}. CBFs provide a
Lyapunov-like analysis to ensure the safety of dynamical systems
without the need to compute system trajectories. In
literature, it is common to combine CBFs with control Lyapunov functions (CLFs) in the form of an
optimization problem to
trade off safety and stability objectives\cite{choi2020Arxiv}.
However, these approaches are limited to discrete-time control
problems.
The extension of the results of RL to uncertain continuous-time systems has
been achieved by combining approximate dynamic programming
(ADP) with adaptive control
\cite{vamvoudakis2017SysConLet,bhasin2013Automatica,vamvoudakis2010Automatica,lewis2009CircuitsSystemsMagazine}.
These approaches approximately solve the unconstrained optimal control
problem for uncertain system dynamics; however, the constrained optimal control
problem for continuous-time systems remains an active area of research.
The safety problem of continuous-time RL is primarily addressed by
considering the continuous-time counterpart of CBFs, namely barrier Lyapunov
functions (BLF) \cite{tee2009Automatica}. One research direction is to
transform the constrained state dynamics into dynamics of an unconstrained
state \cite{yang2019ACC,greene2020LCSS,mahmud2021ACC} and subsequently use ADP
algorithms to solve the unconstrained problem. However, this approach
typically handles rectangular state constraints (box constraints on individual
components of states) and cannot be trivially extended to general convex state constraints.
Additionally, these approaches modify the original cost function non-trivially.
Another approach involves adding BLF to the cost formulation
\cite{marvi2021IJRNC,cohen2020CDC}. Such an addition often renders the
system's value function not continuously differentiable, which is typically
needed to establish theoretical guarantees of the algorithms.
A common feature in both continuous-time and discrete-time RL algorithms is
the use of the so-called ``backup controllers''\cite[Assm.
2]{mahmud2021ACC}\cite{almubarak2021CDC}. These are user-defined
stabilizing controllers that step in place when RL algorithms generate control
actions not in accordance with the safety requirements. Most literature assumes
access to an initial policy that stabilizes the system under a wide range of
epistemic uncertainties. The backup controllers are typically used as a
fallback measure during the initial phase of the RL training when the agent has
limited knowledge of the system under control. The assumption of the
availability of such controllers is restrictive, and the formulation of
backup controllers may be difficult for certain complex systems. Additionally,
the act of switching to a backup controller deviates from the on-policy RL algorithm, leading to sub-optimal results.
In this paper, an on-policy RL algorithm is developed for the optimal control of
continuous-time nonlinear systems that guarantee safety while obviating the need
for a backup controller.
Furthermore, the
objective function of the optimal control problem remains unchanged. Inspired by\cite{almubarak2021CDC}, we focus our efforts on extending the
Actor-Critic-Identifier (ACI) architecture\cite{bhasin2013Automatica} to solve
the optimal regulation problem for a class of uncertain nonlinear systems under
user-defined state constraints.
\subsubsection*{Contributions}\label{subsec:contribution}
The contributions of the present paper are three-fold. First, we formulate the
safety problem as the minimization of the Hamiltonian subject to a constraint
involving the time derivative of the BLF. We subsequently show that the proposed optimization problem is convex, and thus we compute the analytical solution for
the optimal control policy by minimizing the Lagrangian.
Second, we approximate the optimal control policy obtained from the proposed Lagrangian
method and show that this approximate control law renders the system safe for
each time step of the controller run without the help of a backup
stabilizing controller. Third, we extend the ACI
approach\cite{bhasin2013Automatica} to learn the optimal safe control policy online for a general class of uncertain nonlinear systems. Subsequently, we perform
simulation studies on a class of Euler-Lagrange nonlinear systems to show the
efficacy of our proposed methodology. We additionally compare our results with
ACI approach to demonstrate the safety guarantees of the proposed
method.
\subsubsection*{Notations}%
\label{subsec:notation}
Let $\vect(.)$ denote the vectorization operator of a matrix yielding a column
vector obtained by stacking the columns of the matrix on top of one another. We
will use $\nabla$ to denote gradient operator with respect to (w.r.t.) $x$. We use $\norm{.}$ to
denote the Euclidean norm for vectors and the corresponding induced norm for
matrices. Let $\mathcal{L}_{\infty}$ denote the set of all bounded signals.
$\lambda_{\min}(A)$ denotes the minimum eigenvalue of matrix $A$.
\end{document}
\subsection{Stability analysis}%
\label{subsec:lyap}
The Bellman estimation error can be written in its unmeasurable form as
\begin{equation}
\delta_{hjb} = \nabla \hat{V}^{T} \hat{F}_{\hat{u}} + r(x,\hat{u}) - \nabla V^{*T}
{F}_{u^{*}} - r(x,{u^{*}})
\label{eq:unmeasurableDeltaHJB}
\end{equation}
where $F_{u^{*}} \triangleq f(x) + g(x) u^{*}$ and
$\hat{F}_{\hat{u}} \triangleq \hat{f}(x) + g(x) \hat{u}$. Additionally, we
define $\tilde{F}_{\hat{u}} \triangleq F_{u^{*}} - \hat{F}_{\hat{u}}$.
Substituting the instantaneous cost from \eqref{eq:instantCost}, and the
NN approximations of $V^{*}$ from \eqref{eq:perfectValueFunction}
and its estimate $\hat{V}$ we have
\begin{equation}
\delta_{hjb} = \hat{W}_c^{T} \omega - [W^{T}\nabla \phi + \nabla \epsilon_v^{T}] F_{u^{*}} + \hat{u}^{T} R \hat{u} -
u^{*T} R u^{*}
\label{eq:unmeasurableDeltaHJBPart2}
\end{equation}
Substituting optimal control $u^{*}$ from \eqref{eq:uStarModified} and its
estimate $\hat{u}$ from \eqref{eq:finalControlLaw} in
\eqref{eq:unmeasurableDeltaHJBPart2} and simplifying, we have
\begin{equation}
\delta_{hjb} = -\tilde{W}_c^{T} \omega + T_{1}
\label{eq:unmeasurableDeltaHJBFinal}
\end{equation}
where
\begin{equation*}
\begin{aligned}
T_{1} \triangleq &- W^{T} \nabla \phi \tilde{F}_{\hat{u}}
- \nabla \epsilon_v^{T} F_{u^{*}}
+ \frac{1}{4} \hat{W}_a^{T} R_{s} \hat{W}_a
- \frac{1}{4} W^{T} R_{s} W \\
&-\frac{1}{4} \nabla \epsilon_v^{T} R_{g} \nabla \epsilon_v
- \frac{1}{2}\lambda \nabla B_f^{T} R_{g}(\nabla \phi^T \tilde{W}_a + \nabla \epsilon_v) \\
&- \frac{1}{2} W^{T} \nabla \phi R_{g} \nabla \epsilon_v
\end{aligned}
\end{equation*}
Substituting \eqref{eq:unmeasurableDeltaHJBFinal} into the dynamics of the
critic estimation error $\dot{\tilde{W}}_c = - \dot{\hat{W}}_{c}$ we obtain two components, a nominal dynamics term ($\Omega_{nom}$) and a perturbation
term ($\Delta$)
\begin{equation}
\dot{\tilde{W}}_c = \underbrace{-\eta_{c} \Gamma \psi \psi^{T} \tilde{W}_c}_{\Omega_{nom}} + \underbrace{\eta_{c} \Gamma \frac{\omega}{1+ \nu \omega^{T}\Gamma\omega} T_{1}}_{\Delta}
\label{eq:nomPlusPert}
\end{equation}
where
$\psi(t) \triangleq \frac{\omega(t)}{\sqrt{1+ \nu \omega(t)^{T}\Gamma(t)\omega(t)}} \in \Re^{p}$
is the normalized gradient vector for the update law of the critic. The
regressor $\psi(t)$ is bounded as
\begin{equation}
\label{eq:3}
\norm{\psi(t)} \le \frac{1}{\sqrt{\nu \varphi_{1}}} \;\; \forall t \ge 0
\end{equation}
The nominal dynamics
$\dot{\tilde{W}}_{c} = \Omega_{nom}$ is globally exponentially stable
(GES), provided that the bounded signal $\psi(t)$ is persistently exciting (PE)\cite{bhasin2013Automatica}.
Consequently, there exists a
positive-definite scalar-valued function $V_{c} (\tilde{W}_c, t)$ such that the
following conditions are satisfied
\begin{equation}
\label{eq:definitionOfVc}
\begin{aligned}
c_{1} \norm{\tilde{W}_c}^{2} \le V_{c}(\tilde{W}_c,t) &\le c_{2} \norm{\tilde{W}_c}^{2} \\
\frac{\partial V_{c}}{\partial t} + \frac{\partial V_{c}}{\partial \tilde{W}_c} \Omega_{nom} &\le -c_{3}\norm{\tilde{W}_c}^{2} \\
\norm{\frac{\partial V_{c}}{\partial \tilde{W}_c}} &\le c_{4} \norm{\tilde{W}_c}
\end{aligned}
\end{equation}
where $c_{1},c_{2},c_{3},c_{4}$ are positive scalar constants. Additionally, we
define the following term that would appear in the subsequent Lyapunov analysis
\begin{equation*}
\begin{aligned}
T_{2} &\triangleq \frac{1}{4} \nabla \epsilon_v^{T} R_{g} \nabla \epsilon_v
- \frac{1}{2} \lambda \nabla B_f^{T} R_{g} \nabla \epsilon_v
+ \frac{1}{2}\tilde{W}_a^{T} \nabla \phi R_{g} \nabla \epsilon_v \\
&- \frac{1}{4} \hat{W}_a^{T} R_{s} \hat{W}_a
+ \lambda \nabla B_f^{T} f(x)
- \lambda \nabla B_f^{T} R_{g} \nabla \phi^{T} \hat{W}_a
\end{aligned}
\end{equation*}
Under Assumptions \ref{assm:gBounded}-\ref{assm:actorCriticNNerrorbounded} and Theorems
\ref{thm:safety}-\ref{thm:ident}, we can obtain the following computable
bounds
\begin{equation}
\label{eq:Bounds}
\norm{\tilde{W}_a} \le \overline{W}_a, \;\;
\norm{T_{1}} \le \overline{T}_1, \;\;
\norm{R_{s}} \le \overline{R}_s, \;\;
\norm{T_{2}} \le \overline{T}_2
\end{equation}
where $\overline{W}_a, \overline{T}_1, \overline{R}_s, \overline{T}_2 \in \Re_{>0}$ are computable positive constants. Subsequently, we can bound the perturbation term in \eqref{eq:nomPlusPert} by
\begin{equation}
\label{eq:perturbationBound}
\norm{\Delta} \le \frac{\eta_{c}\varphi_{0} \overline{T}_1}{2\sqrt{\nu \varphi_{1}}}
\end{equation}
where $\varphi_{1}$ was defined in \eqref{eq:gammaBounds}.
\begin{thm}
Provided Assumptions \ref{assm:lipschitz}-\ref{assm:identifierbound} hold, the regressor matrix $\psi(t)$
is PE, and the following gain conditions
$c_{3} > \eta_{a1}\overline{W}_a \overline{R}_s $ and $4\eta_{a2} > \overline{R}_s$ are satisfied,
then the control action in \eqref{eq:finalControlLaw}, the actor and critic
update laws from \eqref{eq:criticUpdateLaw} and \eqref{eq:actorUpdateLaw}, and the
identifier \eqref{eq:identifierderivative}, \eqref{eq:identifierAdaptiveLaws}
guarantee that the state $x(t)$, actor weight estimation error $\tilde{W}_a(t)$
and the critic weight estimation error $\tilde{W}_c(t)$ are UUB.
\end{thm}
\begin{proof}
We define the auxiliary state $z \triangleq
[x^{T} , \vect(\tilde{W}_c)^{T} , \vect(\tilde{W}_a)^{T}]^{T}$. We consider the
following locally positive-definite candidate Lyapunov
function
\begin{align}
V_{L}(z,t) &= V^{*}(x)+ \lambda B_{f}(x) + V_{c}(\tilde{W}_c,t) + \frac{1}{2} \tilde{W}_a^{T}\tilde{W}_a
\end{align}
Computing its derivative with respect to time along the system trajectory,
\begin{equation}
\begin{aligned}
\dot{V}_{L}(z,t) &= ( \nabla V^{*} + \lambda \nabla B_{f})^{T} [f + g\hat{u}] + \frac{dV_{c}}{dt} - \tilde{W}_a^{T} \dot{\hat{W}}_{a}
\end{aligned}
\end{equation}
Using \eqref{eq:perfecthjb}, \eqref{eq:definitionOfVc} we have
\begin{equation}
\begin{aligned}
\dot{V}_{L} \le & -Q(x) - u^{*T}R u^{*} - \nabla V^{*T} g \tilde{u} + \lambda \nabla B_{f}^{T} [f + g\hat{u}] \\
& - c_{3} \norm{\tilde{W}_c} ^{2} + c_{4} \norm{\tilde{W}_c}\norm{\Delta} - \tilde{W}_a^{T} \dot{\hat{W}}_{a}
\end{aligned}
\end{equation}
Substituting the control law from \eqref{eq:finalControlLaw}, bounds from
\eqref{eq:Bounds} - \eqref{eq:perturbationBound}, the actor update law from
\eqref{eq:actorUpdateLaw}, the $\delta_{hjb}$ from
\eqref{eq:unmeasurableDeltaHJBFinal} and using the properties of the $\proj(.)$ operator
\cite{lavretsky2011Arxiv} we have
\begin{equation}
\begin{aligned}
\dot{V}_{L} \le & -Q(x)
-(c_{3} - \eta_{a1} \overline{W}_a \overline{R}_s)\norm{\tilde{W}_c}^{2}
- (\eta_{a2} - \frac{\overline{R}_s}{4}) \norm{\tilde{W}_a}^{2} \\
&- \frac{3 \lambda^{2}}{4} \lambda_{\min}(R_{g})\norm{\nabla B_f}^{2}
+ \overline{T}_2 + \eta_{a1} \overline{W}_a^{2} \overline{R}_s \overline{T}_1 + T_{3} \norm{\tilde{W}_c}
\end{aligned}
\end{equation}
where $T_{3} \triangleq \frac{c_{4} \eta_{c} \varphi_{0}\overline{T}_1}{2 \sqrt{\nu \varphi_{1}}}
+ \eta_{a2} \overline{W}_a + \eta_{a1} \overline{W}_a \overline{R}_s \overline{T}_1 + \eta_{a1} \overline{W}_a^{2} \overline{R}_s$. Under the gain condition of $c_{3} > \eta_{a1} \overline{W}_a \overline{R}_s$, completing the
squares yields
\begin{equation}
\begin{aligned}
\dot{V}_{L} \le & -Q(x)
-(1-\theta)(c_{3} - \eta_{a1} \overline{W}_a \overline{R}_s)\norm{\tilde{W}_c}^{2} \\
&- (\eta_{a2} - \frac{\overline{R}_s}{4}) \norm{\tilde{W}_a}^{2}
- \frac{3 \lambda^{2}}{4} \lambda_{\min}(R_{g})\norm{\nabla B_f}^{2} \\
&+ \frac{T_{3}^{2}}{4\theta (c_{3} - \eta_{a1} \overline{W}_a \overline{R}_s )}
+ \overline{T}_2 + \eta_{a1} \overline{W}_a^{2} \overline{R}_s \overline{T}_1
\end{aligned}
\end{equation}
where $\theta \in (0,1)$. Under the additional gain condition of
$4\eta_{a2} > \overline{R}_s$, there exist two class $\mathcal{K}$ functions $\alpha_{1}$
and $\alpha_{2}$ such that the following inequalities hold
\begin{equation}
\begin{aligned}
&\alpha_{1}(\norm{z}) \le Q(x) + (1-\theta) (c_{3} - \eta_{a1} \overline{W}_a \overline{R}_s) \norm{\tilde{W}_c}^{2} \\
&+ (\eta_{a2} - \frac{\overline{R}_s}{4}) \norm{\tilde{W}_a}^{2}
+ \frac{3 \lambda^{2}}{4} \lambda_{\min}(R_{g})\norm{\nabla B_f}^{2} \le \alpha_{2}(\norm{z})
\end{aligned}
\end{equation}
The derivative of the Lyapunov function is upper-bounded by
\begin{equation}
\begin{aligned}
\dot{V}_{L} \le & -\alpha_{1}(\norm{z}) + \frac{T_{3}^{2}}{4\theta (c_{3} - \eta_{a1} \overline{W}_a \overline{R}_s )}
+ \overline{T}_2 + \eta_{a1} \overline{W}_a^{2} \overline{R}_s \overline{T}_1
\end{aligned}
\end{equation}
We observe that $\dot{V}_{L}(z,t)$ is negative whenever $z(t)$ lies outside the
compact set $\Omega_{z} \triangleq \{z : \norm{z} \le \alpha_{1}^{-1}(\frac{T_{3}^{2}}{4\theta (c_{3} - \eta_{a1} \overline{W}_a \overline{R}_s )}
+ \overline{T}_2 + \eta_{a1} \overline{W}_a^{2} \overline{R}_s \overline{T}_1)\}$. We can thus
conclude that the norm of the auxiliary state $\norm{z(t)}$ is
UUB.
\end{proof}
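To make the structure of \eqref{eq:nomPlusPert} concrete, the following Python sketch integrates the perturbed critic-weight error dynamics with forward Euler under a persistently exciting regressor. All gains and signals ($\eta_{c}$, $\nu$, $\Gamma$, the regressor $\omega(t)$, and a bounded sinusoidal stand-in for $T_{1}$) are illustrative assumptions rather than values from the preceding analysis; the observed behavior, decay of $\norm{\tilde{W}_c}$ into a small ball, matches the UUB conclusion.
\begin{verbatim}
# Sketch: forward-Euler integration of
#   Wc_tilde' = -eta_c*Gamma*psi*psi^T*Wc_tilde + Delta
# under a PE regressor with a bounded perturbation (illustrative values).
import numpy as np

eta_c, nu = 5.0, 0.1
Gamma = np.eye(2)
Wc_tilde = np.array([2.0, -1.5])
dt, T1_bar = 1e-3, 0.05                   # T1_bar: bound on the residual T_1
for k in range(100000):
    t = k * dt
    omega = np.array([np.sin(t), np.cos(2.0 * t)])   # persistently exciting
    den = 1.0 + nu * omega @ Gamma @ omega
    psi = omega / np.sqrt(den)
    Delta = eta_c * (Gamma @ omega / den) * (T1_bar * np.sin(3.0 * t))
    Wc_tilde = Wc_tilde + dt * (
        -eta_c * Gamma @ np.outer(psi, psi) @ Wc_tilde + Delta)
print(np.linalg.norm(Wc_tilde))           # settles to a small ultimate bound
\end{verbatim}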
\section{Conclusions and Future Work}
We develop an online Actor-Critic-Identifier
architecture-based safe RL algorithm to solve the optimal regulation problem for
a class of uncertain
nonlinear systems while adhering to
user-defined state constraints. We formulate the safety problem as a convex
optimization problem involving the minimization of the Hamiltonian subject to
the negative semi-definiteness of a candidate BLF. We derive an optimal control
law for the constrained system by solving the Lagrangian and show that the
on-policy RL algorithm ensures the forward invariance of the constraint set
without the need to switch to an external stabilizing backup controller. We
subsequently develop adaptation laws to learn the optimal policy and
demonstrate that all closed-loop signals are UUB. Finally, we demonstrate the
effectiveness of our controller on a two-link robot manipulator system and
compare our results with those of the existing literature. We show that the
proposed method ensures safety during the initial phase
of training, whereas the existing approach exhibits safety violations.
Future work includes extending the proposed methodology to include actuation
constraints as well as state constraints, without
requiring a backup controller. The optimality of the proposed
controller may be improved by considering the Lagrange multipliers obtained from the KKT conditions.
\end{document}
\section{Introduction}
An element $a\in R$ is \emph{strongly clean} provided
that it is the sum of an idempotent and a unit that commute with each other. A
ring $R$ is \emph{strongly clean} provided that every element in
$R$ is strongly clean. A ring $R$ is local if it has only one
maximal right ideal. As is well known, a ring $R$ is local if and
only if for any $x\in R$, $x$ or $1-x$ is invertible. Strongly
clean matrices over commutative local rings were extensively
studied by many authors from very different viewpoints (cf. [1-3]
and [8]). Recently, a related cleanness of triangular matrix rings
over abelian rings was studied by Diesl et al. (cf. [7]). In fact,
every triangular matrix ring over a commutative local ring is
strongly rad-clean (cf. [6]).
Following Diesl, we say that $a\in R$ is
\emph{strongly rad-clean} provided that there exists an idempotent
$e\in R$ such that $a-e\in U(R),\ ae=ea$ and $eae\in J(eRe)$ (cf.
[6]). A ring $R$ is \emph{strongly rad-clean} provided that every
element in $R$ is strongly rad-clean. Strongly rad-clean rings
form a natural subclass of strongly clean rings which have stable
range one (cf. [4]). Let $M$ be a right $R$-module, and let
$\varphi\in end_R(M)$. Then we include a relevant diagram to
reinforce the theme of direct sum decompositions:
$$\begin{array}{cccccc}
M&=&A&\bigoplus &B&\\
&&\varphi\downarrow\cong &&\downarrow&\varphi \\
M&=&A&\bigoplus &B&
\end{array}$$
If such a diagram holds, we call this an AB-decomposition for
$\varphi$. It turns out by [2, Lemma 40] that $\varphi$ is
strongly $\pi$-regular if and only if there is an AB-decomposition
with $\varphi|_{B}\in N(end(B))$ (the set of nilpotent
elements).\\
Further, $\varphi$ is strongly rad-clean if and only if there is
an \emph{AB-decomposition} with $\varphi|_{B}\in J(end(B))$ (the
Jacobson radical of $end(B)$). Thus, strong rad-cleanness can be
seen as a natural extension of strong $\pi$-regularity. In [2,
Theorem 12], the authors gave a criterion to characterize when a
square matrix over a commutative local ring is strongly clean. We
extend this result to strongly rad-clean matrices over a
commutative local ring. We completely determine when a $2\times 2$
matrix over a commutative local ring is strongly rad-clean. An application to matrices
over power series rings is also studied.
Throughout, all rings are commutative with an identity and all
modules are unitary left modules. Let $M$ be a left $R$-module. We
denote the endomorphism ring of $M$ by $end(M)$ and the
automorphism ring of $M$ by $aut(M)$, respectively. The characteristic polynomial of $A$ is the polynomial
$\chi(A)=det(tI_n-A)$. We always use
$J(R)$ to denote the Jacobson radical and $U(R)$ is the set of invertible elements of a ring $R$. $M_2(R)$
stands for the ring of all $2\times 2$ matrices over $R$, and
$GL_2(R)$ denotes the 2-dimensional general linear group of $R$.
\section{Strongly rad-clean 2x2 matrices over a commutative
local ring}
In this section, we study the structure of strongly rad-clean elements in various situations related
to ordinary ring extensions which have roles in ring theory. We start with a well known characterization of strongly rad-clean element in the endomorphism ring of a module $M$.
\begin{lem} \label{Lemma 1}\ \
Let $E=end(_RM)$, and let $\alpha\in E$. Then the following are equivalent:\end{lem}
\begin{enumerate}
\item [(1)]{\it $\alpha \in E$ is strongly rad-clean.}
\vspace{-.5mm}
\item [(2)]{\it There exists a direct sum decomposition
$M=P\oplus Q$ where $P$ and $Q$ are $\alpha$-invariant, and
$\alpha|_P\in aut(P)$ and $\alpha|_Q\in J\big(end(Q)\big)$.}
\vspace{-.5mm}
\end{enumerate}
\begin{proof} See [6, Proposition 4.1.2].\end{proof}
\begin{lem}\label{Lemma 3} Let $R$ be a ring, let $M$ be a
left $R$-module. Suppose that $x,y,a,b\in end(_RM)$ such that
$xa+yb=1_M, xy=yx=0, ay=ya$ and $xb=bx$. Then $M=ker(x)\oplus
ker(y)$ as left $R$-modules.
\end{lem}
\begin{proof} See [2, Lemma 11].
\end{proof}
A commutative ring $R$ is {\it projective-free} if every finitely generated projective $R$-module is free. Evidently, every commutative local ring is projective-free. We now derive
\begin{lem}\label{Lemma 2} Let $R$ be projective-free. Then $A\in M_2(R)$ is strongly rad-clean if and only if
$A\in GL_2(R)$, or $A\in J\big(M_2(R)\big)$, or $A$ is similar to $diag(\alpha,\beta)$ with $\alpha\in J(R)$ and $\beta\in U(R)$.\end{lem}
\begin{proof}
$\Longrightarrow$ Write $A=E+U, E^2=E, U\in GL_2(R), EA=AE\in J(M_2(R))$. Since $R$ is projective-free,
there exists $P\in GL_2(R)$ such that $PEP^{-1}=diag(0,0), diag(1,1)$ or $diag(1,0)$.
Then $(i)$ $PAP^{-1}=PUP^{-1}$; hence, $A\in GL_2(R)$; $(ii)$ $(PAP^{-1})diag(1,1)=diag(1,1)(PAP^{-1})\in J\big(M_2(R)\big)$, and so $A\in J\big(M_2(R)\big)$; $(iii)$
$(PAP^{-1})diag(1,0)=diag(1,0)(PAP^{-1})\in J\big(M_2(R)\big)$ and $PAP^{-1}-diag(1,0)\in GL_2(R)$. Hence, $PAP^{-1}=\left(
\begin{array}{cc}
a&b\\
c&d
\end{array}
\right)$ with $a\in J(R),b=c=0$ and $d\in U(R)$. Therefore $A$ is similar to $diag(\alpha,\beta)$ with $\alpha\in J(R)$ and $\beta\in U(R)$.
$\Longleftarrow$ If $A\in GL_2(R)$ or $A\in J\big(M_2(R)\big)$, then $A$ is strongly rad-clean.
We now assume that $A$ is similar to $diag(\alpha,\beta)$ with $\alpha\in J(R)$ and $\beta\in U(R)$. Then $A$ is similar to
$\left(
\begin{array}{cc}
1&0\\
0&0\end{array} \right)+ \left(
\begin{array}{cc}
\alpha -1&0\\
0&\beta\end{array} \right)$ where $$\begin{array}{c}
\left(
\begin{array}{cc}
\alpha -1&0\\
0&\beta\end{array} \right)\in GL_2(R), \left(
\begin{array}{cc}
\alpha&0\\
0&\beta\end{array} \right)\left(
\begin{array}{cc}
1&0\\
0&0\end{array} \right)\in J\big(M_2(R)\big)\\
\left(
\begin{array}{cc}
\alpha -1&0\\
0&\beta\end{array} \right)\left(
\begin{array}{cc}
1&0\\
0&0\end{array} \right)=\left(
\begin{array}{cc}
1&0\\
0&0\end{array} \right)\left(
\begin{array}{cc}
\alpha -1&0\\
0&\beta\end{array} \right).
\end{array}$$ Therefore $A\in M_2(R)$ is strongly rad-clean.\end{proof}
\begin{thm} \label{Theorem 4} Let $R$ be projective-free. Then $A\in M_2(R)$ is strongly rad-clean if and only if
\begin{enumerate}
\item [(1)]{\it $A\in GL_2(R)$, or}
\vspace{-.5mm}
\item [(2)]{\it $A\in J\big(M_2(R)\big)$, or}
\vspace{-.5mm}
\item [(3)]{\it $\chi(A)=0$ has a root in $J(R)$ and a root in $U(R)$.}
\vspace{-.5mm}
\end{enumerate}
\end{thm}
\begin{proof} $\Longrightarrow$ By Lemma \ref{Lemma 2}, $A\in GL_2(R)$, or $A\in
J\big(M_2(R)\big)$, or $A$ is similar to a matrix $\left(
\begin{array}{cc}
\alpha&0\\
0&\beta\end{array} \right)$, where $\alpha\in J(R)$ and $\beta\in U(R)$. Then $\chi(A)=(t-\alpha)(t-\beta)$ has a root
$\alpha\in J(R)$ and a root $\beta\in U(R)$.
$\Longleftarrow$ If $(1)$ or $(2)$ holds, then $A\in M_2(R)$ is
strongly rad-clean. If $(3)$ holds, write $\chi(A)=(t-\alpha)(t-\beta)$ with $\alpha\in J(R)$ and $\beta\in U(R)$. Choose $X=A-\alpha I_2$ and $Y=A-\beta I_2$. Then
$$\begin{array}{c}
X(\beta-\alpha)^{-1}I_2-Y(\beta-\alpha)^{-1}I_2=I_2,\\
XY=YX=0, X(\beta-\alpha)^{-1}I_2=(\beta-\alpha)^{-1}I_2X,\\
(\beta-\alpha)^{-1}I_2Y=Y(\beta-\alpha)^{-1}I_2.
\end{array}$$ By virtue of Lemma \ref{Lemma 3}, we have $R^2=ker(X)\oplus ker(Y)$. For any $x\in ker(X)$, we have $(x)AX=(x)XA=0$, and
so $(x)A\in ker(X)$. Then $ker(X)$ is $A$-invariant.
Similarly, $ker(Y)$ is $A$-invariant. For any $x\in ker(X)$, we have $0=(x)X=(x)\big(A-\alpha I_2\big)$; hence,
$(x)A=(x)\alpha I_2$. As $\alpha\in J(R)$, we have $A|_{ker(X)}\in J\big(end(ker(X))\big)$. For any $y\in ker(Y)$, we
have
$$0=(y)Y=(y)\big(A-\beta I_2\big).$$ This implies that $(y)A=(y)\big(\beta I_2\big)$. Obviously,
$A|_{ker(Y)}\in aut\big(ker(Y)\big)$. Therefore $A\in M_2(R)$ is strongly rad-clean by Lemma \ref{Lemma 1}.
\end{proof}
We have accumulated all the information necessary to prove the following.
\begin{thm}\label{Theorem 5} Let $R$ be a commutative
local ring, and let $A\in M_2(R)$. Then the following are
equivalent:
\begin{enumerate}
\item [(1)]{\it $A\in M_2(R)$ is
strongly rad-clean.} \vspace{-.5mm}
\item [(2)]{\it $A\in GL_2(R)$ or $A\in J\big(M_2(R)\big)$, or $trA\in U(R)$ and the quadratic equation $x^2+x=-\frac{detA}{tr^2A}$ has a root in $J(R)$.}
\vspace{-.5mm}
\item [(3)]{\it $A\in GL_2(R)$ or $A\in J\big(M_2(R)\big)$, or $trA\in U(R), detA\in J(R)$ and the quadratic equation $x^2+x=\frac{detA}{tr^2A-4detA}$ is
solvable.}
\end{enumerate}
\end{thm}
\begin{proof} $(1)\Rightarrow (2)$ Assume that $A\not\in GL_2(R)$ and $A\not\in
J\big(M_2(R)\big)$. By virtue of Theorem \ref{Theorem 4},
$trA\in U(R)$ and the characteristic polynomial $\chi(A)$ has a
root in $J(R)$ and a root in $U(R)$. According to Lemma \ref{Lemma 2},
$A$ is similar to $\left(
\begin{array}{cc}
\lambda&0\\
0&\mu
\end{array} \right)$, where $\lambda\in J(R), \mu\in U(R)$. Clearly,
$y^2-(\lambda+\mu)y+\lambda\mu=0$ has a root $\lambda$ in $J(R)$.
Hence so does the equation
$$(\lambda+\mu)^{-1}y^2-y=-(\lambda+\mu)^{-1}\lambda\mu.$$ Set
$z=(\lambda+\mu)^{-1}y$. Then
$$(\lambda+\mu)z^2-(\lambda+\mu)z=-(\lambda+\mu)^{-1}\lambda\mu.$$ That is,
$z^2-z=-(\lambda+\mu)^{-2}\lambda\mu.$ Consequently,
$z^2-z=-\frac{detA}{tr^2A}$ has a root in $J(R)$. Let $x=-z$. Then $x^2+x=-\frac{detA}{tr^2A}$ has a root in $J(R)$.
$(2)\Rightarrow (3)$ By hypothesis, the equation
$y^2-y=-\frac{detA}{tr^2A}$ has a root $a\in J(R)$ (put $y=-x$). Assume that $trA\in U(R)$. Then
$\big(a(2a-1)^{-1}\big)^2-\big(a(2a-1)^{-1}\big)=\frac{detA}{tr^2A\cdot
\big(4(a^2-a)+1\big)} =\frac{detA}{tr^2A\cdot
\big(-4(trA)^{-2}detA+1\big)}=\frac{detA}{tr^2A-4detA}. $
Therefore the equation $y^2-y=\frac{detA}{tr^2A-4detA}$ is
solvable. Let $x=-y$. Then $x^2+x=\frac{detA}{tr^2A-4detA}$ is
solvable.
$(3)\Rightarrow (1)$ Suppose $A\not\in GL_2(R)$ and $A\not\in
J\big(M_2(R)\big)$. Then $trA\in U(R), detA\in J(R)$ and the
equation $x^2+x=\frac{detA}{tr^2A-4detA}$ has a root. Let $y=-x$.
Then $y^2-y=\frac{detA}{tr^2A-4detA}$ has a root $a\in R$.
Clearly, $b:=1-a\in R$ is a root of this equation. As $a^2-a\in
J(R)$, we see that either $a\in J(R)$ or $1-a\in J(R)$. Thus,
$2a-1=1-2(1-a)\in U(R)$. It is easy to verify that
$\big(a(2a-1)^{-1}trA\big)^2-trA\cdot
\big(a(2a-1)^{-1}trA\big)+detA=-\frac{tr^2A\cdot
(a^2-a)}{4(a^2-a)+1}+detA=0.$ Thus the equation $y^2-trA\cdot
y+detA=0$ has roots $a(2a-1)^{-1}trA$ and $b(2b-1)^{-1}trA$. Since
$ab\in J(R)$, we see that $a+b=1$ and either $a\in J(R)$ or $b\in
J(R)$. Therefore $y^2-trA\cdot y+detA=0$ has a root in $U(R)$ and a
root in $J(R)$. Since $R$ is a commutative local ring, it is projective-free. By virtue of Theorem \ref{Theorem 4}, we obtain the result.\end{proof}
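The substitution used in $(2)\Rightarrow (3)$ admits a quick symbolic check. The following sketch (using the SymPy library; the variable names are ours) verifies that if $a^2-a=-\frac{detA}{tr^2A}$, then $a(2a-1)^{-1}$ satisfies $y^2-y=\frac{detA}{tr^2A-4detA}$, exactly as computed above.
\begin{verbatim}
# Symbolic check (sketch) of the substitution in (2) => (3).
from sympy import symbols, simplify

a, tr, det = symbols('a tr det')
lhs = (a / (2*a - 1))**2 - a / (2*a - 1)
target = det / (tr**2 - 4*det)
# impose a^2 - a = -det/tr^2, i.e. det = -tr^2*(a^2 - a)
print(simplify((lhs - target).subs(det, -tr**2 * (a**2 - a))))  # expect 0
\end{verbatim}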
\begin{cor} \label{Corollary 6} Let $R$ be a commutative
local ring, and let $A\in M_2(R)$. Then the following are
equivalent:
\begin{enumerate}
\item [(1)]{\it $A\in M_2(R)$ is
strongly clean.} \vspace{-.5mm}
\item [(2)]{\it $I_2-A\in GL_2(R)$ or $A\in M_2(R)$ is
strongly rad-clean.}
\end{enumerate}
\end{cor}
\begin{proof}
$(2)\Rightarrow (1)$ is trivial.
$(1)\Rightarrow (2)$ In view of [3, Corollary 16.4.33], $A\in
GL_2(R)$, or $I_2-A\in GL_2(R)$ or $trA\in U(R), detA\in J(R)$ and
the quadratic equation \\ $x^2-x=\frac{detA}{tr^2A-4detA}$ is
solvable. Hence $x^2+x=\frac{detA}{tr^2A-4detA}$ is
solvable. According to Theorem \ref{Theorem 5}, we complete the
proof.
\end{proof}
\begin{cor} \label{Corollary 7} Let $R$ be a commutative
local ring. If $\frac{1}{2}\in R$, then the following are
equivalent:
\begin{enumerate}
\item [(1)]{\it $A\in M_2(R)$ is strongly rad-clean.} \vspace{-.5mm}
\item [(2)]{\it $A\in GL_2(R)$ or $A\in J\big(M_2(R)\big)$, or $trA\in U(R), detA\in J(R)$ and $tr^2A-4detA$ is a square.}
\vspace{-.5mm}
\end{enumerate}
\end{cor}
\begin{proof} $(1)\Rightarrow (2)$ According to Theorem \ref{Theorem 5}, $A\in GL_2(R)$ or $A\in J\big(M_2(R)\big)$, or $trA\in U(R), detA\in J(R)$ and the quadratic equation $x^2-x=\frac{detA}{tr^2A-4detA}$ is
solvable. If $a\in R$ is a root of the equation, then
$(2a-1)^2=4(a^2-a)+1=\frac{tr^2A}{tr^2A-4detA}\in U(R)$. As in the
proof of Theorem \ref{Theorem 5}, $2a-1\in U(R)$. Therefore
$tr^2A-4detA=\big(trA\cdot (2a-1)^{-1}\big)^2$.
$(2)\Rightarrow (1)$ If $trA\in U(R), detA\in J(R)$ and $tr^2A-4detA=u^2$ for some $u\in R$, then $u\in U(R)$ and the equation $x^2+x=\frac{detA}{tr^2A-4detA}$ has a root $-\frac{1}{2}u^{-1}(trA+u)$. By virtue of Theorem \ref{Theorem 5}, $A\in M_2(R)$ is strongly rad-clean.
\end{proof}
\vskip4mm Every strongly rad-clean matrix over a ring is strongly clean. But there exist strongly clean matrices over a commutative local ring which are not strongly rad-clean, as the following example shows.
\begin{ex} \label{Example 8} \em Let $R={\mathbb Z}_4$, and let $A=
\left(
\begin{array}{cc}
2&3\\
0&2
\end{array}
\right)\in M_2(R)$. $R$ is a commutative local ring. Then $A=\left(
\begin{array}{cc}
1&0\\
0&1
\end{array}
\right)+\left(
\begin{array}{cc}
1&3\\
0&1
\end{array}
\right)$ is a strongly clean decomposition. Thus $A\in M_2(R)$ is strongly clean.
If $A\in M_2(R)$ is strongly rad-clean, there exist an idempotent $E\in M_2(R)$ and an invertible $U\in M_2(R)$ such that $A=E+U, EA=AE$ and $EAE\in J\big(M_2(R)\big)$. Hence, $AU=A(A-E)=(A-E)A=UA$, and then $E=A-U\in GL_2(R)$ as $A^4=0$. This implies that $E=I_2$, and so $EAE=A\not\in J\big(M_2(R)\big)$, as $J(R)=2R$. This gives a contradiction. Therefore $A\in M_2(R)$ is not strongly rad-clean.
\end{ex}
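Since $M_2({\mathbb Z}_4)$ is finite, the assertions of this example can also be confirmed by exhaustive search. The following Python sketch enumerates all idempotents $E$ and tests every decomposition $A=E+U$, using that $U\in GL_2({\mathbb Z}_4)$ if and only if $det U$ is odd and that $J\big(M_2({\mathbb Z}_4)\big)=M_2(2{\mathbb Z}_4)$.
\begin{verbatim}
# Brute-force check over Z/4Z: A=[[2,3],[0,2]] is strongly clean
# but not strongly rad-clean.
from itertools import product

M = 4
def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % M
                       for j in range(2)) for i in range(2))
def sub(X, Y):
    return tuple(tuple((X[i][j] - Y[i][j]) % M for j in range(2))
                 for i in range(2))

A = ((2, 3), (0, 2))
mats = [((a, b), (c, d)) for a, b, c, d in product(range(M), repeat=4)]
idem = [E for E in mats if mul(E, E) == E]
units = {U for U in mats
         if (U[0][0] * U[1][1] - U[0][1] * U[1][0]) % 2 == 1}

clean = radclean = False
for E in idem:
    U = sub(A, E)
    if U in units and mul(E, A) == mul(A, E):
        clean = True
        EAE = mul(mul(E, A), E)
        if all(x % 2 == 0 for row in EAE for x in row):  # EAE in J
            radclean = True
print(clean, radclean)  # expect: True False
\end{verbatim}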
Following Chen, $a\in R$ is {\it strongly J-clean} provided
that there exists an
idempotent $e\in R$ such that $ea=ae$ and $a-e\in J(R)$ (cf. [3]). Every uniquely
clean ring is strongly J-clean (cf. [9]). We have
\begin{prop} \label{Proposition 16} Every strongly J-clean
element in a ring is strongly rad-clean.
\end{prop}
\begin{proof} Let $a\in R$ be strongly J-clean. Then there exist $e, w\in R$ such that
$a=e+w$ with $e^2=e$, $w\in J(R)$ and $ae=ea$. Set $f=1-e$; then $f^2=f$ and $af=fa$.
Multiplying the equation $a=e+w$ from the right and from the left by $f$, we have
$faf=fwf\in fJ(R)f=J(fRf)$. But $a=f+\big((2e-1)+w\big)$, where $(2e-1)+w\in U(R)$
since $(2e-1)^2=1$ and $w\in J(R)$.
This completes the proof.
\end{proof}
\vskip4mm Following Cui and Chen, an element $a\in R$ is quasipolar if there exists an idempotent $e\in comm(a)$ such that
$a+e\in U(R)$ and $ae\in R^{qnil}$. Obviously, $A~\mbox{is strongly J-clean}\Longrightarrow A~\mbox{is strongly rad-clean}\Longrightarrow A~\mbox{is quasipolar}.$ But the converses are not true, as the following shows:
\begin{ex} \label{Example 18}\em $(1)$ Let $R$ be a commutative
local ring and $A= \left(
\begin{array}{cc}
1 &1\\
1&0
\end{array}
\right)$ be in $M_2(R)$. Since $A\in GL_2(R)$, by Lemma \ref{Lemma 2}, it is
strongly rad-clean but is not strongly J-clean, as $I_2-A\not\in J(M_2(R))$.
$(2)$ Let $R={\mathbb Z}_{(3)}$ and $A= \left(
\begin{array}{cc}
2 &1\\
-1&1
\end{array}
\right)$. Then $tr A=3\in J(R)$ and $det A=3\in J(R)$. Hence $A$ is
quasipolar by [5, Theorem 2.6]. Note that $tr A \notin U(R)$,
$A\notin GL_2(R)$ and $A\notin J(M_2(R))$. Thus, $A$ is not
strongly rad-clean, by Corollary \ref{Corollary 7}.
\end{ex}
\vskip4mm Let $R$ be a
commutative local ring. If $A\in M_2(R)$ is strongly rad-clean,
we claim that $A\in GL_2(R)$, or $A\in J\big(M_2(R)\big)$, or $A$ has an
invertible trace. If $A\not\in
GL_2(R)$ and $A\not\in J\big(M_2(R)\big)$, it follows from Lemma
\ref{Lemma 2} that $A$ is similar to a matrix $\left(
\begin{array}{cc}
1+w_1&0\\
0&w_2\end{array} \right)$ where $1+w_1\in J(R), w_2\in U(R)$.
Therefore $tr(A)=(1+w_1)+w_2\in U(R)$, and we are done.
We set $B_{12}(a)=\left(
\begin{array}{cc}
1&a\\
0&1 \end{array} \right)$ and $B_{21}(a)=\left(
\begin{array}{cc}
1&0\\
a&1 \end{array} \right)$. We have
\begin{thm} \label{Theorem 10} Let $R$ be a commutative local
ring. Then the following are equivalent:
\begin{enumerate}
\item [(1)]{\it Every $A\in M_2(R)$ with invertible trace is strongly rad-clean.} \vspace{-.5mm}
\item [(2)]{\it For any $\lambda\in J(R), \mu\in U(R)$, the quadratic equation $x^2+\mu x+\lambda=0$ is solvable.}
\vspace{-.5mm}
\end{enumerate}
\end{thm}
\begin{proof} $(1)\Rightarrow (2)$ Let $\lambda\in J(R), \mu\in U(R)$. Choose $A=\left(
\begin{array}{cc}
0&-\lambda\\
1&-\mu\end{array} \right)$. Then $A\in M_2(R)$ is strongly rad-clean. Obviously, $A\not\in GL_2(R)$ and $A\not\in
J\big(M_2(R)\big)$. In view of Theorem \ref{Theorem 4}, the quadratic
equation $\chi(A)=x^2+\mu x+\lambda=0$ is solvable.
$(2)\Rightarrow (1)$ Let $A= \left(
\begin{array}{cc}
a&b\\
c&d \end{array} \right)$ with $tr(A)\in U(R)$. Case I. $c\in
U(R)$. Then
$$diag(c,1)B_{12}(-ac^{-1})AB_{12}(ac^{-1})diag(c^{-1},1)=\left(
\begin{array}{cc}
0&\lambda\\
1&\mu\end{array} \right)$$ for some $\lambda,\mu\in R$. If
$\lambda\in U(R)$, then $A\in GL_2(R)$, and so it is strongly rad-clean.
If $\lambda\in J(R)$, then $\mu\in U(R)$. Then $A$ is strongly rad-clean by Theorem \ref{Theorem 5}.\\ Case II. $b\in U(R)$. Then
$$\left(
\begin{array}{cc}
0&1\\
1&0\end{array} \right)A\left(
\begin{array}{cc}
0&1\\
1&0\end{array} \right)=\left(
\begin{array}{cc}
d&c\\
b&a\end{array} \right),$$ and the result follows from Case I.\\
Case III. $c, b\in J(R), a-d\in U(R)$. Then
$$B_{21}(-1)AB_{21}(1)= \left( \begin{array}{cc}
a+b & b \\ c-a+d-b & d-b \\
\end{array} \right)$$ where $a-d+b-c\in U(R)$; hence
the result follows from Case I. \\Case IV. $c, b\in J(R), a, d\in
U(R)$. Then
$$B_{21}(-ca^{-1})A= \left(
\begin{array}{cc}
a&b\\
0&d-ca^{-1}b\end{array} \right);$$ hence, $A\in GL_2(R)$.\\ Case
V. $c, b, a, d\in J(R)$. Then $A\in J\big(M_2(R)\big)$, and so
$tr(A)\in J(R)$, a contradiction.
Therefore $A\in M_2(R)$ with invertible trace is strongly rad-clean.\end{proof}
\begin{ex} \label{Example 11} \em Let $R={\mathbb Z}_4$. Then $R$ is a
commutative local ring. For any $\lambda\in J(R), \mu\in U(R)$, we
directly check that the quadratic equation $x^2+\mu x+\lambda=0$
is solvable. Applying Theorem \ref{Theorem 10}, every $2\times 2$ matrix over
$R$ with invertible trace is strongly rad-clean. In this case, $M_2(R)$ is not strongly rad-clean.\end{ex}
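The direct check mentioned in this example is a short computation; the following sketch runs over all $\lambda\in J({\mathbb Z}_4)=\{0,2\}$ and all units $\mu\in\{1,3\}$.
\begin{verbatim}
# Check over Z/4Z: x^2 + mu*x + lam = 0 is solvable
# for every lam in J(R) and every unit mu.
ok = all(any((x*x + mu*x + lam) % 4 == 0 for x in range(4))
         for lam in (0, 2) for mu in (1, 3))
print(ok)  # expect: True
\end{verbatim}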
\begin{ex} \label{Example 12} Let $R=\widehat{{\mathbb Z}}_2$ be the
ring of 2-adic integers. Then every $2\times 2$ matrix with
invertible trace is strongly rad-clean.
\end{ex}
\begin{proof} Obviously, $R$ is a commutative local ring. Let
$\lambda\in J(R), \mu\in U(R)$. Then $\left(
\begin{array}{cc}
0&-\lambda\\
1&-\mu
\end{array}
\right)\in M_2(R)$ is strongly clean, by [5, Theorem 3.3]. Clearly, $det(A)=\lambda\in J(R)$. As $R/J(R)\cong {\mathbb Z}_2$, we see that
$\mu\in -1+J(R)$, and then $det(A-I_2)=\lambda+\mu+1\in J(R)$. In light of [5, Lemma 3.1],
the equation $x^2+\mu x+\lambda=0$ is solvable. This completes the proof, by Theorem \ref{Theorem 10}.
\end{proof}
\vskip4mm We note that matrices with non-invertible trace over commutative local rings may fail to be strongly rad-clean. For instance, $A=\left(
\begin{array}{cc}
1&1\\
1&1\end{array} \right)\in M_2(\widehat{{\mathbb Z}}_2)$ is not
strongly rad-clean.
\section{Strongly rad-clean matrices over power series over
commutative local rings}
We now investigate strongly rad-clean
matrices over power series over commutative local rings.
\begin{lem} \label{Lemma 13} Let $R$ be a commutative ring,
and let $A(x_1,\cdots,x_n)\in M_2\big(R[[x_1,$ $\cdots,x_n]]\big)$. Then the following
hold:
\begin{enumerate}
\item [(1)]{\it $A(x_1,\cdots,x_n)\in GL_2\big(R[[x_1,\cdots,x_n]])$ if and only if $A(0,\cdots ,0)\in GL_2(R)$.}
\vspace{-.5mm}
\item [(2)]{\it $A(x_1,\cdots,x_n)\in J\big(M_2(R[[x_1,\cdots,x_n]])\big)$ if and only if $A(0,\cdots ,0)\in J\big(M_2(R)\big)$.}
\vspace{-.5mm}
\end{enumerate}
\end{lem}
\begin{proof} $(1)$ It suffices to prove the case $n=1$. If $A(x_1)\in
GL_2\big(R[[x_1]])$, it is easy to verify that $A(0)\in GL_2(R)$.
Conversely, assume that $A(0)\in GL_2(R)$. Write $$A(x_1)= \left(
\begin{array}{cc}
\sum\limits_{i=0}^{\infty}a_ix_1^i&\sum\limits_{i=0}^{\infty}b_ix_1^i\\
\sum\limits_{i=0}^{\infty}c_ix_1^i&\sum\limits_{i=0}^{\infty}d_ix_1^i
\end{array}
\right),$$ where $A(0)= \left(
\begin{array}{cc}
a_0 &b_0 \\
c_0&d_0
\end{array}
\right).$ We note that the
determinant of $A(x_1)$ is $a_0d_0-c_0b_0+x_1f(x_1)$, which is a unit plus an
element of the radical of $R[[x_1]]$. Thus, $A(x_1)\in
GL_2\big(R[[x_1]])$, as required.
$(2)$ It is immediate from $(1)$.
\end{proof}
\begin{thm} \label{Theorem 14} Let $R$ be a commutative local
ring, and let $A(x_1,\cdots,x_n)\in M_2\big(R[[x_1,$ $\cdots,x_n]])$. Then the following are
equivalent:
\begin{enumerate}
\item [(1)]{\it $A(x_1,\cdots,x_n)\in M_2\big(R[[x_1,\cdots,x_n]])$ is strongly rad-clean.}
\vspace{-.5mm}
\item [(2)]{\it $A(x_1,\cdots,x_n)\in M_2\big(R[[x_1,\cdots,x_n]]/(x_1^{m_1}\cdots x_n^{m_n})\big)$ is strongly rad-clean.}\vspace{-.5mm}
\item [(3)]{\it $A(0,\cdots ,0)\in M_2(R)$ is strongly rad-clean.}
\vspace{-.5mm}
\end{enumerate}
\end{thm}
\begin{proof} $(1)\Rightarrow (2)$ and $(2)\Rightarrow (3)$ are obvious.
$(3)\Rightarrow (1)$ It will suffice to prove for $n=1$.
Set $x=x_1$. Clearly, $R[[x]]$ is a commutative local
ring. Since $A(0)$ is strongly rad-clean in $M_2(R)$, it follows from
Theorem \ref{Theorem 4} that $A(0)\in GL_2(R)$, or $A(0)\in
J\big(M_2(R)\big)$, or $\chi\big(A(0)\big)$ has a root $\alpha\in
J(R)$ and a root $\beta\in U(R)$. If $A(0)\in GL_2(R)$ or
$A(0)\in J\big(M_2(R)\big)$, in view of Lemma \ref{Lemma 13}, $A(x)\in
GL_2\big(R[[x]])$ or $A(x)\in J\big(M_2(R[[x]])\big)$. Hence, $A(x)\in
M_2\big(R[[x]]\big)$ is strongly rad-clean. Thus, we may assume
that $\chi\big(A(0)\big)=t^2+\mu t+\lambda$ has a root $\alpha\in
J(R)$ and a root $\beta\in U(R)$.
Write $\chi\big(A(x)\big)=t^2+\mu(x)t+\lambda(x)$ where
$\mu(x)=\sum\limits_{i=0}^{\infty}\mu_ix^i,
\lambda(x)=\sum\limits_{i=0}^{\infty}\lambda_ix^i\in R[[x]]$ and
$\mu_0=\mu,\lambda_0=\lambda$. Let $b_0=\alpha$. Since $\alpha+\beta=-\mu_0$ with $\alpha\in J(R)$ and $\beta\in U(R)$, we have $\mu_0\in U(R)$.
Hence, $2b_0+\mu_0\in U(R)$. Choose
$$\begin{array}{c}
b_1=(2b_0+\mu_0)^{-1}(-\lambda_1-\mu_1b_0),\\
b_2=(2b_0+\mu_0)^{-1}(-\lambda_2-\mu_1b_1-\mu_2b_0-b_1^2),\\
\vdots
\end{array}$$ Then $y=\sum\limits_{i=0}^{\infty}b_ix^i\in R[[x]]$
is a root of $\chi\big(A(x)\big)$. In addition, $y\in
J\big(R[[x]]\big)$ as $b_0\in J(R)$. Since $y^2+\mu(x)y+\lambda(x)=0$, we have
$\chi\big(A(x)\big)=(t-y)(t+y)+\mu (t-y)=(t-y)(t+y+\mu)$. Set
$z=-y-\mu$. Then $z\in U\big(R[[x]]\big)$ as $\mu\in
U\big(R[[x]]\big)$. Therefore $\chi\big(A(x)\big)$ has a root in
$J\big(R[[x]]\big)$ and a root in $U\big(R[[x]]\big)$. According
to Theorem \ref{Theorem 4}, $A(x)\in M_2\big(R[[x]])$ is strongly rad-clean,
as asserted.
\end{proof}
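The recursion for the coefficients $b_i$ in the above proof is effective and can be carried out mechanically. The following Python sketch implements it with the base ring modeled by rational numbers (so the invertibility of $2b_0+\mu_0$ is simply assumed); the test polynomial $\chi(t)=t^2-(1+x)t+x=(t-1)(t-x)$ is our own illustrative choice, for which the lifted root is $y=x$.
\begin{verbatim}
# Sketch: lifting a root of chi(t) = t^2 + mu(x)t + lam(x) in R[[x]]
# via the recursion for b_1, b_2, ... (R modeled by fractions).
from fractions import Fraction as F

def lift_root(mu, lam, b0, N):
    """mu, lam: coefficient lists of mu(x), lam(x); b0: root mod x."""
    b = [F(b0)]
    inv = 1 / (2 * b[0] + mu[0])      # (2 b_0 + mu_0)^{-1}, a unit
    for k in range(1, N):
        s = lam[k] if k < len(lam) else F(0)
        s += sum(b[i] * b[k - i] for i in range(1, k))
        s += sum((mu[i] if i < len(mu) else F(0)) * b[k - i]
                 for i in range(1, k + 1))
        b.append(-inv * s)
    return b

# chi(t) = t^2 - (1+x)t + x = (t-1)(t-x); the root b0 = 0 lifts to y = x.
mu, lam = [F(-1), F(-1)], [F(0), F(1)]
print(lift_root(mu, lam, 0, 5))   # expect [0, 1, 0, 0, 0]
\end{verbatim}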
\begin{cor} \label{Corollary 15} Let $R$ be a commutative
local ring. Then the following are equivalent:
\begin{enumerate}
\item [(1)]{\it Every $A\in M_2(R)$ with invertible trace is strongly rad-clean.} \vspace{-.5mm}
\item [(2)]{\it Every $A(x_1,\cdots,x_n)\in M_2\big(R[[x_1,\cdots,x_n]]\big)$ with invertible trace is strongly rad-clean.}
\vspace{-.5mm}
\end{enumerate}
\end{cor}
\begin{proof} $(1)\Rightarrow (2)$ Let $A(x_1,\cdots,x_n)\in M_2\big(R[[x_1,\cdots,x_n]]\big)$ with invertible
trace. Then $trA(0,\cdots ,0)\in U(R)$. By hypothesis, $A(0,\cdots ,0)\in M_2(R)$ is
strongly rad-clean. In light of Theorem \ref{Theorem 14}, $A(x_1,\cdots,x_n)\in
M_2\big(R[[x_1,\cdots,x_n]])$ is strongly rad-clean.
$(2)\Rightarrow (1)$ is obvious.
\end{proof}
\vskip10mm ACKNOWLEDGEMENT
\vskip4mm The authors are grateful to the referee for his/her
helpful comments and suggestions which led to a much improved paper.
\section{Introduction}
Let $K$ be a knot in the $3$-sphere $S^3$
with a tubular neighborhood $N(K)$.
Then the set of \textit{slopes} for $K$
(i.e., $\partial N(K)$-isotopy classes of simple loops on $\partial N(K)$)
is identified with $\mathbb{Q} \cup \{\infty \}$
using preferred meridian-longitude pair so that
a meridian corresponds to $\infty$.
A slope $\gamma$ is said to be \textit{integral} if
a representative of $\gamma$ intersects a meridian exactly once,
in other words, $\gamma$ corresponds to an integer under the above
identification.
In the following,
we denote by $(K; \gamma)$
the $3$-manifold obtained from $S^3$ by Dehn surgery
on a knot $K$ with slope $\gamma$,
i.e., by attaching a solid torus to $S^3-$int$N(K)$ in such a way
that $\gamma$ bounds a meridian disk of the filled solid torus.
If $\gamma$ corresponds to $r \in \mathbb{Q} \cup \{ \infty \}$,
then we identify $\gamma$ and $r$ and write $(K; r)$ for $(K; \gamma)$. \par
We denote by $\mathcal{L}$
the \textit{set of lens slopes}
$\{r \in \mathbb{Q}\ |\ \exists$
hyperbolic knot $K \subset S^3$ such that $(K;r)$ is a lens space$\}$,
where $S^3$ and $S^2 \times S^1$ are also considered as lens spaces.
Then the cyclic surgery theorem \cite{CGLS} implies that
$\mathcal{L} \subset \mathbb{Z}$.
A result of Gabai \cite[Corollary 8.3]{Ga} shows that
$0 \not\in \mathcal{L}$,
a result of Gordon and Luecke \cite{G-Lu} shows
that $\pm 1 \not\in \mathcal{L}$.
In \cite{KM} Kronheimer and Mrowka
prove that $\pm 2 \not\in \mathcal{L}$.
Furthermore,
a result of Kronheimer, Mrowka, Ozsv\'ath and Szab\'o \cite{KMOS}
implies that $\pm 3, \pm 4 \not\in \mathcal{L}$.
Besides,
Berge \cite[Table of Lens Spaces]{Berge}
suggests that
if $n \in \mathcal{L}$,
then $|n| \ge 18$ and
not every integer $n$ with $|n| \ge 18$ appears
in $\mathcal{L}$.
Fintushel and Stern \cite{FS} had shown that $18$-surgery on
the $(-2, 3, 7)$ pretzel knot yields a lens space.
\textit{Which slope $($rational number$)$ can or cannot appear in
the set of Seifert fibered slopes
$\mathcal{S} = \{r \in \mathbb{Q}\ |\ \exists$
hyperbolic knot $K \subset S^3$ such that $(K;r)$
is Seifert fibered$\}$?}
It is conjectured that $\mathcal{S} \subset \mathbb{Z}$ \cite{Go}.
The purpose of this paper is to prove:
\begin{THM}
\label{Seifert slopes}
For each integer $n \in \mathbb{Z}$,
there exists a tunnel number one,
hyperbolic knot $K_n$ in $S^3$
such that $(K_n; n)$ is a small Seifert fiber space
$($i.e., a Seifert fiber space over $S^2$
with exactly three exceptional fibers$)$.
\end{THM}
\begin{rem}
Since $K_n$ has tunnel number one,
it is embedded in a genus two Heegaard surface of $S^3$ and
strongly invertible \cite[Lemma 5]{Mor}.
See \cite[Question 3.1]{MMM}.
\end{rem}
Theorem \ref{Seifert slopes},
together with the previous known results,
shows:
\begin{CO}
\label{LZS}
$\mathcal{L} \subsetneqq \mathbb{Z} \subset \mathcal{S}$.
\end{CO}
\begin{rems}$\phantom{99}$
(1)\qua For the \textit{set of reducing slopes}
$\mathcal{R} = \{r \in \mathbb{Q}\ |\ \exists$
hyperbolic knot $K \subset S^3$ such that $(K;r)$ is reducible$\}$,
Gordon and Luecke \cite{G-Lu1} have shown that
$\mathcal{R} \subset \mathbb{Z}$.
In fact, the cabling conjecture \cite{GS} asserts that
$\mathcal{R} = \emptyset$.
(2)\qua For the \textit{set of toroidal slopes}
$\mathcal{T} = \{r \in \mathbb{Q}\ |\ \exists$
hyperbolic knot $K \subset S^3$ such that $(K;r)$ is toroidal$\}$,
Gordon and Luecke \cite{G-Lu2} have shown that
$\mathcal{T} \subset \mathbb{Z}/2$ (integers or half integers).
In \cite{Tera},
Teragaito shows that
$\mathbb{Z} \subset \mathcal{T}$ and conjectures that
$\mathcal{T} \subsetneqq \mathbb{Z}/2$.
\end{rems}
\textbf{Acknowledgements}\qua
We would like to thank the referee for careful reading
and useful comments. \newline
The first author was partially
supported by Grant-in-Aid for
Scientific Research (No.\ 15540095),
The Ministry of Education, Culture, Sports,
Science and Technology, Japan. \newline
\section{Hyperbolic knots with Seifert fibered surgeries}
Our construction is based on an example of
a longitudinal Seifert fibered surgery given in \cite{IMS}.
Let $k \cup c$ be a $2$-bridge link given in
Figure \ref{fig:knotKn},
and let $K_n$ be a knot obtained from $k$ by
$\frac{1}{-n+4}$-surgery along $c$.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.4\linewidth]{knotKn.eps}
\caption{$K_n$}
\label{fig:knotKn}
\end{center}
\end{figure}
We shall say that a Seifert fiber space is of
\textit{type} $S^2(n_1, n_2, n_3)$
if it has a Seifert fibration over
$S^2$ with three exceptional fibers of
indices $n_1, n_2$ and $n_3$ $(n_i \ge 2)$.
Since $K_4$ is unknotted,
$(K_4; 4)$ is a lens space $L(4, 1)$.
For the other $n$'s,
we have:
\begin{LM}
\label{Seifert}
$(K_n; n)$ is a small Seifert fiber space of type
$S^2(3, 5, |4n-15|)$ for any integer $n \ne 4$.
\end{LM}
\begin{proof}
Since the linking number of
$k$ and $c$ is one (with suitable orientations),
$(K_n ; n)$ has surgery descriptions
as in Figure \ref{fig:description}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=1.0\linewidth]{description.eps}
\caption{Surgery descriptions of $(K_n; n)$}
\label{fig:description}
\end{center}
\end{figure}
Let us take the quotient by the strong inversion of $S^3$
with an axis $L$ as shown in Figure \ref{fig:msequence1}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.9\linewidth]{msequence1.eps}
\nocolon\caption{}
\label{fig:msequence1}
\end{center}
\end{figure}
Then we obtain a branch knot $b'$ which is the image of the axis $L$.
The Montesinos trick (\cite{Mon}, \cite{Bl}) shows that
$-\frac{1}{2}, -1, \frac{3n-11}{-n+4}$ and
$1$-surgery on
$t_1, t_2, c$ and $k$ in the upstairs correspond to
$-\frac{1}{2}, -1, \frac{3n-11}{-n+4}$ and
$1$-untangle surgery on $b'$ in the downstairs,
where an \textit{$r$-untangle surgery} is
a replacement of $\frac{1}{0}$-untangle by
$r$-untangle.
(We adopt Bleiler's convention \cite{Ble}
on the parametrization of rational tangles.)
These untangle surgeries convert $b'$ into
a link $b$ (Figure \ref{fig:msequence1}).
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.9\linewidth]{msequence2.eps}
\caption{Continued from Figure \ref{fig:msequence1}}
\label{fig:msequence2}
\end{center}
\end{figure}
Following the sequence of isotopies in
Figures \ref{fig:msequence1} and \ref{fig:msequence2},
we obtain a Montesinos link
$M(\frac{2}{5}, -\frac{2}{3}, \frac{n-4}{4n-15})$.
Since $(K_n ; n)$ is the double branched cover of $S^3$ branched over
the Montesinos link $M(\frac{2}{5}, -\frac{2}{3}, \frac{n-4}{4n-15})$,
$(K_n; n)$ is a Seifert fiber space of type
$S^2(3, 5, |4n-15|)$ as desired.
\end{proof}
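As a consistency check on this computation, note that $n$-surgery on a knot in $S^3$ yields a manifold whose first homology has order $|n|$ (with $0$ read as infinite $H_1$), while for the double branched cover of a Montesinos link $M(\beta_1/\alpha_1, \beta_2/\alpha_2, \beta_3/\alpha_3)$ the standard formula gives $|H_1|=|\alpha_1\alpha_2\alpha_3\sum_i\beta_i/\alpha_i|$. The following short Python sketch confirms that the two agree for the fractions obtained above (exact rational arithmetic; the tested range of $n$ is arbitrary).
\begin{verbatim}
# Consistency check: |H_1| of the double branched cover of
# M(2/5, -2/3, (n-4)/(4n-15)) equals |n|, as n-surgery requires.
from fractions import Fraction as F

for n in range(-50, 51):
    if n == 4:                 # K_4 is unknotted; (K_4; 4) = L(4,1)
        continue
    e = F(2, 5) - F(2, 3) + F(n - 4, 4 * n - 15)
    assert abs(5 * 3 * (4 * n - 15) * e) == abs(n), n
print("H_1 orders agree with |n|")
\end{verbatim}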
\begin{LM}
\label{hyperbolic}
The knot $K_n$ is hyperbolic if
$n \ne 3, 4, 5$.
\end{LM}
\begin{proof}
Note that the $2$-bridge link given in Figure \ref{fig:knotKn}
is not a $(2, p)$-torus link,
and hence by \cite{Me}
it is a hyperbolic link.
If $n \ne 3, 4, 5$,
then $|-n+4|>1$ and
it follows from
\cite[Theorem 1]{AMM1} (also \cite[Theorem 1.2]{AMM3})
that $K_n$ is a hyperbolic knot.
See also
\cite[Corollary A.2]{G-Lu3},
\cite[Theorem 1.2]{MM3} and
\cite[Theorem 1.1]{AMM2}.
\end{proof}
\begin{rem}
It follows from \cite{Ma}, \cite{KMS}
that $K_n$ is a nontrivial knot except when $n = 4$.
An experiment using Weeks' computer program
``SnapPea" \cite{W} suggests that
$K_3$ and $K_5$ are hyperbolic,
but we will not use this experimental results.
\end{rem}
\begin{LM}
\label{tunnel}
The knot $K_n$ has tunnel number one for any integer $n \ne 4$.
\end{LM}
\begin{proof}
Since the link $k \cup c$ is a two-bridge link,
the tunnel number of $k \cup c$ is one
with unknotting tunnel $\tau$:
a regular neighborhood $N(k \cup c \cup \tau)$ is a genus two
handlebody and $S^3 - \mathrm{int}N(k \cup c \cup \tau)$ is also
a genus two handlebody,
see Figure \ref{fig:tunnel}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.5\linewidth]{tunnel.eps}
\nocolon\caption{}
\label{fig:tunnel}
\end{center}
\end{figure}
Then the general fact below
(in which $k \cup c$ is not necessarily a two-bridge link)
shows that the tunnel number of $K_n$ is less than or equal to one.
Since our knot $K_n$ $(n \ne 4)$ is knotted in $S^3$,
the tunnel number of $K_n$ is one.
\end{proof}
\begin{CL}
\label{tunnel1}
Let $k \cup c$ be a two component link in $S^3$
which has tunnel number one.
Assume that $c$ is unknotted in $S^3$.
Then every knot obtained from $k$ by twisting along $c$
has tunnel number at most one.
\end{CL}
\begin{proof}
Let $\tau$ be an unknotting tunnel and
$V$ a regular neighborhood of $k \cup c \cup \tau$ in $S^3$;
$V$ is a genus two handlebody.
Since $\tau$ is an unknotting tunnel for $k \cup c$,
by definition,
$W = S^3 - \mathrm{int}V$ is also
a genus two handlebody.
Take a small tubular neighborhood $N(c) \subset \mathrm{int}V$
and perform $-\frac{1}{n}$-surgery on $c$ using $N(c)$.
Then we obtain
a knot $k_n$ as the image of $k$ and
obtain a genus two handlebody $V(c; -\frac{1}{n})$.
Note that $V(c; -\frac{1}{n})$ and $W$ define
a genus two Heegaard splitting of $S^3$,
see Figure \ref{fig:arc},
where $c_n^*$ denotes the core of the filled solid torus.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.88\linewidth]{arc.eps}
\nocolon\caption{}
\label{fig:arc}
\end{center}
\end{figure}
Then it is easy to see that an arc $\tau_n$ given by Figure \ref{fig:arc}
is an unknotting tunnel for $k_n$ as desired.
\end{proof}
Now we are ready to prove Theorem \ref{Seifert slopes}.
Lemmas \ref{Seifert}, \ref{hyperbolic} and \ref{tunnel}
show that our knots $K_n$ enjoy the required properties,
except for $n = 3, 4, 5$.
To prove Theorem \ref{Seifert slopes},
we find hyperbolic knots $K'_n$ so that
$(K'_n; n)$ is Seifert fibered for $n = 3, 4, 5$
(instead of showing that $K_3$, $K_5$ are hyperbolic).
As the simplest way,
let $K'_3$, $K'_4$ and $K'_5$
be the mirror image of
$K_{-3}$, $K_{-4}$ and $K_{-5}$,
respectively.
Since $K_{-3}$, $K_{-4}$ and $K_{-5}$ are tunnel number one,
hyperbolic knots
by Lemmas \ref{hyperbolic} and \ref{tunnel},
their mirror images
$K'_3$, $K'_4$ and $K'_5$ are also tunnel number one,
hyperbolic knots.
It is easy to observe that
$(K'_3; 3)$
(resp. $(K'_4; 4)$, $(K'_5; 5)$)
is the mirror image of $(K_{-3}; -3)$
(resp. $(K_{-4}; -4)$, $(K_{-5}; -5)$).
By Lemma \ref{Seifert},
$(K_{-3}; -3)$, $(K_{-4}; -4)$ and $(K_{-5}; -5)$ are Seifert fibered,
and hence $(K'_3; 3)$, $(K'_4; 4)$ and $(K'_5; 5)$ are also Seifert fibered.
Redefining $K_n$ to be $K'_n$ for $n = 3, 4, 5$,
we finish the proof of Theorem \ref{Seifert slopes}.
\hspace*{\fill} $\qed$
\section{Identifying exceptional fibers}
In \cite{MM3},
Miyazaki and Motegi conjectured that
if $K$ admits a Seifert fibered surgery,
then there is a trivial knot $c \subset S^3$ disjoint from $K$
which becomes a Seifert fiber in the resulting Seifert fiber space,
and verified the conjecture for several Seifert fibered surgeries
\cite[Section 6]{MM3}, see also \cite{EM}.
Furthermore,
computer experiments via ``SnapPea" \cite{W} suggest that
such a knot $c$ is realized by a short closed geodesic in
the hyperbolic manifold $S^3 - K$,
for details see \cite[Section 9]{MM3}, \cite{Mot2}.
In this section,
we verify the conjecture for Seifert fibered surgeries
given in Theorem \ref{Seifert slopes}.
Recall that $K_n$ is obtained from $k$ by
$\frac{1}{-n+4}$-surgery on the trivial knot $c$
(i.e., $(n-4)$-twist along $c$), see Figure \ref{fig:knotKn}.
Denote by $c_n$ the core of the filled solid torus.
Then $K_n \cup c_n$ is a link in $S^3$
such that
$c_n$ is a trivial knot.
\begin{LM}
\label{fiber}
After $n$-surgery on $K_n$,
$c_n$ becomes an exceptional fiber of index $|4n-15|$
in the resulting Seifert fiber space $(K_n; n)$.
\end{LM}
\begin{proof}
Following the sequences given by
Figures \ref{fig:msequence1} and \ref{fig:msequence2},
we have a Montesinos link with three arcs
$\gamma$, $\tau_1$ and $\tau_2$ as in Figure \ref{fig:positions},
where $n = 1$ in the final Montesinos link,
and $\gamma$, $\tau_1$, $\tau_2$ and $\kappa$ are the
images of $c$, $t_1$, $t_2$ and $k$, respectively.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=1.0\linewidth]{positions.eps}
\caption{Positions of exceptional fibers}
\label{fig:positions}
\end{center}
\end{figure}
From Figure \ref{fig:positions} we recognize that
$t_1, t_2$ and $c$ become exceptional fibers of
indices $5$, $3$ and $|4n-15|$, respectively
in $(K_n; n)$.
\end{proof}
For $n \ne 3, 4, 5$,
$c_n$ becomes an exceptional fiber of index $|4n-15|$,
which is the unique maximal index,
in $(K_n; n)$.
Experiments via ``SnapPea" \cite{W}
suggest that $c_n$ is a shortest closed geodesic in $S^3 - K_n$ $(n \ne 3, 4, 5)$.
For sufficiently large $|n|$,
hyperbolic Dehn surgery theorem \cite{T1}, \cite{T2}
shows that
$c_n$ is the unique shortest closed geodesic in $S^3 - K_n$. \par
Let us assume that $n = 3, 4, 5$.
Then we have put $K_n$ as the mirror image of $K_{-n}$ in
the proof of Theorem \ref{Seifert slopes}.
Let $k' \cup c'$ be the mirror image of the link $k \cup c$.
Then $K_n$ is obtained also from $k'$ by
$\frac{1}{-n-4}$-surgery on $c'$ (i.e., $(n+4)$-twist along $c'$);
we denote the core of the filled solid torus by $c'_n$.
Note that there is an orientation reversing diffeomorphism
from $(K_{-n}; -n)$ to $(K_n; n)$ sending
$c_{-n}$ (regarded as a fiber in $(K_{-n}; -n)$)
to $c'_n$ (regarded as a fiber in $(K_n; n)$).
Thus the above observation implies that
$c'_n$ becomes
an exceptional fiber of index $|4n+15|$,
which is the unique maximal index,
in $(K_n; n)$ $(n = 3, 4, 5)$.
\section{Introduction}
\label{sec:intro}
Semiconductor lasers with narrow linewidth and wide tunability are of central interest in photonic applications where controlling the optical phase is essential, for instance for microwave photonics ~\cite{marpaung_2019NP}, optical beamforming networks~\cite{zhuang_2010JLT}, coherent optical communications~\cite{zhang_2009PTL}, light detection and ranging (LIDAR)~\cite{koroshetz_2005}, optical sensing~\cite{he_2011NT}, or precision metrology and timing, including GPS systems ~\cite{hemmerich_1990OC, jiang_2011NP,newman_2019O}. Of particular interest are narrow linewidth semiconductor lasers for pumping Raman and Brillouin lasers~\cite{spillane_2002N,li_2017O,gundavarapu_2019NPhot}, integration into functional photonic circuits, to serve as light engines, such as for electrically driven and fully integrated Kerr frequency combs~\cite{stern_2018N, raja_2019NatCom}.
A measure for a laser's intrinsic phase stability is the intrinsic linewidth (Schawlow-Townes limit), which can only be narrowed via increasing the photon lifetime of the laser cavity, or via increasing the laser power ~\cite{schawlow_1958PR, lax_1968PR}. However, in monolithic diode lasers both are problematic due to linear and nonlinear loss. The intrinsic waveguiding loss in these semiconductor amplifiers is high, which limits the photon lifetime. Furthermore, the spectral filtering circuitry required for single-frequency oscillation causes additional loss, while efficient output coupling decreases the lifetime further. Also, at high laser power nonlinear loss occurs. This leads to large intrinsic linewidths typically in the range of a MHz~\cite{akulova_2002JSTQE}.
Many orders of magnitude smaller intrinsic linewidths have been achieved with hybrid and heterogeneously integrated diode lasers, ultimately reaching into the sub-kHz-range~\cite{boller_2019Phot}. In all these approaches the cavity is extended with additional waveguide circuitry fabricated from a different material platform selected for low loss. For extending the cavity length and maintaining single longitudinal mode oscillation, spectral filtering has mostly been based on microring resonators employing Si waveguides~\cite{kita_2015APL,hulme_2013OE,kobayashi_2015JLT, tran_2020JSTQE}, SiON~\cite{matsumoto_2010OFCC}, SiO$_2$~\cite{debregeas_2014ISLC} and Si$_3$N$_4$~\cite{oldenbeuving_2013LPL, fan_2014SPIE, fan_2017CLEO}, thereby reducing the intrinsic linewidth from hundreds of~kilohertz~\cite{hulme_2013OE, fan_2017CLEO} to 220~Hz~\cite{tran_2020JSTQE}. Silicon waveguides bear the advantage of heterogeneous integration~\cite{santis_2018PNAS, huang_2019O}. However, beyond certain intra-cavity intensities and laser powers, using silicon limits the lowest achievable linewidth through nonlinear loss~\cite{vilenchik_2015PROC, santis_2018PNAS}, specifically, due to two-photon absorption across the relatively small bandgap of silicon \cite{kuyken_2017NANOPHOT}. Avoiding high intensities is difficult when having to select a single longitudinal mode within the wide semiconductor gain spectrum, because high-finesse filtering for strong side mode suppression is associated with resonantly enhanced power. Relying on external amplification and operating the diode laser at low power is not a viable route, because the linewidth increases inversely with lowering the laser output~\cite{schawlow_1958PR}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\linewidth]{./laser_overview}
\caption{\label{fig:hybrid_laser} Schematic view of the hybrid laser comprising an InP gain section and a Si$_{3}$N$_{4}$ feedback circuit that extends the cavity length physically via a spiral, with a length of 33~mm, and optically via three ring resonators. The cavity mirrors are formed by the HR coating on the back facet of the gain section and the Sagnac mirror. The combined total optical length is significantly larger than the optical length of the solitary semiconductor chip.}
\end{figure}
To overcome these inherent limitations of the silicon platform, we use a wide bandgap, Si$_3$N$_4$ waveguide circuit, for which two-photon absorption is negligible~\cite{moss_2013NatPhot}, coupled to an InP gain section to realize a hybrid integrated semiconductor laser with an intrinsic linewidth as low as 40~Hz. This is achieved by realizing a laser cavity of long photon lifetime, in spite of almost 100\% passive roundtrip loss, and in spite of high intracavity intensity. A scheme of the laser is displayed in Fig.~\ref{fig:hybrid_laser}, comprising an InP semiconductor amplifier and a dielectric, low-loss silicon nitride waveguide feedback circuit for cavity length extension. In this particular design, optical cavity length extension is obtained by a physical increase of length via a 33-mm long spiral in combination with an optical increase via resonant excitation of intracavity microring resonators. The end mirrors are the reflection at the back facet of the gain chip and the Sagnac loop mirror, meaning that the light passes the microring resonators twice per roundtrip. Narrow linewidth is achieved with three basic considerations. The first is providing a long photon lifetime already in a single roundtrip through a low-loss and long extension circuit. This decouples the laser cavity photon lifetime from intrinsically high loss in the remaining parts of the cavity, specifically, in the semiconductor amplifier, but also from loss resulting from coupling between different waveguide platforms, and due to strong output coupling for increased power efficiency. Second, we exploit low propagation loss in the cavity extension to implement single-mode resolved spectral filtering already in a single roundtrip through the extension. This imposes single-mode oscillation with high side-mode suppression, which enables adjusting for stable laser operation at lowest linewidth without spectral mode hops. Third, to prevent nonlinear loss from compromising the photon lifetime, we use a wide-bandgap dielectric waveguide platform for laser cavity extension and restrict high-finesse spectral filtering solely to the dielectric part of the cavity. Thereby the laser linewidth can be decreased inversely with increasing the laser output.
\section{Conditions for narrow linewidth}
\label{sec:conditions}
To illustrate the key ingredients in our approach we recall the main conditions to induce narrow linewidth in extended cavity single-mode diode lasers~\cite{patzak_1983EL, henry_1986JLT, ujihara_1984JQE, bjork_1987JQE, koch_1990JLT}. The first condition is a long photon lifetime of the passive cavity because this increases the phase memory time of the laser resonator. If the total roundtrip loss can be reduced to below a few percent, the photon lifetime can be extended via multiple roundtrips in a short resonator~\cite{santis_2018PNAS}. In this case, due to the large free spectral range of short resonators, lower-finesse intracavity spectral filtering is sufficient for achieving single-mode oscillation. However, this approach is usually hard to realize due to intrinsically high passive roundtrip loss in semiconductor lasers.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\linewidth]{./laser_schematic}
\caption{\label{fig:hybrid_laser_schematic} Schematic representation of a laser with an extended external cavity. The gain section has an intrinsic loss, $\alpha_{\textrm{i}}$, and gain, $g$, per unit length. Further, $L_{\textrm{g}}$ is the length of the gain section, $R_{\textrm{b}}$ is the back reflectivity and $T_{\textrm{c}}$ is the mode coupling efficiency at the interface. The passive gain section provides a total effective roundtrip reflectivity $R_{\textrm{i}}$. The feedback chip provides a low propagation loss, $\alpha_{\textrm{f}}$ ($\ll \alpha_{\textrm{i}}$), a long effective length, $L_{\textrm{f}}$, an end mirror with reflectivity $R_{\textrm{s}}$ and a spectral filtering with width $\Delta\nu_{\textrm{f}}$ to ensure single-mode oscillation, that are all combined in a single reflectivity $R_{\textrm{f}}=|r_{\textrm{f}}(\nu)|^2$, with $r_{\textrm{f}}(\nu)$ the total complex amplitude reflectivity of the feedback circuit. The large $L_{\textrm{f}}$ dominates the total length of the laser cavity and is responsible for increasing the photon lifetime and narrowing the laser linewidth, even in the presence of high intrinsic loss $\alpha_{\textrm{i}}$ (\textit{i.e.}, low $R_{\textrm{i}}$).}
\end{figure}
Our approach provides a long photon lifetime in spite of high passive roundtrip loss, by extending the laser cavity with a long feedback arm as displayed in Fig.~\ref{fig:hybrid_laser_schematic}. The laser comprises a gain section of length $L_{\textrm{g}}$ with intrinsic loss $\alpha_{\textrm{i}}$ and gain $g$ per unit length and a feedback arm of length $L_{\textrm{f}}$ having a propagation loss $\alpha_{\textrm{f}}$ per unit length. We define $L^{\textrm{(o)}}= n_{\textrm{g}} L$ as the group-index weighted optical length corresponding to a waveguide of length $L$ with effective group index $n_{\textrm{g}}$. The feedback arm also contains a narrow spectral filter with bandwidth $\Delta\nu_\textrm{f}$ to enable singe-mode oscillation. The end mirrors of the cavity have reflectances $R_{\textrm{b}}$ and $R_{\textrm{s}}$ through which a power $P_b$ and $P_f$ is extracted, respectively, from the laser cavity. The mode coupling at the interface between the gain section and feedback arm results in a transmittance $T_{\textrm{c}}$. To illustrate how the photon lifetime of the passive laser cavity changes with the length of the feedback arm, we assume, for simplicity, that $T_{\textrm{c}}=1$, \textit{i.e.}, we assume perfect coupling between the two sections, and that all microrings, that constitute the narrow band optical filter, are tuned to be perfectly resonant at the laser wavelength. Under these conditions, the photon lifetime $\tau_p$ is given by~\cite{coldren_2012}
\begin{equation}
\frac{1}{\tau_p}= \frac{1}{L_{\textrm{g}}+L_{\textrm{f}}}\left(\alpha_{\textrm{i}}v_{\textrm{g,i}} L_{\textrm{g}} +\alpha_{\textrm{f}}v_{\textrm{g,f}} L_{\textrm{f}} - \frac{1}{2}\left\langle v_{\textrm{g}} \right\rangle\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})\right),
\label{eq:photon_lifetime}
\end{equation}
where $v_\textrm{g,i}=c/n_\textrm{g,i}$ and $v_\textrm{g,f}=c/n_\textrm{g,f}$ are the effective group velocities of the gain and feedback section, respectively, with $c$ the speed of light in vacuum, $\alpha_m= -\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})/2(L_{\textrm{g}}+L_{\textrm{f}})$ is the distributed mirror loss and $\left\langle v_{\textrm{g}} \right\rangle = (v_{\textrm{g,i}} L_{\textrm{g}} +v_{\textrm{g,f}} L_{\textrm{f}})/(L_{\textrm{g}} +L_{\textrm{f}})$ is the length weighted average group velocity of the propagating optical mode. Taking as typical values $R_{\textrm{b}}=0.9$, $R_{\textrm{s}}=0.8$, $\alpha_{\textrm{i}}=1600$~m$^{-1}$, $n_\textrm{g,i}=3.6$, $L_{\textrm{g}}=1$ mm and $n_\textrm{g,f}=1.715$, Fig.~\ref{fig:photon_lifetime} shows the calculated photon lifetime versus $L_{\textrm{f}}$ for a nominal propagation loss of $\alpha_{\textrm{f}}=2.3$ m$^{-1}$ (0.1~dB/cm) and for losses that are a factor of 5 smaller and larger, for comparison. Figure~\ref{fig:photon_lifetime} shows that for very small extension of the laser cavity, $L_{\textrm{f}} \ll L_{\textrm{g}}$, the photon lifetime is a constant. In this regime, $1/\tau_p\approx v_{\textrm{g,i}}\left(\alpha_{\textrm{i}} - \frac{1}{2L_{\textrm{g}}}\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})\right)$, which is a constant independent of $L_{\textrm{f}}$. As $L_{\textrm{f}}$ increases, the photon lifetime starts to increase linearly with $L_{\textrm{f}}$ independent of the propagation loss $\alpha_{\textrm{f}}$ of the waveguide. In this regime $1/\tau_p\approx (1/L_{\textrm{f}})\left(\alpha_{\textrm{i}}v_{\textrm{g,i}} L_{\textrm{g}} - \frac{1}{2}v_{\textrm{g,f}}\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})\right) $ as $L_{\textrm{f}} \gg L_{\textrm{g}}$ and still $\alpha_{\textrm{f}} v_{\textrm{g,f}} L_{\textrm{f}} \ll \alpha_{\textrm{i}} v_{\textrm{g,i}} L_{\textrm{g}} - \frac{1}{2}v_{\textrm{g,f}}\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})$. Indeed, the photon lifetime is independent of $\alpha_{\textrm{f}}$ in this regime (see Fig.~\ref{fig:photon_lifetime}). This means that in stationary state the gain coefficient of the hybrid laser only weakly depends on the propagation loss of the feedback section and that the amount of spontaneous emission, which is the source for the intrinsic linewidth, is approximately constant when increasing $L_{\textrm{f}}$. The resulting increase in phase memory time corresponds to a narrowing of the intrinsic linewidth. Furthermore, if we define $R_\textrm{i}=R_{\textrm{b}} e^{-2\alpha_{\textrm{i}} L_{\textrm{g}}} T_{\textrm{c}}^2$ as the reflectance of the passive gain section and $R_{\textrm{f}}(\nu) = |r_{\textrm{f}}(\nu)e^{i\phi_{\textrm{f}}(\nu)}|^2$, with $r_{\textrm{f}}(\nu)e^{i\phi_{\textrm{f}}(\nu)}$ the frequency dependent effective complex amplitude reflectivity of the feedback arm (see Fig.~\ref{fig:hybrid_laser_schematic}), we have $R_{\textrm{f}} \gg R_{\textrm{i}}$ for typical values of $R_{\textrm{s}}$ used in our experiment. We define this as the strong feedback regime. If $L_{\textrm{f}}$ is still further increased, the total propagation loss will eventually become the dominant loss leading to a saturation of the photon lifetime. In this regime $1/\tau_p \approx v_{\textrm{g,f}}\alpha_{\textrm{f}}$. This is clearly visible in Fig.~\ref{fig:photon_lifetime} for the different propagation losses. For the nominal propagation loss, the photon lifetime saturates at about 2.5~ns for $L_{\textrm{f}} \gtrsim 1$~m. 
Note that we have kept the mirror reflectances constant and only changed the effective length of the feedback arm.
Including a more realistic $T_{\textrm{c}}$ does not change the above observations as a proper design of the hybrid laser will lead to $T_{\textrm{c}} \gtrsim 0.9$. Including the associated loss in the intrinsic loss $\alpha_{\textrm{i}}$ of the gain section only slightly increases this value. From this we conclude that extending the length of the feedback arm above a threshold, that, for a given outcouple loss, is set by the length of the gain section, will linearly increase the photon lifetime, and hence reduce the intrinsic linewidth, as long as the total loss, due to passive losses in the gain section and outcoupling, dominates the total propagation loss in the feedback circuit.
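The regimes discussed above follow directly from Eq.~(\ref{eq:photon_lifetime}) and are straightforward to reproduce. The following Python sketch evaluates $\tau_p$ for the parameter values quoted in the text (the sampled lengths are an arbitrary choice) and recovers, in particular, the saturation value $1/(v_{\textrm{g,f}}\,\alpha_{\textrm{f}})\approx 2.5$~ns for the nominal propagation loss.
\begin{verbatim}
# Sketch: photon lifetime of Eq. (1) for the stated parameter values.
import numpy as np

c = 2.998e8
R_b, R_s = 0.9, 0.8
alpha_i, n_gi, L_g = 1600.0, 3.6, 1e-3       # gain section
n_gf = 1.715                                  # feedback circuit
v_gi, v_gf = c / n_gi, c / n_gf

def tau_p(L_f, alpha_f=2.3):                  # alpha_f = 2.3 /m = 0.1 dB/cm
    v_avg = (v_gi * L_g + v_gf * L_f) / (L_g + L_f)
    rate = (alpha_i * v_gi * L_g + alpha_f * v_gf * L_f
            - 0.5 * v_avg * np.log(R_b * R_s)) / (L_g + L_f)
    return 1.0 / rate

for L_f in (1e-4, 33e-3, 1.0, 10.0):
    print(f"L_f = {L_f:7.4f} m  ->  tau_p = {tau_p(L_f) * 1e9:7.3f} ns")
\end{verbatim}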
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\linewidth]{./photon_lifetime}
\caption{\label{fig:photon_lifetime}Calculated photon lifetime $\tau_{\textrm{p}}$ as a function of the geometric length $L_{\textrm{f}}$ of the feedback arm for a typical propagation loss of $\alpha_{\textrm{f}}=2.3$ m$^{-1}$ (0.1~dB/cm) and for losses that are a factor of 5 smaller and larger, while $\alpha_{\textrm{g}}=1600$~m$^{-1}$. Other parameters are $R_{\textrm{b}}=0.9$, $R_{\textrm{s}}=0.8$, $n_\textrm{g,i}=3.6$, $n_\textrm{g,f}=1.715$, and $L_{\textrm{g}}=1$ mm.}
\end{figure}
Including off-resonance effects of the filter on the photon lifetime, and thus on the intrinsic linewidth of the laser, is more complicated, as the total feedback loss increases and becomes strongly frequency dependent, while at the same time the effective length of the feedback arm is reduced when the filter is detuned from the line center. To illustrate the effect on the intrinsic linewidth, we consider the whole feedback arm as a lumped reflectance $R_{\textrm{f}}(\nu)$ (see Fig.~\ref{fig:hybrid_laser_schematic}) and recall the expression for the intrinsic or Schawlow-Townes linewidth $\Delta\nu_{\textrm{ST}}$~\cite{fan_2017OE}
\begin{equation} \label{eq:schawlow_townes}
\Delta\nu_{\textrm{ST}}=\frac{h\nu}{4\pi}\frac{n_{\textrm{sp}} \gamma_{\textrm{tot}} \gamma_{\textrm{m}} F_{\textrm{P}}}{P_0 K(\nu)} \frac{1+\alpha_{\textrm{H}}^2}{F^2}.
\end{equation}
Here, $\gamma_{\textrm{m}}=-\frac{v_{\textrm{g}}}{2L_{\textrm{g}}} \textrm{ln}(R_{\textrm{b}} R_{\textrm{f}}(\nu))$ is the mirror loss rate and $\gamma_{\textrm{tot}}$ is the total loss rate, both assumed to be homogeneously distributed over the length of the gain section. Further, $h\nu$ is the photon energy of the laser oscillation, $P_0$ the output power at a particular output port, and $K(\nu)>1$ a weight factor accounting for power emitted from other ports. $F_{\textrm{P}} >1$ is the longitudinal Petermann factor, which increases the linewidth in case the reflective feedback ($R_{\textrm{b}}$ and $R_{\textrm{f}}(\nu)$) becomes very small~\cite{ujihara_1984JQE, henry_1986JLT}. $\alpha_{\textrm{H}}$ is the Henry linewidth enhancement factor due to gain-index coupling~\cite{henry_1982JQE} and $n_{\textrm{sp}}$ is the spontaneous emission enhancement factor that takes into account the reduction in inversion due to reabsorption by valence band electrons. Typically, $n_{\textrm{sp}}$ takes a value of around 2. Finally, $F=1+A+B$, where $A=\frac{1}{\tau_{\textrm{g}}} \frac{d}{d\nu}\phi_{\textrm{f}}(\nu)$, $B=\frac{\alpha_{\textrm{H}}}{\tau_{\textrm{g}}}\frac{d}{d\nu} \textrm{ln}(|r_{\textrm{f}}(\nu)|)$ and $\tau_{\textrm{g}}$ is the roundtrip time of the solitary gain section. At resonance, \textit{i.e.}, when the center of the filter's reflection peak coincides with the oscillation frequency, $B=0$ and $A$ is maximum and equal to the ratio of the optical length of the feedback arm to the optical length of the gain section~\cite{fan_2017OE, tran_2020JSTQE}. We find that $\Delta\nu_{\textrm{ST}}$ reduces with the inverse of $L_{\textrm{f}}^2$ when keeping the end mirror reflectances constant, in agreement with our discussion on the photon lifetime above. Off-resonance, $A$ decreases and $B$ increases on the rising edge of the filter peak whenever gain-index coupling is present, \textit{i.e.}, whenever $\alpha_{\textrm{H}}>0$. For a sufficiently sharp reflection peak, the maximum in $F$ is found for an oscillation frequency on the rising edge of the filter's reflection peak, slightly detuned from the line center, where spontaneous emission-induced index and frequency fluctuations are compensated by a steep, frequency dependent resonator loss~\cite{kazarinov_1987JQE, olsson_1987APL, koch_1990JLT, tran_2020JSTQE}.
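To make the quoted $L_{\textrm{f}}^{-2}$ scaling explicit (our rephrasing of the statements above, valid on resonance, \textit{i.e.}, for $B=0$ and constant end mirror reflectances),
\[
F = 1 + A = 1 + \frac{n_\textrm{g,f} L_{\textrm{f}}}{n_\textrm{g,i} L_{\textrm{g}}} \approx \frac{L_{\textrm{f}}^{\textrm{(o)}}}{L_{\textrm{g}}^{\textrm{(o)}}} \quad \textrm{for } L_{\textrm{f}}^{\textrm{(o)}} \gg L_{\textrm{g}}^{\textrm{(o)}}, \qquad \textrm{hence} \quad \Delta\nu_{\textrm{ST}} \propto F^{-2} \propto L_{\textrm{f}}^{-2}.
\]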
A full numerical analysis of Eq.~(\ref{eq:schawlow_townes}), including changes in $R_{\textrm{f}}(\nu)$ and, consequently, $K(\nu)$ and $F_{\textrm{P}}$ for a more complete off-resonance description of the linewidth behaviour, is still of limited value due to the underlying assumptions used in deriving Eq.~(\ref{eq:schawlow_townes}). For example, the spatial distribution of the inversion density varies notably along the axis of the gain section, which is due to the relatively high intrinsic loss $\alpha_{\textrm{i}}$. This means that the mean field approximation used in deriving Eq.~(\ref{eq:schawlow_townes}) is not well justified. While not suitable for accurate predictions of the intrinsic linewidth for our hybrid diode laser, Eqs.~(\ref{eq:photon_lifetime}) and (\ref{eq:schawlow_townes}) are still very useful to determine scaling properties and design strategies for lowering the intrinsic linewidth of the laser. We have shown that increasing the optical length of the feedback arm narrows the linewidth as long as the total propagation loss in the feedback arm is not the dominant loss in the laser, \textit{i.e.}, the total propagation loss, including nonlinear loss, is only a small fraction of the remaining loss. This means that the maximum obtainable photon lifetime is set by the propagation loss of the feedback arm.
The second condition is a spectral resolution of the feedback filter sufficiently high to enforce single-mode oscillation, which also justifies the use of Eq.~(\ref{eq:schawlow_townes}). Furthermore, compensating the linewidth enhancement due to gain-index coupling via strongly frequency selective loss at the low-frequency side of the feedback filter transmission requires fine tuning of the laser without spectral mode hops. Such fine tuning requires single-mode resolved spectral filtering in the feedback arm. Single-mode filtering is obtained if the FWHM bandwidth of the reflection peak of the filter, $\Delta\nu_f$, is narrower than the laser cavity mode spacing.
The third condition for narrow linewidth is operating the laser as far above threshold as possible. This reduces the relative rate of randomly phased spontaneous emission as compared to phase-preserving stimulated emission. At a given roundtrip loss, high-above-threshold operation can only be achieved by increasing the pump power. In the experiment, we increase the laser power for linewidth narrowing. To maintain single-mode oscillation with high-finesse spectral filtering, we use a dielectric waveguide platform for extending the cavity length, where spectral filtering is implemented only with dielectric materials. This choice ensures that the high intracavity intensity, occurring at high laser power due to filter-induced enhancement, is only present in the dielectric part of the laser. There, nonlinear loss can be safely neglected~\cite{moss_2013NatPhot, kuyken_2017NANOPHOT} due to the wide bandgap of dielectric materials.
\section{Laser design}
Figure~\ref{fig:hybrid_laser} shows the schematic design of the hybrid laser, comprising an InP semiconductor optical amplifier (gain section) and an extended cavity made of a long Si$_3$N$_4$ low-loss dielectric waveguide circuit that provides frequency selective feedback to the amplifier. As the directional couplers and, to a lesser extent, the effective refractive index of the waveguide used in the laser design are wavelength dependent, in the following the nominal wavelength of 1560~nm is assumed unless otherwise specified.
The InP semiconductor amplifier (COVEGA, SAF 1126) for generation of light in a single transverse mode at around 1.55~$\mu$m wavelength has a length of $L_{\textrm{g}}=1000$~$\mu$m and a specified typical output power of 60~mW based on amplification in multiple quantum wells. The back facet is high-reflection coated ($R_{\textrm{b}}=90\%$) to provide double-pass amplification. In order to suppress back-reflection into the amplifier, the front facet is anti-reflection coated to a specified reflectivity of $10^{-4}$ for an external index of 1.5, which is close to the effective refractive index of the tapered input end of the Si$_3$N$_4$ waveguide circuit (1.584). The semiconductor waveguide is tilted by 6$^{\circ}$ to further reduce back-reflection. Derived from the far-field specifications, the mode field diameter at the exit facet is 4.4~$\mu$m in the horizontal and 1.3~$\mu$m in the vertical direction. The amplifier is integrated with the Si$_3$N$_4$ circuit via alignment for maximizing the amount of amplified spontaneous emission (ASE) entering the Si$_3$N$_4$ circuit, followed by bonding with an adhesive. The integrated laser is mounted on a thermoelectric cooler and kept at 25~$^{\circ}$C. The electrical connections are wire-bonded to a fan-out electronic circuit board. For driving the amplifier with a low-noise current, we use a battery-driven power supply (ILX Lightwave, LDX3620).
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\linewidth]{./figure_3}
\caption{\label{fig:double_stripe} Schematic view of the cross section of the double stripe Si$_{3}$N$_{4}$ waveguide used in the photonic feedback circuit for the hybrid laser. The supported single transverse optical mode has a cross section of $1.6 \times 1.7$~$\mu$m$^2$.}
\end{figure}
A long optical path length for linewidth narrowing, and sharp spectral filtering for single-mode oscillation, is provided with a Si$_3$N$_4$ circuit optimized for low loss and high frequency selectivity. In this platform~\cite{roeloffzen_2018JSTQE} the core cross section can be adjusted to obtain a proper combination of tight guiding and low loss. We select a symmetric double-stripe geometry, see Fig.~\ref{fig:double_stripe}, that comprises two Si$_3$N$_4$ cores (1.2~$\mu$m $\times$ 170~nm) separated by 500~nm and embedded in a SiO$_2$ cladding. This cross section yields a single spatial mode of size $1.6 \times 1.7$~$\mu$m$^2$ for the TE polarization and an effective group index of 1.715. The propagation loss is smaller than 0.1~dB/cm, which agrees with values reported by Roeloffzen \textit{et al}.~\cite{roeloffzen_2018JSTQE}, and is determined from light scattering measurements with an IR camera using test structures from the same wafer with lengths of 5, 10 and 15~cm. The chosen cross section and the high index contrast between core and cladding ($\Delta n \approx 0.53$) provide tight guiding, making radiative loss (bending loss) negligible even for waveguides with tight bending radii as small as 100~$\mu$m. This makes it possible to employ small-radius, low-loss microring resonators for Vernier filtering with a wide free spectral range (FSR) comparable to the gain bandwidth~\cite{oldenbeuving_2013LPL}. Tight guiding in combination with low loss makes it possible to realize significant on-chip optical path lengths. For example, extending the cavity length such that the returning power drops to a fraction of $R_{\textrm{f}}=1/3$ and assuming nominal parameters otherwise, \textit{i.e.}, a Sagnac mirror reflectance of $R_{\textrm{s}}=0.9$, a loss coefficient of $\alpha_{\textrm{f}}=0.1$~dB/cm, and a ring power coupling of $\kappa^2=10\%$, results in a roundtrip optical length of the laser cavity of about $2L^{\textrm{(o)}}=74$~cm. This corresponds to extending the photon lifetime to about 1~ns (see Fig.~\ref{fig:photon_lifetime}). The selected waveguide cross section is also suitable for low-loss adiabatic tapering. With two-dimensional tapering, the calculated maximum power coupling to the mode field of the gain section is in the range of $T_{\textrm{c}}=90$ to 93\%~\cite{fan_2016PJ}, and the coupling to the $10.5 \pm 0.8~\mu$m diameter mode of single-mode output fibers (Fujikura 1550PM) can be as high as 98\%.
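As a numerical illustration of this example (a minimal sketch that lumps the feedback arm into a straight waveguide, so that the returning power is $R_{\textrm{s}}\,e^{-2\alpha_{\textrm{f}} L}$, and neglects ring-related excess loss), solving for the length at which the returning power drops to $1/3$ reproduces the quoted numbers:
\begin{verbatim}
import numpy as np

R_s, R_f_target = 0.9, 1.0 / 3.0   # Sagnac reflectance; returning-power fraction
alpha_f = 2.3                      # 0.1 dB/cm propagation loss [1/m]
n_gf = 1.715                       # Si3N4 effective group index

L_geo = np.log(R_s / R_f_target) / (2.0 * alpha_f)   # single-pass geometric length
L_opt_rt = 2.0 * n_gf * L_geo                        # roundtrip optical length
print(f"L_geo = {L_geo:.2f} m, 2L_opt = {L_opt_rt * 100:.0f} cm")   # ~0.22 m, ~74 cm
\end{verbatim}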
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\linewidth]{./figure_4ab}
\caption{\label{fig:transmission} Calculated double-pass power transmission $T_{123}$ of the Si$_3$N$_4$ feedback arm containing three cascaded rings with radii $R_1=99$~$\mu$m, $R_2=102$~$\mu$m and $R_3=1485$~$\mu$m across a range corresponding to the gain bandwidth (a) and across a small range near the maximum of the gain at 1.54~$\mu$m (b). The peak transmission amounts to 51\% as calculated with an effective group index of $n_g=1.715$, the Sagnac mirror reflectance set to 90\%, and a propagation loss of $0.1$~dB/cm.}
\end{figure}
At this point we recall that we do not aim at low loss per entire roundtrip through the hybrid cavity. Instead we maximize only the optical length, and thus the photon travel time, in the dielectric feedback arm of the laser cavity, while keeping the loss in the feedback arm much lower than the intrinsic loss in the remaining part of the laser cavity roundtrip. With the circuit design realized here, the feedback arm provides a high peak reflectivity of $R_{\textrm{f}}=51\%$, assuming the Sagnac mirror reflectance is set to $R_{\textrm{s}}=90\%$ and using nominal values for the propagation loss of $\alpha_{\textrm{f}}=0.1$~dB/cm and power coupling of the rings of $\kappa^2=10\%$. In contrast, the loss in the remaining parts of the laser cavity is much higher, \textit{i.e.}, $R_{\textrm{i}} \approx 3\%$. The latter is calculated from double-passing 80\% loss in the amplifier, 10\% loss at the amplifier back facet, and double-passing 10\% loss at the InP-Si$_3$N$_4$ interface. The loss estimates show that the laser would operate in the strong feedback regime, where $R_{\textrm{f}} \gg R_{\textrm{i}}$, such that a long roundtrip length in the feedback circuit should enable significant linewidth narrowing.
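The quoted $R_{\textrm{i}} \approx 3\%$ follows directly from multiplying these contributions (our arithmetic, using the loss figures above):
\begin{verbatim}
R_b = 0.9      # 10% loss at the amplifier back facet
T_amp = 0.2    # single-pass amplifier transmission (80% loss per pass)
T_c = 0.9      # single-pass InP-Si3N4 interface transmission (10% loss)
R_i = R_b * T_amp**2 * T_c**2   # double pass through amplifier and interface
print(f"R_i = {R_i:.3f}")       # ~0.029, i.e., approximately 3%
\end{verbatim}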
In order to induce single frequency oscillation across the 70~nm (8~THz) wide gain bandwidth in spite of an expected, dense mode spacing of a long laser cavity, we use three cascaded microring resonators in add-drop configuration, all with a power coupling of $\kappa^2=10\%$ to their bus waveguides. Two short resonators with a small difference in radius are used in Vernier configuration for coarse frequency selection ($R = 99$ and 102~$\mu$m, average FSR 278~GHz, finesse~28, quality factor $Q \approx 20,000$). The third microring resonator provides fine spectral filtering ($R_3=1485$~$\mu$m, FSR 18.6~GHz, finesse~28, $Q \approx 290,000$). Taking into account that all resonators are double-passed in the silicon nitride feedback circuit and assuming 0.1~dB/cm propagation loss, we calculate a FWHM of the spectral filter's transmission peak of 450~MHz (3.6~pm). Behind the resonators the extended cavity is closed with a Sagnac loop mirror of adjustable reflectivity via a tunable balanced Mach-Zehnder interferometer.
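The quoted ring FSRs, and the resulting Vernier FSR of the two short rings, follow from $\textrm{FSR}=c/(n_{\textrm{g}}\cdot 2\pi R)$; a minimal sketch (our arithmetic, assuming the group index of 1.715 for all rings):
\begin{verbatim}
import numpy as np

c, n_g = 2.998e8, 1.715
def ring_fsr(radius):              # FSR = c / (n_g * circumference) [Hz]
    return c / (n_g * 2.0 * np.pi * radius)

f1, f2 = ring_fsr(99e-6), ring_fsr(102e-6)   # ~281 GHz and ~273 GHz
f_vernier = f1 * f2 / abs(f1 - f2)           # ~9.3 THz, wider than the 8 THz gain bandwidth
f3 = ring_fsr(1485e-6)                       # ~18.7 GHz, close to the quoted 18.6 GHz
print(f"{f1/1e9:.0f}, {f2/1e9:.0f} GHz; Vernier {f_vernier/1e12:.1f} THz; {f3/1e9:.1f} GHz")
\end{verbatim}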
For a fixed setting of the mirror, the fraction of power coupled out of the laser cavity typically varies less than 20\% over the gain bandwidth of the laser. However, by tuning the Mach-Zehnder interferometer, the outcoupling can approximately be kept constant in the experiment when the oscillating wavelength is varied. The output power is collected into a single-mode output fiber (Fujikura 1550PM) that is butt-coupled with index matching glue to the polished end facet of the feedback chip.
For spectrally aligned and resonant microring resonators, we calculate a laser cavity optical roundtrip length of $2L^{\textrm{(o)}} = 0.49$~m which, via $\textrm{FSR}=c/2L^{\textrm{(o)}}$, $c$ being the speed of light in vacuum, corresponds to a free spectral range of 610~MHz. The length is calculated with double-passing the optical length of the three resonators, each having a power-coupling of $\kappa^2=10\%$ (which corresponds to multiplying each length with the approximate number of nine round trips at resonance \cite{liu_2001APL}), a 33~mm long waveguide spiral for further cavity length extension, the length of the amplifier, and various smaller sections of bus waveguides including the loop mirror (all geometric lengths are converted to optical lengths). With this cavity length the passive cavity photon lifetime already starts to saturate (see Fig.~\ref{fig:photon_lifetime}). We note that the cavity mode spacing varies noticeably with the light frequency, which is mainly due to strong dispersion in transmission through the long microring resonator. For light at transmission resonance of the long resonator, this places the two closest cavity modes at 965~MHz distance. For light in the midpoint wing of the transmission resonance, the closest cavity mode is located at 750~MHz. In comparison, the 450-MHz bandwidth of the feedback filtering is smaller, \textit{i.e.}, the condition of single-mode resolved filtering is fulfilled. We further note that, based on measurements on similar structures, the power coupling $\kappa^2$ of the 10\% directional coupler typically increases by almost a factor of 2 when the wavelength increases from 1500 to 1600~nm. This means that the cavity length extension is longer at the short wavelength side of the gain bandwidth. As for these lengths the cavity photon lifetime already starts to saturate, we expect a reduced variation in cavity photon lifetime over this gain bandwidth.
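A brief consistency check of this bookkeeping and of the single-mode condition (a sketch based on the numbers above; the factor of nine resonant roundtrips per ring is the value quoted in the text, and the few centimeters of bus waveguides and loop mirror are left out):
\begin{verbatim}
import numpy as np

c, n_g, n_g_amp = 2.998e8, 1.715, 3.6
rings = [99e-6, 102e-6, 1485e-6]                     # ring radii [m]
L_rings = sum(2 * np.pi * r for r in rings) * n_g    # summed optical circumference
L_opt_rt = 2 * (9 * L_rings + 33e-3 * n_g) + 2 * 1e-3 * n_g_amp
print(f"partial 2L_opt = {L_opt_rt:.2f} m")          # ~0.45 m; buses bring it to ~0.49 m
print(f"FSR = {c / 0.49 / 1e6:.0f} MHz")             # ~610 MHz
print("single-mode filtering:", 450e6 < 750e6)       # filter FWHM < closest-mode spacing
\end{verbatim}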
The calculated double-pass filter spectrum obtained with the three-ring circuit across a range corresponding to the gain bandwidth is shown in Fig.~\ref{fig:transmission}(a) and across a small range around the resonant wavelength of 1.5359~$\mu$m in Fig.~\ref{fig:transmission}(b). For a Sagnac mirror reflectivity of 90\%, as used for spectral recordings, we calculate a high feedback of $R_{\textrm{f}}=51\%$, which is due to the low loss in the Si$_3$N$_4$ waveguides. The feedback at the next-highest side resonance of the long resonator is 12.5~dB lower. For setting the highest transmission peak to any laser cavity mode within the laser gain, the resonators are equipped with thin-film thermo-electric phase shifters with a $0-2\pi$ range. With the described spectral filtering and due to the dominance of homogeneous gain broadening in the quantum well amplifier, it is expected that single-mode oscillation with high side mode suppression ratio is possible at any wavelength within the gain bandwidth.
\section{Results}
\begin{figure}[btp]
\centering
\includegraphics[width=0.85\linewidth]{./PI_curve}
\caption{\label{fig:PI_curve} Typical laser output power as measured with increasing pump current, yielding a maximum output of 23 mW. The discontinuities indicate spectral mode hops. This particular measurement was performed at a wavelength of 1561~nm.}
\end{figure}
Figure~\ref{fig:PI_curve} shows a typical measurement of the fiber-coupled output power behind the Sagnac loop mirror versus pump current. For achieving high output, the Sagnac mirror was set to a high transmission of about 80\%, and the laser wavelength was set to 1561~nm, near the center of the gain spectrum, via the phase shifter of the first microring resonator. The pump current is stepwise varied and fine-tuned, in order to maintain single-mode operation. The laser shows a threshold pump current of about 42~mA and a maximum output power of 23~mW is achieved at a pump current of 320~mA. This is almost half of the specified maximum power of the amplifier of 60~mW. The discontinuities in the output power versus pump current correspond to spectral mode hops. The reason is that increasing the pump current also changes the refractive index in the amplifier, which tunes the laser cavity length with regard to the transmission spectrum of the feedback filter.
To discuss the presence of nonlinear loss, we estimate the maximum intracavity intensity that occurs at the maximum output power. Assuming a Sagnac mirror transmission of 10\%, which is typically used for the linewidth measurements, we calculate a power of approximately 4~W in the largest microring resonator (2~W in each direction). Using a mode area of $1.6 \times 1.7$~$\mu$m$^2$, the corresponding intensity is high, of the order of 0.15~GW/cm$^2$. However, loss from two-photon absorption can safely be neglected~\cite{moss_2013NatPhot} due to the wide bandgap of Si$_3$N$_4$. For comparison, in a silicon waveguide the same power and a typical mode field area of $0.5 \times 0.5$~$\mu$m$^2$ would cause significant two-photon absorption, \textit{i.e.}, of the order of 5~dB/cm~\cite{kuyken_2017NANOPHOT}. This would make it difficult to implement sharp spectral filtering to realize long, resonator-enhanced, feedback lengths and to narrow the intrinsic linewidth via increasing the laser power.
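The quoted intensity follows from dividing the circulating power by the mode area (our arithmetic):
\begin{verbatim}
P_circ = 4.0                           # circulating power in the large ring [W]
A_mode = 1.6e-6 * 1.7e-6               # mode area [m^2]
I = P_circ / A_mode                    # [W/m^2]; note 1 GW/cm^2 = 1e13 W/m^2
print(f"I = {I / 1e13:.2f} GW/cm^2")   # ~0.15 GW/cm^2
\end{verbatim}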
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./spectrum_BOSA}
\caption{\label{fig:single_line_BOSA} Typical power spectrum recorded across a range of 30~pm with 0.1~pm resolution (3.7~GHz and 12~MHz, respectively).}
\end{figure}
To verify that the laser oscillates at a single wavelength, the laser output spectrum is measured at the fiber-coupled output from the through port of the first small resonator (monitor port in Fig.~\ref{fig:hybrid_laser}). At this port, also light that is not resonant with the microring resonators can be observed. To obtain a higher resolution than the mode spacing, the laser spectrum was recorded with an optical spectrum analyzer based on stimulated Brillouin scattering (Aragon Photonics, BOSA400), and the small resonators are tuned for single-mode oscillation. All spectral measurements are performed behind an optical isolator and using tilted fiber connections to avoid feedback into the laser. Figure~\ref{fig:single_line_BOSA} shows a typical power spectrum recorded with the maximum resolution of 0.1~pm (12~MHz) across a 30~pm (3.7~GHz) wide interval around the oscillating mode. This range spans four to five mode spacings, such that possibly oscillating side modes would have become detectable. The measured spectrum confirms clean single-mode oscillation, with a side mode suppression of about 62~dB. Using a second optical spectrum analyzer (ANDO, AQ6317), set to a lower resolution of 50~pm but a larger scan range, we confirmed single-mode oscillation over the complete gain bandwidth.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./RIN_covega_new}
\caption{\label{fig:RIN} Typical power spectrum of the relative intensity noise (RIN). The spectrum is flat except for a small intermittent peak around 950~MHz. The optical output power was 1.2~mW.}
\end{figure}
For further characterization we measure relative intensity noise (RIN) with a fast photodiode and RF spectrum analyzer (10~kHz resolution and 100~kHz video bandwidth). Figure~\ref{fig:RIN} shows a typical RIN spectrum when the optical output power was 1.2~mW, which displays flat, broadband and low intensity noise around -157~dBc/Hz. Single narrowband features, here at 940 MHz, are likely due to spurious RF pickup, as not all spectra display these.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./tuning_coarse_a}
\caption{\label{fig:tuning_coarse} Superimposed output spectra recorded by tuning the laser wavelength in steps of 2~nm across a range of $>70$~nm.}
\end{figure}
To explore the overall spectral coverage of single-mode oscillation, the laser was manually tuned via the phase shifters on top of the microring resonators, using a maximum heater power of 270~mW per heater. Figure~\ref{fig:tuning_coarse} shows an example of superimposed laser spectra, with the laser tuned to 35 different wavelengths. For coarse wavelength tuning, the heater current of one of the small microresonators is increased. This gives rise to discrete wavelength changes with a stepsize of about 2~nm, which corresponds to the FSR of the other small resonator. After the wavelength is set to a desired value, also the heating current of the other small resonator is adjusted for maximum laser output, to improve the spectral alignment of all resonators. The approximately flat tuning envelope is obtained by adjusting the Sagnac mirror feedback with wavelength tuning, at a pump current of 200~mA. We obtain a spectral coverage of 74~nm with at least 3~mW of output power. This compares well with the current record for monolithic, heterogeneously and hybrid integrated lasers~\cite{latkowski_2015PJ,fan_2017CLEO, tran_2020JSTQE}. Fine tuning in steps of the FSR of the large microring resonator is shown in Fig.~\ref{fig:tuning_fine}. This was achieved via tuning the small resonators and loop mirror without heating the long resonator.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./tuning_fine_a}
\caption{\label{fig:tuning_fine} Superimposed spectra when fine tuning the laser in steps of 0.15~nm.}
\end{figure}
The intrinsic linewidth of the laser is measured using two independent setups based on delayed self-heterodyne detection~\cite{richter_1986JQE,mercer_1991JLT}. The first, a proprietary setup, uses a Mach-Zehnder interferometer with 5.4~m optical arm length difference, a 40-MHz acousto-optic modulator, and two photodiodes for balanced detection. The beat signal is recorded versus time and analyzed with a computer to obtain the power spectral density (PSD) of frequency noise. Free-running lasers, as investigated here, typically display increased technical noise at low frequencies, whereas at high noise frequencies the PSD levels off to a white-noise plateau that corresponds to the intrinsic laser linewidth. The second setup uses an arm length difference of 20~km and an 80-MHz modulator (AA Opto Electronic, MT80-IIR60-F10-PM0.5-130.s with ISOMET 232A-2 AOM driver). The power spectrum of the beat signal is recorded with an RF spectrum analyzer (Agilent E4405B with 25~kHz RBW). The intrinsic linewidth is retrieved with Lorentzian fits to the line wings where the Lorentzian shape is minimally obstructed, \textit{i.e.}, avoiding the low-frequency noise regime near the line center, as well as the range close to the electronic noise floor. Linewidth measurements are carried out at various pump currents at a wavelength of 1561~nm near the center of the gain spectrum.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./noise_PSD}
\caption{\label{fig:noise_PSD}Double-sided power spectral density (PSD) of laser frequency noise for a pump current of 255~mA, plotted for positive frequencies. The dashed line at 6.5 Hz$^2$/Hz represents the mean of PSD values for noise frequencies between 4 and 7.5~MHz. The detection limit is at 0.5~Hz$^{2}$/Hz.}
\end{figure}
Figure~\ref{fig:noise_PSD} shows the PSD measured at a pump current of 255~mA, after adjusting for lowest noise only via the small microring resonators, while also monitoring the optical spectrum with an OSA to verify single-mode oscillation. The laser noise spectrum becomes flat for noise frequencies above 2~MHz. The upper bound for the white noise limit, indicated as the dashed line, is taken as $6.5 \pm 1.3$~Hz$^2$/Hz. These values are obtained by taking the mean value and standard deviation of the Gaussian distribution of PSD values between noise frequencies of 4 and 7.5~MHz. After multiplying with $2\pi$, this corresponds to an intrinsic linewidth of $40 \pm 8$~Hz. This is significantly lower than our previous result of 290~Hz~\cite{fan_2017CLEO}. The lower linewidth has been obtained by using a different gain section, the COVEGA SAF 1126 InP gain chip, and by using a different outcoupling by the Sagnac loop mirror, about 10\% instead of 50\%, resulting in a doubling of the fiber-coupled output power.
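The conversion from the white-noise plateau to the intrinsic linewidth uses the standard relation for white frequency noise, $\Delta\nu = 2\pi S_{\nu}$ for a double-sided PSD $S_{\nu}$; as a one-line check (our arithmetic):
\begin{verbatim}
import numpy as np
S_nu, dS = 6.5, 1.3                 # double-sided white-noise PSD [Hz^2/Hz]
dnu, ddnu = 2 * np.pi * S_nu, 2 * np.pi * dS
print(f"linewidth = {dnu:.0f} +/- {ddnu:.0f} Hz")   # ~41 +/- 8 Hz
\end{verbatim}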
To verify the low linewidth level, the measurement is repeated with the second heterodyne setup using the same heater settings. The pump current was increased and decreased in steps and fine-tuned for lowest RF linewidth, while monitoring the optical spectrum with an OSA for single-mode oscillation. Figure~\ref{fig:lorentzian_width} displays the Lorentzian linewidth component versus pump current $I_{\textrm{p}}$ expressed as the threshold factor $X=(I_{\textrm{p}}-I_{\textrm{p,th}})/I_{\textrm{p,th}}$, where $I_{\textrm{p,th}}$ is the threshold pump current of approximately 42~mA. The error bars express the uncertainty in fitting. A doubly logarithmic plot is chosen to facilitate comparison with the expected inverse power law dependence of the linewidth as straight line with negative unity slope. The red line is a least-square fit with fixed negative unity slope versus the inverse threshold factor, $\frac{1}{X}$, showing that the measured linewidth narrows approximately inversely with laser power as theoretically expected. The power narrowing data do not display a levelling-off, in spite of significant intensity build-up in the high-finesse ring resonators. The lowest linewidth obtained from power spectral density recordings (shown as round symbol for comparison) is in good agreement with the data obtained from Lorentz fitting. The linewidth limit of 40~Hz in Fig.~\ref{fig:noise_PSD} is the narrowest intrinsic linewidth ever reported for a hybrid or heterogeneously integrated diode laser.
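In log-log coordinates, a fixed-slope fit of the kind used in Fig.~\ref{fig:lorentzian_width} reduces to fitting a single offset; a minimal sketch with hypothetical $(X, \Delta\nu)$ pairs standing in for the measured data:
\begin{verbatim}
import numpy as np

# hypothetical data (threshold factor X, Lorentzian linewidth [Hz]); the
# measured values are those plotted in Fig. lorentzian_width
X = np.array([1.0, 2.0, 3.5, 5.07])
dnu = np.array([205.0, 98.0, 58.0, 40.0])

# inverse power law with fixed slope -1: log(dnu) = b - log(X)
b = np.mean(np.log(dnu) + np.log(X))   # least-squares offset in log space
dnu_fit = np.exp(b) / X                # fitted curve, proportional to 1/X
print(np.round(dnu_fit, 1))
\end{verbatim}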
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./lorentzian_width_log}
\caption{\label{fig:lorentzian_width}Double logarithmic plot of Lorentzian linewidth versus the threshold factor, $X=(I_{\textrm{p}}-I_{\textrm{p,th}})/I_{\textrm{p,th}}$, which is proportional to the output power, $P_{\textrm{out}}$. Unfilled symbols show measurements vs decreasing power. Measurements vs increasing power (filled symbols) yield slightly smaller linewidths. The solid line is a least-square fit to the lower linewidth data with negative unity slope (inverse power law, $\propto P_{\textrm{out}}^{-1}$). The linewidth obtained from PSD measurements (Fig.~\ref{fig:noise_PSD}) is shown as a black round symbol at $X=5.07$ (255~mA pump current).}
\end{figure}
\section{Conclusions}
We have demonstrated a hybrid integrated and widely tunable single-frequency diode laser with an intrinsic linewidth as low as 40~Hz, a spectral coverage of more than 70~nm and a maximum fiber-coupled output of 23~mW. The narrow linewidth is achieved via feedback from a low-loss dielectric waveguide circuit that extends the laser cavity to a roundtrip optical length of $2L^{\textrm{(o)}}=0.5$~m, in combination with single-mode resolving filtering. Realizing such high-finesse filtering with cascaded microring resonators in essentially a single roundtrip through a long and low-loss feedback arm allows strong linewidth narrowing in the presence of significant laser cavity roundtrip losses. The tolerance to loss in this approach is important because semiconductor amplifiers are intrinsically lossy, as are the mode transitions between different waveguide platforms in hybrid or heterogeneously integrated photonic circuits. Choosing dielectric feedback waveguides based on silicon nitride is important for avoiding nonlinear loss, because GW/cm$^2$-level intensities readily occur in lasers with tens of mW output and high-finesse intracavity filtering. The approach demonstrated here is promising for further linewidth narrowing through stronger pumping, as no hard linewidth limit through nonlinear loss is apparent with dielectric feedback circuits. Although further extension of the cavity length, in combination with tighter filtering, holds some promise, as the cavity photon lifetime is not yet fully saturated, our analysis shows that significant further improvement requires reduction of the propagation loss in the feedback circuit. This route appears very feasible because silicon nitride waveguides can be fabricated with extremely low loss, down to 0.045~dB/m~\cite{bauters_2011OE}, while several-meter-long silicon nitride resonator circuits have been demonstrated with a spectral selectivity better than 100~MHz~\cite{taddei_2018PTL}. These properties and options indicate the feasibility of Hertz-level integrated diode lasers on a chip.
\section*{Funding}
This research was funded by the IOP Photonic Devices program of RVO (Rijksdienst voor Ondernemend Nederland), a division of the Ministry for Economic Affairs, The Netherlands and in part by the European Union's Horizon 2020 research and innovation programme under grant agreement 780502 (3PEAT).
\section*{Disclosures}
The authors declare no conflicts of interest.
where $v_\textrm{g,i}=c/n_\textrm{g,i}$ and $v_\textrm{g,f}=c/n_\textrm{g,f}$ are the effective group velocities of the gain and feedback section, respectively, with $c$ the speed of light in vacuum, $\alpha_m= -\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})/2(L_{\textrm{g}}+L_{\textrm{f}})$ is the distributed mirror loss and $\left\langle v_{\textrm{g}} \right\rangle = (v_{\textrm{g,i}} L_{\textrm{g}} +v_{\textrm{g,f}} L_{\textrm{f}})/(L_{\textrm{g}} +L_{\textrm{f}})$ is the length weighted average group velocity of the propagating optical mode. Taking as typical values $R_{\textrm{b}}=0.9$, $R_{\textrm{s}}=0.8$, $\alpha_{\textrm{i}}=1600$~m$^{-1}$, $n_\textrm{g,i}=3.6$, $L_{\textrm{g}}=1$ mm and $n_\textrm{g,f}=1.715$, Fig.~\ref{fig:photon_lifetime} shows the calculated photon lifetime versus $L_{\textrm{f}}$ for a nominal propagation loss of $\alpha_{\textrm{f}}=2.3$ m$^{-1}$ (0.1~dB/cm) and for losses that are a factor of 5 smaller and larger, for comparison. Figure~\ref{fig:photon_lifetime} shows that for very small extension of the laser cavity, $L_{\textrm{f}} \ll L_{\textrm{g}}$, the photon lifetime is a constant. In this regime, $1/\tau_p\approx v_{\textrm{g,i}}\left(\alpha_{\textrm{i}} - \frac{1}{2L_{\textrm{g}}}\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})\right)$, which is a constant independent of $L_{\textrm{f}}$. As $L_{\textrm{f}}$ increases, the photon lifetime starts to increase linearly with $L_{\textrm{f}}$ independent of the propagation loss $\alpha_{\textrm{f}}$ of the waveguide. In this regime $1/\tau_p\approx (1/L_{\textrm{f}})\left(\alpha_{\textrm{i}}v_{\textrm{g,i}} L_{\textrm{g}} - \frac{1}{2}v_{\textrm{g,f}}\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})\right) $ as $L_{\textrm{f}} \gg L_{\textrm{g}}$ and still $\alpha_{\textrm{f}} v_{\textrm{g,f}} L_{\textrm{f}} \ll \alpha_{\textrm{i}} v_{\textrm{g,i}} L_{\textrm{g}} - \frac{1}{2}v_{\textrm{g,f}}\textrm{ln}(R_{\textrm{b}} R_{\textrm{s}})$. Indeed, the photon lifetime is independent of $\alpha_{\textrm{f}}$ in this regime (see Fig.~\ref{fig:photon_lifetime}). This means that in stationary state the gain coefficient of the hybrid laser only weakly depends on the propagation loss of the feedback section and that the amount of spontaneous emission, which is the source for the intrinsic linewidth, is approximately constant when increasing $L_{\textrm{f}}$. The resulting increase in phase memory time corresponds to a narrowing of the intrinsic linewidth. Furthermore, if we define $R_\textrm{i}=R_{\textrm{b}} e^{-2\alpha_{\textrm{i}} L_{\textrm{g}}} T_{\textrm{c}}^2$ as the reflectance of the passive gain section and $R_{\textrm{f}}(\nu) = |r_{\textrm{f}}(\nu)e^{i\phi_{\textrm{f}}(\nu)}|^2$, with $r_{\textrm{f}}(\nu)e^{i\phi_{\textrm{f}}(\nu)}$ the frequency dependent effective complex amplitude reflectivity of the feedback arm (see Fig.~\ref{fig:hybrid_laser_schematic}), we have $R_{\textrm{f}} \gg R_{\textrm{i}}$ for typical values of $R_{\textrm{s}}$ used in our experiment. We define this as the strong feedback regime. If $L_{\textrm{f}}$ is still further increased, the total propagation loss will eventually become the dominant loss leading to a saturation of the photon lifetime. In this regime $1/\tau_p \approx v_{\textrm{g,f}}\alpha_{\textrm{f}}$. This is clearly visible in Fig.~\ref{fig:photon_lifetime} for the different propagation losses. For the nominal propagation loss, the photon lifetime saturates at about 2.5~ns for $L_{\textrm{f}} \gtrsim 1$~m. 
Note that we have kept the mirror reflectances constant and only changed the effective length of the feedback arm.
Including a more realistic $T_{\textrm{c}}$ does not change the above observations as a proper design of the hybrid laser will lead to $T_{\textrm{c}} \gtrsim 0.9$. Including the associated loss in the intrinsic loss $\alpha_{\textrm{i}}$ of the gain section only slightly increases this value. From this we conclude that extending the length of the feedback arm above a threshold, that, for a given outcouple loss, is set by the length of the gain section, will linearly increase the photon lifetime, and hence reduce the intrinsic linewidth, as long as the total loss, due to passive losses in the gain section and outcoupling, dominates the total propagation loss in the feedback circuit.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\linewidth]{./photon_lifetime}
\caption{\label{fig:photon_lifetime}Calculated photon lifetime $\tau_{\textrm{p}}$ as a function of the geometric length $L_{\textrm{f}}$ of the feedback arm for a typical propagation loss of $\alpha_{\textrm{f}}=2.3$ m$^{-1}$ (0.1~dB/cm) and for losses that are a factor of 5 smaller and larger, while $\alpha_{\textrm{g}}=1600$~m$^{-1}$. Other parameters are $R_{\textrm{b}}=0.9$, $R_{\textrm{s}}=0.8$, $n_\textrm{g,i}=3.6$, $n_\textrm{g,f}=1.715$, and $L_{\textrm{g}}=1$ mm.}
\end{figure}
Including off-resonance effects of the filter on the photon lifetime, and thus on the intrinsic linewidth of the laser, is more complicated as the total feedback loss increases and becomes strongly frequency dependent, while at the same time the length of the feedback arm reduces when the filter is detuned from the line center. To illustrate the effect on the instrinsic linewidth, we consider the whole feedback arm as a lumped reflectance $R_{\textrm{f}}(\nu)$ (see Fig.~\ref{fig:hybrid_laser_schematic}) and recall the expression for the intrinsic or Schawlow-Townes linewidth $\Delta\nu_{\textrm{ST}}$~\cite{fan_2017OE}
\begin{equation} \label{eq:schawlow_townes}
\Delta\nu_{\textrm{ST}}=\frac{h\nu}{4\pi}\frac{n_{\textrm{sp}} \gamma_{\textrm{tot}} \gamma_{\textrm{m}} F_{\textrm{P}}}{P_0 K(\nu)} \frac{1+\alpha_{\textrm{H}}^2}{F^2}.
\end{equation}
Here, $\gamma_{\textrm{m}}=-\frac{v_{\textrm{g}}}{2L_{\textrm{g}}} \textrm{ln}(R_{\textrm{b}} R_{\textrm{f}}(\nu))$ is the mirror loss rate and $\gamma_{\textrm{tot}}$ is the total loss rate, both assumed to be homogeneously distributed over the length of the gain section. Further, $h\nu$ is the photon energy of the laser oscillation, $P_0$ the output power at a particular output port, and $K(\nu)>1$ a weight factor accounting for power emitted from other ports. $F_{\textrm{P}} >1$ is the longitudinal Petermann factor increasing the linewidth, in case that reflective feedback ($R_{\textrm{b}}$ and $R_{\textrm{f}}(\nu)$) becomes very small~\cite{ujihara_1984JQE, henry_1986JLT}. $\alpha_{\textrm{H}}$ is the Henry linewidth enhancement factor due to gain-index coupling~\cite{henry_1982JQE} and $n_{\textrm{sp}}$ is the spontaneous emission enhancement factor that takes into account the reduction in inversion due to reabsorption by valence band electrons. Typically, $n_{\textrm{sp}}$ takes a value of around 2. Finally, $F=1+A+B$, where $A=\frac{1}{\tau_{\textrm{g}}} \frac{d}{d\nu}\phi_{\textrm{f}}(\nu)$, $B=\frac{\alpha_{\textrm{H}}}{\tau_{\textrm{g}}}\frac{d}{d\nu} \textrm{ln}(|r_{\textrm{f}}(\nu)|)$ and $\tau_{\textrm{g}}$ is the roundtrip time of the solitary gain section. At resonance, \textit{i.e.}, when the center of the filter's reflection peak coincides with the oscillation frequency, $B=0$ and $A$ is maximum and equal to the ratio of the optical length of the feedback arm to the optical length of the gain section~\cite{fan_2017OE, tran_2020JSTQE}. We find that $\Delta\nu_{\textrm{ST}}$ reduces with the inverse of $L_{\textrm{f}}^2$ when keeping the end mirror reflectances constant, in agreement with our discussion on the photon lifetime above. Off-resonance, $A$ decreases and $B$ increases on the rising edge of the filter peak whenever gain-index coupling is present, \textit{i.e.}, whenever $\alpha_{\textrm{H}}>0$. For a sufficiently sharp reflection peak, the maximum in $F$ is found for an oscillation frequency on the rising edge of the filter's reflection peak, slightly detuned from the line center, where spontaneous emission-induced index and frequency fluctuations are compensated by a steep, frequency dependent resonator loss~\cite{kazarinov_1987JQE, olsson_1987APL, koch_1990JLT, tran_2020JSTQE}.
A full numerical analysis of Eq.~(\ref{eq:schawlow_townes}), including changes in $R_{\textrm{f}}(\nu)$ and, consequently, $K(\nu)$ and $F_{\textrm{P}}$ for a more complete off-resonance description of the linewidth behaviour, is still of limited value due to the underlying assumptions used in deriving Eq.~(\ref{eq:schawlow_townes}). For example, the spatial distribution of the inversion density varies notably along the axis of the gain section, which is due to the relatively high intrinsic loss $\alpha_{\textrm{i}}$. This means that the mean field approximation used in deriving Eq.~(\ref{eq:schawlow_townes}) is not well justified. While not suitable for accurate predictions of the intrinsic linewidth for our hybrid diode laser, Eqs.~(\ref{eq:photon_lifetime}) and (\ref{eq:schawlow_townes}) are still very useful to determine scaling properties and design strategies for lowering the intrinsic linewidth of the laser. We have shown that increasing the optical length of the feedback arm narrows the linewidth as long as the total propagation loss in the feedback arm is not the dominant loss in the laser, \textit{i.e.}, the total propagation loss, including nonlinear loss, is only a small fraction of the remaining loss. This means that the maximum obtainable photon lifetime is set by the propagation loss of the feedback arm.
The second condition is a sufficiently high spectral resolution of the feedback filter for single-mode oscillation to allow the use of Eq.~(\ref{eq:schawlow_townes}). Furthermore, compensating the linewidth enhancement due to gain-index coupling via strongly frequency selective loss at the low-frequency side of the feedback filter transmission requires fine tuning of the laser without spectral mode hops. Such a fine-tuning requires single-mode resolved spectral filtering in the feedback arm. Single-mode filtering is obtained if the FWHM bandwidth of the reflection peak of the filter, $\Delta\nu_f$, is narrower than the laser cavity mode spacing.
The third condition for narrow linewidth is operating the laser maximally high above threshold. This reduces the relative rate of randomly phased spontaneous emission as compared to phase-preserving stimulated emission. At a given roundtrip loss, high-above-threshold operation can only be achieved by increasing the pump power. In the experiment, we increase the laser power for linewidth narrowing. To maintain single-mode oscillation with high-finesse spectral filtering, we use a dielectric waveguide platform for extending the cavity length, where spectral filtering is implemented only with dielectric materials. This choice ensures that high intracavity intensity, occurring at high laser power due to filter-induced enhancement, is only present in the dielectric part of the laser. There, nonlinear loss can be safely neglected~\cite{moss_2013NatPhot, kuyken_2017NANOPHOT} due to the wide bandgap of dielectric materials.
\section{Laser design}
Figure~\ref{fig:hybrid_laser} shows the schematic design of the hybrid laser, comprising an InP semiconductor optical amplifier (gain section) and an extended cavity made of a long Si$_3$N$_4$ low-loss dielectric waveguide circuit that provides frequency selective feedback to the amplifier. As the directional couplers and, to a lesser extent, the effective refractive index of the waveguide used in the laser design are wavelength dependent, in the following the nominal wavelength of 1560~nm is assumed unless otherwise specified.
The InP semiconductor amplifier (COVEGA, SAF 1126) for generation of light in a single transverse mode at around 1.55~$\mu$m wavelength has a length of $L_{\textrm{g}}=1000$~$\mu$m and a specified typical output power of 60~mW based on amplification in multiple quantum wells. The back facet is high-reflection coated ($R_{\textrm{b}}=90\%$) to provide double-pass amplification. In order to suppress back-reflection into the amplifier, the front facet is anti-reflection coated to a specified reflectivity of $10^{-4}$ for an external index of 1.5, which is close to the effective refractive index of the tapered input end of the Si$_3$N$_4$ waveguide circuit (1.584). The semiconductor waveguide is tilted by 6$^{\circ}$, to further reduce back-reflection. Derived from the far-field specifications, the mode field diameter at the exit facet is 4.4~$\mu$m in the horizontal and 1.3~$\mu$m in the vertical direction. The amplifier is integrated with the Si$_3$N$_4$ circuit via alignment for maximizing the amount of amplified spontaneous emission (ASE) entering the Si$_3$N$_4$ circuit, followed by bonding with an adhesive. The integrated laser is mounted on a thermoelectric cooler and kept at 25~$^{\circ}$C. The electrical connects are wire bonded to a fan-out electronic circuit board. For driving the amplifier with a low-noise current, we use a battery-driven power supply (ILX Lightwave, LDX3620).
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\linewidth]{./figure_3}
\caption{\label{fig:double_stripe} Schematic view of the cross section of the double stripe Si$_{3}$N$_{4}$ waveguide used in the photonic feedback circuit for the hybrid laser. The supported single transverse optical mode has a cross section of $1.6 \times 1.7$~$\mu$m$^2$.}
\end{figure}
A long optical path length for linewidth narrowing, and sharp spectral filtering for single-mode oscillation, is provided with a Si$_3$N$_4$ circuit optimized for low-loss and high frequency selectivity. In this platform~\cite{roeloffzen_2018JSTQE} the core cross section can be adjusted to obtain a proper combination of tight guiding and low loss. We select a symmetric double-stripe geometry, see Fig.~\ref{fig:double_stripe}, that comprises two Si$_3$N$_4$ cores (1.2~$\mu$m $\times$ 170~nm) separated by 500~nm embedded in a SiO$_2$ cladding. This cross section yields a single-spatial mode of size $1.6 \times 1.7$~$\mu$m$^2$ for the TE polarization and an effective group index of 1.715. The propagation loss is smaller than 0.1~dB/cm, which agrees with values reported by Roeloffzen \textit{et al}.~\cite{roeloffzen_2018JSTQE}, and is determined from light scattering measurements with an IR camera using test structures from the same wafer with lengths of 5, 10 and 15~cm. The chosen cross section and the high index contrast between core and cladding ($\Delta n \approx 0.53$) provides tight guiding, making radiative loss (bending loss) negligible also for waveguides with tight bending radii as small as 100~$\mu$m. This enables to employ small-radius, low-loss microring resonators for Vernier-filtering with a wide free spectral range (FSR) comparable to the gain bandwidth~\cite{oldenbeuving_2013LPL}. Tight guiding in combination with low loss enables to realize significant on-chip optical path lengths. For example, extending the cavity length such that the returning power drops to a fraction of $R_{\textrm{f}}=1/3$ and assuming nominal parameters otherwise, \textit{i.e.}, a Sagnac mirror reflectance of $R_{\textrm{s}}=0.9$, a loss coefficient of $\alpha_{\textrm{f}}=0.1$~dB/cm, and a ring power coupling of $\kappa^2=10\%$, results in a roundtrip optical length of the laser cavity of about $2L^{\textrm{(o)}}=74$~cm. This corresponds to extending the photon lifetime to about 1 nanoseconds (see Fig.~\ref{fig:photon_lifetime}). The selected waveguide cross section is also suitable for low-loss adiabatic tapering. With two-dimensional tapering, the calculated maximum power coupling to the mode field of the gain section is in the range of $T_{\textrm{c}}$=90 to 93\% ~\cite{fan_2016PJ}, and the coupling to the $10.5 \pm 0.8 \mu$m diameter mode of single-mode output fibers (Fujikura 1550PM) can be as high as 98\%.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\linewidth]{./figure_4ab}
\caption{\label{fig:transmission} Calculated double-pass power transmission $T_{123}$ of the Si$_3$N$_4$ feedback arm containing three cascaded rings with radii $R_1=99$~$\mu$m, $R_2=102$~$\mu$m and $R_3=1485$~$\mu$m across a range corresponding to the gain bandwidth (a) and across a small range near the maximum of the gain at 1.54~$\mu$m (b). The peak transmission amounts to 51\% as calculated with an effective group index of $n_g=1.715$, the Sagnac mirror reflectance set to 90\%, and a propagation loss of $0.1$~dB/cm.}
\end{figure}
At this point we recall that we do not aim at low loss per entire roundtrip through the hybrid cavity. Instead we maximize only the optical length and thus the photon travel time in the dielectric feedback arm of the laser cavity, while keeping the loss in the feedback arm much lower than the intrinsic loss in the remaining part of the laser cavity roundtrip. With the circuit design realized here, the feedback arm provides a high peak reflectivity of $R_{\textrm{f}}=51\%$, assuming the Sagnac mirror reflectance is set to $R_{\textrm{s}}=90\%$ and using nominal values for the propagation loss of $\alpha_{\textrm{f}}=0.1$~dB/cm and power coupling of the rings of $\kappa^2=10\%$. In contrast, the loss in the remaining parts of the laser cavity is much higher, \textit{i.e.}, $R_{\textrm{i}} \approx 3\%$. The latter is calculated from double-passing 80\% loss in the amplifier, 10\% loss at the amplifier back facet, and double-passing 10\% loss at the InP-Si$_3$N$_4$ interface. The loss estimates show that the laser would operate in the strong feedback regime, where $R_{\textrm{f}} \gg R_{\textrm{i}}$, such that a long roundtrip length in the feedback circuit should enable significant linewidth narrowing.
In order to induce single frequency oscillation across the 70~nm (8~THz) wide gain bandwidth in spite of an expected, dense mode spacing of a long laser cavity, we use three cascaded microring resonators in add-drop configuration, all with a power coupling of $\kappa^2=10\%$ to their bus waveguides. Two short resonators with a small difference in radius are used in Vernier configuration for coarse frequency selection ($R = 99$ and 102~$\mu$m, average FSR 278~GHz, finesse~28, quality factor $Q \approx 20,000$). The third microring resonator provides fine spectral filtering ($R_3=1485$~$\mu$m, FSR 18.6~GHz, finesse~28, $Q \approx 290,000$). Taking into account that all resonators are double-passed in the silicon nitride feedback circuit and assuming 0.1~dB/cm propagation loss, we calculate a FWHM of the spectral filter's transmission peak of 450~MHz (3.6~pm). Behind the resonators the extended cavity is closed with a Sagnac loop mirror of adjustable reflectivity via a tunable balanced Mach-Zehnder interferometer.
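The quoted filter parameters follow directly from the ring geometry. The short sketch below (using the nominal group index of 1.715; small deviations from the quoted 278~GHz and 18.6~GHz values reflect rounding of the inputs) computes the individual ring FSRs and the Vernier FSR of the two small rings:
\begin{verbatim}
import math

c, n_g = 3e8, 1.715

def ring_fsr(radius_m):
    # FSR of a ring resonator: c over the optical circumference
    return c / (2 * math.pi * radius_m * n_g)

f1, f2, f3 = ring_fsr(99e-6), ring_fsr(102e-6), ring_fsr(1485e-6)
vernier = f1 * f2 / abs(f1 - f2)   # Vernier FSR of the two small rings
print(f"small rings: {f1/1e9:.0f} and {f2/1e9:.0f} GHz "
      f"(average {(f1+f2)/2e9:.0f} GHz)")
print(f"long ring: {f3/1e9:.1f} GHz, Vernier FSR: {vernier/1e12:.1f} THz")
# -> about 281 and 273 GHz (average 277 GHz), 18.7 GHz, 9.3 THz,
#    i.e. a Vernier range comparable to the 8 THz gain bandwidth
\end{verbatim}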
For a fixed setting of the mirror, the fraction of power coupled out of the laser cavity typically varies less than 20\% over the gain bandwidth of the laser. However, by tuning the Mach-Zehnder interferometer, the outcoupling can approximately be kept constant in the experiment when the oscillating wavelength is varied. The output power is collected into a single-mode output fiber (Fujikura 1550PM) that is butt-coupled with index matching glue to the polished end facet of the feedback chip.
For spectrally aligned and resonant microring resonators, we calculate a laser cavity optical roundtrip length of $2L^{\textrm{(o)}} = 0.49$~m which, via $\textrm{FSR}=c/2L^{\textrm{(o)}}$, $c$ being the speed of light in vacuum, corresponds to a free spectral range of 610~MHz. The length is calculated with double-passing the optical length of the three resonators, each having a power-coupling of $\kappa^2=10\%$ (which corresponds to multiplying each length with the approximate number of nine round trips at resonance \cite{liu_2001APL}), a 33~mm long waveguide spiral for further cavity length extension, the length of the amplifier, and various smaller sections of bus waveguides including the loop mirror (all geometric lengths are converted to optical lengths). With this cavity length the passive cavity photon lifetime already starts to saturate (see Fig.~\ref{fig:photon_lifetime}). We note that the cavity mode spacing varies noticeably with the light frequency, which is mainly due to strong dispersion in transmission through the long microring resonator. For light at transmission resonance of the long resonator, this places the two closest cavity modes at 965~MHz distance. For light in the midpoint wing of the transmission resonance, the closest cavity mode is located at 750~MHz. In comparison, the 450-MHz bandwidth of the feedback filtering is smaller, \textit{i.e.}, the condition of single-mode resolved filtering is fulfilled. We further note that, based on measurements on similar structures, the power coupling $\kappa^2$ of the 10\% directional coupler typically increases by almost a factor of 2 when the wavelength increases from 1500 to 1600~nm. This means that the cavity length extension is longer at the short wavelength side of the gain bandwidth. As for these lengths the cavity photon lifetime already starts to saturate, we expect a reduced variation in cavity photon lifetime over this gain bandwidth.
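The stated roundtrip length can be approximately rebuilt from these contributions. In the sketch below, the roughly nine-fold resonant path enhancement of each ring and the 33~mm spiral are taken from the text, whereas the amplifier group index (taken as 3.6) and the total single-pass bus length including the loop mirror (taken as 12~mm) are assumptions introduced only to close the budget:
\begin{verbatim}
import math

c, n_g = 3e8, 1.715
radii  = [99e-6, 102e-6, 1485e-6]                 # ring radii (m)
rings  = 9 * sum(2 * math.pi * r for r in radii)  # ~9 roundtrips at resonance
spiral = 33e-3                                    # waveguide spiral (m)
bus    = 12e-3          # assumed bus + loop-mirror length (m), not given
amp    = 2 * 1e-3 * 3.6 # double-passed amplifier, assumed group index 3.6

L_opt = 2 * n_g * (rings + spiral + bus) + amp    # roundtrip optical length
print(f"2L = {L_opt*100:.0f} cm, FSR = {c/L_opt/1e6:.0f} MHz")
# -> about 49 cm and 610 MHz, consistent with the quoted values
\end{verbatim}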
The calculated double-pass filter spectrum obtained with the three-ring circuit across a range corresponding to the gain bandwidth is shown in Fig.~\ref{fig:transmission}(a) and across a small range around the resonant wavelength of 1.5359~$\mu$m in Fig.~\ref{fig:transmission}(b). For a Sagnac mirror reflectivity of 90\%, as used for spectral recordings, we calculate a high feedback of $R_{\textrm{f}}=51\%$, which is due to low loss in the Si$_3$N$_4$ waveguides. The feedback at the next-highest side resonance of the long resonator is 12.5~dB lower. For setting the highest transmission peak to any laser cavity mode within the laser gain, the resonators are equipped with thin-film thermo-electric phase shifters with a $0-2\pi$ range. With the described spectral filtering and due to the dominance of homogeneous gain broadening in the quantum well amplifier, it is expected that single-mode oscillation with high side mode suppression ratio is possible at any wavelength within the gain bandwidth.
\section{Results}
\begin{figure}[btp]
\centering
\includegraphics[width=0.85\linewidth]{./PI_curve}
\caption{\label{fig:PI_curve} Typical laser output power as measured with increasing pump current, yielding a maximum output of 23 mW. The discontinuities indicate spectral mode hops. This particular measurement was performed at a wavelength of 1561~nm.}
\end{figure}
Figure~\ref{fig:PI_curve} shows a typical measurement of the fiber-coupled output power behind the Sagnac loop mirror versus pump current. For achieving high output, the Sagnac mirror was set to a high transmission of about 80\%, and the laser wavelength was set to 1561~nm, near the center of the gain spectrum, via the phase shifter of the first microring resonator. The pump current is stepwise varied and fine-tuned, in order to maintain single-mode operation. The laser shows a threshold pump current of about 42~mA, and a maximum output power of 23~mW is achieved at a pump current of 320~mA. This is almost half of the specified maximum power of the amplifier of 60~mW. The discontinuities in the output power versus pump current correspond to spectral mode hops. The reason is that increasing the pump current also changes the refractive index in the amplifier, which tunes the laser cavity length with regard to the transmission spectrum of the feedback filter.
To discuss the presence of nonlinear loss, we estimate the maximum intracavity intensity that occurs at the maximum output power. Assuming a Sagnac mirror transmission of 10\%, which is typically used for the linewidth measurements, we calculate a power of approximately 4~W in the largest microring resonator (2~W in each direction). Using a mode area of $1.6 \times 1.7$~$\mu$m$^2$, the corresponding intensity is high, of the order of 0.15~GW/cm$^2$. However, loss from two-photon absorption can safely be neglected~\cite{moss_2013NatPhot} due to the wide bandgap of Si$_3$N$_4$. For comparison, in a silicon waveguide the same power and a typical mode field area of $0.5 \times 0.5$~$\mu$m$^2$ would cause significant two-photon absorption, \textit{i.e.}, of the order of 5~dB/cm~\cite{kuyken_2017NANOPHOT}. This would make it difficult to implement sharp spectral filtering to realize long, resonator-enhanced, feedback lengths and to narrow the intrinsic linewidth via increasing the laser power.
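This intensity estimate is a one-line calculation; the following check (in SI units) reproduces the quoted order of magnitude:
\begin{verbatim}
P = 4.0              # circulating power in the long ring (W)
A = 1.6e-6 * 1.7e-6  # transverse mode area (m^2)
I = P / A            # intensity (W/m^2); 1 GW/cm^2 = 1e13 W/m^2
print(f"I = {I/1e13:.2f} GW/cm^2")   # -> I = 0.15 GW/cm^2
\end{verbatim}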
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./spectrum_BOSA}
\caption{\label{fig:single_line_BOSA} Typical power spectrum recorded across a range of 30~pm with 0.1~pm resolution (3.7~GHz and 12~MHz, respectively).}
\end{figure}
To verify that the laser oscillates at a single wavelength, the laser output spectrum is measured at the fiber-coupled output from the through port of the first small resonator (monitor port in Fig.~\ref{fig:hybrid_laser}). At this port, light that is not resonant with the microring resonators can also be observed. To obtain a higher resolution than the mode spacing, the laser spectrum was recorded with an optical spectrum analyzer based on stimulated Brillouin scattering (Aragon Photonics, BOSA400), and the small resonators are tuned for single-mode oscillation. All spectral measurements are performed behind an optical isolator and using tilted fiber connections to avoid feedback into the laser. Figure~\ref{fig:single_line_BOSA} shows a typical power spectrum recorded with the maximum resolution of 0.1~pm (12~MHz) across a 30~pm (3.7~GHz) wide interval around the oscillating mode. This range spans four to five mode spacings, such that possibly oscillating side modes would have become detectable. The measured spectrum confirms clean single-mode oscillation, with a side mode suppression of about 62~dB. Using a second optical spectrum analyzer (ANDO, AQ6317), set at a lower resolution of 50~pm but larger scan range, we confirmed single-mode oscillation over the complete gain bandwidth.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./RIN_covega_new}
\caption{\label{fig:RIN} Typical power spectrum of the relative intensity noise (RIN). The spectrum is flat except for a small intermittent peak around 950~MHz. The optical output power was 1.2~mW.}
\end{figure}
For further characterization we measure relative intensity noise (RIN) with a fast photodiode and RF spectrum analyzer (10~kHz resolution and 100~kHz video bandwidth). Figure~\ref{fig:RIN} shows a typical RIN spectrum when the optical output power was 1.2~mW, which displays flat, broadband and low intensity noise around -157~dBc/Hz. Single narrowband features, here at 940 MHz, are likely due to spurious RF pickup, as not all spectra display these.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./tuning_coarse_a}
\caption{\label{fig:tuning_coarse} Superimposed output spectra recorded by tuning the laser wavelength in steps of 2~nm across a range of $>70$~nm.}
\end{figure}
To explore the overall spectral coverage of single-mode oscillation, the laser was manually tuned via the phase shifters on top of the microring resonators using a maximum heater power of 270~mW per heater. Figure~\ref{fig:tuning_coarse} shows an example of superimposed laser spectra, with the laser tuned to 35 different wavelengths. For coarse wavelength tuning, the heater current of one of the small microresonators is increased. This gives rise to discrete wavelength changes with a step size of about 2~nm, which corresponds to the FSR of the other small resonator. After the wavelength is set to a desired value, also the heating current of the other small resonator is adjusted for maximum laser output, to improve the spectral alignment of all resonators. The approximately flat tuning envelope is obtained by adjusting the Sagnac mirror feedback with wavelength tuning, at a pump current of 200~mA. We obtain a spectral coverage of 74~nm and at least 3~mW of output power. This compares well with the current record for monolithic, heterogeneously and hybrid integrated lasers~\cite{latkowski_2015PJ,fan_2017CLEO, tran_2020JSTQE}. Fine tuning in steps of the FSR of the large microring resonator is shown in Fig.~\ref{fig:tuning_fine}. This was achieved via tuning the small resonators and loop mirror without heating the long resonator.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./tuning_fine_a}
\caption{\label{fig:tuning_fine} Superimposed spectra when fine tuning the laser in steps of 0.15~nm.}
\end{figure}
The intrinsic linewidth of the laser is measured using two independent setups based on delayed self-heterodyne detection~\cite{richter_1986JQE,mercer_1991JLT}. The first, a proprietary setup, uses a Mach-Zehnder interferometer with 5.4~m optical arm length difference, a 40-MHz acousto-optic modulator, and two photodiodes for balanced detection. The beat signal is recorded versus time and analyzed with a computer to obtain the power spectral density (PSD) of frequency noise. Free-running lasers, as investigated here, typically display increased technical noise at low frequencies whereas, at high noise frequencies, the PSD levels off to a plateau set by the intrinsic laser linewidth. The second setup uses an arm length difference of 20~km and an 80-MHz modulator (AA Opto Electronic, MT80-IIR60-F10-PM0.5-130.s with ISOMET 232A-2 AOM driver). The power spectrum of the beat signal is recorded with an RF spectrum analyzer (Agilent E4405B with 25~kHz RBW). The intrinsic linewidth is retrieved with Lorentzian fits to the line wings where the Lorentzian shape is minimally obstructed, \textit{i.e.}, avoiding the low-frequency noise regime near the line center, as well as the range close to the electronic noise floor. Linewidth measurements are carried out at various pump currents at a wavelength of 1561~nm near the center of the gain spectrum.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./noise_PSD}
\caption{\label{fig:noise_PSD}Double-sided power spectral density (PSD) of laser frequency noise for a pump current of 255~mA, plotted for positive frequencies. The dashed line at 6.5 Hz$^2$/Hz represents the mean of PSD values for noise frequencies between 4 and 7.5~MHz. The detection limit is at 0.5~Hz$^{2}$/Hz.}
\end{figure}
Figure~\ref{fig:noise_PSD} shows the PSD measured at a pump current of 255~mA, after adjusting for lowest noise only via the small microring resonators, while also monitoring the optical spectrum with an OSA to verify single-mode oscillation. The laser noise spectrum becomes flat for noise frequencies above 2~MHz. The upper bound for the white noise limit, indicated as a dashed line, is taken as $6.5 \pm 1.3$~Hz$^2$/Hz. These values are obtained by taking the mean value and standard deviation of the Gaussian distribution of PSD values between noise frequencies of 4 and 7.5 MHz. After multiplying with $2\pi$ this corresponds to an intrinsic linewidth of $40 \pm 8$~Hz. This is significantly lower than our previous result of 290~Hz~\cite{fan_2017CLEO}. The lower linewidth has been obtained by using a different gain section, the COVEGA SAF 1126 InP gain chip, and by using a different outcoupling by the Sagnac loop mirror, about 10\% instead of 50\%, resulting in a doubling of the fiber-coupled output power.
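For clarity, the conversion used here is the relation $\Delta\nu = 2\pi S_{\nu}$ between the plateau $S_{\nu}$ of the double-sided frequency-noise PSD and the Lorentzian linewidth; a minimal numerical check:
\begin{verbatim}
import math

S_nu  = 6.5   # double-sided frequency-noise PSD plateau (Hz^2/Hz)
dS_nu = 1.3   # its standard deviation
print(f"linewidth = {2*math.pi*S_nu:.0f} +/- {2*math.pi*dS_nu:.0f} Hz")
# -> linewidth = 41 +/- 8 Hz, i.e. 40 +/- 8 Hz within rounding
\end{verbatim}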
To verify the low linewidth level, the measurement is repeated with the second heterodyne setup using the same heater settings. The pump current was increased and decreased in steps and fine-tuned for lowest RF linewidth, while monitoring the optical spectrum with an OSA for single-mode oscillation. Figure~\ref{fig:lorentzian_width} displays the Lorentzian linewidth component versus pump current $I_{\textrm{p}}$ expressed as the threshold factor $X=(I_{\textrm{p}}-I_{\textrm{p,th}})/I_{\textrm{p,th}}$, where $I_{\textrm{p,th}}$ is the threshold pump current of approximately 42~mA. The error bars express the uncertainty in fitting. A doubly logarithmic plot is chosen to facilitate comparison with the expected inverse power law dependence of the linewidth as straight line with negative unity slope. The red line is a least-square fit with fixed negative unity slope versus the inverse threshold factor, $\frac{1}{X}$, showing that the measured linewidth narrows approximately inversely with laser power as theoretically expected. The power narrowing data do not display a levelling-off, in spite of significant intensity build-up in the high-finesse ring resonators. The lowest linewidth obtained from power spectral density recordings (shown as round symbol for comparison) is in good agreement with the data obtained from Lorentz fitting. The linewidth limit of 40~Hz in Fig.~\ref{fig:noise_PSD} is the narrowest intrinsic linewidth ever reported for a hybrid or heterogeneously integrated diode laser.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\linewidth]{./lorentzian_width_log}
\caption{\label{fig:lorentzian_width}Double logarithmic plot of Lorentzian linewidth versus the threshold factor, $X=(I_{\textrm{p}}-I_{\textrm{p,th}})/I_{\textrm{p,th}}$, which is proportional to the output power, $P_{\textrm{out}}$. Unfilled symbols show measurements vs decreasing power. Measurements vs increasing power (filled symbols) yield slightly smaller linewidths. The solid line is a least-square fit to the lower linewidth data with negative unity slope (inverse power law, $\propto P_{\textrm{out}}^{-1}$). The linewidth obtained from PSD measurements (Fig.~\ref{fig:noise_PSD}) is shown as a black round symbol at $X=5.07$ (255~mA pump current).}
\end{figure}
\section{Conclusions}
We have demonstrated a hybrid integrated and widely tunable single-frequency diode laser with an intrinsic linewidth as low as 40~Hz, a spectral coverage of more than 70~nm and a maximum fiber-coupled output of 23~mW. The narrow linewidth is achieved via feedback from a low-loss dielectric waveguide circuit that extends the laser cavity to a roundtrip optical length of $2L^{\textrm{(o)}}=0.5$~m, in combination with single-mode resolving filtering. Realizing such high-finesse filtering with cascaded microring resonators with essentially a single roundtrip through a long and low-loss feedback arm allows strong linewidth narrowing in the presence of significant laser cavity roundtrip losses. The tolerance to loss in this approach is important because semiconductor amplifiers are intrinsically lossy, as are the mode transitions between different waveguide platforms in hybrid or heterogeneously integrated photonic circuits. Choosing dielectric feedback waveguides based on silicon nitride is important for avoiding nonlinear loss because GW/cm$^2$-level intensities readily occur in lasers with tens of mW output and high-finesse intracavity filtering. The approach demonstrated here is promising for further linewidth narrowing through stronger pumping, as no hard linewidth limit through nonlinear loss is apparent with dielectric feedback circuits. Although further extension of the cavity length, in combination with tighter filtering, holds some promise, as the cavity photon lifetime is not yet fully saturated, our analysis shows that further significant improvement requires reduction of the propagation loss in the feedback circuit. This route appears very feasible because silicon nitride waveguides can be fabricated with extremely low loss down to 0.045~dB/m~\cite{bauters_2011OE}, while several meter long silicon nitride resonator circuits have been demonstrated with a spectral selectivity better than 100~MHz~\cite{taddei_2018PTL}. These properties and options indicate the feasibility of Hertz-level integrated diode lasers on a chip.
\section*{Funding}
This research was funded by the IOP Photonic Devices program of RVO (Rijksdienst voor Ondernemend Nederland), a division of the Ministry for Economic Affairs, The Netherlands and in part by the European Union's Horizon 2020 research and innovation programme under grant agreement 780502 (3PEAT).
\section*{Disclosures}
The authors declare no conflicts of interest.
\section{Introduction}
Quantum information theory is important for the multitude of promising new applications it offers in such varied fields as quantum communication and teleportation, quantum cryptography, quantum computing, etc. The concept of entanglement also plays crucial roles in black hole thermodynamics \cite{muko, levay} and in the information loss problem \cite{horo, ahn, adesso}, which have given rise to many studies aimed at measuring the generation and degradation of entanglement in a wide spectrum of systems. These studies include investigation of entanglement in both inertial \cite{alsing} and non-inertial frames \cite{alsingg, funtess, alsinggg, martin, telee, bruschi} as well as its generation in expanding spacetime \cite{funtes2pla, funtesprd} and in relativistic quantum fields \cite{4funtes}.
Although many of these works are far from being experimental, they are valuable as they offer a refined understanding of quantum information. In this paper, we explore the generation of entanglement using Schwinger pair production.
For this purpose, we will investigate the effect of background electric field on the generation of entanglement
for scalar and spinor fields.
It is well known that when an external electric field is applied to the quantum electrodynamical vacuum, the vacuum becomes unstable and decays into pairs of charged particles. In fact, the quantum vacuum is unstable under the influence of an external electric field, as the virtual electron-positron dipole pairs gain energy from the external field. When the field is sufficiently strong, these virtual pairs gain the threshold pair creation energy and become real pairs. This remarkable phenomenon was first predicted by F. Sauter \cite{sauter}, later refined by W. Heisenberg and H. Euler \cite{heisenberg} and formalized in the language of QED by Schwinger \cite{Schwinger1}, hence its designation nowadays as the Schwinger pair production effect. This phenomenon has been investigated by scholars from a variety of fields \cite{niki, dune}. Efforts in the late twentieth and early twenty-first centuries aimed at describing more realistic field configurations led to the development of different formalisms, such as the quantum kinetic approach, which were used for the numerical computation of the Schwinger effect \cite{blasch}. Other approaches used include the closely related scattering-like formalism in terms of the Riccati equation \cite{dumlu}, the Dirac-Heisenberg-Wigner formalism \cite{heben}, and the numerical worldline formalism \cite{gies}. The critical electric field required for pair creation is almost $10^{16}~\mathrm{V/cm}$, which is too large for direct observation. However, the feasibility of its experimental realization in ultra-intense laser field systems \cite{taji, dunne} has recently led to a re-thinking of the Schwinger effect. It has been realized that the Schwinger limit laser intensity of $4\times10^{29}~\mathrm{W/cm^{2}}$ is not necessarily a strict limit and might be lowered by several orders of magnitude through manipulating the form of the laser pulses \cite{rsch, gvd, piaza, bulanov, monin}. Furthermore, it has been proposed that the Schwinger pair production effect may be observed in graphene \cite{graphene}. These considerations motivated the authors to study the generation of entanglement using an electric field.
The present paper is organized as follows. In Section II, we utilize the Schwinger effect for scalar particles with zero spin and Dirac fermions in the presence of a constant electric field. We will demonstrate that a constant electric field can generate entanglement whose value can be determined. We will also consider the variation of the entanglement produced for bosonic and fermionic modes with respect to different parameters. In Section III, we extend our investigation to the pulsed electric field. Finally, conclusions will be presented in Section IV.
\section{ Entanglement Generation IN A Constant Electric Field }
The Minkowski vacuum becomes unstable by a strong electric field and decays into pairs of charged particles.
One can use the '$in$' and '$out$' formalism in order to investigate the entanglement generation. '$In$' and '$out$' are related to the asymptotic times $t= - \infty$ and $t= +\infty$, respectively. If the separable '$in$' state can be expanded in terms of the entangled '$out$' state, the generated entanglement can then be determined. The state of two particles $A$ and $B$ is a vector in a $(d\times d')$-dimensional Hilbert space $H_{ab}=H_{a}\otimes H_{b}$. The space $H_{ab}$ is the tensor product of the subspaces $H_{a}$ and $H_{b}$ of each particle. An element of the space $H_{ab}$ is written as $|\Phi\rangle_{ab}=\sum_{i,j} C_{ij}|i\rangle_{a}\otimes |j\rangle_{b}$. A state $|\Phi\rangle_{ab}\in H_{a}\otimes H_{b}$ is separable if $|\Phi\rangle_{ab}=|\Phi\rangle_{a} \otimes |\Phi\rangle_{b}$. An entangled state is a state that is not separable \cite{shi}.
In the following subsections, we study entanglement entropy for charged scalar and fermion particles in the presence of an electric field.
\subsection{ Entanglement Entropy For Scalar Particles }
In the study of entanglement generation, we use asymptotic solutions of equation of motion for charged scalar particles in the presence of an electric field.
Consider an electric field along the z-direction. It is related to the gauge potential through $E_{z}(t)=-\partial A_{z}(t)/\partial t$. For a scalar particle of mass $m$ and charge $q$, the Klein-Gordon equation on the four dimensional Minkowski spacetime with the metric $(+,-,-,-)$ is given by
\begin{eqnarray}
[(\partial_{\mu}-iqA_{\mu})(\partial^{\mu}-iqA^{\mu})+m^{2}]\phi(t,x)=0,\label{basicequation}
\end{eqnarray}
where, $A_{\mu}=(0,0,0,A_{z}(t))$ and $\phi$ is the scalar field.
For the purpose of the present subsection, we restrict ourselves to the constant electric field and rewrite Eq. (\ref{basicequation}) for $E_z(t)=E_{0}$:
\begin{eqnarray}
[\partial^{2}_{t}+m^{2}+{\hat k}^{2}_{\perp}+({\hat k}_{z}-qE_{0}t)^{2}]\phi_{ks}(t,x)=0\label{basic}
\end{eqnarray}
In the above equation, natural units are used in which $c=\hbar=1$ and $A_{z}(t)$ is replaced by $A_{z}(t)=-E_{0}t$. This equation is used for scalar particles with zero spin. After turning to the momentum space, we have
\begin{eqnarray}
\phi(t,r)=(2\pi)^{-3/2}\int dk \exp(ik.r)\widetilde\phi(t,k).
\end{eqnarray}
Therefore, Eq. (\ref{basic}) in the momentum space is given by
\begin{eqnarray}
[\partial^{2}_{t}+m^{2}+k^{2}_{\perp}+(k_{z}-qE_{0}t)^{2}]\widetilde\phi_{k}(t)=0\label{equation}
\end{eqnarray}
where, $\widetilde\phi_{k}(t)=\widetilde\phi(t,k)$ is the Fourier component of the Klein-Gordon equation for scalar particles and
$k^{2}_{\perp}=k^{2}_{x}+k^{2}_{y}$. Changing to the following convenient variables
\begin{eqnarray}
z&=&\sqrt{2}\xi e^{i\pi/4},\ \ \xi=\frac{(k_{z}-qE_{0}t)}{\sqrt{qE_0}},\nonumber\\
\nu&=&-\frac{1}{2}-i\frac{\mu}{2},\ \ \mu=\frac{m^{2}+k^{2}_{\perp}}{qE_0},\label{zp}
\end{eqnarray}
Eq. (\ref{equation}) will be converted to the following equation:
\begin{eqnarray}
[\partial^{2}_{z}+(\nu+\frac{1}{2}-\frac{z^{2}}{4})]\widetilde\phi_{\nu}(z)=0\label{solution}.
\end{eqnarray}
The solutions of Eq. (\ref{solution}) are the parabolic cylinder functions denoted by the symbol $D_{\nu}(z)$ \cite{gradshteyn}
\begin{eqnarray}
D_{\nu}(z)=2^{\frac{\nu-1}{2}}e^{\frac{-z^{2}}{4}}\Psi(\frac{1-\nu}{2},\frac{3}{2},\frac{z^{2}}{2})\label{confluent}
\end{eqnarray}
where, $\Psi (a,b,z)$ is the confluent hypergeometric function. The functions $D_{\nu}(-z)$ and $D_{-\nu-1}(\pm iz)$ also satisfy Eq. (\ref{solution})
\cite{gradshteyn}. The following linear relations between parabolic cylinder functions show how any three of the solutions are connected:
\begin{eqnarray}
D_{\nu}(z)&=&\frac{\Gamma(\nu+1)}{\sqrt{2\pi}}[e^{\frac{i\pi \nu}{2}}D_{-\nu-1}(iz)+e^{\frac{-i\pi \nu}{2}}D_{-\nu-1}(-iz)]\nonumber\\
&=&\frac{\sqrt{2\pi}}{\Gamma(-\nu)}e^{\frac{-i\pi (\nu+1)}{2}}D_{-\nu-1}(iz)+e^{-i\pi \nu}D_{\nu}(-z).\label{linear}
\end{eqnarray}
Therefore, there are precisely two linearly independent solutions of Eq. (\ref{solution}). For all values of $\nu$, $D_{\nu}(z)$ and $D_{-\nu-1}(\pm iz)$ are linearly independent. In order to calculate entanglement, we need the asymptotic solutions at $t_{in}\rightarrow - \infty$ and $t_{out}\rightarrow + \infty$ because we are interested in solutions with negative and positive frequencies. The asymptotic behavior of the solutions for large values of $|z|$ is given by \cite{gradshteyn}
\begin{eqnarray}
D_{\nu}(z)&\approx& e^{-z^{2}/4}z^{\nu},(\mid z\mid\gg\mid \nu\mid, \mid arg (z)\mid<\frac{3\pi}{4})
\end{eqnarray}
Using Eqs. (\ref{solution}) and (\ref{confluent}), one can find the asymptotic solution at $t_{in}=-\infty$ for a particle with momentum $k$ and charge $q$
\begin{eqnarray}
D_{\nu}(z)&=&D_{-\frac{1}{2}-i\frac{\mu}{2}}({\sqrt{2}}\xi e^{i\pi/4})\nonumber\\
&\approx&(2\xi^{2})^{(-i\mu-1)/4}e^{(\mu-i)\pi/8}e^{-i\xi^{2}/2}\label{particle},
\end{eqnarray}
where, $\xi\gg1$.
Using $k\rightarrow -k$ and $q\rightarrow -q$ in Eq. (\ref{equation}), we obtain another solution with negative frequency which describes an incoming antiparticle as below:
\begin{eqnarray}
D_{-\nu-1}(-iz)=D_{-\frac{1}{2}+i\frac{\mu}{2}}(\sqrt{2}\xi e^{-i\pi/4})\nonumber\\
\approx(2\xi^{2})^{(i\mu-1)/4}e^{(\mu+i)\pi/8}e^{i\xi^{2}/2}\label{antiparticle}.
\end{eqnarray}
In these solutions, the asymptotic phases and frequencies are:
\begin{eqnarray}
\pm\frac{1}{2}\xi^{2}&=& \frac{1}{2qE_0}k_{z}^2-k_{z}t+\frac{1}{2}qE_0t^2,\nonumber\\
\pm\partial_{t}\frac{1}{2} {\xi^{2}}&=&\pm(-k_{z}+qE_0t)\sim\pm \omega.
\end{eqnarray}
We can also find the sets of solutions at $t_{out}=+\infty$. For an outgoing particle with momentum $k$ and charge $q$, the convenient solution is
\begin{eqnarray}
D_{-\nu-1}(iz)=D_{-\frac{1}{2}-i\frac{\mu}{2}}(\sqrt{2}|\xi|e^{-i\pi/4}),
\end{eqnarray}
where, $|\xi|=\frac{-k_{z}+qE_0t}{\sqrt{qE_0}}\gg1$. In the same manner, $D_{\nu}(-z)$ describes an outgoing antiparticle.
Using Bogoliubov transformation \cite{carol}, one can expand the sets of solutions at $t_{in}=- \infty$ in terms of the sets of solutions at $t_{out}=+ \infty$ as follows:
\begin{eqnarray}
\phi_{in,k}^{+}=\alpha_{k} \phi_{out,k}^{+}+\beta_{k}\phi_{out,k}^{-}\label{inout},
\end{eqnarray}
where, $\alpha_{k}$ and $\beta_{k}$ are Bogoliubov coefficients. The '$in$' positive frequency mode $\phi_{in,k}^{+}=D_{\nu,in}(z)$ is expressed as a linear combination of the '$out$' positive $\phi_{out,k}^{+}=D_{-\nu-1,out}(iz)$ and negative $\phi_{out,k}^{-}=D_{\nu,out}(-z)$ frequency modes. Using a linear relation between $D_{\nu}(\pm z)$ and $D_{-\nu-1}(\pm iz)$, Eq. (\ref{linear}), one can achieve the Bogoliubov coefficients as follows:
\begin{eqnarray}
\alpha_{k}=\frac{\sqrt{2\pi}}{\Gamma(-\nu)}e^{\frac{-i\pi (\nu+1)}{2}},\ \ \beta_{k}=e^{-i\pi \nu}.\label{alpha}
\end{eqnarray}
Taking into account~\cite{gradshteyn}
\begin{eqnarray}
\frac{\pi}{|\Gamma(\frac{1}{2}+ix)|^{2}}=\cosh(\pi x),
\end{eqnarray}
these coefficients for scalar particles will satisfy the following relation
\begin{eqnarray}
|\alpha_{k}|^{2}-|\beta_{k}|^{2}=1.\label{bs}
\end{eqnarray}
Now, we calculate the entanglement which is generated by the background constant electric field.
It is necessary to specify the '$in$' and '$out$' states and operators. The operators $a_{k,in},b_{k,in}$ and $a_{k,out},b_{k,out}$
annihilate the '$in$' $|0_{k}0_{-k}\rangle_{in}$ and '$out$' $|0_{k}0_{-k}\rangle_{out}$ vacuum for each momentum, respectively.
\begin{eqnarray}
a_{k,in}|0_{k}0_{-k}\rangle_{in}&=&b_{k,in}|0_{k}0_{-k}\rangle_{in}=0\nonumber\\
a_{k,out}|0_{k}0_{-k}\rangle_{out}&=&b_{k,out}|0_{k}0_{-k}\rangle_{out}=0\label{vacu}
\end{eqnarray}
where, the (k,-k) subscripts indicate the particle and antiparticle modes. Using the Bogoliubov transformation, the relation
between these operators is given by \cite{carol}
\begin{eqnarray}
a_{k,in}&=&\alpha^{*}_{k}\ \ a_{k,out}-\beta^{*}_{k}\ \ b^{\dagger}_{k,out}\nonumber\\
b_{k,in}&=&\alpha^{*}_{k}\ \ b_{k,out}-\beta^{*}_{k}\ \ a^{\dagger}_{k,out}\label{oprator}
\end{eqnarray}
Through straightforward calculations, we now show that the separable '$in$' state can be expanded in terms of the entangled '$out$' state.
The state vector of the system can be described by the tensor product of
the two Hilbert spaces $H_{k}\bigotimes H_{k'}$, where $H_{k}$ indicates the Hilbert space related to particles and $H_{k'}$ to the antiparticles created by the electric field.
The $in$-vacuum state is defined by the absence of any mode excitations
$$|0\rangle_{in}=\prod_{kk'} (|0_{k}\rangle|0_{k'}\rangle)_{in},$$
Using the Schmidt decomposition, the in-vacuum state for each mode can be expanded in terms of the out-states \cite{Schmidt}
$$(|0_{k}\rangle|0_{-k}\rangle)_{in}=\sum_{n}c_{n}(|n_{k}\rangle|n_{-k}\rangle)_{out},$$
where, $n$ indicates the number of particles with momentum $k$ and the number of antiparticles with momentum $-k$ created by the electric field.
For simplicity, $|n_{k}\rangle|n_{-k}\rangle$ is replaced by
$|n_{k} n_{-k}\rangle$, and we will, therefore, have
\begin{eqnarray}
|0_{k}0_{-k}\rangle_{in}=\sum_{n}c_{n}|n_{k}n_{-k}\rangle_{out}.\label{schmit}
\end{eqnarray}
The state $|0_{k}0_{-k}\rangle_{in}$ is separable from the viewpoint of an observer in the $in$-region.
If more than one coefficient on the right hand side of Eq.~(\ref{schmit}) is non-zero, then the separable $in$-state appears entangled from the viewpoint of an inertial observer in the $out$-region. Therefore, we have to determine $c_{n}$ to evaluate the measure of the entanglement. For this purpose, we use the definition of vacuum and its normalization.
Substituting Eqs. (\ref{oprator}) and (\ref{schmit}) in the definition of vacuum
\begin{eqnarray}
& a_{k,in}&|0_{k}0_{-k}\rangle_{in}=0\nonumber\\
(\alpha^{*}_{k}\ \ a_{k,out}-\beta^{*}_{k} &b^{\dagger}_{k,out}&)\sum_{n}c_{n}|n_{k}n_{-k}\rangle_{out}=0,\label{difi}
\end{eqnarray}
leads to
\begin{eqnarray}
c_{n+1}=\frac{\beta^{*}_{k}}{\alpha^{*}_{k}}c_{n},
\end{eqnarray}
Normalization of vacuum, $$\langle 0_{k}0_{-k}|0_{k}0_{-k}\rangle_{in}=1$$ leads to
\begin{eqnarray}
\sum_{n} |c_{n}|^{2}&=&1\nonumber\\
&=&|c_{0}|^{2}(1+|\frac{\beta_{k}}{\alpha_{k}}|^{2}+|\frac{\beta_{k}}{\alpha_{k}}|^{4}+...).\label{vaccum}
\end{eqnarray}
Thus, the coefficients $c_{n}$ are given by:
\begin{eqnarray}
|c_{0}|^{2}&=&|\frac{1}{\alpha_{k}}|^{2}\nonumber\\
|c_{n}|^{2}&=&|\frac{\beta_{k}}{\alpha_{k}}|^{2n}|c_{0}|^{2}\nonumber\\
&=&(1-|c_{0}|^{2})^{n}|c_{0}|^{2}\label{cn}
\end{eqnarray}
Based on the values of $c_{n}$ thus obtained, we expect entanglement generation to occur. To quantify it, we utilize an appropriate measure of entanglement, namely the von Neumann entropy defined as follows:
\begin{eqnarray}
S(\rho_{k})=-Tr(\rho_{k}\log_{2}(\rho_{k})).\label{entropy}
\end{eqnarray}
First, we have to specify the density matrix of the whole system, $\rho_{k,-k}$, followed by reduced density matrix of the subsystem, $\rho_{k}$. All the
properties of the system can be deduced from the density matrix
\begin{eqnarray}
\rho_{k,-k}&=&|0_{k}0_{-k}\rangle_{in}\langle 0_{k}0_{-k}|\nonumber\\
&=&\sum_{n,m}c_{n}c^{*}_{m}|n_{k}n_{-k}\rangle_{out}\langle m_{k}m_{-k}|.
\end{eqnarray}
As we wish to deal with only one of the subsystems, we use the concept of reduced density matrix. One can find the reduced density matrix for the
subsystem related to the particles (denoted by k), obtained by tracing $\rho_{k,-k}$ over all the states of the subsystem related to the antiparticles
(denoted by -k), so that
\begin{eqnarray}
\rho_{k}&=&Tr_{-k}(\rho_{k,-k})\nonumber\\
&=&\sum_{l}\langle l_{-k}|\rho_{k,-k}|l_{-k}\rangle\nonumber\\
&=&\sum_{n}|c_{n}|^{2}|n_{k}\rangle\langle n_{k}|.\label{reducedmatrix}
\end{eqnarray}
The von Neumann entropy for scalar modes described by Eq. (\ref{entropy}) is given by
\begin{eqnarray}
S_{k}&=&-\sum_{n}|c_{n}|^{2}\log_{2}|c_{n}|^{2}\nonumber\\
&=&\log_{2}\frac{x^{\frac{x}{x-1}}}{1-x}\label{scalarentropy},
\end{eqnarray}
where, $x=|\frac{\beta_{k}}{\alpha_{k}}|^{2}$ and is determined by Eq. (\ref{alpha})
\begin{eqnarray}
x=\frac{|\beta_{k}|^{2}}{1+|\beta_{k}|^{2}},\ \ |\beta_{k}|^{2}= e^{\frac{-\pi(m^{2}+k_{\perp}^{2})}{qE_{0}}}.\label{betaalpha}
\end{eqnarray}
Therefore, we obtain the von Neumann entropy in terms of the electric field, the transverse momentum components, and the particle's mass and charge. Eq. (\ref{scalarentropy}) can be written in terms of $|\beta_{k}|^{2}$
\begin{eqnarray}
S_{k}=-|\beta_{k}|^{2} \log_{2}|\beta_{k}|^{2}+(1+|\beta_{k}|^{2})\log_{2}(1+|\beta_{k}|^{2})\label{sbetta}.
\end{eqnarray}
According to Eq. (\ref{sbetta}), an increase in the value of $|\beta_{k}|^{2}$ enhances the von Neumann entropy. Both the von Neumann entropy, $S_{k}$, and $|\beta_{k}|^{2}$ are increasing functions of $E_{0}$. The variation of $S_{k}$ and $|\beta_{k}|^{2}$ as functions of the electric field $E_{0}$ is shown in Fig. \ref{figE}. For large electric fields, $|\beta_{k}|^{2}$ tends to its maximum value, $|\beta_{k}|^{2}\rightarrow1$. Thus, since the entropy in Eq. (\ref{sbetta}) is a function of $|\beta_{k}|^{2}$ alone, it tends to a constant value ($S_{k}=2$) at large values of the electric field $E_{0}$.
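The behavior described above is straightforward to reproduce numerically. The following minimal Python sketch (in natural units, with illustrative values $m=1$, $k_{\perp}=0$ and $q=1$ chosen only for demonstration) evaluates Eq. (\ref{sbetta}) and also checks the closed form against a direct summation over the distribution $|c_{n}|^{2}$ of Eq. (\ref{cn}):
\begin{verbatim}
import math

def S_boson(beta2):
    # bosonic von Neumann entropy in terms of |beta_k|^2, Eq. (sbetta)
    return -beta2*math.log2(beta2) + (1+beta2)*math.log2(1+beta2)

def S_direct(beta2, nmax=2000):
    # direct summation over |c_n|^2 = (1-x) x^n with x = |beta/alpha|^2
    x = beta2 / (1 + beta2)
    cn2 = ((1-x)*x**n for n in range(nmax))
    return -sum(c*math.log2(c) for c in cn2 if c > 0.0)

m, kp, q = 1.0, 0.0, 1.0   # illustrative values (natural units)
for E0 in [0.5, 1.0, 5.0, 50.0, 500.0]:
    b2 = math.exp(-math.pi*(m**2 + kp**2)/(q*E0))
    print(f"E0={E0:6.1f}  S={S_boson(b2):.4f}  direct={S_direct(b2):.4f}")
# S grows monotonically with E0 and approaches S_k = 2 as |beta_k|^2 -> 1
\end{verbatim}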
$|\beta_{k}|^2$ is the mean number of the particles (antiparticles) produced in mode $k$ $(-k)$
\begin{eqnarray}
\langle 0_{k}0_{-k}|a_{out,k}^{\dagger} a_{out,k}|0_{k}0_{-k}\rangle_{in}=|\beta_{k}|^2.\label{production}
\end{eqnarray}
When the mean number of the produced pairs increases, the entanglement generated for bosonic modes will also increase. $S_{k}$ and $|\beta_{k}|^{2}$ exhibit similar behaviors for bosonic modes.
The variation of entanglement with respect to mass is shown in Fig. \ref{figm}. A specific electric field creates more particles of smaller mass than those of larger mass. Fig. \ref{figm} indicates that the measure of entanglement for fixed values of $q$, $k_{\perp}$ and $E_{0}$ is greater for particles with smaller mass than it is for those of larger mass. The maximum value of entanglement for fixed values of $q$, $k_{\perp}$ and $E_{0}$ occurs at $m=0$ and is obtained by substituting $|\beta_{k}|^{2}=\exp(\frac{-\pi k_{\perp}^{2}}{qE_{0}})$ in Eq. (\ref{sbetta}).
Since $m$ and $k_{\perp}$ enter $S_{k}$ only through the combination $m^{2}+k_{\perp}^{2}$, the behavior of the entanglement with respect to $k_{\perp}$ is similar to that with respect to $m$.
In Eq. (\ref{schmit}), we express $in$-vacuum in terms of $out$-states. The probability of vacuum-to-vacuum transition is given by
\begin{eqnarray}
|\langle 0,out|0,in\rangle|^{2}=|c_{0}|^{2}=\frac{1}{|\alpha_{k}|^{2}}.
\end{eqnarray}
The maximum value of $|c_{0}|^{2}$ is $1$; this means that the vacuums of the '$in$' and '$out$' regions are the same. A decreasing value of $|c_{0}|^{2}$ means that the initial vacuum decays into more pairs in the '$out$' region. Therefore, it is reasonable to suggest that a smaller value of $|c_{0}|^{2}$ leads to a more entangled state.
Since the value of $|\beta_{k}|^{2}$ ranges between $0$ and $1$, and also because $|\alpha_{k}|^{2}=1+|\beta_{k}|^{2}$, the minimum value of $|c_{0}|^{2}$ occurs at $|\beta_{k}|^{2}=1$.
\subsection{ Entanglement Entropy for Fermion Particles }
In this subsection, we will investigate the generation of entanglement for fermionic modes. We use asymptotic solution of equation of motion for charged fermion particles in the presence of an electric field.
Consider an electric field along the $z$-direction. It is related to the gauge potential through $E_{z}(t)=-\partial A_{z}(t)/\partial t$. For a particle of mass $m$ and charge $q$, the Dirac equation on the four dimensional Minkowski spacetime with the metric (+,-,-,-) is given by
\begin{eqnarray}
(i\gamma^{\mu}\partial_{\mu}-q\gamma^{\mu} A_{\mu}-m)\Psi(x)=0\label{dirac}
\end{eqnarray}
where, $A_{\mu}=(0,0,0,A_{z}(t))$. $\gamma^{\mu}$ and $\Psi$ are the Dirac matrix and spinors, respectively \cite{gama}.
One can introduce $\Psi(x)$ to have:
\begin{eqnarray}
\Psi(x)=(i\gamma^{\nu}\partial_{\nu}-q\gamma^{\nu} A_{\nu}+m)R(x)\label{psi}
\end{eqnarray}
The second order differential equation is
\begin{eqnarray}
&(&i\gamma^{\mu}\partial_{\mu}-q\gamma^{\mu} A_{\mu}-m)(i\gamma^{\nu}\partial_{\nu}-q\gamma^{\nu} A_{\nu}+m)R \label{equ}\nonumber\\
=&[&-\partial^{2}_{t}-m^{2}-\partial^{2}_{x}-\partial^{2}_{y}-(\partial_{z}-iqA_{z})^{2}+iqE_{z}\alpha^{3}] R\nonumber\\
\end{eqnarray}
where, $\alpha^{3}=\gamma^{0}\gamma^{3}=\left(
\begin{array}{cc}
0 & \sigma_{3} \\
\sigma_{3} &0 \\
\end{array}
\right)$.
We search for a solution of Eq. (\ref{equ}) of the following form:
\begin{eqnarray}
R(x)&=&\varphi(t,\mathbf{x}) \chi\nonumber\\
\varphi(t,\mathbf{x})&=&(2\pi)^{-3/2}\int e^{i\mathbf{k}.\mathbf{x}} \phi (t,\mathbf{k}) d\mathbf{k}\label{requ}
\end{eqnarray}
where, $\varphi(x)$ is a complex scalar function and $\chi$ designates the eigenbispinors of the $\alpha^{3}$ and $\Sigma_{3}=\frac{i}{2}\gamma^1\gamma^2$ :
\begin{eqnarray}
\chi_{+}^{\uparrow}&=&\left(
\begin{array}{c}
0\\
1\\
0\\
-1\\
\end{array}
\right),\chi_{+}^{\downarrow}=\left(
\begin{array}{c}
1\\
0\\
1\\
0\\
\end{array}
\right),\nonumber\\
\chi_{-}^{\uparrow}&=&\left(
\begin{array}{c}
1\\
0\\
-1\\
0\\
\end{array}
\right),\chi_{-}^{\downarrow}=\left(
\begin{array}{c}
0\\
1\\
0\\
1\\
\end{array}
\right),\nonumber\\
&&\alpha^{3}\chi_{\pm}^{s}=\pm\chi_{\pm}^{s},~~ (s\in\{\uparrow,\downarrow\})\nonumber\\
&&{\Sigma_{3}}\chi_{\pm}^{\uparrow}=+\chi_{\pm}^{\uparrow},~~\Sigma_{3}\chi_{\pm}^{\downarrow}=-\chi_{\pm}^{\downarrow},
\label{eigen}
\end{eqnarray}
$\Sigma_{3}$ is the matrix of the spin component along the direction of the electric field and commutes with $\alpha^{3}$.
Using Eqs. (\ref{equ}-\ref{eigen}) and substituting the standard representation for the Dirac matrix, we have
\begin{eqnarray}
[\partial^{2}_{t}+m^{2}+k^{2}_{\perp}+(k_{z}+qA_{z}(t))^{2}+ iqE(t)] \phi_{k,s}(t)\chi_{+}^{s}=0\label{second}\nonumber\\
\end{eqnarray}
where, $\phi_{k,s}(t)\chi_{+}^{s}$ and $\phi_{k,s}^{*}(t)\chi_{+}^{s}$ describe the particle and antiparticle with spin $s\in\{\uparrow,\downarrow\}$, respectively.
Then, the solutions of Eq. (\ref{second}) form a complete set.
Another solution of the second order differential equation (\ref{equ}) with negative eigenvalue , $- iqE(t)$, satisfies the following equation
\begin{eqnarray}
[\partial^{2}_{t}+m^{2}+k^{2}_{\perp}+(k_{z}+qA_{z}(t))^{2}- iqE(t)] \phi_{k,s}(t)\chi_{-}^{s}=0\label{secondorder}\nonumber\\
\end{eqnarray}
The second order differential equation (\ref{equ}) leads to Eqs.(\ref{second}) and (\ref{secondorder}), while the Dirac equation is a first order one. Therefore, it will suffice to have one complete set of solutions corresponding to either (\ref{second}) or (\ref{secondorder}). In fact, Eq. (\ref{secondorder}) does not lead to any new result. Therefore, we consider Eq. (\ref{second}) and write $R$ in the following form
\begin{eqnarray}
R(t,k)&=&\sum_{s} (c_{1}^{s}\phi_{k,s}(t)\chi_{+}^{s}+c_{2}^{s}\phi_{k,s}^{*}(t)\chi_{+}^{s})\label{R}
\end{eqnarray}
Using (\ref{psi}), $\Psi$ takes the following form
\begin{eqnarray}
\Psi(t,k)=\sum_{k,s}(a_{k,s}u_{k}^{s}(x)+ b^{\dagger}_{k,s} v_{k}^{s}(x))
\end{eqnarray}
with
\begin{eqnarray}
u_{k}^{s}(t)&=&(i\gamma^{0}\partial_{t}-\vec{\gamma}.(\vec{k}+q\vec{A})+m)\phi(t,k) \chi_{+}^{s}\nonumber\\
v_{k}^{s}(t)&=&(i\gamma^{0}\partial_{t}-\vec{\gamma}.(\vec{k}+q\vec{A})+m)\phi(t,k)^{*} \chi_{+}^{s}\label{uv}
\end{eqnarray}
According to Eq. (\ref{solution}), $ \phi_{k,s}(t)$ in Eq. (\ref{second}) are
parabolic cylinder functions
\begin{eqnarray}
D_{\nu}(\pm z), D_{-\nu-1}(\pm iz) \nonumber\\
\nu=-1-i\frac{\mu}{2} ,\ \mu=\frac{m^2+k^{2}_{\perp}}{q E}.\label{para}
\end{eqnarray}
Using Eqs. (\ref{uv}) and (\ref{para}) and the invariant inner product
\begin{eqnarray}
(f_{k}^{r}(t,x),g_{p}^{s}(t,x))=\int (f_{k}^{r}(t,x))^\dagger g_{p}^{s}(t,x)d^3x\label{innerproduct}
\end{eqnarray}
one can evaluate the Bogoliubov coefficients for Dirac's fermions in a background constant electric
field as follows
\begin{eqnarray}
(u^{s,in}_{k},u^{r,out}_{p})=\delta_{rs}\delta(\vec{k}-\vec{p})\alpha^{s}_{k}\nonumber\\
(u^{s,in}_{k},v^{r,out}_{p})=\delta_{rs}\delta(\vec{k}-\vec{p})\beta^{s}_{k} ,\label{fabb}
\end{eqnarray}
with
\begin{eqnarray}
\alpha^{s}_{k}&=&\alpha^{\uparrow}_{k}=\alpha^{\downarrow}_{k}=\sqrt{\frac{\mu}{\pi}}\Gamma(\frac{i\mu}{2}) \sinh(\frac{\pi\mu}{2})e^{-\frac{\pi\mu}{4}}\nonumber\\\label{fab}
\beta^{s}_{k}&=&\beta^{\uparrow}_{k}=\beta^{\downarrow}_{k}=e^{-\pi\frac{\mu}{2}}.
\end{eqnarray}
As indicated in Eq. (\ref{eigen}), $s\in\{\uparrow,\downarrow\}$ is related to the positive and negative eigenvalues of the matrix of the spin component along the direction of the electric field. Since the spin has no interaction with the electric field, the Bogoliubov coefficients for the up and down spins are the same.
Taking \cite{gradshteyn} into account
\begin{eqnarray}
|\Gamma(ix)|^{2}=\frac{\pi}{x\sinh \pi x}
\end{eqnarray}
these coefficients will satisfy the relation below
\begin{eqnarray}
|\alpha|^{2}+|\beta|^{2}=1\label{relation}
\end{eqnarray}
The relationship between the '$in$' and the '$out$' operators is expressed by
\begin{eqnarray}
a_{d,out}=\alpha_{d}\ \ a_{d,in}-\beta^{*}_{d}\ \ b^{\dagger}_{d,in}\nonumber\\
b^{\dagger}_{d,out}=\alpha^{*}_{d}\ \ b^{\dagger}_{d,in}+\beta_{d}\ \ a_{d,in},\label{abspinor}
\end{eqnarray}
\noindent where, $a_{d}$ and $b_{d}$ are the annihilation operators for particle and antiparticle, respectively, and subscript $d$ stands for momentum $k$ and
spin $s\in\{\uparrow,\downarrow\}$.
The vacuum state is given by
\begin{eqnarray}
|0\rangle_{in}=\prod_{k,k',s}(|0^{s}_{k}\rangle|0^{s}_{k'}\rangle)_{in}.
\end{eqnarray}
Using the Schmidt decomposition and Pauli exclusion principle, we can expand the '$in$' vacuum state in terms of the '$out$' states for a single mode $k$
\begin{eqnarray}
|0_{k}0_{-k}\rangle_{in}&\equiv&(|0_{k}^{\uparrow}0_{-k}^{\downarrow}\rangle|0_{k}^{\downarrow}0_{-k}^{\uparrow}\rangle)_{in}=\sum_{n=0,1}c'_{n}|n_{k}^{\uparrow}n_{-k}^{\downarrow}\rangle_{out}
\sum_{m=0,1}c^{''}_{m}|m_{k}^{\downarrow}m_{-k}^{\uparrow}\rangle_{out} \nonumber\\
&=&c_{0}|0^{\uparrow}_{k}0^{\downarrow}_{-k}\rangle_{out}|0^{\downarrow}_{k}0^{\uparrow}_{-k}\rangle_{out}
+ c_{1}|1^{\uparrow}_{k}1^{\downarrow}_{-k}\rangle_{out}|1^{\downarrow}_{k}1^{\uparrow}_{-k}\rangle_{out}\nonumber\\
&+&
c_{2} |1^{\uparrow}_{k}1^{\downarrow}_{-k}\rangle_{out}|0^{\downarrow}_{k}0^{\uparrow}_{-k}\rangle_{out}
+
c_{3} |0^{\uparrow}_{k}0^{\downarrow}_{-k}\rangle_{out} |1^{\downarrow}_{k}1^{\uparrow}_{-k}\rangle_{out} ,\label{fermionstate}
\end{eqnarray}
where, the products $c'_{n}c''_{m}$ have been relabeled as $c_{i}$, and the symbol $\uparrow$ ($\downarrow$) indicates up (down) spin.
Imposing $a_{d,in}|0_{k}0_{-k}\rangle_{in}=0$, $(\langle 0_{k}0_{-k}|0_{k}0_{-k}\rangle)_{in}=1$ and using Eq. (\ref{abspinor}),
we obtain the four coefficients $c_{i}$
\begin{eqnarray}
|c_{0}|^{2}&=&|\alpha^{\uparrow}_{k}|^{2}|\alpha^{\downarrow}_{k}|^{2},\ \ |c_{1}|^{2}=|\beta^{\uparrow}_{k}|^{2}|\beta^{\downarrow}_{k}|^{2}\nonumber\\
|c_{2}|^{2}&=&|\alpha^{\uparrow}_{k}|^{2}|\beta^{\downarrow}_{k}|^{2},\ \ |c_{3}|^{2}=|\alpha^{\downarrow}_{k}|^{2}|\beta^{\uparrow}_{k}|^{2}.\label{cnf}
\end{eqnarray}
The states in (\ref{fermionstate}) are designated by A, B, C and D.
\begin{eqnarray}
|0_{A}\rangle_{out}&\equiv&|0^{\uparrow}_{k}\rangle_{out},\ \ |1_{A}\rangle_{out}\equiv|1^{\uparrow}_{k}\rangle_{out},\nonumber\\
|0_{B}\rangle_{out}&\equiv&|0^{\downarrow}_{k}\rangle_{out},\ \ |1^{\downarrow}_{k}\rangle_{out}\equiv|1_{B}\rangle_{out},\nonumber\\
|0_{C}\rangle_{out}&\equiv&|0^{\uparrow}_{-k}\rangle_{out},\ \ |1^{\uparrow}_{-k}\rangle_{out}\equiv|1_{C}\rangle_{out},\nonumber\\
|0_{D}\rangle_{out}&\equiv& |0^{\downarrow}_{-k}\rangle_{out},\ \ |1^{\downarrow}_{-k}\rangle_{out}\equiv|1_{D}\rangle_{out},\nonumber\\\label{ABCD}
\end{eqnarray}
Using the representation (\ref{ABCD}), the '$in$' vacuum state is expressed by
\begin{eqnarray}
(|0_{k}^{\uparrow}0_{-k}^{\downarrow}\rangle|0_{k}^{\downarrow}0_{-k}^{\uparrow}\rangle)_{in}=|\Psi_{ABCD}\rangle_{out}.
\end{eqnarray}
We can calculate the measure of entanglement between one part of the system and the rest.
$S_{A(BCD)}$, for example, is the measure of entanglement between the state A and the states B, C, D, as follows
\begin{eqnarray}
S_{A(BCD)}=-Tr(\rho_{A}\log_{2}\rho_{A}).\label{sa}
\end{eqnarray}
The reduced density operator $\rho_{A}$ in $S_{A(BCD)}$ after tracing on B, C, and D is given by
\begin{eqnarray}
\rho_{A}&=&Tr_{(BCD)}(|\Psi_{ABCD}\rangle_{out}\langle\Psi_{ABCD}|)\nonumber\\
&=&(|c_{0}|^{2}+|c_{3}|^{2})|0_{A}\rangle_{out}\langle 0_{A}|\nonumber\\
&+&(|c_{1}|^{2}+|c_{2}|^{2})|1_{A}\rangle_{out}\langle 1_{A}|\nonumber\\
&=&|\alpha_{k}|^{2}|0_{A}\rangle_{out}\langle 0_{A}|+|\beta_{k}|^{2}|1_{A}\rangle_{out}\langle 1_{A}|.\label{ro}
\end{eqnarray}
$S_{A(BCD)}$, the entanglement entropy between a spin-up particle with mode $k$ and the rest of the system, is obtained as
\begin{eqnarray}
S_{A(BCD)}=-|\alpha_{k}|^{2}\log_{2}(|\alpha_{k}|^{2})-|\beta_{k}|^{2}\log_{2}(|\beta_{k}|^{2}).
\end{eqnarray}
In the same manner, $S_{B(ACD)}$, $S_{C(ABD)}$ and $S_{D(ABC)}$ can be calculated and their values are equal to $S_{A(BCD)}$
\begin{eqnarray}
S_{k}=-|\alpha_{k}|^{2}\log_{2}(|\alpha_{k}|^{2})-|\beta_{k}|^{2}\log_{2}(|\beta_{k}|^{2}),\ \ |\beta_{k}|^{2}=e^{\frac{-\pi (m^{2}+k_{\perp}^{2})}{qE_{0}}},\ \ |\alpha_{k}|^{2}=1-|\beta_{k}|^{2}\label{ssfermion}
\end{eqnarray}
We can also get the average von Neumann entropy \cite{gilad} as follows
\begin{eqnarray}
S=\frac{1}{4}(S_{A(BCD)}+S_{B(ACD)}+S_{C(ABD)}+S_{D(ABC)})\label{sfermion}
\end{eqnarray}
As expected, all of the entanglement entropies in Eq. (\ref{sfermion}) have the same value, because the electric field cannot distinguish spin-up and spin-down particles. In other words, each of the von Neumann entropies in Eq. (\ref{sfermion}) measures the entanglement between one part of the system, namely a particle (antiparticle) with mode $k$ and spin $s$, and the rest of the system.
Eqs. (\ref{fermionstate}) and (\ref{cnf}) indicate the expansion of the $in$-vacuum in terms of the '$out$' states. Based on (\ref{relation}), $|\alpha_{k}|^2$ and $|\beta_{k}|^2$ for fermionic modes range between $0$ and $1$. If $|\alpha_{k}|^2$ and $|\beta_{k}|^2$ take the value zero or one, we will have a separable state that leads to zero entanglement. The maximum entanglement occurs when all the coefficients $c_{n}$ in Eq. (\ref{fermionstate}) are equal and nonzero. Therefore, the entanglement will have its maximum value at $|\alpha_{k}|^2=|\beta_{k}|^2=\frac{1}{2}$.
The behavior of the entanglement entropy for fermionic modes is shown in Figs. \ref{figEf} and \ref{figmf}. In very small or very large electric fields, the value of $|\beta_{k}|^2$ tends to $0$ or $1$, respectively; therefore, the entropy has its minimum value, $S_{min}=0$, as indicated in Fig. \ref{figEf}. The maximum value of $S$ can be deduced from
\begin{eqnarray}
\frac{\partial S}{\partial E_{0}}=0,
\end{eqnarray}
and, therefore,
\begin{eqnarray}
E_{0}=\frac{\pi (m^{2}+k_{\perp}^{2})}{q\ln(2)},\label{smax}
\end{eqnarray}
which is equivalent to $|\beta_{k}|^2=\frac{1}{2}$. In Fig. \ref{figmf} the entanglement entropy for massless particles is obtained by substituting $|\beta_{k}|^{2}=\exp(\frac{-\pi k_{\perp}^{2}}{qE_{0}})$ in Eq. (\ref{ssfermion}). Large values of the electric field correspond to large values of $|\beta_{k}|^2$ and, therefore, to small values of $S_{k}$. According to Eq. (\ref{ssfermion}), for fixed values of $E_{0}$, $k_{\perp}$ and $q$, the maximum value of entropy is equal to one for $m^{2}=\frac{E_{0}q\ln(2)}{\pi}-k_{\perp}^{2}$.
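A short numerical check (again with illustrative natural-unit values, here $q=1$ and $m^{2}+k_{\perp}^{2}=1$) confirms that the fermionic entropy of Eq. (\ref{ssfermion}) reaches its maximum value of one exactly at the field given by Eq. (\ref{smax}):
\begin{verbatim}
import math

def S_fermion(beta2):
    # binary entropy of a fermionic mode with |alpha|^2 = 1 - |beta|^2
    a2 = 1 - beta2
    return -a2*math.log2(a2) - beta2*math.log2(beta2)

q, m2kp2 = 1.0, 1.0                       # illustrative values
E_max = math.pi*m2kp2/(q*math.log(2))     # Eq. (smax)
for E0 in [0.5*E_max, E_max, 2*E_max]:
    b2 = math.exp(-math.pi*m2kp2/(q*E0))
    print(f"E0={E0:.3f}  |beta|^2={b2:.4f}  S={S_fermion(b2):.4f}")
# the middle line gives |beta|^2 = 0.5 and S = 1, the maximum
\end{verbatim}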
\section{THE SAUTER-TYPE ELECTRIC FIELD AND THE ENTANGLEMENT GENERATION}
In the previous Section, we showed that the constant electric field can generate the entanglement and worked out its variation. Now, one can explore entanglement generation by the
pulsed electric field for scalar particles and Dirac fermions.
\subsection{ Scalar Particles }
Eq. (\ref{equation}) can be rewritten for the Sauter-type electric field along the $z$ direction, $E(t)=E_{0}\,\mathrm{sech}^{2}(t/\tau)$ \cite{sauter}, in which $\tau$ is the width of the electric field pulse.
One can choose $A_{\mu}$ as
\begin{eqnarray}
A_{\mu}=(0,0,0,-E_{0}\tau \tanh(\frac{t}{\tau})),
\end{eqnarray}
The Fourier time component of the Klein-Gordon equation for the scalar particle with zero spin satisfies the equation
\begin{eqnarray}
[\partial^{2}_{t}+\omega^{2}_{k}(t)]\phi_{k}(t)=0,\label{Etss}
\end{eqnarray}
where
\begin{eqnarray}
\omega^{2}_{k}(t)=[k_{z}-qE_{0}\tau \tanh(\frac{t}{\tau})]^{2}+k^{2}_{\perp}
+m^{2}
\end{eqnarray}
As before, we use the asymptotic solutions at $t_{in}=-\infty$ and $t_{out}=\infty$ in order to expand the separable '$in$' state in terms of the entangled '$out$' state.
In the following, the Bogoliubov coefficients which relate the asymptotic solutions to each other are used to obtain the reduced density matrix and the von Neumann entropy for scalar particles.
Ref. \cite{gradshteyn} gives two linearly independent solutions of Eq. (\ref{Etss}):
\begin{eqnarray}
\phi_{k}(t)= (z-1)^{i\tau\omega_{k,out}/2}z^{-i\tau\omega_{k,in}/2} [C_{1} F(a,b,c;z)\nonumber\\
+C_{2}z^{1-c}F(a-c+1,b-c+1,2-c;z)],\label{hyper}\nonumber\\
\end{eqnarray}
where, $F$ is the hypergeometric function, and
\begin{eqnarray}
z&=&\frac{1}{2}\tanh(\frac{t}{\tau})+\frac{1}{2}, \lambda=\sqrt{(qE_{0}\tau^{2})^{2}-\frac{1}{4}},\nonumber\\
a&=&\frac{1}{2}+\frac{i}{2}(\tau\omega_{k,out}-\tau\omega_{k,in})-i\lambda,\nonumber\\
b&=&\frac{1}{2}+\frac{i}{2}(\tau\omega_{k,out}-\tau\omega_{k,in})+i\lambda,\nonumber\\
c&=&1-i\tau\omega_{k,in}.\label{z}
\end{eqnarray}
in which, $\omega_{in}$ and $\omega_{out}$ are the kinetic energies of the field modes at asymptotic times $t_{in}=-\infty$ and $t_{out}=\infty$
\begin{eqnarray}
\omega_{k,in}&=&\sqrt{(k_{z}+qE_{0}\tau)^{2}+k^{2}_{\perp}+m^{2}},\nonumber\\
\omega_{k,out}&=&\sqrt{(k_{z}-qE_{0}\tau)^{2}+k^{2}_{\perp}+m^{2}}.
\end{eqnarray}
As we know, the asymptotic solutions are related through the Bogoliubov coefficients. Using the properties of the hypergeometric function, discussed in more detail in the Appendix, one can find the Bogoliubov coefficients as follows
\begin{eqnarray}
|\beta_{k}|^2&=&\frac{\cosh [\pi\tau(\omega_{out}-\omega_{in})]+\cosh (2\pi\lambda)}{2\sinh(\pi\tau\omega_{in})\sinh(\pi\tau\omega_{out})}\nonumber\\
|\alpha_{k}|^2&=&\frac{\cosh [\pi\tau(\omega_{out}+\omega_{in})]+\cosh (2\pi\lambda)}{2\sinh(\pi\tau\omega_{in})\sinh(\pi\tau\omega_{out})}\label{b2a22}.
\end{eqnarray}
\noindent We can use the method described in Section II above to
expand the '$in$'- vacuum in terms of the '$out$' state and specify the coefficient $c_{n}$.
Using Eqs. (\ref{cn}-\ref{scalarentropy}) and (\ref{b2a22}) to obtain the von Neumann entropy for scalar particles in the background of the Sauter-type electric field, we have
\begin{eqnarray}
S&=&\log_{2}\frac{x^{\frac{x}{x-1}}}{1-x}\nonumber\\
x&=& \frac{|\beta_{k}|^2}{|\alpha_{k}|^2}=\frac{\cosh [\pi\tau(\omega_{out}-\omega_{in})]+\cosh (2\pi\lambda)}{\cosh [\pi\tau(\omega_{out}+\omega_{in})]+\cosh (2\pi\lambda)}.\nonumber\\\label{alphaboson}
\end{eqnarray}
Variation of the entanglement entropy for the Sauter-type electric field with respect to $E_{0}$, $k_{z}$, and $\tau$ is indicated in Figs. \ref{figbtE}-\ref{figbtt}. According to Fig. \ref{figbtE}, for a small value of $\tau$ the entanglement entropy has a local maximum, while for large values of the same parameter the behavior of the entanglement is similar to that depicted in Fig. \ref{figE}. Dependence of $S$ on the longitudinal momentum, $k_{z}$, is indicated in Fig. \ref{figbtk}. There is a peak for a small value of $\tau$, while this dependence becomes weaker for large values of $\tau$. As mentioned before, for bosonic modes higher values of $|\beta_{k}|^{2}$ lead to greater entanglement. In other words, the behavior of $S$ is similar to that of $|\beta_{k}|^{2}$.
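The Sauter-pulse expressions can be evaluated directly. The sketch below (with illustrative natural-unit values $m=1$, $k_{\perp}=k_{z}=0$, $q=1$, $\tau=1$ and $E_{0}=1$, chosen only for demonstration) computes the Bogoliubov coefficients of Eq. (\ref{b2a22}), verifies the bosonic relation $|\alpha_{k}|^{2}-|\beta_{k}|^{2}=1$ and evaluates the entropy of Eq. (\ref{alphaboson}):
\begin{verbatim}
import math

def bogoliubov_boson(E0, tau, m=1.0, kp=0.0, kz=0.0, q=1.0):
    w_in  = math.sqrt((kz + q*E0*tau)**2 + kp**2 + m**2)
    w_out = math.sqrt((kz - q*E0*tau)**2 + kp**2 + m**2)
    lam   = math.sqrt((q*E0*tau**2)**2 - 0.25)
    den   = 2*math.sinh(math.pi*tau*w_in)*math.sinh(math.pi*tau*w_out)
    b2 = (math.cosh(math.pi*tau*(w_out-w_in)) + math.cosh(2*math.pi*lam))/den
    a2 = (math.cosh(math.pi*tau*(w_out+w_in)) + math.cosh(2*math.pi*lam))/den
    return a2, b2

a2, b2 = bogoliubov_boson(E0=1.0, tau=1.0)
x = b2 / a2
S = math.log2(x**(x/(x-1)) / (1-x))   # Eq. (alphaboson)
print(f"|alpha|^2-|beta|^2 = {a2-b2:.6f} (should be 1), S = {S:.4f}")
\end{verbatim}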
\subsection{ Fermion Particles }
In this subsection, we explore the variation of the entanglement generated by the Sauter-type electric field between fermionic modes. Repeating the procedure used for the constant electric field, Eqs. (\ref{dirac}-\ref{second}), one finds that the spin-diagonal component
of the Dirac equation for spinor QED satisfies the equation
\begin{eqnarray}
[\partial^{2}_{t}&+&[k_{z}-qE_{0}\tau \tanh(\frac{t}{\tau})]^{2}+k^{2}_{\perp}+m^{2}\nonumber\\
&+& iqE_{0}\,\mathrm{sech}^{2}(t/\tau)]\phi_{k}(t)=0,\label{Eft}
\end{eqnarray}
Two linearly independent solutions of Eq. (\ref{Eft}) are given by
\begin{eqnarray}
\phi_{k,s}(t)= (z-1)^{i\tau\omega_{k,out}/2}z^{-i\tau\omega_{k,in}/2} [C_{1} F(a,b,c;z)\nonumber\\
+C_{2}z^{1-c}F(a-c+1,b-c+1,2-c;z)],\label{hyperf}\nonumber\\
\end{eqnarray}
with
\begin{eqnarray}
z&=&\frac{1}{2}\tanh(\frac{t}{\tau})+\frac{1}{2}, \lambda=qE_{0}\tau^{2},\nonumber\\
a&=&\frac{i}{2}(\tau\omega_{k,out}-\tau\omega_{k,in})\pm i\lambda,\nonumber\\
b&=&1+\frac{i}{2}(\tau\omega_{k,out}-\tau\omega_{k,in})\mp i\lambda,\nonumber\\
c&=&1-i\tau\omega_{k,in}.\label{zf}
\end{eqnarray}
Using Eqs. (\ref{R},\ref{uv},\ref{innerproduct},\ref{fabb}) and the properties of the hypergeometric function, one can find the Bogoliubov coefficients as follows:
\begin{eqnarray}
|\beta^{\uparrow}_{k}|^{2}&=&|\beta^{\downarrow}_{k}|^{2}=|\beta_{k}|^{2}=\frac{\cosh(2\pi\lambda)-\cosh[\pi\tau(\omega_{out}-\omega_{in})]}{2\sinh(\pi\tau\omega_{in})\sinh(\pi\tau\omega_{out})}\nonumber\\
|\alpha^{\uparrow}_{k}|^{2}&=&|\alpha^{\downarrow}_{k}|^{2}=|\alpha_{k}|^{2}=\frac{\cosh[\pi\tau(\omega_{out}+\omega_{in})]-\cosh(2\pi\lambda)}{2\sinh(\pi\tau\omega_{in})\sinh(\pi\tau\omega_{out})}\nonumber\\
\label{alphabe}
\end{eqnarray}
where $\uparrow$ ($\downarrow$) indicates spin up (down).
Exploiting the method described in Section II, we may expand the '$in$' vacuum in terms of the '$out$' states according to Eq. (\ref{fermionstate}). The coefficients $c_{0},\dots,c_{3}$ given by Eq. (\ref{cnf}) may now be used to obtain the reduced density matrix and, thereby, the average von Neumann entropy of Eq. (\ref{sfermion}). In fact, the relations (\ref{abspinor}-\ref{ro}) remain valid. Using these relations, the von Neumann entropy is given by
\begin{eqnarray}
S=-|\alpha_{k}|^{2}\log_{2}(|\alpha_{k}|^{2})-|\beta_{k}|^{2}\log_{2}(|\beta_{k}|^{2})\label{sssfermion},
\end{eqnarray}
where $|\alpha_{k}|^2$ and $|\beta_{k}|^2$ are specified by Eq. (\ref{alphabe}).
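As in the bosonic case, a short numerical sketch (ours; the parameter values are illustrative only) can be used to check Eqs. (\ref{alphabe}) and (\ref{sssfermion}); here the relevant consistency condition is the fermionic normalization $|\alpha_{k}|^2+|\beta_{k}|^2=1$:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30

def fermion_coeffs(kz, kperp, m, qE0, tau):
    w_in  = mp.sqrt((kz + qE0*tau)**2 + kperp**2 + m**2)
    w_out = mp.sqrt((kz - qE0*tau)**2 + kperp**2 + m**2)
    lam = qE0*tau**2                       # Eq. (zf)
    den = 2*mp.sinh(mp.pi*tau*w_in)*mp.sinh(mp.pi*tau*w_out)
    b2 = (mp.cosh(2*mp.pi*lam) - mp.cosh(mp.pi*tau*(w_out - w_in)))/den
    a2 = (mp.cosh(mp.pi*tau*(w_out + w_in)) - mp.cosh(2*mp.pi*lam))/den
    return a2, b2

a2, b2 = fermion_coeffs(kz=0.5, kperp=0.3, m=1.0, qE0=2.0, tau=1.0)
print(mp.nstr(a2 + b2, 10))               # fermionic normalization: prints 1.0
S = -a2*mp.log(a2, 2) - b2*mp.log(b2, 2)  # Eq. (sssfermion)
print(mp.nstr(S, 8))                      # S = 1 exactly when |beta|^2 = 1/2
\end{verbatim}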
The variations of the entanglement with respect to $E_{0}$, $\tau$ and $k_{z}$ are shown in Figs. \ref{figftE}-\ref{figftt}. The behavior of $S$ with respect to $m$ and $k_{\perp}$ is the same for the pulsed and constant electric fields. Eq. (\ref{sssfermion}) indicates that the maximum value of the entanglement entropy occurs at $|\beta|^2=\frac{1}{2}$. The minimum value of $S_{k}$ occurs when $|\beta|^2$ is equal to one or zero.
When $\tau\rightarrow\infty$, the pulsed electric field $E(t)=E_{0}\,\mathrm{sech}^{2}(t/\tau)$ tends to the constant electric field $E_{0}$. Also, in this limit, the Bogoliubov coefficients for the bosonic and fermionic modes, Eqs. (\ref{b2a22}) and (\ref{alphabe}), reduce to those of Eqs. (\ref{alpha}) and (\ref{fab}). Since the generated entanglement is expressed in terms of the Bogoliubov coefficients, the behavior of $S$ generated by the pulsed electric field tends to that of the constant electric field for large values of $\tau$.
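This limit can also be probed numerically. The sketch below (ours) assumes that the constant-field result of Eq. (\ref{alpha}) takes the standard Schwinger form $|\beta_{k}|^2=e^{-\pi(k_{\perp}^{2}+m^{2})/qE_{0}}$ at $k_{z}=0$, and checks that the Sauter-field coefficient of Eq. (\ref{b2a22}) approaches it as $\tau$ grows:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 50   # intermediate values grow like exp(2*pi*qE0*tau^2)

def boson_b2(kz, kperp, m, qE0, tau):
    w_in  = mp.sqrt((kz + qE0*tau)**2 + kperp**2 + m**2)
    w_out = mp.sqrt((kz - qE0*tau)**2 + kperp**2 + m**2)
    lam = mp.sqrt((qE0*tau**2)**2 - mp.mpf(1)/4)
    den = 2*mp.sinh(mp.pi*tau*w_in)*mp.sinh(mp.pi*tau*w_out)
    return (mp.cosh(mp.pi*tau*(w_out - w_in)) + mp.cosh(2*mp.pi*lam))/den

qE0, m, kperp = 2.0, 1.0, 0.3
schwinger = mp.exp(-mp.pi*(kperp**2 + m**2)/qE0)  # assumed constant-field limit
for tau in (2, 4, 8):
    print(tau, mp.nstr(boson_b2(0.0, kperp, m, qE0, tau), 8),
          mp.nstr(schwinger, 8))
\end{verbatim}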
\section{ CONCLUSION}
We applied the Schwinger pair production theory to constant and pulsed electric fields on a Minkowski spacetime to demonstrate that a background electric field can generate entanglement. We worked out the entanglement entropy for scalar particles and Dirac fermions created by the background electric field. The behavior of the entanglement was also depicted in figures with respect to different parameters.
For a constant electric field, the entanglement generated for boson modes as a function of $E_{0}$ was found to be increasing, tending to a specific constant value in the limit of sufficiently large fields (Fig. \ref{figE}), while it monotonically decreased with respect to $m$ and $k_{\perp}$ (Fig. \ref{figm}). For the fermionic mode, however, the situation was found to be very different: optimal values of $E_{0}$, $m$ and $k_{\perp}$ were observed for which the entanglement entropy is maximized (Figs. \ref{figEf} and \ref{figmf}).
In the case of a pulsed electric field, the behavior of the entanglement generated by bosonic and fermionic modes with respect to $m$ and $k_{\perp}$ was observed to be similar to that in the case of the constant electric field. However, for small values of $\tau$, the bosonic entanglement as a function of $E_{0}$ was seen to have a local maximum (Fig. \ref{figbtE}). High-power laser pulses are a good candidate for generating entangled states experimentally.
The authors intend to explore the generation of entanglement by other forms of electric and magnetic fields in the future. Another interesting area of
research is the study of entanglement generation by background electromagnetic fields at finite temperature.
\section{Appendix A: Hypergeometric function and Bogoliubov coefficients}
The hypergeometric equation admits $24$ solutions expressible through the hypergeometric function $F(a,b,c;z)$. These solutions are called Kummer's series and can be arranged in six sets such that the four series belonging to each set represent the same function; any three of these are connected by a linear relation with constant coefficients \cite{gradshteyn}. We choose appropriate solutions among the six sets for $t=+\infty$ and $t=-\infty$. As we know, the asymptotic solutions are related through the Bogoliubov coefficients
\begin{eqnarray}
\phi_{k,in}= \alpha_{k}\phi_{k,out}+\beta_{k}\phi^{*}_{k,out}\label{bg}.
\end{eqnarray}
Using the behavior of the hypergeometric function, we can find the Bogoliubov coefficients. According to Eq. (\ref{z}), the value of $z$ at $t=-\infty$ ($t=+\infty$) is $0$ ($1$). Therefore, the hypergeometric functions can be appropriately expressed in terms of $z$ and $1-z$.
Using the linear relations between the hypergeometric functions \cite{gradshteyn},
\begin{widetext}
\begin{eqnarray}
F(a,b,c;z)=\lambda_{11} F(a,b,a+b-c+1;1-z) +\lambda_{12}(1-z)^{c-a-b} F(c-a,c-b,c-a-b+1;1-z)\label{f1}
\end{eqnarray}
and
\begin{eqnarray}
z^{1-c}F(a-c+1,b-c+1,2-c;z)&=&\lambda_{21} F(a,b,a+b-c+1;1-z)\nonumber\\
&+&\lambda_{22}(1-z)^{c-a-b}F(c-a,c-b,c-a-b+1;1-z),\label{f2}
\end{eqnarray}
where
\begin{eqnarray}
\lambda_{11}&=&\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)},\ \ \lambda_{21} =\frac{\Gamma(2-c)\Gamma(c-a-b)}{\Gamma(1-a)\Gamma(1-b)}\nonumber\\
\lambda_{12}&=&\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)},\ \ \lambda_{22} =\frac{\Gamma(2-c)\Gamma(a+b-c)}{\Gamma(a-c+1)\Gamma(b-c+1)},\label{f3}
\end{eqnarray}
\end{widetext}
one can expand the sets of solutions at $t_{in}=-\infty$ in terms of the sets of solutions at $t_{out}=+\infty$.
Using Eqs. (\ref{bg}-\ref{f3}), we have
\begin{eqnarray}
\frac{|\beta_{k}|^2}{|\alpha_{k}|^2}=\frac{| \lambda_{12}|^2}{| \lambda_{11}|^2}=\frac{| \lambda_{22}|^2}{| \lambda_{21}|^2}\label{b2a2}.
\end{eqnarray}
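As an independent cross-check of Eq. (\ref{b2a2}), one can evaluate $|\lambda_{12}|^2/|\lambda_{11}|^2$ directly from the Gamma functions of Eq. (\ref{f3}), with $a$, $b$, $c$ as in Eq. (\ref{z}), and compare it with the closed-form ratio $x=|\beta_{k}|^2/|\alpha_{k}|^2$ of Eq. (\ref{alphaboson}). The following Python sketch (ours; the parameter values are illustrative) does exactly this:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30
I = mp.mpc(0, 1)

kz, kperp, m, qE0, tau = 0.5, 0.3, 1.0, 2.0, 1.0
w_in  = mp.sqrt((kz + qE0*tau)**2 + kperp**2 + m**2)
w_out = mp.sqrt((kz - qE0*tau)**2 + kperp**2 + m**2)
lam = mp.sqrt((qE0*tau**2)**2 - mp.mpf(1)/4)

a = mp.mpf(1)/2 + I*tau*(w_out - w_in)/2 - I*lam   # Eq. (z)
b = mp.mpf(1)/2 + I*tau*(w_out - w_in)/2 + I*lam
c = 1 - I*tau*w_in

# lambda_11 and lambda_12 from Eq. (f3)
l11 = mp.gamma(c)*mp.gamma(c - a - b)/(mp.gamma(c - a)*mp.gamma(c - b))
l12 = mp.gamma(c)*mp.gamma(a + b - c)/(mp.gamma(a)*mp.gamma(b))

# closed-form ratio x = |beta|^2/|alpha|^2 from Eq. (alphaboson)
x = ((mp.cosh(mp.pi*tau*(w_out - w_in)) + mp.cosh(2*mp.pi*lam)) /
     (mp.cosh(mp.pi*tau*(w_out + w_in)) + mp.cosh(2*mp.pi*lam)))
print(mp.nstr(abs(l12)**2/abs(l11)**2, 12))  # these two outputs
print(mp.nstr(x, 12))                        # should coincide
\end{verbatim}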
The goal of this paper is to prove the following:
\theorem{0.1} Let $X$ be a complex surface of general type. Then $X$ is
not diffeomorphic to a rational surface.
\endstatement
Using the results from [13], we obtain the following corollary, which settles
a problem raised by Severi:
\corollary{0.2} If $X$ is a complex surface diffeomorphic to a rational
surface, then $X$ is a rational surface. Thus, up to deformation equivalence,
there is a unique complex structure on the smooth $4$-manifolds $S^2\times S^2$
and $\Pee^2 \# n\overline{\Pee}^2$.
\endstatement
In addition, as discussed in the book [15], Theorem 0.1 is the last step in
the proof of the following, which was conjectured by Van de Ven [37] (see also
[14,15]):
\corollary{0.3} If $X_1$ and $X_2$ are two diffeomorphic complex surfaces, then
$$\kappa (X_1) = \kappa (X_2),$$
where $\kappa (X_i)$ denotes the Kodaira
dimension of $X_i$.
\endstatement
The first major step in proving that every complex surface diffeomorphic to a
rational surface is rational was Yau's theorem [40] that every complex
surface of the same homotopy type as $\Pee^2$ is biholomorphic to $\Pee^2$.
After this case, however, the problem takes on a different character: there do
exist nonrational complex surfaces with the same oriented homotopy type as
rational surfaces, and the issue is to show that they are not in fact
diffeomorphic to rational surfaces. The only known techniques for dealing with
this question involve gauge theory and date back to Donaldson's seminal paper
[9] on the failure of the $h$-cobordism theorem in dimension 4. In this paper,
Donaldson introduced analogues of polynomial invariants for 4-manifolds $M$
with $b_2^+(M) = 1$ and special
$SU(2)$-bundles. These invariants depend in an explicit way on a chamber
structure in the positive cone in $H^2(M; \Ar)$. Using these invariants, he
showed that a certain elliptic surface (the Dolgachev surface with multiple
fibers of multiplicities 2 and 3) was not diffeomorphic to a rational surface.
In [13], this result was generalized to cover all Dolgachev surfaces and their
blowups (the case of minimal Dolgachev surfaces was also treated in [28]) and
Donaldson's methods were also used to study self-diffeomorphisms of rational
surfaces. The only remaining complex surfaces which are homotopy equivalent
(and thus homeomorphic) to rational surfaces are then of general type, and a
single example of such surfaces, the Barlow surface, is known to exist [2]. In
1989, Kotschick [18], as well as Okonek-Van de Ven [29], using Donaldson
polynomials associated to
$SO(3)$-bundles, showed that the Barlow surface was not diffeomorphic to a
rational surface. Subsequently Pidstrigach [30] showed that no complex surface
of general type which has the same homotopy type as the Barlow surface was
diffeomorphic to a rational surface, and Kotschick [20] has outlined an
approach
to showing that no blowup of such a surface is diffeomorphic to a rational
surface. All of these approaches use $SO(3)$-invariants or $SU(2)$-invariants
for small values of (the absolute value of) the first Pontrjagin class $p_1$ of
the $SO(3)$-bundle, so that the dependence on chamber structure can be
controlled in a quite explicit way.
In [33], the second author showed that no surface $X$ of general type could be
diffeomorphic to $\Pee ^1\times \Pee ^1$ or to $\Bbb F_1$, the blowup of
$\Pee^2$ at one point. Here the main tool is the study of $SO(3)$-invariants
for large values of $-p_1$, as defined and analyzed in [19] and [21]. These
invariants also depend on a chamber structure, in a rather complicated and not
very explicitly described fashion. In [34], these methods are used to analyze
minimal surfaces $X$ of general type under certain assumptions concerning the
nonexistence of rational curves, which are always satisfied if $X$ has the same
homotopy type as $\Pee ^1\times \Pee ^1$ or $\Bbb F_1$, by a theorem of Miyaoka
on the number of rational curves of negative self-intersection on a minimal
surface of general type. The main idea of the proof is to show the following:
Let $X$ be a minimal surface of general type, and suppose that
$\{E_0, \dots, E_n\}$ is an orthogonal basis for $H^2(X; \Zee)$ with $E_0^2
=1$, $E_i^2 =-1$ for $i\geq 1$, and $[K_X] = 3E_0 -\sum _{i\geq 1}E_i$. Finally
suppose that the divisor $E_0 - E_i$ is nef for some $i\geq 1$. Then the class
$E_0 - E_i$ cannot be represented by a smoothly embedded 2-sphere. (Actually,
in [34], the proof shows that an appropriate Donaldson polynomial is not
zero whereas it must be zero if $X$ is diffeomorphic to a rational surface.
However, using [26], one can also show that if $E_0 - E_i$ is represented by a
smoothly embedded 2-sphere, then the Donaldson polynomial is zero.) At the
same
time, building on ideas of Donaldson, Pidstrigach and Tyurin [31], using Spin
polynomial invariants, showed that no minimal surface of general type is
diffeomorphic to a rational surface.
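Although the arguments below are entirely lattice-theoretic, the elementary intersection-form bookkeeping entering the statement above is easy to check mechanically. The following Python sketch (ours, purely illustrative and not part of the paper) verifies, in the lattice $\langle 1\rangle \oplus n\langle -1\rangle$ with orthogonal basis $E_0, \dots, E_n$, that $K = 3E_0 - \sum _{i\geq 1}E_i$ has square $9-n$ and is characteristic, and that each class $E_0 - E_i$ is isotropic:
\begin{verbatim}
import numpy as np

n = 8
G = np.diag([1] + [-1]*n)                 # Gram matrix of <1> + n<-1>

def dot(x, y):
    return int(x @ G @ y)

basis = np.eye(n + 1, dtype=int)
K = np.array([3] + [-1]*n)
assert dot(K, K) == 9 - n
# K is characteristic: K.x = x.x (mod 2) for all x; a basis check suffices
assert all((dot(K, e) - dot(e, e)) % 2 == 0 for e in basis)
for i in range(1, n + 1):
    v = basis[0] - basis[i]               # the class E_0 - E_i
    assert dot(v, v) == 0 and dot(K, v) == 2
print("lattice identities hold for n =", n)
\end{verbatim}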
We now discuss the contents of this paper and the general strategy for the
proof of Theorem 0.1. The bulk of this paper is devoted to giving a new proof
of the results of Pidstrigach and Tyurin concerning minimal surfaces $X$. Here
our methods apply as well to minimal simply connected algebraic surfaces of
general type with $p_g$ arbitrary. Instead of looking at embedded 2-spheres of
self-intersection 0 as in [34], we consider those of self-intersection $-1$. We
show in fact the following in Theorem 1.10 (which includes a generalization
for blowups):
\theorem{0.4} Let $X$ be a minimal simply connected algebraic surface of
general type, and let $E\in H^2(X; \Zee)$ be a class satisfying $E^2 = -1$,
$E\cdot [K_X] = 1$. Then the class $E$ cannot be represented by a smoothly
embedded $2$-sphere.
\endstatement
In particular, if $p_g(X) = 0$, then $X$ cannot
be diffeomorphic to a rational surface. The method of proof of Theorem 0.4 is
to show that a certain value of a Donaldson polynomial invariant for $X$ is
nonzero (Theorem 1.5), while it is a result of Kotschick that if the class $E$
is represented by a smoothly embedded 2-sphere, then the value of the Donaldson
polynomial must be zero (Proposition 1.1). In case $p_g(X) =0$, once we have
found a polynomial invariant which distinguishes $X$ from a rational
surface, it follows in a straightforward way from the characterization of
self-diffeomorphisms of rational surfaces given in [13] that no blowup of $X$
can be diffeomorphic to a rational surface either (see Theorem 1.7). This part
of the argument could also be used with the result of Pidstrigach and Tyurin to
give a proof of Theorem 0.1.
Let us now discuss how to show that certain Donaldson polynomials do not vanish
on certain classes. The prototype for such results is the nonvanishing theorem
of Donaldson [10]: if $S$ is an algebraic surface with $p_g(S) > 0$ and
$H$ is an ample line bundle on $S$, then for all choices of $w$ and all $p \ll
0$, the
$SO(3)$-invariant $\gamma _{w,p}(H, \dots, H) \neq 0$. We give a
generalization of this result in Theorem 1.4 to certain cases where $H$ is no
longer ample, but satisfies: $H^k$ has no base points for $k\gg 0$ and
defines a birational morphism from $X$ to a normal surface $\bar X$, and where
$p_g(X)$ is also allowed to be zero (for an appropriate choice of chamber).
Here we must assume that there is no exceptional curve $C$ such that $H\cdot
C=0$, as well as the following additional assumption concerning the
singularities of
$\bar X$: they should be rational or minimally elliptic in the terminology of
[22]. The proof of Theorem 1.4 is a straightforward generalization of
Donaldson's original proof, together with methods developed by J\. Li in [23,
24].
Given the generalized nonvanishing theorem, the problem becomes one of
constructing divisors $M$ such that $M$ is orthogonal to a class $E$ of square
$-1$ and moreover such that $M$ is eventually base point free.
(Here we recall that a divisor $M$ is {\sl
eventually base point free\/} if the complete linear system $|kM|$ has no base
points for all
$k\gg 0$.) There are various methods for finding base point free linear systems
on an algebraic surface. For example, the well-studied method of Reider
[35] implies that, if $X$ is a minimal surface of general type and $D$ is a
nef and big divisor on
$X$, then
$M= K_X+D$ is eventually base point free. There is also a
technical generalization of this result due to Kawamata [16]. However, the
methods which we shall need are essentially elementary. The general outline of
the construction is as follows. Let $E$ be a class of square $-1$ with $K_X
\cdot E=1$. It is known that, if $E$ is the class of a smoothly embedded
$2$-sphere, then $E$ is of type $(1,1)$ [6]. Thus
$K_X+E$ is a divisor orthogonal to $E$. If $K_X+E$ is ample we are done. If
$K_X+E$ is nef but not ample, then there exist curves $D$ with $(K_X+E)\cdot
D=0$, and the intersection matrix of the set of all such curves is negative
definite. Thus we may contract the set of all such curves to obtain a normal
surface $X'$. If
$X'$ has only rational singularities, then the divisor $K_X+E$ induces a
Cartier divisor on $X'$ which is ample, by the Nakai-Moishezon criterion, and
so some multiple of $K_X+E$ is base point free. Next suppose that $X'$ has a
nonrational singular point $p$ and let $D_1, \dots, D_t$ be the
irreducible curves on $X$ mapped to $p$.
Then we give a dual form of Artin's criterion [1] for a rational
singularity, which says the following: the point $p$ is a nonrational
singularity if and only if there exist nonnegative integers $n_i$, with at
least one $n_i >0$, such that $(K_X+\sum _in_iD_i)\cdot D_j \geq 0$ for all
$j$. Moreover there is a choice of the $n_i$ such that either the inequality is
strict for every $j$ or the contraction of the
$D_j$ with $n_j \neq 0$ is a minimally elliptic singularity. In this case,
provided that $K_X$ is itself nef, it is easy to show that $K_X+\sum
_in_iD_i$ is nef and big and eventually base point free, and defines the
desired contraction. The
remaining case is when
$K_X+E$ is not nef. In this case, by considering the curves $D$ with
$(K_X+E)\cdot D<0$, it is easy to find a
$\Bbb Q$-divisor of the form
$K_X+\lambda D$, where $D$ is an irreducible curve and
$\lambda \in \Bbb Q^+$, which is nef and big and such that some
multiple is eventually base point free, and which is orthogonal to $E$. The
details are given in Section 3. These methods can also handle the case of
elliptic surfaces (the case where $\kappa (X) =1$), but of course there are
more elementary and direct arguments here which prove a more precise result.
We have included an appendix giving a proof, due to the first author, R\.
Miranda, and J\.W\. Morgan, of a result characterizing the
canonical class of a rational surface up to isometry. This result seems to be
well-known to specialists but we were unable to find an explicit statement in
the literature. It follows from work of Eichler and Kneser on the number of
isomorphism classes of indefinite quadratic forms of rank at least 3 within a
given genus (see e\.g\. [17]) together with some calculation. However the proof
in the appendix is an elementary argument.
The methods in this paper are able to rule out the possibility of embedded
2-spheres whose associated class $E$ satisfies $E^2 = -1$, $E\cdot [K_X] = 1$.
However, in case $p_g(X) = 0$ and $b_2(X) \geq 3$, there are infinitely many
classes $E$ of square $-1$ which satisfy $|E\cdot K_X| \geq 3$. It is natural
to hope that these classes also cannot be represented by smoothly embedded
2-spheres. More generally we would like to show that the surface $X$ is
strongly minimal in the sense of [15]. Likewise, in case
$p_g(X) >0$, we have only dealt with the first case of the ``$(-1)$-curve
conjecture" (see [6]).
\medskip
\noindent
{\bf Acknowledgements:} We would like to thank
Sheldon Katz, Dieter Kotschick, and Jun Li for valuable help and stimulating
discussions.
\section{1. Statement of results and overview of the proof}
\ssection{1.1. Generalities on $\boldkey S \boldkey O\boldkey (\boldkey
3\boldkey )$-invariants}
Let $X$ be a smooth simply connected $4$-manifold with $b_2^+(X) = 1$,
and fix an $SO(3)$-bundle $P$ over $X$ with $w_2(P) = w$ and $p_1(P) = p$.
Recall that a {\sl wall of type $(w,p)$ for
$X$} is a class
$\zeta \in H^2(X; \Zee)$ such that $\zeta \equiv w \mod 2$ and
$p \leq \zeta ^2 <0$. Let
$$\Omega _X = \{x\in H^2(X; \Ar): x^2 >0\}.$$
Let $W^\zeta = \Omega _X \cap (\zeta)^\perp$. A
{\sl chamber of type $(w, p)$ for $X$} is a connected
component of the set
$$\Omega _X - \bigcup\{W^\zeta: \zeta {\text{ is a wall of type $(w,p)$}}\,
\}.$$
Let $\Cal C$ be a chamber of type $(w,p)$ for
$X$ and let
$\gamma_{w,p}(X;\Cal C)$ denote the associated Donaldson
polynomial, defined via [19] and [21]. Here $\gamma_{w,p}(X;\Cal C)$ is
only defined up to $\pm 1$, depending on the choice of an integral lift for
$w$, corresponding to a choice of orientation for the moduli space. The actual
choice of sign will not matter, since we shall only care if a certain value of
$\gamma_{w,p}(X;\Cal C)$ is nonzero. In the complex case we shall always
assume for convenience that the choice has been made so that the orientation
of the moduli space agrees with the complex orientation. Via Poincar\'e
duality, we shall view $\gamma_{w,p}(X;\Cal C)$ as a function on either
homology or cohomology classes. Given a class
$M$, we use the notation $\gamma_{w,p}(X;\Cal C) (M^d)$ for
the evaluation $\gamma_{w,p}(X;\Cal C) (M, \dots, M)$ on the class $M$ repeated
$d$ times, where
$d= -p-3$ is the expected dimension of the moduli space. We then have the
following vanishing result for
$\gamma_{w,p}(\Cal C)$, due to Kotschick [19, (6.13)]:
\proposition{1.1} Let $E \in H^2(X; \Bbb Z)$ be the cohomology class of a
smoothly embedded $S^2$ in $X$ with $E^2 = -1$. Let $w$ be the second
Stiefel-Whitney class of $X$, or more generally any class in $H^2(X,
\Zee/2\Zee)$ such that $w\cdot E \neq 0$. Suppose that
$M\in H_2(X; \Zee)$ satisfies $M^2 > 0$ and $M \cdot E = 0$. Then, for every
chamber $\Cal C$ of type $(w, p)$ such that the wall $W^E$ corresponding to $E$
passes through the interior of $\Cal C$,
$$\gamma_{w,p}(X;\Cal C)(M^d) = 0. \qed$$
\endproclaim
Note that if $w$ is the second Stiefel-Whitney class of $X$, then $W^E$ is
a wall of type $(w,p)$ (and so does not pass through the interior of any
chamber) if and only if $E^\perp$ is even. This case arises, for example,
if $X$ has the homotopy type of $(S^2\times S^2)\#\overline{\Pee}^2$ and $E$ is
the standard generator of $H^2(\overline{\Pee}^2; \Zee) \subseteq H^2(X; \Bbb
Z)$.
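For instance (an illustration of ours, with the obvious basis conventions), the evenness of $E^\perp$ in this example can be checked directly: the intersection form is $H\oplus\langle -1\rangle$, and the orthogonal complement of the generator $E$ of $\langle -1\rangle$ is the hyperbolic plane $H$, which is even:
\begin{verbatim}
import numpy as np

# basis f1, f2, E with f1^2 = f2^2 = 0, f1.f2 = 1, E^2 = -1
G = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, -1]])
E = np.array([0, 0, 1])
f1, f2 = np.array([1, 0, 0]), np.array([0, 1, 0])
assert f1 @ G @ E == 0 and f2 @ G @ E == 0    # f1, f2 span E^perp
# evenness on a basis suffices: (x+y)^2 = x^2 + y^2 + 2 x.y
assert (f1 @ G @ f1) % 2 == 0 and (f2 @ G @ f2) % 2 == 0
print("E^perp is the even lattice H")
\end{verbatim}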
For the proof of Theorem 0.1, the result of (1.1) is sufficient. However, for
the slightly more general result of Theorem 1.10, we will also need the
following variant of (1.1):
\theorem{1.2} Let $E \in H^2(X; \Bbb Z)$ be the cohomology class of a
smoothly embedded $S^2$ in $X$ with $E^2 = -1$. Let $w$ be a class in
$H^2(X, \Zee/2\Zee)$ such that $w\cdot E \neq 0$. Suppose that
$M\in H_2(X; \Zee)$ satisfies $M^2 > 0$ and $M \cdot E = 0$. Then, for every
chamber $\Cal C$ of type $(w, p)$ containing $M$ in its closure,
$$\gamma_{w,p}(X;\Cal C)(M^d) = 0,$$
unless $p=-5$ and $w$ is the \rom{mod} $2$ reduction of $E$.
Thus, except in this last case, $\gamma_{w,p}(X;\Cal C)$ is divisible by $E$.
\endproclaim
\proof If $W^E$ is not a wall of type $(w,p)$ we are done by (1.1). Otherwise,
$E$ defines a wall of type $(w,p)$ containing $M$. Next let us assume that
$E^\perp \cap \overline{\Cal C}$ is a codimension one face of the
closure $\overline{\Cal C}$ of $\Cal C$. We have an induced
decomposition of $X$:
$$X = X_0 \# \overline{\Pee}^2.$$
Identify $H_2(X_0; \Bbb Z)$ with the subspace $E^\perp$ of $H_2(X; \Bbb Z)$,
and let $\overline{\Cal C}_0 = E^\perp \cap \overline{\Cal C}$. Then
$\overline{\Cal C}_0$ is the closure of some chamber ${\Cal C}_0$
of type $(w - e, p + 1)$ on $X_0$, where $e$ is the mod 2 reduction of $E$.
Choose a generic Riemannian metric $g_0$ on $X_0$ such that the cohomology
class $\omega _0$ of the self-dual harmonic 2-form associated to
$g_0$ lies in the interior of
$\Cal C_0$. By the results in [39], there is a family of metrics $h_t$ on the
connected sum $X_0\# \overline{\Pee}^2$ which converge in an appropriate
sense to $g_0\amalg g_1$, where $g_1$ is the Fubini-Study metric on
$\overline{\Pee}^2$, and such that the cohomology classes of the self-dual
harmonic 2-forms associated to $h_t$ lie in the interior of
$\Cal C$ and converge to $\omega _0$.
Standard gluing and compactness arguments (see for example [15],
Appendix to Chapter 6) and dimension counts show that the restriction
of the invariant
$\gamma_{w,p}(X;\Cal C)$ to $H_2(X_0; \Bbb Z)$ vanishes.
Consider now the general case where $W^E$ is a wall of type $(w,p)$ and the
closure of $\Cal C$ contains $M$ but where
$W^E\cap \overline{\Cal C}$ is not necessarily a codimension one face of
$\overline{\Cal C}$. Since
$W^E$ is a wall of type $(w,p)$ and $M\in E^\perp$, there exists a chamber
$\Cal C'$ of type $(w, p)$ whose closure contains $M$ such that $W^E$ is a
codimension one face of $\overline{\Cal C}'$. By the previous argument,
$\gamma_{w,p}(X;\Cal C')(M^d) =0$ and so it
will suffice to show that
$$\gamma_{w,p}(X;\Cal C)(M^d) = \gamma_{w,p}(X;\Cal C')(M^d).$$
Note that $\Cal C$ and $\Cal C'$ are separated by finitely many walls of type
$(w, p)$ all of which contain the class $M$. Thus, we have a sequence of
chambers of type $(w, p)$:
$$\Cal C = \Cal C_1, \Cal C_2, \dots, \Cal C_{k - 1},
\Cal C_k = \Cal C'$$
such that for each $i$, $\Cal C_{i - 1}$ and $\Cal C_i$ are
separated by a single wall $W_i=W^{\zeta_i}$ of type $(w, p)$ which contains
$M$. Since $W_i$ contains $M$, $M \cdot \zeta_i = 0$. By [19, (3.2)(3)] (see
also [21]), the difference $\gamma_{w,p}(X;\Cal C_{i - 1}) -
\gamma_{w,p}(X;\Cal C_i)$ is divisible by the class $\zeta_i$ except in the
case where $p=-5$ and $w$ is the mod 2 reduction of $E$. It follows that,
except in this last case, for each $i$,
$$\gamma_{w,p}(X;\Cal C_{i - 1})(M^d) = \gamma_{w,p}(X;\Cal C_i)(M^d).$$
Hence $\gamma_{w,p}(X;\Cal C)(M^d) =
\gamma_{w,p}(X;\Cal C')(M^d)=0$.
\endproof
We shall also need the following ``easy" blowup formula:
\lemma{1.3} Let $X\#\overline{\Pee}^2 $ be a blowup of $X$, and identify
$H_2(X; \Bbb Z)$ with a subspace of
$H_2(X\#\overline{\Pee}^2; \Bbb Z)$ in the natural way.
Given $w\in H^2(X; \Zee/2\Zee)$, let $\tilde \Cal C$ be a chamber of
type $(w, p)$ for $X\#\overline{\Pee}^2$ containing
the chamber $\Cal C$ in its closure. Then
$$\gamma _{w,p}(X\#\overline{\Pee}^2;\tilde\Cal C)
|H_2(X; \Bbb Z) = \pm\gamma_{w,p}(X;\Cal C).$$
\endproclaim
\proof Choose a generic Riemannian metric $g$ on
$X$ such that the cohomology class $\omega$ of the self-dual harmonic 2-form
associated to $g$ lies in the interior of
$\Cal C$. We again use the results in [39] to choose a family of metrics $h_t$
on the connected sum $X\# \overline{\Pee}^2$ which converge in an appropriate
sense to $g\amalg g'$, where $g'$ is the Fubini-Study metric on
$\overline{\Pee}^2$, and such that the cohomology classes of the self-dual
harmonic 2-forms associated to $h_t$ lie in the interior of
$\tilde\Cal C$ and converge to $\omega$. Standard gluing and compactness
arguments (see e\.g\. [15], Chapter 6, proof of Theorem 6.2(i)) now show that
the restriction of $\gamma _{w,p}(X\#\overline{\Pee}^2;\tilde\Cal C)$ to
$H_2(X; \Bbb Z)$ (with the appropriate orientation conventions) is just
$\gamma_{w,p}(X;\Cal C)$.
\endproof
\ssection{1.2. The case of a minimal $\boldkey X$}
In this subsection we shall outline the results to be proved concerning minimal
surfaces of general type. One basic tool is a nonvanishing theorem for certain
values of the Donaldson polynomial:
\theorem{1.4} Let $X$ be a simply connected algebraic surface with $p_g(X)
=0$,
and let $M$ be a nef and big divisor on $X$ which is eventually
base point free. Denote by $\varphi\: X \to \bar X$ the birational morphism
defined by $|kM|$ for $k\gg 0$, so that $\bar X$ is a normal projective
surface. Suppose that $\bar X$ has only rational or minimally elliptic
singularities, and that $\varphi$ does not contract any exceptional curves to
points. Let
$w\in H^2(X;\Zee/2\Zee)$ be the \rom{mod} $2$ reduction of the class $[K_X]$.
Then there exists a constant $A$ depending only on $X$ and $M$ with the
following property:
For all integers $p\leq A$, let
$\Cal C$ be a chamber of type $(w,p)$ containing $M$ in its closure and suppose
that $\Cal C$ has nonempty intersection with the ample cone of $X$. Set $d
= -p-3$. Then
$$\gamma_{w,p}(X;\Cal C)(M^d) >0.$$
\endstatement
We shall prove Theorem 1.4 in Section 2, where we shall also recall the salient
properties of rational and minimally elliptic singularities. The proof also
works in the case where
$p_g(X)>0$, in which case
$\gamma_{w,p}(X)$ does not depend on the choice of a chamber.
We can now state the main result concerning minimal surfaces, which we shall
prove in Section 3:
\theorem{1.5} Let $X$ be a minimal simply connected algebraic surface of
general type, and let
$E\in H^2(X; \Zee)$ be a $(1,1)$-class satisfying $E^2=-1$, $E\cdot K_X = 1$.
Let $w$ be the \rom{mod} $2$ reduction of $[K_X]$. Then there exist:
\roster
\item"{(i)}" an integer $p$
and \rom(in case $p_g(X)=0$\rom) a chamber $\Cal C$ of type $(w,p)$ and
\item"{(ii)}" a $(1,1)$-class $M\in H^2(X; \Zee)$
\endroster
such that
$M\cdot E=0$ and $\gamma _{w,p}(X)(M^d) \neq 0$
\rom(or, in case $p_g(X)=0$,
$\gamma _{w,p}(X; \Cal C)(M^d) \neq 0$\rom).
\endstatement
The method of proof of (1.5) will be the following: we will show that there
exists an orientation preserving self-diffeomorphism $\psi$ of
$X$ with $\psi ^*[K_X] = [K_X]$ and a nef and big divisor $M$ on $X$ such
that:
\roster
\item"{(i)}" $M\cdot \psi ^*E = 0$.
\item"{(ii)}" $M$ is eventually base point free,
and the corresponding contraction $\varphi\: X \to \bar X$ maps $X$
birationally onto a normal surface $\bar X$ whose only singularities are either
rational or minimally elliptic.
\endroster
Using the naturality of $\gamma _{w,p}(X;\Cal C)$, it suffices to prove (1.5)
after replacing $E$ by $\psi ^*E$. In this case, by Theorem 1.4 with
$w$ the mod $2$ reduction of $[K_X]$,
$\gamma _{w,p}(X;\Cal C)(M^d)
\neq 0$ for all
$p \ll 0$.
\corollary{1.6} Let $X$ be a simply connected minimal surface of general type
with $p_g(X)=0$. Then there exist
\roster
\item"{(i)}" a class $w\in H^2(X; \Zee/2\Zee)$;
\item"{(ii)}" an integer $p\in \Zee$;
\item"{(iii)}" a chamber $\Cal C$ for $X$ of type $(w,p)$, and
\item"{(iv)}" a homotopy equivalence $\alpha: X \rightarrow Y$, where $Y$ is
either the blowup of ${\Pee}^2$ at $n$ distinct points or $Y=\Pee ^1\times
\Pee ^1$,
\endroster
such that, for $w'= (\alpha ^*)^{-1}(w)$ and $\Cal C' = (\alpha ^*)^{-1}(\Cal
C)$,
$$\alpha ^*\gamma _{w',p}(Y;\Cal C') \neq
\pm\gamma_{w, p}(X; \Cal C).$$
\endproclaim
\proof If $X$
is homotopy equivalent to $\Pee ^1\times \Pee ^1$ then the theorem
follows from [33]. Otherwise $X$ is oriented homotopy equivalent to $\Pee ^2\#
n\overline{\Pee}^2$, for $1\leq n\leq 8$, and we claim that there exists a
homotopy equivalence $\alpha
\: X \to Y$ such that $\alpha ^*[K_Y] = -[K_X]$. Indeed, every
integral isometry $H^2(Y; \Zee) \to H^2(X; \Zee)$ is realized by an oriented
homotopy equivalence. Thus it suffices to show that every two characteristic
elements of
$H^2(Y; \Zee)$ of square $9-n$ are conjugate under the isometry group, which
follows from the appendix to this paper. Choosing
such a homotopy equivalence $\alpha$, let $e$ be the class of an exceptional
curve in $Y$ and let $E = \alpha ^*e$. Then $E^2 =-1$ and $E\cdot [K_X] = 1$.
We may now apply Theorem 1.5 to the class
$E$, noting that
$E$ is a $(1,1)$ class since $p_g(X)=0$. Let $M$ and $\Cal C$ be a divisor
and a chamber which satisfy the conclusions of Theorem 1.5 and let
$m = (\alpha ^*)^{-1} M$. If $w$ is the mod 2 reduction of $[K_X]$, then
$w'$ is the mod $2$ reduction of $[K_Y]$, so that $w'$ is also
characteristic. Now
$e$ is the class of a smoothly embedded 2-sphere in $Y$ since it is the class
of an exceptional curve. Moreover
$m\cdot e =0$. By
Theorem 1.2,
$\gamma_{w', p}(Y;\Cal C')(m^d) = 0$ since $e$ is represented
by a smoothly embedded 2-sphere and $w'$ is characteristic. But $\gamma
_{w,p}(X;\Cal C)(M^d)
\neq 0$ by Theorem 1.5 and the choice of $M$. Thus
$\alpha ^*\gamma _{w',p}(Y;\Cal C') \neq
\pm\gamma_{w, p}(X; \Cal C)$.
\endproof
Using the result of Wall [38] that every homotopy self-equivalence from $Y$ to
itself is realized by a diffeomorphism, the proof above shows that the
conclusions of the corollary hold for {\it every\/} homotopy equivalence
$\alpha: X \rightarrow Y$.
\ssection{1.3. Reduction to the minimal case}
We begin by recalling some terminology and results from [13].
A {\sl good generic rational surface} $Y$ is a rational surface
such that $K_Y = -C$ where $C$ is a smooth curve,
and such that there does not exist a smooth rational curve on $Y$
with self-intersection $-2$. Every rational surface is
diffeomorphic to a good generic rational surface.
\theorem{1.7} Let $X$ be a minimal surface of general type and
let $\tilde X\to X$ be a blowup of $X$ at $r$ distinct points.
Let $E_1', \dots , E_r'$ be the homology classes of
the exceptional curves on $\tilde X$.
Let $\psi _0\: \tilde X \to \tilde Y$ be a diffeomorphism,
where $\tilde Y$ is a good generic rational surface.
Then there exist a diffeomorphism $\psi \: \tilde X \to \tilde Y$
and a good generic rational surface $Y$ with the following properties:
\roster
\item"{(i)}" The surface $\tilde Y$ is the blowup of $Y$ at
$r$ distinct points.
\item"{(ii)}" If $e_1, \dots, e_r$ are the classes of
the exceptional curves in $H^2(\tilde Y; \Bbb Z)$ for the blowup $\tilde Y \to
Y$, then possibly after renumbering $\psi^*(e_i) = E_i'$ for all $i$.
\item"{(iii)}" Identifying $H^2(X)$ with a subgroup of $H^2(\tilde X)$
and $H^2(Y)$ with a subgroup of $H^2(\tilde Y)$ in the obvious way,
we have $\psi^*(H^2(Y)) = H^2(X)$.
\endroster
Moreover, for every choice of an isometry $\tau$ from $H^2(Y)$ to $H^2(X)$,
there exists a choice of a diffeomorphism $\psi$ satisfying \rom{(i)--(iii)}
above and such that $\psi ^*|H^2(Y) = \tau$.
\endstatement
\proof Let $e_i' \in H^2(\tilde Y; \Zee)$ satisfy $\psi_0^*(e_i') = E_i'$.
Thus the Poincar\'e dual of $e_i'$ is represented by
a smoothly embedded 2-sphere in $\tilde Y$.
It follows that reflection $r_{e_i'}$ in $e_i'$ is realized
by an orientation-preserving self-diffeomorphism of $\tilde Y$.
To see what this says about $e_i'$, we shall recall the following
terminology from [13].
Let $\bold H(\tilde Y)$ be the set
$\{\, x\in H^2(\tilde Y; \Ar)\mid x^2 =1\,\}$ and
let $\Cal K(\tilde Y)\subset H^2(\tilde Y; \Ar)$ be the intersection of
the closure of the K\"ahler cone of $\tilde Y$ with $\bold H(\tilde Y)$.
Then $\Cal K(\tilde Y)$ is a convex subset of $\bold H(\tilde Y)$
whose walls consist of the classes of exceptional curves on $\tilde Y$
together with $[-K_{\tilde Y}]$ if $b_2^-(\tilde Y)
\geq 10$, which is confusingly called the {\sl exceptional wall\/} of $\Cal
K(\tilde Y)$. Let
$\Cal R$ be the group generated by the reflections in the walls of $\Cal
K(\tilde Y)$ defined by exceptional classes and define the super $P$-cell
$\bold S = \bold S(P)$ by
$$\bold S = \bigcup _{\gamma \in \Cal R}\gamma \cdot \Cal K(\tilde Y).$$
By Theorem 10A on p\. 355 of [13], for an integral isometry
$\varphi$ of $H^2(\tilde Y; \Ar)$, there exists a diffeomorphism of
$\tilde Y$ inducing $\varphi$ if and only if
$\varphi (\bold S) = \pm \bold S$. (Here, if $b_2^-(\tilde Y) \leq 9$,
$\bold S = \bold H$ and the result reduces to a result
of C\.T\.C\. Wall [38].) Note that $\bold H(\tilde Y)$ has two connected
components, and reflection $r_e$ in a class $e$ of square $-1$ preserves the
set of connected components. Thus if $r_e(\bold S) = \pm \bold S$, then
necessarily
$r_e(\bold S) = \bold S$.
Next we have the following purely algebraic lemma:
\lemma{1.8} Let $e$ be a class of square $-1$ in $H^2(\tilde Y; \Zee)$ such
that the reflection $r_e$ satisfies $r_e(\bold S) = \bold S$. Then there is an
isometry $\varphi$ of $H^2(\tilde Y; \Zee)$ preserving $\bold S$ which sends
$e$ to the class of an exceptional curve.
\endstatement
\proof We first claim that, if $W$ is the wall corresponding to $e$,
then $W$ meets the interior of $\bold S$. Indeed,
the interior $\operatorname{int}\bold S$ of $\bold S$ is connected, by
Corollary 5.5 of [13] p\. 340. If $W$ does not meet $\operatorname{int} \bold
S$, then the sets
$$\{\, x\in \operatorname{int}\bold S\mid e\cdot x > 0\,\}$$
and
$$\{\, x\in \operatorname{int}\bold S\mid e \cdot x < 0\,\}$$
are disjoint open sets covering $\operatorname{int}\bold S$
which are exchanged under the reflection $r_e$.
Since at least one is nonempty, they are both nonempty,
contradicting the fact that $\operatorname{int} \bold S$ is connected.
Thus $W$ must meet $\operatorname{int}\bold S$.
Now let $C$ be a chamber for the walls of square $-1$ which has $W$ as a wall.
It follows from Lemma 5.3(b) on p\. 339 of [13] that $C\cap \bold S$ is a
$P$-cell $P$ and that
$W$ defines a wall of $P$ which is not the exceptional wall.
By Lemma 5.3(e) of [13],
$\bold S$ is the unique super $P$-cell containing $P$, and the reflection group
generated by the elements of square $-1$ defining the walls of $P$ acts simply
transitively on the $P$-cells in $\bold S$. There is thus an element $\varphi$
in this reflection group which preserves $\bold S$ and sends $P$ to $\Cal
K(\tilde Y)$ and $W$ to a wall of $\Cal K(\tilde Y)$ which is not
an exceptional wall. It follows that $\varphi (e)$ is the class of an
exceptional curve on $\tilde Y$.
\endproof
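The reflections used above can be made completely explicit: for a class $e$ of square $-1$, the reflection is $r_e(x) = x + 2(x\cdot e)\,e$. The short sketch below (ours, illustrative only) confirms in the lattice $\langle 1\rangle\oplus n\langle -1\rangle$ that $r_e$ is an integral isometry sending $e$ to $-e$ and fixing $e^\perp$:
\begin{verbatim}
import numpy as np

n = 3
G = np.diag([1] + [-1]*n)
e = np.eye(n + 1, dtype=int)[1]               # a class of square -1

def r(x):
    return x + 2*int(x @ G @ e)*e             # reflection in e

R = np.column_stack([r(v) for v in np.eye(n + 1, dtype=int)])
assert np.array_equal(R.T @ G @ R, G)         # r preserves the form
assert np.array_equal(r(e), -e)               # r(e) = -e
w = np.array([1, 0, 2, -1])                   # a vector orthogonal to e
assert int(w @ G @ e) == 0 and np.array_equal(r(w), w)
print("r_e is an integral isometry of the lattice")
\end{verbatim}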
Returning to the proof of Theorem 1.7, apply the previous lemma to the
reflection in $e_r'$. There is thus an isometry $\varphi$ preserving $\bold S$
such that $\varphi(e_r') = e_r$, where $e_r$ is the class of an exceptional
curve on $\tilde Y$. Moreover
$\varphi$ is realized by a diffeomorphism.
Thus after composing with the diffeomorphism inducing $\varphi$,
we can assume that $e_r' = e_r$, or equivalently that $\psi _0^*e_r = E_r'$.
Let $\tilde Y \to \tilde Y_1$ be the blowing down of the exceptional curve
whose class is $e_r$. Then
$\tilde Y_1$ is again a good generic surface by [13] p\. 312 Lemma 2.3. Since
$e_1', \dots, e_{r-1}'$ are orthogonal to
$e_r$, they lie in the subset $H^2(\tilde Y_1)$ of $H^2(\tilde Y)$.
For $i \neq r$, the reflection in $e_i'$ preserves $W\cap \bold S$,
where $W= (e_r)^\perp$. Now $W$ is just $H^2(\tilde Y_1)$ and
$\Cal K(\tilde Y) \cap H^2(\tilde Y_1) = \Cal K(\tilde Y_1)$
by [13] p\. 331 Proposition 3.5. The next lemma relates the corresponding super
$P$-cells:
\lemma{1.9} $W\cap \bold S$ is the super $P$-cell
$\bold S_1$ for $\tilde Y_1$ containing $\Cal K(\tilde Y_1)$.
\endstatement
\proof Trivially $\bold S_1 \subseteq W\cap \bold S$,
and both sets are convex subsets with nonempty interiors.
If they are not equal, then there is a $P$-cell $P' \subset \bold S_1$
and an exceptional wall of $P'$ which passes through the interior
of $\bold S\cap W$. If $\kappa (P')$ is the exceptional wall meeting
$\bold S\cap W$, then, by [13] p\. 335 Lemma 4.6, $\kappa (P') - e_r$ is
the exceptional wall of a well-defined $P$-cell in $\bold S$,
and $\kappa (P') - e_r$ must pass through the interior of $\bold S$.
This is a contradiction. Hence $\bold S\cap W = \bold S_1$ is
a super $P$-cell of $\tilde Y_1$, and we have seen that it contains $\Cal
K(\tilde Y_1)$.
\endproof
Returning to the proof of Theorem 1.7, reflection in $e_{r-1}'$
preserves $\bold S_1$. Applying Lemma 1.8, there is a diffeomorphism of
$\tilde Y_1$ which sends $e_{r-1}'$ to the class of an exceptional curve
$e_{r-1}$. Of course, there is an induced diffeomorphism of $\tilde Y$
which fixes $e_r$. Now we can clearly proceed by induction on $r$.
The above shows that after replacing $\psi _0$ by
a diffeomorphism $\psi$ we can find $Y$ as above
so that (i) and (ii) of the statement of Theorem 1.7 hold.
Clearly $\psi^*(H^2(Y)) = H^2(X)$. By the theorem of C\.T\.C\. Wall mentioned
above, there is a diffeomorphism of $Y$ realizing every integral isometry of
$H^2(Y)$. So after further modifying by a diffeomorphism of $Y$,
which extends to a diffeomorphism of $\tilde Y$ fixing
the classes of the exceptional curves, we can assume that
the diffeomorphism $\psi$ restricts to $\tau$ for
any given isometry from $H^2(Y)$ to $H^2(X)$.
\endproof
We can now give a proof of Theorem 0.1:
\theorem{0.1} No complex surface of general type is diffeomorphic to a rational
surface.
\endstatement
\proof Suppose that $X$ is a minimal surface of general type and that
$\rho \: \tilde X \to X$ is a blowup of $X$ diffeomorphic to a rational
surface.
We may assume that $\tilde X$ is diffeomorphic via $\psi$
to a good generic rational surface ${\tilde Y}$,
and that $\rho '\: {\tilde Y} \to Y$ exhibits ${\tilde Y}$ as a blowup of $Y$
such that $Y$ and $\psi$ satisfy (i)--(iii) of Theorem 1.7. Choose
$w, p, \alpha, \Cal C$ for
$X$ such that the conclusions of Corollary 1.6 hold, with $\Cal C'$ the
corresponding chamber on $Y$, and let
$\tilde
\Cal C'$ be any chamber for ${\tilde Y}$ containing $\Cal C'$ in its closure.
Then $\psi ^*\tilde \Cal C' = \tilde \Cal C$ is a chamber on $\tilde X$
containing $\Cal C$ in its closure. Using the last sentence of Theorem 1.7, we
may assume that
$\psi ^*|H^2(Y) = \alpha^*$. Thus $\psi ^* (\rho ') ^* = \rho ^*\alpha ^*$. By
the functorial properties of Donaldson polynomials, and viewing $H^2(X;
\Zee/2\Zee)$ as a subset of $H^2(\tilde X; \Zee/2\Zee)$, and similarly for
$\tilde Y$, we have
$$\psi ^*\gamma _{w', p}({\tilde Y}; \tilde \Cal C')
= \pm\gamma _{\psi ^*w', p}(\tilde X; \tilde \Cal C)=
\pm\gamma _{w, p}(\tilde X; \tilde \Cal C).$$
Restricting each side to $\psi ^*H_2(Y) = H_2(X)$,
we obtain by repeated application of Lemma 1.3 that
$$\alpha ^*\gamma _{w',p}(Y;\Cal C') = \pm\gamma _{w, p}(X;\Cal C).$$
But this contradicts Corollary 1.6.
\endproof
Using Theorem 1.5, we have the following generalization of Theorem 0.4 in the
introduction to the case of nonminimal algebraic surfaces:
\theorem{1.10} Let $X$ be a minimal simply connected surface of general type,
and let $E\in H^2(X; \Zee)$ satisfy $E^2 = -1$ and
$E\cdot K_X = 1$. Let $\tilde X$ be a blowup of $X$. Then, viewing $H^2(X;
\Zee)$ as a subset of $H^2(\tilde X; \Zee)$, the class $E$ is not represented
by a smoothly embedded $2$-sphere in $\tilde X$.
\endstatement
\proof Suppose instead that $E$ is represented by a smoothly embedded
$2$-sphere. If
$p_g(X) >0$, then it follows from the results of [6] that $E$ is a
$(1,1)$-class, i\.e\. $E$ lies in the image of
$\operatorname{Pic}X$ inside $H^2(X; \Zee)$. Of course, this is automatically
true if $p_g(X) = 0$. Next assume that $p_g(X) = 0$. By Theorem 1.5,
there exists a
$w\in H^2(X; \Zee/2\Zee)$, an integer $p$, and a chamber
$\Cal C$ of type $(w,p)$, such that $\gamma _{w,p}(X;\Cal C)(M^d)\neq
0$, where $M$ is a class in the closure of $\Cal C$ and $M\cdot E = 0$.
Consider the Donaldson polynomial
$\gamma_{w,p}(\tilde X;\tilde \Cal C)$, where we view $w$ as an element
of $H^2(\tilde X;\Zee/2\Zee)$ in the natural way and $\tilde
\Cal C$ is a chamber of type $(w,p)$ on $\tilde X$ containing $\Cal C$ in its
closure. Then $\tilde
\Cal C$ also contains $M$ in its closure. Thus, by Theorem 1.2,
$\gamma_{w,p}(\tilde X;\tilde \Cal C)(M^d) = 0$. On the other hand, by Lemma
1.3, $\gamma_{w,p}(\tilde X;\tilde
\Cal C)(M^d) = \pm \gamma _{w,p}(X;\Cal C)(M^d)\neq
0$. This is a contradiction. The case where $p_g(X) > 0$ is similar.
\endproof
We also have the following corollary, which works under the assumptions of
Theorem 1.10 for surfaces with $p_g>0$:
\corollary{1.11} Let $X$ be a simply connected surface of general type with
$p_g(X) >0$, not necessarily minimal, and let $E\in H^2(X; \Zee)$ satisfy
$E^2 = -1$ and $E\cdot K_X = -1$. Suppose that $E$ is represented by a smoothly
embedded $2$-sphere. Then $E$ is the cohomology class
associated to an exceptional curve.
\endstatement
\proof Using [15] and [6], we see that if $E$ is not the cohomology class
associated to an exceptional curve, then $E\in H^2(X_{\text{min}}; \Zee)$,
where $X_{\text{min}}$ is the minimal model of $X$ and we have the natural
inclusion $H^2(X_{\text{min}}; \Zee) \subseteq H^2(X; \Zee)$. We may then apply
Theorem 1.10 to conclude that $-E$ cannot be represented by a smoothly embedded
$2$-sphere, and thus that $E$ cannot be so represented, a contradiction.
\endproof
\section{2. A generalized nonvanishing theorem}
\ssection{2.1. Statement of the theorem and the first part of the proof}
In this section, we shall prove Theorem 1.4. We first recall its statement:
\theorem{1.4} Let $X$ be a simply connected algebraic surface with $p_g(X) =0$,
and let $M$ be a nef and big divisor on $X$ which is eventually
base point free. Denote by $\varphi\: X \to \bar X$ the birational morphism
defined by $|kM|$ for $k\gg 0$, so that $\bar X$ is a normal projective
surface.
Suppose that $\bar X$ has only rational or minimally elliptic singularities,
and that $\varphi$ does not contract any exceptional curves to points. Let
$w\in H^2(X;\Zee/2\Zee)$ be the \rom{mod} $2$ reduction of the class $[K_X]$.
Then there exists a constant $A$ depending only on $X$ and $M$ with the
following property:
For all integers $p\leq A$, let
$\Cal C$ be a chamber of type $(w,p)$ containing $M$ in its closure and suppose
that $\Cal C$ has nonempty intersection with the ample cone of $X$. Set $d
= -p-3$. Then
$$\gamma_{w,p}(X;\Cal C)(M^d) >0.$$
A similar conclusion holds if $p_g(X) >0$.
\endstatement
\proof We begin by fixing some notation. For
$L$ an ample line bundle on
$X$, given a divisor $D$ on $X$ and an
integer $c$, let $\frak M_L(D, c)$ denote the moduli space of isomorphism
classes of
$L$-stable rank two holomorphic vector bundles on $X$ with $c_1(V) = D$ and
$c_2(V) = c$. Let $w$ be the mod 2 reduction of $D$ and let $p= D^2 - 4c$. Then
we also denote $\frak M_L(D, c)$ by
$\frak M_L(w, p)$, the moduli space of equivalence classes of $L$-stable
rank two holomorphic vector bundles on $X$ corresponding to the choice of
$(w,p)$. Here we recall that two vector bundles $V$ and $V'$ are {\sl
equivalent\/} if there exists a holomorphic line bundle $F$ such that $V'
= V\otimes F$. The invariants $w$ and $p$ only depend on the equivalence
class of $V$. Let
$\overline{\frak M_L(w, p)}$ denote the Gieseker compactification of
$\frak M_L(w, p)$, i\.e\. the
Gieseker compactification $\overline{\frak M_L(D, c)}$ of $\frak M_L(D, c)$.
Thus $\overline{\frak M_L(w, p)}$ is a projective variety.
We now fix a compact neighborhood $\Cal N$ of $M$ inside the positive cone
$\Omega _X$ of
$X$. Note that, since $M$ is nef, such a neighborhood has nontrivial
intersection with the ample cone of $X$. Using a straightforward extension of
the theorem of Donaldson [10] on the dimension of the moduli space (see e\.g\.
[12] Chapter 8, [32], [24]), there exist constants $A$ and $A'$ such that, for
all ample line bundles $L$ such that $c_1(L) \in \Cal N$, the following
holds:
\roster
\item If $p \leq A$, then the moduli space $\overline{\frak M_L(w, p)}$ is
good, in other words it is generically reduced of the correct dimension $-p-3$;
\item $\frak M_L(w, p)$ is a dense open subset of $\overline{\frak M_L(w, p)}$
and the generic point of $\overline{\frak M_L(w, p)}- \frak M_L(w, p)$
correspond to a torsion free sheaf $V$ such that the length of
$V\spcheck{}\spcheck/V$ is one and such that the support of
$V\spcheck{}\spcheck/V$ is a generic point of $X$;
\item For all $p' \geq A$, the dimension $\dim \frak M_L(w, p') \leq A'$.
\endroster
We shall need to make one more assumption on the integer $p$. Let $\varphi\: X
\to \bar X$ be the contraction morphism associated to $M$. For each connected
component $E$ of the set of exceptional fibers of $\varphi$, fix a (possibly
nonreduced) curve $Z$ on $X$ whose support is exactly $E$. In practice we shall
always take $Z$ to be the fundamental cycle of the singularity, to be defined
in Subsection 2.3 below. A slight generalization ([12], Chapter 8) of
Donaldson's theorem on the dimension of the moduli space then shows the
following: after possibly modifying the constant $A$,
\roster
\item"(4)" The generic
$V\in \frak M_L(w,p)$ satisfies: the natural map
$$H^1(X; \operatorname{ad}V) \to H^1(Z; \operatorname{ad}V|Z)$$
is surjective. In other words, the local universal deformation of $V$ is versal
when viewed as a deformation of $V|Z$ (keeping the determinant fixed).
\endroster
We now assume that $p\leq A$.
Let $L$ be
an ample line bundle which is not separated from $M$ by any wall of type
$(w,p)$ (or equivalently of type $(D, c)$), and moreover does not lie on
any wall of type $(w,p)$. Thus by assumption, none of the points of
$\overline{\frak M_L(D, c)}$ corresponds to a strictly semistable sheaf.
Let $C\subset
X$ be a smooth curve of genus
$g$. Suppose that $C\cdot D = 2a$ is even. Choosing a line bundle $\theta$ of
degree $g-1-a$ on
$C$, we can form the determinant line bundle $\Cal L(C, \theta)$ on the moduli
functor associated to torsion free sheaves corresponding to the values $w$ and
$p$ ([15], Chapter 5). Using Proposition 1.7 in [23], this line bundle descends
to a line bundle on
$\overline{\frak M_L(w, p)}$, which we shall continue to denote by $\Cal L(C,
\theta)$. Moreover, by the method of proof of Theorem 2 of [23], the line
bundle $\Cal L(C, \theta)$ depends only on the linear equivalence class of $C$,
in the sense that if $C$ and $C'$ are linearly equivalent and $\theta '$ is a
line bundle of degree $g-1-a$ on $C'$, then $\Cal L(C, \theta) \cong \Cal L(C',
\theta')$.
Next we shall use the following result, whose proof is deferred to the next
subsection:
\lemma{2.1} In the above notation, if $k\gg 0$ and $C\in |kM|$ is a smooth
curve, then, for all $N\gg 0$, the linear system associated to $\Cal L(C,
\theta)^N$ has no base points and defines a generically finite morphism from
$\overline{\frak M_L(w, p)}$ to its image. In particular, if $d = \dim
\overline{\frak M_L(w, p)}$, then
$$c_1(\Cal L(C, \theta))^d > 0.$$
\endstatement
It follows by applying an easy adaptation of Theorem 6 in [23] or the results
of [25] to the case $p_g(X) =0$ that, since the spaces $\frak M_L(w,p')$ have
the expected dimension for an appropriate range of $p'\geq p$,
$c_1(\Cal L(C, \theta))^d$ is exactly the value $k^d\gamma_{w,p}(X;\Cal
C)(M^d)$. Thus we have proved
Theorem 1.4, modulo the proof of Lemma 2.1. This proof will be given below.
\endproof
\ssection{2.2. A generalization of a result of Bogomolov}
We keep the notation of the preceding subsection. Thus $M$ is a nef and big
divisor such that the complete linear system $|k M|$ is base point free
whenever $k \gg 0$. Throughout, we shall further assume that $M$ is divisible
by $2$ in $\operatorname{Pic}X$. Moreover
$w$ and $p$ are now fixed and $L$ is an ample line bundle such that $c_1(L) \in
\Cal N$ is not separated from $M$ by a wall of type $(w,p)$ and moreover that
$c_1(L)$ does not lie on a wall of type $(w,p)$. In particular the determinant
line bundle $\Cal L(C,
\theta)$ is defined for all smooth $C$ in $|kM|$ for all $k\gg 0$.
We then have the following generalization of a restriction theorem due to
Bogomolov [4]:
\lemma{2.2} With the above notation, there exists a constant $k_0$ depending
only on $w$, $p$, $M$, and $L$, such that for all
$k\geq k_0$ and all smooth curves
$C\in |kM|$, the following holds: for all $c' \leq c$ and $V \in \frak M_L(D,
c')$, either $V|C$ is semistable or there exists a divisor $G$
on $X$, a zero-dimensional subscheme
$\Cal Z$ and an exact sequence
$$0 \to \scrO_X(G) \to V \to \scrO_X(D-G) \otimes I_{\Cal Z} \to 0,$$
where $2G-D$ defines a wall of type $(w,p)$ containing $M$ and $C \cap
\operatorname{Supp}\Cal Z \neq \emptyset$.
\endstatement
\proof The proof follows closely the original proof of Bogomolov's theorem
[4] or [15] Section 5.2.
Choose $k_0 \geq -p$ and assume also that there exists a smooth curve $C$ in
$|kM|$ for all $k\geq k_0$. Suppose that $V|C$ is not semistable. Then there
exists a surjection $V|C \to F$, where $F$ is a line bundle on $C$ with $\deg
F=f< (D\cdot C)/2$. Let $W$ be the kernel of the induced surjection $V\to F$.
Thus
$W$ is locally free and there is an exact sequence
$$0 \to W \to V \to F \to 0.$$
A calculation gives
$$\align
p_1(\operatorname{ad}W) &= p_1(\operatorname{ad}V) + 2D\cdot C + (C)^2 -
4f\\
&> p + k^2(M)^2 \geq p + p^2 \geq 0.
\endalign$$
By Bogomolov's inequality, $W$ is unstable with respect to every ample line
bundle on
$X$. Thus there exists a divisor $G_0$ and an injection $\scrO_X(G_0) \to W$
(which we may assume to have torsion free cokernel) such that
$2(L\cdot G_0) > L\cdot (D-C)$, i\.e\. $L \cdot (2G_0 - D +C) > 0$. By
hypothesis there is an exact sequence
$$0 \to \scrO_X(G_0) \to W \to \scrO_X(-G_0+D-C) \otimes I_{\Cal Z_0}\to 0.$$
Thus
$$0< p_1(\operatorname{ad}W) = (2G_0 - D +C)^2 - 4\ell (\Cal Z_0) \leq
(2G_0 - D +C)^2.$$
It follows that $(2G_0 - D +C)^2>0$.
As $L \cdot (2G_0 - D +C) > 0$ and $(2G_0 - D +C)^2 > 0$, $M \cdot (2G_0 - D
+C) \geq 0$ as well, i\.e\. $-(M \cdot (2G_0 - D))\leq k(M)^2$. On the other
hand, since $V$ is $L$-stable, $L\cdot (2G_0 - D)< 0$. Since $L$ and $M$ are
not separated by any wall of type $(w,p)$, it follows that $M\cdot (2G_0 -
D)\leq 0$. Finally using
$$\align
p_1(\operatorname{ad}W) &= (2G_0 - D +C)^2 - 4\ell (\Cal Z_0) \\
&= p_1(\operatorname{ad}V) + 2D\cdot C + (C)^2 -
4f\\
&> p + k^2(M)^2,
\endalign$$
we obtain
$$(2G_0 - D)^2 +2k(2G_0 - D)\cdot M > p.$$
Let $m = -(2G_0 - D)\cdot M$. As we have seen above, $m\leq kM^2$ and $m\geq 0$.
The above inequality can be rewritten as
$$ 2km < (2G_0 - D)^2 -p.$$
We claim that $m=0$. Otherwise
$$2k < \frac{(2G_0 - D)^2}{m} -\frac{p}{m} .$$
By the Hodge index theorem $(2G_0 - D)^2 M^2 \leq \left[(2G_0 - D)\cdot
M\right]^2 = m^2$, so that $(2G_0 - D)^2 \leq m^2/M^2$. Plugging this into the
inequality above, using $-p\geq 0$, gives
$$2k < \frac{m}{M^2} - \frac{p}{m} \leq k - p,$$
i\.e\. $k< -p$, contradicting our choice of $k$. Thus $m= -(2G_0 - D)\cdot M
=0$.
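The elementary inequality manipulation just used can also be checked by brute force. The sketch below (ours, illustrative; the grid ranges are arbitrary) verifies over a grid of integers that $m/M^2 - p/m \leq k - p$ whenever $1\leq m\leq kM^2$, $M^2\geq 1$ and $p\leq 0$, which is what forces $k < -p$ above:
\begin{verbatim}
from fractions import Fraction

ok = True
for M2 in range(1, 6):                # M^2 >= 1
    for k in range(1, 12):
        for m in range(1, k*M2 + 1):  # 0 < m <= k M^2
            for p in range(-10, 1):   # p <= 0
                lhs = Fraction(m, M2) - Fraction(p, m)
                ok = ok and lhs <= k - p
print("m/M^2 - p/m <= k - p on the whole grid:", ok)
\end{verbatim}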
Now the inclusions $\scrO_X(G_0) \subset W\subset V$ define an inclusion
$\scrO_X(G_0) \subset V$. Thus there is an effective divisor $E$ and an
inclusion $\scrO_X(G_0+E) \to V$ with torsion free cokernel. Let $G = G_0 +E$.
Thus there is an exact sequence
$$0 \to \scrO_X(G) \to V \to \scrO_X(-G+D)\otimes I_{\Cal Z}\to 0.$$
We claim that $(2G-D)\cdot M = 0$. Since $V$ is $L$-stable, $(2G-D)\cdot L <
0$, and since $L$ and $M$ are not separated by a wall of type $(w,p)$,
$(2G-D)\cdot M \leq 0$. On the other hand,
$$(2G-D)\cdot M = (2G_0 - D) \cdot M
+ 2(E\cdot M) = -m + 2(E\cdot M)=2(E\cdot M).$$
As $E$ is effective and $M$ is nef, $2(E\cdot M) \geq 0$. Thus $(2G-D)\cdot M =
0$. As $M^2 >0$, we must have $(2G-D)^2<0$. Using $p = (2G-D)^2 -4\ell(\Cal
Z)\leq (2G-D)^2$, we see that $2G-D$ is a wall of type $(w,p)$.
Finally note that $\operatorname{Supp}\Cal Z\cap C \neq \emptyset$, for
otherwise we would have $V|C$ semistable. This concludes the proof of Lemma
2.2.
\endproof
Returning to the proof of Lemma 2.1, we claim first that, given $k\gg 0$ and
$C\in |kM|$, for all $N$ sufficiently large the sections of
$\Cal L (C, \theta)^N$ define a base point free linear series on
$\overline{\frak M_L(w,p)}$. To see this, we first claim that, for $k \gg 0$,
and for a generic $C\in |kM|$, the restriction map $V\mapsto V|C$ defines a
rational map $r_C\: \frak M_L(w,p)\dasharrow \frak M(C)$, where $\frak M(C)$ is
the moduli space of equivalence classes of semistable rank two bundles on $C$
such that the parity of the determinant is even. It suffices to prove that, for
every component $N$ of $\frak M_L(w,p)$ there is one $V\in N$ and
one $C \in |kM|$ such that $V|C$ is semistable, for then the same will hold for
a Zariski open subset of $|kM|$. Now given $V$, choose a fixed $C_0 \in |kM|$.
If $V|C_0$ is not semistable, then by Lemma 2.2 there is an exact sequence
$$0 \to \scrO_X(G) \to V \to \scrO_X(-G+D)\otimes I_{\Cal Z} \to 0,$$
where $\Cal Z$ is a zero-dimensional subscheme of $X$ meeting $C_0$. Choosing
$C$ to be a curve in $|kM|$ disjoint from $\Cal Z$, which is possible since
$|kM|$ is base point free, it follows that the restriction $V|C$ is semistable.
For $C$ fixed, let
$$\align
B_C= \{\,V \in \overline{\frak M_L(w,p)}:&\text{ either $V$ is not locally free
over some point of $C$}\\
&\text{ or $V|C$ is not semistable }\,\}.
\endalign$$
By the openness of stability and local freeness, the set $B_C$ is a closed
subset of $\overline{\frak M_L(w,p)}$ and
$r_C$ defines a morphism from
$\overline{\frak M_L(w,p)} -B_C$ to
$\frak M(C)$. Standard estimates (cf\. [10], [12], [32], [24], [27]) show
that, possibly after modifying the constant $A$ introduced at the beginning of
the proof of Theorem 1.4, the codimension of
$B_C$ is at least two in $\overline{\frak M_L(w,p)}$ provided that $p\leq A$
(where as usual $A$ is independent of $k$ and depends only on $X$ and $M$).
Indeed the set of bundles
$V$ which fit into an exact sequence
$$0 \to \scrO_X(G) \to V \to \scrO_X(D-G)\otimes I_{\Cal Z} \to 0,$$
where $G$ is a divisor such that $(2G-D)\cdot M = 0$,
may be parametrized by a scheme of dimension $-\frac34p + O(\sqrt{|p|})$ by
e\.g\. [12], Theorem 8.18. Moreover the constant implicit in the notation
$O(\sqrt{|p|})$ can be chosen uniformly over $\Cal N$. The case of
nonlocally free
$V$ is taken care of by assumption (2) in the discussion of the constant $A$:
it follows from standard deformation theory (see again [12], [24]) that at a
generic point of the locus of nonlocally free sheaves corresponding to the
semistable torsion free sheaf $V$ the deformations of $V$ are versal for the
local deformations of the singularities of $V$. Thus for a general nonlocally
free $V$,
$V$ has just one singular point which is at a general point of $X$ and so does
not lie on $C$. Thus the set of
$V$ which are not locally free at some point of $C$ has codimension at least
two
(in fact exactly two) in $\overline{\frak M_L(w,p)}$.
Let
$\Cal L_C$ be the determinant line bundle on $\frak M(C)$ associated to the
line bundle $\theta$ (see for instance [15, Chapter 5, Section 2]). Then by
definition the pullback via
$r_C$ of
$\Cal L_C$ is the restriction of $\Cal L (C, \theta)$ to $\overline{\frak
M_L(w,p)} -B_C$. Since $B_C$ has codimension two, the sections of
$\Cal L_C^N$ pull back to sections of $\Cal L (C, \theta)^N$ on
$\overline{\frak M_L(w,p)}$. Since $\Cal L_C$ is ample, given $V\in
\overline{\frak M_L(w,p)} -B_C$, there exists an $N$ and a section of $\Cal
L_C^N$ not vanishing at
$r_C(V)$, and thus there is a section of $\Cal L (C, \theta)^N$ not vanishing
at $V$. Moreover by [23], for all smooth $C'\in |kM|$ and choice of an
appropriate line bundle $\theta '$ on $C'$, there is an isomorphism $\Cal L (C,
\theta)^N \cong
\Cal L(C', \theta ')^N$. Next we claim that, for every $V\in
\overline{\frak M_L(w,p)}$, there exists a $C$ such that $V$ is locally free
over $C$ and $V|C$ is semistable. Given $V$, it fails to be locally free
at a finite set of points, and its double dual $W$ is again semistable. Thus
applying the above to $W$, and again using the fact that $|kM|$ has no base
points, we can find $C$ such that $V$ is locally free over $C$ and such that
$V|C = W|C$ is semistable. Thus, given $V$, there exists an $N$ and a section
of $\Cal L (C, \theta)^N$ which does not vanish at $V$. Since $\overline{\frak
M_L(w,p)}$ is of finite type, there exists an $N$ which works for all $V$, so
that the linear system corresponding to $\Cal L (C, \theta)^N$ has no base
points.
Finally we must show that, for $k\gg 0$, the morphism induced by $\Cal L (C,
\theta)^N$ is in fact generically finite for $N$ large. We claim that it
suffices to show that the restriction of the rational map $r_C$ to
$\overline{\frak M_L(w,p)} -B_C$ is generically finite (it is here that we must
use the condition on the singularities of
$\bar X$ in the statement of Theorem 1.4). Supposing this to be the case, and
fixing a $V \in \overline{\frak M_L(w,p)} -B_C$ for which
$r_C^{-1}(r_C(V))$ is finite, we consider the intersection of all the divisors
in $\Cal L (C, \theta)^N$ containing $V$, where $N$ is chosen so that $\Cal
L_C^N$ is very ample. This intersection always contains $V$ and is a subset of
$r_C^{-1}(r_C(V)) \cup B_C$. In particular $V$ is an isolated point of the
fiber, and so the morphism defined by $\Cal L (C, \theta)^N$ cannot have all
fibers of purely positive dimension. Thus it is generically finite.
To see that $r_C$ is generically finite, we shall show that, for generic
$V$, the restriction map
$$r\: H^1(X;\operatorname{ad}V) \to H^1(C;\operatorname{ad}V|C)$$
is injective. The map $r$ is just the differential of the map $r_C$ from $\frak
M_L(w,p)$ to $\frak M(C)$ at the point corresponding to $V$, and so if $r$ is
injective at a generic $V$, then $r_C$ is generically finite. Now the kernel of the map
$r$ is a quotient of
$H^1(X; \operatorname{ad}V\otimes \scrO_X(-C))$, and we need to find
circumstances where this group is zero, at least if
$C\in |kM|$ for
$k$ sufficiently large. By Serre duality it suffices to show that $H^1(X;
\operatorname{ad}V\otimes \scrO_X(C)\otimes K_X)=0$ for $k$ sufficiently large.
By applying the Leray spectral sequence to the morphism
$\varphi\: X\to \bar X$, it suffices to show that
$$H^1(\bar X;R^0\varphi _* (\operatorname{ad}V\otimes \scrO_X(C)\otimes
K_X)) =0$$
and that
$R^1\varphi _* (\operatorname{ad}V\otimes \scrO_X(C)\otimes
K_X)=0$. Now $M$ is the pullback of an ample line bundle $\bar M$ on
$\bar X$, and $\scrO_X(C)$ is the pullback of $(\bar M)^{\otimes k}$. Thus for
fixed $V$ and $k \gg 0$,
$$\align
&H^1(\bar X;R^0\varphi _* (\operatorname{ad}V\otimes \scrO_X(C)\otimes
K_X)) \\=
&H^1(\bar X;R^0\varphi _* (\operatorname{ad}V\otimes
K_X)\otimes (\bar M^k)) =0.
\endalign$$
Moreover $R^1\varphi _* (\operatorname{ad}V\otimes \scrO_X(C)\otimes
K_X) = R^1\varphi _* (\operatorname{ad}V\otimes
K_X)\otimes (\bar M^k)$, so that it is enough to show that $R^1\varphi _*
(\operatorname{ad}V\otimes K_X)=0$. By the formal functions theorem,
$$R^1\varphi _* (\operatorname{ad}V\otimes K_X) = \varprojlim _mH^1(mZ;
\operatorname{ad}V\otimes K_X |mZ),$$ where $Z = \bigcup Z_i$ is the union of
the connected components
$Z_i$ of the one-dimensional fibers of
$\varphi$. Thus it suffices to show that, for all $i$ and all positive integers
$m$,
$H^1(mZ_i; \operatorname{ad}V\otimes K_X |mZ_i) = 0$. Now by the adjunction
formula
$\omega _{mZ_i} = K_X\otimes \scrO_X(mZ_i)|mZ_i$, where $\omega _{mZ_i}$ is the
dualizing sheaf of the Gorenstein scheme $mZ_i$. Thus $K_X|mZ_i =
\scrO_X(-mZ_i) |mZ_i\otimes\omega _{mZ_i}$ and we must show the vanishing of
$$H^1(mZ_i; (\operatorname{ad}V\otimes \scrO_X(-mZ_i)) |mZ_i\otimes\omega
_{mZ_i}).$$ By Serre duality, it suffices to show that, for all $m>0$,
$$H^0(mZ_i; (\operatorname{ad}V\otimes \scrO_X(mZ_i)) |mZ_i)=0.$$
We shall deal with this problem in the next subsection.
\medskip
\noindent {\bf Remark.} (1) Instead of arguing that the restriction map $r_C$
was generically finite, one could also check that it was generically one-to-one
by showing that for generic $V_1$, $V_2$, the restriction map
$$H^0(X; Hom (V_1, V_2)) \to H^0(C; Hom (V_1, V_2)|C)$$
is surjective (since then an isomorphism from $V_1|C$ to $V_2|C$ lifts to a
nonzero map from $V_1$ to $V_2$, necessarily an isomorphism by stability). In
turn this would have amounted to showing that
$H^1(X; Hom(V_1, V_2)\otimes \scrO_X(-C))=0$ for generic $V_1$ and $V_2$, and
this would have been essentially the same argument.
\smallskip
(2) Suppose that $\varphi \: X\to \bar X$ is the blowup of a smooth surface
$\bar X$ at a point $x$, and that $M$ is the pullback of an ample divisor on
$\bar X$. Let
$Z\cong\Pee ^1$ be the exceptional curve. In this case, if $c_1(V)\cdot Z$ is
odd, say $2a+1$, then the generic behavior for $V|Z$ is $V|Z \cong \scrO_{\Pee
^1}(a)\oplus
\scrO_{\Pee ^1}(a+1)$ and the restriction map
exhibits $\frak M_L(w,p)$ (generically) as a $\Pee ^1$-bundle over its image
(see for instance [5]). Thus the hypothesis that $\varphi$ contracts no
exceptional curve is essential.
\ssection{2.3. Restriction of stable bundles to certain curves}
Let us recall the
basic properties of rational and minimally elliptic singularities. Let $x$ be a
normal singular point on a complex surface $\bar X$, and let $\varphi \: X \to
\bar X$ be the minimal resolution of singularities of $\bar X$. Suppose that
$\varphi ^{-1}(x) = \bigcup _iD_i$. The singularity is a {\sl rational\/}
singularity if
$(R^1\varphi _*\scrO_X)_x = 0$. Equivalently, by [1], $x$ is rational if and
only if, for every choice of nonnegative integers $n_i$ such that at least one
of the $n_i$ is strictly positive, if we set $Z = \sum _in_iD_i$, the
arithmetic genus $p_a(Z)$ of the effective curve $Z$ satisfies $p_a(Z) \leq 0$.
Here $p_a(Z) = 1 - \chi (\scrO_Z) = 1 - h^0(\scrO_Z) + h^1(\scrO_Z)\leq
h^1(\scrO_Z)$; moreover we have the adjunction formula
$$p_a(Z) = 1 + \frac12 (K_X+Z)\cdot Z.$$
Now every minimal resolution of a normal surface singularity $x$ has a {\sl
fundamental cycle\/} $Z_0$: an effective cycle supported in the
set $\varphi ^{-1}(x)$, satisfying $Z_0 \cdot D_i \leq 0$ for all $i$ and
$Z_0 \cdot D_i < 0$ for at least one $i$, and minimal with
respect to these properties. We may find $Z_0$ as follows [22]: start
with an arbitrary component $A_1$ of $\varphi ^{-1}(x)$ and set $Z_1= A_1$.
Now either $Z_0 = A_1$ or there exists another component $A_2$ with $Z_1\cdot
A_2>0$. Set $Z_2 = Z_1 + A_2$ and continue this process. Eventually we reach
$Z_k = Z_0$. Such a sequence $A_1, \dots , A_k$ with $Z_i = \sum _{j\leq
i}A_j$ and $Z_i\cdot A_{i+1} >0$, $Z_k = Z_0$ is called a {\sl computation
sequence}. By a theorem of Artin [1],
$x$ is rational if and only if $p_a(Z_0) \leq 0$, where $Z_0$ is the
fundamental
cycle, if and only if $p_a(Z_0) = 0$. Moreover, if
$x$ is a rational singularity, then every component $D_i$ of $\varphi ^{-1}(x)$
is a smooth rational curve, the
$D_i$ meet transversally at at most one point, and the dual graph of $\varphi
^{-1}(x)$ is contractible.
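For instance, if $x$ is a rational double point of type $A_k$, $k\geq 2$, so
that $\varphi ^{-1}(x) = D_1\cup \dots \cup D_k$ is a chain of smooth rational
curves with $D_i^2 = -2$ and $D_i\cdot D_{i+1} = 1$, then the fundamental cycle
is the reduced cycle $Z_0 = \sum _iD_i$: here $Z_0\cdot D_i = 0$ for $1<i<k$,
$Z_0\cdot D_1 = Z_0\cdot D_k = -1$, and $D_1, D_2, \dots, D_k$ is a computation
sequence, with $p_a(Z_0) = 1 + \frac12(K_X+Z_0)\cdot Z_0 = 1 + \frac12Z_0^2 =
0$ as Artin's theorem requires.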
Next we recall the properties of minimally elliptic singularities [22]. A
singularity $x$ is {\sl minimally elliptic\/} if and only if there exists a
{\sl minimally elliptic cycle\/} $Z$ for $x$, in other words a cycle $Z= \sum
_in_iD_i$ with all $n_i>0$ such that $p_a(Z) = 1$ and $p_a(Z') \leq 0$ for all
nonzero effective cycles $Z'<Z$ (i\.e\. such that $Z' = \sum _in_i'D_i$ with
$0\leq n_i'\leq n_i$ and $Z'\neq Z$). In this case it follows that $Z=Z_0$ is
the fundamental cycle for $x$, and $(K_X+Z_0)\cdot D_i = 0$ for every component
$D_i$ of
$\varphi ^{-1}(x)$. If $Z_0$ is reduced, i\.e\. if $n_i = 1$ for all $i$, then
the possibilities for $x$ are as follows:
\roster
\item $\varphi ^{-1}(x)$ is an irreducible curve of arithmetic genus one, and
thus is either a smooth elliptic curve or a singular rational curve with
either a node or a cusp;
\item $\varphi ^{-1}(x)= \bigcup _{i=1}^tD_i$ is a cycle of $t\geq 2$ smooth
rational curves meeting transversally, i\.e\. $D_i\cdot D_{i+1} =1$, $D_i
\cdot D_j \neq 0$ if and only if $i \equiv j \pm 1 \mod t$, except for $t=2$
where $D_1 \cdot D_2 = 2$;
\item $\varphi ^{-1}(x)= D_1\cup D_2$, where the $D_i$ are smooth
rational, $D_1\cdot D_2 = 2$ and
$D_1\cap D_2 $ is a single point (so that $\varphi ^{-1}(x)$ has a tacnode
singularity) or $\varphi ^{-1}(x)= D_1\cup D_2 \cup D_3$ where the $D_i$ are
smooth rational, $D_i\cdot D_j =
1$ but
$D_1\cap D_2 \cap D_3$ is a single point (the three curves meet at a common
point).
\endroster
Here $x$ is called a {\sl simple elliptic singularity\/} in case $\varphi
^{-1}(x)$ is a smooth elliptic curve, a {\sl cusp singularity\/} if $\varphi
^{-1}(x)$ is an irreducible rational curve with a node or a cycle as in (2),
and a {\sl triangle singularity\/} in the remaining cases.
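For instance, the cone over a smooth plane cubic is a simple elliptic
singularity: the exceptional divisor of its minimal resolution is a smooth
elliptic curve $D$ with $D^2 = -3$, and here $Z_0 = D$ is itself the minimally
elliptic cycle, with $p_a(Z_0) = 1$.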
If $Z_0$ is not reduced, then all components $D_i$ of $\varphi ^{-1}(x)$ are
smooth rational curves meeting transversally and the dual graph of $\varphi
^{-1}(x)$ is contractible.
With this said, and using the discussion in the previous subsection, we will
complete the proof of Theorem 1.4 by showing that
$H^0(mZ_i; \operatorname{ad}V\otimes \scrO_X(mZ_i)|mZ_i) = 0$ for all $i$,
where $x_1, \dots, x_k$ are the singular points of $\bar X$ and $Z_i$ is an
effective cycle with $\operatorname{Supp}Z_i = \varphi ^{-1}(x_i)$. The precise
statement is as follows:
\theorem{2.3} Let $\varphi \: X \to \bar X$ be a birational morphism from $X$
to a normal projective surface $\bar X$, corresponding to a nef, big, and
eventually base point free divisor $M$. Let $w$ be the \rom{mod} $2$ reduction
of $[K_X]$, and suppose that
\roster
\item"{(i)}" $\varphi$ contracts no exceptional curve; in other words, if $E$
is an exceptional curve of the first kind on $X$, then $M\cdot E>0$.
\item"{(ii)}" $\bar X$ has only rational and minimally elliptic singularities.
\endroster
Then there exists a constant $A$ depending only
on $X$, $M$, and $\Cal N$ with the following property: for every singular point
$x$ of $\bar X$, there exists an effective cycle $Z$ with
$\operatorname{Supp}Z = \varphi^{-1}(x)$ such that, for all ample line
bundles $L$ in
$\Cal N$, all
$p$ with $p \leq A$, and generic bundles $V$ in $\frak M_L(w,p)$,
$$H^0(mZ; \operatorname{ad}V\otimes \scrO_X(mZ)|mZ) = 0$$
for every positive integer $m$.
\endstatement
The statement of (i) may be rephrased by saying that $X$ is the {\sl minimal
resolution\/} of $\bar X$. As $\operatorname{ad}V \subset Hom(V,V)$,
it suffices to prove that $H^0(mZ; Hom(V,V)\otimes \scrO_X(mZ)|mZ) = 0$. We
will consider the case of rational singularities and minimally elliptic
singularities separately. Let us begin with the proof for rational
singularities. Let $\varphi ^{-1}(x) = \bigcup _iD_i$, where each
$D_i$ is a smooth rational curve. By the assumption (4) of the previous
subsection, we can assume that the constant $A$ has been chosen so that
$V|D_i$ is a generic bundle over $D_i\cong \Pee ^1$ for every
$i$. Thus either there exists an integer $a$ such that $V|D_i
\cong \scrO_{\Pee^1}(a)
\oplus \scrO_{\Pee^1}(a+1)$, if $w\cdot D_i \neq 0$, or there exists an $a$
such that $V|D_i \cong \scrO_{\Pee^1}(a) \oplus
\scrO_{\Pee^1}(a)$, if $w\cdot D_i = 0$. Next, we have the following claim:
\claim{2.4} Suppose that $x$ is a rational singularity. Let $\varphi\:X \to
\bar X$ be a resolution of $x$. There exists a sequence of curves $B_0, \dots,
B_k$, such that $B_i \subseteq \varphi ^{-1}(x)$ for all $i$, with the
following property:
\roster
\item Let $C_i = \sum _{j\leq i}B_j$. Then $B_i \cdot C_i \leq B_i^2 + 1$.
\item $C_k = Z_0$, the fundamental cycle of $x$.
\endroster
\endstatement
\proof Since $(K_X+Z_0)\cdot Z_0<0$, there must exist
a component $B^{(0)} = D_i$ of $\operatorname{Supp}Z_0 = \varphi ^{-1}(x)$ such
that $(K_X+Z_0)\cdot B^{(0)} < 0$. Thus
$$Z_0 \cdot B^{(0)} < -K_X\cdot B^{(0)} = (B^{(0)})^2 + 2.$$
Set $Z_1 = Z_0 - B^{(0)}$. Suppose that $Z_1$ is nonzero. Then $Z_1$ is again
effective, and by Artin's criterion $p_a(Z_1) \leq 0$. Thus by repeating the
above argument there is a
$B^{(1)}$ contained in the support of $Z_1$ such that $Z_1 \cdot B^{(1)} <
(B^{(1)})^2 + 2$. Continuing, we eventually find $B^{(2)}, \dots, B^{(k)}$
with $B^{(i)}$ contained in the support of $Z_i$, $Z_{i+1} = Z_i -B^{(i)}$ and
$Z_k = B^{(k)}$, and such that $Z_i \cdot B^{(i)} < (B^{(i)})^2 +2$. If we now
relabel $B^{(i)} = B_{k-i}$, then $Z_i = \sum _{j\leq k-i}B_j$ and the curves
$B_0,
\dots, B_k$ are as claimed.
\endproof
Returning to the proof of Theorem 2.3, we first prove that
$$H^0(Z_0; Hom(V, V)\otimes \scrO_X(Z_0)|Z_0) = 0.$$
We have the exact sequence
$$0 \to \scrO_{C_{i-1}}(C_{i-1}) \to \scrO_{C_i}(C_i) \to \scrO_{B_i}(C_i) \to
0.$$
Tensor this sequence by $Hom (V, V)$. We shall prove by induction that
$$H^0(Hom (V, V) \otimes \scrO_{C_i}(C_i)) = 0$$ for all $i$. It suffices to
show that
$H^0(Hom (V, V) \otimes \scrO_{B_i}(C_i)) = 0$ for all $i$. Now
$\scrO_{B_i}(C_i)$ is a line bundle on the smooth rational curve $B_i$. If
$V|B_i \cong \scrO_{\Pee^1}(a)
\oplus \scrO_{\Pee^1}(a+1)$, then $w\cdot B_i \neq 0$ and so $B_i^2$ is
odd. Since $B_i$ is not an exceptional curve, $B_i^2
\leq -3$ and so
$B_i \cdot C_i \leq -2$. Thus, as
$$Hom (V, V)|B_i = \scrO_{\Pee^1}(-1) \oplus \scrO_{\Pee^1}
\oplus \scrO_{\Pee^1} \oplus \scrO_{\Pee^1}(1),$$
we see that $H^0(Hom (V, V)\otimes \scrO_{B_i}(C_i)) = 0$. Likewise if
$V|B_i \cong \scrO_{\Pee^1}(a) \oplus \scrO_{\Pee^1}(a)$, then
using
$B_i \cdot C_i \leq -1$ we again have $H^0(Hom (V, V) \otimes
\scrO_{B_i}(C_i)) = 0$. Thus by induction
$$H^0(Hom (V, V) \otimes \scrO_{C_k}(C_k)) = H^0(Hom (V, V) \otimes
\scrO_{Z_0}(Z_0))=0.$$
The vanishing of $H^0(mZ_0; Hom(V, V)\otimes \scrO_X(mZ_0)|mZ_0)$ is
similar, using instead the exact sequence
$$0 \to \scrO_{mZ_0+C_{i-1}}(mZ_0 + C_{i-1}) \to \scrO_{mZ_0 +C_i}(mZ_0 + C_i)
\to \scrO_{B_i}(mZ_0 + C_i) \to 0.$$
This concludes the proof in the case of a rational singularity.
For minimally elliptic
singularities, we shall deduce the theorem from the following more general
result:
\theorem{2.5} Let $\varphi \: X \to \bar X$ be a birational morphism from $X$
to a normal projective surface $\bar X$, corresponding to a nef, big, and
eventually base point free divisor $M$. Let $w$ be an arbitrary element of
$H^2(X; \Zee/2\Zee)$, and suppose that
\roster
\item"{(i)}" $\varphi$ contracts no exceptional curve; in other words, if $E$
is an exceptional curve of the first kind on $X$, then $M\cdot E>0$.
\item"{(ii)}" If $D$ is a component of $\varphi^{-1}(x)$ such that $w\cdot D
\neq 0$, then $Z_0 \cdot D <0$,
where $Z_0$ is the fundamental cycle of $\varphi^{-1}(x)$.
\endroster
Then the conclusions of Theorem \rom{2.3} hold for the moduli space $\frak
M_L(w,p)$ for all $p\ll 0$. In particular the conclusions of Theorem \rom{2.3}
hold if
$\varphi^{-1}(x)$ is irreducible.
\endstatement
\demo{Proof that \rom{(2.5)} implies \rom{(2.3)}} We must show that every
minimally elliptic singularity satisfies the hypotheses of Theorem 2.5(ii),
provided that
$w$ is the mod 2 reduction of $K_X$. Suppose that $x$ is minimally elliptic and
that $w\cdot D \neq 0$. Thus $K_X\cdot D$ is odd. Moreover if $D$ is smooth
rational then $D^2\neq -1$ and $K_X\cdot D \geq 0$ so that $K_X\cdot D \geq 1$.
Now $(K_X+Z_0)\cdot D = 0$. Thus $Z_0
\cdot D = -(K_X\cdot D)\leq -1$. Likewise if $p_a(D) \neq 0$, so that $D$ is
not a smooth rational curve, then
$\varphi ^{-1}(x) = D$ is an irreducible curve and (2.3) again follows.
\endproof
\demo{Proof of Theorem \rom{2.5}} We begin with a lemma on sections
of line bundles over effective cycles supported in $\varphi^{-1}(x)$, which
generalizes (2.6) of [22]:
\lemma{2.6}
Let $Z_0$ be the fundamental cycle of $\varphi ^{-1}(x)$ and let $\lambda$
be a line bundle on $Z_0$ such that $\deg (\lambda|D)\leq 0$ for each component
$D$ of the support of $Z_0$. Then either $H^0(Z_0; \lambda) =0$ or $\lambda =
\scrO_{Z_0}$ and $H^0(Z_0; \lambda) \cong \Cee$.
\endstatement
\proof Choose a computation sequence for $Z_0$, say $A_1, A_2, \dots, A_k$.
Thus, if we set $Z_i = \sum _{j\leq i}A_j$, then $Z_i\cdot A_{i+1} >0$, and
$Z_k = Z_0$. Now we have an exact sequence
$$0 \to \scrO_{A_{i+1}}(-Z_i)\to \scrO_{Z_{i+1}} \to \scrO_{Z_i} \to 0.$$
Thus $\deg (\scrO_{A_{i+1}}(-Z_i) \otimes \lambda|A_{i+1})<0$. It follows that
$H^0(\scrO_{Z_{i+1}}\otimes \lambda) \subseteq H^0(\scrO_{Z_i}\otimes
\lambda)$ for all $i$. By induction $\dim H^0(\scrO_{Z_i}\otimes
\lambda) \leq 1$ for all $i$, $1\leq i \leq k$. Thus $\dim H^0(Z_0; \lambda)
\leq 1$. Moreover, if $\dim H^0(Z_0; \lambda) = 1$, then the natural map
$$H^0(\scrO_{Z_{i+1}}\otimes \lambda) \to H^0(\scrO_{Z_i}\otimes
\lambda)$$ is an isomorphism for all $i$, and so the induced map $H^0(Z_0;
\lambda) \to H^0(A_1; \lambda |A_1)$ is an isomorphism and $\dim H^0(A_1;
\lambda |A_1) =1$. Thus $\lambda |A_1$ is trivial and a nonzero section of
$H^0(Z_0; \lambda)$ restricts to a generator of $\lambda |A_1$. Since we can
begin a computation sequence with an arbitrary choice of $A_1$, we see that a
nonzero section $s$ of $H^0(Z_0; \lambda)$ restricts to a nonvanishing section
of $H^0(D; \lambda |D)$ for every $D$ in the support of $\varphi ^{-1}(x)$.
Thus
the map $\scrO_{Z_0} \to \lambda$ defined by $s$ is an isomorphism.
\endproof
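For later use, note a typical application: for $m\geq 1$ the line bundle
$\lambda = \scrO_{Z_0}(mZ_0)$ has degree $m(Z_0\cdot D)\leq 0$ on every
component $D$ of the support of $Z_0$ and strictly negative degree on at least
one component, hence is nontrivial, and Lemma 2.6 then gives $H^0(Z_0;
\scrO_{Z_0}(mZ_0)) = 0$.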
\noindent {\bf Remark.} The lemma is also true if $\lambda$ is allowed to have
degree one on some components $D$ of $Z_0$ with $p_a(D)\geq 2$, provided that
$\lambda |D$ is general for these components, and a slight variation holds if
$\lambda$ is also allowed to have degree one on some components $D$ of $Z_0$
with $p_a(D)=1$.
\medskip
We next construct a bundle $W$ over $Z_0$ with certain vanishing properties:
\lemma{2.7} Suppose that $\varphi \: X \to \bar X$ is the minimal resolution of
the normal surface singularity $x$. Let
$\mu$ be a line bundle over the scheme
$Z_0$. Suppose further that, if $D$ is a component of $\varphi^{-1}(x)$ such
that $\deg (\mu |D)$ is odd, then
$Z_0 \cdot D <0$, where $Z_0$ is the fundamental cycle of $\varphi^{-1}(x)$.
Then there exists a rank two vector bundle
$W$ over $Z_0$ with $\det W = \mu$ and such that
$$H^0(Z_0; Hom(W,W)\otimes \scrO_X(mZ_0)|Z_0) = 0$$
for every $m\geq 1$.
\endstatement
\proof Let $\varphi ^{-1}(x) = \bigcup_iD_i$. Then there exists an
integer $a_i$ such that $\deg\mu |D_i = 2a_i$ or
$2a_i+1$, depending on whether $\deg (\mu| D_i)$ is even or odd. Since $\dim
Z_0 = 1$, the natural maps $\operatorname{Pic}Z_0 \to
\operatorname{Pic}(Z_0)_{\text{red}} \to \bigoplus _i\operatorname{Pic}D_i$ are
surjective. Thus we may choose a line bundle $L_1$ over $Z_0$ such that $\deg
(L_1|D_i) = a_i$. It follows that $\mu \otimes L_1^{\otimes
-2}|D_i$ is a line bundle over $D_i$ of degree zero or 1, and if it is of
degree 1, then $Z_0\cdot D_i <0$. Hence $\mu \otimes
L_1^{\otimes -2}\otimes \scrO_{Z_0}(Z_0)$ has degree at most zero on $D_i$ for
every $i$.
Set $L_2 = \mu \otimes L_1^{-1}$. Thus $L_1 \otimes L_2 = \mu$ and $\deg
(L_2|D_i) = a_i$ or $a_i +1$ depending on whether $\deg (\mu| D_i)$ is even or
odd. The line bundle $L_1^{-1}\otimes L_2=\mu \otimes
L_1^{\otimes -2}$ thus has degree zero on
those components $D_i$ such that $\deg (\mu| D_i)$ is even and 1 on the
components $D_i$ such that $\deg (\mu| D_i)$ is odd. Moreover $\deg
(L_1^{-1}\otimes L_2 \otimes \scrO_{Z_0}(mZ_0)|D_i) \leq 0$ for every $i$.
\claim{2.8} Under the assumptions of \rom{(2.7)}, there exists a
nonsplit extension
$W$ of $L_2$ by $L_1$ except in the case where $x$ is rational, $\deg
(\mu| D_i)$ is odd for at most one $i$, and the multiplicity of $D_i$ in $Z_0$
is one for such $i$, or $\chi (\scrO_{Z_0}) = 0$ and $\deg \mu |D_i$ is even
for
every $i$.
\endstatement
\proof A nonsplit extension exists if and only if $h^1(L_2^{-1}\otimes L_1)
\neq 0$. Now by the Riemann-Roch theorem applied to
$Z_0 =
\sum _in_iD_i$, we have
$$h^1(Z_0; L_2^{-1}\otimes L_1) = h^0(Z_0; L_2^{-1}\otimes L_1) -\sum _in_i\deg
(L_2^{-1}\otimes L_1|D_i) - \chi (\scrO_{Z_0}).$$
Here $\deg (L_2^{-1}\otimes L_1|D_i)= 0$ on those $D_i$ with $\deg (\mu| D_i)$
even and $=-1$ on the $D_i$ with $\deg (\mu| D_i)$ odd. Moreover
$h^0(\scrO_{Z_0}) = 1$ by Lemma 2.6 and so $\chi (\scrO_{Z_0})\leq 1$, with
$\chi (\scrO_{Z_0}) =1$ if and only if $x$ is rational. Thus
$$h^1(Z_0; L_2^{-1}\otimes L_1) \geq \sum \{\, n_i\: \deg (\mu| D_i) \text{ is
odd}\,\} - \chi (\scrO_{Z_0}).$$
Hence if $h^1(Z_0; L_2^{-1}\otimes L_1) =0$, then either $x$ is rational, $\deg
(\mu| D_i)$ is odd for at most one $i$, and for such $i$ the multiplicity of
$D_i$ in $Z_0$ is one, or $\deg (\mu |D_i)$ is even for all $i$ and $\chi
(\scrO_{Z_0}) = 0$.
\endproof
Returning to the proof of (2.7), choose $W$ to be a nonsplit extension of $L_2$
by $L_1$ if such exist, and set $W = L_1 \oplus L_2$ otherwise. To see that
$H^0(Z_0; Hom(W,W)\otimes \scrO_X(mZ_0)|Z_0) = 0$, we consider the two exact
sequences
$$\gather
0 \to L_1 \to W \to L_2 \to 0;\\
0 \to L_1 \otimes \scrO_{Z_0}(mZ_0)\to W\otimes \scrO_{Z_0}(mZ_0) \to
L_2\otimes \scrO_{Z_0}(mZ_0) \to 0.
\endgather$$
Clearly $H^0(Z_0; Hom(W,W)\otimes \scrO_X(mZ_0)|Z_0) = 0$ if
$$H^0(L_1^{-1}\otimes L_2 \otimes \scrO_{Z_0}(mZ_0)) = H^0(\scrO_{Z_0}(mZ_0)) =
H^0(L_2^{-1}\otimes L_1 \otimes \scrO_{Z_0}(mZ_0)) =0.$$
The line bundles $\scrO_{Z_0}(mZ_0)$ and $L_2^{-1}\otimes L_1 \otimes
\scrO_{Z_0}(mZ_0)$ have nonpositive degree on each $D_i$ and (since $Z_0 \cdot
D_i <0$ for some
$i$) have strictly negative degree on at least one component. Thus by Lemma 2.6
$H^0(\scrO_{Z_0}(mZ_0))$ and $H^0(L_2^{-1}\otimes L_1 \otimes
\scrO_{Z_0}(mZ_0))$ are both zero. Let us now consider the group
$H^0(L_1^{-1}\otimes L_2 \otimes \scrO_{Z_0}(mZ_0))$. By the hypothesis that
$Z_0\cdot D_i < 0$ for each $D_i$ such that $\deg (\mu |D_i)$ is odd, the line
bundle $L_1^{-1}\otimes L_2 \otimes \scrO_{Z_0}(mZ_0)$ has nonpositive degree
on all components $D_i$. Thus by Lemma 2.6 either $H^0(L_1^{-1}\otimes L_2
\otimes \scrO_{Z_0}(mZ_0)) = 0$ or $L_1^{-1}\otimes L_2 \otimes \scrO_{Z_0}
(mZ_0) \cong \scrO_{Z_0}$. Clearly this last case is only possible if $m=1$ and
$L_1 \cong L_2 \otimes \scrO_{Z_0}(Z_0)$, and if moreover $Z_0 \cdot D_i =0$ if
$\deg(\mu |D_i)$ is even and $Z_0 \cdot D_i =-1$ if $\deg(\mu
|D_i)$ is odd. As $Z_0\cdot D_i <0$ for at least one $i$, $\deg(\mu
|D_i)$ is odd for at least one $i$ as well. In this case, if the nonzero
section of $L_1^{-1}\otimes L_2 \otimes \scrO_{Z_0}(Z_0)$ lifts to give a map
$L_1\to W\otimes \scrO_{Z_0}(Z_0)$, then the image of
$L_1$ in $W\otimes \scrO_{Z_0}(Z_0)$ splits the exact sequence
$$0 \to L_1 \otimes \scrO_{Z_0}(Z_0)\to W\otimes \scrO_{Z_0}(Z_0) \to
L_2\otimes \scrO_{Z_0}(Z_0) \to 0.$$
Thus $W$ is also a split extension. By Claim 2.8, since $\deg (\mu |D_i)$ is
odd for at least one $i$, it must therefore be the case that $x$ is rational,
$\deg (\mu| D_i)$ is odd for exactly one $i$, and for such
$i$ the multiplicity of $D_i$ in $Z_0$ is one. Moreover $Z_0\cdot D_j \neq 0$
exactly when $j=i$ and in this case $Z_0 \cdot D_i = -1$. But as the
multiplicity of
$D_i$ in $Z_0$ is 1, we can write $Z_0 = D_i + \sum _{j\neq i}n_jD_j$, and thus
$$Z_0^2 = Z_0\cdot D_i = -1.$$
By a theorem of Artin [1], however, $-Z_0^2$ is the multiplicity of the
rational
singularity $x$. It follows that $x$ is a smooth point and $\varphi$ is the
contraction of a generalized exceptional curve, contrary to hypothesis. This
concludes the proof of (2.7).
\endproof
We may now finish the proof of (2.5). Start with a generic vector bundle $V_0
\in \frak M_L(w,p)$ on
$X$ satisfying the condition that $H^1(X; \operatorname{ad}V_0)\to H^1(Z_0;
\operatorname{ad}V_0|Z_0)$ is surjective. If $\mu = \det V_0|Z_0$, note that,
according to the assumptions of (2.5),
$\mu$ satisfies the hypotheses of Lemma 2.7. For $V\in \frak M_L(w,p)$, let
$$H(mZ_0) = H^0(mZ_0; \operatorname{ad}V\otimes \scrO_X(mZ_0)|mZ_0).$$
Using the exact sequence
$$0 \to H((m-1)Z_0) \to H(mZ_0) \to H^0(Z_0;\operatorname{ad}V\otimes
\scrO_X(mZ_0)|Z_0),$$
we see that it suffices to show that, for a generic $V$,
$H^0(Z_0;\operatorname{ad}V\otimes
\scrO_X(mZ_0)|Z_0) =0$ for all $m\geq 1$. For a fixed $m$, the condition that
$H^0(Z_0;\operatorname{ad}V\otimes
\scrO_X(mZ_0)|Z_0) \neq 0$ is a closed condition. Thus since the moduli space
cannot be a countable union of proper subvarieties, it will suffice to show
that the set of $V$ for which $H^0(Z_0;\operatorname{ad}V\otimes
\scrO_X(mZ_0)|Z_0) = 0$ is nonempty for every $m$. Let $\Cal S$ be the germ of
the versal deformation of $V_0|Z_0$ keeping $\det V_0|Z_0$ fixed. By the
assumption that the map from the germ of the versal deformation of $V_0$ to
that
of $V_0|Z_0$ is submersive, it will suffice to show that, for each $m\geq 1$,
the set of
$W\in \Cal S$ such that $H^0(Z_0;\operatorname{ad}W\otimes
\scrO_{Z_0}(mZ_0)) = 0$ is nonempty. One natural method for doing so is to
exhibit a deformation from $V_0|Z_0$ to the $W$ constructed in the course of
Lemma 2.7; roughly speaking this amounts to the claim that the ``moduli space"
of vector bundles on the scheme $Z_0$ is connected. Although we shall proceed
slightly differently, this is the main idea of the argument.
Choose an ample line bundle $\lambda$ on $Z_0$. After passing to some power, we
may assume that both $(V_0|Z_0)\otimes \lambda$ and $W\otimes \lambda$ are
generated by their global sections. A standard argument shows that, in this
case, both $V_0|Z_0$ and $W$ can be written as an extension of $\mu \otimes
\lambda$ by $\lambda ^{-1}$: Working with $W$ for example, we must show that
there is a map $\lambda ^{-1}\to W$, corresponding to a section of $W\otimes
\lambda$, such that the quotient is again a line bundle. It suffices to show
that there exists a section $s\in H^0(Z_0; W\otimes \lambda)$ such that, for
each $z\in Z_0$, $s(z)\neq 0$ in the fiber of $W\otimes \lambda$ over $z$. Now
for $z$ fixed, the set of $s \in H^0(Z_0; W\otimes \lambda)$ such that $s(z)
=0$
has codimension two in $H^0(Z_0; W\otimes \lambda)$ since $W\otimes \lambda$ is
generated by its global sections. Thus the set of $s \in H^0(Z_0; W\otimes
\lambda)$ such that $s(z) =0$ for some $z\in Z_0$ has codimension at least one,
and so there exists an $s$ as claimed.
Now let $W_0 = \lambda ^{-1} \oplus (\mu \otimes \lambda)$. Let $(\Cal S_0,
s_0)$ be the germ of the versal deformation of $W_0$ (with fixed determinant
$\mu$). As
$Z_0$ has dimension one, $\Cal S_0$ is smooth. Both $V_0|Z_0$ and $W$
correspond
to extension classes $\xi, \xi ' \in \operatorname{Ext}^1(\mu \otimes \lambda,
\lambda ^{-1})$. Replacing, say, $\xi$ by the class $t\xi, t\in \Cee ^*$, gives
an isomorphic bundle. In this way we obtain a family of bundles $\Cal V$ over
$Z_0 \times \Cee$, such that the restriction of $\Cal V$ to $Z_0 \times t$ is
$V_0|Z_0$ if $t\neq 0$ and is $W_0$ if $t=0$. Hence in the germ $\Cal S_0$
there is a subvariety containing $s_0$ in its closure and consisting of bundles
isomorphic to
$V_0|Z_0$, and similarly for $W$. As $H^0(Z_0;\operatorname{ad}W\otimes
\scrO_{Z_0}(mZ_0)) = 0$, the locus of bundles $U$ in $\Cal S_0$ for which
$H^0(Z_0;\operatorname{ad}U\otimes
\scrO_{Z_0}(mZ_0)) = 0$ is a dense open subset. Since $\Cal S_0$ is a smooth
germ, it follows that there is a small deformation of $V_0|Z_0$ to such a
bundle. Thus the generic small deformation $U$ of $V_0|Z_0$ satisfies $H^0(Z_0;
\operatorname{ad}U\otimes \scrO_{Z_0}(mZ_0)) = 0$, and so the generic $V\in
\frak M_L(w,p)$ has the property that
$H^0(Z_0;\operatorname{ad}V\otimes
\scrO_X(mZ_0)|Z_0) = 0$ for all $m\geq 1$ as well. As we saw above, this
implies the vanishing of $H^0(mZ_0; \operatorname{ad}V\otimes
\scrO_X(mZ_0)|mZ_0)$.
\endproof
\noindent {\bf Remark.} (1) Suppose that $\bar X$ is a singular surface, but
that $\varphi \: X \to \bar X$ is not the minimal resolution. We may still
define the fundamental cycle $Z_0$ for the resolution $\varphi$. Moreover it is
easy to see that $Z_0 \cdot E = 0$ for every component of a generalized
exceptional curve contained in $\varphi ^{-1}(x)$. Thus the hypothesis of (ii)
of Theorem 2.5 implies that $w\cdot E=0$ for such curves.
\smallskip
(2) We have only considered contractions of a very special type, and have
primarily been interested in the case where $w$ is the mod two reduction of
$[K_X]$. However it is natural to ask if the analogues of Theorems 2.3 and 2.5
(and thus Theorem 1.4) hold for more general contractions and choices of $w$,
provided of course that no smooth rational curve of self-intersection $-1$ is
contracted to a point. Clearly the proof of Theorem 2.5 applies to a much wider
class of singularities. Indeed a little work shows that the proof goes over
(with some modifications in case there are components of arithmetic genus one)
to handle the case where we need only assume condition (ii) of (2.5) for those
components $D$ which are smooth rational curves. Another case where it is easy
to check that the conclusions of (2.5) hold is where $w$ is arbitrary and the
dual graph of the singularity is of type $A_k$. We make the following rather
natural conjecture:
\medskip
\noindent {\bf Conjecture 2.9.} The conclusions of Theorem \rom{1.4} hold for
arbitrary choices of $w$ and $\varphi$, provided that $\varphi$ does not
contract any exceptional curves of the first kind.
\section{3. Nonexistence of embedded $\boldkey 2$-spheres}
\ssection{3.1. A base point free theorem}
\theorem{3.1} Let $\pi \: X \to X'$ be a birational morphism from
the smooth surface $X$ to a normal surface $X'$, not necessarily projective.
Suppose that
$X$ is a minimal surface of general type, and that $p\in X'$ is
an isolated singular point which is a nonrational singularity. Let $\pi
^{-1}(p) = \bigcup _i D_i$. Then:
\roster
\item"{(i)}" There exist nonnegative integers $n_i$ with
$n_i >0$ for at least one $i$ such that $K_X+\sum _in_i D_i$ is nef and big.
\item"{(ii)}" Suppose that $q(X) =0$. Then there further exists a choice of $D=
\sum _in_iD_i$ satisfying
\rom{(i)} with $D$ connected and such that there exists a section of
$K_X + D$ which is nowhere vanishing in a neighborhood of
$$E=\bigcup\{\,D_j: (K_X + D)\cdot D_j = 0\,\}.$$
In this case either $E= \emptyset$ or $E=\operatorname{Supp}D$ and $D$ is the
fundamental cycle of the minimal resolution of a minimally elliptic
singularity.
\item"{(iii)}" With $D$ satisfying \rom{(i)} and \rom{(ii)}, the linear system
$K_X+D$ is eventually base point free. Moreover, if $\varphi \: X \to \bar X$
is the associated contraction, then $\bar X$ is a normal projective surface all
of whose singular points are either rational or minimally elliptic.
\endroster
\endstatement
\proof To prove (i), consider the set of all effective cycles $D = \sum
_ia_iD_i$, where the
$a_i$ are nonnegative integers, not all zero, and such that $h^1(\scrO_D) \neq
0$. This set is not empty by the definition of a nonrational singularity, and
is partially ordered by $\leq$, where $D'\leq D$ if $D-D'$ is effective. Choose
a minimal element $D$ in the set. This means that $D = \sum _in_iD_i$ where
either $n_i = 1$ for exactly one $i$ and $h^1(\scrO_{D_i}) \neq 0$, or for
every irreducible $D_i$ contained in the support of $D$, $D-D_i=D'$ is
effective and $h^1(\scrO_{D'})= 0$. If $D''$ is then any nonzero effective
cycle with
$D''< D$, then there exists an $i$ such that $\scrO_{D-D_i} \to \scrO_{D''}$ is
surjective. By a standard argument, $H^1(\scrO_{D-D_i}) \to H^1(\scrO_{D''})$
is surjective and thus
$h^1(\scrO_{D''})=0$ for every nonzero effective $D''<D$. Finally note that $D$
is connected, since otherwise we could replace $D$ by some connected component
$D_0$ with $h^1(\scrO_{D_0})\neq 0$.
Next we claim that $K_X+D$ is nef. Since $K_X$ is nef, it is clear that
$(K_X+D)\cdot C \geq 0$ for every irreducible curve $C$ not contained in
the support of
$D$, and moreover, for such curves $C$, $(K_X+D)\cdot C = 0$ if and only if $C$
is a smooth rational curve of self-intersection $-2$ disjoint from the support
of $D$. Next suppose that $D_i$ is a curve in the support of $D$ and consider
$(K_X+D)\cdot D_i$. If
$D= D_i$ then $K_X+D_i |D_i = \omega _{D_i}$, the dualizing sheaf of $D_i$, and
this has degree $2p_a(D_i)-2 \geq 0$ since $p_a(D_i) =h^1(\scrO_{D_i}) > 0$.
Otherwise let
$D' = D-D_i$ and consider the exact sequence
$$0 \to \scrO_{D_i}(-D') \to \scrO_D \to \scrO_{D'} \to 0.$$
Thus the natural map $H^1(\scrO_{D_i}(-D')) \to H^1(\scrO_D)$ is surjective
since $H^1(\scrO_{D'})=0$, and so $H^1(\scrO_{D_i}(-D'))\neq 0$ as
$H^1(\scrO_D)\neq 0$. By duality $H^0(D_i; \omega _{D_i}\otimes
\scrO_{D_i}(D'))\neq 0$. On the other hand $\omega _{D_i} = K_X+D_i|D_i$, and
so $\deg (K_X+D'+D_i)|D_i = (K_X+D)\cdot D_i \geq 0$; moreover $(K_X+D)\cdot
D_i
= 0$ only if the divisor class $K_X+D|D_i$ is trivial.
Next, $K_X+D$ is big, since
$$(K_X+D)^2 = K_X\cdot (K_X+D)+(K_X+D)\cdot D\geq K_X^2 + K_X\cdot D \geq
K_X^2>0.$$
To see (ii), let $E = \bigcup\{\,D_j\subseteq \operatorname{Supp}D: (K_X +
D)\cdot D_j = 0\,\}$. We shall also view $E$ as a reduced divisor. We claim
that $\scrO_E(K_X+D) =
\scrO_E$. First assume that $E = D$ (and thus in particular that $D$ is
reduced); in this case we need to show that
$\omega _D = \scrO_D$. By assumption
$D$ is connected. Then $\omega _D$ has degree zero on every reduced irreducible
component of $D$, and by Serre duality $\chi (\omega _D) = - \chi (\scrO_D) =
\frac12(K_X+D)\cdot D=0$. As
$h^1(\omega _D) = h^0(\scrO_D) = 1$,
$h^0(\omega _D) = 1$ as well. As $\omega _D$ has degree zero on every component
of $D$, if $s$ is a section of $\omega _D$, then the restriction of $s$ to
every component $D_i$ of $D$ is either identically zero or nowhere vanishing.
Thus if $s$ is nonzero, since $D$ is connected, $s$ must be nowhere vanishing.
It follows that the map
$\scrO_D\to
\omega _D$ is surjective and is thus an isomorphism.
If $D\neq E$, we apply the argument that showed above that $(K_X+D)\cdot D_i
\geq 0$ to each connected component $E_0$ of the divisor $E$, with $D'=D-E_0$,
to see that there is a section of
$\scrO_{E_0}(K_X+D)$. Since $\scrO_{E_0}(K_X+D)$ has degree zero on each
irreducible component of
$E_0$, the argument that worked for the case $D=E$ also works in this case.
Now let us show that, provided $q(X)=0$, a nowhere zero section of
$\scrO_E(K_X+D) =
\scrO_E$ lifts to a section of $K_X+D$. It suffices to show that, for every
connected component $E_0$ of $E$, a nowhere vanishing section of
$\scrO_{E_0}(K_X+D)$ lifts to a section of $K_X+D$. Let $D' = D-E_0$.
If $D'=0$ then $D=E=E_0$ and we ask if the map $H^0(\scrO_X(K_X+D)) \to
H^0(\scrO_D(K_X+D))$ is surjective. The cokernel of this map lies in
$H^1(K_X) = 0$ since $X$ is regular. Otherwise $D' \neq 0$. Beginning with the
exact sequence
$$0 \to \scrO_{D'}(-E_0)\to \scrO_D \to \scrO_{E_0} \to 0,$$
and tensoring with $\scrO_X(K_X+D)$, we obtain the exact sequence
$$0 \to \scrO_{D'}(K_X+D-E_0)\to \scrO_D(K_X+D) \to \scrO_{E_0} \to 0.$$
Now $\scrO_{D'}(K_X+D-E_0) = \scrO_{D'}(K_X+D')= \omega _{D'}$ and by duality
$h^0(\omega _{D'}) = h^1(\scrO_{D'}) = 0$. Thus $H^0(\scrO_D(K_X+D))$ injects
into $H^0(\scrO_{E_0}) = \Cee$ and so it suffices to prove that
$H^0(\scrO_D(K_X+D)) \neq 0$, in which case it has dimension one. On the other
hand, using the exact sequence
$$0 \to \scrO_X(K_X) \to \scrO_X(K_X+D) \to \scrO_D(K_X+D) \to 0,$$
we see that $h^0(\scrO_D(K_X+D)) \geq h^0(\scrO_X(K_X+D))-p_g(X)$. Since
$h^2(K_X+D) = h^0(-D)=0$, the Riemann-Roch theorem implies that
$$h^0(\scrO_X(K_X+D)) = h^1(\scrO_X(K_X+D)) + \frac12(K_X+D)\cdot D + 1 +
p_g(X).$$
Since $h^1(\scrO_X(K_X+D))$ and $\frac12(K_X+D)\cdot D$ are nonnegative, we see that indeed
$h^0(\scrO_X(K_X+D))-p_g(X)\geq 1$, and that $h^0(\scrO_X(K_X+D))-p_g(X)=1$ if
and only if $h^1(\scrO_X(K_X+D)) = 0$ and $(K_X+D)\cdot D_i = 0$ for every
component $D_i$ contained in the support of $D$. This last condition says
exactly that $E= \operatorname{Supp}D$, and thus, as $D$ is connected,
that $E_0=E$. We claim that in this last case
$D$ is minimally elliptic. Indeed, for every effective divisor $D'$ with
$0<D'<D$, we have
$$p_a(D') = 1 - h^0(\scrO_{D'}) + h^1(\scrO_{D'})= 1 - h^0(\scrO_{D'})\leq 0.$$
Thus $D$ is the fundamental cycle for the resolution of a minimally elliptic
singularity.
Finally we prove (iii). The irreducible curves $C$ such that $(K_X+D)\cdot C=0$
are the components $D_i$ of the support of $D$ such that $(K_X+D)\cdot D_i =
0$,
as well as smooth rational curves of self-intersection
$-2$ disjoint from $\operatorname{Supp}D$. These last contribute
rational double points, so that we need only study the $D_i$ such that
$(K_X+D)\cdot D_i = 0$. We have seen in (ii) that either there are no such
$D_i$, or every $D_i$ in the support of $D$ satisfies $(K_X+D)\cdot D_i = 0$
and the contraction of $D$ is a minimally elliptic singularity.
Let $\bar X$ be the normal surface obtained by contracting all the
irreducible curves $C$ on
$X$ such that $(K_X+D)\cdot C=0$. The line bundle $\scrO_X(K_X+D)$ is trivial
in a neighborhood of these curves, either because they correspond to a rational
singularity or because we are in the minimally elliptic case and by (ii). So
$\scrO_X(K_X+D)$ induces a line bundle on $\bar X$ which is ample, by the
Nakai-Moishezon criterion. Thus $|k(K_X+D)|$ is base point free for all
$k\gg0$.
\endproof
\ssection{3.2. Completion of the proof}
We now prove Theorem 1.5:
\theorem{1.5} Let $X$ be a minimal simply connected algebraic surface of
general type, and let
$E\in H^2(X; \Zee)$ be a $(1,1)$-class satisfying $E^2=-1$, $E\cdot K_X = 1$.
Let $w$ be the \rom{mod} $2$ reduction of $[K_X]$. Then there exist:
\roster
\item"{(i)}" an integer $p$
and \rom(in case $p_g(X)=0$\rom) a chamber $\Cal C$ of type $(w,p)$ and
\item"{(ii)}" a $(1,1)$-class $M\in H^2(X; \Zee)$
\endroster
such that
$M\cdot E=0$ and $\gamma _{w,p}(X)(M^d) \neq 0$ \rom(or, in case $p_g(X)=0$,
$\gamma _{w,p}(X; \Cal C)(M^d) \neq 0$\rom).
\endstatement
\noindent {\it Proof.} We begin with the following lemma:
\lemma{3.2} With $X$ and $E$ as above, there exists an orientation preserving
diffeomorphism
$\psi \: X \to X$ such that
$\psi ^*[K_X] = [K_X]$ and such that $\psi ^*E\cdot [C] \geq 0$ for every
smooth rational curve $C$ on $X$ with $C^2 = -2$.
\endstatement
\proof Let $\Delta = \{[C_1], \dots, [C_k]\}$ be the set of smooth
rational curves on $X$ of self-intersection $-2$, and let $r_i\: H^2(X; \Zee)
\to H^2(X; \Zee)$ be the reflection about the class $[C_i]$, given explicitly
by $r_i(x) = x + (x\cdot [C_i])[C_i]$ since $[C_i]^2 = -2$. Then $r_i$ is
realized by an orientation-preserving self-diffeomorphism of $X$, $r_i^*[K_X] =
[K_X]$, and $r_i$ preserves the image of $\operatorname{Pic}X$ inside $H^2(X;
\Zee)$. Let $\Gamma$ be the finite group generated by the $r_i$. Since
the classes $[C_i]$ are linearly independent, the set
$$\{\, x\in H^2(X; \Ar)\: x\cdot [C_i] \geq 0\,\}$$
has a nonempty interior. Moreover, if $\Delta ' = \Gamma \cdot \Delta$, and we
set $W^\delta = \delta ^\perp$ for $\delta \in \Delta'$, then the connected
components of the set $H^2(X; \Ar) - \bigcup _{\delta \in \Delta '}W^\delta$
are the fundamental domains for the action of $\Gamma$ on $H^2(X; \Ar)$.
Clearly at least one of these connected components lies inside $\{\, x\in
H^2(X; \Ar)\: x\cdot [C_i] \geq 0\,\}$. Thus given $E$ (or indeed an arbitrary
element of $H^2(X; \Ar)$), there exists a $\gamma \in \Gamma$ such that $\gamma
(E)\cdot [C_i] \geq 0$ for all $i$. As every $\gamma \in \Gamma$ is realized by
an orientation preserving self-diffeomorphism $\psi$, this concludes the
proof of (3.2).
\endproof
Thus, to prove Theorem 1.5, it is sufficient by the naturality of the
Donaldson polynomials to prove it for every class
$E$ satisfying $E^2=-1$, $E\cdot K_X = 1$, and $E\cdot [C] \geq 0$ for every
smooth rational curve $C$ on $X$ with $C^2 = -2$. We therefore make this
assumption in what follows.
Given Theorem 1.4, it therefore suffices to find a nef and big divisor $M$
orthogonal to $E$, which is eventually base point free, such that the
contraction morphism defined by $|kM|$ has an image with at worst rational and
minimally elliptic singularities (note that, since $X$ is assumed minimal, no
exceptional curves can be contracted). Thus we will be done by the following
lemma:
\lemma{3.3} There exists a nef and big divisor
$M$ which is eventually base point free and such that
\roster
\item $M\cdot E = 0$.
\item The contraction $\bar X$ of $X$ defined by $|kM|$ for all $k\gg 0$ has
only rational and minimally elliptic singularities.
\endroster
\endstatement
\proof To find $M$ we proceed as follows: consider the
divisor
$K_X + E=M$. As
$K_X\cdot E=1$ and $E^2 = -1$, $M$ is orthogonal to $E$. Moreover
$M^2=(K_X+E)^2 = K_X^2 + 1 > 0$. We now consider separately the cases where $M$
is nef and where $M$ is not nef.
\medskip
\noindent {\bf Case I:} $M= K_X+E$ is nef.
Consider the union of all the curves $D$ such that $M \cdot D = 0$. The
intersection matrix of the $D$ is negative definite, and so we can contract
all the $D$ on $X$ to obtain a normal surface $X'$. If $X'$ has only rational
singularities, then $M$ induces an ample divisor on $X'$ and so $M$ itself is
eventually base point free. In this case we are done. Otherwise we may apply
Theorem 3.1 to find a subset $D_1, \dots, D_t$ of the curves $D$ with $M\cdot
D=0$ and positive integers
$a_i$ such that the divisor
$K_X+\sum _ia_iD_i$ is nef, big, and eventually base point free, and such that
the contraction $\bar X$ of
$X$ has only rational and minimally elliptic singularities, with exactly one
nonrational singularity. Note that
$D_i\cdot E = -D_i \cdot K_X \leq 0$, and
$D_i \cdot E = 0$ if and only if $D_i \cdot K_X = 0$, or in other words if and
only if $D_i$ is a smooth rational curve of self-intersection $-2$. Setting $e
= -\sum _ia_i(D_i \cdot E)$, we have $e \geq 0$, and $e=0$ if and only if
$D_i
\cdot E =0$ for all $i$. But as $\bar X$ has a nonrational singularity, we
cannot have $D_i
\cdot E =0$ for all $i$, for then all singularities would be rational double
points. Thus $e>0$. Now the $\Bbb Q$-divisor $M'= K_X + \frac1{e}\sum _ia_iD_i$
is a rational convex combination of $K_X$ and $K_X+\sum _ia_iD_i$, and $M'
\cdot E=0$. Moreover either $M'$ is a strict convex combination of $K_X$ and
$K_X+\sum _ia_iD_i$ (if $e>1$) or $M'=K_X+\sum _ia_iD_i$ (if $e=1$). In the
second case,
$M'$ satisfies (1) of Lemma 3.3, and it is eventually base point free by (iii)
of Theorem 3.1. Thus $M'$ satisfies the conclusions of Lemma 3.3. In the first
case,
$M'$ is nef and big, and the only curves $C$ such that $M' \cdot C=0$ are
curves $C$ such that $K_X
\cdot C = 0$ and $(K_X+\sum _ia_iD_i )\cdot C=0$. The set of all such
curves must therefore be a subset of the set of all smooth rational curves on
$X$ with self-intersection $-2$. Hence, if $X''$ denotes the contraction of all
the curves $C$ on $X$ such that $M'\cdot C=0$, then $X''$ has only rational
singularities and $M'$ induces an ample $\Bbb Q$-divisor on $X''$. Once again
some multiple of
$M'$ is eventually base point free and (1) and (2) of Lemma 3.3 are satisfied.
Thus we have proved the lemma in case $K_X+E$ is nef.
\medskip
\noindent {\bf Case II:} $M= K_X+E$ is not nef.
Let $D$ be an irreducible curve with $M\cdot D<0$. We claim first that in this
case $D^2<0$. Indeed, suppose that $D^2 \geq 0$. As $\operatorname{Pic}X\otimes
_\Zee \Ar$ has signature $(1, \rho -1)$, the set
$$\Cal Q = \{\, x\in \operatorname{Pic}X\otimes _\Zee \Ar\: x^2 \geq 0, x\neq
0\,\}$$ has two connected components, and two classes $x$ and $x'$ are in the
same connected component of $\Cal Q$ if and only if $x\cdot x' \geq 0$ (cf\.
[13, p\. 320, Lemma 1.1]). Now $(K_X+E)\cdot K_X = (K_X+E)^2 = K_X^2 +1 >0$, so
that
$K_X+E$ and
$K_X$ lie in the same connected component of $\Cal Q$. Likewise, if $D^2 \geq
0$, then since $K_X\cdot D \geq 0$, $K_X$ and $D$ lie in the same connected
component of $\Cal Q$. Thus $D$ and $K_X+E$ lie in the same connected component
of $\Cal Q$, so that $(K_X+E)\cdot D \geq 0$. Hence, if $M\cdot D<0$, then
$D^2<0$.
Fix an irreducible curve $D$ with $M\cdot D<0$, and let $d = -E\cdot D
>K_X\cdot D \geq 0$. Recall that by assumption $E\cdot D \geq 0$ if $D$ is a
smooth rational curve of self-intersection $-2$. If
$p_a(D) \geq 1$, then set $M' = K_X+ \frac1{d}D$. Then $M'\cdot E = 0$ by
construction. Moreover we claim that $M'$ is nef and big. Indeed
$$(M')^2 = (K_X+ \frac1{d}D)^2 = K_X\cdot (K_X+ \frac1{d}D) + \frac1{d}(K_X+
\frac1{d}D)\cdot D.$$
Thus $M'$ is big if it is nef and to see that
$M'$ is nef it suffices to show that $M'\cdot D \geq 0$. But
$$M'\cdot D = K_X\cdot D + \frac1{d}D^2 = 2p_a(D) - 2 - \left(1-
\frac1{d}\right)D^2.$$
As $D^2 < 0$, we see that $M'\cdot D\geq 0$, and $M'\cdot D=0$ if and only if
$p_a(D) = 1$ and $d=1$. Suppose that $p_a(D) = 1$ and $M'\cdot D=0$.
Using the exact sequence
$$0 \to \scrO_X(K_X) \to \scrO_X(K_X+D)\to \omega _D \to 0,$$
and arguments as in the proof of Theorem 3.1, we see that the linear system
$M'$ is eventually base point free and that the associated contraction has just
rational double points and a minimally elliptic singular point which is the
image of $D$. In all other cases,
$M'\cdot D >0$, so that the curves orthogonal to $M'$ are smooth rational
curves of self-intersection
$-2$. Again, some positive multiple of $M'$ is eventually base point free and
the contraction has just rational singularities.
Thus we may assume that $p_a(D) = 0$ for every irreducible curve $D$ such that
$M\cdot D <0$. By assumption $D^2 \neq -1, -2$, so that $D^2 \leq -3$. Thus $d=
-D\cdot E \geq 2$. If either $D^2 \leq -4$ or $D^2 = -3$ and $d \geq 3$, then
again let $M' = K_X+ \frac1{d}D$. Thus $M'\cdot E =0$ and
$$M'\cdot D = K_X\cdot D + \frac1{d}D^2 = - 2 - \left(1-
\frac1{d}\right)D^2\geq 0.$$
Thus $M'$ is nef and big, and some multiple of $M'$ is eventually base point
free, and the associated contraction has just rational singularities. The
remaining case is where there is a smooth rational curve $D$ on $X$ with
self-intersection
$-3$ and such that
$-D\cdot E = 2$. In this case $K_X \cdot D = 1$, and so $D-E$ is orthogonal to
$K_X$. Note that $D-E$ is not numerically trivial since $D$ is not numerically
equivalent to $E$. Thus, by the Hodge index theorem
$(D-E)^2 <0$. But
$$(D-E)^2 = -3 + 4 -1 =0,$$
a contradiction. Thus this last case does not arise.
\endproof
\section{Appendix: On the canonical class of a rational surface}
Let $\Lambda _n$ be a lattice of type $(1,n)$, i.e. a free $\Bbb Z$-module of
rank $n+1$, together with a quadratic form $q\: \Lambda _n\to \Bbb Z$, such
that there exists an orthogonal basis $\{ e_0, e_1, \dots , e_n\}$ of $\Lambda
_n$ with
$q(e_0) = 1$ and $q(e_i) = -1$ for all $i>0$. Fix once and for all such a
basis. We shall always view $\Lambda _n$ as included in $\Lambda _{n+1}$ in
the obvious way. Let $$\kappa _n = 3e_0 - \sum _{i=1}^ne_i.$$ Then $q(\kappa
_n) = 9-n$ and
$\kappa _n$ is characteristic, i.e. $\kappa _n\cdot\alpha \equiv q(\alpha)
\mod 2$ for all $\alpha \in \Lambda _n$.
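Indeed, as the basis is orthogonal, $q(\kappa _n) = 9q(e_0) + \sum
_{i=1}^nq(e_i) = 9-n$. For the second assertion, note that $\kappa _n\cdot e_0
= 3\equiv q(e_0) \mod 2$ and $\kappa _n\cdot e_i = 1\equiv q(e_i) \mod 2$ for
$i>0$; since $q(\alpha +\beta ) = q(\alpha )+q(\beta )+2\,\alpha \cdot \beta$,
both sides of the congruence are additive mod $2$, and so it suffices to check
the congruence on the basis vectors.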
The goal of this appendix is to give a proof, due to the first author, R\.
Miranda, and J\.W\. Morgan, of the following:
\theorem{A.1} Suppose that $n\leq 8$ and that $\kappa\in \Lambda _n$ is a
characteristic vector satisfying $q(\kappa ) = 9-n$. Then there exists an
automorphism $\varphi$ of $\Lambda _n$ such that $\varphi (\kappa) = \kappa
_n$.
A similar statement holds for $n=9$ provided that $\kappa$ is primitive.
\endproclaim
\demo{Proof} We shall freely use the notation and results of Chapter II of
[13] and shall quote the results there by number. For the purposes of the
appendix, chamber shall mean a chamber in $\{\,x\in \Lambda _n \otimes \Bbb R
\mid x^2 =1\,\}$ for the set of walls defined by the set $\{\, \alpha \in
\Lambda _n \mid \alpha ^2 = -1\,\}$. Let
$C_n$ be the chamber associated to $\kappa _n$ [13, p\. 329, 2.7(a)]: the
oriented walls of $C_n$ are exactly the set
$$\{\,\alpha\in \Lambda _n\mid q(\alpha ) = -1,\alpha\cdot \kappa _n =1\,\}.$$
Then
$\kappa _n$ lies in the interior of
$\Bbb R^+\cdot C_n$, by [13, p\. 329, 2.7(a)]. Similarly $\kappa$ lies in the
interior of a set of the form $\Bbb R^+\cdot C$ for some chamber $C$, since
$\kappa$ is not orthogonal to any wall (because it is characteristic) and
$q(\kappa )> 0$. But the automorphism group of $\Lambda _n$ acts transitively
on the chambers, by [13, p\. 324]. Hence we may assume that $\kappa \in C_n$. In
this case we shall prove that $\kappa =
\kappa _n$. We shall refer to $C_n$ as the {\sl fundamental chamber} of
$\Lambda _n\otimes _{\Bbb Z}\Bbb R$. Let us record two lemmas about $C_n$.
\lemma{A.2} An automorphism $\varphi$ of $\Lambda _n$ fixes $C_n$ if and
only if it fixes $\kappa _n$.
\endproclaim
\demo{Proof} The oriented walls of $C_n$ are precisely
the
$\alpha \in \Lambda _n$ such that $q(\alpha) = -1$ and $\kappa _n\cdot \alpha
=1$. Thus, an automorphism fixing $\kappa _n$ fixes $C_n$. The converse follows
from [13, p\. 335, 4.4].
\endproof
\lemma{A.3} Let $\alpha = \sum _i\alpha _ie_i$ be an oriented wall of
$C_n$, where $e_0, \dots , e_n$ is the standard basis of $\Lambda _n$. After
reordering the elements $e_1, \dots, e_n$, let us assume that
$$|\alpha _1| \geq |\alpha _2| \geq \dots \geq |\alpha _n|.$$
Then for $n\leq 8$,
the possibilities for $(\alpha _0, \dots , \alpha _n)$ are as follows
\rom(where we omit the $\alpha _i$ which are zero\rom):
\roster
\item $\alpha _0 = 0, \alpha _1 = 1$;
\item $\alpha _0 =1, \alpha _1 = \alpha _2 = -1$ $(n\geq 2)$;
\item $\alpha _0 = 2, \alpha _1 = \alpha _2 = \alpha _3 = \alpha _4 = \alpha
_5 =-1$ $(n \geq 5)$;
\item $\alpha _0 = 3, \alpha _1 = -2, \alpha _2 = \alpha _3 = \alpha _4 =
\alpha _5 =\alpha _6 = \alpha _7=-1$ $(n \geq 7)$;
\item $\alpha _0 = 4, \alpha _1 = \alpha _2 = \alpha _3 = -2, \alpha _4 =
\dots = \alpha _8 = -1$ $(n=8)$;
\item $\alpha _0 = 5, \alpha _1 = \dots = \alpha _6 = -2, \alpha _7 =
\alpha _8 = -1$ $(n=8)$;
\item $\alpha _0 = 6, \alpha _1 = -3, \alpha _2 = \dots = \alpha _8 = -2$
$(n=8)$.
\endroster
\endproclaim
\demo{Proof} This statement is extremely well-known as the characterization of
the lines on a del Pezzo surface (see [7], Table 3). We can
give a proof as follows. It clearly suffices to prove the result for $n=8$. But
for $n=8$, there is a bijection between the $\alpha$ defining an oriented wall
of $C_8$ and the elements $\gamma \in \kappa _8^{\perp}$ with $q(\gamma )=-2$.
This bijection is given as follows: $\alpha$ defines an oriented wall
of $C_8$ if and only if $q(\alpha ) = -1$ and $\kappa _8\cdot \alpha = 1$.
Map $\alpha$ to $\alpha - \kappa _8 = \gamma$. Thus, as $q(\kappa _8 ) = 1$,
$q(\gamma ) = -2$ and $\gamma \cdot \kappa _8 = 0$. Conversely, if $\gamma \in
\kappa _8^{\perp}$ satisfies $q(\gamma )=-2$, then $\gamma + \kappa _8$ defines
an oriented wall of $C_8$.
Now the number of $\alpha$ listed above, after we are allowed to reorder the
$e_i$, is easily seen to be
$$8+\binom 82 +\binom 85 +8\cdot 7 +\binom 83 +\binom 82 + 8 = 240.$$
Since this is exactly the number of vectors of square $-2$ in $-E_8$, by e\.g\.
[36], we must have enumerated all the possible $\alpha$.
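Explicitly, the seven terms count, in order, the walls of types (1)--(7): $8$
choices of the index in (1); $\binom 82$ choices of the pair of indices in (2);
$\binom 85$ choices of the five indices in (3); $8\cdot 7$ choices in (4) of
the index with coefficient $-2$ and then of the omitted index; $\binom 83$
choices in (5) of the three indices with coefficient $-2$; $\binom 82$ choices
in (6) of the two indices with coefficient $-1$; and $8$ choices in (7) of the
index with coefficient $-3$.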
\endproof
Write $\kappa = \sum _{i=0}^na_ie_i$, where $e_i$ is the standard basis of
$\Lambda _n$ given above. Since $\kappa \cdot e_i>0$ for each $i\geq 1$ (every
such $e_i$ is a wall of $C_n$), $a_i <0$ for all $i\geq 1$. After
reordering the elements $e_1, \dots, e_n$, we may assume that
$$|a_1|\geq |a_2| \geq \dots \geq |a_n|.$$
By inspecting the cases in Lemma
A.3, for every
$\alpha = \sum _i\alpha
_ie_i$ not of the form $e_i$,
$\alpha _i \leq 0$ for all $i\geq 1$. Given $\alpha = \sum _i\alpha
_ie_i$ with $\alpha
\neq e_i$ for any
$i$, let us call $\alpha$ {\sl
well-ordered} if
$$|\alpha_1|\geq |\alpha_2| \geq \dots \geq |\alpha_n|.$$
Quite generally, given $\alpha = \alpha _0e_0 + \sum _{i>0}\alpha
_ie_i$, we define the {\sl reordering} $r(\alpha)$ of $\alpha$ to be
$$r(\alpha ) = \alpha _0e_0 + \sum
_{i>0}\alpha _{\sigma (i)}e_i,$$
where $\sigma $ is a permutation of $\{ 1,
\dots , n\}$ such that $r(\alpha)$ is well-ordered. Clearly $r(\alpha)$ is
independent of the choice of $\sigma$.
We then have the following:
\claim{A.4} $\kappa \in C_n$ if and only if $\kappa \cdot \alpha >0$ for every
well-ordered wall $\alpha$.
\endstatement
\proof Clearly if $\kappa \in C_n$, then $\kappa \cdot \alpha >0$ for
every $\alpha$, well-ordered or not. Conversely, suppose that $\kappa \cdot
\alpha >0$ for every well-ordered wall $\alpha$.
We claim that
$$\alpha \cdot \kappa \geq r(\alpha )\cdot \kappa,\tag{$*$}$$
which clearly implies (A.4) since $r(\alpha)$ is well-ordered.
Now
$$\alpha \cdot \kappa = \alpha _0 a_0 - \sum _{i>0}\alpha _i a_i.$$
Since $\alpha _i \leq 0$ and $a_i <0$ for $i>0$,
($*$) is easily reduced to the following statement about positive real numbers:
if $c_1\geq \dots \geq c_n$ is a sequence of positive real numbers and $d_1,
\dots, d_n$ is any sequence of positive real numbers, then a permutation
$\sigma$ of $\{1,
\dots , n\}$ is such that $\sum _ic_id_{\sigma (i)}$ is maximal exactly when
$d_{\sigma (1)} \geq \dots \geq d_{\sigma (n)}$. We leave the proof of this
elementary fact to the reader.
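(Hint: if $i<j$ but $d_{\sigma (i)}<d_{\sigma (j)}$, then interchanging the
values $\sigma (i)$ and $\sigma (j)$ changes the sum by
$(c_i-c_j)(d_{\sigma (j)}-d_{\sigma (i)})\geq 0$, so finitely many such swaps
lead to an ordered permutation whose sum is at least as large.)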
\endproof
Next, we claim the following:
\lemma{A.5} View $\Lambda _n\subset \Lambda _{n+1}$.
Defining $\kappa _{n+1}$ and $C_{n+1}$ in the natural way for $\Lambda _{n+1}$,
suppose that $\kappa \in C_n$. Then $\kappa ' =\kappa - e_{n+1} \in C_{n+1}$.
\endproclaim
\demo{Proof} We have ordered our basis $\{e_0, \dots, e_n\}$ so that
$$|a_1|\geq |a_2| \geq \dots \geq |a_n|.$$
Since $a_i <0$ for all $i$, $|a_n|\geq 1$. Thus the coefficients of $\kappa '$
are also so ordered. Note also that all coefficients of $\kappa
'$ other than that of $e_0$ are less than zero, so that the inequalities from
(1) of Lemma A.3 are
automatic. Given any other wall $\alpha '$ of $C_{n+1}$, to verify that $\kappa
'\cdot \alpha '> 0$, it suffices to look at $\kappa '\cdot r(\alpha ')$, where
$r(\alpha ')$ is the reordering of $\alpha '$. Expressing $\alpha '$ as a
linear combination of the standard basis vectors, if some coefficient is zero,
then $r(\alpha ') \in \Lambda _n$. Clearly, in this case, viewing $r(\alpha ')$
as an element of $\Lambda _n$, it is a wall of $C_n$. Since then $\kappa '
\cdot r(\alpha ') = \kappa \cdot r(\alpha ')$, we have $\kappa ' \cdot r(\alpha
') >0$ in this case.
In the remaining case, $r(\alpha ')$ does not lie in $\Lambda _n$. This can
only happen for $n=1,4,6,7$, with $\alpha '$ one of the new types of walls
corresponding
to the cases (2)--(7) of Lemma A.3. Thus, the only thing we need to check is
that, every time we introduce a new type of wall, we still get the inequalities
as needed. Since $r(\alpha ')$ is well-ordered, we can assume that it is in
fact one of the walls listed in Lemma A.3.
The $n=1$ case simply says that $a_0 > -a_1+1$. However, we can easily solve
the equations $a_0^2 - a_1^2 = 8$, $a_1<0$: since $(a_0-a_1)(a_0+a_1) = 8$ and
$a_0>0$, the only solution is $a_0 = 3$, $a_1 = -1$. Since $3> 1+1$, we are
done in this case.
Next assume
that $n =4$. We have $\kappa = a_0e_0 +
\sum _{i=1}^4a_ie_i$. We must show that $2a_0> -\sum _{i=1}^4a_i + 1$.
We know that
$a_0>-a_1-a_2$, hence that
$a_0 \geq -a_1-a_2 +1$. Moreover
$a_0\geq -a_3-a_4 +1$ since $|a_1|\geq |a_2| \geq |a_3| \geq |a_4|$.
Adding gives
$2a_0 \geq -\sum _{i=1}^4a_i + 2 = (-\sum _{i=1}^4a_i+ 1) +1$
and therefore
$2a_0> -\sum _{i=1}^4a_i + 1$.
The case where $n=6$ is similar: we must show that
$3a_0 > -2a_1 -\sum _{i=2}^6 a_i +1$.
But we know that
$2a_0 \geq -\sum _{i=1}^5a_i +1$
and that
$a_0 \geq -a_1 -a_6 +1$.
Adding gives the desired inequality.
For $n=7$, we have three new inequalities to check. The inequality
$$4a_0> -2a_1 - 2a_2 - 2a_3 - \sum _{i=4}^7a_i +1$$
follows by adding the inequalities
$3a_0 > -2a_1 -\sum _{i=2}^7 a_i$
and $a_0> -a_2 - a_3$.
The inequality
$$5a_0 > -2\sum _{i=1}^6a_i -a_7 +1$$
follows from adding
the inequalities
$3a_0 > -2a_1 - \sum _{i=2}^7a_i$
and $2a_0 > -\sum _{i=2}^6a_i$, which follows from $2a_0 \geq -\sum _{i=1}^5a_i +1$ and the ordering of the $|a_i|$.
Likewise, the last inequality
$$6a_0 > -3a_1 - 2\sum _{i=2}^7a_i+2$$
follows by adding up the three inequalities
$3a_0 > -2a_1 -\sum _{i=2}^7 a_i$,
$2a_0 > -\sum _{i=1}^5a_i$,
and
$a_0 > -a_6-a_7$.
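Indeed, since the $a_i$ are integers, each of the three strict inequalities may be sharpened by adding $1$ to its right-hand side, and summing the sharpened inequalities gives
$$6a_0 \geq -3a_1 - 2\sum _{i=2}^7a_i + 3 > -3a_1 - 2\sum _{i=2}^7a_i + 2.$$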
Thus we have established the
lemma.
\endproof
\noindent {\it Completion of the proof of Theorem \rom{A.1}.} Begin with
$\kappa$. Applying Lemma A.5 and induction, if $n<8$, then the vector
$\eta = \kappa -
\sum _{j=n+1}^8e_j$ lies in the fundamental chamber of $\Lambda _8$. Moreover
$\eta$ is a characteristic vector of square $1$. Thus $\eta ^{\perp} \cong
-E_8$. The same is true for $\kappa _8 = 3e_0 - \sum _{i=1}^8e_i= \kappa _n -
\sum _{j=n+1}^8e_j$. Clearly, then, there is an automorphism $\varphi$ of
$\Lambda _8$ such that
$\eta=\varphi (\kappa _8)$. But both $\eta $ and $\kappa _8$ lie in the
fundamental chamber for
$\Lambda _8$. Since the automorphism group preserves the chamber structure, the
automorphism
$\varphi$ must stabilize the fundamental chamber. By Lemma A.2, $\varphi
(\kappa _8) = \kappa _8$. Thus $\eta =
\kappa _8$. Hence
$\kappa - \sum _{j=n+1}^8e_j = \kappa _n- \sum _{j=n+1}^8e_j$.
It follows that $\kappa = \kappa _n$.
\endproof
\remark{Note} To handle the case $n=9$, we argue that every vector $\kappa\in
\Lambda _9$ which is primitive of square zero and characteristic is conjugate
to $\kappa _9$ as above. To do this, an easy argument shows that, if $\kappa$
is such a class, then there is an orthogonal splitting
$$\Lambda _9 \cong \langle \kappa , \delta\rangle \oplus (-E_8),$$
where $\delta$ is an element of $\Lambda _9$ satisfying $\delta \cdot \kappa =
1$ and $q(\delta )=1$. Thus clearly every two such $\kappa$ are conjugate.
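If, as the notation suggests, $\kappa _9 = 3e_0 - \sum _{i=1}^9e_i$, one checks directly that $\kappa _9$ is primitive and characteristic with $q(\kappa _9) = 9 - 9 = 0$, so that it is indeed a class of the type considered.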
\endremark
\enddocument
\section{Introduction} \label{S1}
The theory of \emph{integral currents} was born in the 1960's after the work of Federer and Fleming \cite{FF60} out of the desire to solve Plateau's problem. Integral currents were thus introduced to provide a mathematical framework where the existence of orientable surfaces minimizing the volume among those spanning a given contour could be rigorously proved by direct methods in any dimension and codimension.
In order to deal with non-orientable surfaces, Ziemer \cite{Zie62} introduced the notion of \emph{integral currents modulo $2$}. Further generalizations, such as \emph{integral currents modulo $p$} and \emph{flat chains modulo $p$}, were considered in order to treat a wider class of surfaces which can be realized, for instance, as soap films. An interesting property of such surfaces is that they can develop singularities in low codimension, unlike the classical solutions to Plateau's problem (see, for instance, \cite{Mor86} and \cite{WH86}).
Moreover, integral currents, flat chains and their generalizations have proved to be flexible enough to describe and tackle similar problems in more abstract settings (see, in particular, \cite{WH99}, \cite{AK00}, \cite{DPH12} and \cite{AG13}).
Despite the substantial interest in the subject, the very structure of flat chains and integral currents modulo $p$ is yet to be completely understood. The initial idea is to define flat chains modulo $p$ by identifying currents which differ by $pT$, where $T$ is a ``classical'' flat chain. This definition, however, has one major drawback: the closedness of the classes with respect to the flat norm is not guaranteed a priori. Hence, it is more convenient to define the classes of flat chains modulo $p$ as the flat closure of the equivalence classes mentioned above. The equivalence of the two definitions is still an open problem.
A second issue regards the structure of integral currents modulo $p$. They are defined as flat chains modulo $p$ with finite $p$-mass and finite $p$-mass of the boundary. It is not known whether each equivalence class contains at least one classical integral current.
In this work, we specifically address these two problems. In Section \ref{S2} we recall the basic terminology and the main results about classical flat chains and integral currents. Flat chains and integral currents modulo $p$ are introduced in Section \ref{S3}, where we also formulate two questions related to the two problems above and collect some partial answers from the literature. Finally, in Section \ref{S4} we shed light on the connection between the two questions, and we provide a positive answer to the second one in the case of $1$-dimensional currents. Moreover, we give an example illustrating how it is possible to produce situations in which the answer to the second question is negative in higher dimension.
\subsection*{Acknowledgements.} The authors would like to thank Camillo De Lellis for having posed the problem and for helpful discussions. A. M. and S. S. are supported by the ERC-grant ``Regularity of area-minimizing currents'' (306247).
\section{Classical results on flat chains} \label{S2}
In what follows we recall the basic terminology related to the theory of currents. We refer the reader to the introductory presentation given in \cite{Mor09} or to the standard textbooks \cite{Sim83}, \cite{KP08} for further details. The most complete reference remains the treatise \cite{Fed69}.
\subsection{Currents.}
An \emph{$m$-dimensional current} $T$ in $\mathbb{R}^n$ ($m \leq n$) is a continuous linear functional on the space $\mathscr{D}^{m}(\mathbb{R}^n)$ of smooth compactly supported differential $m$-forms in $\mathbb{R}^n$, endowed with a locally convex topology built in analogy with the topology on $C^{\infty}_{c}(\mathbb{R}^n)$ with respect to which distributions are dual.
The \emph{boundary} of $T$ is the $(m-1)$-dimensional current $\partial T$ defined by
\[
\langle \partial T, \omega \rangle := \langle T, d\omega \rangle
\]
for any smooth compactly supported $(m-1)$-form $\omega$. The \emph{mass} of $T$, denoted by $\mathbf{M}(T)$, is the (possibly infinite) supremum of $\langle T, \omega \rangle$ over all forms $\omega$ with $|\omega| \leq 1$ everywhere.
The \emph{support} of a current $T$, denoted $\mathrm{spt}(T)$, is the intersection of all closed sets $C$ in $\mathbb{R}^n$ such that $\langle T, \omega \rangle = 0$ whenever $\omega\equiv 0$ on $C$.
\subsection{Rectifiable currents.}
A subset $E \subset \mathbb{R}^n$ is said to be $m$-\emph{rectifiable} if $\mathcal{H}^{m}(E) < \infty$ and $E$ can be covered, except for an $\mathcal{H}^{m}$-null subset, by countably many $m$-dimensional surfaces of class $C^{1}$. If $E$ is $m$-rectifiable, then a suitable notion of $m$-dimensional \emph{approximate tangent space} to $E$ can be defined for $\mathcal{H}^{m}$-a.e. $x \in E$. Such a tangent space will be denoted ${\rm Tan}(E,x)$ and it coincides with the classical tangent space if $E$ is a (piece of a) $C^1$ $m$-surface.
Let $E$ be an $m$-rectifiable set in $\mathbb{R}^n$. An \emph{orientation} of $E$ is an $m$-vectorfield $\tau$ on $\mathbb{R}^n$ such that $\tau(x)$ is a simple $m$-vector with $|\tau(x)| = 1$ which spans ${\rm Tan}(E,x)$ at $\mathcal{H}^{m}$-a.e. point $x$. A \emph{multiplicity} on $E$ is an integer-valued function $\theta$ such that
\[
\int_{E} |\theta| \, d\mathcal{H}^{m} < \infty.
\]
For every choice of a triple $(E,\tau,\theta)$ as above, we denote by $T = \llbracket E, \tau, \theta \rrbracket$ the $m$-dimensional current whose action on a form $\omega$ is given by
\[
\langle T, \omega \rangle := \int_{E} \langle \omega(x), \tau(x) \rangle \theta(x) \, d\mathcal{H}^{m}(x).
\]
Currents of this type are called \emph{integer rectifiable $m$-currents}. The set of integer rectifiable $m$-currents in $\mathbb{R}^n$ with support in a compact $K \subset \mathbb{R}^n$ will be denoted $\mathscr{R}_{m,K}(\mathbb{R}^n)$. The symbol $\mathscr{R}_{m}(\mathbb{R}^n)$ will denote the union of $\mathscr{R}_{m,K}(\mathbb{R}^n)$ corresponding to all compact subsets $K \subset \mathbb{R}^n$. If $T = \llbracket E, \tau, \theta \rrbracket \in \mathscr{R}_{m}(\mathbb{R}^n)$, we denote by $\| T \|$ the measure given by
\[
\|T\|(A) := \int_{A \cap E} |\theta| \, d\mathcal{H}^{m} \hspace{0.5cm} \mbox{for every } A \subset \mathbb{R}^n \mbox{ Borel.}
\]
One can check that $\mathbf{M}(T) = \|T\|(\mathbb{R}^n)$ and thus, in particular, integer rectifiable currents have finite mass.\\
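As a simple illustration, if $E = \left[0,1\right] \times \{0\} \subset \mathbb{R}^2$, $\tau = e_1$ and $\theta \equiv 1$, then $T = \llbracket E, \tau, 1 \rrbracket$ acts on a $1$-form $\omega = \omega_1 \, dx_1 + \omega_2 \, dx_2$ by
\[
\langle T, \omega \rangle = \int_{0}^{1} \omega_{1}(t,0) \, dt,
\]
and $\mathbf{M}(T) = \|T\|(\mathbb{R}^2) = 1$.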
With the symbol $\mathscr{I}_{m,K}(\mathbb{R}^n)$ we denote the set of \emph{integral $m$-currents} supported in $K$, that is currents in $\mathscr{R}_{m,K}(\mathbb{R}^n)$ with integer rectifiable boundary. The meaning of the symbol $\mathscr{I}_{m}(\mathbb{R}^n)$ is understood. We recall the following fundamental
\begin{theorem}[Boundary Rectifiability, cf. {\cite[Theorem 4.2.16]{Fed69}}] \label{b_rect:thm}
\begin{equation} \label{b_rect}
\mathscr{I}_{m,K}(\mathbb{R}^n) = \lbrace T \in \mathscr{R}_{m,K}(\mathbb{R}^n) \, \colon \, \mathbf{M}(\partial T) < \infty \rbrace.
\end{equation}
\end{theorem}
\subsection{Flat chains.} The set of (integral) \emph{flat $m$-chains} in $\mathbb{R}^n$ with support in a compact $K \subset \mathbb{R}^n$ is denoted by $\mathscr{F}_{m,K}(\mathbb{R}^n)$ and defined by
\begin{equation} \label{chains}
\mathscr{F}_{m,K}(\mathbb{R}^n) := \lbrace T = R + \partial S \, \colon \, R \in \mathscr{R}_{m,K}(\mathbb{R}^n), S \in \mathscr{R}_{m+1,K}(\mathbb{R}^n) \rbrace.
\end{equation}
We also define the set $\mathscr{F}_{m}(\mathbb{R}^n)$ as the \emph{union} of the sets $\mathscr{F}_{m,K}(\mathbb{R}^n)$ corresponding to all compact subsets $K \subset \mathbb{R}^n$.
For any $T \in \mathscr{F}_{m,K}(\mathbb{R}^n)$, we define the \emph{flat norm}
\begin{equation} \label{flat}
\mathbf{F}_{K}(T) := \inf\lbrace \mathbf{M}(R) + \mathbf{M}(S) \, \colon \, R \in \mathscr{R}_{m,K}(\mathbb{R}^n), S \in \mathscr{R}_{m+1,K}(\mathbb{R}^n) \mbox{ and } T = R + \partial S \rbrace.
\end{equation}
It turns out that $\mathbf{F}_{K}$ induces a complete metric $d_{\mathbf{F}_K}$ on $\mathscr{F}_{m,K}(\mathbb{R}^n)$ setting
\[
d_{\mathbf{F}_K}(T_{1}, T_{2}) := \mathbf{F}_{K}(T_{1} - T_{2}).
\]
Moreover, the mass $\mathbf{M}$ is lower semi-continuous with respect to the convergence in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ induced by $d_{\mathbf{F}_{K}}$.
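To illustrate the difference between flat and mass convergence, let $T_{\varepsilon}$ be the current associated with the segment $\left[0,1\right] \times \{\varepsilon\} \subset \mathbb{R}^2$ oriented by $e_1$, and let $K = \left[0,1\right]^2$. Writing $T_{\varepsilon} - T_{0} = R + \partial S$, with $S$ the (suitably oriented) rectangle $\llbracket \left[0,1\right] \times \left[0,\varepsilon\right] \rrbracket$ and $R$ the sum of the two (suitably oriented) vertical sides, we obtain
\[
\mathbf{F}_{K}(T_{\varepsilon} - T_{0}) \leq \mathbf{M}(S) + \mathbf{M}(R) = \varepsilon + 2\varepsilon \to 0 \quad \mbox{as } \varepsilon \to 0,
\]
while $\mathbf{M}(T_{\varepsilon} - T_{0}) = 2$ for every $0 < \varepsilon \leq 1$.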
For every $K$, the class $\mathscr{I}_{m,K}(\mathbb{R}^n)$ is dense in $\mathscr{R}_{m,K}(\mathbb{R}^n)$ in mass, and consequently $\mathscr{I}_{m,K}(\mathbb{R}^n)$ is dense in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ in flat norm. In fact, the same result holds more generally whenever the ambient space is a closed convex subset $E$ of a Banach space, if working with the general theory of currents in metric spaces introduced by Ambrosio and Kirchheim in \cite{AK00} (cf. \cite[Proposition 14.7]{AK11}). In particular, given $T \in \mathscr{F}_{m,K}(\mathbb{R}^n)$ there exist sequences $\{ T_{j} \} \subset \mathscr{I}_{m,K}(\mathbb{R}^n)$, $\{ R_j \} \subset \mathscr{R}_{m,K}(\mathbb{R}^n)$, $\{ S_j \} \subset \mathscr{R}_{m+1,K}(\mathbb{R}^n)$ such that
\[
T = T_{j} + R_{j} + \partial S_{j}
\]
and
\[
\mathbf{M}(R_j) + \mathbf{M}(S_j) \to 0.
\]
If $T$ has finite mass, then the $\partial S_j$'s have finite mass too, and thus $S_j \in \mathscr{I}_{m+1,K}(\mathbb{R}^n)$. Therefore, the currents $T_{j} + \partial S_{j} \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ approximate $T$ in mass, and this suffices to conclude that $T \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ (cf. \cite[Lemma 27.5]{Sim83}). We have shown the following result:
\begin{theorem}[Rectifiability of flat chains with finite mass, cf. {\cite[Theorem 4.2.16]{Fed69}}] \label{mass_chain:thm}
\begin{equation} \label{mass_chain}
\mathscr{R}_{m,K}(\mathbb{R}^n) = \lbrace T \in \mathscr{F}_{m,K}(\mathbb{R}^n) \, \colon \, \mathbf{M}(T) < \infty \rbrace.
\end{equation}
\end{theorem}
We finally recall the following remarkable
\begin{theorem}[Compactness Theorem, cf. {\cite[Theorem 4.2.17]{Fed69}}] \label{compactness:thm}
Let $K \subset \mathbb{R}^n$ be a compact set and $\{T_{j}\}_{j=1}^{\infty} \subset \mathscr{I}_{m,K}(\mathbb{R}^n)$ a sequence of integral currents such that
\begin{equation} \label{compactness:hp}
\sup_{j \geq 1} \{ \mathbf{M}(T_{j}) + \mathbf{M}(\partial T_{j}) \} < \infty.
\end{equation}
Then, there exist $T \in \mathscr{I}_{m,K}(\mathbb{R}^n)$ and a subsequence $\{T_{j_{\ell}}\}$ such that
\begin{equation} \label{compactness:th}
\lim_{\ell \to \infty} \mathbf{F}_{K}(T - T_{j_{\ell}}) = 0.
\end{equation}
\end{theorem}
\subsection{Polyhedral chains.} Given an $m$-dimensional simplex $\sigma$ in $\mathbb{R}^n$ with constant unit orientation $\tau$, we denote by $\llbracket \sigma \rrbracket$ the rectifiable current $\llbracket \sigma, \tau, 1 \rrbracket$. Finite linear combinations of (the currents associated with) oriented $m$-simplexes with integer coefficients are called (integral) \emph{polyhedral $m$-chains}. The set of polyhedral $m$-chains in $\mathbb{R}^n$ will be denoted $\mathscr{P}_{m}(\mathbb{R}^n)$. The main motivation for introducing polyhedral chains is the following theorem.
\begin{theorem}[Polyhedral approximation, cf. {\cite[Corollary 4.2.21]{Fed69}}] \label{poly_app}
If $T \in \mathscr{I}_{m}(\mathbb{R}^n)$, ${\varepsilon > 0}$, $K \subset \mathbb{R}^n$ is a compact set such that $\mathrm{spt}(T) \subset {\rm int}K$, then there exists $P \in \mathscr{P}_{m}(\mathbb{R}^n)$, with $\mathrm{spt}(P) \subset K$, such that
\begin{equation} \label{poly_app:eq}
\mathbf{F}_{K}(T - P) < \varepsilon, \hspace{0.5cm}
\mathbf{M}(P) \leq \mathbf{M}(T) + \varepsilon, \hspace{0.5cm} \mathbf{M}(\partial P) \leq \mathbf{M}(\partial T) + \varepsilon.
\end{equation}
\end{theorem}
\section{Flat chains modulo $p$} \label{S3}
In this section, we recall the definitions of the sets of currents with coefficients in $\mathbb{Z}_{p}$, and collect some of the most relevant open questions regarding their structure.
\subsection{Definitions and basic properties.}
Let $p$ be a positive integer. For any ${T \in \mathscr{F}_{m,K}(\mathbb{R}^n)}$, we define
\begin{equation} \label{p_flat}
\begin{split}
\mathbf{F}_{K}^{p}(T) := \inf \lbrace \mathbf{M}(R) + \mathbf{M}(S) \, \colon \, &R \in \mathscr{R}_{m,K}(\mathbb{R}^n), S \in \mathscr{R}_{m+1,K}(\mathbb{R}^n) \mbox{ s.t. } \\
&T = R + \partial S + pQ \mbox{ for some } Q \in \mathscr{F}_{m,K}(\mathbb{R}^n) \rbrace.
\end{split}
\end{equation}
Observe that, since $\mathscr{I}_{m,K}(\mathbb{R}^n)$ is flat-dense in $\mathscr{F}_{m,K}(\mathbb{R}^n)$, the infimum is unchanged if we let $Q$ run in $\mathscr{I}_{m,K}(\mathbb{R}^n)$. Also notice that the inequality $\mathbf{F}_{K}^{p}(T) \leq \mathbf{F}_{K}(T)$ holds for any $T \in \mathscr{F}_{m,K}(\mathbb{R}^n)$.
Now, we introduce the equivalence relation ${\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$: given $T, \tilde{T} \in \mathscr{F}_{m,K}(\mathbb{R}^n)$, we say that $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ if and only if $\mathbf{F}_{K}^{p}(T - \tilde{T}) = 0$. The corresponding quotient group will be denoted $\mathscr{F}_{m,K}^{p}(\mathbb{R}^n)$. As in the classical case, $\mathbf{F}_{K}^{p}$ induces a distance $d_{\mathbf{F}_{K}^{p}}$ which makes $\mathscr{F}_{m,K}^{p}(\mathbb{R}^n)$ a complete metric space.
It is evident that if $T - \tilde{T} = pQ$ for some $Q \in \mathscr{F}_{m,K}(\mathbb{R}^n)$, then $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$, but the converse implication is not known (see Question \ref{pb1} below).
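A basic example of the equivalence is the following: if $T = \llbracket E, \tau, p \rrbracket$ carries constant multiplicity $p$ on a rectifiable set $E$, then $T = p \, \llbracket E, \tau, 1 \rrbracket$, hence $T = 0 \, {\rm mod}(p)$ and $\mathbf{F}_{K}^{p}(T) = 0$ for any compact $K$ containing $E$, even though $\mathbf{M}(T) = p \, \mathcal{H}^{m}(E)$ can be arbitrarily large.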
We say that two flat $m$-chains $T, \tilde{T} \in \mathscr{F}_{m}(\mathbb{R}^n)$ are equivalent ${\rm mod}(p)$ in $\mathscr{F}_{m}(\mathbb{R}^n)$, and we write $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m}(\mathbb{R}^n)$ if there exists a compact $K \subset \mathbb{R}^n$ such that $\mathbf{F}_{K}^{p}(T - \tilde{T}) = 0$. The elements of the corresponding quotient group $\mathscr{F}_{m}^{p}(\mathbb{R}^n)$ are called \emph{flat $m$-chains modulo $p$} and they will be denoted by $\left[T \right]$.
\begin{remark}
\begin{itemize}
\item[$(i)$] Note that if $T\in\mathscr{F}_{m}(\mathbb{R}^n)$ and $\mathrm{spt}(T)\subset K$, then it is false in general that $T\in\mathscr{F}_{m,K}(\mathbb{R}^n)$. The simplest counterexample is the $0$-dimensional current obtained as the boundary of (the rectifiable $1$-current associated to) a countable union of disjoint intervals $S_i$ contained in $\left[ 0, 1 \right]$ and clustering only at the origin, when $K=\{0\}\cup\bigcup_i \partial S_i$. Nevertheless, it is a consequence of the polyhedral approximation theorem \ref{poly_app} that if $\mathrm{spt}(T) \subset {\rm int} K$, then indeed $T \in \mathscr{F}_{m,K}(\mathbb{R}^n)$ (see also \cite[Theorem 4.2.22]{Fed69}).
\item[$(ii)$] One would expect that the following property holds. If $T=\tilde{T}\, {\rm mod}(p)$ in $\mathscr{F}_{m}(\mathbb{R}^n)$, then $T=\tilde{T}\, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$, whenever $K$ is a compact set which contains $\mathrm{spt}(T)$ and $\mathrm{spt}(\tilde T)$, and $T,\tilde{T}\in\mathscr{F}_{m,K}(\mathbb{R}^n)$. Nevertheless, the validity of this property does not appear to be obvious for a general compact set $K$. On the other hand, if $K$ is also convex, the validity of the property is immediate. Indeed, let $K'$ be a compact set such that $T-\tilde{T}=R_j+\partial S_j+pQ_j$ with $R_j \in \mathscr{R}_{m,K'}(\mathbb{R}^n), S_j \in \mathscr{R}_{m+1,K'}(\mathbb{R}^n)$, $Q_j \in \mathscr{F}_{m,K'}(\mathbb{R}^n)$ and $\mathbf{M}(R_j) + \mathbf{M}(S_j)\leq \frac{1}{j}$. Then, denoting by $\pi$ the (1-Lipschitz) closest-point projection on $K$, and by $\pi_\sharp$ the push-forward operator through $\pi$ (see \cite[Section 4.1.14]{Fed69}), we have that
$$T-\tilde{T}=\pi_\sharp T-\pi_\sharp\tilde{T}=\pi_\sharp R_j + \partial\pi_\sharp S_j + p\pi_\sharp Q_j,$$
where $\pi_\sharp R_j \in \mathscr{R}_{m,K}(\mathbb{R}^n), \pi_\sharp S_j \in \mathscr{R}_{m+1,K}(\mathbb{R}^n)$ and $\pi_\sharp Q_j \in \mathscr{F}_{m,K}(\mathbb{R}^n)$. Moreover $\mathbf{M}(\pi_\sharp R_j) + \mathbf{M}(\pi_\sharp S_j)\leq\mathbf{M}(R_j) + \mathbf{M}(S_j)$, hence $T=\tilde{T}\, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$.
\item[$(iii)$] Observe that, using the same argument as in $(ii)$, we are able to conclude that if $T\in\mathscr{F}_{m}(\mathbb{R}^n)$ and $\mathrm{spt}(T)\subset K$ then $T\in\mathscr{F}_{m,K}(\mathbb{R}^n)$ when $K$ is convex (or, more generally, whenever there exists a Lipschitz projection onto $K$).
\end{itemize}
\end{remark}
\subsection{Boundary, mass and support modulo $p$.}
It is immediate to see that if ${T = \tilde{T} \, {\rm mod}(p)}$ in $\mathscr{F}_{m}(\mathbb{R}^n)$ (resp. in $\mathscr{F}_{m,K}(\mathbb{R}^n)$), then also $\partial T = \partial \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m-1}(\mathbb{R}^n)$ (resp. in $\mathscr{F}_{m-1,K}(\mathbb{R}^n)$), and therefore a boundary operator $\partial$ can be defined also in the quotient groups $\mathscr{F}_{m}^{p}(\mathbb{R}^n)$ (resp. in $\mathscr{F}_{m,K}^{p}(\mathbb{R}^n)$) in such a way that
\begin{equation} \label{p_bound}
\partial \left[ T \right] = \left[ \partial T \right] \hspace{0.5cm} \mbox{for every } T \in \mathscr{F}_{m}(\mathbb{R}^n).
\end{equation}
For $T \in \mathscr{F}_{m}(\mathbb{R}^n)$, we also define its \emph{mass modulo $p$}, or simply $p$-\emph{mass} $\mathbf{M}^{p}(T)$, as the least $t \in \mathbb{R} \cup \{+\infty\}$ such that for every $\varepsilon > 0$ there exists a compact $K \subset \mathbb{R}^n$ and a rectifiable current $R \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ satisfying
\begin{equation} \label{p_mass}
\mathbf{F}_{K}^{p}(T - R) < \varepsilon \quad \mbox{and} \quad \mathbf{M}(R) \leq t + \varepsilon.
\end{equation}
One has that $\mathbf{M}^{p}(T_1 + T_2) \leq \mathbf{M}^{p}(T_1) + \mathbf{M}^{p}(T_2)$ and $\mathbf{M}^{p}(T) = \mathbf{M}^{p}(\tilde{T})$ if $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m}(\mathbb{R}^n)$. This allows one to regard $\mathbf{M}^{p}$ as a functional on the quotient group $\mathscr{F}_{m}^{p}(\mathbb{R}^n)$. This functional is lower semi-continuous with respect to the $\mathbf{F}_{K}^{p}$-convergence for every $K$.
Finally, we denote by $\mathrm{spt}^{p}(\left[T\right])$ the \emph{support modulo $p$} of $\left[ T \right] \in \mathscr{F}_{m}^{p}(\mathbb{R}^n)$, given by
\begin{equation} \label{p_spt}
\mathrm{spt}^{p}(\left[T\right]) := \bigcap \lbrace \mathrm{spt}(\tilde{T}) \, \colon \, \tilde{T} \in \mathscr{F}_{m}(\mathbb{R}^n), \, \tilde{T} = T \, {\rm mod}(p) \mbox{ in } \mathscr{F}_{m}(\mathbb{R}^n) \rbrace.
\end{equation}
\subsection{Rectifiable and integral currents modulo $p$.}
We define now the group $\mathscr{R}_{m}^{p}(\mathbb{R}^n)$ of the \emph{integer rectifiable currents modulo $p$} by setting
\begin{equation} \label{p_irc}
\mathscr{R}_{m,K}^{p}(\mathbb{R}^n) := \lbrace \left[ T \right] \in \mathscr{F}_{m,K}^{p}(\mathbb{R}^n) \, \colon \, T \in \mathscr{R}_{m,K}(\mathbb{R}^n) \rbrace.
\end{equation}
As usual, $\mathscr{R}_{m}^{p}(\mathbb{R}^n)$ is the union over $K$ compact of $\mathscr{R}_{m,K}^{p}(\mathbb{R}^n)$. Clearly, not all the elements in a class $\left[ T \right] \in \mathscr{R}_{m,K}^{p}(\mathbb{R}^n)$ are classical rectifiable currents, but whenever we write $\left[ T \right] \in \mathscr{R}_{m,K}^{p}(\mathbb{R}^n)$ it will always be implicitly understood that $T$ is a rectifiable representative of its class.\\
A current $R = \llbracket E, \tau, \theta \rrbracket \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ is called \emph{representative modulo $p$} if and only if
\[
\| R \|(A) \leq \frac{p}{2} \mathcal{H}^{m}(E \cap A) \hspace{0.5cm} \mbox{for every Borel set } A \subset \mathbb{R}^n.
\]
Evidently, this condition is equivalent to asking that
\[
|\theta(x)| \leq \frac{p}{2} \hspace{0.5cm} \mbox{for $\|R\|$-a.e. } x.
\]
Since obviously for any integer $z$ there exists a (unique) integer $- \frac{p}{2} < \tilde{z} \leq \frac{p}{2}$ with ${z \equiv \tilde{z} \, ({\rm mod}\, p)}$, then for any $T \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ there exists an integer rectifiable current $R\in \mathscr{R}_{m,K}(\mathbb{R}^n)$ such that $R = T \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ and $R$ is representative modulo $p$. We immediately conclude that any $T \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ can be written as
\begin{equation} \label{decomp}
T = R + p Q,
\end{equation}
where $R, Q \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ and $R$ is representative modulo $p$. It is proved in \cite[4.2.26, p. 430]{Fed69} that
\begin{equation} \label{representative}
\mathbf{M}^{p}(T) = \mathbf{M}(R), \hspace{0.5cm} \mathrm{spt}^{p}(\left[T\right]) = \mathrm{spt}(R),
\end{equation}
if $R$ is representative modulo $p$ of the current $T$.
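As an elementary example, take $p = 3$ and $T = \llbracket E, \tau, 5 \rrbracket$. Since $5 \equiv -1 \, ({\rm mod}\, 3)$ and $-1 \in \left( -\frac{3}{2}, \frac{3}{2} \right]$, the decomposition \eqref{decomp} reads
\[
T = \llbracket E, \tau, -1 \rrbracket + 3 \, \llbracket E, \tau, 2 \rrbracket,
\]
where $R = \llbracket E, \tau, -1 \rrbracket$ is representative modulo $3$, and \eqref{representative} gives $\mathbf{M}^{3}(T) = \mathbf{M}(R) = \mathcal{H}^{m}(E)$.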
A modulo $p$ version of Theorem \ref{mass_chain:thm} is contained in \cite[(4.2.16)$^{\nu}$, p. 431]{Fed69}:
\begin{theorem}[Rectifiability of flat chains modulo $p$] \label{p_mass_chain:thm}
\begin{equation} \label{p_mass_chain}
\mathscr{R}_{m,K}^{p}(\mathbb{R}^n) = \lbrace \left[ T \right] \in \mathscr{F}_{m,K}^{p}(\mathbb{R}^n) \, \colon \, \mathbf{M}^{p}(\left[T\right]) < \infty \rbrace.
\end{equation}
\end{theorem}
Hence, if $\left[ T \right] \in \mathscr{F}_{m,K}^{p}(\mathbb{R}^n)$ has finite $p$-mass $\mathbf{M}^{p}(\left[T\right])$, then there exists $R \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ such that $R = T \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$, $\mathbf{M}(R) = \mathbf{M}^{p}(\left[T\right])$, and $\mathrm{spt}(R) = \mathrm{spt}^{p}(\left[T\right]).$\\
Next, we define the group $\mathscr{I}_{m}^{p}(\mathbb{R}^n)$ of the \emph{integral currents modulo $p$} as the union of the groups
\[
\mathscr{I}_{m,K}^{p}(\mathbb{R}^n) := \lbrace \left[ T \right] \in \mathscr{R}_{m,K}^{p}(\mathbb{R}^n) \, \colon \, \partial\left[ T \right] \in \mathscr{R}_{m-1,K}^{p}(\mathbb{R}^n) \rbrace.
\]
The conclusions about integer rectifiable currents modulo $p$ deriving from the above discussion allow us to say that if $\left[ T \right] \in \mathscr{I}_{m,K}^{p}(\mathbb{R}^n)$ then $\mathbf{M}^{p}(\left[T\right]) < \infty$, $\mathbf{M}^{p}(\partial \left[T\right]) < \infty$ and that there are currents $R \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ and $S \in \mathscr{R}_{m-1,K}(\mathbb{R}^n)$ such that $R = T \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ and $S = \partial T \, {\rm mod}(p)$ in $\mathscr{F}_{m-1,K}(\mathbb{R}^n)$. In particular, $R$ and $S$ may be chosen to be representative modulo $p$, so that $\mathbf{M}(R) = \mathbf{M}^{p}(T)$ and $\mathbf{M}(S) = \mathbf{M}^{p}(\partial T)$. It is not known whether it is possible to choose $I \in \mathscr{I}_{m,K}(\mathbb{R}^n)$ such that $T = I \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ (see Question \ref{pb2} below).
A modulo $p$ version of the Boundary Rectifiability Theorem can be straightforwardly deduced from Theorem \ref{p_mass_chain:thm}, as we have:
\begin{theorem}[Boundary Rectifiability modulo $p$, cf. {\cite[(4.2.16)$^{\nu}$]{Fed69}}] \label{p_b_rect:thm}
\begin{equation} \label{p_b_rect}
\mathscr{I}_{m,K}^{p}(\mathbb{R}^n) = \lbrace \left[T\right] \in \mathscr{R}_{m,K}^{p}(\mathbb{R}^n) \, \colon \, \mathbf{M}^{p}(\partial \left[T\right]) < \infty \rbrace.
\end{equation}
\end{theorem}
We conclude with the following modulo $p$ version of the Polyhedral approximation Theorem \ref{poly_app}, which can be deduced from \cite[(4.2.20)$^{\nu}$]{Fed69}. Since the statement does not appear in \cite{Fed69}, for the reader's convenience we include here the proof.
\begin{theorem}[Polyhedral approximation modulo $p$] \label{p_poly_app}
If $\left[T\right] \in \mathscr{I}_{m}^{p}(\mathbb{R}^n)$, ${\varepsilon > 0}$, $K \subset \mathbb{R}^n$ is a compact set such that $\mathrm{spt}^{p}(\left[T\right]) \subset {\rm int}K$, then there exists $P \in \mathscr{P}_{m}(\mathbb{R}^n)$, with $\mathrm{spt}(P) \subset K$, such that
\begin{equation} \label{p_poly_app:eq}
\mathbf{F}_{K}^{p}(T - P) < \varepsilon, \hspace{0.5cm}
\mathbf{M}^{p}(P) \leq \mathbf{M}^{p}(T) + \varepsilon, \hspace{0.5cm} \mathbf{M}^{p}(\partial P) \leq \mathbf{M}^{p}(\partial T) + \varepsilon.
\end{equation}
\end{theorem}
\begin{proof}
Let $T \in \mathscr{R}_{m}(\mathbb{R}^{n})$ be a (rectifiable) representative modulo $p$ of $\left[ T \right]$. In particular, by formula \eqref{representative}, $T$ satisfies $\mathrm{spt}(T) = \mathrm{spt}^{p}(\left[ T \right]) \subset {\rm int}K$ and $\mathbf{M}(T) = \mathbf{M}^{p}(T)$. Fix $\varepsilon > 0$, and let $0 < \delta \leq \varepsilon$ be such that $\{ x \in \mathbb{R}^{n} \, \colon \, {\rm dist}(x, \mathrm{spt}(T)) < \delta \} \subset K$. By \cite[Theorem (4.2.20)$^{\nu}$]{Fed69}, there exist $P \in \mathscr{P}_{m}(\mathbb{R}^{n})$ with $\mathrm{spt}(P) \subset K$ and a diffeomorphism $f \in C^{1}(\mathbb{R}^{n}, \mathbb{R}^{n})$ such that:
\begin{itemize}
\item[$(i)$] $\mathbf{M}^{p}(P - f_{\sharp}T) + \mathbf{M}^{p}(\partial P - f_{\sharp}\partial T) \leq \delta$;
\item[$(ii)$] $\mathrm{Lip}(f) \leq 1 + \delta$, and $\mathrm{Lip}(f^{-1}) \leq 1 + \delta$;
\item[$(iii)$] $|f(x) - x| \leq \delta \mbox{ for } x \in \mathbb{R}^{n}$, and $f(x) = x \mbox{ if } {\rm dist}(x, \mathrm{spt}(T)) \geq \delta$.
\end{itemize}
From $(i)$ it readily follows that
\begin{equation} \label{ppa:1}
\mathbf{M}^{p}(P) \leq \delta + \mathbf{M}^{p}(f_{\sharp}T) \leq \delta + (1 + \delta)^{m} \mathbf{M}^{p}(T),
\end{equation}
and analogously
\begin{equation} \label{ppa:2}
\mathbf{M}^{p}(\partial P) \leq \delta + \mathbf{M}^{p}(f_{\sharp}\partial T) \leq \delta + (1 + \delta)^{m-1} \mathbf{M}^{p}(\partial T).
\end{equation}
In order to prove the estimate on the $\mathbf{F}^{p}_{K}$ distance, let $h$ be the affine homotopy from the identity map to $f$, i.e. $h(t,x) := (1 - t) x + t f(x)$, and observe that the homotopy formula (see \cite[Section 4.1.9]{Fed69}) yields
\begin{equation} \label{ppa:3}
P - T = P - f_{\sharp}T + \partial \left( h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times T) \right) + h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times \partial T).
\end{equation}
Now, since $\mathbf{M}^{p}(\partial T) < \infty$, there exists a rectifiable current $Z \in \mathscr{R}_{m-1,K}(\mathbb{R}^{n})$ such that
\begin{equation} \label{ppa:4}
\mathbf{F}_{K}^{p}(\partial T - Z) \leq \delta \hspace{0.2cm} \mbox{ and } \hspace{0.2cm} \mathbf{M}(Z) \leq \mathbf{M}^{p}(\partial T) + \delta.
\end{equation}
In particular, this implies the existence of $R \in \mathscr{R}_{m-1,K}(\mathbb{R}^{n})$, $S \in \mathscr{R}_{m,K}(\mathbb{R}^{n})$ and $Q \in \mathscr{I}_{m-1,K}(\mathbb{R}^{n})$ with $\mathbf{M}(R) + \mathbf{M}(S) \leq 2 \delta$ such that $\partial T - Z = R + \partial S + pQ$. If combined with \eqref{ppa:3}, this gives
\begin{equation} \label{ppa:5}
P - T = P - f_{\sharp}T + \partial h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times T) + h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times Z) + h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times (R + \partial S + pQ)).
\end{equation}
Since, again by the homotopy formula,
\[
h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times \partial S) = f_{\sharp}S - S - \partial h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times S),
\]
we can finally re-write equation \eqref{ppa:5} as follows:
\begin{equation} \label{ppa:6}
\begin{split}
P - T = \,& P - f_{\sharp}T \\
&+ h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times (Z + R)) + f_{\sharp}S - S \\
&+ \partial h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times (T - S)) \\
&+ p h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times Q).
\end{split}
\end{equation}
Therefore, we can finally estimate
\begin{equation} \label{ppa:7}
\begin{split}
\mathbf{F}_{K}^{p}(P - T) \leq \,& \mathbf{F}_{K}^{p}(P - f_{\sharp}T) \\
&+ \mathbf{M}(h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times (Z + R))) + \mathbf{M}(f_{\sharp}S) + \mathbf{M}(S) \\
&+ \mathbf{M}(h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times (T - S))) \\
\leq & 3\delta + 2\delta (1+\delta)^{m} + \delta (2 + \delta) (\mathbf{M}^{p}(T) + \mathbf{M}^{p}(\partial T) + 3\delta),
\end{split}
\end{equation}
where we have used \cite[Formula (26.23)]{Sim83} to estimate the masses of the two homotopy push-forwards $h_{\sharp}(\llbracket \left( 0,1 \right) \rrbracket \times \cdot \,)$ appearing above.
The conclusion, formula \eqref{p_poly_app:eq}, clearly follows from \eqref{ppa:1}, \eqref{ppa:2} and \eqref{ppa:7} for a suitable choice of $\delta = \delta(\varepsilon, m, \mathbf{M}^{p}(T), \mathbf{M}^{p}(\partial T))$.
\end{proof}
\subsection{Questions on the structure of flat chains and integral currents modulo $p$.}
As already anticipated, two very natural questions arise about the structure of flat chains and integral currents modulo $p$ (see \cite[4.2.26]{Fed69}).
We fix a compact subset $K \subset \mathbb{R}^n$.
\begin{problema} \label{pb1}
Given $T,\tilde T\in\mathscr{F}_{m,K}(\mathbb{R}^n)$, is it true that $T = \tilde T \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ if and only if $T-\tilde T= pQ$ for some $Q \in \mathscr{F}_{m,K}(\mathbb{R}^n)$? In other words, using the density of $\mathscr{I}_{m,K}(\mathbb{R}^n)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$, the problem is to prove or disprove the following statement. If there exist three sequences $\{ R_j \} \subset \mathscr{R}_{m,K}(\mathbb{R}^n)$, $\{ S_j \} \subset \mathscr{R}_{m+1,K}(\mathbb{R}^n)$, $\{ Q_j \} \subset \mathscr{I}_{m,K}(\mathbb{R}^n)$ such that
\begin{equation} \label{pb1:hp1}
T- \tilde T= R_{j} + \partial S_{j} + p Q_{j} \hspace{0.5cm} \forall j,
\end{equation}
and
\begin{equation} \label{pb1:hp2}
\lim_{j\to \infty} \left( \mathbf{M}(R_j) + \mathbf{M}(S_j) \right) = 0,
\end{equation}
then $T-\tilde T = p Q$ for some $Q \in \mathscr{F}_{m,K}(\mathbb{R}^n)$.
\end{problema}
\begin{remark} \label{remarkone}
As we shall soon see, the answer to the above question is affirmative if the class $\mathscr{F}_{m,K}(\mathbb{R}^n)$ is replaced by the class $\mathscr{R}_{m,K}(\mathbb{R}^n)$: in other words, given integer rectifiable currents $T, \tilde{T}$ one has that $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ if and only if $T - \tilde{T} = p Q$ for some $Q \in \mathscr{R}_{m,K}(\mathbb{R}^n)$. As a corollary, Question \ref{pb1} admits an affirmative answer for $m = n$, since $\mathscr{F}_{n,K}(\mathbb{R}^n) = \mathscr{R}_{n,K}(\mathbb{R}^n)$. For $0 \leq m \leq n-1$, the question is wide open.
\end{remark}
\begin{problema} \label{pb2}
Given $\left[ T \right] \in \mathscr{I}_{m,K}^{p}(\mathbb{R}^n)$, does there exist an \emph{integral} current $I \in \mathscr{I}_{m,K}(\mathbb{R}^n)$ such that $I = T \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$? In other words: is it true that
\[
\mathscr{I}_{m,K}^{p}(\mathbb{R}^n) = \lbrace \left[ T \right] \, \colon \, T \in \mathscr{I}_{m,K}(\mathbb{R}^n) \rbrace?
\]
\end{problema}
\begin{remark}
The answer is trivial for $m=0$, since integral and integer rectifiable $0$-dimensional currents are the same class. In \cite[4.2.26, p. 426]{Fed69}, Federer does not really present this issue as a ``question'', but he rather claims that the answer is negative, in general dimension and codimension. Nevertheless, the counterexample he suggests (an infinite sum of disjoint copies of ${\bf RP}^{2}$ in $\mathbb{R}^6$ with the property that the sum of the areas is finite but the sum of the lengths of the bounding projective lines is infinite) is not fully satisfactory (cf. \cite[Problem 3.3]{Bro86}). Indeed, it allows one to answer the question in the negative only for very special choices of the set $K$ (in particular, the question remains open when $K$ is a convex set).
\end{remark}
\subsection{Some partial answers from the literature.}
An immediate consequence of \eqref{representative} is the following: if $T,\tilde T \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ are such that $T = \tilde T \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$, then evidently $\mathbf{M}^{p}(T-\tilde T) = \mathbf{M}^{p}(0) = 0$, and hence the representative modulo $p$ of $T-\tilde T$ is $R = 0$ because of \eqref{representative}. Therefore, equation \eqref{decomp} yields $T-\tilde T = p Q$ for some integer rectifiable current $Q\in\mathscr{R}_{m,K}(\mathbb{R}^n)$. In conclusion, we have the following
\begin{proposition}
The answer to Question \ref{pb1} is affirmative in the class of integer rectifiable currents. Therefore:
\begin{equation} \label{p_rect}
\mathscr{R}_{m,K}(\mathbb{R}^n)\cap\{T\in\mathscr{F}_{m,K}(\mathbb{R}^n)\,:\,T=0\,{\rm mod}(p)\text{ in } \mathscr{F}_{m,K}(\mathbb{R}^n)\} = \{pR\,:\,R\in\mathscr{R}_{m,K}(\mathbb{R}^n)\}.
\end{equation}
\end{proposition}
In particular, the following corollary holds true:
\begin{corollary}
Let $T, \tilde{T} \in \mathscr{R}_{m}(\mathbb{R}^n)$. Then, $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m}(\mathbb{R}^n)$ if and only if $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$ for every $K$ compact with $\mathrm{spt}(T) \cup \mathrm{spt}(\tilde{T}) \subset K$.
\end{corollary}
\begin{proof}
The ``if'' implication is trivial. For the converse, assume $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m}(\mathbb{R}^n)$ and fix any compact set $K$ such that $\mathrm{spt}(T) \cup \mathrm{spt}(\tilde{T}) \subset K$. By definition, there exists a compact set $K'$ such that $\mathbf{F}^{p}_{K'}(T - \tilde{T}) = 0$, which, by the above proposition, implies
\[
T - \tilde{T} = pQ
\]
for some $Q \in \mathscr{R}_{m,K'}(\mathbb{R}^n)$. Note that since $T - \tilde{T}$ is supported in $K$, so is $Q$, and thus $\mathbf{F}_{K}(T - \tilde{T}) = 0$, i.e. $T = \tilde{T} \, {\rm mod}(p)$ in $\mathscr{F}_{m,K}(\mathbb{R}^n)$.
\end{proof}
From now on, by virtue of the previous corollary, for rectifiable currents $T$ and $\tilde{T}$ in $\mathscr{F}_{m}(\mathbb{R}^n)$ which are equivalent modulo $p$ we will just write $T = \tilde{T} \, {\rm mod}(p)$ without specifying in which class the equivalence relation is meant.
\\
In codimension $0$, B. White \cite{WH79} gave an affirmative answer to Question \ref{pb2}.
\begin{theorem}[cf. {\cite[Proposition 2.3]{WH79}}] \label{White}
Let $T\in \mathscr{R}_{n,K}(\mathbb{R}^n)$. Then, $\left[ T \right] \in \mathscr{I}_{n,K}^{p}(\mathbb{R}^n)$ if and only if the select representative modulo $p$ of $T$ is an integral current.
\end{theorem}
The \emph{select} representative modulo $p$ of a rectifiable current $T=\llbracket E, \tau, \theta \rrbracket$ is the unique $T'=\llbracket E, \tau, \theta' \rrbracket$ representative modulo $p$ of $T$ with multiplicity $\theta' \in \left( - \frac{p}{2}, \frac{p}{2} \right]$.
White's proof relies on the following:
\begin{proposition} \label{S_n}
If $T \in \mathscr{R}_{n,K}(\mathbb{R}^n)$ is a select representative modulo $p$, then
\begin{equation} \label{claim}
\mathbf{M}(\partial T) \leq (p - 1) \mathbf{M}^{p}(\partial T).
\end{equation}
\end{proposition}
We sketch how Theorem \ref{White} follows from Proposition \ref{S_n}. Take $\left[ T \right] \in \mathscr{I}_{n,K}^{p}(\mathbb{R}^n)$, and let $T'$ be the unique select representative modulo $p$ of $T$. A priori, $T'$ is just an integer rectifiable current. On the other hand, since $\left[ T \right]$ is integral, $\mathbf{M}^{p}(\partial T)$ is finite by \eqref{p_mass_chain}. Then Proposition \ref{S_n} implies that $\mathbf{M}(\partial T')$ is finite. Hence, $T'$ is integral because of \eqref{b_rect}.\\
Unfortunately, in order to carry out the argument that White uses to prove Proposition \ref{S_n}, the codimension $0$ assumption is indispensable. The idea is the following. Firstly, Theorem \ref{p_poly_app} allows one to reduce the problem to the case of polyhedral chains. Now, for any given polyhedral chain $T$ which is a select representative modulo $p$ one writes $T = \llbracket \mathbb{R}^n, \mathbf{e_n}, \theta \rrbracket$, where $\mathbf{e_n}$ is the constant standard orientation of $\mathbb{R}^n$ and $\theta$ is a summable, piecewise constant, integer-valued function with values in $\left( - \frac{p}{2}, \frac{p}{2} \right]$. Then, White makes the following key observation: since the codimension is $0$, if $Z$ is a polyhedron in $\partial T$ then for $\mathcal{H}^{n-1}$-a.e. $x\in Z$ the multiplicity at $x$ is the difference of the values that the function $\theta$ takes on the two sides of $Z$ (with the correct sign), whose absolute value is in fact bounded by $p-1$ (because $T$ is a select representative modulo $p$).\\
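In the simplest instance $n = 1$ and $p = 3$, for example, the select multiplicity function $\theta$ takes values in $\{-1, 0, 1\}$, so the multiplicity of $\partial T$ at any jump point of $\theta$ is the difference of two such values, whose absolute value is at most $2 = p - 1$, in accordance with \eqref{claim}.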
In the next section, we will show that the validity of a statement like the one in Proposition \ref{S_n} is in fact the key not only for giving an affirmative answer to Question \ref{pb2}, but also for positively answering Question \ref{pb1}. Furthermore, we will answer Question \ref{pb2} in dimension $m=1$.
\section{Main results} \label{S4}
In this section, we will further analyze Questions \ref{pb1} and \ref{pb2}. First, we point out that the two questions are, in fact, connected.
\subsection{Connection between Questions \ref{pb1} and \ref{pb2}.}
For every $K \subset \mathbb{R}^n$ compact, consider the following family of statements $\mathcal{S}_{m}$, for $m = 1,\dots, n$.
\begin{taggedtheorem}{$\mathcal{S}_{m}$} \label{S_k}
There exists a constant $C = C(m,n,p,K)$ with the following property. For any $\left[ S \right] \in \mathscr{R}_{m,K}^{p}(\mathbb{R}^n)$ there exists a current $\tilde{S} \in \mathscr{R}_{m,K}(\mathbb{R}^n)$ with $\tilde{S} = S \, {\rm mod}(p)$ and such that
\[
\mathbf{M}(\partial \tilde{S}) \leq C \mathbf{M}^{p}(\partial S).
\]
\end{taggedtheorem}
Using Theorem \ref{p_poly_app}, it is easy to see that the validity of Statement \ref{S_k} follows from the validity of a slightly stronger property for polyhedral chains, which, on the other hand, might be easier to check.
\begin{taggedtheorem}{$\mathcal{P}_{m}$} \label{S_k_poly}
There exists a constant $C = C(m,n,p)$ \emph{independent of $K$} with the following property. For any $P \in \mathscr{P}_{m}(\mathbb{R}^n)$ with $\mathrm{spt}(P) \subset K$, there exists a current $\tilde{P} \in \mathscr{P}_{m}(\mathbb{R}^n)$, with $\tilde{P} = P \, {\rm mod}(p)$ and $\mathrm{spt}(\tilde P) \subset K$ such that
\[
\mathbf{M}(\partial \tilde{P}) \leq C \mathbf{M}^{p}(\partial P);\hspace{0.5cm} \mathbf{M}(\tilde{P}) \leq C \mathbf{M}^{p}(P).
\]
\end{taggedtheorem}
\begin{proposition}\label{equiv_statements}
The validity of Statement $\mathcal{P}_{m}$ implies that of Statement $\mathcal{S}_{m}$.
\end{proposition}
\begin{proof}
Let $\left[ S \right] \in \mathscr{R}_{m,K}^{p}(\mathbb{R}^n)$. We can assume that $\mathbf{M}^p(\left[ \partial S \right])$ is finite, otherwise the conclusion of Statement $\mathcal{S}_{m}$ is trivial. By Theorem \ref{p_poly_app}, for every $j=1,2,\ldots$ there exists $P_j\in \mathscr{P}_{m}(\mathbb{R}^n)$ such that, denoting
$$K_j:=\left\lbrace x\in\mathbb{R}^n:{\rm{dist}}(x,K)\leq\frac{1}{j}\right\rbrace,$$
one has $\mathrm{spt}(P_{j}) \subset K_{j}$ and
\begin{equation} \label{e1}
\mathbf{F}_{K_j}^{p}(S - P_j) < \frac{1}{j}, \hspace{0.5cm}
\mathbf{M}^{p}(P_j) \leq \mathbf{M}^{p}(S) + \frac{1}{j}, \hspace{0.5cm} \mathbf{M}^{p}(\partial P_j) \leq \mathbf{M}^{p}(\partial S) + \frac{1}{j}.
\end{equation}
Now, by Statement $\mathcal{P}_{m}$ there exist a constant $C$ (which does not depend on $j$) and a sequence $\{\tilde P_j\}$ of polyhedral chains with $\tilde P_j = P_j \, {\rm mod}(p)$ and $\mathrm{spt}(\tilde P_j)\subset K_j$ such that
\[
\mathbf{M}(\partial \tilde P_j) \leq C \mathbf{M}^{p}(\partial P_j);\hspace{0.5cm} \mathbf{M}(\tilde{P_j}) \leq C \mathbf{M}^{p}(P_j).
\]
Combining this with \eqref{e1}, we get
\[
\sup_{j \geq 1} \{ \mathbf{M}(\tilde P_j) + \mathbf{M}(\partial \tilde P_j) \} \leq C(\mathbf{M}^p(S)+\mathbf{M}^p(\partial S)+2)< \infty.
\]
Then, by the Compactness Theorem \ref{compactness:thm} there exist $\tilde S \in \mathscr{I}_{m,K_1}(\mathbb{R}^n)$ and a subsequence $\{\tilde P_{j_h}\}$ such that
\begin{equation}\label{e2}
\lim_{h \to \infty} \mathbf{F}_{K_1}(\tilde S - \tilde P_{j_h}) = 0.
\end{equation}
Moreover, by the lower semi-continuity of the mass,
$$\mathbf{M}(\partial \tilde{S}) \leq C \mathbf{M}^{p}(\partial S);\hspace{0.5cm} \mathbf{M}(\tilde{S}) \leq C \mathbf{M}^{p}(S)$$
and we claim that $\mathrm{spt} (\tilde S)\subset K$. Indeed, take $x\in\mathbb{R}^n\setminus K$. We will prove that there exists a closed set $C$ such that $x\not\in C$ and $\langle \tilde S,\omega\rangle=0$ whenever $\omega\equiv 0$ on $C$, which implies that $x\not\in\mathrm{spt}(\tilde S)$. Fix $\ell$ such that $x\not\in K_{j_\ell}$ and let $C:=K_{j_\ell}$. Let $\omega$ be an $m$-form with $\omega\equiv 0$ on $C$. Since $\mathrm{spt}(\tilde P_{j_h})\subset C$ for every $h\geq\ell$, we have
\begin{equation}\label{fava}
\langle \tilde P_{j_h},\omega\rangle=0,\quad \text{for every $h\geq\ell$}.
\end{equation}
On the other hand, by \eqref{e2}, for every $\varepsilon>0$ there exists $h\geq\ell$ such that we can write $\tilde S-\tilde P_{j_h}=R+\partial Q$ for some $R\in\mathscr{R}_{m,K_1}(\mathbb{R}^n)$ and $Q\in\mathscr{R}_{m+1,K_1}(\mathbb{R}^n)$ with $\mathbf{M}(R)+\mathbf{M}(Q)\leq\varepsilon$. Hence
$$\langle \tilde S-\tilde P_{j_h},\omega\rangle = \langle R,\omega\rangle+\langle \partial Q,\omega\rangle\leq\mathbf{M}(R)\|\omega\|_{\infty}+\mathbf{M}(Q)\|d\omega\|_{\infty}\leq\varepsilon(\|\omega\|_{\infty}+\|d\omega\|_{\infty}).$$
Hence by \eqref{fava} $\langle \tilde S,\omega\rangle=0$, which completes the proof of the claim.\\
Finally, we show that $\tilde S=S\,{\rm mod}(p)$. To this aim, for every $h=1,2,\ldots$, we compute
$$\mathbf{F}^p_{K_1}(\tilde S-S) \leq \mathbf{F}^p_{K_1}(\tilde S - \tilde P_{j_h})+\mathbf{F}^p_{K_1}(\tilde P_{j_h}-S)\leq \mathbf{F}_{K_1}(\tilde S - \tilde P_{j_h})+\mathbf{F}^p_{K_{{j_h}}}(\tilde P_{j_h}-S),$$
which by \eqref{e1} and \eqref{e2} tends to $0$ when $h$ tends to $\infty$.
\end{proof}
\begin{remark}
It follows from the above proof that if the Statement $\mathcal{P}_{m}$ holds true then the Statement $\mathcal{S}_{m}$ holds true with the same constant $C$. In particular, the constant would not depend on the compact set $K$.
\end{remark}
Clearly, if the Statement $\mathcal{S}_{m}$ is true then every $m$-dimensional integral current modulo $p$ in $K$ has an integral representative in $K$, and thus the answer to Question \ref{pb2} is affirmative in dimension $m$. The next theorem shows that, in fact, the validity of $\mathcal{S}_{m}$ has important consequences on Question \ref{pb1} as well.
\begin{theorem} \label{link:thm}
If $\mathcal{S}_{m}$ holds true, then Question \ref{pb1} has affirmative answer in $\mathscr{F}_{m-1,K}(\mathbb{R}^{n})$.
\end{theorem}
\begin{proof}
It is sufficient to prove that if $T \in \mathscr{F}_{m-1,K}(\mathbb{R}^n)$ is a flat $(m-1)$-chain such that $T = 0 \, {\rm mod}(p)$ in $\mathscr{F}_{m-1,K}(\mathbb{R}^n)$, then $T=pQ$ for some $Q\in\mathscr{F}_{m-1,K}(\mathbb{R}^n)$. Let ${\{ R_{j} \} \subset \mathscr{R}_{m-1,K}(\mathbb{R}^n)}$, $\{ S_{j} \} \subset \mathscr{R}_{m,K}(\mathbb{R}^n)$ and $\{ Q_{j} \} \subset \mathscr{I}_{m-1,K}(\mathbb{R}^n)$ be such that
\begin{equation} \label{link:1}
T = R_{j} + \partial S_{j} + p Q_{j} \hspace{0.5cm} \forall j
\end{equation}
and
\begin{equation} \label{link:2}
\lim_{j \to \infty} \left( \mathbf{M}(R_j) + \mathbf{M}(S_j) \right) = 0.
\end{equation}
Conditions \eqref{link:1} and \eqref{link:2} are equivalent to saying that the currents $p Q_{j}$ converge to $T$ in flat norm $\mathbf{F}_{K}$. We want to conclude from this that $T = p Q$ for some $Q \in \mathscr{F}_{m-1,K}(\mathbb{R}^n)$. In other words, we are looking for a closedness result for currents of the form $p Q$ with respect to flat convergence. Now, observe the following. For every $j$, the current $R_{j}$ is rectifiable. Therefore, we can write
\begin{equation} \label{link:3}
R_{j} = \tilde{R}_{j} + p V_{j},
\end{equation}
with $V_j \in \mathscr{R}_{m-1,K}(\mathbb{R}^n)$ and $\tilde{R}_j$ representative modulo $p$. In particular, this implies that
\begin{equation} \label{link:4}
\mathbf{M}(\tilde{R}_{j}) = \mathbf{M}^{p}(R_j) \leq \mathbf{M}(R_j) \to 0.
\end{equation}
Also the currents $S_{j}$ are rectifiable, and of dimension $m$. Since $\mathcal{S}_{m}$ holds true, for every $j$ we can let $\tilde{S}_{j}$ be the representative of $\left[ S_{j} \right]$ given there, so that
\begin{equation} \label{link:4bis}
S_{j} = \tilde{S}_{j} + p Z_{j}
\end{equation}
with $\tilde{S}_{j}, Z_{j} \in \mathscr{R}_{m,K}(\mathbb{R}^n)$, and
\begin{equation} \label{link:5}
\mathbf{M}(\partial \tilde{S}_{j}) \leq C(m,n,p,K) \mathbf{M}^{p}(\partial S_{j}).
\end{equation}
Now, since $\mathbf{M}^{p}(T - p Q_j) = \mathbf{M}^{p}(T) = 0$ for every $j$ and $\mathbf{M}^{p}(R_j) \to 0$, we deduce from \eqref{link:1} that also $\mathbf{M}^{p}(\partial S_j) \to 0$, and therefore also $\mathbf{M}(\partial \tilde{S}_{j}) \to 0$.
Thus, the above argument produces the following: modulo replacing $Q_j \in \mathscr{I}_{m-1,K}(\mathbb{R}^n)$ with $\tilde{Q}_{j} := Q_{j} + V_{j} + \partial Z_{j} \in \mathscr{F}_{m-1,K}(\mathbb{R}^n)$, we can replace \eqref{link:1} with
\begin{equation} \label{link:6}
T = \tilde{R}_j + \partial \tilde{S}_j + p \tilde{Q}_j \hspace{0.5cm} \forall j,
\end{equation}
and \eqref{link:2} with the stronger
\begin{equation} \label{link:7}
\lim_{j\to\infty} \left( \mathbf{M}(\tilde{R}_j) + \mathbf{M}(\partial \tilde{S}_j) \right) = 0,
\end{equation}
that is, the currents $p \tilde{Q}_{j}$ now approximate $T$ \emph{in mass}.
The problem, now, reduces to proving that the subset of flat chains in $\mathscr{F}_{m-1,K}(\mathbb{R}^n)$ of the form $pQ$ is closed with respect to convergence in mass: this question, though, is evidently much easier than the previous one, and it turns out to always have an affirmative answer. Indeed, let $\{ Q_{j} \}_{j=1}^{\infty} \subset \mathscr{F}_{m-1,K}(\mathbb{R}^n)$ be a sequence of flat chains such that ${\mathbf{M}(T - pQ_{j}) \to 0}$. In particular, this implies that the sequence $\{pQ_j\}$ is a Cauchy sequence in mass. Therefore, the sequence $\{Q_{j}\}$ is also a Cauchy sequence in mass, and in fact also in the flat norm $\mathbf{F}_{K}$, since $\mathbf{F}_{K}(T) \leq \mathbf{M}(T)$ for any $T \in \mathscr{F}_{m-1,K}(\mathbb{R}^n)$.\footnote{If $\mathbf{M}(T) = \infty$ there is of course nothing to prove. On the other hand, if $\mathbf{M}(T) < \infty$ then $T$ is integer rectifiable, and hence it is a competitor for the decomposition in the definition of the flat norm.} So, by completeness there is $Q \in \mathscr{F}_{m-1,K}(\mathbb{R}^n)$ such that $\mathbf{F}_{K}(Q - Q_j) \to 0$. This also implies $\mathbf{F}_{K}(pQ - pQ_j) \to 0$, since $\mathbf{F}_{K}(nT) \leq n \mathbf{F}_{K}(T)$ in general. So, $pQ$ is a flat limit of the sequence $pQ_j$. By uniqueness of the limit, one therefore has to conclude $T = pQ$.
\end{proof}
\begin{corollary} \label{cod1_cor}
The answer to Question \ref{pb1} is positive for $m = n-1$.
\end{corollary}
\begin{proof}
It immediately follows from Theorem \ref{link:thm}, since $\mathcal{S}_{n}$ is Proposition \ref{S_n}.
\end{proof}
\subsection{Answer to Question \ref{pb2} in dimension $m=1$.}
\begin{theorem}\label{t:main}
The answer to Question \ref{pb2} is positive for $m=1$.
\end{theorem}
In the proof, we will use the following elementary fact.
\begin{lemma}\label{l:chain} Let $P \in \mathscr{P}_{1}(\mathbb{R}^n)$ have positive multiplicities. Let $z$ be a point in $\mathrm{spt}(\partial P)$. Then one can select a finite sequence of oriented segments $S_1,\ldots, S_N$ supported in the support of $P$ such that:
\begin{enumerate}
\item the orientation of each segment $S_i$ coincides with the orientation of $P$ on $S_i$;
\item the second endpoint of $S_{i}$ coincides with the first endpoint of $S_{i+1}$, for $i=1,\ldots, N-1$;
\item if the multiplicity of $\partial P$ at $z$ is negative, then the first endpoint of $S_1$ is $z$ and the second endpoint of $S_N$ is a point $x$ of the support of $\partial P$ with positive multiplicity. Vice versa, if the multiplicity of $\partial P$ at $z$ is positive, then the first endpoint of $S_1$ is a point $x$ of the support of $\partial P$ with negative multiplicity and the second endpoint of $S_N$ is $z$;
\item $S_i\neq S_j$ for $i\neq j$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume without loss of generality that the multiplicity of $\partial P$ at $z$ is negative. Since the multiplicities on $P$ are all positive, among the (finitely many) segments defining the support of $P$ there is at least one segment $S_1$ whose first endpoint is $z$ such that
\begin{equation}\label{e:spezza_massa}
\mathbf{M}(P)=\mathbf{M}(P - \llbracket S_1 \rrbracket)+\mathbf{M}(\llbracket S_1 \rrbracket),
\end{equation}
If the second extreme $y$ of $S_1$ is not a point with positive multiplicity of $\partial P$, it is a point of negative multiplicity of $\partial (P-\llbracket S_1 \rrbracket)$, hence the procedure can be repeated with $P-\llbracket S_1 \rrbracket$ in place of $P$ and $y$ in place of $z$. The procedure has to terminate in a finite number of steps, because of \eqref{e:spezza_massa} and the fact that the masses of the $\llbracket S_{i} \rrbracket$ are bounded from below by a positive constant (there are only finitely many segments in the support of $P$). When the procedure ends, one can easily see that the ordered sequence of segments collected satisfies properties $(1)$--$(3)$. Property $(4)$ is not necessarily satisfied. If a certain segment $S'$ is repeated in the procedure, it is sufficient to eliminate from the sequence one copy of $S'$ and all the segments appearing between two repetitions of $S'$. After this elimination, the sequence satisfies also property $(4)$.
\end{proof}
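The selection procedure in the proof is, in effect, a greedy walk on a finite multigraph followed by a loop-erasure step. The following Python sketch is purely illustrative and not part of the argument; the encoding of $P$ as a dictionary mapping oriented segments $(u,v)$ to positive integer multiplicities, as well as all function names, are our own.
\begin{verbatim}
def select_chain(P, z):
    # P: dict mapping oriented segments (u, v) to positive integer
    # multiplicities; z: a point of negative multiplicity of the
    # boundary of P.
    rem = dict(P)                      # remaining multiplicities
    def bdry(x):                       # boundary multiplicity of rem at x
        return (sum(m for (u, v), m in rem.items() if v == x)
              - sum(m for (u, v), m in rem.items() if u == x))
    path, cur = [], z
    while bdry(cur) < 0:               # the remaining chain still "starts" at cur
        seg = next((u, v) for (u, v), m in rem.items() if u == cur)
        rem[seg] -= 1                  # remove one copy of the segment
        if rem[seg] == 0:
            del rem[seg]
        path.append(seg)
        cur = seg[1]                   # move to its second extreme
    # loop erasure, giving property (4): drop everything strictly
    # between two copies of a repeated segment, plus the second copy
    out, pos = [], {}
    for seg in path:
        if seg in pos:
            for s in out[pos[seg] + 1:]:
                del pos[s]
            out = out[:pos[seg] + 1]
        else:
            pos[seg] = len(out)
            out.append(seg)
    return out
\end{verbatim}
For instance, if $P$ consists of the two unit segments $z \to y$ and $y \to x$, the walk returns them in order and stops at $x$, which has positive boundary multiplicity.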
\begin{proof}[Proof of Theorem \ref{t:main}] By Proposition \ref{equiv_statements} it is sufficient to prove Statement $\mathcal{P}_1$. Consider $P\in \mathscr{P}_1(\mathbb{R}^n)$.
First we choose a representative $Q\in \mathscr{P}_1(\mathbb{R}^n)$ modulo $p$ of $P$ with multiplicities in $\{1,\ldots,p-1\}$. Clearly we have $\mathbf{M}(Q)\leq (p-1)\mathbf{M}^p(P)$, but at the moment we have no control over $\mathbf{M}(\partial Q)$. Hence, we want to replace $Q$ with another representative $\tilde P\in \mathscr{P}_1(\mathbb{R}^n)$ of $P$ for which we can control both the mass and the mass of the boundary. More precisely, we want to find a representative $\tilde P$ with multiplicities in $\{1,\ldots,p-1\}$ and with the multiplicities of $\partial \tilde P$ in $\{-(p-1),\ldots,p-1\}$.\\
Consider a point $z \in {\rm{spt}}(\partial Q)$ with multiplicity $\theta_{z}$ such that $|\theta_{z}|\geq p$. Without loss of generality, we can assume $\theta_{z} < 0$. Given that the multiplicities on $Q$ are all positive, we can use Lemma \ref{l:chain} to select a finite sequence of oriented segments $S_1,\ldots, S_N$ supported in the support of $Q$, satisfying properties $(1)-(4)$ (with $Q$ in place of $P$).
Once we have found such a sequence of segments, denote by $Q^1$ the polyhedral current obtained from $Q$ by changing on every segment $S_i$ both the orientation and the multiplicity from $\theta_i$ to $\theta_i^1:=(p-\theta_i)$.
Clearly $Q^1$ still has multiplicities in $\{1,\ldots,p-1\}$. Moreover, if $\theta_z^1$ denotes the multiplicity of $\partial Q^1$ at $z$, then one has $|\theta^1_z|=|\theta_z|-p$. On the other hand, if $x$ denotes the other endpoint of the chain of segments as in $(3)$ of Lemma \ref{l:chain} and $\theta_x$, $\theta_x^1$ are the multiplicities of $\partial Q$ and $\partial Q^1$ at $x$ respectively, then it holds $\theta_{x}^{1} = \theta_{x} - p$. Now, since by Lemma \ref{l:chain}(3) it holds $\theta_{x} \geq 1$, it follows that $\theta_{x}^{1} = (\theta_{x} - p) \in \left[ 1 - p, \theta_{x} \right]$. Hence, $|\theta_x^1|\leq|\theta_x|+p-2$.
Therefore, one has
\begin{equation} \label{cade_massa}
\mathbf{M}(\partial Q^1)\leq \mathbf{M}(\partial Q)-2.
\end{equation}
If possible, we repeat the procedure above with $Q^{1}$ in place of $Q$, producing a new polyhedral current $Q^{2}$. By formula \eqref{cade_massa}, the procedure can be iterated only a finite number $M$ of times. The corresponding $\tilde P:=Q^M$ has the required property, because any point $z \in \mathrm{spt}(\partial Q^M)$ has multiplicity $|\theta_{z}| \leq p-1$. Obviously we have
$$\mathbf{M}(\tilde P)\leq (p-1)\mathbf{M}^p(P)\quad{\rm{and}}\quad \mathbf{M}(\partial \tilde P)\leq (p-1)\mathbf{M}^p(\partial P),$$
and the proof is complete.
\end{proof}
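A one-step example with $p = 2$ may help to visualize the procedure above. Let $Q = \llbracket S_{1} \rrbracket + \llbracket S_{2} \rrbracket + \llbracket S_{3} \rrbracket$, where each $S_{i}$ is a segment oriented from a point $x_{i}$ to a common second extreme $z$, so that $\partial Q = 3\delta_{z} - \delta_{x_{1}} - \delta_{x_{2}} - \delta_{x_{3}}$ and $\theta_{z} = 3 \geq p$. The chain provided by Lemma \ref{l:chain} can be taken to be the single segment $S_{1}$, and $Q^{1}$ is obtained by replacing $\llbracket S_{1} \rrbracket$ with the oppositely oriented segment carrying multiplicity $p - 1 = 1$. Then $Q^{1} = Q \, {\rm mod}(2)$ and
\[
\partial Q^{1} = \delta_{z} + \delta_{x_{1}} - \delta_{x_{2}} - \delta_{x_{3}},
\]
so that $\mathbf{M}(\partial Q^{1}) = 4 = \mathbf{M}(\partial Q) - 2$, in accordance with \eqref{cade_massa}, and all boundary multiplicities now lie in $\{-1,1\}$.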
Since we have actually proved the Statement $\mathcal{P}_{1}$, it follows from Proposition \ref{equiv_statements} that the Statement $\mathcal{S}_{1}$ holds true. By virtue of Theorem \ref{link:thm}, we can therefore deduce the following
\begin{corollary} \label{dim0_cor}
The answer to Question \ref{pb1} is positive for $m = 0$.
\end{corollary}
\subsection{Negative answer to Question \ref{pb2} in general dimension.}
It is evident that the choice of the compact set $K$ could be crucial for establishing an answer to Question \ref{pb2}. In the spirit of the counterexample suggested by Federer in \cite[4.2.26, p.~426]{Fed69} (see Remark \ref{remarkone} above), we provide a negative answer to the question, proving the existence of a compact subset $K \subset \mathbb{R}^5$ and a current $\left[ T \right] \in \mathscr{I}_{2,K}^{2}(\mathbb{R}^5)$ with $\partial T = 0 \, {\rm mod}(2)$ such that there exists no $I \in \mathscr{I}_{2,K}(\mathbb{R}^5)$ with $I = T \, {\rm mod}(2)$. Nevertheless, for a different choice of a compact $K' \supset K$ we can exhibit an integral current $I' \in \mathscr{I}_{2,K'}(\mathbb{R}^5)$ with $\partial I' = 0$ and $I' = T \, {\rm mod}(2)$.
In what follows, we will let $\mathcal{K}$ be an embedded Klein bottle in $\mathbb{R}^{4}$ (in particular, $\mathcal{K}$ is a non-orientable compact two-dimensional surface without boundary in $\mathbb{R}^4$). There exist a closed curve $\gamma$ and an integral current $S := \llbracket \mathcal{K}, \tau, 1 \rrbracket \in \mathscr{I}_{2,\mathcal{K}}(\mathbb{R}^{4})$ such that the set of discontinuity points of $\tau$ coincides with $\gamma$. In particular, $\partial S$ is the integral current $\llbracket \gamma, \tau_{\gamma}, 2 \rrbracket$, $\tau_{\gamma}$ being the orientation of $\gamma$ naturally induced by $\tau$. We let $\left[ S \right] \in \mathscr{I}_{2,\mathcal{K}}^{2}(\mathbb{R}^4)$ be the associated current ${\rm mod}(2)$. In particular, $\partial \left[ S \right] = 0$ and $\mathbf{M}^{2}(\left[ S \right]) = \mathcal{H}^{2}(\mathcal{K})$. We have the following elementary
\begin{lemma} \label{lem_neg}
There exists a constant $c = c(\mathcal{K})$ with the following property. If $R \in \mathscr{I}_{2,\mathcal{K}}(\mathbb{R}^4)$ is such that $R \in \left[ S \right]$, one has
\[
\mathbf{M}(\partial R) \geq c.
\]
\end{lemma}
\begin{proof}
By contradiction, let $\{\alpha_{j}\}_{j=1}^{\infty}$ be a sequence of positive numbers with $\alpha_{j} \searrow 0$, and $\{R_{j}\}_{j=1}^{\infty} \subset \mathscr{I}_{2,\mathcal{K}}(\mathbb{R}^4)$ be such that
\[
R_{j} \in \left[ S \right] \hspace{0.5 cm} \forall j,
\]
and
\[
\mathbf{M}({\partial R_{j}}) \leq \alpha_{j}.
\]
We write $R_{j} = \llbracket \mathcal{K}, \tau, \theta_{j} \rrbracket$, and we observe that, since $R_{j} = S \, {\rm mod}(2)$, from \eqref{p_rect} and from the definition of $S$ it follows that
\begin{equation} \label{congruenza}
\theta_{j}(x) \equiv 1 \, ({\rm mod} \, 2) \quad \mbox{for } \mathcal{H}^{2}\mbox{-a.e.} \, x \in \mathcal{K}.
\end{equation}
We replace every $R_{j}$ with the integral current $\tilde{R}_{j} = \llbracket \mathcal{K}, \tau, \tilde{\theta}_{j} \rrbracket$, where ${\tilde{\theta}_{j} := {\rm sign}(\theta_{j})}$. Clearly, by \eqref{congruenza} and the definition of $\tilde{\theta}_{j}$, $\tilde{R}_{j} = R_{j} \, {\rm mod}(2)$, and thus $\tilde{R}_{j} \in \left[ S \right]$ for every $j$. Notice, furthermore, that $\mathbf{M}(\tilde{R}_{j}) = \mathcal{H}^{2}(\mathcal{K})$ for every $j$, and that
\begin{equation} \label{red_bdry}
\mathbf{M}(\partial \tilde{R}_{j}) \leq \mathbf{M}(\partial R_{j}) \leq \alpha_{j}.
\end{equation}
In order to show \eqref{red_bdry}, let $U$ be any open set in $\mathcal{K}$ homeomorphic to a two-dimensional disc. Let also $\sigma$ be a fixed continuous orientation on $U$. We have that $R_{j} \mres U = \llbracket U, \sigma, \Theta_{j} \rrbracket$, where $\Theta_{j}$ is the function defined by
\[
\Theta_{j}(x) :=
\begin{cases}
\theta_{j}(x) &\mbox{if } \tau(x) = \sigma(x) \\
- \theta_{j}(x) &\mbox{if } \tau(x) = - \sigma(x).
\end{cases}
\]
As a consequence of \cite[Remark 27.7]{Sim83}, it holds
\[
\mathbf{M}((\partial R_{j}) \mres U) = |D\Theta_{j}|(U),
\]
where $| D \Theta_{j} |$ is the variation of the ${\rm BV}$ function $\Theta_{j}$. Analogously, $\tilde{R}_{j} \mres U = \llbracket U, \sigma, \tilde{\Theta}_{j} \rrbracket$, where $\tilde{\Theta}_{j}$ is the function defined by
\[
\tilde{\Theta}_{j}(x) :=
\begin{cases}
\tilde{\theta}_{j}(x) &\mbox{if } \tau(x) = \sigma(x) \\
- \tilde{\theta}_{j}(x) &\mbox{if } \tau(x) = - \sigma(x).
\end{cases}
\]
Observe that $\tilde{\Theta}_{j} \equiv {\rm sign}(\Theta_{j})$, and hence
\[
\mathbf{M}((\partial \tilde R_{j}) \mres U) = |D\tilde\Theta_{j}|(U) \leq |D \Theta_{j}|(U) = \mathbf{M}((\partial R_{j}) \mres U),
\]
which completes the proof of \eqref{red_bdry}.
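As a side remark, the inequality $|D\tilde{\Theta}_{j}|(U) \leq |D\Theta_{j}|(U)$ can also be seen via the coarea formula for integer-valued ${\rm BV}$ functions: since $\Theta_{j}$ takes odd integer values $\mathcal{H}^{2}$-almost everywhere by \eqref{congruenza}, the sets $\{\Theta_{j} > -1\}$ and $\{\Theta_{j} > 0\}$ coincide, and hence
\[
|D\tilde{\Theta}_{j}|(U) = {\rm Per}(\{\Theta_{j} > -1\}; U) + {\rm Per}(\{\Theta_{j} > 0\}; U) \leq \sum_{k \in \mathbb{Z}} {\rm Per}(\{\Theta_{j} > k\}; U) = |D\Theta_{j}|(U),
\]
where ${\rm Per}(\cdot\,; U)$ denotes the relative perimeter.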
Now, by the Compactness Theorem \ref{compactness:thm} there exists a current $\tilde{R} \in \mathscr{I}_{2,\mathcal{K}}(\mathbb{R}^4)$ and a subsequence (not relabeled) such that
\[
\lim_{j \to \infty} \mathbf{F}_{\mathcal{K}}(\tilde{R} - \tilde{R}_{j}) = 0.
\]
Moreover, since $\partial \tilde{R}_{j} \to \partial \tilde{R}$ in the flat norm and $\mathbf{M}(\partial \tilde{R}_{j}) \to 0$, the lower semi-continuity of the mass yields $\partial \tilde{R} = 0$. Since the equivalence classes ${\rm mod}(2)$ are closed with respect to the flat convergence, $\tilde{R} \in \left[ S \right]$: this contradicts the fact that $\mathcal{K}$ is not orientable, since an integral cycle in $\left[ S \right]$ would induce an orientation of $\mathcal{K}$.
\end{proof}
\begin{remark} \label{rem_neg}
Observe that if $\mathcal{K}_{\lambda}$ is a homothetic copy of $\mathcal{K}$ with homothety ratio $\lambda$, then $c(\mathcal{K}_{\lambda}) = \lambda c(\mathcal{K})$.
\end{remark}
We finally define the compact set $K \subset \mathbb{R}^{5}$ and the current $\left[ T \right] \in \mathscr{I}_{2,K}^{2}(\mathbb{R}^5)$ as follows. For every $i = 1, 2, \dots$, we let $\Lambda_{i}$ be the homothety on $\mathbb{R}^{4}$ defined by $\Lambda_{i}(x) := \displaystyle \frac{x}{i}$, and $\pi_{i} \colon \mathbb{R}^{4} \to \mathbb{R}^5$ be the isometry $\pi_{i}(x) := \left( \displaystyle \frac{1}{i}, x \right)$. We set
\[
K := \{ 0 \} \cup \bigcup_{i=1}^{\infty} \pi_{i} \circ \Lambda_{i}(\mathcal{K}),
\]
which is evidently compact, and
\[
T := \sum_{i=1}^{\infty} (\pi_{i} \circ \Lambda_{i})_{\sharp}S .
\]
We let $\left[ T \right]$ denote the equivalence class of $T$ modulo $2$. Since $\mathbf{M}^{2}((\pi_{i} \circ \Lambda_{i})_{\sharp}S) = \displaystyle \frac{1}{i^{2}} \mathcal{H}^{2}(\mathcal{K})$, the series defining $T$ converges in mass, so $\left[ T \right]$ is well defined, and in particular $\partial \left[ T \right] = 0$. In the following proposition, we show that the choice of $K$ and $\left[ T \right]$ provides a negative answer to Question \ref{pb2}.
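Explicitly, the partial sums defining $T$ converge absolutely in mass, since
\[
\sum_{i=1}^{\infty} \mathbf{M}\left( (\pi_{i} \circ \Lambda_{i})_{\sharp} S \right) = \mathcal{H}^{2}(\mathcal{K}) \sum_{i=1}^{\infty} \frac{1}{i^{2}} = \frac{\pi^{2}}{6}\, \mathcal{H}^{2}(\mathcal{K}) < \infty,
\]
whereas, by Lemma \ref{lem_neg} and Remark \ref{rem_neg}, the boundary masses of integral representatives of the single summands are only bounded below by the terms $c(\mathcal{K})/i$ of the harmonic series.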
\begin{proposition}
In general, the answer to Question \ref{pb2} is negative.
\end{proposition}
\begin{proof}
Let $K$ and $\left[ T \right]$ be as above, and assume by contradiction that there exists $I \in \mathscr{I}_{2,K}(\mathbb{R}^5)$ with $I \in \left[ T \right]$. Then, the restriction of $I$ to each plane $x_{1} = \displaystyle \frac{1}{i}$ belongs to the class $\left[ (\pi_{i} \circ \Lambda_{i})_{\sharp} S \right]$, and thus contributes boundary mass at least $c(\mathcal{K})/i$ by Lemma \ref{lem_neg} and Remark \ref{rem_neg}. Hence
\[
\mathbf{M}(\partial I) \geq c(\mathcal{K}) \sum_{i=1}^{\infty} \frac{1}{i} = \infty,
\]
which gives the desired contradiction.
\end{proof}
\begin{remark}
Observe that if we replace $\mathcal{K}$ with $\mathcal{K}' := \mathcal{K} \cup D$, where $D$ is a suitable two-dimensional disc, then Lemma \ref{lem_neg} fails, as there exists $R \in \mathscr{I}_{2,\mathcal{K}'}(\mathbb{R}^4)$ such that $R \in \left[ S \right]$ and $\partial R = 0$. Hence, it is possible to construct an integral representative of $\left[ T \right]$ with support in
\[
K' := \{0\} \cup \bigcup_{i=1}^{\infty} \pi_{i} \circ \Lambda_{i}(\mathcal{K}').
\]
\end{remark}
\subsection{Concluding remarks.}
Ambrosio and Wenger proved in \cite[Theorem 4.1]{AW11} a statement similar to our Theorem \ref{t:main}, under the hypothesis that $\partial \left[ T \right] = 0$. They were motivated by the goal of proving the analogue of Theorem \ref{p_mass_chain:thm} above when the ambient space is a compact convex subset of a Banach space with mild additional assumptions. Even though our theorem also covers the case with boundary, our proof is considerably simpler than theirs, essentially because we can rely on the polyhedral approximation theorem, which is not available in their context. Actually, our result would follow directly from theirs if one could independently guarantee the validity of the following proposition. However, we were not able to devise a proof independent of Theorem \ref{t:main}.
\begin{proposition} \label{0sum_prop}
Let $\left[ T \right] \in \mathscr{I}_{1,K}^{p}(\mathbb{R}^n)$. Then, for any $R = \sum_{i=1}^{q} \theta_{i} \delta_{x_{i}} \in \mathscr{R}_{0,K}(\mathbb{R}^n)$ such that $R = \partial T \, {\rm mod}(p)$ one has:
\begin{equation} \label{0sum}
\sum_{i=1}^{q} \theta_{i} \equiv 0 \, ({\rm mod}\, p).
\end{equation}
\end{proposition}
Assume the validity of the Proposition. An alternative proof of our Theorem \ref{t:main} can be obtained as follows. Let $\left[ T \right] \in \mathscr{I}_{1,K}^{p}(\mathbb{R}^n)$ and let $R = \sum_{i=1}^{q} \theta_{i} \delta_{x_{i}} \in \mathscr{R}_{0,K}(\mathbb{R}^n)$ be such that $R = \partial T \, {\rm mod}(p)$. Fix $x_{0} \notin \{x_{1}, \dots, x_{q}\}$ and consider the cone $C$ with vertex $x_{0}$ over $R$, i.e. the integral $1$-current
\begin{equation}
C := \sum_{i=1}^{q} \llbracket S_{i}, \tau_{i}, \theta_{i} \rrbracket,
\end{equation}
where $S_{i}$ is the segment joining $x_{i}$ to $x_{0}$ and $\tau_{i}:= \displaystyle \frac{x_{0} - x_{i}}{|x_{0} - x_{i}|}$. By \eqref{0sum}, the multiplicity of $\partial C$ at $x_{0}$ is an integer multiple of $p$, and thus via a simple computation $\partial (T + C) = 0 \, {\rm mod}(p)$. Applying the result of Ambrosio and Wenger, we finally obtain that there exists an integral current $J \in \mathscr{I}_{1,K}(\mathbb{R}^n)$ such that $J = T + C \, {\rm mod}(p)$. Hence, $I := J - C$ is an integral current with $I = T \, {\rm mod}(p)$.
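The computation alluded to is the following: since $\partial \llbracket S_{i}, \tau_{i}, \theta_{i} \rrbracket = \theta_{i} \left( \delta_{x_{0}} - \delta_{x_{i}} \right)$, one has
\[
\partial C = \Big( \sum_{i=1}^{q} \theta_{i} \Big)\, \delta_{x_{0}} - R,
\]
and hence $\partial(T + C) = \partial T - R + \big( \sum_{i=1}^{q} \theta_{i} \big)\, \delta_{x_{0}} = 0 \, {\rm mod}(p)$, because $\partial T = R \, {\rm mod}(p)$ and, by \eqref{0sum}, the coefficient of $\delta_{x_{0}}$ is an integer multiple of $p$.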
\\
Although the analogue of Proposition \ref{0sum_prop} for classical currents is a well known fact (i.e. the sum of the multiplicities in the boundary of an integral $1$-current is zero), the validity of Proposition \ref{0sum_prop} does not follow trivially. Nevertheless, it can in fact be deduced as a consequence of our Corollary \ref{dim0_cor}.
\begin{proof}[Proof of Proposition \ref{0sum_prop}]
Let $T$ and $R$ be as in the statement. Then, since $\mathbf{F}_{K}^{p}(\partial T - R) = 0$, Corollary \ref{dim0_cor} implies the existence of currents $Q \in \mathscr{R}_{0,K}(\mathbb{R}^n)$ and $S \in \mathscr{R}_{1,K}(\mathbb{R}^n)$ such that
\[
\partial T - R = p(Q + \partial S),
\]
that is
\[
\partial(T - pS) = R + pQ.
\]
In particular, $T - pS$ is a classical integral current, and thus the sum of the multiplicities in $R$ must equal that of $-pQ$, which is an integer multiple of $p$. This concludes the proof.
\end{proof}
\nocite{*}
\bibliographystyle{plain}
We start out with a few definitions from group theory.
Let $\pi$ be a group. We say that \emph{$\pi$ splits over the subgroup $B$}
if $\pi$ admits an HNN decomposition with base group $A$ and amalgamating subgroup $B$. More precisely,
$\pi$ splits over the subgroup $B$ if there exists an isomorphism
\[ \pi\xrightarrow{\cong} \ll A,t\,|\, \varphi(b)=tbt^{-1} \mbox{ for all }\, b\in B\rr,\]
where $B \subset A$ are subgroups of $\pi$ and $\varphi\colon B \to A$ is a monomorphism.
In this notation, relations of $A$ are implicit. We will write such a presentation more compactly as $ \ll A,t\,|\, \varphi(B)=tBt^{-1}\rangle.$
In this paper we are interested in splittings of knot groups.
Given a knot $K\subset S^3$ we denote the knot group $\pi_1(S^3\setminus K)$ by $\pi(K)$.
We denote by $g(K)$ the genus of the knot, the minimal genus of a Seifert surface $\Sigma$ for $K$.
It follows from the Loop Theorem and the Seifert-van Kampen theorem that we can split the knot group $\pi(K)$
over the free group $\pi_1(\Sigma)$ of rank $2g(K)$. The \emph{rank} $\rk(G)$ of a group $G$ is the minimal size of a set of generators for $G$.
It is well known
that if $K$ is a fibered knot, that is, the knot complement $S^3\setminus K$ fibers over $S^1$, then the group $\pi(K)$
splits only over free groups of rank $2g(K)$. (See, for example, Lemma \ref{lem:fibsplit}.)
We show that this property characterizes fibered knots. In fact, we can say much more.
\begin{theorem}\label{thm:fibsplitintro}
Let $K$ be a non-fibered knot. Then $\pi(K)$ splits over non-free groups of arbitrarily large rank.
\end{theorem}
Neuwirth \cite[Problem~L]{Ne65} asked whether there exists a knot $K$ such that $\pi(K)$ splits over a \emph{free} group
of rank other than $2g(K)$. By the above, such a knot would necessarily have to be non-fibered.
Lyon \cite[Theorem~2]{Ly71} showed that there does in fact exist a non-fibered genus-one knot $K$ with incompressible Seifert surfaces of arbitrarily large genus. This implies in particular that there exists a knot $K$ for which $\pi(K)$
splits over free groups of arbitrarily large rank. We give a strong generalization of this result.
\begin{theorem}\label{freesplitting}
Let $K$ be a non-fibered knot. Then for any integer $k\geq 2g(K)$ there exists a splitting of $\pi(K)$ over a free group of rank $k$.
\end{theorem}
Note that an incompressible Seifert surface gives rise to a splitting over a free group of \emph{even rank}. The splittings over free groups of \emph{odd rank} in the theorem are therefore not induced by incompressible Seifert surfaces.
Feustel and Gregorac \cite{FG73} showed that if $N$ is an aspherical, orientable $3$-manifold
such that $\pi=\pi_1(N)$ splits over the fundamental group of a \emph{closed} surface $\Sigma\ne S^2$, then
this splitting can be realized topologically by a properly embedded surface.
(More splitting results can be found in \cite[Proposition~2.3.1]{CS83}.)
The fact that fundamental groups of non-fibered knots can be split over free groups of odd rank shows that the result
of Feustel and Gregorac does not hold for splittings over fundamental groups of surfaces with boundary.
Theorems \ref{thm:fibsplitintro} and \ref{freesplitting} can be viewed as strengthenings of Stallings's fibering criterion.
We refer to Section \ref{section:stallings} for a precise statement.
Our third main theorem shows that Theorem \ref{freesplitting} is optimal.
\begin{theorem}\label{mainthm}\label{mainthm3}
If $K$ is a knot, then $\pi(K)$ does not split over a group of rank less than $2g(K)$.
\end{theorem}
The case $g(K)=1$ follows from the Kneser Conjecture and work of Waldhausen \cite{Wal68b}, as we show in Section \ref{section:genusone}. However, to the best of our knowledge, the classical methods
of 3-manifold topology do not suffice to prove Theorem \ref{mainthm} in the general case.
We use the recent result \cite{FV12a} that Wada's invariant detects the genus of any knot.
This result in turn relies on the seminal work of
Agol \cite{Ag08,Ag12}, Wise \cite{Wi09,Wi12a,Wi12b}, Przytycki--Wise \cite{PW11,PW12a} and Liu \cite{Liu11}.
Theorem \ref{mainthm} is of interest for several reasons:
\bn
\item It gives a completely group-theoretic characterization of the genus of a knot, namely
\[ g(K)=\frac{1}{2}\mbox{min}\{\rk(B)\,|\, \pi(K)\mbox{ splits over the group }B\}.\]
A different group-theoretic characterization was given by Calegari (see the proof of Proposition 4.4 in \cite{Ca09}) in terms of the `stable commutator length' of the longitude.
\item Theorem \ref{mainthm} fits into a long sequence of results showing that minimal-genus Seifert surfaces `stay minimal' even if one relaxes some conditions. For example, Gabai \cite{Ga83} showed that the genus of an \emph{immersed} surface cobounding a longitude of $K$ is at least $g(K)$.
Furthermore, minimal-genus Seifert surfaces give rise to surfaces of minimal complexity in the 0-framed surgery $N_K$
(see \cite{Ga87}) and in most $S^1$-bundles over $N_K$ (see \cite{Kr99,FV12b}).
\item Given a closed $3$-manifold $N$ it is obvious that $\rk(\pi_1(N))$ is a lower bound for the Heegaard genus $g(N)$ of $N$.
In light of Theorem \ref{mainthm} one might hope that this is in fact an equality; that is, that
$\rk(\pi_1(N))=g(N)$. This is not the case, though, as was shown by various authors (see \cite{BZ84,ScW07} and \cite{Li13}).
\en
The paper is organized as follows.
In Section \ref{section:hnnsplittings} we discuss several basic facts about HNN decompositions of groups.
In Section \ref{section:splitk} we recall that incompressible Seifert surfaces give rise to HNN decompositions of knot groups
and we characterize in Lemma \ref{lem:fibsplit} the splittings of fundamental groups of fibered knots.
In Section \ref{section:52} we consider the genus-one non-fibered knot $K=5_2$. We give explicit examples of splittings of the knot group
over a non-free group and over the free group $F_3$ of rank 3, and inequivalent splittings of the knot group over $F_2$.
Section \ref{section:splitnonfree} contains the proof of Theorem \ref{thm:fibsplitintro}, and in Section
\ref{section:splitfree} we give the proof of Theorem \ref{freesplitting}. In Section \ref{section:stallings} we show that these two theorems strengthen Stallings's fibering criterion.
In Section \ref{section:genusone} we give a proof of Theorem \ref{mainthm} for genus-one knots.
The proof relies mostly on the Kneser Conjecture and a theorem of Waldhausen. In Section \ref{section:wada} we review the definition of Wada's invariant of a group. Finally, in Section \ref{section:proof} we prove Theorem \ref{thm:technical}, which combined with the main result of \cite{FV12a} provides a proof of Theorem \ref{mainthm} for all genera.
We conclude this introduction with two questions. The precise notions are explained in Section \ref{section:hnnsplittings}.
\bn
\item Let $\pi$ be a word hyperbolic group and let $\epsilon\colon \pi\to \Z$ be an epimorphism such that $\mbox{Ker}(\epsilon)$ is not finitely generated.
Does $(\pi,\epsilon)$ admit splittings over (infinitely many) pairwise non-isomorphic groups? (The group $\pi=\pi(K)$ satisfies these conditions if $K$ is a non-fibered knot.)
\item Let $K$ be a non-fibered knot of genus $g$. Does $\pi(K)$ admit (infinitely many) inequivalent splittings over the free group $F_{2g}$ on $2g$ generators?
\en
\subsection*{Conventions and notations.}
All groups are assumed to be finitely presented unless we say specifically otherwise.
All $3$-manifolds are assumed to be connected, compact and orientable.
Given a submanifold $X$ of a $3$-manifold $N$, we denote by $\nu X\subset N$ an open tubular neighborhood of $X$ in $N$.
Given $k\in \Bbb{N}$ we denote by $F_k$ the free group on $k$ generators.
\subsection*{Acknowledgments.}
The first author wishes to thank the University of Sydney for its hospitality.
We are also very grateful to Eduardo Martinez-Pedroza, Saul Schleimer and Henry Wilton for very helpful conversations.
\section{HNN-decompositions and splittings of groups}\label{section:hnnsplittings}
\subsection{Splittings of groups}
An \emph{HNN decomposition} of a group $\pi$ is a 4-tuple
$(A, B, t, \varphi)$ consisting of subgroups $B \le A$ of $\pi$, a \emph{stable letter} $t \in \pi$, and an injective
homomorphism $\varphi\colon B \to A$, such that the natural inclusion maps induce an isomorphism from $\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr$ to $\pi$.
Alternatively, an HNN-decomposition of $\pi$ can be viewed as an isomorphism
\[ f\colon \pi\xrightarrow{\cong} \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
where $\varphi\colon B\to A$ is an injective map. We will frequently go back and forth between these two points of view.
We need a few more definitions:
\bn
\item
Given an HNN-decomposition $(A, B, t, \varphi)$
we refer to the homomorphism $\epsilon\colon \pi\to \Z$ that is given by $\epsilon(t)=1$ and $\epsilon(a)=0$ for $a\in A$ as the \emph{canonical epimorphism}.
\item
Let $\pi$ be a group and let $\epsilon\in \hom(\pi,\Z)$ be an epimorphism. A \emph{splitting of $(\pi, \epsilon)$ over a subgroup $B$
(with base group $A$)} is an HNN decomposition $(A, B, t, \varphi)$ of $\pi$ such that $\epsilon$ equals the canonical epimorphism.
With the alternative point of view explained above, a splitting of $(\pi,\epsilon)$ is an isomorphism
\[ f\colon \pi\xrightarrow{\cong} \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
such that the following diagram commutes:
\[ \xymatrix{ \pi\ar[dr]_\epsilon \ar[rr]^-{f}&& \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\ar[dl]^\psi\\
&\Z&}\]
where $\psi$ denotes the canonical epimorphism.
\item Two splittings $(A, B, t, \varphi)$ and $(A', B', t', \varphi')$ of $(\pi,\epsilon)$ are called \emph{weakly equivalent} if there exists an automorphism $\Phi$ of $\pi$ with $\Phi(B)=B'$. If $\Phi$ can be chosen to be an inner automorphism of $\pi$, then the two HNN decompositions are said to be \emph{strongly equivalent}.
\en
We conclude this section with the following well-known lemma of \cite{BS78}. It appears as Theorem B* in \cite{Str84} where an elementary proof can be found.
\begin{lemma}\label{lem:splitexists}
Let $\pi$ be a finitely presented group and let $\epsilon\in \hom(\pi,\Z)$ be an epimorphism. Then there exists a splitting
\[ f:\pi\xrightarrow{\cong} \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
of $(\pi,\epsilon)$ where $A$ and $B$ are finitely generated.
\end{lemma}
\subsection{Splittings of pairs $(\pi,\epsilon)$ with finitely generated kernel}\label{section:fg}
The following lemma characterizes splittings of pairs $(\pi,\epsilon)$
for which $\mbox{Ker}(\epsilon)$ is finitely generated.
\begin{lemma}\label{lem:splitkerfg}
Let $\pi$ be a finitely presented group, $\epsilon\colon \pi\to \Z$ an epimorphism, and $t$ an element of $\pi$ with $\epsilon(t)=1$.
If $\mbox{Ker}(\epsilon)$ is finitely generated, then there exists a canonical isomorphism
\[ \pi=\ll B,t\,|\,\varphi(B)=tBt^{-1}\rr\]
where $B:=\mbox{Ker}(\epsilon)$ and where $\varphi\colon B\to B$ is given by conjugation by $t$.
Furthermore, any other splitting of $(\pi,\epsilon)$
is strongly equivalent to this splitting.
\end{lemma}
\begin{proof}
Let $\pi$ be a finitely presented group and let $\epsilon\colon \pi\to \Z$ be an epimorphism
such that $B=\mbox{Ker}(\epsilon)$ is finitely generated. We have an exact sequence
\[ 1\to B\to \pi\xrightarrow{\epsilon}\Z \to 0.\]
Let $t\in \pi$ with $\epsilon(t)=1$. The map $n\mapsto t^n$ defines a right-inverse of $\epsilon$, and we see that $\pi$ is canonically isomorphic to
the semi-direct product $\ll t\rr\ltimes B$ where $t^n$ acts on $B$ by conjugation by $t^n$. That is, we have a canonical isomorphism
\[ \pi=\ll B,t\,|\,\varphi(B)=tBt^{-1}\rr.\]
We now suppose that we have another splitting
$\pi=\ll C,s\,|\, \psi(D)=sDs^{-1}\rr$
of $(\pi,\epsilon)$.
By our hypothesis the group $B=\mbox{Ker}(\epsilon)$ is finitely generated.
On the other hand, it follows from standard results in the theory of graphs of groups (see \cite{Se80}) that
\[ \mbox{Ker}(\epsilon)\cong \cdots C_k*_{D_k} C_{k+1}*_{D_{k+1}} C_{k+2} \cdots,\]
where $C_i=C$ and $D_i=D$ for all $i\in \Z$ and each map $D_i\to C_{i+1}$ is given by $\psi$.
As in \cite{Ne65}, the fact that the infinite free product with amalgamation is finitely generated implies that $C_i=D_i=\psi(D_{i-1})$ for all $i\in \Z$. This, in turn, implies that the inclusion of each $C_i$ and $D_i$ into the amalgam is an isomorphism; that is, $C=D=\mbox{Ker}(\epsilon)$. It is now clear that the identity on $\pi$ already has the desired property relating the two splittings of $(\pi,\epsilon)$.
\end{proof}
\subsection{Induced splittings of groups}\label{section:anm}
Let
\[ \pi=\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
be an HNN-extension. Given $n\leq m\in \Bbb{N}$ we denote by $A_{[n,m]}$ the result of amalgamating
the groups $t^{i}At^{-i}$, $i=n,\dots,m$ along the subgroups $t^{i}\varphi(B)t^{-i}=t^{i+1}Bt^{-i-1}$, $i=n,\dots,m-1$.
In our notation,
\[ A_{[n,m]}= \langle \ast_{i=n}^{m} t^i A t^{-i}\mid t^j\varphi(B)t^{-j} = t^{j+1}Bt^{-j-1}\ (j=n, \ldots, m-1) \rr.\]
Given any $k\leq m\leq n\leq l$, we have a canonical map $A_{[m,n]}\to A_{[k,l]}$
which is a monomorphism (see, for example, \cite{Se80} for details). If $\epsilon\colon \pi\to \Z$ is the canonical
epimorphism, then it is well known that
$\mbox{Ker}(\epsilon)$ is given by the direct limit of the groups $A_{[-m,m]}$, $m\in \Bbb{N}$; that is,
\[ \mbox{Ker}(\epsilon)=\lim_{m\to \infty}A_{[-m,m]}.\]
The following well-known lemma shows that a splitting of a pair $(\pi,\epsilon)$ gives rise to a sequence of splittings.
\begin{lemma}\label{lem:moresplittings}
Let
\[ \pi=\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
be an HNN-extension.
For any integer $n\ge 0$, let
\[ \varphi_n: A_{[0,n]}\to A_{[1,n+1]} \]
be the map that is given by conjugation by $t$.
Then the obvious inclusion maps induce an isomorphism
\[\ll A_{[0,n+1]},t\,|\, \varphi_n(A_{[0,n]})=tA_{[0,n]}t^{-1}\rr\xrightarrow{\iota} \pi=\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr.\]
\end{lemma}
\begin{proof}
We write
\[ \Gamma=\ll A_{[0,n+1]},t\,|\, \varphi_n(A_{[0,n]})=tA_{[0,n]}t^{-1}\rr.\]
We denote by $\pi'$ (respectively $\Gamma'$) the kernel of the canonical map from $\pi$ (respectively $\Gamma$) to $\Z$.
It is clear that it suffices to show that $\iota$ restricts to an isomorphism $\Gamma'\to \pi'$.
For $i\in \Z$, we write $A_i:=t^iAt^{-i}$ and $B_i:=t^i\varphi(B)t^{-i}=t^{i+1}Bt^{-i-1}$.
Note that $\Gamma'$ is canonically isomorphic to
\[\cdots \left(A_0*_{B_0}\cdots*_{B_{n}}A_{n+1}\right)*_{A_1*_{B_1}\cdots*_{B_{n}}A_{n+1}} \left(A_1*_{B_1}\cdots*_{B_{n+1}}A_{n+2}\right)
*_{A_2*_{B_2}\cdots*_{B_{n+1}}A_{n+2}}
\cdots ,\]
and $\pi'$ is canonically isomorphic to
\[ \cdots A_{-1}*_{B_{-1}}A_{0}*_{B_{0}}A_1*_{B_{1}}A_2 \cdots \]
It is now straightforward to see that $\iota$ does indeed restrict to an isomorphism $\Gamma'\to \pi'$.
\end{proof}
Note that the isomorphism in Lemma \ref{lem:moresplittings} is canonical.
Throughout the paper we will therefore make the identification
\[\pi=\ll A_{[0,n+1]},t\,|\, \varphi_n(A_{[0,n]})=tA_{[0,n]}t^{-1}\rr.\]
In the paper we will also write $A=A_{[0,0]}$.
\section{Splittings of knot groups and incompressible surfaces}\label{section:splitk}
Now let $K\subset S^3$ be a knot, that is, an oriented embedded simple closed curve in $S^3$.
We write $X(K):=S^3\setminus \nu K$ and
\[ \pi(K):=\pi_1(X(K))=\pi_1(S^3\setminus \nu K).\]
The orientation of $K$ gives rise to a canonical epimorphism $\epsilon_K\colon \pi(K)\to \Z$ sending the oriented meridian to 1.
Let $\Sigma$ be a Seifert surface of genus $g$ for $K$; that is, a connected, orientable, properly embedded surface $\Sigma$ of genus $g$
in $X(K)$ such that $\partial \Sigma$ is an oriented longitude for $K$.
Note that $\Sigma$ is dual to the canonical epimorphism $\epsilon$.
Suppose that $\Sigma$ is incompressible.
(Recall that a surface $\Sigma$ in a $3$-manifold $N$ is called \emph{incompressible} if the inclusion-induced map $\pi_1(\Sigma)\to \pi_1(N)$ is injective.)
We pick a tubular neighborhood $\S\times [-1,1]$ of $\S$. The manifold
$X(K)\setminus \S\times (-1,1)$ is the result of \emph{cutting
along $\S$}. The Seifert--van Kampen theorem gives us
a splitting
\[ \pi_1(X(K))= \ll \pi_1(X(K)\setminus \S\times (-1,1)),t\,|\, \varphi(\pi_1(\S\times -1))=t\pi_1(\S\times 1)t^{-1}\rr\]
of $(\pi(K),\epsilon_K)$, where $\varphi$ is induced by the canonical homeomorphism $\S\times -1\to \S\times 1$.
We thus see that $\pi(K)$ splits over the free group $\pi_1(\Sigma)$ of rank $2g$.
Given a knot $K\subset S^3$, we denote by $g=g(K)$ the minimal genus of a Seifert surface. It follows from the Loop Theorem
(see, for example, \cite[Chapter~4]{He76}) that
a Seifert surface of minimal genus is incompressible. Hence $\pi(K)$ splits over a free group of rank $2g(K)$.
\medskip
If two incompressible Seifert surfaces of a knot $K$ are isotopic, then it is clear that the corresponding splittings of $\pi(K)$ are strongly equivalent.
There are many examples of knots that admit non-isotopic minimal genus Seifert surfaces; see e.g. \cite{Ly74b,Ei77a,Ei77b,Al12,HJS13}.
We expect that these surfaces give rise to splittings that are not strongly equivalent.
\medskip
On the other hand, if a knot is fibered, then it admits a unique minimal genus Seifert surface up to isotopy (see e.g. \cite[Lemma~5.1]{EL83}). It is therefore perhaps not entirely surprising that $\pi(K)$ admits a unique splitting up to strong equivalence.
More precisely, we have the following well-known
lemma, which is originally due to Neuwirth \cite{Ne65}.
\begin{lemma} \label{lem:fibsplit}
Let $K$ be a fibered knot of genus $g$ with fiber $\Sigma$. Then any splitting of $\pi(K)$ is strongly equivalent to
\[ \ll \pi_1(X(K)\setminus \S\times (-1,1)),t\,|\, \varphi(\pi_1(\S\times -1))=t\pi_1(\S\times 1) t^{-1}\rr.\]
In particular $\pi(K)$ only splits over the free group of rank $2g$.
\end{lemma}
\begin{proof}
If $\S$ is a fiber surface for $X(K)$, then the infinite cyclic cover of $X(K)$ is diffeomorphic to $\S\times \R$. Put differently, $\mbox{Ker}(\epsilon_K)\cong \pi_1(\S)$,
which implies in particular that $\mbox{Ker}(\epsilon_K)$ is finitely generated.
The lemma is now a straightforward consequence of Lemma \ref{lem:splitkerfg}.
\end{proof}
\section{Splitting of the knot group for $K=5_2$}\label{section:52}
In this section we give several explicit splittings of the knot group $\pi(K)$ where $K=5_2$, the first non-fibered knot in the Alexander-Briggs table.
We construct:
\bn
\item three splittings of $\pi(5_2)$ over the free group $F_2$, no two being weakly equivalent;
\item a splitting of $\pi(5_2)$ over the free group $F_3$ on three generators;
\item a splitting of $\pi(5_2)$ over a non-free group.
\en
Note that neither the second nor the third splitting is induced by an incompressible surface. We will also see that at least two of the three splittings over $F_2$ are not induced by an incompressible surface.
Since $K$ is a knot of genus one, a minimal-genus Seifert surface gives rise to a splitting of $\pi(K)$ over a free group of rank 2. In the following we will consider an explicit splitting that comes from a Wirtinger presentation of the knot group:
\[ \pi(K) =\ll a, b,t\,|\, tat^{-1}=b,\, tb^{-1}ab^{-1}t^{-1}=(b^{-1}a)^2\rr.\]
Here the knot group has an HNN decomposition $(A, B, t, \varphi)$,
where $A$ is the free group on $a, b$ while
$B$ is the subgroup freely generated by $a$ and $b^{-1}ab^{-1}$. The isomorphism $\varphi$ sends $a \mapsto b$ and $b^{-1}ab^{-1}\mapsto (b^{-1}a)^2$.
For the remainder of this section we identify $\pi(K)$ with $\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr$.
\begin{proposition}\label{prop:52splittings}
Consider the splittings:
\[ \begin{array}{rcl} \pi(K)& =& \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr, \\
\pi(K)&=&\ll A_{[0,1]},t\,|\, \varphi_1(A)=tAt^{-1}\rr,\\
\pi(K)&=&\ll A_{[0,2]},t\,|\, \varphi_2(A_{[0,1]})=tA_{[0,1]}t^{-1}\rr\end{array} \]
where the latter two splittings are provided by Lemma \ref{lem:moresplittings}.
Then the following hold.
\begin{enumerate}[(i)]
\item Each is a splitting over a free group of rank two.
\item No two of the splittings of $(\pi(K),\epsilon_K)$ are weakly equivalent.
\item At least two of the splittings are not induced by an incompressible Seifert surface.
\end{enumerate}
\end{proposition}
In the proof of Proposition \ref{prop:52splittings} we will make use of the following lemma which is perhaps also of independent interest.
\begin{lemma}\label{lem:propersubgroup}
Let $M$ be a hyperbolic $3$-manifold with empty or toroidal boundary, and let $G$ be a subgroup of $\pi:=\pi_1(M)$.
If $f\colon \pi\to \pi$ is an automorphism with $f(G)\subset G$, then $f(G)=G$.
\end{lemma}
We do not know whether the conclusion of the lemma holds for any $3$-manifold.
\begin{proof}
Let $f\colon \pi\to \pi$ be an automorphism with $f(G)\subset G$.
Since $M$ is hyperbolic, it is a consequence of the Mostow Rigidity Theorem that the group of outer automorphisms of $\pi$ is finite.
(See, for example, \cite[Theorem~C.5.6]{BP92} and \cite[p.~213]{Jo79} for details.)
Consequently, there exists a positive integer $n$ and an element $x\in \pi$ such that $f^n(G)=xGx^{-1}$.
It follows from \cite[Theorem~4.1]{Bu07} that $f^n(G)=G$. The assumption that $f(G)\subset G$ implies inductively that $f^n(G)\subset f(G)$. Hence $f(G)=G$.
\end{proof}
We can now turn to the proof of Proposition \ref{prop:52splittings}.
\begin{proof}[Proof of Proposition \ref{prop:52splittings}]
It is clear that the first and the second splitting are over a free group of rank two.
It remains to show that $A_{[0,1]}$ is a free group of rank two.
First note that
\[A_{[0,1]}\cong \ll a_0, b_0, a_1, b_1\,|\, a_1=b_0,\, b_1^{-1} a_1b_1^{-1}=(b_0^{-1}a_0)^2\rr,\]
where $a_i$ and $b_i$ denote $t^iat^{-i}$ and $t^ibt^{-i}$, respectively. Using the first relation to eliminate the generator $b_0$, we obtain
$A_{[0,1]}\cong \ll a_0, a_1, b_1\,|\, r \rr,$ where $r= (a_1^{-1}a_0)^2 b_1a_1^{-1} b_1.$
We let $c = a_1^{-1}a_0$ and $d = b_1a_1^{-1}$. Clearly $\{c, d, r\}$ is a basis for the free group on $a_0, a_1, b_1$. Hence $A_{[0,1]}\cong \ll c, d, r\, |\, r \rr \cong \ll c, d\, |\, \rr$ is indeed a free group of rank 2.
This concludes the proof of (i).
We turn to the proof of (ii).
Since $K$ is not fibered it follows
from Stallings's theorem (see Theorem \ref{thm:st62})
that $\mbox{Ker}(\epsilon_K)= \lim_{k\to \infty} A_{[-k,k]}$ is not finitely generated.
It follows easily that for any $l> k$ the map $A_{[0,k]}\to A_{[0,l]}$ is a proper inclusion.
In particular, we have proper inclusions
$A \varsubsetneq A_{[0,1]} \varsubsetneq A_{[0,2]}$. Since $S^3\setminus \nu K$ is hyperbolic, the desired statement now follows from
Lemma \ref{lem:propersubgroup}.
We prove (iii). It is well known (see, for example, \cite{Ka05}) that any two minimal-genus Seifert surfaces of $5_2$ are isotopic.
This implies, in particular, that any two splittings of $\pi(K)$ induced by minimal-genus Seifert surfaces are strongly equivalent.
It follows from (ii) that at least two of the three splittings are not induced by a minimal genus Seifert surface.
\end{proof}
\begin{figure}[h]
\begin{center}
\input{covering-graph.pstex_t}
\caption{Covering graph.}\label{Graph}
\end{center}
\end{figure}
We show that $\pi(K)$ admits a splitting over a free group of rank 3.
In order to do so we note that there exists a canonical isomorphism
\begin{equation} \label{equ1} \begin{array}{rcl} &&\ll a, b,t\,|\, tat^{-1}=b,\, tb^{-1}ab^{-1}t^{-1}=(b^{-1}a)^2\rr\\
&\cong&\ll a, b,c,t\,|\, tat^{-1}=b,\, tb^{-1}ab^{-1}t^{-1}=(b^{-1}a)^2, tb^{-2}ab^{-2}t^{-1}=c\rr.\end{array} \end{equation}
Let $A'$ be the free group generated by $a,b,c$. Let $B'$ be the subgroup of $A'$ generated by $a,b^{-1}ab^{-1},b^{-2}ab^{-2}$.
The fundamental group of the covering graph in Figure \ref{Graph} is free on $a, b^{-1}ab^{-1}, b^{-1}a^2b, b^{-2}ab^{-2}$, and $b^4$, and so $B'$ is a free rank-3 subgroup of $A'$. The elements $b, b^{-1}ab^{-1}a, c$ of $A'$ also generate a free group of rank 3, since they are free in the abelianization of $A'$.
There exists therefore a unique homomorphism $\varphi':B'\to A'$ such that
$\varphi'(a)=b$, $\varphi'(b^{-1}ab^{-1})=b^{-1}ab^{-1}a$ and $\varphi'(b^{-2}ab^{-2})=c$.
It follows that $\varphi'$ is in fact a monomorphism.
Hence from (\ref{equ1}),
\[ \ll A',t\,|\, tB't^{-1}= \varphi'(B')\rr\]
defines a splitting of $\pi(K)$ over the free group $B'$ of rank three.
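Both abelianization checks used above are mechanical. The following Python sketch is illustrative only (the encoding of words as exponent vectors in $\Z^3$ is ours): it certifies that the images $b$, $b^{-1}ab^{-1}a$, $c$ generate a free group of rank three, while for the generators of $B'$ the abelianization test is inconclusive, which is precisely why the covering graph of Figure \ref{Graph} is needed there.
\begin{verbatim}
from fractions import Fraction

def exp_vec(word):
    # exponent-sum vector in the abelianization Z^3 of <a,b,c>;
    # a word is a list of (generator, exponent) pairs
    idx = {"a": 0, "b": 1, "c": 2}
    v = [0, 0, 0]
    for g, e in word:
        v[idx[g]] += e
    return v

def rank(rows):
    # row rank over Q by Gaussian elimination
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(3):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

# generators of B': a, b^-1 a b^-1, b^-2 a b^-2
gens_B = [[("a", 1)],
          [("b", -1), ("a", 1), ("b", -1)],
          [("b", -2), ("a", 1), ("b", -2)]]
# their images under phi': b, b^-1 a b^-1 a, c
images = [[("b", 1)],
          [("b", -1), ("a", 1), ("b", -1), ("a", 1)],
          [("c", 1)]]

print(rank([exp_vec(w) for w in gens_B]))  # 2: inconclusive for B'
print(rank([exp_vec(w) for w in images]))  # 3: certifies rank 3
\end{verbatim}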
Finally we give an explicit splitting of $\pi(K)$ over a subgroup that is not free.
Recall that by Lemma \ref{lem:moresplittings} the group $\pi(K)$ admits an HNN decomposition
with the HNN base $A_{[0,2]}$ defined as the amalgamated product of $A, tAt^{-1}$ and $t^2At^{-2}$.
It suffices to prove the following claim.
\begin{claim}
The group $A_{[0,2]}$ is not free.
\end{claim}
Note that $A_{[0,2]}$ has the presentation
\[\ll a_0, b_0, a_1, b_1, a_2, b_2\,|\, a_1=b_0,\, b_1^{-1}a_1b_1^{-1}=(b_0^{-1}a_0)^2,\, a_2=b_1, \, b_2^{-1}a_2b_2^{-1}=(b_1^{-1}a_1)^2\rr.\]
Using the first and third relations, we eliminate the generators $b_0$ and $b_1$. Thus
\[A_{[0,2]} \cong \ll a_0, a_1, a_2, b_2\,|\, r_1,\, r_2 \rr,\]
where $r_1=(a_1^{-1}a_0)^2a_2a_1^{-1}a_2$ and
$r_2 = (a_2^{-1}a_1)^2 b_2 a_2^{-1}b_2$.
Let $e=a_1^{-1}a_0$ and $f=a_2a_1^{-1}$.
One checks that
$\{e, f, r_1, b_2\}$ is a basis for the free group $\ll a_0, a_1, a_2, b_2\,|\, \rr.$ Using the substitutions
\[ a_0=f^{-2}e^{-2}r_1e,\, a_1=f^{-2}e^{-2}r_1 \mbox{ and } a_2=f^{-1}e^{-2}r_1,\]
we see
\[A_{[0,2]} \cong \ll e, f, b_2 \,|\, r_2 \rr \cong \ll e, f, b_2 \,|\, f^{-2}e^{-2}(b_2 e^2) f(b_2e^2)\rr.\]
We perform two more changes of variables. First we let $g = b_2e^2$ and eliminate $b_2$ to obtain
\[A_{[0,2]} \cong \ll e, f, g \,|\, e^{-2}(gf)^2f^{-3} \rr.\]
Second, we let $h=gf$ and we eliminate $g$:
\[A_{[0,2]} \cong \ll e, f, h \,|\, e^{-2}h^2 = f^3 \rr.\]
We thus see that $A_{[0,2]}$ is a free product of two free groups amalgamated over an infinite cyclic group.
By Lemma 4.1 of \cite{BF94} (see Example 4.2), if the group $A_{[0,2]}$ is free, then
either $e^{-2}h^2$ or $f^3$ is a basis element in its respective factor.
Since neither element is a basis element (seen for example by abelianizing),
the group $A_{[0,2]}$ is not free. This concludes the proof of the claim.
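The changes of variables in the proof of the claim can be verified by mechanical free reduction. The following Python sketch is again only illustrative (the encoding of words as lists of letters with exponents $\pm 1$ is ours); it checks the expressions of $a_0$, $a_1$, $a_2$ in the basis $\{e, f, r_1, b_2\}$.
\begin{verbatim}
def inv(w):
    # inverse of a freely reduced word; a letter is (symbol, +1 or -1)
    return [(g, -s) for g, s in reversed(w)]

def mul(*words):
    # concatenation with free reduction
    out = []
    for w in words:
        for g, s in w:
            if out and out[-1] == (g, -s):
                out.pop()
            else:
                out.append((g, s))
    return out

def pw(w, n):
    # n-th power of w (n may be negative)
    if n < 0:
        w, n = inv(w), -n
    out = []
    for _ in range(n):
        out = mul(out, w)
    return out

e, f, r1 = [("e", 1)], [("f", 1)], [("r1", 1)]
a0 = mul(pw(f, -2), pw(e, -2), r1, e)      # a_0 = f^-2 e^-2 r_1 e
a1 = mul(pw(f, -2), pw(e, -2), r1)         # a_1 = f^-2 e^-2 r_1
a2 = mul(pw(f, -1), pw(e, -2), r1)         # a_2 = f^-1 e^-2 r_1

assert mul(inv(a1), a0) == e               # e   = a_1^-1 a_0
assert mul(a2, inv(a1)) == f               # f   = a_2 a_1^-1
assert mul(pw(mul(inv(a1), a0), 2),
           a2, inv(a1), a2) == r1          # r_1 = (a_1^-1 a_0)^2 a_2 a_1^-1 a_2
\end{verbatim}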
\section{Splittings of fundamental groups of non-fibered knots over non-free groups}\label{section:splitnonfree}
In Section \ref{section:52} we saw that we can split the knot group $\pi(5_2)$ over a group that is not free.
We will now see that this example can be greatly generalized.
We recall the statement of our first main theorem.
\begin{theorem}\label{thm:fibsplit}
If $K$ is a non-fibered knot, then $\pi(K)$ admits splittings over \emph{non-free} subgroups of arbitrarily large rank.
\end{theorem}
\begin{proof}
Let $\Sigma\subset X(K)$ be a Seifert surface of minimal genus.
We write $A=\pi_1(X(K)\setminus \Sigma\times (-1,1))$ and $B=\pi_1(\S\times -1)$, and we consider the corresponding splitting
\[ \pi(K)=\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr \]
of $(\pi(K),\epsilon_K)$ over $\pi_1(\S)$.
Given $n\leq m $ we consider, as in Section \ref{section:anm}, the group
\[ A_{[n,m]}= \langle \ast_{i=n}^{m} t^i A t^{-i}\mid t^j\varphi(B)t^{-j} = t^{j+1}Bt^{-j-1}\ (j=n, \ldots, m-1) \rr.\]
By Lemma \ref{lem:moresplittings} the group $\pi(K)$ splits over the group $A_{[0,n]}$ for any non-negative integer $n$.
\begin{claim}
There exists an integer $m$ such that $A_{[0,n]}$ is not a free group for any $n\geq m$.
\end{claim}
As we pointed out in Section \ref{section:anm}, we have an isomorphism
\[ \mbox{Ker}(\epsilon_K\colon \pi(K)\to \Z)\cong \lim_{k\to \infty}A_{[-k,k]}\]
where the maps $A_{[-l,l]}\to A_{[-k,k]}$ for $l\leq k$ are monomorphisms.
It follows from \cite[Theorem~3]{FF98} that $\mbox{Ker}(\epsilon_K)$ is not locally free; that is, there exists a finitely generated subgroup of $\mbox{Ker}(\epsilon_K)$ which is not a free group.
But this implies that there exists $k\in \Bbb{N}$ such that $A_{[-k,k]}$ is not a free group.
We have a canonical isomorphism $A_{[-k,k]}\cong A_{[0,2k]}$, and for any $n\geq 2k$ we have a canonical monomorphism
$A_{[0,2k]}\to A_{[0,n]}$. It now follows that $A_{[0,n]}$ is not a free group for any $n\geq 2k$.
This concludes the proof of the claim.
To complete the proof of Theorem \ref{thm:fibsplit} it remains to prove the following claim:
\begin{claim}
Writing $H_n:=A_{[0,n]}$ we have
\[ \lim_{n\to \infty} \rk(H_n)=\infty.\]
\end{claim}
Since $\S\subset X(K)$ is not a fiber it follows from \cite[Theorem~10.5]{He76} that
there exists an element $g\in A\smallsetminus B$.
By work of Przytycki--Wise (see \cite[Theorem~1.1]{PW12b}) the subgroup
$B=\pi_1(\S\times -1)\subset \pi(K)$ is separable. This implies, in particular, that there exists an epimorphism
$\alpha\colon \pi(K)\to G$ onto a finite group $G$ such that
$\alpha(g)\not\in \alpha(B)$. Then
\[ D:=\alpha(B)\varsubsetneq C:=\alpha(A).\]
Given $n\in \Bbb{N}$ we denote by $\alpha_n$ the restriction of $\alpha$ to $H_n\subset \pi(K)$ and we write $G_n:=\alpha(H_n)$.
Note that in
\[ H_n=A_0*_{B_0}\dots*_{B_{n-1}}A_{n}\]
the groups $A_i$, viewed as subgroups of $\pi(K)$, are conjugate.
It follows
that the groups $\alpha_n(A_i)$ are conjugate in $G$. In particular, each of the groups $\alpha_n(A_i)$ has order $|C|$.
The same argument shows that each of the groups $\alpha_n(B_i)$ has order $|D|$.
Standard arguments about fundamental groups of graphs of groups (see, for example, \cite{Se80}) imply that
$\mbox{Ker}(\alpha_n:H_n\to G_n)$ is the fundamental group of a graph of groups, where the underlying graph $\widetilde{\mathcal{G}}$
is a connected graph with $(n+1)\cdot |G_n|/|C|$ vertices and $n\cdot |G_n|/|D|$ edges.
From the Reidemeister-Schreier theorem (see, for example, \cite[Theorem~2.8]{MKS76}) and from the fact that
$\mbox{Ker}(\alpha_n:H_n\to G_n)$ surjects onto $\pi_1(\widetilde{\mathcal{G}})$ it then follows that
\[ \begin{array}{rcl}\rk(H_n)&\geq &\frac{1}{|G_n|}\rk(\mbox{Ker}(\alpha_n:H_n\to G_n))\\
&\geq&\frac{1}{|G_n|}\rk(\pi_1(\widetilde{\mathcal{G}}))\\
&=& \frac{1}{|G_n|}\big(n\cdot |G_n|/|D|-(n+1)\cdot |G_n|/|C|+1\big)\\
&\geq & n\left(\frac{1}{|D|}-\frac{1}{|C|}\right)-\frac{1}{|C|}.\end{array} \]
But this sequence diverges to $\infty$ since $|D|<|C|$.
\end{proof}
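The divergence at the end of the proof is elementary arithmetic; the following Python sketch evaluates the Euler-characteristic bound for the covering graph (the sample orders $|C| = 6$, $|D| = 2$, $|G_n| = 12$ are hypothetical and chosen by us only for illustration).
\begin{verbatim}
def rank_lower_bound(n, C, D, G):
    # the covering graph has V = (n+1)*G/C vertices and E = n*G/D
    # edges, so rk(pi_1) = E - V + 1 and rk(H_n) >= (E - V + 1)/G;
    # C and D divide G by Lagrange's theorem
    V = (n + 1) * G // C
    E = n * G // D
    return (E - V + 1) / G

for n in (1, 10, 100, 1000):      # grows like n*(1/|D| - 1/|C|)
    print(n, rank_lower_bound(n, 6, 2, 12))
\end{verbatim}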
\section{Splittings of fundamental groups of non-fibered knots over free groups}\label{section:splitfree}
\subsection{Statement of the theorem}
Lyon \cite[Theorem~2]{Ly71}
showed that there exists a non-fibered knot $K$ of genus one that admits incompressible
Seifert surfaces of arbitrarily large genus (see also \cite{Sce67,Gu81,Ts04} for related examples).
By the discussion in Section \ref{section:splitk}, this implies that $\pi(K)$ splits over free groups of arbitrarily large rank.
Splitting along incompressible Seifert surfaces is a convenient way to produce knot group splittings.
Yet there are many non-fibered knots that have unique incompressible Seifert surfaces (see, for example, \cite{Wh73,Ly74a,Ka05}).
For such a knot, Seifert surfaces give rise to only one type of knot group splitting.
In Section \ref{section:52} we saw an example of a splitting of a knot group over a free group that is not induced by an embedded surface.
We generalize the example in our second main theorem. We recall the statement.
\begin{theorem}\label{thm:splitfreelarge}
Let $K$ be a non-fibered knot. Then for any integer $k\geq 2g(K)$ there exists a splitting of $\pi(K)$ over a free group of rank $k$.
\end{theorem}
The key to extending the result in Section \ref{section:52} is the following theorem, which we will prove in the next subsection.
\begin{theorem}\label{thm:extendfree}
Let $K$ be a non-fibered knot. Then there exists a Seifert surface $\S$ of minimal genus such that for a given base point $p\in \S=\S\times 0$
there exists a nontrivial element $g \in \pi_1(S^3 \setminus \S \times (0,1),p)$ such that the subgroup of $\pi(K)$ generated by $\pi_1(\S\times 0,p)$ and $g$ is the free product of $\pi_1(\S\times 0,p)$ and the infinite cyclic group $\ll g \rr$.
\end{theorem}
Theorem \ref{thm:splitfreelarge} is now a consequence of Theorem \ref{thm:extendfree} and the following proposition about HNN decompositions.
\begin{proposition} Assume that $(\pi, \epsilon)$ splits over a free group $F$ of rank $n$
with base group $A$. If there exists an element $g \in A$ such that the subgroup of $\pi$ generated by $F$ and $g$ is the free product $F * \ll g \rr$, then $(\pi, \epsilon)$ splits over free groups of every rank greater than $n$. \end{proposition}
\begin{proof}
By hypothesis we can identify
$\pi$ with
\[\ll A, t \mid \varphi(x_i) = tx_it^{-1}\ (1 \le i \le n) \rr,\]
where $x_1,\ldots, x_n$ generate the group $F$ and where $\epsilon$ is given by $\epsilon(t)=1$ and $\epsilon(A)=0$.
The kernel of the second-factor projection
$F*\ll g \rr \to \ll g\rr=\Z$ is an infinite free product $*\{g^iFg^{-i} \mid i \in \Z\}$. Let $l$ be any positive integer. Choose a nontrivial element $z \in F$ and define $z_i = g^izg^{-i}$, for $1 \le i \le l$. Then
$F'= \ll F, z_1, \ldots, z_l \rr$ is a free subgroup of $F*\ll g\rr$ of rank $n + l$: indeed, $F$ and the cyclic groups $\ll z_1\rr, \dots, \ll z_l\rr$ lie in distinct factors of this infinite free product, so that $F' \cong F * \ll z_1 \rr * \cdots * \ll z_l \rr$.
By hypothesis $F'$ is then also a free subgroup of $A$ of rank $n+l$.
Note that $\pi$ is canonically isomorphic to
\[\ll A, c_1, \dots,c_l, t \mid \varphi(x_i) = tx_it^{-1}, c_j = tz_jt^{-1} (1\le i \le n, 1\le j \le l) \rr.\]
We denote by $A'$ the free product
of $A$ and $\ll c_1,\dots,c_l\rr$,
and we denote by $\varphi'$
the unique homomorphism
\[ \varphi'\colon F'=F*\ll z_1, \ldots, z_l \rr \to A'=A*\ll c_1,\dots,c_l\rr \]
that extends $\varphi$ and that maps each $z_j$ to $c_j$. Since $\varphi'$ is the free product of two isomorphisms, it is also an isomorphism.
We then have a canonical isomorphism
\[ \pi \cong \ll A',t\mid \varphi'(F')=tF't^{-1}\rr.\]
We have thus shown that $(\pi, \epsilon)$ splits over the free group $F'$ of rank $n+l$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:extendfree}}
To prove Theorem \ref{thm:extendfree} we will need to discuss the JSJ pieces of knot complements. (See \cite{AFW12} for exposition about JSJ decompositions.)
It is therefore convenient to generalize a few notions for knots to more general 3-manifolds.
Given a $3$-manifold $N$, we can associate to each class $\epsilon\in H^1(N;\Z)$ its Thurston norm $x_N(\epsilon)$, which is defined as the minimal `complexity'
of a surface dual to $\epsilon$. We say that a class $\epsilon\in H^1(N;\Z)$ is \emph{fibered} if there exists a fibration $p\colon N\to S^1$ such that the induced
map $p_*\colon \pi_1(N)\to \pi_1(S^1)=\Z$ agrees with $\epsilon\in H^1(N;\Z)=\hom(\pi_1(N),\Z)$.
It is well known that given a non-zero $d\in \Z$, the class $\epsilon$ is fibered if and only if $d\epsilon$ is fibered.
Note that given a non-trivial knot $K\subset S^3$ we have $x_{X(K)}(\epsilon_K)=2g(K)-1$, and $\epsilon_K$ is fibered if and only if $K$ is fibered.
We refer to \cite{Th86} for background and more information.
We will need the following theorem, which in particular implies Theorem \ref{thm:extendfree} in the case that $S^3\setminus \nu K$ is hyperbolic.
\begin{theorem}\label{thm:extendfreehyp}
Let $N$ be a hyperbolic $3$-manifold and let $\S$ be a properly embedded,
connected Thurston norm-minimizing surface that is not a fiber surface.
We write $M = N \setminus \S\times (0,1)$ and we pick a base point $p$ on $\S\times 0=\S$. Then there exists a nontrivial element $g\in \pi_1(M,p)$ such that the subgroup of $\pi_1(M,p)$ generated by $\pi_1(\S,p)$ and $g$ is the free product of $\pi_1(\S,p)$ and $\langle g \rangle$.
\end{theorem}
\begin{proof}
Let $N$ be a hyperbolic $3$-manifold. We denote by $T_1,\dots,T_k$ the boundary components of $N$. Let $\S$ be a properly embedded,
connected Thurston norm-minimizing surface that is not a fiber surface.
We write $M = N \setminus \S\times (0,1)$ and we pick a base point $p$ on $\S\times 0=\S$.
We now take all fundamental groups with respect to this base point.
It follows again from the Loop Theorem and the fact that $\S$ is Thurston norm-minimizing that the inclusion-induced map $\Gamma:=\pi_1(\S)\to \pi_1(M)$ is a monomorphism.
We will henceforth view $\Gamma=\pi_1(\S)$ as a subgroup of $\pi_1(M)$.
We first suppose that $\S$ hits all boundary components of $N$.
Since $\S$ is not a fiber surface, it follows
from the Tameness Theorem of Agol \cite{Ag04} and Calegari--Gabai \cite{CG06}
that $\pi_1(M)$ is word-hyperbolic and that $\Gamma=\pi_1(\S)$ is a quasi-convex subgroup of $\pi_1(M)$.
(We refer to \cite[Sections~14~and~16]{Wi12a} for more details.)
It then follows from work of Gromov \cite[5.3.C]{Gr87} (see also \cite[Theorem~1]{Ar01})
that there exists an element $g\in \pi_1(M)$ such that the subgroup of $\pi_1(M)$ generated by $\Gamma$ and $g$ is in fact the free product
of $\Gamma$ and $\langle g \rangle$.
We now suppose that there exists a boundary component $T_i$ that is not hit by $\S$.
We pick a path in $M$ connecting $T_i$ to the chosen base point and we henceforth view $\pi_1(T_i)$ as a subgroup of $\pi_1(M)$.
Note that $\pi_1(N)$ is hyperbolic relative to the subgroups $\pi_1(T_1),\dots,\pi_1(T_k)$.
Since $\S$ is not a fiber surface, it follows from the Tameness Theorem and from work of Hruska \cite[Corollary~1.3]{Hr10} that $\Gamma$ is a relatively quasi-convex subgroup of $\pi_1(N)$. Since $\Gamma$ is a non-abelian surface group we can find an element $g\in \Gamma$
such that $\ll g\rr\cap \pi_1(T_i)$ is trivial.
We see again from the Tameness Theorem that $\ll g\rr$ is a relatively quasi-convex subgroup of $\pi_1(N)$.
Summarizing, we have shown that $\pi_1(\S)$ and $\ll g\rr$ are two relatively quasi-convex subgroups of $\pi_1(N)$ which have trivial intersection
with the parabolic subgroup $\pi_1(T_i)$.
It now follows from Martinez-Pedroza \cite[Theorem~1.2]{MP09} that there exists an $h\in \pi_1(T_i)$ such that the subgroup of $\pi_1(N)$ generated by
$\Gamma$ and $hgh^{-1}$ is the free product of $\Gamma$ and $\ll hgh^{-1}\rr$. The theorem now follows from the observation that,
according to our choices, both $\Gamma$ and $\ll hgh^{-1}\rr$ lie in $\pi_1(M)$.
\end{proof}
We can now prove Theorem \ref{thm:extendfree}.
For the reader's convenience we recall the statement.
\begin{theorem}\label{thm:infinitecyclic}
Let $K$ be a non-fibered knot. Then there exists a Seifert surface $\S$ of minimal genus
and a nontrivial element $g \in \pi_1(S^3 \setminus \S \times (0,1))$ such that the subgroup generated by $\pi_1(\S\times 0)$ and $g$ is the free product of $\pi_1(\S\times 0)$ and the infinite cyclic group $\ll g \rr$.
\end{theorem}
\begin{proof}
Let $K$ be a non-fibered knot. We write $X=S^3\setminus \nu K$. We denote by $X_v,v\in V$, the JSJ components,
and we denote by $T_e, e\in E$, the JSJ tori of $X$.
We let $\epsilon \in H^1(X;\Z)=\hom(H_1(X;\Z),\Z) \cong \Z$ be the generator that corresponds to the canonical homomorphism $\epsilon_K\colon H_1(X;\Z)\to \Z$.
For each $v\in V$, we denote by $\epsilon_v\in H^1(X_v;\Z)$ the restriction of $\epsilon$ to $X_v$.
The pair $(V,E)$ has a natural graph structure, since each JSJ torus cobounds two JSJ components. Since $X$ is a knot complement,
this graph is a based tree, where the base is the vertex $b\in V$ for which $X_b$ contains the boundary torus.
We now denote by $T_b$ the boundary torus of $X$, and for each $v\ne b$ we denote by $T_v$ the unique JSJ torus which is a boundary component of $X_v$ and which separates $X_v$ from $X_b$.
\begin{claim}
There exists an element $w\in V$ such that $X_w$ is hyperbolic and such that $\epsilon_w\in H^1(X_w;\Z)$ is not a fibered class.
\end{claim}
We say that a vertex $v\in V$ is non-fibered if $\epsilon_v\in H^1(X_v;\Z)$ is not a fibered class. Since $\epsilon=\epsilon_K$ is by assumption not fibered, it follows from \cite[Theorem~4.2]{EN85} that
some vertex is not fibered.
Let $w\in V$ be a non-fibered vertex of minimal distance to $b$.
Note that if $v\in V$ is fibered and if $\epsilon_v$ is non-trivial, then the restriction of $\epsilon_v$ to any boundary torus is also non-trivial.
Since $\epsilon_b$ is non-trivial and since $w\in V$ is a non-fibered vertex of minimal distance to $b$, we conclude that the restriction of $\epsilon_w$
to $T_w$ is non-trivial.
It follows from the Geometrization Theorem and from \cite[Lemma~VI.3.4]{JS79} that $X_w$ is one of the following:
\bn
\item the exterior of a torus knot;
\item a `composing space', that is, a product $S^1\times W_n$, where $W_n$ is the result of removing $n$ open disjoint disks from $D^2$;
\item a `cable space', that is, a manifold obtained from a solid torus $S^1\times D^2$ by removing an open regular neighborhood in
$S^1 \times \operatorname{Int}(D^2)$ of a simple
closed curve $c$ that lies in a torus $S^1\times s $, where $s\subset \operatorname{Int}(D^2)$ is a simple closed curve
and $c$ is non-contractible in $S^1\times D^2$;
\item a hyperbolic manifold.
\en
As we argued above, the restriction of $\epsilon_w\in H^1(X_w;\Z)$ to one of the boundary tori, namely $T_w$, is non-trivial.
It is well known that in each of the first three cases, this would imply that
$\epsilon_w$ is a fibered class. Hence $X_w$ must be hyperbolic.
This concludes the proof of the claim.
In the following, given a vertex $v$ with $\epsilon_v$ non-zero, we denote by $d_v\in \Bbb{N}$ the divisibility of $\epsilon_v\in H^1(X_v;\Z)$. For all other vertices we write $d_v=0$.
\begin{claim}
There exists a minimal genus Seifert surface $\S$ for $K$ with the following properties:
\bn
\item $\S$ intersects each $T_e$ transversally;
\item each intersection $\S\cap T_e$ consists of a possibly empty union of parallel, non-null-homologous curves;
\item for each $v$ with $d_v\ne 0$ the surface $\S_v:=\S\cap X_v$ is the union of $d_v$ parallel copies of a surface $\S_v'$.
\en
\end{claim}
For each $v$ with $d_v\ne 0$ we pick a properly embedded Thurston norm-minimizing surface $\S_v'$ that represents $\frac{1}{d_v}\epsilon_v$.
After possibly gluing in annuli and disks, we may assume that at each boundary torus $T$ of $X_v$, all the components of $\S_v'\cap T$ are parallel as oriented curves
and no component of $\S_v'\cap T$ is null-homologous.
We now pick a tubular neighborhood $\S_v'\times [-1,2]$ of $\S_v'$
and we denote by $\S_v$ the union of $\S_v'\times r_i$ where $r_i=\frac{i}{d_v}$ with $i=0,\dots,d_v-1$.
For each $v$ with $d_v=0$ we denote by $\S_v'=\S_v$ the empty set.
The surfaces $\S_v$ are chosen such that at each JSJ torus the boundary curves are parallel.
Since at a JSJ edge the adjacent surfaces have to represent the same homology class, at each JSJ torus
the adjacent surfaces have exactly the same number of boundary components
which furthermore represent the same homology class in the JSJ torus. After an isotopy in the neighborhood of the tori we can therefore glue the surfaces $\S_v$ together to obtain a properly embedded surface $\S$.
Since the Thurston norm is linear on rays, it follows from \cite[Proposition~3.5]{EN85} that $\S$ is a connected Thurston norm-minimizing surface representing $\epsilon$. By construction,
the intersection of $\S$ with $\partial X$ consists of one curve, which is necessarily a longitude for $K$. We thus see that $\S$ is indeed a genus-minimizing Seifert surface for $K$.
It is now clear that $\S$ has the desired properties.
This concludes the proof of the claim.
Recall that $\epsilon_w\in H^1(X_w;\Z)$ is not a fibered class. By the discussion at the beginning of this section, this implies that
$\frac{1}{d_w}\epsilon_w$ is also not a fibered class, and so $\S_w'$ is not a fiber surface.
We pick a base point $p_w$ on $\S_w'=\S_w'\times 0$, which is then also a base point for $X_w$.
It follows from Theorem \ref{thm:extendfreehyp} that there exists an element $g\in \pi_1(X_w\setminus \S_w'\times (0,1),p_w)$ such that the subgroup of $\pi_1(X_w\setminus \S_w'\times (0,2],p_w)$ generated by $\pi_1(\S_w',p_w)$ and $g$ is in fact the free product of $\pi_1(\S_w',p_w)$ and $\ll g \rr $.
It now remains to prove the following claim.
\begin{claim}
The subgroup of $\pi_1(X,p_w)$ generated by $\pi_1(\S,p_w)$ and $g$ is the free product of $\pi_1(\S,p_w)$ and $\ll g \rr $.
\end{claim}
We may pick an oriented simple closed curve $c$ in $X_w\setminus \S_w'\times (0,2]$ that intersects $\S_w'=\S_w'\times 0$ in precisely the base point $p_w$
and that represents $g\in \pi_1(X_w\setminus \S_w'\times (0,2],p_w)$. Note that $\pi_1(\S\cup c,p_w)$ is precisely the free product of $\pi_1(\S,p_w)$ and $\ll g \rr $.
It thus suffices to show that the inclusion-induced map
\[ \pi_1(\S\cup c,p_w)\to \pi_1(X,p_w) \]
is injective.
\begin{figure}[h]
\begin{center}
\input{seifert-surface.pstex_t}
\caption{Schematic picture for the Seifert surface $\S$ and the curve $c$.}
\end{center}
\end{figure}
Let $h$ be an element in the kernel of this map.
We pick a representative curve $d$ which intersects the JSJ tori transversally. We will show that $h$ represents the trivial element
in $\pi_1(\S\cup c,p_w)$ by induction on
\[ n(d):=\sum_{v\in V} \#\mbox{components of $d\cap X_v$}.\]
If $n(d)=1$, then $d$ lies in the component of
$(\S\cup c)\cap X_w=\S_w\cup c$ that contains $p_w$.
Then $d$ lies completely in $\S'_w\cup c$.
But the map $\pi_1(\S'_w\cup c,p_w)=\pi_1(\S'_w,p_w)*\ll g\rr\to \pi_1(X_w)$ is injective,
and the map $\pi_1(X_w)\to \pi_1(X)$ is also injective. It thus follows that $h$ is the trivial element.
We now consider the case that $n:=n(d)>1$. We then think of $\pi_1(X)$ as the fundamental group of the graph of groups $\pi_1(X_v)$.
We can view the curve $d$ as a concatenation of curves $d_1,\dots,d_n$ such that each curve $d_i$ lies completely in some $X_{u}$.
Recall that we assume that $d$ represents the trivial element. A standard argument in the theory of fundamental groups of graphs of groups (see e.g. \cite{He87}) implies that there exists a $d_i$ with the following two properties:
\bn
\item the two endpoints of $d_i$ lie on the same boundary torus $T$ of some $X_{u}$,
\item $d_i$ is homotopic in $X_{u}$ rel endpoints to a curve $s_i$ that lies completely in $T$.
\en
Note that the two endpoints of $d_i$ lie on $T\cap \S_u$.
In fact we can prove a stronger statement.
\begin{claim}
The two endpoints of $d_i$ lie on the same component of $T\cap \S_u$.
\end{claim}
We first make the following observation.
Let $S$ be a properly embedded oriented surface in an oriented 3-manifold $M$ and let $a$ be an oriented embedded arc that does
not intersect $S$ at its endpoints.
We can then associate to $S$ and $a$ the algebraic intersection number $S\cdot a\in \Z$, which has in particular the following two properties:
\bn
\item for any properly oriented embedded arc $b$ homotopic to $a$ rel base points we have $S\cdot a=S\cdot b$,
\item if $a$ lies completely in a boundary component $B$ of $M$, then $S\cdot a$ equals the algebraic intersection number of the oriented curve $\partial S$ with the oriented arc $a$ in $B$.
\en
We now turn to the proof of the claim. We first note that there exists a homeomorphism $r\colon X_u\to X_u$
which is the identity on $X_u\setminus \S_u'\times (-1,2)$, which
has the property that for any $x\times t$ with $x\in \S_u'$ and $t\in [0,1]$ we have
\[ r(x\times t)=x\times \left(t-\frac{1}{2d_u}\right)\]
and which is isotopic to the identity on $X_u$.
More informally, $r$ is a map that pushes everything on
$\S_u'\times [0,1]$ slightly to the left.
Note that $r$ pushes everything on $\S_u$ off $\S_u$.
Furthermore, if $u=w$, then the intersection of $r(\S_w\cup c)$
with $\S_w$ is also empty.
Since $s_i$ and $d_i$ are homotopic rel base points and since $r$ is homotopic to the identity, the curves $r(s_i)$ and $r(d_i)$ are homotopic rel base points. It follows from the above that $\S_u\cdot r(s_i)=\S_u\cdot r(d_i)$.
But the latter is clearly zero, since $r(d_i)$ does not intersect $\S_u$.
We now conclude that $\partial \S_u \cdot r(s_i)=\S_u\cdot r(s_i)=0$.
Since the curves $\partial \S_u\cap T$ are all parallel it now follows that $r(s_i)$ does not intersect $\S_u\cap T$ at all.
But this means that the two endpoints of $s_i$, and thus also the two endpoints of $d_i$, have to lie on the same component of $T\cap \S_u$.
This concludes the proof of the claim.
We then make the following claim.
\begin{claim}
The curve $d_i$ is homotopic in $X_{u}$ rel end points to a curve $d_i'$ that lies completely in $T\cap \S_u$.
\end{claim}
By the previous claim we know that the two endpoints of $d_i$ lie on the same component of $T\cap \S_u$.
We denote the initial point of $d_i$ by $P$, and the terminal point by $Q$.
We denote by $r$ the component of $\partial \S_u$ that contains $P$. We endow $r$ with an orientation.
Note that $r$ is homologically essential on $T$. The curve $r$ thus defines a subsummand $\ll r\rr$ of $\pi_1(T,P)\cong \Z^2$.
We also pick a curve $t_i$ in $T\cap X_u$ from $P$ to $Q$.
The concatenation $s_it_i^{-1}$ lies in $T$, and also lies in $(\S\cup c)\cap X_u$.
The curve $s_it_i^{-1}$ thus represents an element in $\pi_1((\S\cup c)\cap X_u,P)\cap \pi_1(T,P)$.
But the group $\pi_1((\S\cup c)\cap X_u,P)$ is free (regardless of whether $c$ lies on the $P$-component of $(\S\cup c)\cap X_u$ or not) whereas $\pi_1(T,P)\cong \Z^2$.
The two groups thus intersect in an infinite cyclic subgroup. Furthermore, the intersection contains the subsummand $\ll r\rr$. It follows that the intersection equals $\ll r\rr$.
In particular, $s_it_i^{-1}$ is homotopic rel $P$ to $r^k$ for some $k$. It now follows that relative to the end points we have the following homotopies:
\[ d_i\sim d_is_i^{-1}s_i\sim s_i\sim s_it_i^{-1}t_i\sim r^kt_i.\]
But the curve $d_i':=r^kt_i$ lies completely in $T\cap \S_u$.
This concludes the proof of the claim.
We can thus replace $d=d_1\dots d_{i-1}d_id_{i+1}\dots d_n$ by $d_1\dots d_{i-1}d_i'd_{i+1}\dots d_n$
and push $d_i'$ slightly into the adjacent JSJ component of $X$. We have found a representative of $h$ with smaller complexity $n(\cdot)$ than $d$.
The claim that $h$ represents the trivial element now follows by induction.
This concludes the proof that the subgroup of $\pi_1(X\setminus \S\times (0,2],p_w)$ generated by $ \pi_1(\S,p_w)$ and $g$ is the free product of $\pi_1(\S,p_w)$ and $\ll g \rr $.
We are therefore done with the proof of Theorem \ref{thm:infinitecyclic}.
\end{proof}
\section{Comparison with Stallings's fibering criterion}\label{section:stallings}
Let $K$ be a knot. Recall that we denote by $\epsilon_K\colon \pi(K)\to \Z$ the unique epimorphism that sends the oriented meridian to 1.
Stallings \cite{St62} proved the following theorem.
\begin{theorem}\label{thm:st62}
If $K$ is not fibered, then $\mbox{Ker}(\epsilon_K)$ is not finitely generated.
\end{theorem}
It follows from Lemma \ref{lem:splitkerfg} that if
$\mbox{Ker}(\epsilon_K)$ is finitely generated, then there exists precisely one group $B$ such that $\pi(K)$ splits over $B$.
Thus Stallings's theorem follows as a consequence of either Theorem \ref{thm:fibsplit} or Theorem \ref{thm:splitfreelarge}.
On the other hand, a group $\pi$ with an epimorphism $\epsilon\colon \pi\to \Z$ such that $\mbox{Ker}(\epsilon)$ is not finitely generated may still split over a unique group.
The Baumslag-Solitar group $BS(1,2)=\ll a,t\,|\,tat^{-1}=a^2\rr$, i.e. the semidirect product $\Z\ltimes \Z[\frac{1}{2}]$ where $n\in \Z$ acts on $\Z[\frac{1}{2}]$ by multiplication by $2^n$, has abelianization $\Z$. The kernel of the abelianization $\epsilon\colon\pi\to \Z$ is the infinitely generated subgroup $\Z[\frac{1}{2}]$. Since every finitely generated subgroup of
$\Z[\frac{1}{2}]$ is isomorphic to $\Z$, the group $\Z\ltimes \Z[\frac{1}{2}]$ splits only over subgroups isomorphic to $\Z$. (In fact, any two splittings are easily seen to be strongly equivalent.)
This shows that the conclusions of Theorems \ref{thm:fibsplit} and \ref{thm:splitfreelarge} are indeed stronger than the conclusion of Theorem \ref{thm:st62}.
Stallings's fibering criterion has been generalized in several other ways.
For example, if $K$ is not fibered, then $\mbox{Ker}(\epsilon)$ can be written neither as a descending nor as an ascending HNN-extension \cite{BNS87}, $\mbox{Ker}(\epsilon)$ admits uncountably many subgroups of finite index (see \cite[Theorem~5.2]{FV12c}, \cite{SW09a} and \cite[Theorem~3.4]{SW09b}),
the pair $(\pi(K),\epsilon_K)$ has `positive rank gradient' (see \cite[Theorem~1.1]{DFV12})
and $\mbox{Ker}(\epsilon_K)$ admits a finite index subgroup which is not normally generated by finitely many elements (see \cite[Theorem~5.1]{DFV12}).
\section{Proof of Theorem \ref{mainthm}}
In this section we will prove Theorem \ref{mainthm}, i.e. we will show that if $K$ is a knot, then $\pi(K)$ does not split over a group of rank less than $2g(K)$.
We will first give a `classical' proof for genus-one knots
before we provide the proof for all genera.
\subsection{Genus-one knots}\label{section:genusone}
In this subsection we prove:
\begin{theorem}\label{thm:genus1}
If $K$ is a genus-one knot, then $\pi(K)$ does not split
over a free group of rank less than two.
\end{theorem}
The main ingredients in the proof are
two classical results from 3-manifold topology.
First, we recall the statement of the Kneser Conjecture, which
was first proved by Stallings \cite{St59} in the closed case, and by
Heil \cite[p.~244]{Hei72} in the bounded case.
\begin{theorem} \textbf{\emph{(Kneser Conjecture)}}\label{thm:kneserconj}
Let $N$ be a $3$-mani\-fold with incompressible boundary.
If there exists an isomorphism $\pi_1(N)\cong \Gamma_1*\Gamma_2$, then there exist compact, orientable $3$-manifolds $N_1$ and $N_2$
with $\pi_1(N_i)\cong \Gamma_i$, $i=1,2$ and $N\cong N_1\# N_2$.
\end{theorem}
In the following, we say that a properly embedded 2-sided annulus $A$ in a 3-manifold $N$ is \emph{essential} if the inclusion map $A \hookrightarrow N$ induces a $\pi_1$-injection and if $A$ is not properly homotopic into $\partial N$. The second classical result we will use is the following,
which is a direct consequence of a theorem of Waldhausen \cite{Wal68b} (see Corollary 1.2(i) of \cite{Sco80}).
\begin{theorem}\label{thm:annulus}
Let $N$ be an irreducible 3-manifold with incompressible boundary.
If $\pi_1(N)$ splits over $\Z$, then $N$ contains an essential, properly embedded 2-sided annulus.
\end{theorem}
We turn to the proof of Theorem \ref{thm:genus1}.
\begin{proof}[Proof of Theorem \ref{thm:genus1}]
Let $K$ be a genus-one knot. Since $K$ is non-trivial, the Loop Theorem implies that $\partial X(K)$ is incompressible.
Since knot complements are prime 3-manifolds, it now follows from the Kneser Conjecture that $\pi(K)$ cannot split over the trivial group, i.e. $\pi(K)$
cannot split over a free group of rank zero.
Now suppose that $J$ is a non-trivial knot such that $\pi(J)$ splits over a free group of rank one, that is, over a group isomorphic to $\Z$.
From Theorem \ref{thm:annulus} we deduce that $X(J)$ contains an essential,
properly embedded, 2-sided annulus $A$. Lemma 2 of \cite{Ly74a} (an immediate consequence of \cite{Wal68a}) implies that the knot $J$ is either a composite or a nontrivial cable knot. If $J$ is a composite knot, then it follows from the additivity of the knot genus
(see, for example, \cite[p.~124]{Ro90}) that the genus of $J$ is at least two.
Moreover, a well-known result of Schubert \cite{Sct53} (see Proposition 2.10 of \cite{BZ85}) implies that the genus of any cable knot is greater than one. Thus in both cases we see that $g(J)\geq 2$.
We now see that for the genus-one knot $K$ the group $\pi(K)$ cannot split over a free group of rank one.
\end{proof}
\subsection{Wada's invariant}\label{section:wada}
For the proof of Theorem \ref{mainthm3} we will need Wada's invariant,
which is also known as the twisted Alexander polynomial or the twisted Reidemeister torsion of a knot.
We introduce the following convention. If $\pi$ is a group and $\g\colon\pi\to \mbox{GL}(k,R)$ is a representation over a ring $R$,
then we denote by $\g$ also the $\Z$-linear extension of $\g$ to a map $\Z[\pi] \to M(k,R)$.
Furthermore, if $A$ is a matrix over $\Z[\pi]$ then we denote by $\g(A)$ the matrix given by applying
$\g$ to each entry of $A$.
Let $\pi$ be a group, $\epsilon\colon \pi\to \Z$ an epimorphism, and $\a \colon \pi\to \mbox{GL}(k,\C)$ a representation.
First note that $\a$ and $\epsilon$ give rise to a tensor representation
\[ \begin{array}{rcl} \a \otimes \epsilon \colon \pi &\to & \mbox{GL}(k,\C[t^{\pm 1}]) \\
g&\mapsto & t^{\epsilon(g)}\cdot \a(g).\end{array} \]
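For instance, if $\a$ is the trivial one-dimensional representation, then $\a\otimes\epsilon$ is simply the homomorphism $g\mapsto t^{\epsilon(g)}$ induced by $\epsilon$, so the constructions below contain the classical (untwisted) setting as a special case.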
Now let
\[ \pi=\langle g_1,\dots,g_{k} \,|\, r_1,\dots,r_{l}\rangle\]
be a presentation of $\pi$. By adding trivial relations if necessary, we may assume that $l\geq k-1$.
We denote by $F_{k}$ the free group with generators $g_1,\dots,g_{k}$.
Given $j\in \{1,\dots,k\}$ we denote by $\frac{\partial }{\partial g_j}\colon \Z[F_{k}]\to \Z[F_{k}]$ the Fox derivative with respect to $g_j$, i.e. the unique $\Z$-linear map such that
\begin{eqnarray*}
\frac{\partial g_i}{\partial g_j}&=&\delta_{ij},\\
\frac{\partial uv}{\partial g_j}&=&\frac{\partial u}{\partial g_j}+u\frac{\partial v}{\partial g_j}
\end{eqnarray*}
for all $i,j \in \{1,\dots,k\}$ and $u,v\in F_{k}$.
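For later use we record a standard consequence of these rules: applying the product rule to $1=g_ig_i^{-1}$ and using $\frac{\partial 1}{\partial g_j}=0$ gives
\[ 0=\frac{\partial}{\partial g_j}\left(g_ig_i^{-1}\right)=\delta_{ij}+g_i\frac{\partial g_i^{-1}}{\partial g_j}, \quad \mbox{hence}\quad \frac{\partial g_i^{-1}}{\partial g_j}=-g_i^{-1}\delta_{ij},\]
and more generally $\frac{\partial u^{-1}}{\partial g_j}=-u^{-1}\frac{\partial u}{\partial g_j}$ for any $u\in F_k$.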
We denote by
\[M:=\left(\frac{\partial r_i}{\partial g_j}\right)\]
the $l\times k$-matrix over $\Z[\pi]$ of all the Fox derivatives of the relators.
Given subsets $I=\{i_1,\dots,i_r\}\subset \{1,\dots,k\}$ and
$J=\{j_1,\dots,j_s\}\subset \{1,\dots,l\}$ we denote by $M_{J,I}$ the matrix formed by deleting the columns
$i_1,\dots,i_r$ and by deleting the rows $j_1,\dots,j_s$ of $M$.
Note that there exists at least one $i\in \{1,\dots,k\}$ such that $\epsilon(g_i)\ne 0$. Since $\a(g_i)$ is invertible, it follows that
\[ \det((\a\otimes \epsilon)(1-g_i))=\det\left(\mbox{id}_k-t^{\epsilon(g_i)}\a(g_i)\right)\ne 0.\]
We define
\[ Q_i:=\mbox{gcd}\{\det((\a\otimes \epsilon)(M_{J,\{i\}}))\,|\, J\subset \{1,\dots,l\} \mbox{ with }|J|=l+1-k\}.\]
(Note that each $M_{J,\{i\}}$ is a $(k-1)\times (k-1)$-matrix.)
It is worth considering the special case that $l=k-1$; that is, the case of a presentation of deficiency one. Then the only choice for $J$ is the empty set, and hence
\[ Q_i=\det((\a\otimes \epsilon)(M_{\emptyset,\{i\}})).\]
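This deficiency-one case is the one relevant for knot groups, since every knot group admits a presentation of deficiency one, e.g. a Wirtinger presentation with one (redundant) relation removed.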
Wada \cite{Wad94} introduced the following invariant of the triple $(\pi,\epsilon,\a)$.
\[ \Delta_{\pi,\epsilon}^\a :=Q_i\cdot \det((\a\otimes \epsilon)(1-g_i))^{-1}\in \C(t).\]
A priori, Wada's invariant depends on the various choices we made.
The following theorem proved by Wada \cite[Theorem~1]{Wad94} shows that the indeterminacy is well controlled.
\begin{theorem}\label{thm:wadadefined}
Let $\pi$ be a group, let $\epsilon\colon \pi\to \Z$ be an epimorphism, and let $\a \colon \pi\to \mbox{GL}} \def\Q{\Bbb{Q}} \def\F{\Bbb{F}} \def\Z{\Bbb{Z}} \def\R{\Bbb{R}} \def\C{\Bbb{C}(k,\C)$
be a representation.
Then $\Delta_{\pi,\epsilon}^\a$ is well-defined up to multiplication by a factor of the form $\pm t^jr$, where $j\in \Z$ and
$r\in \C^*$.
\end{theorem}
\medskip
Finally, let $K\subset S^3$ be a knot and let $\a \colon \pi(K)\to \mbox{GL}(k,\C)$ be a representation.
As before, we denote by $\epsilon\colon \pi(K)\to \Z$ the epimorphism that sends the oriented meridian of $K$ to 1. We write
\[ \Delta_K^\a=\Delta_{\pi,\epsilon}^\a.\]
If $\a\colon \pi(K)\to \mbox{GL}(1,\C)$ is the trivial one-dimensional representation, then Wada's invariant is determined by the classical Alexander polynomial $\Delta_K$. More precisely, we have
\[ \Delta_K^\a=\frac{\Delta_K}{1-t}.\]
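For instance (a standard computation which we include for illustration), consider the trefoil knot $K$ with the deficiency-one presentation $\pi(K)=\langle a,b \,|\, abab^{-1}a^{-1}b^{-1}\rangle$, where $\epsilon(a)=\epsilon(b)=1$. Fox calculus gives
\[ \frac{\partial}{\partial a}\left(abab^{-1}a^{-1}b^{-1}\right)=1+ab-abab^{-1}a^{-1},\]
which under $\a\otimes \epsilon$, with $\a$ trivial and one-dimensional, becomes $1-t+t^2$. Taking $g_i=b$ we obtain
\[ \Delta_K^\a=\frac{t^2-t+1}{1-t}=\frac{\Delta_K}{1-t},\]
in agreement with the classical Alexander polynomial $\Delta_K=t^2-t+1$ of the trefoil.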
Wada's invariant equals the twisted Reidemeister torsion of a knot, and is closely related
to the twisted Alexander polynomial of a knot, which was first introduced by Lin \cite{Lin01}.
We refer to \cite{Ki96,FV10} for more details about Wada's invariant, its interpretation as twisted Reidemeister torsion and its relationship to twisted Alexander polynomials.
\subsection{Proof of Theorem \ref{mainthm}}\label{section:proof}
Before we provide the proof of Theorem \ref{mainthm} we need to introduce two more definitions.
First, given a non-zero polynomial $p(t)=\sum_{i=r}^s a_it^i\in \C[t^{\pm 1}]$ with $a_r\ne 0$ and $a_s\ne 0$, we write
\[ \deg(p(t))=s-r.\]
If $f(t)=p(t)/q(t)\in \C(t)$ is a non-zero rational function, we write
\[ \deg(f(t))=\deg(p(t))-\deg(q(t)).\]
Note that if Wada's invariant of a triple $(\pi,\epsilon,\a)$ is non-zero, then the degree of Wada's invariant $\Delta_{\pi,\epsilon}^\a$ is well defined.
\medskip
We can now formulate the following theorem.
\begin{theorem}\label{thm:technical}
Let $\pi$ be a group and let
\[ f\colon\pi \to \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
be a splitting, where $\varphi\colon B\to A$ denotes the associated monomorphism.
We denote by $\epsilon\colon \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\to \Z$ the canonical epimorphism which is given by
$\epsilon(t)=1$ and $\epsilon(a)=0$ for $a\in A$.
If $\a\colon \pi\to \mbox{GL}(k,\C)$ is a representation such that $\Delta_{\pi,\epsilon}^\a\ne 0$, then
\[ \deg \Delta_{\pi,\epsilon}^\a \leq k(\rank(B)-1).\]
\end{theorem}
In \cite{FKm06} (see also \cite{Fr12}) it was shown
that if $K$ is a knot and $\a\colon \pi(K)\to\mbox{GL}(k,\C)$ is a representation such that $\Delta_K^\a\ne 0$, then
\be \label{equ:genuslowerbound} \deg \Delta_K^\a \leq k(2\operatorname{genus}(K)-1).\ee
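For instance, for the trefoil knot and the trivial one-dimensional representation the computation above gives $\deg \Delta_K^\a=\deg(t^2-t+1)-\deg(1-t)=2-1=1=1\cdot(2\operatorname{genus}(K)-1)$, so the inequality (\ref{equ:genuslowerbound}) is sharp in this case.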
In light of the discussion in Section \ref{section:splitk}, we can view Theorem \ref{thm:technical} as a generalization of (\ref{equ:genuslowerbound}).
\begin{proof}
Let $\pi$ be a group and let
\[ \pi =\ll g_1,\dots,g_k,t\,|\, r_1,\dots,r_l,\varphi(b)=tbt^{-1}\mbox{ for all }b\in B\rr\]
be a splitting, where $\varphi\colon B\to A$ is a monomorphism and $B$ is a rank-$d$ subgroup of $A=\ll g_1,\dots,g_k\,|\, r_1,\dots,r_l\rr$.
We pick generators $x_1,\dots,x_{d}$ for $B$. Note that
\[ \begin{array}{ll} &\ll g_1,\dots,g_k,t\,|\, r_1,\dots,r_l,\varphi(b)=tbt^{-1}\mbox{ for all }b\in B\rr\\
=&\ll g_1,\dots,g_k,t\,|\, r_1,\dots,r_l,\varphi(x_1)^{-1}tx_1t^{-1},\dots,\varphi(x_{d})^{-1}tx_{d}t^{-1}\rr.\end{array}\] We write $K:=\mbox{Ker}(\epsilon)$.
We denote by $M$
the $(l+{d})\times (k+1)$-matrix over $\Z[\pi]$ that is given by all the Fox derivatives of the relators. We make the following observations.
\bn
\item The relators $r_1,\dots,r_l$ are words in $g_1,\dots,g_k$. The Fox derivatives of the $r_i$ with respect to the $g_j$ thus lie in $\Z[K]$.
\item For any $i\in \{1,\dots,k\}$ and $j\in \{1,\dots,{d}\}$ we have
\[ \frac{\partial}{\partial g_i}\left(\varphi(x_j)^{-1}tx_jt^{-1}\right)=\frac{\partial}{\partial g_i}\left(\varphi(x_j)^{-1}\right)+\varphi(x_j)^{-1}t\frac{\partial}{\partial g_i}x_j.\]
The same argument as in (1) shows that the first term lies in $\Z[K]$, and one can similarly see that the second term is of the form $t\cdot g$, where $g\in \Z[K]$.
\en
Thus $M_{\emptyset,\{k+1\}}$, the matrix obtained from $M$ by deleting the $(k+1)$-st column, is of the form
\[ M_{\emptyset,\{k+1\}}=P+tQ,\]
where $P$ and $Q$ are matrices over $\Z[K]$, and where all but the last ${d}$ rows of $Q$ are zero.
Let $\a\colon \pi\to \mbox{GL}(k,\C)$ be a representation and $J\subset \{1,\dots,d+l\}$ a subset with $|J|=d+l-k$.
It follows from the above that
\[ M_{J,\{k+1\}}=P_J+tQ_J,\]
where $P_J$ and $Q_J$ are matrices over $\Z[K]$ and where at most $d$ rows of $Q_J$ are non-zero.
We then see that
\[ \det((\a\otimes \epsilon)(M_{J,\{k+1\}}))=\det(\a(P_J)+t\a(Q_J)),\]
where at most $kd$ rows of $\a(Q_J)$ are non-zero.
If $\det(\a(P_J)+t\a(Q_J))$ is non-zero, then it follows from an elementary argument
(expanding the determinant multilinearly in the rows, only the at most $kd$ non-zero rows of $t\a(Q_J)$ can contribute a factor of $t$)
that
\[ \deg(\det(\a(P_J)+t\a(Q_J)))\leq kd.\]
We now consider
\[ Q_{k+1}:=\mbox{gcd}\{\det((\a\otimes \epsilon)(M_{J,\{k+1\}}))\,|\, J\subset \{1,\dots,d+l\} \mbox{ with }|J|=d+l-k\}.\]
By the above, if $Q_{k+1}\ne 0$, then $\deg(Q_{k+1})\leq kd$.
Since $\epsilon(t)=1$,
\[ \Delta_{\pi,\epsilon}^\a =Q_{k+1}\cdot \det((\a\otimes \epsilon)(1-t))^{-1}=Q_{k+1}\cdot \det(\mbox{id}_k-\a(t)t)^{-1}\in \C(t).\]
Finally, we suppose that $\Delta_{\pi,\epsilon}^\a\ne 0$. By the above, this implies that $Q_{k+1}\ne 0$.
In particular, we see that
\[ \begin{array}{rcl} \deg(\Delta_{\pi,\epsilon}^\a)&=&\deg\left(Q_{k+1}\cdot \det(\mbox{id}_k-\a(t)t)^{-1}\right)\\
&=&\deg(Q_{k+1})-\deg(\det(\mbox{id}_k-\a(t)t))\\
&=&\deg(Q_{k+1})-k\\
&\leq &kd-k=k(\rank(B)-1).\end{array}\]
This concludes the proof of the theorem.
\end{proof}
The last ingredient in the proof of Theorem \ref{mainthm} is the following result from \cite{FV12a}. The proof of the theorem builds on the virtual fibering theorem of Agol \cite{Ag08} (see also \cite{FKt12}), which applies for knot complements by the work of Liu \cite{Liu11}, Przytycki-Wise \cite{PW11,PW12a} and Wise \cite{Wi09,Wi12a,Wi12b}.
\begin{theorem}\label{thm:fv12a}
Let $K$ be a knot. Then there exists
a representation $\a\colon \pi(K)\to \mbox{GL}(k,\C)$ such that $\Delta_K^\a\ne 0$ and such that
\[ \deg \Delta_K^\a=k(2g(K)-1).\]
\end{theorem}
In \cite[Theorem~1.2]{FV12a} an analogous statement is formulated for twisted Reidemeister torsion instead of Wada's invariant.
The theorem, as stated, now follows from the interpretation (see, for example, \cite{Ki96,FV10}) of Wada's invariant as twisted Reidemeister torsion.
We can now formulate and prove the following result, which is equivalent to Theorem \ref{mainthm}.
\begin{theorem}\label{mainthm2}
Let $K$ be a knot. If $\pi(K)$ splits over a group $B$, then $\rank(B)\geq 2g(K)$.
\end{theorem}
\begin{proof}
Let $K$ be a knot and let
\[ f\colon \pi(K) \to \pi=\ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\]
be an isomorphism. We denote by $\epsilon\colon \ll A,t\,|\, \varphi(B)=tBt^{-1}\rr\to \Z$ the canonical epimorphism which is given by
$\epsilon(t)=1$ and $\epsilon(a)=0$ for $a\in A$.
Note that $\epsilon\circ f\colon \pi(K)\to \Z$ is an epimorphism. In particular, it sends the meridian to either $1$ or $-1$. By possibly changing the orientation of the knot, we can assume that $\epsilon\circ f\colon \pi(K)\to \Z$ sends the meridian to $1$.
By Theorem \ref{thm:fv12a}, there exists
a representation $\a\colon \pi(K)\to \mbox{GL}(k,\C)$ such that $ \Delta_K^\a\ne 0$ and such that
\[ \deg \Delta_K^\a=k(2g(K)-1).\]
By definition, we have
\[ \Delta_K^\a=\Delta_{\pi(K),\epsilon\circ f}^\a=\Delta_{\pi,\epsilon}^\a.\]
Theorem \ref{thm:technical} implies that
\[
\rank(B)
\geq \frac{1}{k}\deg\left( \Delta_{\pi,\epsilon}^\a\right)+1= \frac{1}{k}\deg\left( \Delta_{K}^\a\right)+1=2g(K). \]
\end{proof}
In this section we consider the mathematical and physical background of unshielded or 'naked' singularities, which are, according to one attempt of definition, singularities located in the causal past of the future null infinity. The concept of a 'singularity' in classical gravity is elusive as the extensions of different proposes for this concept seem to be either too extensive or too narrow for different reasonable purposes. Even the reasonable concept of shielded singularities just mentioned is a bit narrow in the sense that it is usually defined relative to strongly asymptotically predictable space-times. Due to this situation, it seems to be easier to prove existence results of singularities than to exclude a certain type of singularities in a broad variety of senses. In order to prove a convincing existence result of unshielded singularities it is sufficient to choose a rather strong concept of a singularity such as the blow up of a curvature invariant along a curve of finite generalised affine parameter length. Our considerations here are motivated by a certain structure of the Einstein field equation, but similar constructions can be done for a certain class of quasilinear hyperbolic equations of second order as well.
The field equations determine the coefficients $g_{ij},~0\leq i,j\leq n$ of the line element
\begin{equation}
ds^2=\sum_{i,j=0}^n g_{ij}dx^idx^j
\end{equation}
of a semi-Riemannian manifold $M$, where the zero component refers to time by convention. The usual assumption is that there are three spatial dimensions, i.e. $n=3$, but since the Kaluza-Klein paper appeared there have always been hypotheses around with $n>3$, where the Lorentz metric can be generalized to arbitrary dimension straightforwardly, and no dimension-specific Lorentz-group structure is needed here. It seems reasonable not to be specific with respect to dimension and just assume $n\geq 3$.
Depending on the nature of the singularities considered, they are in general located on the boundary $\partial M$ of a manifold $M$ with respect to some topology which has to be defined according to the purposes of the investigation. Such boundaries can be very bizarre and may have counterintuitive properties. Since our intention in this paper is to construct singularities related to curvature blow-ups, we may use a rather strong topology. Note that depending on the topology we can include or must exclude (parts of) the boundary from the manifold itself, especially if we want basic invariants such as dimension to be well-defined. We had better work with $C^p$-manifolds for $p\geq 1$, maybe with exceptions for specific very restricted sets, for otherwise we may run into problems concerning the invariance of domain and so on. We shall consider a rather strong topology for $M\setminus \partial M$, imposed via conditions on the component functions $g_{ij}$ of the metric $g$, which is a covariant $2$-form tensor with Lorentzian signature. The components $g_{ij},~0\leq i,j\leq n$ (where the component $0$ refers to time) are given in Euclidean coordinates and are in $C^{1,\delta}\left((-\epsilon,\epsilon)\times {\mathbb R}^n,{\mathbb R} \right)$, where the latter space denotes the space of differentiable functions with H\"{o}lder continuous first order derivatives of exponent $\delta\in (0,1)$.
Here, the local time interval $(-\epsilon,\epsilon)$ for some small $\epsilon >0$ indicates that we shall consider time-local solutions in a neighborhood of a Cauchy surface with a definite Lorentzian signature. The harmonic field equations involve second order derivatives of the metric components, so a metric with components in the function space $C^{1,\delta}\setminus C^2$ cannot be a classical solution of these hyperbolic equations. In our construction there is only one point of space-time where the metric fails to be in $C^2$; this point will be on the boundary of the domain of a classical solution. The solutions of the harmonic field equations assume data $g_{0ij}\in C^{1,\delta}\left({\mathbb R}^n,{\mathbb R} \right) ,~1\leq i,j\leq n$, which are smooth in the complement of the origin and in $H^1$. Here the subscript $0$ indicates that time is fixed at $t=t_0$ such that the functions $g_{0ij}:{\mathbb R}^n\rightarrow {\mathbb R}$ are restrictions of the functions $g_{ij}:{\mathbb R}^{n+1}\rightarrow {\mathbb R}$. Analogous assumptions are made for the first order time derivatives (indicated by an upper dot), i.e., $\stackrel{\cdot}{g}_{0ij}\in C^{1,\delta}\left({\mathbb R}^n,{\mathbb R} \right) ,~1\leq i,j\leq n$, and the first order spatial derivatives, i.e., $g_{0ij,k}\in C^{0,\delta}\left({\mathbb R}^n,{\mathbb R} \right) ,~1\leq i,j\leq n$, of the metric component functions.
Note that the curvature invariants involve second order derivatives of the metric tensor and hence may blow up for some metric components $g_{ij}$ with $g_{ij}\in C^{1,\delta}$. In order to prove time-local existence it is usually assumed that there is a Cauchy surface $C$ and a local time neighborhood of $C$ such that the metric components have a uniform Lorentzian signature.
In this neighborhood of invariant signature the metric tensor components $g_{ij}$ satisfy a harmonic field equation on $(-\epsilon,\epsilon)\times {\mathbb R}^n$ with respect to harmonic coordinates $(t,x)$, where $t\in (-\epsilon,\epsilon)$ for some small $\epsilon >0$. This allows us to subsume the field equations (in this region) under a standard class of quasilinear hyperbolic equations. The spatially global assumption may be weakened to a spatially local assumption (as is usual for hyperbolic equations), but a detailed treatment of this generalisation would involve too many technicalities and may obscure the main idea to be communicated here. Therefore we are a bit generous with respect to the Cauchy surface assumption.
Note that the loss of well-posedness of the field equations (beyond local-time well-posedness) may be due to singularities or to the loss of a given Lorentzian signature as time passes. In the following we sometimes use Einstein summation, and use the more classical notation with explicit summation signs if we want to emphasize some structure of the equations. Ordinary partial derivatives with respect to the variable $x^i$ are denoted either by a subscript $,i$ or by $\frac{\partial}{\partial x^i}$. It is well-known that the field equations can be subsumed under a certain class of quasilinear hyperbolic systems of second order, which were seemingly first studied systematically by Hilbert and Courant. This subsumption is used in \cite{HKM}, but the result obtained on the abstract level is not strong enough for our purposes. For this reason we stick with the special field equations, where we can use their special features. In the physical context, as long as considerations of higher dimension seem to be of a speculative type, it seems appropriate to consider the classical field equations in classical space with spatial dimension three and then remark that the result can be generalized (if needed). It is in space-time dimension $3+1$ that calculations based on the field equations produced predictions which were confirmed by experiment. So we think of space-time dimension $n+1=4$ in general, but keep the treatment general as this costs us nothing. Recall that the signature of the metric $g_{ij}$ is the number of positive eigenvalues of the matrix $\left( g_{ij}\right)$, i.e., the spatial dimension $n$ in our case, where the index zero is reserved for the time dimension.
In the following representation of the Einstein field equations in (\ref{harm}) Greek indices run from $0$ to $n$ and latin indices run from $1$ to $n$ (cf. also similar notation in \cite{HKM}). For a Lorentz metric on ${\mathbb R}\times {\mathbb R}^n$ we may consider the field equations, with respect to harmonic coordinates, as a first order quasi-linear hyperbolic system for the unknowns $g_{\mu\nu},g_{\mu\nu,k},h_{\mu\nu}=\frac{\partial g_{\mu\nu}}{\partial t}$ of the form
\begin{equation}\label{harm}
\left\lbrace \begin{array}{ll}
\frac{\partial g_{\mu\nu}}{\partial t}=h_{\mu\nu}\\
\\
\frac{\partial g_{\mu\nu,k}}{\partial t}=\frac{\partial h_{\mu\nu}}{\partial x^k}\\
\\
\frac{\partial h_{\mu\nu}}{\partial t}=-\frac{1}{g^{00}}\left(2g^{0k}\frac{\partial h_{\mu\nu}}{\partial x^k}+g^{km}\frac{\partial g_{\mu\nu,k}}{\partial x^m} -2H_{\mu\nu}\right) ,
\end{array}\right.
\end{equation}
with data $g_{\mu\nu}(t_0,.)$ and $h_{\mu\nu}(t_0,.)$ at some time $t_0$, and where
\begin{equation}
\begin{array}{ll}
H_{\mu\nu}\mbox{ is given in (\ref{hmunu}) below.}
\end{array}
\end{equation}
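For orientation we sketch, under the standing assumption that the harmonic reduction of the Ricci tensor takes its standard quasidiagonal form, how the third equation of (\ref{harm}) arises: in harmonic coordinates the vacuum field equations read
\begin{equation}
g^{00}\frac{\partial^2 g_{\mu\nu}}{\partial t^2}+2g^{0k}\frac{\partial^2 g_{\mu\nu}}{\partial t\partial x^k}+g^{km}\frac{\partial^2 g_{\mu\nu}}{\partial x^k\partial x^m}=2H_{\mu\nu},
\end{equation}
where $H_{\mu\nu}$ depends only on the metric and its first order derivatives. Substituting $h_{\mu\nu}=\frac{\partial g_{\mu\nu}}{\partial t}$ and $g_{\mu\nu,k}=\frac{\partial g_{\mu\nu}}{\partial x^k}$ and dividing by $g^{00}$ yields the first order form above.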
In this context we use the convention
\begin{equation}\label{gamma}
\Gamma^i=g^{\alpha\beta}\Gamma^{i}_{\alpha\beta},
\end{equation}
where we recall that the Christoffel symbols are defined to be
\begin{equation}
\Gamma^{\mu}_{\alpha\beta}=\frac{1}{2}g^{\mu\rho}\left(g_{\rho\alpha,\beta}+g_{\rho\beta,\alpha}-g_{\alpha\beta,\rho} \right).
\end{equation}
As we have space-time dimension $n+1$, this is a system for
\begin{equation}
\frac{(n+1)(n+2)}{2}(1+n+1)=\frac{(n+1)}{2}(n+2)^2~\mbox{unknowns}~g_{\mu\nu},g_{\mu\nu,k},h_{\mu\nu},
\end{equation}
(or $50$ unknowns in the case of space-time dimension $3+1$).
This system is another way of writing the vacuum field equations
\begin{equation}
R^h_{\mu\nu}=0
\end{equation}
with additional variables of course, where the upper script $h$ indicates that the Ricci tensor $R_{\mu\nu}$ is written in harmonic coordinates. The coordinates are called harmonic because, usually, the Einstein equations (without energy-momentum source) are written in coordinates where they take the form
\begin{equation}\label{einsthilb}
G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=0,
\end{equation}
which contains an additional 'potential' term $\frac{1}{2}g_{\mu\nu}R$. More formally, coordinates are called harmonic if $\Gamma^{\mu}(x)=0$, which implies that the coordinate functions themselves are harmonic with respect to the d'Alembert operator. Here, recall that the Ricci tensor is given by
\begin{equation}\label{curv}
R_{\mu\nu}=\frac{\partial \Gamma^{\alpha}_{\mu\nu}}{\partial x^{\alpha}}-\frac{\partial \Gamma^{\alpha}_{\alpha\mu}}{\partial x^{\nu}}-\Gamma^{\alpha}_{\mu\nu}\Gamma^{\beta}_{\alpha\beta}-\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha},
\end{equation}
and that the scalar curvature is given by
\begin{equation}
R=g^{\mu\nu}R_{\mu\nu}.
\end{equation}
Note that $R$ is a scalar function; the evaluation of a scalar at a point $p\in M$ is denoted by $R_p$.
Historically, it was a major step to find this additional term (saving covariance), and $G_{\mu\nu}$ is called the Einstein tensor. More precisely, in the presence of matter a conservation law should hold such that for (possibly variable) density $\rho_0$ and velocity $v^{\mu},~0\leq \mu\leq n$ the stress-energy-momentum tensor
\begin{equation}
T^{\mu\nu}=\rho_0 v^{\mu}v^{\nu},~0\leq\mu,\nu\leq n,
\end{equation}
satisfies
\begin{equation}
T^{\mu\nu}_{\hspace{0.3cm};\nu}=0,
\end{equation}
and this requirement leads to (\ref{einsthilb}) as we have
\begin{equation}
G^{\mu\nu}_{\hspace{0.3cm};\nu}=0.
\end{equation}
Maybe the tensor $G^{\mu\nu}$ should be called the Einstein-Hilbert tensor, because it is quite possible that Hilbert was the first who, in November 1915, wrote the equation $R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=T_{\mu\nu}$ on a blackboard in G\"{o}ttingen in a derivation via variational calculus (with a clear insight that the inhomogeneous term $-\frac{1}{2}g_{\mu\nu}R$ is needed to keep covariance in the presence of matter) \footnote{Note that four pages of Hilbert's corresponding publication are missing in the archive of the academy in Berlin while the variational principle is given in the correct form, and it seems very unlikely that Hilbert did not or could not derive the equations in (\ref{einsthilb}) from the variational principle - probably it is a paragraph in the missing pages.}.
It is a major step to formulate the field equations in the presence of matter. Note that in the absence of matter we have $g^{\mu\nu}g_{\mu\nu}=n+1$, and therefore we get
\begin{equation}
g^{\mu\nu}\left( R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\right)=R-\frac{n+1}{2}R=0,
\end{equation}
hence $R=0$, and the field equations reduce to the vacuum field equations
\begin{equation}
R_{\mu\nu}=0.
\end{equation}
Riemann could have written them down (or maybe he did), but Einstein gave meaning to them. The harmonic coordinates used above can also be used in the presence of matter, of course.
In this article we construct singular solutions for the vacuum equations. We note that our method can be applied to extended equations of the form
\begin{equation}
G_{\mu\nu}+\Lambda g_{\mu\nu}=T_{\mu\nu},
\end{equation}
where $\Lambda$ is a cosmological constant, and $T_{\mu\nu}$ is the energy-momentum tensor. Such generalisations depend on conditions on the additional terms, of course. For example, a positive cosmological constant $\Lambda$ has a damping effect in the region where the metric tensor satisfies a Lorentz condition (and can be written in harmonic form). However, generalisations of the following results are possible for positive and negative cosmological constants. Actually, sufficient conditions for the construction are described in the last section in the context of quasilinear hyperbolic systems of second order, and the consequences for the Einstein field equations can be drawn as long as the field equations can be subsumed under such systems together with a set of (some type of) local conditions (cf. the last section). Up to the 'material' term $T_{\mu\nu}$ the field equations look locally like ordinary wave equations of course (easy to solve), but globally these simple equations are glued together, which makes them nonlinear and difficult to solve. Accordingly, most of the research concerns specific solutions of the field equations, while general research is more in the context of hyperbolic systems of second order, which are investigated by more general methods. Results are then applied to the field equations without using their special structure. For example, in \cite{HKM} it is observed that the field equations for a Lorentz metric can be subsumed under hyperbolic systems of second order of the form
\begin{equation}\label{hypsec}
a_{00}\frac{\partial^2\psi}{\partial t^2}=\sum_{i,j=1}^na_{ij}\frac{\partial^2\psi}{\partial x^i\partial x^j}+\sum_{i=1}^n\left(a_{0i}+a_{i0} \right)\frac{\partial^2\psi}{\partial t\partial x_i}+b,
\end{equation}
where $\psi=\left(\psi_1,\cdots,\psi_n \right)^T$ is an $n$-vector-valued function of time $t\in [0,T]$ and spatial variables $x=\left(x^1,\cdots,x^n \right)\in {\mathbb R}^n$, $a_{ij},~0\leq i,j\leq n$, is a collection of $n\times n$-matrix valued functions of the suppressed arguments $t,x,\psi,\frac{\partial \psi}{\partial t},\nabla\psi$, and $b$ is an $n$-vector-valued function of the same arguments (the latter sentence is essentially a citation of \cite{HKM}, p. 274).
Local triviality and global complexity are characteristics of the Einstein field equations, as their derivation principles are simple (cf. the equivalence principle) while second order tensors like the Ricci tensor (which contain the global information) can have a rich structure, so rich that general investigations of the field equations work with further assumptions, for example with the assumption of asymptotically predictable space-times. The difficulty of defining certain concepts such as 'singularity', 'black hole', or 'weak cosmic censorship' is related to that richness, such that these concepts are defined relative to such classes of space-times (such as the mentioned class of strongly asymptotically predictable space-times). We recall a related class of concepts which leads us to a concept of black holes, unshielded (naked) singularities, and a concept of weak censorship. We refer the reader to \cite{HE,N1,N2,P,W} for a more detailed discussion of these notions. First we introduce a class of space-times for which black holes are well-defined.
\begin{defi}
A strongly asymptotically predictable space-time is an asymptotically flat space-time $(M,g)$ such that there exists an open region $U$ in the conformal space-time extension $\left( \tilde{M},\tilde{g}\right)$ such that
\begin{itemize}
\item[i)] $U\supset M\cap J^-\left( I^+ \right)$,
\item[ii)] $(U,\tilde{g})$ is globally hyperbolic.
\end{itemize}
\end{defi}
In this context of asymptotically predictable space-times black holes can be defined without reference to the elusive concept of a singularity. We have
\begin{defi}
The region $B\subseteq M$ is called a black hole of a strongly asymptotically predictable space-time $(M,g)$ if it is the complement of the causal past $J^-\left(I^+\right)$ of the future null infinity $I^+$, i.e., $B=M\setminus J^-\left(I^+\right)$.
\end{defi}
\begin{defi}
The boundary $H^+:=M\cap\partial J^-\left(I^+\right)$ is the event horizon of a black hole.
\end{defi}
\begin{defi}
A singularity of space-time is called naked if it is located in the causal past of null infinity $J^-\left(I^+\right)$.
\end{defi}
The weak cosmic censorship conjecture maintains that there is no naked singularity. This concept is due to Hawking and Penrose, of course. It seems to be a rather involved concept, but it is a certain way of making precise Penrose's early statement of 1969:
\begin{center}
''does there exist a 'cosmic censor' who forbids the appearance of naked singularities closing each one in an absolute event horizon?''
\end{center}
The strong cosmic censorship hypothesis for a metric Lorentzian manifold $\left( M,g_{ij}\right)$ is often defined by strong global hyperbolicity, i.e., the requirement that there is a Cauchy surface $S$ such that
\begin{equation}
M=D^+\left(S\right)\cup D^-\left(S\right),
\end{equation}
where $D^+\left(S\right)$ (resp. $D^-\left(S\right)$) are arcwise connected components separated by $S$ and represent the causal future and the causal past relative to the Cauchy surface $S$ defined by causal curves.
The elusiveness of the concept of singularities then led to a weak interpretation of the concept of a singularity in terms of geodesic incompleteness. Next we discuss some notions of the vague concept of singularities and different attempts to make it precise. This is important in order to understand the role of the Hawking-Penrose theorem, the conjectures of weak and strong cosmic censorship, and the results and arguments of this paper, which can be read as comments on these theorems and claims.
Defining singularities by geodesic incompleteness is a well-motivated approach because it seems that other definitions are far too narrow or far too wide. However, we may use a strong definition of singularity and prove its existence for a generic set of Lorentz metrics satisfying the Einstein evolution equation in order to disprove weak cosmic censorship statements which are based on weaker (wider or more extensive) notions of singularities. First let us recall the relevant notions. Note that the line element $ds^2= g_{ij}dx^idx^j$ defines a metric
\begin{equation}
g:TM\times TM\rightarrow {\mathbb R},
\end{equation}
which is a bilinear form, and where $TM$ denotes the tangential bundle on $M$.
\begin{defi}
The generalized affine parameter length of a curve $\gamma :[0,c)\rightarrow M$ with respect to a frame $E_s=(e_i(s))_{0\leq i\leq n},~0\leq s\leq c$, of a family of bases of the tangent spaces along $\gamma$ (abbreviated by g.a.p.) is given by
\begin{equation}
l_E(\gamma)=\int_0^c\left(\sum_{i=0}^{n}g\left(\stackrel{\cdot}{\gamma}(s),e_i(s) \right)^2 \right)^{1/2} ds
\end{equation}
\end{defi}
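For instance, if $\gamma$ is a geodesic with affine parameter $s$ and the frame $E$ is parallel along $\gamma$, then each $g(\stackrel{\cdot}{\gamma}(s),e_i(s))$ is constant by metric compatibility, so the integrand is constant and the g.a.p. length is finite if and only if the affine parameter interval $[0,c)$ is finite. The g.a.p. length thus generalizes affine parameter length to arbitrary (also non-geodesic) curves.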
\begin{defi}
A curve $\gamma :[0,c)\rightarrow M$ is incomplete if it has finite g.a.p. length with respect to some frame, and it is inextensible if there is no limit in $M$ of $\gamma(s)$ as $s$ approaches $c$. Furthermore, a space-time is called incomplete if it contains an incomplete inextensible curve.
\end{defi}
Singularities may be approached by future-directed incomplete inextensible curves, i.e. curves with nonnegative Lorentz-metric at all points of the curve, and by past-directed inextensible curves, i.e., curves with nonpositive Lorentz-metric at all points of the curve. Future-directed curves starting at a point $p\in M$ are denoted by $I^+(p)$ and past-directed curves are denoted by $I^-(p)$. Null-curves or light-curves starting at a point $p$ are curves with zero value of the Lorentz-metric at all points of the curve considered, and are located at the boundary of $I^+(p)$, where it is a matter of taste to include or exclude the boundary of $I^+(p)$ in the definition of $I^+(p)$ (we include this boundary here). The set of singularities related to a Lorentz manifold $(M,g_{ij})$ which are endpoints of inextensible curves in $I^+(p)$ starting from some point $p\in M$ is denoted by $M^+$, and the set of singularities which are endpoints of inextensible curves in $I^-(p)$ starting from some point $p\in M$ is denoted by $M^-$. For a curve $\gamma :[0,c)\rightarrow M$ with positive $c\in {\mathbb R}\cup \left\lbrace \infty\right\rbrace $ we define
\begin{equation}
I^+(\gamma):=\cup_{q\in \gamma([0,c))}I^+(q),~I^-(\gamma):=\cup_{q\in \gamma([0,c))}I^-(q).
\end{equation}
\begin{defi}\label{geoinc}
A space-time is geodesically incomplete if it contains a geodesic curve which is incomplete.
\end{defi}
The Hawking-Penrose theorem is stated below; as it is not our main concern here, we refer to \cite{HE} for a precise discussion of its assumptions. We have
\begin{thm}\label{HP}
(Hawking Penrose Theorem) Assume that a time oriented space-time $(M,g_{ij})$ satisfies the conditions
\begin{itemize}
\item[i)] $R_{ij}X^iX^j\geq 0$ for any non-spacelike vector $X^i$.
\item[ii)] The timelike and null generic conditions are satisfied.
\item[iii)] There are no closed timelike curves.
\item[iv)] One of the following three conditions holds:
\subitem iva) There exists a trapped surface.
\subitem ivb) There exists an achronal set without edges.
\subitem ivc) There exists a $p\in M$ such that for each future directed null-geodesic through $p$ the expansion becomes negative.
\end{itemize}
Then the manifold $(M,g_{ij})$ contains at least one incomplete timelike or null geodesic.
\end{thm}
Generic occurrence of singularities for the field equations, where 'generic' is to be understood as 'generic relative to physically reasonable space-times', is one feature of the field equations which may be successfully expressed by Theorem \ref{HP}. Another proposed feature of Hawking and Penrose is that singularities are (at least weakly) censored, i.e. shielded by black holes, or that it is even not possible to reach the singularity from any point in space-time by an incomplete curve of finite g.a.p. length.
It is in this respect that our result provides contradictory evidence. Let us first recall the principles of strong and weak cosmic censorship.
\begin{defi}\label{scc}
(strong cosmic censorship). A singularity point $p\in M^+$ (resp. $p\in M^-$) of a space-time $(M,g)$ is strongly censored if for every $q\in M$ we have $[p]\nsubseteq I^+(q)$ (resp. $I^-(q)$). Accordingly, a space-time is said to have strongly censored singularities if all singularities in $M^+$ and $M^-$ are strongly censored.
\end{defi}
\begin{defi}\label{wcc}
(weak cosmic censorship). A singularity point $p\in M$ of a space-time $(M,g)$ is weakly censored if it is not in the causal past of the future null infinity. Accordingly, a space-time is said to have weakly censored singularities if for all Cauchy surfaces of $M$ all singularities in $M^+$ and $M^-$ are weakly censored.
\end{defi}
The reason for the characterization of singularities by geodesically incomplete curves is that other characterizations turn out to be too narrow or too extensive, but counterexamples concerning the cosmic censorship hypotheses can be based on stronger notions of singularity such as the following one.
\begin{defi}\label{ss}
We say that $p\in M^+$ is a strong scalar curvature singularity if there is an incomplete g.a.p. finite curve $\gamma:[0,c)\rightarrow M$ such that
\begin{equation}
\lim_{s\uparrow c}\gamma(s)=p,
\end{equation}
and
\begin{equation}
\forall~\epsilon >0~\forall~C>0~\exists~s\in [c-\epsilon,c):~{\big |}R_{\gamma(s)}{\big |}\geq C.
\end{equation}
We also say that the scalar curvature invariant blows up at $p$.
\end{defi}
Note that curvature tensor invariants involve second order spatial derivatives of the metric component functions $g_{ij}$. For this reason there are metric functions which have H\"{o}lder continuous first order derivatives and where curvature invariants blow up. There are even metric functions $g_{ij}$ which have H\"{o}lder continuous first order derivatives and are smooth in the complement of the origin. First we give an example.
\begin{exmp}
We observe that data can have weak singularities at a singular point at the boundary of space-time where the field equations can still be solved in the complement of this point. The weak singularities at the origin are located at the boundary of the solution domain and are not part of the classical solution constructed. The data can be chosen such that they are part of a weak solution of the field equations.
For example, consider a metric of spatial dimension $n=2$ which depends only on one spatial variable, say $x^1$, and is constant with respect to time such that for the spatial indices $1\leq i,j\leq 2$ and for fixed time we have
\begin{equation}
\left(g_{ij} \right)_{1\leq i,j\leq 2} =\begin{pmatrix}
1 + \phi_{\delta}(x^1)f(x^1) & 0\\
0 & 1
\end{pmatrix}
,\end{equation}
where $f$ is a univariate function on the field of real numbers ${\mathbb R}$ of the form $z\rightarrow f(z)=z^3\cos\left(\frac{1}{z^{\alpha}}\right)$ for $\alpha\in (0.5,1)$ and $z\neq 0$, with $f(0)=0$, and $\phi_{\delta}\in C^{\infty}$ is a function with support $(-\delta,\delta)$ and with $\phi_{\delta}(0)=1$ (as known from partitions of unity). Note that
\begin{equation}
f\not\in C^2~\mbox{ for }~\alpha \in (0.5,1),\quad\mbox{while}\quad f\in H^2_{loc}~\mbox{ for }~\alpha \in (0.5,0.75).
\end{equation}
In particular, the metric components satisfy $g_{ij}\in H^2_{loc}$ for $\alpha \in (0.5,0.75)$.
Evaluating the derivatives of $f$ one observes that the second derivative is discontinuous and even blows up at $z=0$. It follows that second order derivatives of the metric $g_{ij}$ with respect to the spatial variable $x^1$ blow up at the origin. As some first order derivatives of the Christoffel symbols in the definition of the Ricci tensor $R_{ij}$ in (\ref{curv}) contain such non-vanishing second order spatial derivatives of the metric $g_{ij}$, a simple calculation shows that the non-vanishing second order derivative terms do not cancel, and as $g_{ij}$ is positive definite and bounded with a bounded inverse $g^{ij}$, the scalar curvature $R=g^{ij}R_{ij}$ blows up at the origin and is smooth in the complement of the origin. Such phenomena are consistent with constraint equations on the Cauchy surface, as we shall observe later.
\end{exmp}
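The blow-up of the second derivative in this example can also be checked symbolically and numerically. The following minimal script is a hypothetical verification sketch (using the sympy library; the exponent $\alpha=2/3$ and the sample points $z_k$ are illustrative choices, not part of the argument):
\begin{verbatim}
import sympy as sp

# f(z) = z^3 * cos(z^(-alpha)) as in the example above,
# with an illustrative exponent alpha in (1/2, 1).
z = sp.symbols('z', positive=True)
alpha = sp.Rational(2, 3)

f = z**3 * sp.cos(z**(-alpha))
f2 = sp.expand(sp.diff(f, z, 2))
print(f2)
# Among the printed terms one finds -alpha**2 * z**(1-2*alpha) * cos(z**(-alpha));
# since 1 - 2*alpha < 0 for alpha > 1/2, this term is unbounded as z -> 0,
# while all other terms tend to zero.

# Numerical illustration of the blow-up along z_k with cos(z_k**(-alpha)) = 1:
for k in [1, 10, 100]:
    zk = (2 * sp.pi * k) ** (-1 / float(alpha))
    print(k, sp.N(f2.subs(z, zk)))
\end{verbatim}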
The latter example is rather generic. We have
\begin{prop}\label{mainprop}
Let $C^{1,\delta}\equiv C^{1,\delta}\left({\mathbb R}^n\right)$ denote the space of differentiable functions with H\"{o}lder continuous first order derivatives of H\"{o}lder exponent $\delta \in (0,1)$ or with Lipschitz continuous first order derivatives. Define
\begin{equation}
|f|_{1,\delta}:=\sum_{0\leq |\alpha|\leq 1}\left( \sup_{y\in {\mathbb R^{n}}}{\big |}D^{\alpha}_xf(y){\big |}+\sup_{y\neq z,~y,z\in {\mathbb R}^n}\frac{{\big |}D^{\alpha}_xf(z)-D^{\alpha}_xf(y){\big |}}{|z-y|^{\delta}}\right),
\end{equation}
where $\delta\in (0,1]$.
For fixed time $t=t_0\in {\mathbb R}$ we consider the metric functions $g^{t_0}_{ij}\equiv g_{ij}(t_0,.)$ for $0\leq i,j\leq n$. Then in any neighborhood of $g^{t_0}_{ij}\in C^2\cap H^2$ in $\left( C^{1,\delta},|.|_{1,\delta}\right)$ there is a function $\tilde{g}^{t_0}_{ij}\in C^{1,\delta}\cap H^2$ such that typical curvature invariants of $\tilde{g}^{t_0}_{ij}$ (such as the scalar curvature) blow up at the origin and are classically well-defined in the complement of the origin.
\end{prop}
\begin{proof}
The scalar curvature satisfies
\begin{equation}
R=g^{\mu\nu}R_{\mu\nu}=g^{\mu\nu}\left( \frac{\partial \Gamma^{\alpha}_{\mu\nu}}{\partial x^{\alpha}}-\frac{\partial \Gamma^{\alpha}_{\alpha\mu}}{\partial x^{\nu}}+\Gamma^{\alpha}_{\mu\nu}\Gamma^{\beta}_{\alpha\beta}-\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha}\right) ,
\end{equation}
where the last three terms in
\begin{equation}
\begin{array}{ll}
\Gamma^{\gamma}_{\alpha\beta,\delta}=\frac{1}{2}g^{\gamma\rho}_{,\delta}\left(g_{\rho\alpha,\beta}+g_{\rho\beta,\alpha}-g_{\alpha\beta,\rho} \right)\\
\\
\hspace{1cm}+\frac{1}{2}g^{\gamma\rho}\left(g_{\rho\alpha,\beta,\delta}+g_{\rho\beta,\alpha,\delta}-g_{\alpha\beta,\rho,\delta} \right)
\end{array}
\end{equation}
contain second order derivatives of the metric. It is essential to consider the local behaviour around the origin: it is always possible to extend such fields so that they have an appropriate behavior at spatial infinity and the Cauchy problem is well-defined (cf. the next section). Let $C>0$ be some constant.
First, for $\alpha \in (0.5,1)$ consider the univariate function $f:{\mathbb R}\rightarrow {\mathbb R}$ with
\begin{equation}\label{dataf}
f(z)=\left\lbrace \begin{array}{ll}
(C+z^3\cos\left(\frac{1}{z^{\alpha}}\right))\phi_{\delta}(z),~ \mbox{if}~z\neq 0\\
\\
C~\mbox{if}~z=0.
\end{array}\right.
\end{equation}
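Note that, with these choices, $f$ is differentiable at $z=0$ with $f'(0)=0$ (a short verification): for small $z\neq 0$ we have $\phi_{\delta}(z)=1$ and hence
\begin{equation}
{\Big |}\frac{f(z)-f(0)}{z}{\Big |}={\Big |}z^2\cos\left(\frac{1}{z^{\alpha}}\right){\Big |}\leq z^2\downarrow 0~\mbox{ as }~z\rightarrow 0.
\end{equation}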
For $z\in (-\delta,\delta)$ the first derivative is
\begin{equation}
f'(z)=\left\lbrace \begin{array}{ll}
3z^2\cos\left(\frac{1}{z^\alpha}\right)+\alpha z^{2-\alpha}\sin\left(\frac{1}{z^{\alpha}}\right) ,~ \mbox{if}~z\neq 0,\\
\\
0~\mbox{if}~z=0.
\end{array}\right.
\end{equation}
For $z\in (-\delta,\delta)$ the second derivative is
\begin{equation}
f''(z)=\left\lbrace \begin{array}{ll}
6z\cos\left(\frac{1}{z^{\alpha}}\right)+3 \alpha z^{1-\alpha}\sin\left(\frac{1}{z^{\alpha}}\right)\\
\\
+\alpha (2-\alpha) z^{1-\alpha}\sin\left(\frac{1}{z^{\alpha}}\right)-\alpha^2 z^{1-2\alpha}\cos\left(\frac{1}{z^{\alpha}}\right),~ \mbox{if}~z\neq 0,\\
\\
\mbox{undefined}~\mbox{if}~z=0.
\end{array}\right.
\end{equation}
The only singular term is
\begin{equation}
-\alpha^2 z^{1-2\alpha}\cos\left(\frac{1}{z^{\alpha}}\right)
\end{equation}
Hence $f\not\in C^2$, and for $\alpha\in (0.5,1)$ we have a singularity of order $1-2\alpha\in (-1,0)$ with upper bound $\frac{C}{|z|^{2\alpha-1}}$, which is locally $L^2$-integrable for $\alpha\in (0.5,0.75)$.
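For completeness we note that $f$ is nevertheless in $C^{1,\delta}$ for a suitable H\"{o}lder exponent (a short computation, using only the bound on $f''$ stated above): since $|f''(z)|\leq C|z|^{1-2\alpha}$ for small $z\neq 0$, for $0<w<z$ we get
\begin{equation}
|f'(z)-f'(w)|\leq \int_w^z C t^{1-2\alpha}dt\leq C' (z-w)^{2-2\alpha},
\end{equation}
and analogously for arguments of mixed sign, such that $f\in C^{1,\delta}$ with $\delta=2-2\alpha\in (0,1)$ for $\alpha\in (0.5,1)$.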
Now consider a $C^{\infty}$-function $\phi_{\delta,\epsilon}:{\mathbb R}\rightarrow {\mathbb R}$ with support in $(-\epsilon,\epsilon)$ and such that $\phi_{\delta,\epsilon}(z)=1$ for $z\in (-\delta,\delta)$, where $0<\delta<\epsilon$. Such functions are well known from the context of partitions of unity. Then in any neighborhood (with respect to the normed function space stated) of a metric $g_{ij}(t_0,.)$ (evaluated at some time $t_0$) we find a metric $\tilde{g}_{ij}(t_0,.)$ such that for some $\delta >0$ the function
\begin{equation}
x\rightarrow \tilde{g}_{ij}(t_0,x)
\end{equation}
where
\begin{equation}
\tilde{g}_{11}=g_{11}+\delta \cdot\phi_{\delta,2\delta}(x^1)f(x^1)
\end{equation}
stays in the neighborhood and such that the scalar curvature blows up at the origin and is well-defined in the complement of the origin. Note that we may choose $g_{\mu\mu}(0)=C>0$ for all $\mu$ such that the inverse of the metric tensor is well defined in a neighborhood of the Cauchy surface, and Lipschitz continuity of the metric tensor components $g_{\mu\nu}$ implies Lipschitz continuity of the components of the inverse.
\end{proof}
Constructions as in Proposition \ref{mainprop} can be used in order to define data $g_{ij}(0,.)$ on a Cauchy surface in a subspace of $C^{1,\delta}\cap H^2$. We may choose data which are smooth in the complement of the origin, such that curvature invariants blow up at the origin and are well-defined elsewhere. There are two additional constraints in the case of the Einstein field equation. First - as is well known - additional constraint equations have to be imposed to make the Cauchy form of the Einstein field equations well-defined. Second, if we want to prove local time existence by local contraction using viscosity limits of extended systems based on the harmonic field equations, then it is necessary to impose additional constraints on the data such that a) the harmonic field equations are well defined and b) a unique Lorentz signature is well defined in the neighborhood of the Cauchy surface. We shall formalize the items a) and b) in the next section. The construction of a local-time contraction is then based on an extended system where viscosity terms $\nu_0 \Delta g_{\mu\nu}$, $\nu_0 \Delta g_{\mu\nu,k}$ and $\nu_0 \Delta h_{\mu\nu}$ with a positive viscosity $\nu_0>0$, for $0\leq \mu,\nu \leq n$ and $1\leq k\leq n$, are added to each equation of the harmonic field equation system.
The upshot of the following considerations is as follows. In the following section we define an appropriate data class (satisfying a uniform Lorentz signature condition and certain integrability conditions) and state the main theorem concerning the existence of time-local solutions of the Einstein field equations with strong singularities in the sense of definition \ref{ss}. Here the constraint equations for the harmonic field equations hold in a weak sense at the origin and in the classical sense elsewhere. This result implies that there are counterexamples to various interpretations of the strong or weak cosmic censorship hypotheses. Note that the time-local existence theorem is not subsumed by \cite{HKM}. As a consequence there is also a counterexample to the weak cosmic censorship. In the last section we prove the theorem.
\section{Construction of a class of unshielded singular solutions of the harmonic field equation}
Next we are concerned with the main theorem which asserts the existence of unshielded (naked) singular solutions of the harmonic field equations. We construct a spatially global and time-local solution $g_{\mu\nu}$ on $(-T,T)\times {\mathbb R}^n$ for $C^{1,\delta}\cap H^2$-data on a Cauchy surface which are smooth in the complement of the origin and such that $g_{\mu\nu}(t,.),~0\leq \mu,\nu \leq n$ is in the Sobolev space $H^{2}$ for $t\in (-T,T)$. The initial data functions have a certain Lorentz signature on the Cauchy surface. However, first we note that the Cauchy surface is chosen to be spacelike, where we may assume that it is given by $x_0=t=0$. We assume that
\begin{equation}
g_{\mu\nu}=\left(S^T\Lambda S \right)_{\mu\nu},
\end{equation}
where
\begin{equation}
\begin{array}{ll}
\Lambda=\mbox{diag}((\lambda_j)_{0\leq j\leq n}),~\mbox{sign}(\lambda_0)=-1,~\mbox{sign}(\lambda_j)=1~\mbox{for}~1\leq j\leq n.
\end{array}
\end{equation}
In order to have a well-defined Cauchy problem we need a constant Lorentz signature on the Cauchy surface and in a vicinity of the Cauchy surface. We can express this condition in terms of the metric tensor.
Note that the $2$-covariant components of the line element $ds^2= g_{ij}dx^idx^j$ determine a bilinear form
\begin{equation}
g:TM\times TM\rightarrow {\mathbb R},
\end{equation}
on the tangential bundle $TM$ of space-time $M$, where around each point $(t,x)$ of space-time we have a local neighborhood $U\subset M$, an open set $V\subset {\mathbb R}^{n+1}$, and a coordinate transformation $i:V\rightarrow U$ with $i(s,y)=(t,x)$ such that for a certain scale with light velocity $c=1$ we have
\begin{equation}\label{gtx}
g_{(t,x)}\circ i(s,y)(z,z)
=
z_{i_0}^2-\sum_{i\in \left\lbrace 0,1,\cdots,n \right\rbrace \setminus \left\lbrace i_0\right\rbrace}z_i^2,
\end{equation}
where $i_0$ denotes the index of time. We may also write $g_{(t,x)\mu\nu}\circ i(s,y)(z,z)=\eta_{\mu\nu}z^{\mu}z^{\nu}$ with the Minkowski form $\eta=\mbox{diag}(1,-1,\cdots,-1)$. We have chosen $i_0=0$, but this is just by convention, i.e., not determined by the equation itself. For the simple choice of a Cauchy surface $H$ we may choose a corresponding function $i_{H}$ (an instance of the function $i$ in (\ref{gtx})) and then stipulate that the Lorentz signature is strictly constant in the sense that for all $x\in {\mathbb R}^n$ and $z\neq 0$
\begin{equation}\label{gf}
x\rightarrow \mbox{sign}(g_{(0,x)}\circ i_{H}(0,x)(z,z))
\end{equation}
is a constant function, where for all $x\in {\mathbb R}^n$ and for all $z\neq 0$ we have $g_{(0,x)}\circ i_{H}(0,x)(z,z)\neq 0$. In this notation the condition of a constant Lorentz signature can be easily generalised to general spacelike Cauchy surfaces. In order to ensure that the constant Lorentz signature condition holds in the vicinity of the Cauchy surface, it would be useful to have a uniform bound of the function (\ref{gf}) in the sense that $\mbox{inf}_{z\neq 0,~x\in {\mathbb R}^n}|g_{(0,x)}\circ i_{H}(0,x)(z,z)|\geq c>0$ for some $c$. However, in order to prove local contraction we need some decay at spatial infinity. A natural uniform Lorentz condition then is
\begin{equation}
\inf_{(0,x)\in \left\lbrace 0\right\rbrace \times {\mathbb R}^n}{\Big |}\frac{g_{(t,x)00}\circ i(s,y)(z,z)}{\sum_{\mu,\nu=1}^ng_{(t,x)\mu\nu}\circ i(s,y)(z,z)}{\Big |}\neq 1,
\end{equation}
and
\begin{equation}
\sup_{(0,x)\in \left\lbrace 0\right\rbrace \times {\mathbb R}^n}{\Big |}\frac{g_{(t,x)00}\circ i(s,y)(z,z)}{\sum_{\mu,\nu=1}^ng_{(t,x)\mu\nu}\circ i(s,y)(z,z)}{\Big |}\neq 1.
\end{equation}
As we have bounded continuous metric components, a uniform Lorentz condition for the initial data $g_{0\mu\nu},~0\leq \mu,\nu\leq n$ is obviously sufficient for the Lorentz signature to be constant in the vicinity of the Cauchy surface. Next we prepare a specific data class which is appropriate for our construction of a local contraction theorem. Solutions are constructed via viscosity limits of extended systems.
Consider the third equation in (\ref{harm}) extended by a viscosity term $\nu_0\Delta$ (where $\Delta$ denotes the Laplacian and $\nu_0$ is a positive real viscosity constant), i.e., the equation
\begin{equation}
\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial t}=\nu_0\Delta h^{(\nu_0)}_{\mu\nu}-g^{(\nu_0)}_{00}\left(2g^{(\nu_0)0k}\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial x^k}+g^{(\nu_0)km}\frac{\partial g^{(\nu_0)}_{\mu\nu,k}}{\partial x^m} -2H^{(\nu_0)}_{\mu\nu}\right),
\end{equation}
where
\begin{equation}\label{hmunu0}
\begin{array}{ll}
H^{(\nu_0)}_{\mu\nu}\equiv H_{\mu\nu}\left(g^{(\nu_0)}_{\alpha\beta},\frac{\partial g^{(\nu_0)}_{\alpha\beta}}{\partial x^{\gamma}} \right)
\end{array}
\end{equation}
and $H_{\mu\nu}$ is defined in (\ref{hmunu}) below. Here we put the viscosity constant $\nu_0$ into brackets in the superscript in order to avoid confusion with a running index. A local time solution $g^{(\nu_0)}_{\mu\nu},~0\leq \mu,\nu\leq n$, in $C^{1,2}\left((0,T)\times {\mathbb R}^n\right)$ has the representation
\begin{equation}
\begin{array}{ll}
h^{(\nu_0)}_{\mu\nu}=h^{(\nu_0)}_{\mu\nu}(0,.)\ast_{sp}G_{\nu_0}
-\left(g^{(\nu_0)}_{00}2g^{(\nu_0)0k}\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial x^k}\right)\ast G_{\nu_0}\\
\\
-\left( g^{(\nu_0)}_{00}g^{(\nu_0)km}\frac{\partial g^{(\nu_0)}_{\mu\nu,k}}{\partial x^m}\right)
\ast G_{\nu_0} +\left( 2g^{(\nu_0)}_{00}H^{(\nu_0)}_{\mu\nu}\right)\ast G_{\nu_0}
\end{array}
\end{equation}
where $G_{\nu_0}$ is the fundamental solution of a heat equation with viscosity constant $\nu_0$ (cf. below) and $\ast_{sp}$ and $\ast$ denote spatial and space-time convolutions respectively. For solutions which vanish at spatial infinity such representations can be rewritten by partial integration and by using the convolution rule, such that the convoluted functions are terms built of products of the metric components $g^{(\nu_0)}_{\mu\nu}$, their spatial and time derivatives $g^{(\nu_0)}_{\mu\nu,k}$ and $h^{(\nu_0)}_{\mu\nu}$, and corresponding entries of the inverse matrices $g^{(\nu_0),\mu\nu}$ and their derivatives $g^{(\nu_0),\mu\nu}_{,k}$, and where convolutions involve the Gaussian $G_{\nu_0}$ and first spatial derivatives of the Gaussian.
Recall that for two scalar spatial functions
\begin{equation}
\mbox{ if }f,g\in H^s~\mbox{ for }s>\frac{n}{2}~\mbox{ then }fg\in H^s.
\end{equation}
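This is the standard Banach algebra property of Sobolev spaces; quantitatively, one has an estimate of the form
\begin{equation}
{\big \|}fg{\big \|}_{H^s}\leq C_s{\big \|}f{\big \|}_{H^s}{\big \|}g{\big \|}_{H^s}~\mbox{ for }~s>\frac{n}{2},
\end{equation}
with a constant $C_s$ which depends only on $s$ and $n$.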
Hence it is natural to require
\begin{equation}\label{idata1}
g^{(\nu_0)}_{\mu\nu}(0,.)=g_{\mu\nu}(0,.)\in H^{s}~\mbox{ for some $s>\frac{n}{2}+1$}
\end{equation}
and
\begin{equation}\label{idata2}
h^{(\nu_0)}_{\mu\nu}(0,.)=h_{\mu\nu}(0,.)\in H^{s}~\mbox{ for some $s>\frac{n}{2}+1$}.
\end{equation}
In addition we require that for all $0\leq \mu,\nu,\lambda,\rho\leq n$ we have
\begin{equation}\label{idata3}
{\big |}g^{(\nu_0)}_{\mu\nu}(0,.)g^{(\nu_0)\lambda\rho}(0,.){\big |}\leq C
\end{equation}
for some finite constant $C>0$.
\begin{defi}\label{initdatadef}
We call a list of data $g_{\mu\nu}(0,.),~0\leq \mu,\nu\leq n$ admissible if a uniform Lorentz condition is satisfied on the Cauchy surface (initial data surface), and if in addition the conditions (\ref{idata1}), (\ref{idata2}), and (\ref{idata3}) are satisfied. We remark that this definition is given with respect to the hyperplane $x_0=t=0$, but the form of the statement is chosen such that it can be generalised to spacelike Cauchy surfaces straightforwardly.
\end{defi}
Admissible data lists are abundant, although they are a bit specific concerning the behaviour at spatial infinity. These requirements allow us to concentrate on the construction of a naked singularity, where the requirements at spatial infinity are similar to artificial boundary conditions, which facilitate the proof of the local contraction theorem.
Note that in the given setting the harmonic field equations can be subsumed by a class of quasilinear hyperbolic equations of second order. There is a local existence theory for such type of equations (cf. \cite{HKM}), which proves the existence of a time-local and unique solution up to a small time horizon $T>0$ for regular data, where the components of the metric are assumed to be at least of classical regularity $C^2$ (with related assumptions for the first order spatial and time derivatives). However, this local existence result cannot be applied in our situation, because we have only $C^{1,\delta}$ regularity at one point, and for some examples in this class second order derivatives may not even be integrable. There are two possible reasons for time locality of existence theorems. One possible reason is that a solution 'develops' singularities after finite time. The other reason is that the metric does not satisfy a Lorentz condition after some time, and then cannot be subsumed under the type of quasilinear hyperbolic systems for which the local existence results are proved. However, if a strict Lorentz condition is satisfied uniformly on a Cauchy surface $C$ at time $t=t_0$, then it holds in a neighborhood for time $t\in (t_0-\rho,t_0+\rho)$ for some $\rho >0$.
The uniform Lorentz condition above ensures that this requirement for the existence of a local solution is satisfied. The harmonic form of the Einstein field equations exhibits time symmetry as a natural property of the theory. For this reason of time symmetry (which we find in many hyperbolic equations of mathematical physics) we can build a weak singularity (here a curvature singularity) into the initial data on the Cauchy surface and then show that there exists a time-local solution. This local solution can be extended locally to past time and to future time. An alternative approach is to start with smooth data on the Cauchy surface, and then show that a singularity can develop at the tip of a cone. We considered such a construction for the vorticity form of the incompressible Euler equation elsewhere. Note that the alternative construction considered here may be applied to the Euler equation as well, although the Leray projection term and the special form of the equation lead to a different situation. The vorticity form of the Euler equation together with incompressibility and a different Laplacian kernel (in case $n=2$ and in case $n\geq 3$) lead to specific constraints in the case of dimension $n=2$, such that singularities can be observed for $n\geq 3$ for the Cauchy problem with regular data. In contrast, for the Einstein field equation Cauchy problems with regular data can have singular solutions in any dimension $n \geq 2$.
In the following we use the term 'classical solution of a differential equation' on a certain domain in the usual sense that a solution function satisfies the differential equation pointwise, and where the (partial) derivatives exist in the classical Weierstrass sense.
Finally, in order to have a well-defined Cauchy problem we need that some constraint equations are satisfied. Four\`{e}s-Bruhat observed that the constraint conditions $\Gamma^{\mu}=0,~0\leq \mu\leq n$ on the Cauchy surface are automatically transferred to $t>0$ as long as a solution exists. Therefore solutions of the harmonic field equations $R^{h}_{\mu\nu}=0$ are well defined if the initial data constraints $G^0_{\mu}(0,x)=0,~0\leq \mu\leq n$ are satisfied, where $G^0_{\mu}$ is defined via the Einstein tensor $G_{\mu\nu}$. Some notation before we state the main theorem:
we write
\begin{equation}
g_{0\mu\nu}\in C^{\infty}\left({\mathbb R}^n\setminus \left\lbrace (0,0)\right\rbrace \right)\mbox{ if $g_{0\mu\nu}$ is smooth at all}~y\in {\mathbb R}^n\setminus \left\lbrace (0,0)\right\rbrace.
\end{equation}
We have
\begin{thm}\label{mainthm1}
For a list of admissible data functions
\begin{equation}
\begin{array}{ll}
g_{\mu\nu}(0,.)=g_{0\mu\nu}:{\mathbb R}^n\rightarrow {\mathbb R},~g_{\mu\nu,t}(0,.)=h_{0\mu\nu}:{\mathbb R}^n\rightarrow {\mathbb R},\\
\end{array}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
g_{0\mu\nu}\in C^{\infty}\left({\mathbb R}^n\setminus \left\lbrace (0,0)\right\rbrace \right)\cap C^{1,\delta}\left({\mathbb R}^n\right) \\
\end{array}
\end{equation}
there is a time-local solution to the Cauchy problem
\begin{equation}\label{harm2}
\left\lbrace \begin{array}{ll}
\frac{\partial g_{\mu\nu}}{\partial t}=h_{\mu\nu}\\
\\
\frac{\partial g_{\mu\nu,k}}{\partial t}=\frac{\partial h_{\mu\nu}}{\partial x^k}\\
\\
\frac{\partial h_{\mu\nu}}{\partial t}=-g_{00}\left(2g^{0k}\frac{\partial h_{\mu\nu}}{\partial x^k}+g^{km}\frac{\partial g_{\mu\nu,k}}{\partial x^m} -2H_{\mu\nu}\right)\\
\\
g_{\mu\nu}(0,.)=g_{0\mu\nu},~g_{\mu\nu,t}(0,.)=h_{0\mu\nu},~g_{\mu\nu,k}(0,.)=g_{0\mu\nu,k}.
\end{array}\right.
\end{equation}
on a time horizon $[0,T]$ for some $T>0$ which is classical in the complement of the origin. Here, the harmonic term is given by
\begin{equation}\label{hmunu}
\begin{array}{ll}
H_{\mu\nu}\equiv H_{\mu\nu}\left(g_{\alpha\beta},\frac{\partial g_{\alpha\beta}}{\partial x^{\gamma}} \right)=g^{\alpha\beta}g_{\delta\epsilon}\Gamma^{\delta}_{\mu\beta}\Gamma^{\epsilon}_{\nu\alpha}
\\
\\
+\frac{1}{2}\left(\frac{\partial g_{\mu\nu}}{\partial x^{\alpha}}\Gamma^{\alpha}+g_{\nu\rho}\Gamma^{\rho}_{\alpha\beta}g^{\alpha\eta}g^{\beta\sigma}\frac{\partial g_{\eta\sigma}}{\partial x^{\mu}} +g_{\mu\rho}\Gamma^{\rho}_{\alpha\beta}g^{\alpha\eta}g^{\beta\sigma}\frac{\partial g_{\eta\sigma}}{\partial x^{\nu}}\right),
\end{array}
\end{equation}
where $\Gamma^{\alpha}$ is defined as in (\ref{gamma}).
Moreover, for appropriate admissible sets of data $g_{0ij},~1\leq i,j\leq n$ in the function space $\left( C^{1,\delta}\cap H^2,|.|_{1,\delta}\right)$ the scalar curvature blows up at the origin $(t,x)=0$ (in restricted domains of finite balls in ${\mathbb R}^n$ these admissible lists of data are even dense in $C^{1,\delta}\left({\mathbb R}^n\right)$). The time-local solutions constructed satisfy the usual constraint equations on Cauchy surfaces located at time $t_0>0$. The classical solution breaks down at the origin (which is part of the boundary of the Lorentzian manifold, not part of the manifold itself). The constructed solutions of the harmonic field equations $R^h_{\mu\nu}=0$ and the constraint equations $G^0_{\mu}(0,x)=0,~0\leq \mu\leq n$ are weak solutions on the whole domain (spatially in $H^2$), where the origin is included.
\end{thm}
\begin{rem}
Here, a solution $g_{\mu\nu},~0\leq \mu,\nu\leq n$ of the system (\ref{harm2}) is called 'classical' on the domain $\left(0,T\right]\times {\mathbb R}^n$ if $g_{\mu\nu}\in C^{2,2}$, where $C^{2,2}$ is the function space of twice continuously differentiable functions, and where the first superscript refers to time and the second superscript refers to the spatial variables. Note that $h_{\mu\nu}=g_{\mu\nu,t}$ is encoded by the metric $g_{\mu\nu},~0\leq \mu,\nu\leq n$, such that we may refer to the metric function components alone as a solution of the system in (\ref{harm2}).
\end{rem}
\begin{rem}
Although the target is to construct $g_{\mu\nu}\in C^{2,2}$ in the complement of the origin, we note that a solution $g_{\mu\nu}\in C^{1,2}$ is a classical solution of the system in (\ref{harm2}) in the domain $(0,T]\times {\mathbb R}^n$, as it contains only spatial derivatives of the metric components $g_{\mu\nu}$ up to second order and spatial derivatives of the first order time derivatives $h_{\mu\nu}$ only up to first order. Hence, if $g_{\mu\nu}\in C^{1,2}\left((0,T]\times {\mathbb R}^n\right)$ is known to be a solution of some system equivalent to (\ref{harm2}), then the third equation in (\ref{harm2}) actually tells us that $h_{\mu\nu,t}$ is continuous on $(0,T]\times {\mathbb R}^n$, such that the solution of the original system is actually in $C^{2,2}\left((0,T]\times {\mathbb R}^n\right)$.
\end{rem}
We have to mention that any solution of the Einstein field equation has to satisfy some constraint equations. Since the Einstein evolution is time reversible it is sufficient to have the Hamiltonian constraint equation and the momentum constraint equation satisfied at some time $t_0>0$ where the solution is regular (cf. also remark \ref{constr} below). This can be achieved by smoothing the data such that the constraint equations are satisfied for the smoothed data. The local-time solution for smooth data then satisfies the constraint equations at each time section $t_0$ of the solution interval $[0,T]$. It is straightforward to observe that in the limit of smooth data to $H^2\cap C^{1,\delta}$ data these constraint equations are still satisfied for $t_0>0$ and hold in the $H^2$ sense at $t_0$. Note here that the singular perturbation (singular in the sense of adding a term in $C^{1,\delta}\setminus C^2$ to the initial data) in the proof of Proposition \ref{mainprop} is in a $1$-dimensional subspace.
As the constructed metric solutions $g_{ij}$ with singular scalar curvature are bounded on the domain of well-posedness, it is clear that there are causal curves of finite g.a.p. (generalized affine parameter) length (in the domain where the solution exists) which reach the point of singular scalar curvature at the origin. We get
\begin{cor}\label{CC}
Theorem \ref{mainthm1} implies that the weak and strong cosmic censorship in the sense of definition \ref{scc} and definition \ref{wcc} are violated for a dense set of constrained data and the corresponding spatially global and time-local solutions $g_{ij}$ described in theorem \ref{mainthm1}.
\end{cor}
\begin{rem}\label{constr}
It is well-known that the Einstein evolution equation imposes certain constraint equations on the Cauchy surface for the sake of consistency, and these equations should be satisfied in the viscosity limit of the constructed local solution. We have a Hamiltonian constraint equation and a momentum constraint equation which we can tacitly assume to be satisfied if the data are regular, i.e. in $C^2\cap H^2$ (at least). If the constraint equations are satisfied for regular data on a Cauchy surface then they are satisfied on all Cauchy surfaces of the time evolution of the Einstein field equations. So they are satisfied for smooth data on these Cauchy surfaces (corresponding to parameters $t>t_0$ if the initial Cauchy surface is indexed by the parameter $t_0$). As the smooth data converge to data with the properties as in Theorem \ref{mainthm1}, the constraint equations remain satisfied pointwise for time parameters $t>t_0$ and in a distributional sense at $t_0$. As the constraint equations are local they also remain satisfied pointwise in the complementary region of the singularity.
\end{rem}
\section{Proof of theorem \ref{mainthm1}}
The proof is based on a local contraction argument for the component functions $g_{\mu\nu}$, $g_{\mu\nu,k}$ and $h_{\mu\nu}$ in an appropriate function space. Note that the Lorentz metric is nondegenerate in the vicinity of the Cauchy surface due to the uniform Lorentz condition, and that the entries of the inverse matrix of the metric tensor are regular for the list of admissible data. These admissible data lists are defined such that the local iteration procedures defined below in the vicinity of a Cauchy surface are well-defined.
Since the initial data have weak regularity at one point we do not claim uniqueness but construct a local time solution (branch).
We construct local solutions via viscosity limits $\nu_0\downarrow 0$ of local solutions of the extended system ($\nu_0>0$ a small positive 'viscosity' parameter)
\begin{equation}\label{harm3}
\left\lbrace \begin{array}{ll}
\frac{\partial g^{(\nu_0)}_{\mu\nu}}{\partial t}=
\nu_0 \Delta g^{(\nu_0)}_{\mu\nu}+h^{(\nu_0)}_{\mu\nu},\\
\\
\frac{\partial g^{(\nu_0)}_{\mu\nu,k}}{\partial t}=\nu_0 \Delta g^{(\nu_0)}_{\mu\nu,k}+\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial x^k},\\
\\
\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial t}=\nu_0 \Delta h^{(\nu_0)}_{\mu\nu}\\
\\
-g^{(\nu_0)}_{00}\left(2g^{(\nu_0)0k}\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial x^k}+g^{(\nu_0)km}\frac{\partial g^{(\nu_0)}_{\mu\nu,k}}{\partial x^m} -2H^{(\nu_0)}_{\mu\nu}\right),\\
\\
g^{(\nu_0)}_{\mu\nu}(0,.)=g_{0\mu\nu},~g^{(\nu_0)}_{\mu\nu,t}(0,.)=h_{0\mu\nu},~g^{(\nu_0)}_{\mu\nu,k}(0,.)=g_{0\mu\nu,k}.
\end{array}\right.
\end{equation}
For a function $f:(0,\rho)\times {\mathbb R}^n\rightarrow {\mathbb R}$ on some time horizon $\rho >0$ we abbreviate
\begin{equation}
f\star G_{\nu_0}=\int_0^.\int_{{\mathbb R}^n}f(s,y)G_{\nu_0}(.-s,.-y)dyds
\end{equation}
for the convolution with the Gaussian $G_{\nu_0}$, and for a function $f_0:{\mathbb R}^n\rightarrow {\mathbb R}$ we write
\begin{equation}
f_0\star_{sp} G_{\nu_0}=\int_{{\mathbb R}^n}f_0(y)G_{\nu_0}(.,.-y)dy
\end{equation}
for the convolution with the Gaussian restricted to the spatial variables.
We define an iterative scheme $g^{(\nu_0)l}_{\mu\nu},~g^{(\nu_0)l}_{\mu\nu,k},~h^{(\nu_0)l}_{\mu\nu},~0\leq \mu,\nu\leq n,~1\leq k\leq n,~l\geq 0$ with iteration index $l\geq 0$. We may initialize the scheme with
\begin{equation}
g^{(\nu_0)0}_{\mu\nu}=g_{0\mu\nu}\star_{sp} G_{\nu_0},~g^{(\nu_0)0}_{\mu\nu,k}=g_{0\mu\nu,k}\star_{sp} G_{\nu_0},~h^{(\nu_0)0}_{\mu\nu}=h_{0\mu\nu}\star_{sp} G_{\nu_0},
\end{equation}
where $G_{\nu_0}$ is the fundamental solution of
\begin{equation}
G_{\nu_0,t}-\nu_0 \Delta G_{\nu_0}=0.
\end{equation}
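i.e., explicitly,
\begin{equation}
G_{\nu_0}(t,x)=\frac{1}{\sqrt{4\pi \nu_0 t}^n}\exp\left(-\frac{|x|^2}{4\nu_0 t}\right)~\mbox{ for }~t>0,
\end{equation}
consistent with the expression used in the viscosity estimates below.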
For $l\geq 1$ the functions $g^{(\nu_0)l-1}_{\mu\nu},~g^{(\nu_0)l-1}_{\mu\nu,k},~h^{(\nu_0)l-1}_{\mu\nu},~0\leq \mu,\nu\leq n,~1\leq k\leq n$ are given, and the functions $g^{(\nu_0)l}_{\mu\nu},~g^{(\nu_0)l}_{\mu\nu,k},~h^{(\nu_0)l}_{\mu\nu},~0\leq \mu,\nu\leq n,~1\leq k\leq n$ are defined iteratively as solutions of the equation
\begin{equation}\label{harm3l}
\left\lbrace \begin{array}{ll}
\frac{\partial g^{(\nu_0)l}_{\mu\nu}}{\partial t}=\nu_0 \Delta g^{(\nu_0)l}_{\mu\nu}+h^{(\nu_0)l-1}_{\mu\nu},\\
\\
\frac{\partial g^{(\nu_0)l}_{\mu\nu,k}}{\partial t}=\nu_0 \Delta g^{(\nu_0)l}_{\mu\nu,k}+\frac{\partial h^ {(\nu_0)l-1}_{\mu\nu}}{\partial x^k},\\
\\
\frac{\partial h^{(\nu_0)l}_{\mu\nu}}{\partial t}=\nu_0 \Delta h^{(\nu_0)l}_{\mu\nu}\\
\\
-g^{(\nu_0)l-1}_{00}\left(2g^{(\nu_0)l-1,0k}\frac{\partial h^{(\nu_0)l-1}_{\mu\nu}}{\partial x^k}+g^{(\nu_0)l-1,km}\frac{\partial g^{(\nu_0)l-1}_{\mu\nu,k}}{\partial x^m} -2H^{(\nu_0)l-1}_{\mu\nu}\right),\\
\\
g^{(\nu_0)l}_{\mu\nu}(0,.)=g_{0\mu\nu},~g^{(\nu_0)l}_{\mu\nu,t}(0,.)=h_{0\mu\nu},~g^{(\nu_0)l}_{\mu\nu,k}(0,.)=g_{0\mu\nu,k},
\end{array}\right.
\end{equation}
where
\begin{equation}
\begin{array}{ll}
H^{(\nu_0)l-1}_{\mu\nu}\equiv H^{(\nu_0)l-1}_{\mu\nu}\left(g^{(\nu_0)l-1}_{\alpha\beta},\frac{\partial g^{(\nu_0)l-1}_{\alpha\beta}}{\partial x^{\gamma}} \right)\\
\\
=g^{(\nu_0)l-1,\alpha\beta}g^{(\nu_0)l-1}_{\delta\epsilon}\Gamma^{(\nu_0)l-1,\delta}_{\mu\beta}
\Gamma^{(\nu_0)l-1,\epsilon}_{\nu\alpha}\\
\\
+\frac{1}{2}{\Big(}\frac{\partial g^{(\nu_0)l-1}_{\mu\nu}}{\partial x^{\alpha}}\Gamma^{(\nu_0)l-1,\alpha}+g^{(\nu_0)l-1}_{\nu\rho}\Gamma^{(\nu_0)l-1,\rho}_{\alpha\beta}g^{(\nu_0)l-1,\alpha\eta}g^{(\nu_0)l-1,\beta\sigma}\frac{\partial g^{(\nu_0)l-1}_{\eta\sigma}}{\partial x^{\mu}}\\
\\
+g^{(\nu_0)l-1}_{\mu\rho}\Gamma^{(\nu_0)l-1,\rho}_{\alpha\beta}g^{(\nu_0)l-1,\alpha\eta}g^{(\nu_0)l-1,\beta\sigma}\frac{\partial g^{(\nu_0)l-1}_{\eta\sigma}}{\partial x^{\nu}}{\Big )}.
\end{array}
\end{equation}
Here,
\begin{equation}
\Gamma^{(\nu_0)l-1,\mu}=g^{(\nu_0)l-1,\alpha\beta}\Gamma^{(\nu_0)l-1,\mu}_{\alpha\beta},
\end{equation}
and
\begin{equation}
\Gamma^{(\nu_0)l-1,\mu}_{\alpha\beta}=\frac{1}{2}g^{(\nu_0)l-1,\mu\rho}\left(g^{(\nu_0)l-1}_{\rho\alpha,\beta}+g^{(\nu_0)l-1}_{\rho\beta,\alpha}-g^{(\nu_0)l-1}_{\alpha\beta,\rho} \right).
\end{equation}
We have the representation
\begin{equation}\label{repl}
\begin{array}{ll}
g^{(\nu_0)l}_{\mu\nu}=g_{0\mu\nu}\star_{sp} G_{\nu_0}+h^{(\nu_0)l-1}_{\mu\nu} \star G_{\nu_0},\\
\\
g^{(\nu_0)l}_{\mu\nu,k}=g_{0\mu\nu,k}\star_{sp} G_{\nu_0}+\frac{\partial h^{(\nu_0)l-1}_{\mu\nu}}{\partial x^k}\star G_{\nu_0},\\
\\
h^{(\nu_0)l}_{\mu\nu}=h_{0\mu\nu}\star_{sp} G_{\nu_0}\\
\\
-{\Big (}g^{(\nu_0)l-1}_{00}\left(2g^{(\nu_0)l-1,0k}\frac{\partial h^{(\nu_0)l-1}_{\mu\nu}}{\partial x^k}+g^{(\nu_0)l-1,km}\frac{\partial g^{(\nu_0)l-1}_{\mu\nu,k}}{\partial x^m} -2H^{(\nu_0)l-1}_{\mu\nu}\right){\Big )}\star G_{\nu_0}.
\end{array}
\end{equation}
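These representations are just the Duhamel formulas for the linear heat equations in (\ref{harm3l}): for an equation of the form $u_{,t}=\nu_0\Delta u+w$ with data $u(0,.)=u_0$ and a sufficiently regular source term $w$ with appropriate decay at spatial infinity one has
\begin{equation}
u=u_0\star_{sp}G_{\nu_0}+w\star G_{\nu_0},
\end{equation}
applied here with the respective right-hand sides of (\ref{harm3l}).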
The first crucial step is the convergence of the functions $h^{(\nu_0)l}_{\mu\nu}$ as $l\uparrow \infty$. We obtain this convergence by a local contraction result in an appropriate function space. In order to choose this space, we first observe that it may be chosen weaker than expected. First note that for a classical solution of the Einstein evolution equation we have
\begin{equation}
g_{\mu\nu}\in C^{2,2}.
\end{equation}
We would expect that we have to construct $g^{(\nu_0)}_{\mu\nu}\in C^{2,2}$ accordingly. Nevertheless, it is sufficient to construct $g^{(\nu_0)}_{\mu\nu}$ in a subspace of $C^{1,2}$ first. In order to observe this note first that in the limit ($l\uparrow \infty$) of the iterated viscosity system we have the fixed point representation
\begin{equation}\label{repfix}
\begin{array}{ll}
g^{(\nu_0)}_{\mu\nu}=g_{0\mu\nu}\star_{sp} G_{\nu_0}+h^{(\nu_0)}_{\mu\nu} \star G_{\nu_0},~\\
\\
g^{(\nu_0)}_{\mu\nu,k}=g_{0\mu\nu,k}\star_{sp} G_{\nu_0}+\frac{\partial h^ {(\nu_0)}_{\mu\nu}}{\partial x^k}\star G_{\nu_0}\\
\\
h^{(\nu_0)}_{\mu\nu}=h_{0\mu\nu}\star_{sp} G_{\nu_0}\\
\\
-g^{(\nu_0)}_{00}\left(2g^{(\nu_0),0k}\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial x^k}+g^{(\nu_0),km}\frac{\partial g^{(\nu_0)}_{\mu\nu,k}}{\partial x^m} -2H^{(\nu_0)}_{\mu\nu}\right)\star G_{\nu_0}.
\end{array}
\end{equation}
The right side of (\ref{repfix}) contains only spatial derivatives of the metric components $g^{(\nu_0)}_{\mu\nu}$ up to second order and first order spatial derivatives of the components $h^{(\nu_0)}_{\mu\nu}$, i.e., first order spatial derivatives of the first order time derivatives of the metric components. We shall observe below that, for an appropriate choice of data and carefully chosen function spaces, regularity of these functions is preserved in the viscosity limit $\nu_0\downarrow 0$. Solutions by iterations of the viscosity system and their viscosity limit are naturally considered in a subspace of $C^{1,2}$ for the metric component functions $g_{\mu\nu}$ and in a subspace of $C^{0,1}$ for the components $h_{\mu\nu}$. Having constructed limits (iteration limit and viscosity limit) with $g_{\mu\nu}\in C^{1,2}\left((0,T)\times {\mathbb R}^n \right) $ and $h_{\mu\nu}\in C^{0,1}\left((0,T)\times {\mathbb R}^n \right)$, from the third equation of (\ref{harm2}) we get
\begin{equation}
\frac{\partial h_{\mu\nu}}{\partial t}=-g_{00}\left(2g^{0k}\frac{\partial h_{\mu\nu}}{\partial x^k}+g^{km}\frac{\partial g_{\mu\nu,k}}{\partial x^m} -2H_{\mu\nu}\right)\in C,
\end{equation}
where $C$ is the space of continuous functions. This implies that we have a classical solution.
In order to prepare this argument, we use the convolution rule and write the second equation in (\ref{repl}) in the form
\begin{equation}
g^{(\nu_0)l}_{\mu\nu,k}=g_{0\mu\nu,k}\star_{sp} G_{\nu_0}+ h^ {(\nu_0)l-1}_{\mu\nu}\star G_{\nu_0,k},
\end{equation}
avoiding the consideration of the convergence of spatial derivatives of $h_{\mu\nu}$, where we use later the fact that the first order spatial derivatives $G_{\nu_0,k}$ of the Gaussian are locally integrable. Moreover, the convoluted metric functions and their derivatives are spatially Lipschitz in our construction.
Note that it is essential to have convergence of the $h^{(\nu_0)l}_{\mu\nu}$ (as $l\uparrow \infty$) determined by the third equation.
We again use the convolution rule in order to avoid first order spatial derivatives of $h^{(\nu_0)l}_{\mu\nu}$ and second order spatial derivatives of the metric $g^{(\nu_0)l}_{\mu\nu}$ in the convoluted term.
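More precisely, for sufficiently regular functions which decay at spatial infinity, partial integration yields the convolution rule and the product rewriting
\begin{equation}
u_{,k}\star G_{\nu_0}=u\star G_{\nu_0,k},~~~a\,b_{,k}=(ab)_{,k}-a_{,k}b,
\end{equation}
which together shift all second order spatial derivatives of the metric and all first order spatial derivatives of $h^{(\nu_0)l}_{\mu\nu}$ in the convoluted terms onto first order spatial derivatives of the Gaussian.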
We get for $l\geq 1$
\begin{equation}\label{hlij}
\begin{array}{ll}
h^{(\nu_0)l}_{\mu\nu}=h_{0\mu\nu}\star_{sp} G_{\nu_0}-\left( g^{(\nu_0)l-1}_{00}2g^{(\nu_0)l-1,0k} h^{(\nu_0)l-1}_{\mu\nu}\right) \star G_{\nu_0 ,k}\\
\\
+\left( g^{(\nu_0)l-1}_{00}2g^{(\nu_0)l-1,0k}\right)_{,k} h^{(\nu_0)l-1}_{\mu\nu}\star G_{\nu_0 }-\left( g^{(\nu_0)l-1}_{00} g^{(\nu_0)l-1,km}
g^{(\nu_0)l-1}_{\mu\nu,k}\right) \star G_{\nu_0,m} \\
\\
+\left( \left( g^{(\nu_0)l-1}_{00} g^{(\nu_0)l-1,km}\right)_{,m}
g^{(\nu_0)l-1}_{\mu\nu,k}\right) \star G_{\nu_0} +\left(2g^{(\nu_0)l-1}_{00} H^{(\nu_0)l-1}_{\mu\nu}\right)\star G_{\nu_0}.
\end{array}
\end{equation}
Note that by the use of the convolution rule all terms involve only first order derivatives of the metric $g_{\mu\nu}$.
Using (\ref{hlij}) we can compute the functions $h^{(\nu_0)l}_{\mu\nu},~0\leq \mu,\nu\leq n$ and consider the iteration limit $l\uparrow \infty$ and the viscosity limit $\nu_0\downarrow 0$. Next we consider the series $h^{(\nu_0)l}_{\mu\nu},~l\geq 1,~0\leq \mu,\nu\leq n$. In order to prove convergence in a strong function space we consider for $l\geq 2$ the functional series
\begin{equation}
h^{(\nu_0)l}_{\mu\nu}=h^{(\nu_0)1}_{\mu\nu}+\sum_{m=2}^l\delta h^{(\nu_0)m}_{\mu\nu},
\end{equation}
where
\begin{equation}
\delta h^{(\nu_0)m}_{\mu\nu}=h^{(\nu_0)m}_{\mu\nu}-h^{(\nu_0)m-1}_{\mu\nu},
\end{equation}
and where $h^{(\nu_0)1}_{\mu\nu}$ is defined by (\ref{hlij}) applied to the data, where for $l=1$ we have
$$
\begin{array}{ll}
g^{(\nu_0)l-1}_{\mu\nu}(0,.)=g^{(\nu_0)0}_{\mu\nu}(0,.)=g_{0\mu\nu},~g^{(\nu_0)l-1}_{\mu\nu,t}(0,.)=g^{(\nu_0)0}_{\mu\nu,t}(0,.)=h_{0\mu\nu},\\
\\
g^{(\nu_0)l-1}_{\mu\nu,k}(0,.)=g^{(\nu_0)0}_{\mu\nu,k}(0,.)=g_{0\mu\nu,k}.
\end{array}
$$
We have
\begin{equation}
\begin{array}{ll}
\delta h^{(\nu_0)l}_{\mu\nu}=-\left( g^{(\nu_0)l-1}_{00}2g^{(\nu_0)l-1,0k} h^{(\nu_0)l-1}_{\mu\nu}\right) \star G_{\nu_0 ,k}\\
\\
+\left( g^{(\nu_0)l-1}_{00}2g^{(\nu_0)l-1,0k}\right)_{,k} h^{(\nu_0)l-1}_{\mu\nu}\star G_{\nu_0 }-\left( g^{(\nu_0)l-1}_{00} g^{(\nu_0)l-1,km}
g^{(\nu_0)l-1}_{\mu\nu,k}\right) \star G_{\nu_0,m} \\
\\
+\left( \left( g^{(\nu_0)l-1}_{00} g^{(\nu_0)l-1,km}\right)_{,m}
g^{(\nu_0)l-1}_{\mu\nu,k}\right) \star G_{\nu_0} +\left(2g^{(\nu_0)l-1}_{00} H^{(\nu_0)l-1}_{\mu\nu}\right)\star G_{\nu_0}\\
\\
+\left( g^{(\nu_0)l-2}_{00}2g^{(\nu_0)l-2,0k} h^{(\nu_0)l-2}_{\mu\nu}\right) \star G_{\nu_0 ,k}\\
\\
-\left( g^{(\nu_0)l-2}_{00}2g^{(\nu_0)l-2,0k}\right)_{,k} h^{(\nu_0)l-2}_{\mu\nu}\star G_{\nu_0 }+\left( g^{(\nu_0)l-2}_{00} g^{(\nu_0)l-2,km}
g^{(\nu_0)l-2}_{\mu\nu,k}\right) \star G_{\nu_0,m} \\
\\
-\left( \left( g^{(\nu_0)l-2}_{00} g^{(\nu_0)l-2,km}\right)_{,m}
g^{(\nu_0)l-2}_{\mu\nu,k}\right) \star G_{\nu_0} -\left(2g^{(\nu_0)l-2}_{00} H^{(\nu_0)l-2}_{\mu\nu}\right)\star G_{\nu_0}.
\end{array}
\end{equation}
Interpolation, i.e. subtraction and addition of mixed terms $$-\left( g^{(\nu_0)l-1}_{00}2g^{(\nu_0)l-1,0k} h^{(\nu_0)l-2}_{\mu\nu}\right) \star G_{\nu_0 ,k} \mbox{ etc.}$$ leads to a functional
\begin{equation}
\left(\delta g^{(\nu_0)l}_{\mu\nu},\delta g^{(\nu_0)l}_{\mu\nu,k},\delta h^{(\nu_0)l}_{\mu\nu} \right)=F\left(\delta g^{(\nu_0)l-1}_{\mu\nu},\delta g^{(\nu_0)l-1}_{\mu\nu,k},\delta h^{(\nu_0)l-1}_{\mu\nu},p \right)
\end{equation}
with a 'parameter vector'
\begin{equation}
p=\left( g^{(\nu_0)l-1}_{\mu\nu},g^{(\nu_0)l-1}_{\mu\nu,k}, h^{(\nu_0)l-1}_{\mu\nu},g^{(\nu_0)l-2}_{\mu\nu}, g^{(\nu_0)l-2}_{\mu\nu,k},h^{(\nu_0)l-2}_{\mu\nu}\right)
\end{equation}
of functions at the start of iteration step $l$. Symmetry allows us to consider indices $\mu\leq \nu$, which reduces the function space, but this does not really matter. Next we determine an appropriate function space. As we want $g_{\mu\nu}\in C^{2}$ outside the origin, where $g_{\mu\nu}$ is well defined (here $C^2$ is the function space of twice differentiable functions with continuous derivatives), our choice of a function space close to $C^{1,2}$ for an iteration for $g_{\mu\nu}$ (resp. $g^{(\nu_0)}_{\mu\nu}$) via approximations $g^{(\nu_0),l}_{\mu\nu}$ looks rather weak, but it is fitting as the time derivative of $h_{\mu\nu}$ is defined in terms of spatial derivatives up to second order of $g_{\mu\nu}$ and spatial derivatives up to first order of $h_{\mu\nu}$.
For space-time dimension $n+1$ we have essentially $(n+1)(n+2)/2$ metric functional increments $\delta g^{(\nu_0)l}_{\mu\nu}=g^{(\nu_0)l}_{\mu\nu}-g^{(\nu_0)l-1}_{\mu\nu}$, $n(n+1)(n+2)/2$ metric functional increments $\delta g^{(\nu_0)l}_{\mu\nu,k}=g^{(\nu_0)l}_{\mu\nu,k}-g^{(\nu_0)l-1}_{\mu\nu,k}$ and $(n+1)(n+2)/2$ increments $\delta h^{(\nu_0)l}_{\mu\nu}$. Accordingly, we define
\begin{equation}
G^{(\nu_0)}_l=(\delta g^{(\nu_0)l}_{\mu\nu})_{0\leq \mu\leq \nu\leq n},~G^{(\nu_0)}_{l1}=(\delta g^{(\nu_0)l}_{\mu\nu,k})_{0\leq \mu\leq \nu\leq n,~1\leq k\leq n},
\end{equation}
and
\begin{equation}
{\cal H}^{(\nu_0)}_l=(\delta h^{(\nu_0)l}_{\mu\nu})_{0\leq \mu\leq \nu\leq n}.
\end{equation}
For the sake of abbreviation we write
\begin{equation}
\Omega_T:=(0,T)\times {\mathbb R}^n.
\end{equation}
Then with $d_g=(n+1)(n+2)/2$, $d_{g1}=n(n+1)(n+2)/2$, $d_h=(n+1)(n+2)/2$ (we shall choose $m=1$ and $l=2$ later, but for more regular data we could adapt $m$ and $l$; therefore we use a more general notation in the following) we have a
functional
\begin{equation}\label{F}
\begin{array}{ll}
F:\left[ C_{H^2,\circ}^{m,l}\left(\Omega_T \right)\right]^{d_g}\times \left[ C_{H^2,\circ}^{m,l-1}\left(\Omega_T \right)\right]^{d_{g1}}\times \left[ C_{H^2,\circ}^{m-1,l-1}\left(\Omega_T \right)\right]^{d_{h}}\\
\\
\rightarrow \left[ C_{H^2,\circ}^{m,l}\left(\Omega_T \right)\right]^{d_g}\times \left[ C_{H^2,\circ}^{m,l-1}\left(\Omega_T \right)\right]^{d_{g1}}\times \left[ C_{H^2,\circ}^{m-1,l-1}\left(\Omega_T \right)\right]^{d_{h}},
\\
\left(G^{(\nu_0)}_l,G^{(\nu_0)}_{l1},{\cal H}^{(\nu_0)}_l\right)^T
=F\left(G^{(\nu_0)}_{l-1},G^{(\nu_0)}_{(l-1)1},{\cal H}^{(\nu_0)}_{l-1}\right)
\end{array}
\end{equation}
with the function space $C_{H^2,\circ}^{m,l}\left(\Omega_T \right)$ which is defined
as
\begin{equation}
C_{H^2,\circ}^{m,l}\left(\Omega_T \right)=\left\lbrace f\in C_{\circ}^{m,l}\left(\Omega_T \right)|\forall t\in [0,T]:~f(t,.)\in H^2 \right\rbrace
\end{equation}
along with the function space \begin{equation}\label{incremomega}
C_{\circ}^{m,l}\left(\Omega_T\right):=\left\lbrace f:\Omega_T\rightarrow {\mathbb R}|f\in C^{m,l}~\&~\forall x~f(0,x)=0 \right\rbrace.
\end{equation}
Note that by the use of the convolution rule we have obtained representations of the increments which involve only first order derivatives of the metric functions $g^{(\nu_0)p}_{\mu\nu}$ for some $p\geq 0$. Some of the related Gaussian $G_{\nu_0}$-terms then carry first order derivatives, where these Gaussians have local standard $L^1$-estimates in the time interval $(0,T]$ (open at $0$) and on a ball around a fixed spatial argument $x$ (cf. also the proof of Lemma 3.2 below). Similarly for the $h^{(\nu_0)l}_{\mu\nu}$-terms. Outside the ball around a fixed argument $x$ we surely have $L^1$-estimates of the Gaussian such that we can apply Young inequalities in order to get contraction of $F$ on a small time interval, i.e., we have
\begin{lem}\label{mainlem}
There is a time horizon $T>0$ such that the map $F$ is a contraction on the function space $$ \left[ C_{H^2,\circ}^{m,l}\left(\Omega_T \right)\right]^{d_g}\times \left[ C_{H^2,\circ}^{m,l-1}\left(\Omega_T \right)\right]^{d_{g1}}\times \left[ C_{H^2,\circ}^{m-1,l-1}\left(\Omega_T \right)\right]^{d_{h}}$$
with a contraction constant $c\in (0,1)$.
\end{lem}
The latter contraction result leads to the pointwise limit
\begin{equation}
g^{(\nu_0)}_{\mu\nu}=\lim_{l\uparrow \infty}g^{(\nu_0)l}_{\mu\nu}\in C^{1,2}\left(\left(0,T\right]\times{\mathbb R}^n \right),
\end{equation}
where
\begin{equation}
g^{(\nu_0)}_{\mu\nu}(0,.)\in C^{1,\delta}\left({\mathbb R}^n\right)\cap H^2.
\end{equation}
We denote
\begin{equation}
h^{(\nu_0)}_{\mu\nu}=\lim_{l\uparrow \infty}h^{(\nu_0)l}_{\mu\nu}
\end{equation}
for all $0\leq \mu,\nu\leq n$. Accordingly we write for all $0\leq \mu,\nu\leq n$
\begin{equation}
H^{(\nu_0)}_{\mu\nu}=\lim_{l\uparrow \infty}H^{(\nu_0)l}_{\mu\nu}
\end{equation}
where we recall that we have
\begin{equation}
g^{(\nu_0)}_{\mu\nu}(0,.)=g_{0\mu\nu},~g^{(\nu_0)}_{\mu\nu,t}(0,.)=h_{0\mu\nu},~g^{(\nu_0)}_{\mu\nu,k}(0,.)=g_{0\mu\nu,k}
\end{equation}
for the initial data (which do not depend on the iteration index $l\geq 0$).
We observe that the contraction is essentially independent of the viscosity parameter $\nu_0$ (for small $\nu_0 >0$). This is not surprising as the density function $(t,y)\rightarrow G_{\nu_0}(t,y)=\frac{1}{\sqrt{4\pi \nu_0 t}^n}\exp\left(-\frac{|y|^2}{4\nu_0 t} \right) $ is integrable on the domain $(0,T]\times {\mathbb R}^n$, where for $\delta\in (0,1)$ we have
\begin{equation}
{\big |}G_{\nu_0,i}(t,y){\big |}={\Big |}\frac{-2y_i}{4\nu_0 t}G_{\nu_0}{\Big |}\leq {\Big |}\frac{C}{|y|}G_{\nu_0}{\Big |}\leq \frac{C}{(\nu_0 t)^{\delta}|y|^{n+1-2\delta}}
\end{equation}
with $C=\sup_{z>0}\frac{1}{2}z^2\exp\left(-\frac{1}{4}z^2\right)$; this upper bound is locally integrable for $n\geq 2$ (and $\delta >0.5$), such that the Gaussian is then easily seen to be globally integrable. We have
\begin{lem}
The contraction constant $c\in (0,1)$ of Lemma \ref{mainlem} can be chosen independently of the viscosity constant $\nu_0 >0$.
\end{lem}
\begin{proof}
We have observed that the essential recursive functionals in (\ref{hlij}) can be written such that the convoluted terms on the right side involve only products of metric components $g_{ij}$, their inverses and first order spatial derivatives of such metric components or products of such metric components. Here we have rewritten the terms where a second order spatial derivative of a metric tensor component appears.
For data $g^{(\nu_0)}_{\mu\nu}(0,.)$ or data entries of the inverse (essentially as in (\ref{dataf})) we have Lipschitz continuity, but only local Lipschitz continuity of the first spatial derivatives in the complement of the origin. Nevertheless we can prove the existence of a regular solution branch using classical solution representations of local time solutions, where in these representations we approximate all first order derivatives $g_{\mu\nu,k}$ by approximative convolutions $g_{\mu\nu}\ast G_{\nu_0,k}$, since the latter convolutions have Lipschitz continuous upper bounds (cf. below). Then we can consider viscosity limits.
Representations of (approximations of) solution functions in terms of convolutions of Lipschitz continuous functions, or of Lipschitz continuous upper bounds, with first order spatial derivatives of the Gaussian have the advantage that symmetry relations of the form
\begin{equation}\label{visclim}
\begin{array}{ll}
{\Big |}f\ast_{sp}G_{\nu_0,i}(s,x){\Big |}= {\Big |}\int_{{\mathbb R}^n}f(x-y)\frac{2y_i}{4\nu_0 s\sqrt{4\pi \nu_0 s}^n}\exp\left(-\frac{|y|^2}{4\nu_0 s}\right) dy{\Big |}\\
\\
\leq \int_{\left\lbrace y|y\in {\mathbb R}^n,~y_i\geq 0\right\rbrace }{\Big |}f_x(y)-f_x(y^{i,-}){\Big |}\frac{2|y_i|}{4\nu_0 s\sqrt{4\pi \nu_0 s}^n}\exp\left(-\frac{|y|^2}{4\nu_0 s}\right) dy\\
\\
\leq \int_{\left\lbrace y|y\in {\mathbb R}^n,~y_i\geq 0\right\rbrace }L{\Big |}2y_i{\Big |}\frac{2|y_i|}{4\nu_0 s\sqrt{4\pi \nu_0 s}^n}\exp\left(-\frac{|y|^2}{4\nu_0 s}\right) dy\\
\\
=\int_{\left\lbrace y'|y'\in {\mathbb R}^n,~y'_i\geq 0\right\rbrace }L{\Big |}2\sqrt{\nu_0}y'_i{\Big |}\frac{2|\sqrt{\nu_0}y'_i|}{4\nu_0 s\sqrt{4\pi s}^n}\exp\left(-\frac{|y'|^2}{4 s}\right) dy'\\
\\
=\int_{\left\lbrace y'|y'\in {\mathbb R}^n,~y'_i\geq 0\right\rbrace }L\frac{4(y'_i)^2}{4 s\sqrt{4\pi s}^n}\exp\left(-\frac{|y'|^2}{4 s}\right) dy'\leq 4LC
\end{array}
\end{equation}
(with $y=\sqrt{\nu_0}y'$, with the abbreviation $f_x(y):=f(x-y)$, for some Lipschitz constant $L$ and a finite constant $C>0$ which is independent of $\nu_0$) can be used. Here, $y^{i,-}$ has the components $(y^{i,-}_j)_{1\leq j\leq n}$ with
\begin{equation}
y^{i,-}_j=y_j~\mbox{ if }j\neq i,~\mbox{ and }~y^{i,-}_i=-y_i.
\end{equation}
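The point of this reflection is the antisymmetry of the first order spatial derivative of the Gaussian in the variable $y_i$, i.e., $G_{\nu_0,i}(s,y^{i,-})=-G_{\nu_0,i}(s,y)$, which yields for integrable $f$
\begin{equation}
\left( f\ast_{sp}G_{\nu_0,i}\right)(s,x)=\int_{\left\lbrace y_i\geq 0\right\rbrace }\left( f(x-y)-f(x-y^{i,-})\right) G_{\nu_0,i}(s,y)dy.
\end{equation}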
This estimate can always be used in our situation as we have Lipschitz continuous upper bounds. In this context note that for data constructions as in (\ref{dataf})
we have
\begin{equation}
\begin{array}{ll}
{\Big |}\left(\left( z\rightarrow z^{3}\sin\left(\frac{1}{z^{\alpha}} \right)\phi_\delta(z)\right)\ast_{sp}G_{\nu_0,1}\right) (t,x){\Big |}\\
\\
\leq 2{\big |}\int_{y>0} (x-y)^{3} \phi_\delta(x-y)G_{\nu_0,1}(t,y)dy{\big |}\leq C|x|\phi_\delta(x).
\end{array}
\end{equation}
Alternatively, we show that there is an $L^1\left((0,T)\times {\mathbb R}^n \right)$ upper bound of the Gaussian $G_{\nu_0}$ and its first order derivatives $G_{\nu_0,k},~1\leq k\leq n$ which is independent of the viscosity $\nu_0 >0$.
First, for $\Delta x=x-y$ and $\Delta s=4\nu_0\Delta t$ we have for the essential factor of the Gaussian for some $C>0$ and $\delta >0$ (where $z=\frac{\Delta x}{\sqrt{\Delta s}}$)
\begin{equation}
\begin{array}{ll}
\frac{1}{\sqrt{\Delta s}^n}\exp\left(-\frac{\Delta x^2}{\Delta s} \right) = \left( \frac{\Delta x^2}{\Delta s}\right)^{\frac{n}{2}-\delta}\frac{1}{\Delta s^{\delta}|\Delta x|^{n-2\delta}} \exp\left(-\frac{\Delta x^2}{\Delta s} \right)\\
\\
\leq\frac{1}{\Delta s^{\delta}|\Delta x|^{n-2\delta}}\sup_{z\in {\mathbb R}}\left( z^2\right)^{\frac{n}{2}-\delta}\exp\left(-z^2 \right)=\frac{C}{\Delta s^{\delta}|\Delta x|^{n-2\delta}}.
\end{array}
\end{equation}
Similarly, for the first order spatial derivatives of the (essential factor of the) Gaussian we have for $\delta\in \left(\frac{1}{2},1 \right)$ in a ball $B_{R}(x)$ of radius $R>0$ the estimate
\begin{equation}
\begin{array}{ll}
\left( \frac{1}{\sqrt{\Delta s}^n}\exp\left(-\frac{\Delta x^2}{\Delta s} \right) \right)_{,i}
\leq\frac{C}{\Delta s^{\delta}|\Delta x|^{n+1-2\delta}}.
\end{array}
\end{equation}
Hence for $T>0$ and some $c'>0$ we have
\begin{equation}
\int_0^T\int_{B_R(x)}\frac{d\Delta td\Delta x}{\Delta s^{\delta}|\Delta x|^{n+1-2\delta}}\leq \frac{c'}{\nu_0^{\delta}}|T|^{1-\delta}R^{2\delta -1}.
\end{equation}
This means that for $R=\nu_0^{2}$ we have $R^{2\delta-1}=\nu_0^{4\delta -2}$, and for $\delta \in \left(\frac{2}{3},1 \right)$ we have
\begin{equation}
\frac{c'}{\nu_0^{\delta}}|T|^{1-\delta}\nu_0^{4\delta -2}= c'|T|^{1-\delta}\nu_0^{3\delta -2}\downarrow 0
\end{equation}
as $\nu_0 \downarrow 0$. Furthermore, for the complement ${\mathbb R}^n\setminus B_{R}(x)$ with $R=\nu_0^2$ we have for some finite constant $c>0$ the essential estimate
\begin{equation}
\begin{array}{ll}
{\Big |}\int_{|\Delta x|\geq \nu_0^{2}}\left(-\frac{\Delta x_k}{\nu_0 \Delta t} \right) \frac{1}{\sqrt{\nu_0\Delta t}^n}\exp\left(-\frac{\Delta x^2}{\nu_0\Delta t} \right)dy{\Big |}\\
\\
\leq {\Big |}\int_{r\geq \nu_0^{2}}\left(\frac{r}{\nu_0 \Delta t} \right) \frac{c}{\sqrt{\nu_0\Delta t}^n}\exp\left(-\frac{r^2}{\nu_0\Delta t} \right)r^{n-1}dr{\Big |}\\
\\
\leq {\Big |}\left(\frac{1}{2} \right) \frac{c}{\sqrt{\nu_0\Delta t}^n}\exp\left(-\frac{r^2}{\nu_0\Delta t} \right)r^{n-1}{\Big |}^{\infty}_{\nu^2_0}\\
\\
+{\Big |}\int_{r\geq \nu_0^{2}}\left(\frac{n-1}{2} \right) \frac{c}{\sqrt{\nu_0\Delta t}^n}\exp\left(-\frac{r^2}{\nu_0\Delta t} \right)r^{n-2}dr{\Big |}\downarrow 0~\mbox{as}~\nu_0\downarrow 0.
\end{array}
\end{equation}
Here we have absorbed the factor $4$ in the time variable implicitly.
It follows that we have a uniform bound
\begin{equation}
\begin{array}{ll}
\sup_{\nu_0 >0}\left( {\big |}G_{\nu_0}{\big |}_{L^1\left((0,T)\times {\mathbb R}^n \right) }
+\sum_{k=1}^n{\big |}G_{\nu_0,k}{\big |}_{L^1\left((0,T)\times {\mathbb R}^n \right) }\right) \leq C
\end{array}
\end{equation}
for some $C>0$.
\end{proof}
Using local contraction, the viscosity limit ($\nu_0\downarrow 0$) and the iteration limit $l\uparrow \infty$ can be considered at the same time. We choose a sequence $\nu_l,~l\geq 1$ with $\lim_{l\uparrow \infty}\nu_l=0$ and consider the functional series $g^{(\nu_l),l}_{\mu\nu},~l\geq 2$, where we consider the representations
\begin{equation}
g^{(\nu_l),l}_{\mu\nu}=g^{(\nu_l),1}_{\mu\nu}+\sum_{p=2}^{l}\delta g^{(\nu_l),p}_{\mu\nu}.
\end{equation}
The componentwise differentiation (up to second order) of the limit $g_{\mu\nu}:=\lim_{l\uparrow \infty }g^{(\nu_l),l}_{\mu\nu}$ of this functional series is a delicate matter. However, we proceed as follows. First we consider functions on the domain $D_T=\left[0,T\right] \times \left]-\frac{\pi}{2},\frac{\pi}{2}\right[^n$
\begin{equation}
\begin{array}{ll}
g^{(\nu_l),l}_{\mu\nu}(.,\tan(.)):D_T\rightarrow {\mathbb R},~~
g^{(\nu_l),l}_{\mu\nu,k}(., \tan(.)):D_T \rightarrow {\mathbb R}\\
\\
g^{(\nu_l),l}_{\mu\nu,k,r}(., \tan(.)):D_T \rightarrow {\mathbb R}
\end{array}
\end{equation}
for indices $0\leq \mu,\nu\leq n$ and 'spatial indices' $1\leq k,r\leq n$ (we use $r$ for the second derivative index in order to avoid confusion with the iteration index $l$). Here we denote $$\tan(x)=(\tan(x^1),\cdots ,\tan(x^n))^T.$$ Since $g^{(\nu_l),l}_{\mu\nu}(t,.)\in H^2\cap C^2$ for all $t\in (0,T]$, for all these functions we have (in the sense of limits)
\begin{equation}
\begin{array}{ll}
\forall t\in [0,T]:~g^{(\nu_l),l}_{\mu\nu}\left(t,\tan\left(-\frac{\pi}{2}\right)\right) =g^{(\nu_l),l}_{\mu\nu}\left(t, \tan\left(\frac{\pi}{2}\right)\right)=0\\
\\
\forall t\in [0,T]:~g^{(\nu_l),l}_{\mu\nu,k}\left(t,\tan\left(-\frac{\pi}{2}\right)\right)=g^{(\nu_l),l}_{\mu\nu,k}\left(t,\tan\left(\frac{\pi}{2}\right)\right)=0\\
\\
\forall t\in [0,T]:~g^{(\nu_l),l}_{\mu\nu,k,r}\left(t,\tan\left(-\frac{\pi}{2}\right)\right)=g^{(\nu_l),l}_{\mu\nu,k,r}\left(t,\tan\left(\frac{\pi}{2}\right)\right)=0.
\end{array}
\end{equation}
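Here we use the classical Sobolev embedding: for $s>\frac{n}{2}$ every $u\in H^s\left({\mathbb R}^n\right)$ has a continuous representative which vanishes at spatial infinity, since
\begin{equation}
\sup_{x\in {\mathbb R}^n}|u(x)|\leq \int_{{\mathbb R}^n}|\hat{u}(\xi)|d\xi\leq {\Big(}\int_{{\mathbb R}^n} \frac{d\xi}{(1+|\xi|^2)^{s}}{\Big)}^{1/2}{\big \|}u{\big \|}_{H^s}
\end{equation}
and the Fourier transform $\hat{u}$ is then in $L^1$ (Riemann-Lebesgue). The boundary values in the compactified variables are understood as the corresponding limits; for the derivative terms we use in addition the classical regularity assumed for $t\in (0,T]$.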
Hence we have periodic extensions
\begin{equation}
\begin{array}{ll}
d^{(\nu_l),l}_{\mu\nu}:D_T\rightarrow {\mathbb R},~~d^{(\nu_l),l}_{\mu\nu}|_{D_T}=g^{(\nu_l),l}_{\mu\nu}(., \tan(.)),~d^{(\nu_l),l}_{\mu\nu}\left(t,-\frac{\pi}{2}\right)=d^{(\nu_l),l}_{\mu\nu}\left(t, \frac{\pi}{2}\right),\\
\\
e^{(\nu_l),l}_{\mu\nu k}:D_T\rightarrow {\mathbb R},~~e^{(\nu_l),l}_{\mu\nu k}|_{D_T}=g^{(\nu_l),l}_{\mu\nu,k}(., \tan(.)),~e^{(\nu_l),l}_{\mu\nu k}\left(t, -\frac{\pi}{2}\right)=e^{(\nu_l),l}_{\mu\nu k}\left(t, \frac{\pi}{2}\right),\\
\\
f^{(\nu_l),l}_{\mu\nu kr}:D_T \rightarrow {\mathbb R},~~f^{(\nu_l),l}_{\mu\nu kr}|_{D_T}=g^{(\nu_l),l}_{\mu\nu,k,r}\left( ., \tan(.)\right) ,~f^{(\nu_l),l}_{\mu\nu kr}\left(t, -\frac{\pi}{2}\right)=f^{(\nu_l),l}_{\mu\nu kr}\left(t, \frac{\pi}{2}\right).
\end{array}
\end{equation}
Note that we can recover the viscosity and iteration limit $g_{\mu\nu}$ ($\nu_l\downarrow 0$, $l\uparrow \infty$) and its derivatives up to second order from the limits of the functions $d^{(\nu_l),l}_{\mu\nu}$, $e^{(\nu_l),l}_{\mu\nu k}$, and $f^{(\nu_l),l}_{\mu\nu kr}$ respectively.
We denote the standard closure of the domain $D_T$ by $\overline{D_T}$. We are interested in the strong convergence of the increments
\begin{equation}\label{increm}
\begin{array}{ll}
d^{(\nu_l),l}_{\circ,\mu\nu}:=d^{(\nu_l),l}_{\mu\nu}-d^{(\nu_l),l}_{\mu\nu}(0,.),~e^{(\nu_l),l}_{\circ,\mu\nu k}:=e^{(\nu_l),l}_{\mu\nu k}-e^{(\nu_l),l}_{\mu\nu k}(0,.)\\
\\
f^{(\nu_l),l}_{\circ,\mu\nu kr}:=f^{(\nu_l),l}_{\mu\nu kr}-f^{(\nu_l),l}_{\mu\nu kr}(0,.).
\end{array}
\end{equation}
For $m=1$ and $l=2$ these increments $d^{(\nu_l),l}_{\circ,\mu\nu}$ are located in the appropriate function space
\begin{equation}\label{incremf}
C_{\circ}^{m,l}\left(\overline{D_T} \right):=\left\lbrace f:\overline{D_T}\rightarrow {\mathbb R}|f\in C^{m,l}~\&~\forall x~f(0,x)=0 \right\rbrace,
\end{equation}
and the increments $e^{(\nu_l),l}_{\circ,\mu\nu k}$ and $f^{(\nu_l),l}_{\circ,\mu\nu kr}$ are located in the function spaces $C_{\circ}^{m,l-1}\left(\overline{D_T}\right)$ and $C_{\circ}^{m-1,l-1}\left(\overline{D_T} \right)$ respectively.
For functional series in this function space we may use the following classical result
\begin{thm}
Consider a functional series $F_m:=\sum_{p=1}^mf_p,~m\geq 1$ with $f_p\in C_{\circ}^{0,1}\left(\overline{D_T} \right)$. Assume that $F_m(c),~m\geq 1$ converges for some fixed $c\in D_T$, and assume that the first order spatial derivative functional series $F_{m,i}:=\sum_{p=1}^mf_{p,i},~m\geq 1$ converge uniformly in $\overline{D_T}$. Then the functional series $F_m,~m\geq 1$ converges uniformly to a function $F=\lim_{m\uparrow \infty}\sum_{p=1}^mf_p$ such that for all $(t,x)\in \overline{D_T}$
\begin{equation}
F_{,i}(t,x)=\lim_{m\uparrow\infty}\sum_{p=1}^mf_{p,i}(t,x)~\mbox{holds.}
\end{equation}
\end{thm}
\begin{lem}
There is a sequence $(\nu_l)_{l\geq 1}$ with $\lim_{l\uparrow \infty }\nu_l=0$ such that the limits of the functional increments in (\ref{increm}) are in the function space (\ref{incremf}), i.e.,
\begin{equation}
\begin{array}{ll}
d^0:=\lim_{l\uparrow \infty}d^{(\nu_l),l}_{\circ,\mu\nu} ,~e^0:=\lim_{l\uparrow \infty}e^{(\nu_l),l}_{\circ,\mu\nu k},~
f^0:=\lim_{l\uparrow \infty}f^{(\nu_l),l}_{\circ,\mu\nu kr}\in C_{\circ}^{0,1}\left(\overline{D_T} \right).
\end{array}
\end{equation}
\end{lem}
Hence,
\begin{equation}\label{limg}
\begin{array}{ll}
g_{\mu\nu}=\lim_{l\uparrow \infty}g^{(\nu_l),l}_{\mu\nu}\in C^{0,2}(D_T),~
g_{\mu\nu,k}=\lim_{l\uparrow \infty} g^{(\nu_l),l}_{\mu\nu,k}\in C^{0,1}(D_T),
\end{array}
\end{equation}
and, hence, for the first order time derivative, we get
\begin{equation}\label{limh}
\begin{array}{ll}
h_{\mu\nu}=\lim_{l\uparrow \infty}h^{(\nu_l),l}_{\mu\nu}\in C^{0,1}(D_T).
\end{array}
\end{equation}
We have to check that this limit is indeed a solution. First observe that we may consider the iteration limit ($l\uparrow \infty$) first.
For $\nu_0>0$ let $g^{(\nu_0)}_{\mu\nu},~0\leq \mu,\nu\leq n$ be a fixed point in (\ref{repfix}) with $g^{(\nu_0)}_{\mu\nu}(t,.)\in C^2$.
Plugging $g^{(\nu_0)}_{\mu\nu}$ into the harmonic field equation and using the local contraction result we observe that the limit $g_{\mu\nu}=\lim_{\nu_0\downarrow 0}g^{(\nu_0)}_{\mu\nu},~g_{\mu\nu,k}=\lim_{\nu_0\downarrow 0}g^{(\nu_0)}_{\mu\nu,k},~h_{\mu\nu}=\lim_{\nu_0\downarrow 0}h^{(\nu_0)}_{\mu\nu}$ in (\ref{limg}) and in (\ref{limh}) satisfies the harmonic field equation. Next abbreviate
\begin{equation}
\begin{array}{ll}
\mathbf{f}=\left(f_1,\cdots,f_M\right)\\
\\
=\left(\left( g^{(\nu_0)}_{0\mu\nu}\right)_{0\leq \mu,\nu\leq n},~\left( g^{(\nu_0)}_{0\mu\nu,k}\right)_{0\leq \mu,\nu\leq n,~1\leq k\leq n},~\left( h^{(\nu_0)}_{0\mu\nu}\right)_{0\leq \mu,\nu\leq n}\right)^T,
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{ll}
\mathbf{v}^{(\nu_0)}=\left(v^{(\nu_0)}_1,\cdots,v^{(\nu_0)}_M\right)\\
\\
=\left(\left( g^{(\nu_0)}_{\mu\nu}\right)_{0\leq \mu,\nu\leq n},~\left( g^{(\nu_0)}_{\mu\nu,k}\right)_{0\leq \mu,\nu\leq n,1\leq k\leq n},~\left( h^{(\nu_0)}_{\mu\nu}\right)_{0\leq \mu,\nu\leq n}\right)^T,
\end{array}
\end{equation}
and
\begin{equation}
F^*(\mathbf{v}^{(\nu_0)})=\left(F^*_1\left( \left( h^{(\nu_0)}_{\mu\nu}\right)_{0\leq \mu,\nu\leq n}\right), F^*_2\left( \left( h^{(\nu_0)}_{\mu\nu,k}\right)_{0\leq \mu,\nu\leq n,1\leq k\leq n}\right), F^*_3\left( \mathbf{v}^{(\nu_0)} \right)\right)^T,
\end{equation}
where
\begin{equation}
F^*_1\left( \left( h^{(\nu_0)}_{\mu\nu}\right)_{0\leq \mu,\nu\leq n}\right)=\left( h^{(\nu_0)}_{\mu\nu}\right)_{0\leq \mu,\nu\leq n}^T,
\end{equation}
\begin{equation}
F^*_2\left( \left( h^{(\nu_0)}_{\mu\nu,k}\right)_{0\leq \mu,\nu\leq n,~1\leq k\leq n}\right)=\left( h^{(\nu_0)}_{\mu\nu,k}\right)_{0\leq \mu,\nu\leq n,~1\leq k\leq n}^T,
\end{equation}
and
\begin{equation}
F^*_3 \left( \mathbf{v}^{(\nu_0)}\right)=\left(
-g^{(\nu_0)}_{00}\left(2g^{(\nu_0)0k}\frac{\partial h^{(\nu_0)}_{\mu\nu}}{\partial x^k}+g^{(\nu_0)km}\frac{\partial g^{(\nu_0)}_{\mu\nu,k}}{\partial x^m} -2H^{(\nu_0)}_{\mu\nu}\right)_{0\leq \mu,\nu\leq n}\right)^T
\end{equation}
with the obvious identifications (strictly speaking also with some identifications of tuples of tuples $(f_i)_{1\leq i\leq M}$ and their entries). Note that we have imposed no iteration index here; the superscript $*$ at the symbol $F^*$ indicates the difference to the functional $F$ considered above.
The equation in (\ref{harm3}) may then be abbreviated as
\begin{equation}
\mathbf{v}^{(\nu_0)}_{,t}-\nu_0\Delta \mathbf{v}^{(\nu_0)}=F^*(\mathbf{v}^{(\nu_0)}),~\mathbf{v}^{(\nu_0)}(0,.)=\mathbf{f}
\end{equation}
where in the limit $\nu_0\downarrow 0$ the function $\mathbf{v}:=\mathbf{v}^{0}:=\lim_{\nu_0\downarrow 0}\mathbf{v}^{(\nu_0)}$ satisfies the equation
\begin{equation}\label{abbr}
\mathbf{v}_{,t}=F^*(\mathbf{v}).
\end{equation}
For $\mathbf{v}^{(\nu_0),f}:=\mathbf{v}^{(\nu_0)}-\mathbf{f}$ we have $\mathbf{v}^{(\nu_0),f}(0,.)\equiv 0$ and
\begin{equation}
\mathbf{v}^{(\nu_0),f}_{,t}-\nu_0\Delta \mathbf{v}^{(\nu_0),f}=\nu_0 \Delta \mathbf{f}+F^*(\mathbf{v}^{(\nu_0),f}+\mathbf{f}).
\end{equation}
This leads to the representation
\begin{equation}
\mathbf{v}^{(\nu_0),f}=\nu_0 \Delta \mathbf{f}
\star G_{\nu_0}+
F^*(\mathbf{v}^{(\nu_0),f}+\mathbf{f})\star G_{\nu_0},
\end{equation}
where the convolution is understood componentwise. The latter statement has a classical interpretation only for smoothed initial data $\mathbf{f}\star G_{\nu_0}$. However, we can rewrite it in terms of first order derivatives, i.e., we have
\begin{equation}
\mathbf{v}^{(\nu_0),f}=\nu_0 \sum_{i=1}^n \mathbf{f}_{,i}
\star G_{\nu_0,i}+
F^*(\mathbf{v}^{(\nu_0),f}+\mathbf{f})\star G_{\nu_0},
\end{equation}
where the derivative $_{,i}$ is understood componentwise of course, i.e.,
\begin{equation}
\mathbf{f}_{,i}=\left(f_{1,i},\cdots,f_{M,i} \right)^T.
\end{equation}
Hence, we have
\begin{equation}\label{last}
\mathbf{v}^f=\lim_{\nu_0\downarrow 0}\mathbf{v}^{(\nu_0),f}=\lim_{\nu_0\downarrow 0}\nu_0\sum_i \mathbf{f}_{,i}\star G_{\nu_0,i}+\lim_{\nu_0\downarrow 0}F^*(\mathbf{v}^{(\nu_0),f}+\mathbf{f})\star G_{\nu_0}.
\end{equation}
According to our analysis of the Gaussian above, the first term on the right-hand side of the latter equation vanishes, and, using continuity of $F^*$, we observe that the components of $\mathbf{v}^{(\nu_0),f}$ converge in the function space (\ref{incremf}) as $\nu_0\downarrow 0$. Considering the time derivative of the last term on the right-hand side of equation (\ref{last}) for fixed $\nu_0>0$ and then going to the limit $\nu_0\downarrow 0$, we observe that the limit function $\mathbf{v}^f$ satisfies
\begin{equation}
\mathbf{v}^f_{,t}=F^*(\mathbf{v}^{f}+\mathbf{f}),
\end{equation}
which is equivalent to (\ref{abbr}).
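The vanishing of the Gaussian term claimed above can be made explicit by the standard $L^1$-estimate for the first spatial derivative of the heat kernel (a sketch, assuming the usual normalization $G_{\nu}(t,x)=(4\pi \nu t)^{-n/2}\exp\left(-|x|^2/(4\nu t)\right)$; constants depend on this convention):
\begin{equation}
\left\|G_{\nu,i}(t,.)\right\|_{L^1\left({\mathbb R}^n\right)}=\int_{{\mathbb R}^n}\frac{|x_i|}{2\nu t}G_{\nu}(t,x)dx=\frac{c_n}{\sqrt{\nu t}}
\end{equation}
for some dimensional constant $c_n>0$, such that for the space-time convolution
\begin{equation}
\nu\left|\left( \mathbf{f}_{,i}\star G_{\nu,i}\right)(t,x)\right|\leq \nu \left\|\mathbf{f}_{,i}\right\|_{\infty}\int_0^t\frac{c_n}{\sqrt{\nu s}}ds=2c_n\sqrt{\nu t}\left\|\mathbf{f}_{,i}\right\|_{\infty}\downarrow 0~\mbox{as}~\nu\downarrow 0.
\end{equation}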
\footnotetext[1]{\texttt{{[email protected]}, {[email protected]}}.}
\section{Introduction}
The multiplexing capability of a quantum memory recently emerges as an important figure-of-merit in the prospect of long-distance quantum communication
\cite{collins2007multiplexed,SimonQrep}. The straightforward approach is to increase the number of atomic ensembles holding a quantum memory. The optically addressed mode volume compared to the total size of the medium limits the multimode capacity \cite{lan2009multiplexed}. An alternative approach consists of storing multiple temporal pulses onto a single atomic ensemble. In that perspective, the protocols exploiting the inhomogeneous broadening of rare-earth ion doped crystals (REIC) are superior \cite{nunn}. The quantum storage protocols are strongly inspired by the photon-echo technique \cite{tittel-photon} benefiting from the large ratio between inhomogeneous and homogeneous linewidth of these materials \cite{liu2005spectroscopic}. This fine spectral resolution can be interpreted as a large intrinsic time-bandwidth product or as the ability to process a train of multiple pulses in the time-domain. {This advantage is shared by the different protocols using an inhomogeneous broadening. As compared to the well established "stopped light" experiment \cite[and references therein]{OpticalQM} where the signal is mapped into the longitudinal dimension of the medium, these protocols use the excitation of spectral classes into the inhomogeneous profile. In the archetypal "Controllable Reversible Inhomogeneous Broadening" (CRIB) protocol, one initially considered the Doppler profile of atomic vapors \cite{CribGas}. It has been refined in different configurations to reach record efficiencies in both vapors \cite{Hosseini2009} and REIC \cite{HedgesNat}.}
{The recently proposed atomic frequency comb protocol (AFC) belongs to the same category \cite{AFCTh}. The prepared inhomogeneous broadening should be composed of discrete absorbing peaks defining the atomic comb.} It has the largest multimode capacity \cite{nunn}. It has been rapidly implemented in different REIC with {weak} laser pulses at the single photon level \cite{AFCsingle,AFC_NJP,Usmani2010,sabooni2009storage}. {The storage of entangled states of light has been realized very recently \cite{EntangGeneva,EntangCalgary} and represents a major breakthrough. It definitely validates the interest of this technique for quantum communication purposes. The storage mechanism can be understood by analogy with a grating. The periodic structure in the absorption profile generates an echo by diffraction in the time-domain. It has been shown to be particularly efficient with an appropriate preparation in order to create a series of narrow absorbing peaks} \cite{AFCbonarota,AFC35, sabooni2009storage}. Long storage time can be achieved by applying a Raman transfer {before the echo generation. Raman pulses actually convert back and forth the optical coherence into long-lived nuclear spin excitation, freezing out the atomic evolution during the memory time \cite{AFCspin}}. It then definitely distinguishes itself from its forefather, the stimulated (or three-pulse) photon-echo.
The AFC protocol requires first to prepare a spectrally periodic absorbing structure of isolated peaks. The initially smooth large inhomogeneous profile is tailored by spectral hole-burning (SHB). Multimode storage then fully exploits the spectral resolution of REIC as recently demonstrated with 64 temporal modes in \NDYVO covering a 100MHz bandwidth \cite{Usmani2010}. {The demonstration has been performed in the single photon regime and perfectly illustrates the concept of multimode storage for quantum memories}. The elementary optical pumping sequence is a pulse train composed of two \cite{AFCsingle,AFC_NJP}, three \cite{Usmani2010} or many pulses \cite{AFCbonarota}. Such an amplitude-modulation (AM) sequence exhibits a periodic optical spectrum. It is imprinted on the absorption profile because of the spectral hole-burning process. The comb bandwidth is directly given by the duration of the preparation pulses. Usmani \textit{et al.} \cite{Usmani2010} have pushed a step further this approach by applying frequency-shifted sequences to cover a wider range without being limited by the AM bandwidth. This approach could be interpreted as a mixed amplitude and frequency modulation (FM) sequence. Both AM and FM are provided by acousto-optic modulators (AOM). This technique suffers from various limitations. The AFC width will be limited by the bandwidth of the external modulator (100MHz typically for AOMs). Even if fast modulators are now available \cite{Tang:04}, they may be limited by the current electronics to produce sophisticated AM and FM sequence. Amplitude modulation is also highly demanding in terms of laser power. As the pulse duration is reduced to cover a wider band an increasing number of atoms are addressed within the inhomogeneous profile. To obtain the same population transfer with more atoms, the pulse energy should be increased accordingly. It may here be limited by the power of the continuous laser from which the pulses are cut off. Alternative solutions should be investigated.
We here propose a preparation method based on FM only. The laser frequency is directly modulated without the use of external modulators to produce a broadband optical pumping spectrum. The laser frequency should be able to continuously address a significant fraction of the inhomogeneous linewidth with a good resolution. This problem has been considered in the past for REIC in the context of optical data storage, precisely exploiting the spectro-temporal dimension of the material \cite{mitsunaga1990248, lin1995demonstration}. {Even if these demonstrations cannot be extended to the single photon regime, they give a realistic estimation of the storage capacity, \textit{i.e.} 248 and 4000 bits for \cite{mitsunaga1990248} and \cite{lin1995demonstration} respectively}. The agility and accuracy of chirped lasers have been pushed to an unprecedented level for wideband radio-frequency analysis using REIC \cite{Crozatier:06,Gorju:07}. These achievements should be a source of inspiration for broadband and highly multimode quantum storage since they follow the same logic: laser frequency shifts can reach much larger bandwidths than those achieved by the currently available opto-electronic devices.
The requirements and the generation of a broadband optical pumping FM spectrum will be detailed in a first section. This light will be used as a preparation beam for the protocol. We will then show that an atomic comb can be engraved over a large bandwidth in a \TMYAG crystal. We finally conclude by presenting the storage of \NbM temporal modes in the sample.
\section{Generation of a broadband FM spectrum for optical pumping}
The optical spectrum of the pumping light should produce the atomic frequency comb required for quantum storage. We will show in this section that such a spectrum can be obtained by internal modulation of an extended cavity diode laser.
\subsection{Desired spectrum}
The population dynamics relates the optical pumping spectrum to the resulting atomic comb \cite{AFCbonarota}. Sophisticated pumping schemes involving amplitude and phase modulation optimize the efficiency of the protocol. Since we are mainly interested in highly multimode storage, we here focus on the fundamental properties of the AFC to reach this goal, namely a wide atomic comb with a large number of peaks. Intuitively, the pumping light spectrum should have the same features. The minimum spacing between the peaks depends strongly on the material properties through the optical pumping dynamics \cite{AFCsingle,AFC_NJP} (homogeneous linewidth, sources of line broadening, initial optical depth ...). A good trade-off has been found for our \TMYAG sample in the range 600-700 kHz \cite{AFC_NJP}. The spectrum width should be as large as possible, only limited by the inhomogeneous profile ($\sim$ 10 GHz). The corresponding spectrum can be produced by FM of a monochromatic laser. The modulation frequency $\nu_m$ gives the spacing between peaks. In the wideband FM situation, \textit{i.e.} when the bandwidth is much larger than $\nu_m$ or equivalently the modulation index is much larger than 1, the bandwidth $B_T$ is simply two times the frequency deviation (Carson bandwidth rule). A modulation frequency $\nu_m \sim$ 600-700 kHz can be easily achieved by driving the current of a diode laser. Unfortunately a deep modulation may induce mode-hops because of the feedback from the extended cavity. We here use an extended-cavity diode laser containing an electro-optic prism (EOP) for synchronous tuning of the cavity length and the grating's incident angle \cite{Menager:00}. The fast response of the EOP can easily reach the MHz range. Large frequency deviations should also be accessible. We here propose to aim at a 1 GHz bandwidth for $B_T$ since it corresponds to the limit of our detection electronics in the pulsed regime (see section \ref{pulse}).
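For illustration, in the wideband regime the sideband amplitudes of a pure-tone FM spectrum are Bessel functions $J_n(\beta)$ of the modulation index $\beta=\Delta f/\nu_m$, and Carson's rule states that $\sim$98\% of the power lies within $B_T=2(\Delta f+\nu_m)$. A minimal numerical sketch with the target values discussed above (illustrative only):
\begin{verbatim}
import numpy as np
from scipy.special import jv

nu_m = 626e3            # modulation frequency (Hz)
B_T = 1e9               # target bandwidth (Hz)
delta_f = B_T / 2.0     # frequency deviation (Carson rule, beta >> 1)
beta = delta_f / nu_m   # modulation index, ~800 here

n = np.arange(-1200, 1201)         # sideband orders
amps = jv(n, beta)                 # field amplitude at f0 + n*nu_m
power = amps**2 / np.sum(amps**2)  # normalized power spectrum

inside = np.abs(n) <= beta + 1     # sidebands within the Carson bandwidth
print(f"beta = {beta:.0f}, power in Carson band: {power[inside].sum():.3f}")
\end{verbatim}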
\subsection{Experimental realization and observed spectra}\label{spectrum}
A 1 GHz bandwidth for $B_T$ requires a large voltage driving the intra-cavity EOP. The DC response of the laser frequency to the applied voltage is typically 10 MHz/V. So to obtain $B_T=$ 1 GHz, one needs to apply $\sim$ 100 V (the EOP response is relatively flat over a few MHz for $\nu_m$). To avoid using high-voltage amplifiers, we decide to use a resonant circuit instead. The EOP electrodes operate as the capacitor of a series RLC circuit (see figure \ref{fig:FIGlaser}a.). The resistor is $\sim$ 55 $\Omega$ including the output impedance of the generator and inductor resistance. The inductance is chosen to obtain a resonance in the 600-700 kHz range (470 $\mu$H in our case).
\begin{figure}[ht]
\centering
\includegraphics[width=14cm]{FIGlaser.eps}\caption{a) Extended-cavity laser diode (LD) with an intra-cavity EOP. A resonant RLC circuit enhances the moderate applied voltage. A Fabry-Perot Interferometer (FPI) is used to monitor the FM laser spectrum and measure the bandwidth $B_T$. b) Typical spectrum from the FPI. c) With a 470 $\mu$H inductor and a 55 $\Omega$ resistor, the resonance peaks at 614 kHz. The fit with the RLC circuit formula (solid red line) gives 143 pF for the capacitor. It is consistent with the expected capacitance of the EOP electrodes (see text for details). d) Bandwidth as a function of the driving voltage. We finally set the voltage to 10.8 V$_{pp}$ corresponding to a \Bw bandwidth.}
\label{fig:FIGlaser}
\end{figure}
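The quoted resonance parameters follow from the elementary series-RLC formulas. The short script below (an illustrative check using the component values quoted above) reproduces the fitted resonance frequency and estimates the voltage enhancement seen by the EOP electrodes:
\begin{verbatim}
import math

R = 55.0     # ohms: generator output impedance + inductor resistance
L = 470e-6   # henry
C = 143e-12  # farad: fitted capacitance of the EOP electrodes

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # series resonance
Q = math.sqrt(L / C) / R  # quality factor = voltage gain across C

print(f"resonance: {f0 / 1e3:.0f} kHz")  # ~614 kHz, matching the fit
print(f"Q factor : {Q:.0f}")  # ~33, i.e. ~360 Vpp on the EOP for a 10.8 Vpp drive
\end{verbatim}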
To verify that the light spectrum is able to cover a large bandwidth, we use a Fabry-Perot Interferometer (FPI, Toptica FPI 100). We then directly observe in figure \ref{fig:FIGlaser}.b the FM bandwidth even if equally spaced sidebands are not resolved (the FPI resolution is a few MHz). By changing the modulation frequency $\nu_m$, we measure the resonance of the RLC circuit (see figure \ref{fig:FIGlaser}.c). On this curve, we observe a small piezo-mechanical resonance of the EOP crystal at 530 kHz corresponding to previous measurements (circled marker in figure \ref{fig:FIGlaser}.c). This effect should be avoided to obtain a wideband FM spectrum. We finally conclude that close to resonance with $\nu_m =$ 626 kHz a \Bw bandwidth can be obtained for a low driving voltage (see figure \ref{fig:FIGlaser}.d). These values are set for the rest of the experiment.
The direct FM of our laser allows us to generate a spectrally periodic spectrum covering typically 1GHz. This broadband light can be used to produce the atomic comb through the optical pumping process.
\section{Atomic Frequency Comb produced by FM spectrum}
Frequency selective optical pumping or equivalently SHB is intimately related to population dynamics in the atomic system. For the optical pumping to be efficient, a long population lifetime of the shelving state is required. {There is no hyperfine structure in \TMYAG at zero magnetic field.} We here apply a field to exploit the long lifetime of the Zeeman sublevels \cite{Ohlsson:03}. It can reach a few seconds at low temperature with an appropriate orientation of the field with respect to the crystalline axes \cite{louchet:035131}. The laser polarization should then be applied accordingly. We first describe the specifically chosen relative orientation of the magnetic field and the pumping light polarization. The general idea has been previously described in Refs. \cite{AFCsingle,AFC_NJP} but significant improvements have been implemented here. We will see that an active self-stabilization of the laser on the atomic comb is required as well. This experimental development is critical to obtain the long-lived atomic comb structure.
\subsection{Pumping scheme in \TMYAG}
Since we use Zeeman sublevels as shelving states for the population, an accurate definition of the magnetic field and laser polarization orientations with respect to the crystalline axes is necessary. The situation is complex in YAG because the thulium ions occupy six orientationally inequivalent crystal sites. Specific orientations of the laser polarization \cite[and references therein]{TmSites} and magnetic field \cite{louchet:035131} can exclude or render equivalent a set of different sites. To optimize the preparation of the atomic comb, the field and the polarization have been previously applied along the [001] axis of the crystal \cite{AFC_NJP} (see figure \ref{fig:SitesTmYAG}).
Even if the previous situation is globally satisfying, we here propose a refinement that may be important for weak signal storage (\textit{e.g.} single photon). The preparation procedure involves strong pumping light as compared to a weak signal to be stored. Both are well separated in time and a long delay can be inserted between them, only limited by the population lifetime in the shelving state, reaching 7 s in our case. It is actually a major characteristic of the protocol. It allows the detection of a stored pulse at the single photon level \cite{AFCsingle,AFC_NJP}. The isolation of the probe beam requires a good extinction of the preparation beam (pump). Different techniques can be combined for that purpose. A small angle between pump and probe beams can be applied \cite{AFC_NJP}. Additionally, acousto-optic modulators \cite{AFCsingle, AFC_NJP, sabooni2009storage} and/or a mechanical chopper \cite{AFCsingle} are used as optical switches. We here propose a configuration where pump and probe have perpendicular polarizations. Because the excitation scheme is directly related to the crystalline structure, a specific study should be done. This situation is depicted in figure \ref{fig:SitesTmYAG} for \TMYAG.
\begin{figure}[ht]
\centering
\includegraphics[width=9cm]{SitesTmYAG.eps}\caption{Representation of six orientationally inequivalent crystal substitution sites of thulium (1-6). The parallelepiped represents the local $D_2$ symmetry of the sites. As an example for site 1, the transition dipole is parallel to a local axis $y$ \cite{TmSites}. The laser propagates along [110] (dashed line). The polarizations of the preparation and the probe beams are respectively parallel to [1$\bar{\mathrm{1}}$0] (red arrow) and [001] (green arrow). We then color in red the edges (resp. in green the faces) of the sites, which are excited by the preparation (resp. probe) beam. The magnetic field is parallel to [001].}
\label{fig:SitesTmYAG}
\end{figure}
The probe beam polarization is along [001], exciting equivalently the sites 3,4,5\&6. The transition dipole is along the long dimension of the parallelepiped for each site (depicted for example as the local axis $y$ for site 1). The pump equivalently excites the same sites (with a lower Rabi frequency) with perpendicular polarization (parallel to [1$\bar{\mathrm{1}}$0]). It also strongly excites the site 1, which is not probed anyway. This technique can be generalized to other REIC.
After its orientation, the magnitude of the magnetic field still has to be defined. It critically influences the efficiency of the protocol \cite{AFCbonarota}. As soon as the preparation bandwidth is larger than the Zeeman splitting in the ground and the excited state, reciprocal optical pumping can occur and erase the atomic comb. We cannot apply a sufficiently strong magnetic field to split the sublevels further apart because the Zeeman shift is relatively small (especially in the excited state) and our magnetic field is limited to a few hundred gauss. A partial matching of the Zeeman splitting to the comb spacing has been successfully applied \cite{AFC_NJP, AFCbonarota} to prevent reciprocal optical pumping. We here apply a magnetic field of 95 G. It corresponds to the lowest magnetic field to obtain AFC storage. {Below this value, no hole-burning mechanism is observed because the lifetime of the shelving state is too short \cite{AFCbonarota}}. This low value additionally reduces the inhomogeneous broadening of the Zeeman splitting, measured to be proportional to the magnetic field amplitude.
We can now apply the broadband pumping light and look at the resulting comb. From the laser, we extract two beams, the pump (preparation) and the probe, independently controlled by two AOMs (see figure \ref{fig:montagetot1}). During the preparation sequence (50ms), the pumping beam is on and the laser is modulated as described in section \ref{spectrum}. We wait 5ms before probing the comb; this delay is sufficient to avoid the fluorescence from the excited state. During this interruption the frequency modulation is slowly switched off to prevent any perturbation of the laser. The probe beam is then monochromatic. To monitor the transmission spectrum of the atomic comb, we simply chirp the probe beam AOM frequency over a few MHz. It is adequate to observe the central part of the comb (a few peaks).
\begin{figure}[ht]
\centering
\includegraphics[width=14cm]{montagetot1.eps}\caption{Optical setup for the preparation and probe beams. They are independently controlled by AOMs. The broadband spectrum is generated by the 626 kHz modulation as described in section \ref{spectrum}. A PI servo-loop is used to stabilize the pumping light on the transmission of the atomic comb (the 100 kHz modulation on the laser current is used to generate an error signal). It is extensively described in section \ref{locking} (PBS: polarizing beam splitter). An electro-optic amplitude modulator (EOAM) has been inserted on the probe beam to create ultra-short pulses to be stored (see section \ref{pulse} for details).}
\label{fig:montagetot1}
\end{figure}
The previous developments are not sufficient to observe any comb-like structure. We suspect the long-term laser drift to continuously erase the structure. The effect was partially observable in real-time depending on the experimental conditions (\textit{e.g.} a smaller modulation bandwidth). The drift integrated over 7 s (lifetime of the structure) should be compared to the spacing between the peaks ($\nu_m=626$ kHz in our case). This effect can be avoided by stabilizing the laser on an external reference \cite{AFC_NJP,sabooni2009storage}. It is not possible in our case because we choose to directly modulate the laser in order to reach a large bandwidth. We here implement a new locking scheme that is compatible with the FM spectrum of the laser.
\subsection{Laser self-locking on the atomic comb}\label{locking}
The preparation sequence can be interpreted as SHB by the optical pumping spectrum. SHB has been employed to externally stabilize monochromatic lasers \cite{Sellin:99,PhysRevB.62.1473,Bottger:03,Julsgaard:07,tay-2010}. So we propose to generalize this method to a broadband spectrum.
Locking on regenerative SHB is relevant because the typical time response of the servo-loop matches the dynamics of the atomic systems \cite{Julsgaard:07}. This is precisely what we need. The generalization to a broadband spectrum is not obvious at first sight. The optical pumping spectrum is actually composed of discrete equally spaced peaks. Each peak creates in that sense a spectral hole. A small laser drift should induce a reduction of the transmission through the SHB material because each peak goes off-resonance from its own hole. By monitoring the total transmission of the broadband light, it should be possible to compensate for the laser drift and stabilize its central frequency.
The setup is described in figure \ref{fig:montagetot1}. We collect the transmission of the pumping beam. This is facilitated because pump and probe have perpendicular polarizations. To generate an error signal, we gently modulate the laser current at 100 kHz with a few percent modulation index. The error signal has the same periodic structure as the atomic comb. It is amplified, additionally integrated (PI servo-loop) and fed back to the laser current whose bandwidth is limited to 10kHz in our case. The locking scheme is complicated by the time sequence alternating pump (50ms), waiting time (5ms) and probe. The servo-loop is only active when the pump is on (longest fraction of the total time sequence). The PI parameters have been optimized by looking at the transmission spectrum during the probe sequence (atomic comb). The PI corner where proportional and integrator have the same gain is set to 300 Hz. The integrator gain is limited to 20 dB (as compared to the proportional level) at low frequency. Otherwise, the system is not stable. We attribute this behavior to electronic offsets which are integrated when the pump is off (probe sequence). As a consequence, the loop does not exhibit an integrator behavior at low frequency \cite{fox20031}. The feedback loop does not compensate for extremely slow drifts. It is not an issue because the error signal is periodic over a large bandwidth (comparable to $B_T$). After a slow drift (larger than the peak spacing), the system can re-lock itself on a neighboring hole without affecting the global shape of the comb structure.
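The loop filter described above acts as a proportional path with a gain-limited integrator, i.e. a lag compensator. A minimal transfer-function sketch (the 300 Hz corner and the 20 dB low-frequency gain limit are the values quoted above; the overall gain $K_p$ is an arbitrary illustrative value):
\begin{verbatim}
import numpy as np
from scipy import signal

Kp = 1.0      # illustrative proportional gain
f_pi = 300.0  # PI corner (Hz)
limit = 10.0  # 20 dB cap on the integrator gain at low frequency

# H(s) = Kp*(s + w_pi)/(s + w_pi/limit): integrator-like between the two
# corners, flat gain Kp above f_pi, flat gain limit*Kp at DC
w_pi = 2 * np.pi * f_pi
H = signal.TransferFunction([Kp, Kp * w_pi], [1.0, w_pi / limit])

w, mag, phase = signal.bode(H, w=2 * np.pi * np.logspace(0, 4, 200))
print(f"DC gain {mag[0]:.1f} dB, gain at 10 kHz {mag[-1]:.1f} dB")
\end{verbatim}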
This stabilization scheme allows us to observe the atomic comb produced by the broadband pumping light.
\subsection{Resulting atomic comb}\label{comb}
As described previously, the laser is rendered monochromatic by switching off the broadband modulation during the probing sequence. We here chirp the AOM to record the transmission spectra and probe the atomic populations. We then only have access to the central part of the total absorption band because the AOM is scanned over a few MHz only. It is sufficient to probe the atomic comb contrast by distinguishing a few peaks (see figure \ref{fig:TraitPeigne800MHz}.a).
\begin{figure}[ht]
\centering
\includegraphics[width=11cm]{TraitPeigne800MHz.eps}\caption{a) Observed optical depth spectra. The background (black line) is recorded when the magnetic field is off. The same flat figure is obtained without stabilization of the laser. With the self-locking technique detailed in section \ref{locking}, the modulation of the pumping light is now imprinted on the atomic absorption spectrum. b) Expected positions of the holes (blue and thick red bars) and anti-holes (green bars) in \TMYAG with a magnetic field of 95 G. The pumping spectrum is composed of discrete lines separated by $\nu_m=626$ kHz. These lines correspond to the positions of holes, represented by thick red bars (see text for details).}
\label{fig:TraitPeigne800MHz}
\end{figure}
A clear comb-like structure is observable. It validates our self-locking technique detailed in section \ref{locking}. The contrast is limited and is {significantly} lower than in previous realizations \cite{AFC_NJP, AFCbonarota}. It can be due to the imperfection of our stabilization scheme whose low-frequency gain is limited for stability reasons. One can alternatively incriminate the instantaneous spectral diffusion (0.5\% Tm-doped sample \cite{liu2005spectroscopic, nilsson02}). It should now be considered because we excite a much larger bandwidth and thus a much larger number of ions. This effect still has to be evaluated independently.
The effect of reciprocal optical pumping between Zeeman sublevels should also be considered. Since the splittings in the ground state $\Delta_g=2.7$ MHz and excited state $\Delta_e=0.57$ MHz are much smaller than the bandwidth, pumping and depumping occur at the same time within the inhomogeneous profile. It can equivalently be interpreted by reference to SHB spectroscopy \cite{yano}. In the present situation, a monochromatic laser would create a hole at the central frequency, two side holes positioned at $\pm \Delta_e$, four anti-holes at $\pm \Delta_g$ and $\pm \left(\Delta_g-\Delta_e\right)$ \cite{Ohlsson:03}. {Additional structures are not observed in \TMYAG because the coupling strength of the crossed optical transitions involving a spin-flip is much weaker than the direct ones (no spin-flip)} \cite{deseze}. Our broadband pumping light is actually composed of discrete lines, each burning its own SHB spectrum. We have represented the expected positions of the holes and anti-holes in figure \ref{fig:TraitPeigne800MHz}.b. The blue and thick red bars mark the position of holes and the green bars the position of anti-holes. {The pumping and depumping regions are ideally well separated. The intrinsic laser linewidth $\sim$ 500 kHz \cite{Gorju:07} introduces an overlap reducing the contrast of the comb. Additionally, a residual slow drift of the laser can erase the comb as soon as the frequency shifts by half the comb spacing.} The interplay between these two effects certainly produces a spectrum where pumping and depumping regions are not fully separated. It may explain the moderate contrast of the resulting atomic comb. The population dynamics involving four-level systems pumped by a broadband spectrum could possibly be modeled by rate equations. This is beyond the scope of the current proof-of-principle demonstration.
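The overlap argument can be made quantitative with a few lines of arithmetic, locating the holes and anti-holes of figure \ref{fig:TraitPeigne800MHz}.b with respect to the nearest comb line (an illustrative sketch using the splittings quoted above):
\begin{verbatim}
Delta_g = 2.70e6  # ground-state Zeeman splitting (Hz) at 95 G
Delta_e = 0.57e6  # excited-state Zeeman splitting (Hz)
nu_m = 626e3      # comb spacing (Hz)

holes = [0.0, +Delta_e, -Delta_e]
antiholes = [+Delta_g, -Delta_g, +(Delta_g - Delta_e), -(Delta_g - Delta_e)]

def offset(f):  # distance from the nearest comb line (pumped frequency)
    return abs(f - nu_m * round(f / nu_m))

for f in holes + antiholes:
    kind = "hole" if f in holes else "anti-hole"
    print(f"{kind:9s} at {f / 1e3:+7.0f} kHz, "
          f"{offset(f) / 1e3:5.1f} kHz from the nearest comb line")
\end{verbatim}
With the $\sim$500 kHz laser linewidth, the $\sim$200 kHz margins separating the anti-holes from the comb lines are partially washed out, consistent with the moderate contrast discussed above.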
Even if we only observe the central part of the atomic comb, we expect that it is actually covering a significant fraction of the pumping light FM bandwidth $B_T$. The comb should then be able to store extremely short pulses.
\section{Multiple pulse storage}\label{pulse}
In order to test the storage capacity of the crystal, we send short pulses matching the atomic comb bandwidth. This is a direct and relevant manner to verify the total comb bandwidth. A fibered Mach-Zehnder electro-optic modulator is inserted on the probe beam (noted EOAM in figure \ref{fig:montagetot1}; fabricated by Alenia Marconi Systems, its bandwidth is expected to be a few GHz). The main interest of the AFC protocol is its capability to store many temporal modes, \textit{i.e.} a train of short pulses. The train is produced by applying a $V_\pi$ voltage modulation near the linear region of the EOAM. The modulation frequency is 800 MHz, chosen to be smaller than the expected comb bandwidth (\Bw). It creates a series of {identical} pulses separated by 1.25 ns (see figure \ref{fig:TraitEcho800MHzReduced_article}.b for example). {We do not have the technical possibility to generate an arbitrary sequence of pulses as required to test the full storage capacity of the memory. Nevertheless, the retrieval of an identical pulse train will give us sufficient information for this future working regime}. The total duration of the train should be limited by the storage time (1.6 $\mu$s in our case, inverse of the comb peak spacing). Its duration is actually chosen to be 1.375 $\mu$s, controlled by the probe beam AOM. The signal to be stored is then composed of \NbM pulses.
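The number of temporal modes follows directly from these figures (a quick check of the numbers quoted above):
\begin{verbatim}
f_mod = 800e6       # pulse-train modulation frequency (Hz)
T_train = 1.375e-6  # train duration set by the probe AOM (s)
T_storage = 1.6e-6  # storage time, inverse of the comb peak spacing (s)

n_modes = round(T_train * f_mod)  # pulses separated by 1/f_mod = 1.25 ns
print(n_modes, T_train < T_storage)  # 1100 modes; the train fits in the memory
\end{verbatim}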
\begin{figure}[ht]
\centering
\includegraphics[width=14cm]{TraitEcho800MHzReduced_article.eps}\caption{Storage of \NbM pulses. a) Incoming train transmitted by the crystal (in black), 1.6 $\mu$s later the echo appears (in red). b) The train is composed of \NbM pulses separated by 1.25 ns. c) Resolved pulses of the AFC echo. The efficiency is typically 1\% (see text for details). Since the echo is weak and to discard a potential cross-talk of the 800 MHz modulation to the detection line, we switch off the magnetic field and record a reference level (no modulation is observed on the red dashed line).}
\label{fig:TraitEcho800MHzReduced_article}
\end{figure}
We clearly observe the retrieved train through the atomic comb (see figure \ref{fig:TraitEcho800MHzReduced_article}.c). The efficiency is in the 1\% range. {It is worth noting that the efficiencies are an order of magnitude lower than previously observed using the same crystal and a comparable optical thickness \cite{AFC_NJP, AFCbonarota}. As previously discussed in section \ref{comb}, the contrast of the comb is significantly lower than before because the global optical pumping dynamics is more complex. To verify that the efficiency is actually limited by the poor contrast of the comb, we calculate it from the optical depth spectrum in figure \ref{fig:TraitPeigne800MHz}. Applying the model presented in Refs. \cite{AFC_NJP, ChaneliereHBSM}, we indeed expect 0.7\%, in relatively good agreement with the experimental result. The limiting factor is then the preparation procedure and the available optical depth of the crystal.}
Comparing the absolute 800 MHz modulation contrast of the incoming and retrieved trains is not very meaningful because our detection is here limited by the electronic bandwidth of the detector (1GHz for the EOT 2030A) and the oscilloscope (1GHz for the Lecroy 104 MXi-A). {Nevertheless, we clearly see that the contrast of the retrieved train is lower than that of the incoming one. This reduction can be attributed to a broadening of each pulse because of the limited memory bandwidth. Since we do not measure the complete transmission spectrum of the memory, we cannot really define a 3dB bandwidth. In the future working regime where each pulse should be independently controlled, a broadening effect would induce a partial overlap of the pulses and thus reduce the fidelity per mode. It seems that there is a trade-off between efficiency, number of modes and actual bandwidth of the atomic comb. The exact performance of such a highly multimode memory defining the quantum communication rate still has to be evaluated.}
\section{Conclusion}
We have demonstrated the storage of \NbM temporal modes in \TMYAG using an atomic comb covering \Bw. It corresponds to a major improvement as compared to previous realizations (64 modes in \NDYVO with a 100MHz bandwidth \cite{Usmani2010}). The comb preparation technique is original because it only involves frequency-modulation. The direct modulation of the laser opens the way to very large bandwidth really exploiting the inhomogeneous broadening of REIC for quantum and classical processing applications. Even with a moderate magnetic field, we have shown that a sub-MHz spectral structure can be tailored all over the inhomogeneous profile. The experiment illustrates the broadband programming potential of closely spaced ground state sublevels. We have been able to shape the absorption profile without resorting, as usual, to distant shelving states, located outside the absorption bandwidth. In \TMYAG for instance, in applications such as the wideband spectrum analysis of an optically carried radiofrequency signal, one refreshes the spectrally periodic processing filter by continuously pumping absorbing centers into the $^{3}\textrm{F}_{4}$ bottle-neck state \cite{Lavielle:03}. This state offers little flexibility, since the lifetime is fixed at $\sim$ 10 ms. Instead, the hyperfine/Zeeman storage can be preserved for as long as $\sim$ 10 s and can be reconfigured rapidly by switching the magnetic field. Our demonstration should then be considered in a wider context.
We finally optimize the pumping scheme in our crystal (magnetic field orientation and polarization of the pumping beam). This approach should also be transposable to a wide range of rare-earth materials. Moreover, the laser is self-stabilized during the pumping sequence to ensure the engraving of the comb structure by SHB. It is a generalization of previous works for monochromatic lasers \cite{Sellin:99, PhysRevB.62.1473, Bottger:03, Julsgaard:07, tay-2010}. This stage is absolutely critical and should be considered as a general tool.
This work is supported by the European Commission through the FP7 QuRep project, by the national grant ANR-09-BLAN-0333-03 and by the Direction G\'en\'erale de l'Armement.
We thank L. Morvan and S. Molin from Thales R\&T for providing the AMS modulator.
\section*{References}
\bibliographystyle{iopart-num}
\providecommand{\newblock}{}
\section{Introduction}
Trajectory Optimization (TO), such as methods based on Mixed-Integer Convex Programming (MICP), solves a motion planning problem to generate an optimal trajectory while satisfying constraints.
One of the advantages TO has compared with other planners, such as
Sampling-Based Planning (SBP) (e.g., Rapidly-exploring Random Tree) and Graph-Search Planning (GSP) (e.g., A*), is the ability to easily formulate a wide variety of constraints, including equality constraints.
Conversely, GSP and SBP take considerable time in a narrow passage because it is difficult to place a sufficient number of grid cells or samples to represent the states in the passage \cite{sample_constraints}.
\begin{figure}
\centering
\includegraphics[width=0.4599\textwidth, clip]{2d_traj_s_g_legend.pdf}
\caption{Generated trajectories in $\mathbb{R}^{2}$. The short-horizon TO gets stuck at the local optimum and cannot find the trajectory from start to goal. The long-horizon TO finds the optimal trajectory but takes an extended amount of time. LTO prioritizing planning time avoids the area where the time for solving TO is long due to many integer variables (i.e., obstacles). LTO prioritizing optimality of the trajectory finds the optimal solution with less planning time compared with the long-horizon TO.}
\label{concept}
\end{figure}
However, TO has two main drawbacks: expensive computational complexity with a long horizon and convergence to local optima \cite{Risk}, \cite{IntroTO}. Long-horizon TO is indispensable for generating feasible global trajectories, but the computation time grows exponentially as the number of horizons increases.
Model Predictive Control (MPC)
can spend less planning time than the long-horizon TO \cite{MIT_MPC}-\cite{fast_MPC}.
However, since it solves relatively short-horizon TO,
it has a greater probability of getting stuck at local optima.
To this end, we propose Lazy Trajectory Optimization (LTO), unifying the local short-horizon TO and the global long-horizon GSP.
LTO reasons about the same constraints as the original long-horizon TO but with improved time complexity.
We also propose a cost function that considers the computation time of TO to balance the optimality of the trajectory and the planning time.
Next, we build on Lazy Weighted A* (LWA*) \cite{LWA*} and improve it by also making the vertex generation "lazy". In this work, "lazy" means that LTO runs TO only when it intends to evaluate a configuration or trajectory.
Because LTO solves many similar TOs,
it employs a warm-start to solve TO,
resulting in less planning time. By employing MICP as a short-horizon TO, we can analyze the computational complexity of LTO in addition to the bounded solution cost.
Our contributions can be summarized as follows:
\begin{enumerate}
\item We propose LTO, a framework incorporating GSP as a high-layer planner and TO as a low-layer planner, that
efficiently generates long-horizon global trajectories.
\item We present the cost function balancing the planning time of TO and the optimality of the trajectory.
\item We present the theoretical properties of LTO.
\item We demonstrate LTO's efficiency on motion planning problems of a 2 DOF free-flying robot and a 21 DOF legged robot.
\end{enumerate}
\section{Related Work}
TO finds an optimal trajectory that satisfies constraints \cite{Risk}-\cite{FASTER}.
We focus on using MICP as a short-horizon TO for the following reasons.
First, MICP can deal with nonlinear constraints with approximations and constraints involving discrete variables \cite{MIT_envelope}.
Another reason is that the solving time for MICP based on Branch and Bound (B\&B) \cite{branch}-\cite{LVIS} is bounded theoretically.
Several GSPs have been studied to decrease the planning time \cite{LWA*}-\cite{SD}.
LWA* \cite{LWA*} evaluates edges only when the planner uses them. At the start of planning, LWA* does not know the true edge cost and assigns an optimistic value. During the planning process, when a state is selected for expansion, LWA* evaluates the true edge cost, resulting in decreased planning time. The limitations of LWA* are that it does not consider the difficulty of edge evaluations using TO and it assumes that the true vertex configuration is known.
The hierarchical planners \cite{hierarchical1}-\cite{hierarchical2} have shown remarkable success. Our LTO is similar to these algorithms in that it decreases the planning time while considering dynamics. LTO could spend less planning time in cluttered environments since it simultaneously solves GSP and TO to avoid dead-ends, while the hierarchical planners need to run expensive global planners again until they find feasible paths.
Several works have been proposed to obtain good warm-starts \cite{UAV_hybrid}-\cite{wheel_hybrid}. Compared with them, LTO utilizes past trajectories in the graph to enhance the quality of warm-starts.
\section{Problem Formulation}\label{problem_formulation}
\subsection{Notation}
\begin{figure}
\centering
\includegraphics[width=0.29\textwidth]{overview_updated2.pdf}
\caption{Overview of LTO. Given an initial graph where the true configuration of the robot and the trajectory are unknown, LTO iteratively solves TO locally to get the true vertices (configurations) and edges (trajectories) in $\mathcal{G}$. LTO either expands a vertex, gets a true vertex using TO, or gets a true edge using TO. Assume the vertex P is the current vertex LTO chooses from an open list. (a): LTO gets the true configuration of the vertex C. (b): it generates the true trajectory from the vertex P to C. (c): it expands the vertex C and inserts the vertices A and B into the open list.}
\label{overview_fig}
\end{figure}
LTO solves TO with GSP as shown in \fig{overview_fig}. Let $\mathcal{G}=(V, E)$ be an a priori unknown graph consisting of vertices $V = \left(v_1, v_2, \ldots\right)$ and edges $E = (e(v_i \leftrightarrow v_j) \forall {i}, \forall {j})$.
Each vertex represents the state of a robot and each edge represents the trajectory of the robot between the vertices. We make voxels in the continuous domain, such that each vertex lies in its own voxel as shown in \fig{edge}.
At the start of planning, we connect every two vertices if the $\ell_{\infty}$ distance between them is less than or equal to $r$. Let $K$ be the number of intervals along each axis. Let $i$ be the number of voxels along each axis used to produce the hypercube region where LTO solves the edge (see \fig{edge}).
\subsection{Graph Structure}\label{graph_structure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{graph_update_2.pdf}
\caption{The planning procedure by LTO in $[0, 1]^2$ where $K=4, i=1, r = 1/4$. The left figure shows a graph at the start of planning where each vertex is inside the orange voxel and each edge is represented as a black dashed line. When LTO investigates a vertex, it solves TO within its voxel. When it investigates an edge, it solves TO within the associated voxels. For example, when it investigates the edge from the vertex (a) to (d), it solves TO in the voxels (a), (b), (c), (d), considering the constraints. The right figure shows the current graph after iterations. For simplicity, it does not show the black dashed lines. The true vertices and the true edges found by TO are shown as the black circles and blue lines. The infeasible vertex and edges judged by TO are shown as the red circle and red lines. For example, the vertex (e) is removed since TO cannot find a feasible configuration in the voxel (e).}
\label{edge}
\end{figure}
We solve TO within associated voxels and only use the constraints within voxels as shown in \fig{edge}. It means
we remove several domain-specific constraints, such as obstacle-avoidance constraints, if they are outside the voxels, resulting in decreased solving time. Because we keep each vertex and edge within the voxels, the constraints outside the voxels do not influence the generated vertices and edges.
We describe how we build our graph. In the beginning,
we make voxels in the continuous domain and place a vertex in each voxel.
If we place a vertex to represent the robot's state without considering the feasibility of the state, the probability of the state associated with the vertex being infeasible would be high. If we use TO to place a vertex, TO considers constraints so that LTO can place the vertex in a feasible region. Thus, it uses TO to get the true configuration associated with the vertex and update $\mathcal{G}$. To execute TO, it associates each grid voxel with a continuous state of the robot.
For edge generation,
it uses TO to get the true trajectory associated with the edge and update $\mathcal{G}$.
LTO solves TO in the hypercube, consisting of corner points of the voxels where the target vertices are located.
\subsection{Mixed-Integer Convex Programs}\label{MICP_Formulation}
The MICP to generate edges in $\mathcal{G}$ is given by:
\begin{equation}
\begin{array}{cl}
\text { minimize } & c_T(x_N, z)+\sum_{t=0}^{N-1}c_i(x_t, z)\\
\text { subject to } & f_i(x_t, z) \leq 0, \quad t=0, \ldots, N-1 \\
& x_{\min } \leq x_{t} \leq x_{\max }, \quad t=0, \ldots, N \\
& x_{0}=x_{s}, \quad x_{N}=x_{g} \\
& x_{t} \in \mathcal{X}, \quad t=0, \ldots, N\\
&z \in\{0,1\}^{n_{z}}
\label{MICP_standard}
\end{array}
\end{equation}
where $x_t$ are the states of the robot at time $t$, $z$ are binary decision variables, $c_T, c_i, f_i$ are convex functions, and $\mathcal{X}$ is the convex set.
When finding the edge in $\mathcal{G}$, we solve \eq{MICP_standard}, where $x_{s}, x_{g}$ are the state of the start vertex and the state of the goal vertex, respectively.
When finding the vertex in $\mathcal{G}$, we solve \eq{MICP_standard} with $N = 0$ without $\sum_{t=0}^{N-1} c_{i}\left(x_{t}, z\right)$.
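For illustration, the following sketch sets up a toy instance of (\ref{MICP_standard}) with the cvxpy modeling package (not the formulation used in our experiments; the keep-out box, the big-M constant and the path-length cost are placeholder assumptions):
\begin{verbatim}
import cvxpy as cp

N, M = 12, 50.0  # horizon and big-M constant (placeholder values)
x = cp.Variable((N + 1, 2))  # robot position at each step
z = cp.Variable((N + 1, 3), boolean=True)  # one binary per obstacle side
cons = [x[0] == [0, 0], x[N] == [4, 0], x >= -5, x <= 5]
for t in range(N + 1):
    # keep-out box {1.5 <= x0 <= 2.5, x1 <= 1}: each point is left of it,
    # right of it, or above it (big-M disjunction, at least one side active)
    cons += [x[t, 0] <= 1.5 + M * (1 - z[t, 0]),
             x[t, 0] >= 2.5 - M * (1 - z[t, 1]),
             x[t, 1] >= 1.0 - M * (1 - z[t, 2]),
             cp.sum(z[t]) >= 1]
cost = cp.sum_squares(x[1:] - x[:-1])  # convex path-length surrogate
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()  # needs a mixed-integer-capable solver, e.g. GUROBI or SCIP
print(prob.status, x.value.round(2))
\end{verbatim}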
\subsection{Warm-Start Strategy}\label{warm-start-strategy}
Let $v_p$ and $v_c$ be the start and goal state in the trajectory LTO tries to generate in $\mathcal{G}$. LTO searches the most similar trajectory in $\mathcal{G}$ based on the deviation cost as follows:
\begin{equation}
d_{\operatorname{cost}} = \left\|v_{p}-v_{i}\right\| + \left\|v_{c}-v_{j}\right\|
\label{warm_start_eq}
\end{equation}
where $v_i$ and $v_j$ are the start and goal of other trajectories in $\mathcal{G}$. We assume that a trajectory in $\mathcal{G}$ whose start and goal are close to $v_p$ and $v_c$ is similar to the trajectory to be generated, so that the initial guess is already aware of the constraints.
Hence, when LTO tries to generate an edge not investigated by TO yet, it uses the edges already generated by TO as initial guesses if $d_{\operatorname{cost}}$ is lower than the threshold.
As time passes, it solves more edges and this information enhances the quality of the warm-start for the current trajectory generation.
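A sketch of this lookup (illustrative; \texttt{solved\_edges} maps the endpoint pairs of already-generated trajectories to the stored TO solutions used as initial guesses):
\begin{verbatim}
import numpy as np

def warm_start(v_p, v_c, solved_edges, threshold):
    """Return the stored solution with the smallest deviation cost, if any."""
    best, best_d = None, float("inf")
    for (v_i, v_j), solution in solved_edges.items():
        d = (np.linalg.norm(np.subtract(v_p, v_i))
             + np.linalg.norm(np.subtract(v_c, v_j)))
        if d < best_d:
            best, best_d = solution, d
    return best if best_d <= threshold else None  # None -> cold start
\end{verbatim}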
\section{Lazy Trajectory Optimization}\label{LazyARA*}
We present LTO that unifies TO and GSP.
We employ LWA* as GSP of LTO
and improve LWA* by proposing a new cost function that considers the difficulty of TO with the guaranteed suboptimality bound. We also delay vertex validation using TO until the planner intends to expand the vertex since executing TO for all the voxels to validate the vertices is demanding.
The high-level process of LTO is shown in \fig{overview_fig}.
Given a uniform grid graph where LTO does not know the true vertex and edge, LTO performs either an expansion, a vertex validation, or an edge generation.
We use the notation $X \stackrel{+}{\leftarrow}\{\mathbf{x}\}$ and $X \stackrel{-}{\leftarrow}\{\mathbf{x}\}$ to show the compounding operations $X \leftarrow X \cup \mathbf{x}$ and $X \leftarrow X \setminus \mathbf{x}$, respectively. $Q_o, Q_c$ are priority queues to maintain the states discovered but not expanded and the expanded states.
$\hat g(v)$, $\hat h(v)$, and $\hat f(v)$ are estimates of cost-to-come, cost-to-go, and cost from the start to goal through $v$, respectively. We use $\hat h(v)$ as: $\hat h(v)=\left\|v-v_{\text{goal}}\right\|_2$. $\operatorname{TrueVertex}$ and $\operatorname{TrueEdge}$ show if a state $v$ has the true configuration and the true edge cost.
$\operatorname{Conf}(v)$ represents the true configuration of the vertex.
\subsection{TO-Aware Cost}
We propose the TO-aware cost as follows:
\begin{equation}
\begin{array}{l}
c\left(v_{1}, v_{2}\right)=\left\|v_{1}-v_{2}\right\|_2,
c_{TO}\left(v_{1}, v_{2}\right)= (1+\omega ^{n_i}) c\left(v_{1}, v_{2}\right)
\end{array}\label{cost_lazy}
\end{equation}
where $c$ is the cost of the edge using the Euclidean distance and $c_{TO}$ is the inflated cost considering the time complexity of TO. $n_i$ is the number of discrete decision variables associated with edge generation and we count $n_i$ within the associated voxels.
$\omega$ is a user-defined inflation factor.
$c_{TO}$ can be very large, so that LTO does not eagerly investigate edges in voxels with many discrete variables.
In other words, $\omega$ is a tuning knob that balances the optimality of a trajectory and the planning time.
We use $c_{TO}$ as the cost of the edge if it is not investigated by TO and use $c$ if it is investigated by TO.
While many works try to minimize the number of edge evaluations \cite{LazySP}, only a few papers discuss the "difficulty" of the edge evaluation. Recognizing that the running time of the edge generation by TO grows exponentially as the number of discrete variables increases \cite{MIT_envelope}, \cite{Ponton}, we propose $c_{TO}$.
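A direct transcription of (\ref{cost_lazy}) reads as follows (a sketch; in practice $n_i$ is obtained by counting the discrete decision variables, e.g. obstacle binaries, inside the voxels associated with the edge):
\begin{verbatim}
import numpy as np

def c(v1, v2):
    """Euclidean edge cost."""
    return float(np.linalg.norm(np.asarray(v1) - np.asarray(v2)))

def c_TO(v1, v2, n_i, omega):
    """TO-aware cost: inflates edges whose TO has many integer variables."""
    return (1.0 + omega**n_i) * c(v1, v2)

print(c_TO((0, 0), (1, 0), n_i=6, omega=1.5))  # 12.39: cluttered region
print(c_TO((0, 0), (1, 0), n_i=0, omega=1.5))  # 2.00: free region
\end{verbatim}
With $\omega\leq 1$ the inflation stays bounded by a factor of two, whereas $\omega>1$ increasingly penalizes cluttered voxels at the price of the suboptimality bound of \thref{exact}.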
\subsection{Main Loop (\alg{alg2})}
\begin{algorithm}[t]
\small
\algsetup{linenosize=\small}
\caption{LTO($\mathcal{G}$, $v_{\operatorname{start}}$, $v_{\operatorname{goal}}$)}
\label{alg2}
\begin{algorithmic}[1]
\STATE $Q_o \leftarrow v_{\operatorname{start}}$, $Q_c \leftarrow \emptyset$, $\hat{g}\left(v_{\operatorname{start}}\right)=0$, $\hat{g}(v) \leftarrow \infty$ \label{line0}
\STATE $\operatorname{TrueVertex}(v)\leftarrow \text{False}, \operatorname{TrueEdge}(v)\leftarrow \text{False}$
\STATE $\operatorname{TrueVertex}(v_{\operatorname{start,goal}})\leftarrow \text{True}, \operatorname{TrueEdge}(v_{\operatorname{start}})\leftarrow \text{True}$
\WHILE{$\hat f\left(v_{\text {goal}}\right)>\min _{v \in Q_o}(\hat f(v))$}\label{line1}
\STATE $v = \argmin_{v \in Q_o}(\hat f(v)), Q_o \stackrel{-}{\leftarrow}\{v\} $\label{extract}
\IF{$v == v_{\operatorname{goal}}$ }
\RETURN ReconstructPath($v_{\operatorname{start}}$, $v_{\operatorname{goal}}$)
\ENDIF \label{line01}
\IF{$v \in Q_c$}\label{line3}
\STATE CONTINUE\label{line4}
\ELSIF{$\operatorname{TrueVertex}(v)$}\label{line5}
\IF{$\operatorname{TrueEdge}(v)$}\label{line6}
\STATE $Q_o, Q_c=\operatorname{Expansion}(\mathcal{G}, Q_o, Q_c, v)$
\ELSE\label{line22}
\STATE $\mathcal{G}, Q_o, Q_c=\operatorname{UpdateEdge}(\mathcal{G}, Q_o, Q_c, v)$
\ENDIF\label{line36}
\ELSE\label{line37}
\STATE $\mathcal{G}, Q_o, Q_c=\operatorname{UpdateVertex}(\mathcal{G}, Q_o, Q_c, v)$
\ENDIF\label{line46}
\ENDWHILE\label{line47}
\RETURN No Path Exists
\end{algorithmic}
\end{algorithm}
Lines~\ref{line0}-\ref{line01} are typical of A*. We iteratively remove the cheapest state in $Q_o$ until the goal is chosen. Lines~\ref{line3}-\ref{line4} are from LWA*, showing that a state is not expanded again if it is already expanded and continues to the next iteration of the while loop. Lines~\ref{line5}-\ref{line46} are new. $\operatorname{TrueVertex}(v)$ and $\operatorname{TrueEdge}(v)$ check if the expanded state $v$ has the true configuration and the true edge cost, respectively.
\subsection{Expansion (\alg{expansion})}
\begin{algorithm}[t]
\small
\algsetup{linenosize=\small}
\caption{$\operatorname{Expansion}(Q_o, Q_c, v)$}
\label{expansion}
\begin{algorithmic}[1]
\STATE $Q_c \stackrel{+}{\leftarrow}\{v\}$, $S=\operatorname{GetS u c c e s s o r s}(v)$\label{line7}
\FORALL{$v^{\prime} \in S$}\label{line9}
\STATE parent$\left(v^{\prime}\right)=v$\label{line10}
\IF{$\exists v^{\prime \prime} \in (Q_o \OR Q_c) \text{s.t.}$ TrueVertex($v^{\prime \prime}$) $\AND \text{Conf}(v^{\prime}) = \text{Conf}(v^{\prime \prime})$}\label{200}
\STATE $\hat g\left(v^{\prime}\right)=\hat g\left(\text {parent}\left(v^{\prime}\right)\right)+c_{TO}\left(\text {parent}\left(v^{\prime}\right), v^{\prime}\right)$
\STATE TrueVertex $(v^{\prime})=$ true \label{205}
\ELSE
\STATE $\hat g\left(v^{\prime}\right)=\hat g\left(\text {parent}\left(v^{\prime}\right)\right) + c_{x,v}\left(\text {parent}\left(v^{\prime}\right), v^{\prime}\right)$\label{201}
\ENDIF
\IF{$ \nexists v^{\prime \prime} \in Q_o$ s.t. $\operatorname{Conf}\left(v^{\prime \prime}\right)=\operatorname{Conf}\left(v^{\prime}\right) \AND$
TrueEdge$\left(v^{\prime \prime}\right) \AND \hat g\left(v^{\prime \prime}\right) \leq \hat g\left(v^{\prime}\right) \AND v^{\prime} \notin Q_c$}\label{line13}
\STATE $\hat f\left(v^{\prime}\right)=\hat g\left(v^{\prime}\right)+ \hat h\left(v^{\prime}\right)$, $Q_o \stackrel{+}{\leftarrow}\{v^{\prime}\}$\label{line16}
\ENDIF\label{line20}
\ENDFOR\label{line21}
\RETURN $Q_o, Q_c$
\end{algorithmic}
\end{algorithm}
In \alg{expansion}, the expanded state has both the true vertex and the true edge cost so that LTO puts all successors of $v$ in $Q_o$. GetSuccessors generates a copy of each neighboring state to maintain the states from different parents (line~\ref{line7}).
A copy of the same vertex originating from another parent state might already have obtained the true configuration
from an earlier TO call. Hence, LTO checks if other versions of the successor state $v^{\prime}$ have the true configuration in $Q_o, Q_c$ (line~\ref{200}). If true,
we update $\hat g(v^{\prime})$ with $c_{TO}$. Thus, $\mathcal{G}$ has an expensive cost for edges with many integer variables.
We also set TrueVertex$(v^{\prime})$ to true (line~\ref{205}). If another version of the successor state $v^{\prime}$ does not have the true configuration, we use a distance from $v$ to the voxel's edge where $v^{\prime}$ belongs as a cost of the edge to guarantee the bounded suboptimality (line~\ref{201}). LTO checks if this version of $v^{\prime}$ should be considered for maintaining in $Q_o$ (line~\ref{line13}). If there exists the state $v^{\prime \prime}$ that represents the same configuration of $v^{\prime}$ with the true edge cost and the lower $\hat g$ value, we do not maintain $v^{\prime}$.
\subsection{Edge generation (\alg{updateedge})}
In \alg{updateedge}, the state has the true vertex but does not obtain the true edge cost yet.
Let $v^x_y$ and $v^z_y$ be the vertices representing the same configuration but originating from different parents $x, z$.
Here, $e(v^{a}_b \leftrightarrow v^{c}_d)=e(v^{e}_{b} \leftrightarrow v^{f}_{d})$. Hence, we do not want to run expensive TO again to get $e(v^{a}_b \leftrightarrow v^{c}_d)$ if we already obtain $e(v^{e}_{b} \leftrightarrow v^{f}_{d})$.
On line~\ref{300}, $\operatorname{CheckSamePair}$ checks if we have already obtained the same configuration pair from different parents. If true, we reuse the same configuration pair (line~\ref{GetSamePair}). Line~\ref{edge_checking} checks if the obtained edge is feasible. If true,
we set $\operatorname{TrueEdge}(v)$ to true, get the cost (line~\ref{311}) and use it to update $\hat g(v)$ (line~\ref{312}).
We use $c$ instead of $c_{TO}$ because LTO has already figured out the true edge cost, and it does not make sense for the edge cost to remain inflated by $\omega ^{n_i}$. On lines~\ref{line30}-\ref{line33}, like line~\ref{line13} in \alg{expansion}, we insert $v$ into $Q_o$ if no states exist satisfying the if condition. If false on line~\ref{300}, LTO runs TO (line~\ref{line2300}) and performs the same actions as lines~\ref{311}-\ref{line33}.
On line~\ref{301}, GetWarmStart computes the initial guesses $w_{\operatorname{opt}}$ of each decision variable.
\subsection{Vertex Validation (\alg{updatevertex})}
Since the state does not have the true vertex, LTO runs TO and gets the true configuration of $v$. The structure of \alg{updatevertex} and \alg{updateedge} is essentially the same, but in \alg{updatevertex}, we use $c_{TO}$ as the cost to avoid the expensive edge generation (line~\ref{4000}).
\begin{algorithm}[t]
\small
\algsetup{linenosize=\small}
\caption{$\operatorname{UpdateEdge}(\mathcal{G}, Q_o, Q_c, v)$}
\label{updateedge}
\begin{algorithmic}[1]
\IF{$\operatorname{CheckSamePair}(\mathcal{G}, \operatorname{parent}(v), v)$ is False}\label{300}
\STATE $w_{\operatorname{opt}}=\operatorname{GetWarmStart}(\mathcal{G}, \operatorname{parent}(v), v)$\label{301}
\STATE $c, \mathcal{G}, \text{Edge}=\operatorname{RunTO}(\mathcal{G},\operatorname{parent}(v), v, w_{\operatorname{opt}})$\label{line2300}
\ELSE
\STATE $c, \mathcal{G}, \text{Edge} = \operatorname{GetSamePair}(\mathcal{G}, \operatorname{parent}(v), v)$\label{GetSamePair}
\ENDIF
\IF{\text{Edge is feasible}} \label{edge_checking}
\STATE TrueEdge $(v)=$ true, $c=\operatorname{CostSamePair}(\operatorname{parent}(v), v)$\label{311}
\STATE $\hat g(v)=\hat g(\text {parent}(v))+c$\label{312}
\IF{$ \nexists v^{\prime \prime} \in Q_o$ s.t. $\operatorname{Conf}\left(v^{\prime \prime}\right)=\operatorname{Conf}\left(v\right) \AND$
TrueEdge$\left(v^{\prime \prime}\right) \AND \hat g\left(v^{\prime \prime}\right) \leq \hat g\left(v\right)$}\label{line30}
\STATE $\hat f\left(v\right)=\hat g\left(v\right)+ \hat h\left(v\right)$, $Q_o \stackrel{+}{\leftarrow}\{v\}$
\label{line31}
\ENDIF\label{line33}
\ENDIF\label{line35}
\RETURN $\mathcal{G}, Q_o, Q_c$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\small
\algsetup{linenosize=\small}
\caption{$\operatorname{UpdateVertex}(\mathcal{G}, Q_o, Q_c, v)$}
\label{updatevertex}
\begin{algorithmic}[1]
\IF{$\operatorname{CheckSameVertex}(\mathcal{G}, v)$ is False}\label{4400}
\STATE $\mathcal{G}, \text{Configuration}=\operatorname{RunTO}(\mathcal{G}, v)$\label{line23}
\ELSE
\STATE $\mathcal{G}, \text{Configuration}=\operatorname{GetSameVertex}(\mathcal{G}, v)$
\ENDIF
\IF{\text{Configuration is feasible}}
\STATE TrueVertex $(v)=$ true \label{check_vertex}
\STATE $\hat g(v)=\hat g(\text {parent}(v))+c_{TO}\left(\text {parent}\left(v\right), v\right)$ \label{4000}
\IF{$ \nexists v^{\prime \prime} \in Q_o$ s.t. $\operatorname{Conf}\left(v^{\prime \prime}\right)=\operatorname{Conf}\left(v\right) \AND$
TrueEdge$\left(v^{\prime \prime}\right) \AND \hat g\left(v^{\prime \prime}\right) \leq \hat g\left(v\right)$}\label{line40}
\STATE $\hat f\left(v\right)=\hat g\left(v\right)+ \hat h\left(v\right)$, $Q_o \stackrel{+}{\leftarrow}\{v\}$\label{line311}
\ENDIF\label{line433}
\ENDIF
\RETURN $\mathcal{G}, Q_o, Q_c$
\end{algorithmic}
\end{algorithm}
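To make the reuse of duplicate configuration pairs concrete, the following Python sketch shows one way $\operatorname{CheckSamePair}$ and $\operatorname{GetSamePair}$ could be backed by a cache keyed on configuration pairs; the dictionary layout and the \texttt{run\_to} and \texttt{conf} callables are illustrative assumptions, not our actual implementation.
\begin{verbatim}
# Hypothetical memoization cache for edge TO results. Vertices from
# different parents that share the same (parent, child) configuration
# pair hit the cache instead of re-running the expensive TO.
edge_cache = {}

def update_edge_cost(parent, v, run_to, conf):
    """Return (cost, edge), reusing a cached TO result when available."""
    key = (conf(parent), conf(v))     # keyed on configurations, not ids
    if key in edge_cache:             # CheckSamePair
        return edge_cache[key]        # GetSamePair
    cost, edge = run_to(parent, v)    # expensive trajectory optimization
    edge_cache[key] = (cost, edge)
    return cost, edge
\end{verbatim}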
\section{Formal Analysis}
\subsection{Complexity}\label{CT_Proof}
We identify line~\ref{line2300} in \alg{updateedge} and line~\ref{line23} in \alg{updatevertex} as the main sources of planning time.
For planning with expensive edge generation, it makes more sense to discuss the time complexity in terms of the number of TO calls.
\begin{thm}\thlabel{micp_complexity}
Let $\mathcal{X}$ be the configuration space normalized to $[0,1]^{d}$, where $d \in \mathbb{N}$. Let $K, r$ be the number of intervals along each axis and the normalized distance calculated as $\ell_{\infty}$ (see \fig{edge}). Then, the TO-aware time complexity is $O(\left(2i+1\right)^dK^d)$ where $i = 0,1,\cdots, K$, $r = (1/K)i$.
\end{thm}
\begin{proof}
\fig{edge} shows the case where $d=2, K=4, r=1/4, i = 1$.
The total number of vertices is $K^d$ so TO for finding a vertex is called at most $K^d$ times.
For the edge generation, LTO connects two vertices if the $\ell_{\infty}$ distance between the centers of the voxels to which they belong is less than or equal to $r$. The total number of edges per vertex is $(2i+1)^d-1$ (the vertices inside the green rectangle in \fig{edge} except for the vertex (d)).
Thus, the total number of TO calls to find the edges is $((2i+1)^d-1)K^d$.
\end{proof}
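As a quick sanity check of these counts, a few lines of Python reproduce the worst-case TO-call budgets; the helper below is purely illustrative.
\begin{verbatim}
def to_call_budget(d, K, i):
    """Worst-case numbers of TO calls in [0,1]^d with K intervals per
    axis and edges allowed up to r = i/K in the l-infinity metric."""
    vertex_calls = K ** d                    # one TO per candidate vertex
    edges_per_vertex = (2 * i + 1) ** d - 1  # l-inf ball minus the vertex
    return vertex_calls, edges_per_vertex * K ** d

print(to_call_budget(d=2, K=4, i=1))  # the case of the figure: (16, 128)
\end{verbatim}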
We can even bound $K$ when TO uses MICP.
\begin{cor}\thlabel{corollcomplexity}
$K$ is bounded as:
$T_o(2^{B}K^d + 2^{NB(i+1)^d}((2i+1)^d-1))\leq T_t$
where $B$ is the maximum number of integer variables in a voxel, $T_o$ is the average solving time of convex programming on a problem domain with no integer variables and $N=0$, and $T_t$ is the acceptable running time.
\end{cor}
\begin{proof}
When finding a true vertex, the MICP solver using B\&B searches at most $2^B$ solutions and solves a regular convex program for each solution. In the worst case, it solves convex programs for all voxels and the solving time is $2^BK^dT_o$. Regarding the edge generation, the solver investigates at most $(i+1)^d$ voxels for every single edge. When finding edges, we consider an $N$-step planning problem so that the total number of integer variables when finding an edge is $NB(i+1)^d$. Because the solver runs at most $(2i+1)^d-1$ times per voxel to find an edge, the worst-case solving time for edge generation is $2^{NB(i+1)^d}((2i+1)^d-1)T_o$.
\end{proof}
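In practice, one can invert this inequality to pick the finest admissible discretization; the sketch below solves for the largest $K$ under given budgets and is our own illustration rather than part of LTO.
\begin{verbatim}
def max_K(d, i, B, N, T_o, T_t):
    """Largest K with T_o*(2**B*K**d
       + 2**(N*B*(i+1)**d)*((2*i+1)**d - 1)) <= T_t (0 if infeasible)."""
    edge_term = 2 ** (N * B * (i + 1) ** d) * ((2 * i + 1) ** d - 1)
    budget = T_t / T_o - edge_term   # time left for the vertex term
    if budget < 2 ** B:
        return 0
    return int((budget / 2 ** B) ** (1.0 / d))
\end{verbatim}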
\subsection{Optimality}
We can bound the cost of the solution as follows:
\begin{thm}\thlabel{exact}
Let $\xi_{}^{*}$ be an optimal path.
LTO returns a path $\xi$ with cost $c(\xi)\leq \alpha c(\xi_{}^{*})$ with $\alpha = (1+\omega^{M})$ where $M=NB(i+1)^d$.
\end{thm}
\begin{proof}
We need to show $\hat{g}(v) \leq \alpha g^{*}(v)$.
We use induction. At the start of planning,
$\hat{g}(v_{\operatorname{start}}) = g^{*}(v_{\operatorname{start}}) \leq \alpha g^{*}(v_{\operatorname{start}})$
so the base case holds. Next, after some iterations of \alg{alg2}, assume that $\hat{g}(v) \leq \alpha g^{*}(v)$ holds for all $v \in \xi$ so far.
Let $v_p \in \xi$ with $\hat{g}(v_p) > \alpha g^{*}(v_p)$, resulting in $\hat{f}(v_{p})>\alpha(g^{*}(v_{p}))+\hat h(v_p)$. $\hat{g}(v) \leq \alpha g^{*}(v)$ holds if no such $v_p$ exists. Here, we show that LTO will not choose such a $v_p$ on line~\ref{extract} in \alg{alg2} even if $v_p$ exists.
Case 1: A vertex has been expanded before $v_p$ along $\xi$. In this case, we must have a
$v_{a-1} \in \xi$ before $v_p$ along $\xi$ with successor $v_a$ on $Q_o$. If TrueEdge($v_a$) is true:
$\begin{aligned} \hat{g}(v_{a}) & \leq \hat{g}(v_{a-1})+c(v_{a-1}, v_{a}) \\ & \leq \alpha g^{*}(v_{a-1})+c(v_{a-1}, v_{a}) \leq \alpha g^{*}(v_{a}) \end{aligned}$\\
If TrueEdge($v_a$) is false:
$\begin{aligned} \hat{g}(v_{a}) & \leq \hat{g}(v_{a-1})+c_{TO}(v_{a-1}, v_{a}) \\
& \leq \omega^{n_i} (g^{*}(v_{a-1})+c(v_{a-1}, v_{a})) \\
& \leq \omega^M (g^{*}(v_{a-1})+c(v_{a-1}, v_{a})) = \alpha g^{*}(v_{a}) \end{aligned}$\\
Hence, the assumption $\hat{g}(v) \leq \alpha g^{*}(v)$ holds true for all iterations. Since for every vertex $\alpha g^{*}\left(v_{i}\right)+\hat h\left(v_{i}\right)\leq \alpha g^{*}\left(v_{i+1}\right)+\hat h\left(v_{i+1}\right)$ is true due to the consistency of $\hat h$,\\
$\begin{aligned} \hat{f}\left(v_{a}\right) &=\hat{g}\left(v_{a}\right)+ \hat h\left(v_{a}\right)
\leq \alpha (g^{*}(v_{a}))+\hat h(v_{a}) <\hat{f}\left(v_{p}\right)
\end{aligned}$\\
which means that $v_p$ will not be chosen.
Case 2: No expanded vertex before $v_p$ along $\xi$. In this case, $Q_o$ must contain the start vertex, to which we can apply the same argument as above, resulting in $\hat f(v_{\operatorname{start}})<\hat f(v_p)$.
Finally,
$c(\xi) = \hat g(v_{\operatorname{goal}}) \leq \alpha g^{*}(v_{\operatorname{goal}}) \leq \alpha c(\xi_{}^{*})$.
\end{proof}
\section{Numerical Experiments}
We validate LTO on two motion planning problems: free-flying robots in $\mathbb{R}^{2}$ and legged robots in $\mathbb{R}^{21}$.
We test each algorithm with ten trials.
We evaluate LTO with different numbers of voxels, with and without the warm-start. We set $r$ such that each vertex has 15 edges for the free-flying robot experiments and 20 edges for the legged robot experiments.
With the warm-start option, LTO reuses trajectories already investigated by TO if $d_{\operatorname{cost}}$ in configuration space is less than 0.1.
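A minimal Python sketch of this reuse rule follows; the cache layout and the distance function shown are illustrative assumptions on our part.
\begin{verbatim}
import math

# Hypothetical store of trajectories already investigated by TO, keyed
# by the (start, goal) configuration pair they connect.
trajectory_cache = []  # [((start_conf, goal_conf), decision_vars), ...]

def d_cost(pair_a, pair_b):
    """Configuration-space distance between two (start, goal) pairs."""
    return math.sqrt(sum((a - b) ** 2
                         for qa, qb in zip(pair_a, pair_b)
                         for a, b in zip(qa, qb)))

def get_warm_start(start_conf, goal_conf, threshold=0.1):
    """Return the decision variables of the nearest cached trajectory
    if its distance is below the threshold, otherwise None."""
    query = (start_conf, goal_conf)
    best = min(trajectory_cache,
               key=lambda item: d_cost(item[0], query), default=None)
    if best is not None and d_cost(best[0], query) < threshold:
        return best[1]
    return None
\end{verbatim}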
We also run the regular TO (i.e., no GSP is embedded), other GSPs (i.e., weighted A*, LWA*), and SBPs (i.e., PRM \cite{PRM}, lazyPRM \cite{LazyPRM}, RRT \cite{RRT}, RG-RRT \cite{RG-RRT}).
To have a fair comparison,
we incorporate TO into the GSPs and SBPs.
When they sample a node or connect nodes, they use TO to check whether the sampled configuration and the edge are feasible.
We use Gurobi \cite{gurobi} to solve MICP on Intel Core i7-8750H machine and implement all planning codes in Python.
\begin{figure}
\centering
\includegraphics[width=0.4865795\textwidth, clip]{2d_optimality_updated_2_errorbar.pdf}
\caption{The results of Section~\ref{free-fly-section}. Error bars represent a 95 \% confidence interval for a Gaussian distribution. Note that for some algorithms, the confidence intervals are very small and are not visible. Compared with TO, LTO prioritizing optimality (i.e., $\omega=0$) finds the optimal solution about nine times faster without sacrificing the solution cost much (1.2 $\%$ worse).}
\label{2d_fraction}
\end{figure}
\subsection{Free-Flying Robots}\label{free-fly-section}
We consider a free-flying robot in $\mathbb{R}^{2}$ with multiple obstacles. We define $p_{t} \in \mathbb{R}^{2}$ as the position and $v_{t} \in \mathbb{R}^{2}$ as the velocity. The state $x_t = (p_t,v_t)$ is controlled by $u_t\in \mathbb{R}^{2}$. Thus, the robot solves the following MICP from $x_{s}$ to $x_{g}$ while remaining in the safe region $\mathcal{X}_{\text {safe}}$ \cite{free-fly}:
\begin{equation}
\begin{array}{cl}
\text {minimize} & \sum_{t=0}^{N-1}\left(x_{t}-x_{g}\right)^{\top}Q\left(x_{t}-x_{g}\right)+u_{t}^{\top}u_{t}\\
\text {s.t.} & x_{t+1}=A x_{t}+B u_{t}, t=0, \ldots, N-1\\
& \left\|u_{t}\right\|_{2} \leq u_{\max }, \quad t=0, \ldots, N-1\\
& x_{\min } \leq x_{t} \leq x_{\max }, \quad t=0, \ldots, N\\
& x_{0}=x_{s}, \quad x_{N}=x_{g}\\
& x_{t} \in \mathcal{X}_{\text {safe}}, \quad t=0, \ldots, N\label{space_craft}
\end{array}
\end{equation}
The constraints $x_{t} \in \mathcal{X}_{\text {safe}}$ are represented using a big-M formulation with binary variables $z$ \cite{free-fly}.
We consider axis-aligned rectangular obstacles.
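As an illustration of how one such obstacle enters the program, here is a minimal gurobipy sketch of the big-M encoding for a single axis-aligned rectangle at a single time step; the variable bounds, obstacle extents, and the value of $M$ are placeholder choices rather than the values used in our experiments.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("bigM_obstacle_demo")
p = m.addVars(2, lb=-10.0, ub=10.0, name="p")  # position at one step
z = m.addVars(4, vtype=GRB.BINARY, name="z")   # one binary per face
M = 100.0                                      # big-M constant
x_lo, x_hi, y_lo, y_hi = 4.0, 6.0, 4.0, 6.0    # obstacle extents

# Safety means lying beyond at least one face of the rectangle.
m.addConstr(p[0] <= x_lo + M * (1 - z[0]))     # left of the obstacle
m.addConstr(p[0] >= x_hi - M * (1 - z[1]))     # right of the obstacle
m.addConstr(p[1] <= y_lo + M * (1 - z[2]))     # below the obstacle
m.addConstr(p[1] >= y_hi - M * (1 - z[3]))     # above the obstacle
m.addConstr(z.sum() >= 1)                      # at least one face holds
m.optimize()
\end{verbatim}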
We run LTO with 1000 and 2000 voxels, with $\omega=0, 100$, and with/without a warm-start. We set $N=7$ for the TO inside LTO. For TO, we use $N=70$, which is the minimum value of $N$ with which we find a solution. The numbers of continuous variables, binary variables, and constraints are 168, 3360, and 4230, respectively.
We run the SBPs and GSPs with at most 5000 samples and with different grid sizes, and show the results for the grid size that yields the optimal cost.
The solution cost versus planning time is plotted in \fig{2d_fraction}. LTO finds as good solutions as TO finds with decreased planning time.
The generated trajectories are shown in \fig{concept}. By navigating the robot to a region with fewer integer variables (i.e., fewer obstacles), LTO can generate robot trajectories quickly.
We also observe that the lower the inflation factor $\omega$ is, the more optimal the trajectory LTO generates.
With a low $\omega$, it takes more time to design the trajectory since LTO does not guide the robot to avoid the computationally expensive regions, but LTO still quickly generates the trajectory without sacrificing the solution cost much compared with the long-horizon TO.
\begin{figure}
\centering
\includegraphics[width=0.325\textwidth, clip]{time_comparions_2.pdf}
\caption{Consumed time with 2000 voxels in $\mathbb{R}^{2}$ for LTO. The larger the inflation factor $\omega$ is,
the less time LTO spends by avoiding expensive edge generation.}
\label{time_comparison}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.813\textwidth, clip]{leggedrobot_simplify_no_eps_norectangles.pdf}
\caption{The results of Section~\ref{legged_robot_result_section} in $\mathbb{R}^{21}$. The red lines in the top figure indicate the body trajectory. Error bars represent a 95 \% confidence interval for a Gaussian distribution. Note that for LTO, the confidence intervals are very small and are not visible. LTO with 6000 voxels with the warm-start finds the optimal solution about 11.3, 14.0, 1.4 times faster than TO without degrading the solution cost so much (about 0.33, 0.35, 0.01 $\%$ worse), from the left to the right environment, respectively. By increasing the inflation factor $\omega$, LTO can generate globally suboptimal trajectories faster than the TO feasible option.}
\label{legged_robot_fig}
\end{figure*}
Here, we compare the results of LTO with those of TO.
While the best planner among the LTO variants (warm-start, 2000 voxels, $\omega=0$) designs a trajectory whose cost is 1.2 \% worse than that of TO with the cost function, it decreases the computation time by about 93 \%.
TO with $N=70$ is the simplest TO with which we obtain feasible global trajectories: we manually increase $N$ until we find a feasible trajectory.
In contrast, since LTO iteratively solves short-horizon TOs,
we do not need to spend time tuning $N$ to obtain globally feasible trajectories, resulting in less offline user time.
Also, the variance of the planning time is smaller than that of TO.
Since the solver's solving time is more uncertain for large-scale optimization problems \cite{Variability}, LTO works on small-scale problems (i.e., short-horizon TO) and thus achieves a small variance in the planning time.
We also evaluate our algorithms in terms of parameters. \fig{2d_fraction} shows
that $\omega$ provides a tuning knob to trade off between the solution cost and the planning time.
\fig{time_comparison} shows the individual time cost on \alg{expansion}-\alg{updatevertex} within LTO with different $\omega$ for 2000 voxels. It shows that TO for edge generation consumes a large amount of time. It also indicates that by increasing $\omega$, LTO spends less time generating edges by navigating the robot to regions with fewer discrete variables, resulting in less total planning time.
As for the number of voxels,
LTO with 1000 voxels and $\omega=0$ designs the trajectory with a 72 \% shorter planning time and a 0.4 \% worse solution cost compared with LTO with 2000 voxels and $\omega=0$.
As a result, we may prefer the solution from LTO with 1000 voxels and $\omega=0$.
All SBP and GSP algorithms except RG-RRT spend a large amount of time. While LTO and TO have a success rate of 100 $\%$, the SBPs and GSPs have a success rate of 20 $\%$, except for RG-RRT, which has a success rate of 40 $\%$.
This is because the regular SBPs and GSPs do not consider dynamics.
\subsection{Legged Robots}\label{legged_robot_result_section}
We consider a $M$-legged robot motion planning problem. We denote the body position as $q_{t} \in \mathbb{R}^{3}$, its orientation as $\theta_{t} \in \mathbb{R}^{3}$, and toe $i$ position as $p_{it} \in \mathbb{R}^{3}$. To realize a stable locomotion, we consider the reaction force $f_{it}^{r} \in \mathbb{R}^{3}$ at foot $i$. Thus, the robot solves the following MICP \cite{MIT_envelope}:
\begin{equation}
\begin{array}{cl}
\text { minimize } & \sum_{t=0}^{N-1}\left(x_{t}-x_{g}\right)^{\top}Q\left(x_{t}-x_{g}\right)\\
\text {s.t.} & x_{\min } \leq x_{t} \leq x_{\max }, \quad t=0, \ldots, N\\
& |\Delta x_{t} | \leq \Delta x, \quad t=1, \ldots, N\\
& {p}_{it} \in \mathcal{R}_{i}({q_t}, {\theta_t}), \quad t=0, \ldots, N\\
& x_{0}=x_{s}, \quad x_{N}=x_{g}\\
& x_{t} \in \mathcal{X}_{\text {safe}}, \quad t=0, \ldots, N\\
&\sum_{i=1}^{M} f_{it}^{r}+{F}_{t o t}={0}\\
& \sum_{i=1}^{M}\left({p}_{it} \times {f}_{it}^{r}\right)+{M}_{t o t}={0}
\label{legged_robot}
\end{array}
\end{equation}
where $x_t$ contains kinematics-related decision variables $q_t, \theta_t, p_{1t}, \cdots, p_{Mt}$. Here, ${p}_{it} \in \mathcal{R}_{i}({q_t}, {\theta_t})$ shows kinematics constraints. $\sum_{i=1}^{M} f_{it}^{r}+{F}_{t o t}={0}$ and $\sum_{i=1}^{M}\left({p}_{it} \times {f}_{it}^{r}\right)+{M}_{t o t}={0}$ represent the static equilibrium of force and moment constraints, respectively. For kinematics constraints, we approximate them as linear constraints \cite{ETHNLP}.
The bilinear terms of the static equilibrium of moment can be represented as piecewise McCormick envelopes with binary variables $z$ \cite{MIT_envelope}.
In this work, we consider only $q_t$ and $p_{it}$ for planning. To consider orientation, we may utilize \cite{tyler}-\cite{multi}. We consider a six-legged robot with 3 DOF per leg, resulting in a 21 DOF planning problem.
We run LTO with 3000 and 6000 voxels, with $\omega=0, 10$, and with/without a warm-start. We set $N=7$ for the TO inside LTO. For TO, we use $N=56, 63, 70$ from the left to the right environment in \fig{legged_robot_fig}, respectively, which are the minimum values of $N$ with which we find a solution. The number of continuous variables is 20220, 23106, 25674, the number of binary variables is 32400, 14190, 6754, and the number of constraints is 74680, 55827, 48532, from the left to the right environments, respectively.
The generated trajectories and the solution cost versus planning time are shown in \fig{legged_robot_fig}. In the left and middle environments, LTO shows better performance compared with TO. In \fig{2d_fraction}, TO feasible planning finds the trajectory most quickly, but \fig{legged_robot_fig} shows that it spends more time to just find a feasible solution if the planning problem is more difficult. In the right environment, TO optimal and LTO show similar results. Since the right environment has fewer discrete variables, we do not observe the advantage of using LTO. Therefore, LTO works well in environments with many discrete decision variables like the left and the middle environments.
\section{Conclusion}
We presented LTO for high DOF robots in cluttered environments. Because LTO deeply unifies TO and GSP algorithms, it can consider the original long-horizon TO problem with a decreased planning time. We proposed a TO-aware cost function that considers the difficulty of TO.
Furthermore, LTO employs other edges in the graph as a warm-start to accelerate the planning process.
We also presented proofs of its complexity and optimality. Finally, we performed planning experiments on free-flying robot and legged robot motion planning problems, showing that LTO is faster with a small variance in planning time.
Since LTO has a small variance in planning time, we argue that it would be useful for safety-critical applications such as autonomous driving. Additionally, it consists of TO and GSP so that users can use other planning algorithms for each subcomponent according to their specifications.
\section{Existence of weak $L^{\infty}-$solutions}
\noindent Let $T >0$ be fixed. For $ \alpha \in (0, 1)$ define the space
$ L^{\infty}([0,T], C^{\alpha}(\mathbb{R}^{d})) $ as
the set of all bounded Borel functions $f : [0,T ] \times \mathbb{R}^{d} \rightarrow \mathbb{R} $ for which
\[
[f]_{\alpha,T}=\sup_{t\in [0,T]} \sup_{x\neq y\in \mathbb{R}^{d}} \frac{|f(t,x)-f(t,y)|}{|x-y|^{\alpha}}<\infty .
\]
\noindent We write the $ L^{\infty}([0,T], C^{\alpha}(\mathbb{R}^{d},\mathbb{R}^{d} )) $
for the space of all vector fields having components in $ L^{\infty}([0,T], C^{\alpha}(\mathbb{R}^{d})) $.
\noindent We shall assume that
\begin{equation}\label{con1}
b\in L^{\infty}([0,T], C^{\alpha}(\mathbb{R}^{d},\mathbb{R}^{d} )),
\end{equation}
\begin{equation}\label{con2}
\operatorname{div} b \in L^{p}([0,T] \times \mathbb{R}^{d}) \quad \text{for } p>2,
\end{equation}
\begin{equation}\label{con3}
F\in L^{1}( [0, T], L^{\infty}( \mathbb{R}^{d} \times \mathbb{R}))
\end{equation}
\noindent and
\begin{equation}\label{con4}
F \in L^{\infty}( [0, T] \times \mathbb{R}^{d}, \mathrm{Lip}( \mathbb{R})).
\end{equation}
\subsection{ Definition of weak $L^{\infty}-$solutions}
\begin{definition}\label{defisolu} We assume (\ref{con1}), (\ref{con2}), (\ref{con3}) and (\ref{con4}).
A weak $L^{\infty}-$solution of the Cauchy problem (\ref{trasportS}) is a stochastic process
$u\in L^{\infty}(\Omega\times[0, T] \times \mathbb{R}^{d})$
such that, for every test function
$\varphi \in C_{0}^{\infty}(\mathbb{R}^{d})$, the process $\int u(t,
x)\varphi(x)
dx$ has a continuous modification which is an
$\mathcal{F}_{t}$-semimartingale and satisfies
\[
\int u(t,x) \varphi(x) dx= \int f(x) \varphi(x) \ dx
\]
\[
+\int_{0}^{t} \int b(s,x) \nabla \varphi(x) u(s,x) \ dx ds
+ \int_{0}^{t} \int div \ b(s,x) \varphi(x) u(s,x) \ dx ds \
\]
\[
+ \int_{0}^{t} \int F(s,x,u) \varphi(x) \ dx ds \
+ \sum_{i=1}^{d} \int_{0}^{t} \int D_{i}\varphi(x) u(s,x) \ dx \circ
dB_{s}^{i}
\]
\end{definition}
\begin{remark} We observe that a weak $L^{\infty}-$solution in the previous Stratonovich sense satisfies
the It\^o equation
\[
\int u(t,x) \varphi(x) dx= \int f(x) \varphi(x) \ dx
\]
\[
+\int_{0}^{t} \int b(s,x) \nabla \varphi(x) u(s,x) \ dx ds
+ \int_{0}^{t} \int div \ b(s,x) \varphi(x) u(s,x) \ dx ds \
\]
\begin{equation}\label{trasportITO}
+ \int_{0}^{t} \int F(s,x,u) \varphi(x) \ dx ds \
+ \sum_{i=1}^{d} \int_{0}^{t} \int D_{i}\varphi(x) u(s,x) \ dx
dB_{s}^{i} + \frac{1}{2} \int_{0}^{t} u(s,x) \triangle \varphi(x) \ dx ds
\end{equation}
\noindent for every test function $\varphi \in C_{0}^{\infty}(\mathbb{R}^{d})$. The converse is also true.
\end{remark}
\subsection{ Existence of weak $L^{\infty}-$solutions}
\begin{lemma}\label{lemaexis} Let $ f\in
L^{\infty}(\mathbb{R}^{d})$. We assume (\ref{con1}), (\ref{con2}), (\ref{con3}) and (\ref{con4}). Then there exists a weak $L^{\infty}-$solution $u$
of the SPDE (\ref{trasportS}).
\end{lemma}
\begin{proof} {\large Step 1} Assume that $F\in L^{1}( [0, T], C_{b}^{\infty}( \mathbb{R}^{d} \times \mathbb{R}))$ and $f\in C_{b}^{\infty}( \mathbb{R}^{d})$. We take a mollifier regularization $b_{n}$ of $b$. It is known (see \cite{Chow}, chapter 1) that there exists a unique classical solution $u_{n}(t,x)$ of the SPDE (\ref{trasportS}), which, written in weak It\^o form, is
(\ref{trasportITO}) with $b_{n}$ in place of $b$. Moreover,
\[
u_{n}(t,x)=Z_{t}^{n}(x, f(Y_{t}^{n}))
\]
\noindent where $Y_{t}^{n}$ is the inverse of $X_{t}^{n}$, and $X_{t}^{n}(x)$ and $Z_{t}^{n}(x,r)$
satisfy the following equations
\begin{equation}\label{itoassp}
X_{t}^{n}= x + \int_{0}^{t} b_{n}(s,X_{s}^{n}) \ ds + B_{t},
\end{equation}
\noindent and
\begin{equation}\label{itoass2p}
Z_{t}^{n}= r + \int_{0}^{t} F(s,X_{s}^{n}(x), Z_{s}^{n} ) \ ds .
\end{equation}
\noindent According to theorem 5 of \cite{FGP2} (see also remark 8 there), we have that
\[
\lim_{n\rightarrow \infty }\mathbb{E}[ \int_{K} \sup_{t\in[0,T]}| X_{t}^{n}- X_{t}|\ dx ]=0
\]
\noindent and
\[
\lim_{n\rightarrow \infty }\mathbb{E}[ \int_{K} \sup_{t\in[0,T]}| DX_{t}^{n}- DX_{t}|\ dx ]=0
\]
\noindent for any compact set $K \subset \mathbb{R}^{d}$, where $X_{t}(x)$ verifies
\begin{equation}\label{itoass}
X_{t}= x + \int_{0}^{t} b(s,X_{s}) \ ds + B_{t}.
\end{equation}
\noindent Now, we denote
\[
u(t,x)=Z_{t}(x, f(Y_{t})),
\]
\[
Y_{t} \ \text{is the inverse of} \ X_{t},
\]
\noindent and
\begin{equation}\label{itoass2}
Z_{t}= r + \int_{0}^{t} F(s,X_{s}(x), Z_{s}) \ ds .
\end{equation}
\noindent Then, we observe that
\[
|u^{n}(t,x)- u(t,x)|\leq | f(Y_{t}) - f(Y_{t}^{n}) | + \int_{0}^{t} | F(s,X_s^{n}, Z_{s}^{n}( f(Y_{t}^{n}))) - F(s,X_s, Z_{s}( f(Y_{t}))) | \ ds
\]
\[
\leq | f(Y_{t}) - f(Y_{t}^{n}) | + C \int_{0}^{t} | Z_{s}^{n}( f(Y_{t}^{n})) - Z_{s} ( f(Y_{t})) | \ ds .
\]
From theorem 5 of \cite{FGP2} (see also remark 8 there) and the Lipschitz property of $F$, we conclude that $\lim_{n\rightarrow \infty }\mathbb{E}[ \int_{K} \sup_{t\in[0,T]}| u_{n}(t,x)- u(t,x)| \ dx ]=0$ and
$u(t,x)$ is a weak $L^{\infty}-$solution of the SPDE (\ref{trasportS}).
{\large Step 2} Assume that $F\in L^{1}( [0, T], C_{b}^{\infty}( \mathbb{R}^{d} \times \mathbb{R}))$. We take a mollifier regularization $f_{n}$ of $f$. By the last step, $u_{n}(t,x)=Z_{t}(x, f_{n}(Y_{t}))$ is a weak $L^{\infty}-$solution of the SPDE (\ref{trasportS}), which, written in weak It\^o form, is
(\ref{trasportITO}) with $f_{n}$ in place of $f$.
\noindent We have that, for any compact set $K \subset \mathbb{R}^{d}$ and $p\geq 1$,
\[
\lim_{n\rightarrow\infty} \sup_{[0,T]} \int_{K} | f_{n}(X_{t}^{-1})-f (X_{t}^{-1})|^{p} \ dx =
\]
\[
\lim_{n\rightarrow\infty } \sup_{[0,T]} \int_{X_{t}(K)} | f_{n}(x)-f (x)|^{p} \ JX_{t}(x) \ dx =0 .
\]
\noindent Then we have that
\[
\lim_{n\rightarrow\infty } \sup_{[0,T]} \int_{K} |Z_{t}(x, f_{n}(Y_{t})) -Z_{t}(x, f(Y_{t}))|^{p} \ dx =0.
\]
\noindent Thus $u(t,x)=Z_{t}(x, f(Y_{t}))$ is a weak $L^{\infty}-$solution of the SPDE (\ref{trasportS}).
{\large Step 3} We take a mollifier regularization $F_{n}$ of $F$. By step 2, we have that
$u_{n}(t,x)=Z_{t}^{n}(x, f(Y_{t}))$ is a weak $L^{\infty}-$solution of the SPDE (\ref{trasportS}), where $Z_{t}^{n}(x,r)$ satisfies the equation (\ref{itoass2}) with $F_{n}$ in place of $F$.
\noindent We observe that
\[
|Z_{t}^{n}(x,r)- Z_{t}(x, r)|\leq \int_{0}^{t} | F_n(s,X_s, Z_{s}^{n} ) - F(s,X_s, Z_{s} ) | \ ds
\]
\[
\leq \int_{0}^{t} | F_n(s,X_s, Z_{s}^{n} ) - F_n(s,X_s, Z_{s} ) | \ ds + \int_{0}^{t} | F_n(s,X_s, Z_{s}) - F(s,X_s, Z_{s} ) | \ ds
\]
\[
\leq C \int_{0}^{t} | Z_{s}^{n} - Z_{s} | \ ds + \int_{0}^{t} | F_n(s,X_s, Z_{s}) - F(s,X_s, Z_{s} ) | \ ds
\]
\noindent By the Gronwall Lemma it follows that
\[
\lim_{n\rightarrow\infty }|Z_{t}^{n}(x,r)- Z_{t}(x, r)| =0 \quad \text{uniformly in } t,\ x,\ r.
\]
\noindent Then
\[
\lim_{n\rightarrow\infty }|Z_{t}^{n}(x, f(Y_{t})) -Z_{t}(x, f(Y_{t}))| =0 \quad \text{uniformly in } t \text{ and } x .
\]
\noindent Therefore, we conclude that $u(t,x)=Z_{t}(x, f(Y_{t}))$ is a weak $L^{\infty}-$solution of the SPDE (\ref{trasportS}).
\end{proof}
\section{ Uniqueness of weak $L^{\infty}-$solutions}
\noindent In this section, we shall present a uniqueness theorem
for the SPDE (\ref{trasportS}) under conditions similar to the
linear case; see theorem 20 of \cite{FGP2}.
\noindent Let $\varphi_{n}$ be a standard mollifier. We introduce the commutator defined as
\[
\mathcal{R}_{n}(b,u)=(b\nabla ) (\varphi_{n}\ast u )- \varphi_{n}\ast((b\nabla)u) .
\]
\noindent We recall here the following version of the commutator lemma
which is at the base of our uniqueness theorem.
\begin{lemma}\label{conmuting} Let $\phi$ be a $C^1$-diffeomorphism of $\mathbb{R}^{d}$. Assume
$b \in L_{loc}^{\infty}( \mathbb{R}^{d}, \mathbb{R}^{d})$, $\operatorname{div} b \in L_{loc}^{1}(\mathbb{R}^{d})$ and
$u \in L_{loc}^{\infty}(\mathbb{R}^{d})$. Moreover, for $d>1$, assume also $J\phi^{-1}\in W_{loc}^{1,1}(\mathbb{R}^{d})$. Then for any $\rho \in C_{0}^{\infty}(\mathbb{R}^{d})$ there exists
a constant $C_{\rho}$ such that, given any $R>0$ such that $supp(\rho\circ\phi^{-1})\subset B(R)$, we have:
\begin{enumerate}
\item[a)] for $d>1$
\[
|\int \mathcal{R}_{n}(b,u)(\phi(x)) \rho(x) \ dx |
\]
\[
\leq C_{\rho} \|u\|_{ L_{R+1}^{\infty}} \ [\|div b\|_{ L_{R+1}^{1}} \|J\phi^{-1}\|_{ L_{R}^{\infty}} + \|b\|_{ L_{R+1}^{\infty}}(
\|D\phi^{-1}\|_{ L_{R}^{\infty}}+ \|DJ\phi^{-1}\|_{ L_{R}^{1}} )]
\]
\item[b)] for $d=1$
\[
|\int \mathcal{R}_{n}(b,u)(\phi(x)) \rho(x) \ dx | \leq C_{\rho} \|u\|_{ L_{R+2}^{\infty}} \| b\|_{ W_{R+2}^{1,1}} \|J\phi^{-1}\|_{ L_{R}^{\infty}}
\]
\end{enumerate}
\end{lemma}
\begin{proof}
See p.~28 of \cite{FGP2}.
\end{proof}
\noindent We are ready to prove our uniqueness result for weak
$L^{\infty}-$solutions to the Cauchy problem (\ref{trasportS}).
\begin{theorem}\label{uni} Assume (\ref{con1}), (\ref{con2}), (\ref{con3}) and (\ref{con4}). Then, for every $f\in L^{\infty}(\mathbb{R}^{d})$ there exists a unique weak $L^{\infty}-$solution of the Cauchy problem
(\ref{trasportS}).
\end{theorem}
\begin{proof}
{\large Step 1} (It\^o-Ventzel-Kunita
formula) Let $u,v$ be two weak $L^{\infty}-$solutions and $\varphi_{n}$ be a standard mollifier. We put $w=u-v$; applying the It\^o-Ventzel-Kunita formula (see Theorem 8.3 of \cite{Ku2}) to $F(y)=\int w(t,z) \varphi_{n}(y-z) \ dz $, we obtain that
\[
\int w(t,z) \varphi_{n}(X_{s}-z) dz
\]
\noindent is equal to
\[
\int_{0}^{t} \int b(s,z) \nabla [\varphi_{n}(X_{s}-z)] w(s,z) \ dz
ds + \int_{0}^{t} \int div \ b(s,z) \varphi_{n}(X_{s}-z) u(s,z) \
dz ds \ +
\]
\[
\int_{0}^{t} \int (F(s,z,u)-F(s,z,v)) \varphi_{n}(X_{s}-z)\ dz ds \
+ \sum_{i=1}^{d} \int_{0}^{t} \int
w(s,z) D_{i} [\varphi_{n}(X_{s}-z)]] dz \circ dB_{s}^{i} +
\]
\[
\int_{0}^{t} \int (b \nabla)(w(s,.)\ast \varphi_{n})(X_{s}) \
ds - \sum_{i=1}^{d} \int_{0}^{t} \int
w(s,z) D_{i}[ \varphi_{n}(X_{s}-z)] dz \circ dB_{s}^{i}.
\]
\noindent Then
\[
\int w(t,z) \varphi_{n}(X_{t}-z) dz =
\]
\[
\int_{0}^{t} \int (F(s,z,u)-F(s,z,v)) \varphi_{n}(X_{s}-z)\ dz ds \
- \int_{0}^{t} \mathcal{R}_{n}(w,b)(X_{s}(x))\ ds,
\]
\noindent where $\mathcal{R}_{n}$ is the commutator defined above.
{\large Step 2} ($\lim_{n\rightarrow \infty}\int_{0}^{t} \mathcal{R}_{n}(w,b)(X_{s}) \ ds= 0$) We argue as in \cite{FGP2}. We observe by Lemma \ref{conmuting} and the Lebesgue dominated convergence theorem that
\[
\lim_{n\rightarrow \infty}\int_{0}^{t} \int \mathcal{R}_{n}(w,b)(X_{s}) \rho(x) \ dx \ ds= 0
\]
\]
\noindent for all $\rho\in C_{0}^{\infty}(\mathbb{R}^{d})$, for details see Theorem 20 of \cite{FGP2}.
{\large Step 3} ($w=0$) We observe that
\[
\lim_{n\rightarrow \infty} (w(t,.)\ast \varphi_{n})(.) =w(t,.)
\]
\noindent where the convergence is in $L^{1}([0,T],
L^{1}_{loc}(\mathbb{R}^{d}))$. From the flow properties of $X_t$,
see theorem 5 of \cite{FGP2}, we obtain
\[
\lim_{n\rightarrow \infty} (w(t,.)\ast \varphi_{n})(X_{t}) =w(t,X_{t})
\]
\noindent and
\[
\lim_{n\rightarrow \infty} ((F(t,.,u)-F(t,.,v))\ast \varphi_{n})(X_{t})=
\]
\[
F(t,X_{t},u(t,X_{t}))-F(t,X_{t},v(t,X_{t})) ,
\]
\noindent where the convergence is $\mathbb{P}$-a.s. in $L^{1}([0,T],
L^{1}_{loc}(\mathbb{R}^{d}))$. Then by steps 1, 2 we have
\[
w(t,X_{t}) = \int_{0}^{t} F(s,X_{s},u(s,X_{s}))-F(s,X_{s},v(s,X_{s})) \ ds.
\]
\noindent Thus, for any compact set $K\subset \mathbb{R}^{d}$ we obtain that
\[
\int_{K} |w(t,X_{t})| \ dx \leq \int_{0}^{t} \int_{K} |F(s,X_{s},u(s,X_{s}))-F(s,X_{s},v(s,X_{s}))| \ dx \ ds
\]
\[
\leq C \int_{0}^{t} \int_{K} |w(s,X_{s})| \ dx \ ds,
\]
\noindent where $C$ is a constant related to the Lipschitz property of $F$. It follows that
\[
\int_{K} |w(t,X_{t})| \ dx \leq C \int_{0}^{t} \int_{K} |w(s,X_{s})| \ dx \ ds,
\]
\noindent and thus $w(t,X_{t})=0 $ by the Gronwall Lemma.
\end{proof}
\begin{remark}
We observe that the unique solution $u(t,x)$ has the
representation $u(t,x)=Z_{t}(x, f(X_{t}^{-1}))$, where $X_{t}$ and $Z_{t}$ satisfy equations
(\ref{itoass}) and (\ref{itoass2}), respectively.
\end{remark}
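To illustrate this representation numerically, the following is a minimal Euler--Maruyama sketch in dimension $d=1$, written in Python; the particular choices of $b$, $F$ and $f$ are illustrative stand-ins for the regularity assumptions. Since in one dimension the stochastic flow $x \mapsto X_{t}(x)$ preserves order, evaluating $u(t,\cdot)$ on a fixed grid reduces to interpolating the forward-flowed values, which avoids inverting the flow explicitly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
b = lambda t, x: np.sin(x)        # illustrative Hoelder drift
F = lambda t, x, u: -0.5 * u      # illustrative Lipschitz source
f = lambda x: np.tanh(x)          # bounded initial datum

T, n = 1.0, 1000
dt = T / n
x0 = np.linspace(-3.0, 3.0, 601)  # grid of starting points
X, Z = x0.copy(), f(x0)           # X_0 = x, Z_0 = r = f(x)
for k in range(n):
    t = k * dt
    dB = np.sqrt(dt) * rng.standard_normal()  # one Brownian path for all x
    Z += F(t, X, Z) * dt          # dZ = F(s, X_s(x), Z_s) ds
    X += b(t, X) * dt + dB        # dX = b(s, X_s) ds + dB_s
# Along characteristics u(T, X_T(x)) = Z_T(x, f(x)); the 1-d flow is
# monotone, so interpolation recovers u(T, .) on the original grid.
u_T = np.interp(x0, X, Z)
\end{verbatim}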
\begin{remark} We mention that other variants of Theorem \ref{uni} can be proved. In fact, step 2 is valid under other hypotheses; see corollary 23 of \cite{FGP2}.
\end{remark}
\begin{remark} We recall that relevant examples of non-uniqueness for the deterministic linear transport equation
are presented in \cite{FGP2} and \cite{F1}. Currently we do not have a counter-example for the non-linear case. An interesting direction for future work
is to study whether the nonlinear case may induce new pathologies.
\end{remark}
\section{Introduction}
It is well established that most massive galaxies host in their centre black holes with masses above $10^6 \mathrm{M_{\odot}}$, also known as supermassive black holes (SMBHs) \citep{Magorrian1998, Kormendy2013}. Historically, the first strong indications of their presence were active galactic nuclei (AGNs), for which the high luminosity and electromagnetic spectrum were explainable only by the accretion of matter by supermassive compact objects \citep[][]{Lynden-Bell1969}.
In recent years, observers utilising the Event Horizon Telescope have even been able to produce the first direct image of the shadow of the $\approx6 \times 10^9 \mathrm{M_{\odot}}$ SMBH at the centre of the Messier 87 galaxy \citep{EHT2019} as well as Sagittarius A*, the $\approx 4\times10^6\mathrm{M_{\odot}}$ SMBH in the centre of the Milky Way \citep{EHT2022}.
Similarly, the existence of stellar mass BHs $M\mathrm{_{BH}}<100\mathrm{M_{\odot}}$, the end product of massive star evolution, has been confirmed by several X-ray and optical observations \citep{Casares2014, ElBadry2022, Remillard2006} and more recently even by gravitational wave (GW) detections \citep{Abbott2016, Abbott2017, Abbott2021}.
On the other hand, black holes with masses between $10^2 \mathrm{M_{\odot}} \lesssim M\mathrm{_{BH}} \lesssim 10^6 \mathrm{M_{\odot}}$, referred to as intermediate-mass BHs (IMBHs), remain more elusive.
Over the last decades, several observations seem to indicate the presence of IMBHs in galactic nuclei. More than thirty years ago, \citet{Kunth1987}
pointed out an AGN in the dwarf galaxy POX 52, which might be powered by a $\sim 10^5 \mathrm{M_{\odot}}$ BH \citep{Barth2004}.
Similarly, a $10^4-10^5 \mathrm{M_{\odot}}$ BH could be responsible for the
Seyfert activity detected in the nuclear star cluster of the dwarf spiral galaxy NGC 4395 \citep{Filippenko1989, Shih2003, Peterson2005, denBrok2015}.
Many objects with similar masses have been found in several subsequent surveys \citep[see][and references therein]{Greene2020, Reines2022}. Recently, \citet{Gultekin2022} showed that at least five sources located in low-mass galaxies (with stellar masses $<3\times 10^9 \mathrm{M_{\odot}}$) are consistent with active BHs with masses between $10^{4.9}\mathrm{M_{\odot}}<M\mathrm{_{BH}}<10^{6.1}\mathrm{M_{\odot}}$. The presence of such BHs might not be limited to the centres of galactic nuclei; a few of the most massive globular clusters might also be hosting objects of similar masses \citep{Farrell2014, Pechetti2022}. Recently, a tidal disruption event (TDE) was discovered \citep{Lin2018} in a dense stellar object of mass $\sim 10^7 \mathrm{M_{\odot}}$, likely either a large globular cluster or a tidally stripped satellite nucleus. X-ray continuum fitting indicates that the TDE was powered by an IMBH of mass $\sim 10^4 \mathrm{M_{\odot}}$ \citep{Wen2021}.
The physical processes leading to the formation of these BHs are still debated in the literature \citep[see][for a comprehensive review]{Volonteri2021}. One of the most promising scenarios for growing such massive BHs is the collisional scenario. Low-mass BHs (a few tens of solar masses) located in dense stellar environments could grow in mass through stellar and compact object mergers \citep[see, for example][]{Stone2017}. Gravitational wave observations could have already revealed the tip of the iceberg of this process. The LIGO-Virgo-KAGRA interferometers have thus far detected about ninety BH coalescences, a few of which generated BHs with masses above $\gtrsim 100 \mathrm{M_{\odot}}$ \citep{Abbott2021}. Being the most massive members of many star clusters, stellar-mass BHs are expected to sink in the very centre of the host stellar system. There, if the density is high enough, a chain of hierarchical mergers could produce BHs with masses around $10^2 \mathrm{M_{\odot}} - 10^4 \mathrm{M_{\odot}}$ \citep{Atallah2022, Arca-Sedda2021, Fragione2022, Mapelli2021, Rizzuto2021} or even $10^5 \mathrm{M_{\odot}}$ according to \citet{Antonini2019}. The main obstacle to this hierarchical formation path is recoil caused by the anisotropic emission of gravitational radiation. The final product of compact object mergers is expected to receive GW-induced velocity kicks that range from a few tens up to $5000$ km/s depending on the mass ratio and the spins of the merging bodies \citep{Campanelli2007, Lousto2010, Lousto2019}. Systems with low escape velocities, such as globular clusters, are unlikely to retain the final product of repeated BH mergers \citep{Arca-Sedda2021, Gerosa2019}. Only dense stellar environments with an escape velocity in excess of $v_{\mathrm{esc}} \gtrsim 100$ km/s have a non-negligible chance to grow IMBHs through hierarchical collisions \citep{Gerosa2019, Mapelli2021}. Nuclear star clusters are the only class of stellar systems that fulfill this condition \citep{Neumayer2020}.
In star cluster environments, stellar BHs coexist with stars, which provide an alternative merger channel for BH mass growth, especially in young compact clusters which are populated by many massive stars. Direct $N\mathrm{-body} $ simulations presented in \citet{Zwart2004} showed that young dense low-metallicity star clusters, if compact enough, can trigger repeated stellar mergers and generate very massive stars (VMSs) with masses up to $\sim 1000 \mathrm{M_{\odot}}$,
which collapse directly into massive BHs without losing mass in the process.
Recent direct $N\mathrm{-body} $ simulations evolve similar systems using up-to-date stellar evolution recipes and confirm the collisional formation scenario of VMSs \citep{DiCarlo2021, Rizzuto2022}. However, they also indicate that such a process can lead at most to BHs with masses of a few $100 \mathrm{M_{\odot}}$ in agreement with the latest Monte-Carlo simulations \citep{Kremer2020, Gonzalez2022}. The evolution of VMSs is highly uncertain. \citet{Glebbeek2009} argues that the final product of massive stellar mergers is affected by enhanced mass loss that prevents VMSs from generating BH more massive than $10^2 \mathrm{M_{\odot}}$.
Young massive star clusters may have another channel for massive BHs to grow: mergers between BHs and massive stars.
Such events have been reported in both direct $N\mathrm{-body} $ simulations \citep{Mapelli2016, Rizzuto2021, Rizzuto2022} and Monte-Carlo simulations \citep{Giersz2015}.
In \citet{Rizzuto2021} we show that BHs up to $\sim 3\times 10^2 \mathrm{M_{\odot}}$ can form even when the VMS direct collapse channel is suppressed\footnote{With the stellar evolution recipes adopted in \citet{Rizzuto2021} VMSs above $>10^2 \mathrm{M_{\odot}}$ form BHs lighter than $30 \mathrm{M_{\odot}}$.} as long as stellar BHs can accrete a significant fraction of the stellar material when colliding with a massive star.
Young massive star clusters can generate massive BHs only during the early stage of their evolution when they are still dense, and massive stars are still present in the system. As soon as stars undergo supernova explosions, the clusters lose a large fraction of their central mass and subsequently experience a rapid expansion. The most massive stars terminate their life, collapsing into stellar black holes.
As these BHs are the heaviest objects left in the system, they segregate in the cluster core and form a compact subsystem.
It is well established in the literature that the BH subsystem prevents the cluster core-collapse, drives stars away from the centre, and can even remove them from the cluster \citep{Merritt2004, Breen2013}.
At the same time, also BHs can be dynamically ejected during repeated few-body close encounters. Because of these interactions, over time, star clusters can lose most of their BHs.
Which of the two components, BHs or stars, will dissolve first depends on whether the cluster is tidally filling\footnote{A cluster is tidally filling when it contains stars all the way to its tidal radius, otherwise it is tidally underfilling. Typically compact and dense systems are tidally underfilling while more dilute clusters with a moderate initial central density are tidally filling.} at the beginning or during its evolution \citep{Giersz2019}.
In tidally filling clusters, stars escape faster than BHs. These systems evolve toward a state in which the BHs constitute the majority of their mass.
The low-density globular cluster Palomar 5 has most likely entered this phase since direct $N\mathrm{-body} $ simulations suggest that about $20$ per cent of its mass is in the form of BHs \citep{Gieles2021}.
On the other hand, in tidally underfilling clusters, the BH component evaporates first, leaving behind only a few black holes immersed in a sea of low-mass stars. An example of such a system is the $\sim 10^5 \mathrm{M_{\odot}}$ globular cluster NGC 6397,
which hosts no more than a dozen BHs in its core \citep{Weatherford2020}.
The presence of an IMBH would further catalyse the removal of the BH subcluster, as simulations show that the massive compact object is likely to rapidly expel most BHs and BH binaries during few-body interactions \citep{Giersz2015}.
In this case the star cluster would quickly evolve into a system consisting of low-mass stars enclosing a central massive BH. During this stage the BH can slowly consume the surrounding stars through tidal disruption events or tidal capture events (TCEs). The former occurs when the BH is close enough to a star to exert a tidal pull that rips it apart (left panel of Fig. \ref{fig:1_sketch_TDE_TCE}).
The latter case occurs when an unbound star is sufficiently near to a BH to experience a strong enough tidal perturbation to become bound.
The tidal force deforms the nearby star, so part of its initial orbital energy is transferred to its internal energy. This mechanism can place initially unbound stars onto bound orbits. When this happens, the star is said to be ``captured'' by the BH (see the right panel of Fig. \ref{fig:1_sketch_TDE_TCE} for an illustration of a TCE). After being captured, the star can feed the compact object in various ways, such as partial disruption events at each pericentre passage \citep{Kremer2022}, mass transfer events, or even dynamically induced mergers.
TCEs were highlighted for the first time in \citet{Fabian1975};
\citet{Press1977} provided the first mathematical description of these events, later on, refined by \citet{Lee1986}.
Since the tidal capture mechanism has a larger cross-section than the tidal disruption process, it is likely to impact BH mass growth in dense stellar environments significantly. The analytical work led by \citet{Stone2017} provided, for the first time, a comprehensive study on the critical role played by TCEs and TDEs in forming massive BHs in galactic nuclei.
Their work showed that a low mass BH located in the centre of a dense stellar environment can reach, within a Hubble time, up to $10^6 \mathrm{M_{\odot}}$ through tidal disruptions and tidal capture runaway collisions. According to their calculation, the rapid growth can be triggered only in clusters with a core density of $n_{\mathrm{c}}>10^7/$pc$^3$ and a core velocity dispersion of $\sigma_{\mathrm{c}} > 40$ km/s.
In this work, we explore the tidal collisional runaway scenario using direct $N\mathrm{-body} $ simulations. Therefore, we evolve star clusters initialised according to the density and velocity dispersion criteria given in \citet{Stone2017}, and study the dynamical mechanisms responsible for the growth of massive BHs.
In Section \ref{sec:Mathods}, we describe in detail the numerical integrator and the tidal interaction prescriptions adopted for our investigation. In Section \ref{sec:InitialConditions}, we discuss the initial conditions of our clusters. In Section \ref{sec:Results} we present the results of our simulations, and in Section \ref{sec:Conclusions} we offer a summary of our main results.
\section{Methods} \label{sec:Mathods}
In order to investigate the growth of massive BHs in compact stellar environments through TDEs and TCEs, we run five direct $N\mathrm{-body} $ simulations of dense star clusters consisting of a central BH surrounded by low-mass stars. We evolve the systems for at least $40$ Myr using the direct $N\mathrm{-body} $ integrator \textsc{bifrost} \citep{Rantala2022}, which we describe in detail in the following Subsection.
We include in \textsc{bifrost} prescriptions for modelling the effect of tidal interactions during close encounters using a drag force as described in Subsection \ref{subsec:drag force}.
\subsection{The \textit{N}-body code}
In this study we used the novel GPU-accelerated direct-summation $N\mathrm{-body} $ simulation code \textsc{bifrost} \citep{Rantala2022} which is based on the earlier FROST code \citep{Rantala2021}. The code uses a hierarchical implementation (HHS-FSI, \citealt{Rantala2021}) of the fourth-order forward symplectic integrator (FSI, e.g. \citealt{Chin1997,Chin2005,Dehnen2017}).
In addition to the Newtonian accelerations, FSI uses additional so-called gradient accelerations to cancel second-order error terms with strictly positive integrator sub-steps. This is different to the widely used Yoshida-type symplectic integrators \citep{Yoshida1990} which always contain negative sub-steps in integrator orders higher than two. For the Kepler problem it has been shown that fourth-order symplectic integrators with strictly positive sub-steps outperform the integrators that include negative sub-steps \citep{Chin2007}. Compared to the common block time-step scheme widely used in $N\mathrm{-body} $ simulations \citep{Aarseth2003}, HHS-FSI is manifestly momentum-conserving due to pair-wise acceleration calculations and synchronized kick operations. In addition, rapidly evolving parts of the simulated systems decouple from slowly evolving parts in the hierarchical integration, making the approach especially efficient for $N\mathrm{-body} $ systems with a large dynamical range \citep{Pelupessy2012}. \\
Besides the forward integrator, \textsc{bifrost} includes regularised \citep{Rantala2020} and secular integration techniques for subsystems, i.e. binaries, triple systems and small clusters around massive black holes. The equations of motion of the particles in the subsystems are post-Newtonian (PN) up to order PN3.5 using the formulas of \citet{Thorne1985,Blanchet2006}. For binary systems the post-Newtonian equations of motion enable relativistic precession effects due to conservative even PN terms, as well as orbit circularisation and inspiral due to radiation-reaction forces due to gravitational wave emission. \textsc{bifrost} can evolve massive star clusters with high numerical accuracy containing up to a few million stars with an arbitrary fraction of primordial binaries \citep{Rantala2022}, a feature which only few current codes share \citep{Wang2020}. \\
The basic \textsc{bifrost} version has four prescriptions for mergers. First, two particles are merged if their gravitational-wave inspiral timescale becomes shorter than their current time-steps. We also merge particles if their mutual separation is shorter than the relativistic innermost stable circular orbit. \textsc{bifrost} also includes a simple prescription for tidal disruption events which is significantly extended for this study as described in the next sections. Finally, two stars are merged if they directly collide, i.e. their radii overlap. Table \ref{table:bifrost_parameters} reports the main \textsc{bifrost} code parameters used in our runs.
\begin{table}
\begin{tabular}{l l l}
\hline
\textsc{bifrost} user-given parameter & symbol & value\\
\hline
forward integrator time-step factor & $\eta_\mathrm{ff}$, $\eta_\mathrm{fb}$, $\eta_\mathrm{\nabla}$ & $0.2$\\
subsystem neighbour radius & $r_\mathrm{rgb}$ & $0.008$ pc\\
regularization GBS tolerance & $\eta_\mathrm{GBS}$ & $10^{-10}$\\
regularization GBS end-time tolerance & $\eta_\mathrm{endtime}$ & $10^{-4}$\\
regularization highest PN order & & PN3.5\\
secular integration threshold & $N_\mathrm{orb,sec}$ & $10$\\
secular highest PN order & & PN2.5\\
\hline
\end{tabular}
\caption{The main user-given code \textsc{bifrost} parameters used in the simulation runs in this study. The parameter definitions correspond to the ones in \citet{Rantala2022}.}
\label{table:bifrost_parameters}
\end{table}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/2_TCE_15_star.pdf}
\caption{{\it Left panel}: Evolution of an initially unbound orbit of a tidally affected $1.5 \mathrm{M_{\odot}}$ star around a $50 \mathrm{M_{\odot}}$ black hole. Due to tidal energy losses (implemented in the simulation as a drag force), the star loses orbital energy and its orbit circularises. {\it Right panel}: Time evolution (in log-scale) of the orbital eccentricity (black) and the pericentre distance (solid red line) of the star. In the final stages the orbit circularises and the pericentre distance shrinks. Once the pericentre becomes smaller than the tidal radius (red dot-dashed line) the star is disrupted in a TDE.}
\label{fig:2_TCE_15_star}
\end{figure*}
\subsection{Prescription for tidal disruption and mass accretion}
In \textsc{bifrost}, a BH destroys a star when $r$, the distance between the two interacting objects, is smaller than the tidal radius $R_{\mathrm{t}}$.
To compute $R_{\mathrm{t}}$ we adopt the criterion given in \citet{Kochanek1992}:
\begin{equation}
R_{\mathrm{t}}=1.3R_*\left(\frac{M\mathrm{_{BH}} + m_*}{2m_*}\right)^{1/3},
\end{equation}
where $R_*$ and $m_*$ are the radius and the mass of the destroyed star, respectively, while $M\mathrm{_{BH}}$ is the BH mass.
To ensure the code does not miss any TDEs, we decrease the time steps of each interaction with pericentre $R_{\mathrm{p}}< 3 R_{\mathrm{t}}$, so that the code checks the condition $r < R_{\mathrm{t}}$ exactly at the pericentre passage. With this restriction, the regularised integrator is less efficient, but fortunately these close interactions are infrequent, so they have a negligible impact on the overall performance of the code.
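For reference, the criterion amounts to a one-line function; in the Python sketch below, the solar-type star around a $50 \mathrm{M_{\odot}}$ BH is an arbitrary illustrative choice.
\begin{verbatim}
def tidal_radius(m_star, r_star, m_bh):
    """Tidal disruption radius following Kochanek (1992).

    m_star, m_bh : masses of the star and the BH (same units)
    r_star       : stellar radius (sets the unit of the result)
    """
    return 1.3 * r_star * ((m_bh + m_star) / (2.0 * m_star)) ** (1.0 / 3.0)

R_SUN_PC = 2.25e-8  # solar radius in parsec
print(tidal_radius(1.0, R_SUN_PC, 50.0))  # ~8.6e-8 pc for a 50 Msun BH
\end{verbatim}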
After a TDE, a fraction of the stellar material is expected to be ejected at high velocity, while the remaining mass remains bound to the BH and eventually forms an accretion disk around it. Recent hydrodynamic simulations of tidal disruption events between stellar BHs and main-sequence stars show that the fraction of stellar material bound to the BH could be close to unity \citep{Kremer2022}. However, these simulations do not model the accretion phase in which
a significant fraction of the initially bound gas may get unbound, especially if the accretion occurs at a super-Eddington rate \citep{Ayal2000, Bonnerot2020, Dai2018, Metzger2016, Steinberg2022, Toyouchi2021}.
In general, many aspects of the accretion phase are poorly constrained. For example, it is not very well understood
how much stellar mass ends up in the BH and how much is lost, and the rate at which the accretion should occur is still debated in the literature.
In this work we follow the simple estimate
in \citet{Rees1988} and assume that in a tidal disruption event, $50$ per cent of the stellar mass is accreted by the BH instantly\footnote{During these instantaneous accretion events we assume that linear momentum is conserved.}, and the other $50$ per cent is instantly removed from the cluster.
We discuss how this simplification affects our results in Subsection \ref{sub:mass_growth}.
\subsection{Tidal interaction energy loss}
When a star of mass $m_*$ moves in an orbit with a pericentre slightly larger than the tidal radius $R_{\mathrm{p}} \gtrsim R_{\mathrm{t}}$, tidal forces are not strong enough to rip the star apart, but they can still deform the star triggering internal oscillations.
Consequently, a fraction of the orbital energy of the star is deposited into its internal energy. In other words, tidal interactions force stars to deviate from their original orbits: bound eccentric orbits become more bound and circular, and unbound parabolic or hyperbolic orbits become bound.
To estimate the energy lost during a parabolic encounter, \citet{Press1977} approximated the oscillation amplitudes of the perturbed star through an expansion in spherical harmonics. With their procedure, the fraction of orbital energy deposited into the star is:
\begin{equation}\label{eq_press_teukolsky_1977}
\Delta E = \int_{-\infty}^{+\infty}\frac{\mathrm{d}E}{\mathrm{d}t}\mathrm{d}t \approx \frac{Gm_*^2}{R_*}\left(\frac{M\mathrm{_{BH}}}{m_*} \right)^2 \sum_{l=2}^{\infty} \left(\frac{R_*}{R_{\mathrm{p}}} \right)^{2l+2}T_l(\eta).
\end{equation}
Here $T_l$ are dimensionless coefficients that depend on the internal structure of the deformed object. They are functions of the parameter $\eta$ defined as:
\begin{equation}
\eta \coloneq \left(\frac{m_*}{m_*+M\mathrm{_{BH}}} \right)^{1/2}\left(\frac{R_{\mathrm{p}}}{R_*}\right)^{3/2}.
\end{equation}
As follows from its definition, $\eta$ indicates the duration of the pericentre passage with respect to the hydrodynamical timescale of the star.
In practical applications, only $l=2$ and $l=3$ are taken into account, since higher-order terms give a negligible contribution to the final value of $\Delta E$. The values of $T_2$ and $T_3$ have been calculated explicitly for objects with polytropic indices $n=1.5, 2$ and $n=3$ \citep[see][]{LeeandOstriker1986, Ray1987}. Based on these values, \citet{PortegiesZwart1993} provide
fitting functions to rapidly estimate $T_2$ and $T_3$ during close encounters in $N\mathrm{-body} $ simulations.
The formula shown in Eq. \ref{eq_press_teukolsky_1977}, has been derived exclusively for parabolic encounters ($e=1$).
To extend this formulation for eccentricities larger or smaller than $1$, we utilise the prescription given in appendix A of \citet{Mardling2001}, which provides a generalisation of Eq. \ref{eq_press_teukolsky_1977}
by introducing a new expression for $\eta$ (indicated with $\zeta$) that depends explicitly on the eccentricity of the orbit:
\begin{equation}\label{eq:zeta}
\zeta \coloneq \eta \left( \frac{2}{1+e} \right)^{\alpha(\eta)/2}
\end{equation}
where $\alpha(\eta)=1 + \frac{1}{2}\left|\frac{\eta -2}{2}\right|$, therefore for $e=1$ we restore the original formulation. Numerical tests show that using $T_l(\zeta)$ instead of $T_l(\eta)$, the tidal evolution of hyperbolic orbits ($e\gtrsim1$) is consistent with a more accurate model for tidal interactions presented in \citet{Mardling1995}. In addition, $T_l(\zeta)$ should give more reliable results for bound orbits with $e\gtrsim0.5$
\citep[see][]{Mardling2001}.
To estimate the tidal energy loss during a BH - star close encounter in our simulations we first compute $\zeta$ using Eq.
\ref{eq:zeta} and then evaluate $T_2(\zeta)$ and $T_3(\zeta)$ using the interpolation formulas given in \citet{PortegiesZwart1993}.
Finally, we use Eq. \ref{eq_press_teukolsky_1977} to estimate the energy loss in tidal interactions.
For orbits with $e<0.5$, we assume the tides switch from the dynamical to the equilibrium regime. The amount of tidal energy dissipation, therefore, decreases with decreasing eccentricity. To take this effect into account, we multiply
the energy dissipation $\Delta E$ by the eccentricity $e$ ($\Delta E \rightarrow e\times \Delta E$), similarly to what was done by \citet{Baumgardt2006}.
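The full estimate can be condensed into a short Python sketch; the function \texttt{T\_l} stands in for the interpolation formulas of \citet{PortegiesZwart1993}, whose coefficients we do not reproduce here, so its return values below are placeholders only.
\begin{verbatim}
G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def T_l(l, zeta, n_poly):
    """Placeholder for the Portegies Zwart & Meinen (1993) fits of the
    dimensionless coupling coefficients; NOT the real fitted values."""
    return {2: 0.5, 3: 0.05}[l]

def tidal_energy_loss(m_star, r_star, m_bh, r_peri, ecc, n_poly):
    """Energy deposited into the star during one pericentre passage,
    following Press & Teukolsky (1977) with the eccentricity
    correction of Mardling & Aarseth (2001)."""
    eta = (m_star / (m_star + m_bh)) ** 0.5 * (r_peri / r_star) ** 1.5
    alpha = 1.0 + 0.5 * abs(eta - 2.0) / 2.0
    zeta = eta * (2.0 / (1.0 + ecc)) ** (alpha / 2.0)
    dE = (G * m_star**2 / r_star) * (m_bh / m_star) ** 2 * sum(
        (r_star / r_peri) ** (2 * l + 2) * T_l(l, zeta, n_poly)
        for l in (2, 3))
    return ecc * dE if ecc < 0.5 else dE  # equilibrium-tide regime
\end{verbatim}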
\subsection{Modelling tidal interactions with a drag force}
\label{subsec:drag force}
In the previous subsection, we described the procedures adopted to estimate $\Delta E$, the amount of energy dissipated during tidal interactions. There remains the problem of how to use $\Delta E$ to
change the orbit of the two closely interacting objects.
The standard approach assumes that all the energy $\Delta E$ is instantly emitted at each pericentre passage. Then, under the assumption that the angular momentum is conserved, one can derive the new eccentricity and semi-major axis from the present ones \citep{Mardling2001} and update the orbit accordingly. The entire procedure must be implemented outside the regularised integrator. In practice, to resolve the trajectory, the integrator must evolve the orbit to the pericentre, apply the orbital change due to tidal energy loss, then continue the evolution to the next pericentre passage.
This procedure is relatively straightforward to implement in isolated two-body encounters, but it becomes very cumbersome when three or more objects are involved in the interaction.
Integrating the effects of tidal interactions directly in the regularisation would be significantly more convenient.
For this reason, we decided to model tidal interactions using a drag force following \citet{Samsing2018}:
\begin{equation}\label{eq:dragforce4}
\vec{F}_{\text{t}}(r,v) = -C \frac{\vec{v}}{r^4}
\end{equation}
where $r$ and $v$ are the relative separation and relative velocity of the two tidally interacting bodies, respectively, while $C$ is a normalisation factor:
\begin{equation}
\int_{\mathrm{orb}} F_t(r,v) dr = \Delta E.
\end{equation}
Here $\Delta E$ is the same quantity computed in the instant emission treatment (Eq. \ref{eq_press_teukolsky_1977}). With this definition, $C$ ensures that the two methods dissipate the same amount of energy per orbit.
The force expression of Eq. \ref{eq:dragforce4} closely resembles the formula of the dissipative PN 2.5 radiation-reaction force.
For this reason, Eq. \ref{eq:dragforce4} can be included with little effort in the \textsc{bifrost} regularised integrator alongside the PN terms.
The strong dependence of $F_{\text{t}}$ on $r$, the distance of the two interacting bodies, ensures that tidal interactions are effectively activated during very close encounters and are negligible at large distances.
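As a simple numerical illustration (not the \textsc{bifrost} implementation itself), the Python sketch below integrates the relative motion of a bound star--BH pair subject to the drag force of Eq. \ref{eq:dragforce4} with a fixed-step kick-drift-kick scheme; units are natural ($G=M_{\mathrm{tot}}=1$) and the constant $C$ is an arbitrary demonstration value, whereas in the simulations $C$ is renormalised orbit by orbit.
\begin{verbatim}
import numpy as np

C_DRAG = 1e-8  # demonstration value; in practice C is set per orbit

def acceleration(x, v):
    """Relative acceleration: Newtonian two-body term plus the
    tidal drag F_t = -C v / r^4 acting on the relative motion."""
    r = np.linalg.norm(x)
    return -x / r**3 - C_DRAG * v / r**4

dt, n_steps = 1e-4, 500_000
x = np.array([1.0, 0.0])   # start at apocentre
v = np.array([0.0, 0.3])   # bound orbit with e ~ 0.9
for _ in range(n_steps):   # kick-drift-kick (the velocity-dependent
    v += 0.5 * dt * acceleration(x, v)   # drag makes this approximate)
    x += dt * v
    v += 0.5 * dt * acceleration(x, v)
E = 0.5 * v @ v - 1.0 / np.linalg.norm(x)
print(E)  # more negative than the initial -0.955: the orbit decayed
\end{verbatim}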
\begin{figure*}
\includegraphics[width=\textwidth]{figures/3_TCE_05_star.pdf}
\caption{{\it Left panel}: Evolution of an initially unbound orbit of a tidally affected $0.5 \mathrm{M_{\odot}} $ star around a $50 \mathrm{M_{\odot}} $ black hole. The evolution is similar to the $1.5 \mathrm{M_{\odot}} $ star but on shorter timescales. {\it Right panel}: Time evolution (in linear-scale) of the orbital eccentricity (black) and the pericentre distance (solid red line) of the star. In the final stages the orbit circularises and the pericentre distance shrinks. Once the pericentre becomes smaller than the tidal radius (red dot-dashed line) the star is disrupted in a TDE.}
\label{fig:3_TCE_05_star}
\end{figure*}
\subsection{Tidal interactions drag force and orbital evolution}
\label{sub:orbit under drag force}
The instant emission prescription for tidal interactions assumes conservation of angular momentum; therefore the pericentre increases until the orbit circularises \citep[see][for more details]{Mardling2001}.
On the contrary, the tidal interaction drag force does not conserve angular momentum. Therefore, with this method the pericentre of tidally captured stars decreases over time until $R_{\mathrm{p}} < R_{\mathrm{t}}$. In other words, in the absence of external perturbers, the tidal interaction drag force drives every TCE into a TDE. The time required to bring a captured star to tidal disruption depends on the star's initial pericentre, initial eccentricity and internal structure. In this work, we model low-mass stars ($m_*<0.7 \mathrm{M_{\odot}}$) with the polytropic index $n=1.5$ and massive stars ($m_*> 0.7 \mathrm{M_{\odot}}$) with polytropic index $n=3.0$. Consequently, massive stars are more compact and less affected by tidal perturbations than low-mass stars. Fig. \ref{fig:2_TCE_15_star} shows the orbital evolution (left panel) of a $1.5 \mathrm{M_{\odot}}$ star captured by a $50 \mathrm{M_{\odot}}$ BH. The star initially moves in a quasi-parabolic orbit with $e=1.0001$ and $R_{\mathrm{p}}=1.5 R_{\mathrm{t}}$. In the beginning, the eccentricity decreases while the pericentre stays constant. In the last stage of the evolution, $R_{\mathrm{p}}$ drops rapidly and reaches the tidal radius. It takes about $100$ yr for the star to be destroyed (see right panel of Fig. \ref{fig:2_TCE_15_star}). The $0.5 \mathrm{M_{\odot}}$ star shown in Fig. \ref{fig:3_TCE_05_star} experiences a similar evolution. However, the effect of the drag force is much stronger when acting on the $0.5 \mathrm{M_{\odot}}$ star. Consequently, the low-mass star spirals towards the BH in just 0.6 years.
As shown in Figs. \ref{fig:2_TCE_15_star} and \ref{fig:3_TCE_05_star}, the stars, after being captured, undergo an initial phase of circularisation (the eccentricity drops) while the pericentre remains unchanged. Only in the last phase of the evolution does the pericentre begin to decrease.
We will use this feature of the drag force to identify all the TCEs that occurred in our simulations (see Subsection \ref{sub:direct TDE}).
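This qualitative behaviour (circularisation first, pericentre decay only at the end) can be reproduced with a toy integration such as the Python sketch below. This is not the \textsc{bifrost} implementation: the drag kernel, its strength \texttt{k} and the exponent \texttt{gamma} are illustrative placeholders, tuned by hand so that the drag remains weak even at pericentre.
\begin{verbatim}
import numpy as np

def accel(x, v, k=1e-11, gamma=6):
    # Keplerian attraction (G = M = 1 units) plus a steep,
    # pericentre-activated drag; kernel and constants are placeholders.
    r = np.linalg.norm(x)
    return -x / r**3 - k * v / r**gamma

x, v, dt = np.array([2.0, 0.0]), np.array([0.0, 0.1]), 1e-4
for _ in range(1_000_000):          # crude kick-drift-kick integration
    v += 0.5 * dt * accel(x, v)
    x += dt * v
    v += 0.5 * dt * accel(x, v)
# Monitoring the osculating eccentricity and pericentre along the
# integration shows e dropping first, followed by a decay of R_p.
\end{verbatim}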
To summarise, in our simulations, we model the effect of tidal interactions using the drag force described by Eq. \ref{eq:dragforce4}. With such a prescription, every TCE will produce a TDE within $\lesssim 10^3$ yr.
It must be said that the outcome of a TCE is still debated. In this paper, we assume that every tidal capture leads rapidly to tidal disruption. However, this may not be true. After being captured, a star that is not affected by external perturbations follows a bound but very eccentric orbit.
If internal mechanisms (such as viscosity) of the star are adequately efficient to rapidly dissipate the oscillatory motion of the star, at the second pericentre passage, the star will experience again a strong tidal interaction that leads to orbital energy loss and consequently to the shrinking of the semi-major axis. The process repeats until the star gets destroyed and swallowed by the BH, or the two bodies circularise. If the spin-orbit coupling is inefficient, orbital angular momentum is conserved, and the star circularises around the BH. If the spin-orbit interaction is efficient, a fraction of orbital angular momentum can be transferred efficiently into the stellar spin.
Another thing to consider is the possible expansion of the stars after the pericentre passage due to the gain in internal energy. In this scenario, the radius of the star $R_*$ will increase, resulting in an increased tidal radius: the star will be tidally disrupted after a few pericentre passages.
Another possible scenario is described in \citet{Mardling2001}, where they argue that if the oscillations induced by tidal interaction are not damped efficiently, TCEs might lead to chaotic evolution. In fact, the pericentre passages that follow the capture can further excite but also damp oscillation modes in the deformed star. In other words, the orbit and the star can randomly exchange energy packages in both directions. The resulting trajectories of the object are unpredictable and resemble a random walk.
Recent hydrodynamical simulations indicate that captured stars that experience partial tidal disruptions are completely destroyed after a few pericentre passages \citep{Kremer2022}. Unfortunately, the simulation sample is too small to constrain the fate of tidally captured stars extensively.
\section{Initial Conditions}
\label{sec:InitialConditions}
\begin{table}
\begin{tabular}{cccccc}
\hline
Name & $M\mathrm{_{BH}}$ & $R_{\mathrm{h}}$ & $r_{\mathrm{c}}$ & $n_{\mathrm{c}}$ & $\sigma_{\mathrm{c}}$ \\
- & $\mathrm{M_{\odot}}$ & pc & pc & $\#$ stars / pc$^3$ & km/s \\
\hline
R04M300 & 300 & 0.4 & 5.4e-03 & 1.4e+09 & 42.3 \\
R06M300 & 300 & 0.6 & 7.5e-03 & 1.7e+08 & 32.9 \\
R08M300 & 300 & 0.8 & 1.1e-02 & 6.8e+07 & 28.5 \\
R06M50 & 50 & 0.6 & 1.7e-02 & 5.5e+07 & 26.3 \\
R06M2000 & 2000 & 0.6 & 3.3e-03 & 1.7e+09 & 90.4 \\
\hline
\end{tabular}
\caption{Initial cluster and BH properties: Name: name of the model; $M\mathrm{_{BH}}$: initial BH mass; $R_{\mathrm{h}}$: initial half-mass radius; $r_{\mathrm{c}}$: initial core radius; $n_{\mathrm{c}}$: initial central particle density; $\sigma_{\mathrm{c}}$: initial central velocity dispersion. }
\label{table:initial_conditions}
\end{table}
\begin{table*}
\begin{tabular}{cccccccc}
\hline
Name & t & $M\mathrm{_{BH}}$ & N$_{\text{TDE}}$ & $R_{\mathrm{h}}$ & $r_{\mathrm{c}}$ & $\rho_\mathrm{c}$ & $\sigma_{\mathrm{c}}$ \\
- & Myr & $\mathrm{M_{\odot}}$ & - & pc & pc & $\mathrm{M_{\odot}}$/pc$^3$ & km/s \\
\hline
R04M300 & 41.2 & 1152 & 1239 & 0.5 & 2.0e-02 & 8.8e+06 & 33.3 \\
R06M300 & 148.0 & 1329 & 1487 & 0.9 & 6.6e-02 & 8.5e+05 & 23.6 \\
R08M300 & 41.1 & 686 & 532 & 0.9 & 5.3e-02 & 1.6e+06 & 21.4 \\
R06M50 & 41.9 & 556 & 665 & 0.7 & 2.9e-02 & 5.0e+06 & 25.1 \\
R06M2000 & 41.7 & 2844 & 1342 & 0.8 & 1.6e-02 & 1.3e+07 & 51.3 \\
\hline
\end{tabular}
\caption{Cluster and BH properties at the end of each simulation: Name: name of the model; t: simulation run time; $M\mathrm{_{BH}}$: final BH mass; N$_{\text{TDE}}$: total number of TDEs; $R_{\mathrm{h}}$: final half-mass radius; $r_{\mathrm{c}}$: final core radius; $\rho_{\mathrm{c}}$: final central density; $\sigma_{\mathrm{c}}$: final central velocity dispersion. }
\label{table:final}
\end{table*}
We generated the initial conditions for five very dense star clusters with $256000$ single stars and no primordial binaries using the code \textsc{mcluster} \citep{mcluster}. The masses of the stars are sampled from a \citet{Kroupa2001} initial mass function with a range from $0.08 \mathrm{M_{\odot}} $ up to $2.00 \mathrm{M_{\odot}}$\footnote{Choosing the most massive star to be $2 \mathrm{M_{\odot}}$ allows us to neglect the effects of stellar evolution and focus only on the dynamical evolution of the systems.}.
At the centre of each cluster we placed a single BH with initial mass equal to $50 \mathrm{M_{\odot}}$, $300 \mathrm{M_{\odot}}$ or $2000 \mathrm{M_{\odot}}$. The five models are initialised with primordial mass segregation and at the starting time they follow a \citet{King1966} density profile with W$_0=9$; three of them have an initial half-mass radius of $R_{\mathrm{h}}=0.6$ pc, while the other two have half-mass radii of $0.4$ pc and $0.8$ pc, respectively.
From now on, we will refer to the five simulations using the labels reported in Table \ref{table:initial_conditions}\footnote{The core radius reported in the table is computed using $r\mathrm{_c}=\sqrt{(\sum_i \rho_i^2 r_i^2)/(\sum_i \rho_i^2)}$, where $\rho_i$ is the local density around the $i$-particle and $r_i$ is its distance from the centre.}.
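For reference, the core radius of the previous footnote can be computed from a snapshot as in the sketch below. The $k$-nearest-neighbour density estimate (\texttt{k}, \texttt{cKDTree}) is an assumption of this sketch; any local density estimator can be substituted for $\rho_i$.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def core_radius(pos, centre, k=6):
    # Density-weighted core radius of the footnote:
    # r_c = sqrt(sum(rho_i^2 r_i^2) / sum(rho_i^2)).
    r = np.linalg.norm(pos - centre, axis=1)
    d_k = cKDTree(pos).query(pos, k=k + 1)[0][:, -1]  # k-th neighbour
    rho = (k - 1) / (4.0 / 3.0 * np.pi * d_k**3)      # assumed estimator
    w = rho**2
    return np.sqrt(np.sum(w * r**2) / np.sum(w))
\end{verbatim}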
Thus initialised, the systems have very high core densities (from $\sim 6 \times 10^7$ particles per pc$^{3}$ for R06M50 up to $\sim 2 \times 10^9$ particles per pc$^{3}$ for R06M2000) and high central velocity dispersions (ranging from $30$ km/s up to $90$ km/s), and they might resemble the conditions inside the cores of nuclear star clusters.
The initial conditions we chose for our clusters match the criteria indicated by \citet{Stone2017} to trigger a tidal capture runaway collision. As discussed in the introduction, that work indicates that stellar environments with $n_{\mathrm{c}}> 10^7$ stars per pc$^3$ and a central velocity dispersion $\sigma_{\mathrm{c}}>40$ km/s are expected to trigger tidal capture runaway collisions.
\section{Results}
\label{sec:Results}
All five models maintain a significant tidal disruption rate $\dot{N}_{\mathrm{TDE}}$ throughout the simulation. The final number of TDEs depends strongly on the initial size ($R_{\mathrm{h}}$) of the cluster.
For instance, R04M300, the most compact model, registers a total of 1239 TDEs in about 40 Myr. On the other hand, in the same evolution time, the least compact model, R08M300 records less than half the TDEs (see Table \ref{table:final}).
The primary mechanism that brings stars close enough to the BH to be destroyed or captured can be broken down into two parts.
First, dynamical interactions in the vicinity of the BH build up a cloud of bound orbits, which accompany the BH until the end of the simulation.
Once these bound orbits are formed, since they are located in the inner part of the cluster, they are constantly perturbed by many gravitational scattering events. As a consequence of these interactions, the bound orbits become more bound and very eccentric ($e\sim 1$) over time, resulting in tidal capture or disruption.
In simple words, the BH first rapidly forms a bound cloud in its vicinity and later slowly feeds on it over time.
We provide a comprehensive description of the main properties of the bound cloud in the different simulations in Subsections \ref{sub:bound_cloud} and \ref{sub:Bahcall Wolf}.
The bound cloud does not only influence the BH mass growth, but it also plays an important role in the evolution of the cluster core
as we discuss in Subsection \ref{sub:cluster core evolution}.
In Subsection \ref{sub:TCE and TDE}, we show that the BHs grow primarily through direct TDEs\footnote{Stars are destroyed at first pericentre passage.}. However, TCEs also contribute to the BH mass growth, as they trigger about $10$ per cent of the total TDEs.
We study how the TDE rate changes over time in Subsection \ref{sub:TDE rate}. There we provide fitting formulas and an analytical approximation to estimate
$\dot{N}_{\mathrm{TDE}}$, which also helps to understand quantitatively the dynamical processes that drive stars to disruption.
Finally, in Subsection \ref{sub:mass_growth}, we briefly discuss the phase of BH gas accretion after a star is tidally destroyed. Moreover, we investigate how the BH mass growth
changes varying the fraction of stellar mass that ends up into the BH after a TDE.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/7_binary_fraction.pdf}
\caption{Time evolution of the fraction of stars $(f_{\mathrm{b}})$ within the BH influence radius ($R_{\mathrm{inf}}$) which are bound to the central BH.
Models R04M300, R06M300, and R08M300 start with $\sim25$ per cent of bound stars, and they maintain this fraction until the end of the simulations. R06M2000 starts with a fraction of $\sim50$ per cent that drops to $\sim30$ per cent.
A bound cloud of stars in R06M50 forms within the first 5 Myr reaching $\sim20$ per cent after $\sim 10$ Myr.}
\label{fig:7_binary_fraction}
\end{figure}
\subsection{Formation and evolution of the bound cloud}\label{sub:bound_cloud}
Our simulations reveal that, in the vicinity of the BHs, inside a region enclosed within the influence radius $R_{\mathrm{inf}}$\footnote{The influence radius $R_{\mathrm{inf}}$ defines a sphere centred on the BH that encompasses a number of stars with total mass equal to $2M\mathrm{_{BH}}$.}, a significant fraction of the stars is bound to the BH i.e. they have negative orbital energy $E = \frac{m_*v^2}{2} -\frac{Gm_*M\mathrm{_{BH}}}{r}<0$, where $m_*$ is the mass of the star. In Fig. \ref{fig:7_binary_fraction}, we plot, as a function of time, $f_{\mathrm{b}}$ the fraction of stars within the influence radius bound to the BH\footnote{By definition $f_{\mathrm{b}}=\frac{N_{\mathrm{b}}}{N_{\mathrm{inf}}}$, where $N_{\mathrm{b}}$ and $N_{\mathrm{inf}}$ are respectively the number of bound stars and the total number of stars within the influence radius.}.
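Operationally, $f_{\mathrm{b}}$ can be extracted from a snapshot as in the following sketch; the determination of $R_{\mathrm{inf}}$ follows the definition in the footnote, and units (the constant \texttt{G}) are left generic.
\begin{verbatim}
import numpy as np

def bound_fraction(m, pos, vel, m_bh, pos_bh, vel_bh, G=1.0):
    # R_inf: radius enclosing a stellar mass of 2 M_BH (see footnote).
    r = np.linalg.norm(pos - pos_bh, axis=1)
    order = np.argsort(r)
    enclosed = np.cumsum(m[order])
    r_inf = r[order][np.searchsorted(enclosed, 2.0 * m_bh)]
    inside = r < r_inf
    # Two-body orbital energy of each star with respect to the BH.
    v2 = np.sum((vel[inside] - vel_bh)**2, axis=1)
    E = 0.5 * m[inside] * v2 - G * m[inside] * m_bh / r[inside]
    return np.mean(E < 0.0)       # f_b = N_b / N_inf
\end{verbatim}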
All the models that started with a $M\mathrm{_{BH}} = 300 \mathrm{M_{\odot}} $ BH display a stable bound fraction approximately equal to $\approx0.2$. Fig. \ref{fig:7_binary_fraction} shows that this value stays constant for at least $40$ Myr. Since we evolved R06M300 for $150\,\mathrm{Myr}$, we know that the bound fraction is very likely to remain constant for a significantly longer time (see Fig. \ref{fig:Appendix_R06M300} in Appendix \ref{apx:bound_cloud}).
On the other hand, model R06M2000 initially has a fraction of bound stars close to $50$ per cent, which drops rapidly in the first $\sim 10$ Myr and stabilises then at $30$ per cent, as indicated by the dark blue line in Fig. \ref{fig:7_binary_fraction}.
In contrast to the other models, R06M50 starts its evolution without bound orbits around the BH. However, the bound cloud forms rapidly in the first $\sim 4$ Myr (orange line in Fig. \ref{fig:7_binary_fraction}). At the end of this build-up phase, the fraction of bound stars levels out at about $20$ per cent, a value similar to the one observed in R04M300, R06M300 and R08M300.
\begin{figure*}
\includegraphics[width=0.98\textwidth]{figures/4_density_profile_fit.pdf}
\caption{Total stellar number density profiles (thin green line) for the model R06M300 with a 300 $\mathrm{M_{\odot}} $ central BH at 5 Myr (left), 45 Myr (middle) and 100 Myr (right). The power-law fits indicate shallow profiles with slopes of $\beta \sim 1.08 - 1.2$. The number density profiles of stars bound to the central black hole (bold green lines) are significantly steeper with power-law exponents of $\beta \sim 1.71 - 1.84$ (orange dot-dashed lines). These slopes are consistent with the \citet{Bahcall1976} $\beta = 7/4$ density profile (see Fig. \ref{fig:5_BW_comparison} for the time evolution of the power-law indices).
}
\label{fig:4_density_profile_fit}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.98\textwidth]{figures/5_BW_comparison.pdf}
\caption{Time evolution of the power-law index $\beta$ for the stellar number density (see Fig. \ref{fig:4_density_profile_fit}) of all stars (orange) and stars bound to the central BH (blue). The central BH mass increases from $50 \mathrm{M_{\odot}} $ (R06M50, left) to 300 $\mathrm{M_{\odot}} $ (R06M300, middle) and 2000 $\mathrm{M_{\odot}} $ (R06M2000, right). The total density profile slope increases with increasing initial BH mass. It varies from $\beta \sim 1.0$ for R06M50 (left panel) to $\beta \sim 1.4$ for R06M2000 (right panel). The slope of the bound stars is independent of the black hole mass and mostly consistent with a Bahcall-Wolf cusp ($\beta \sim 1.75$). This result also holds for all other models and on longer timescales (see Appendix \ref{apx:bound_cloud}). The uncertainty on the fitted slope, indicated in the plots with shaded regions, is estimated using one hundred consecutive simulation snapshots.}
\label{fig:5_BW_comparison}
\end{figure*}
\subsection{Stellar distribution around the central black hole}
\label{sub:Bahcall Wolf}
When a single-mass stellar system with a massive central BH evolves for more than a relaxation time, gravitational interactions drive the stellar distribution in the vicinity of the BH to form a cusp with a particle density profile: $n \propto r^{-1.75}$, where $r$ is the distance from the BH and $n$ is the stellar particle density.
This cusp, known as a Bahcall-Wolf (BW) cusp, was predicted for a stellar system with equal mass stars by \citet{Bahcall1976}
and thereafter it was generalised for systems with unequal mass stars \citep{Bahcall1977, Alexander2009}.
In particular, \citet{Alexander2009} show that heavy stars, when they are relatively common, tend to follow the original BW profile with slope $\beta=1.75$, while lighter stars form a shallower cusp with $1.5\leq \beta \leq 1.75$.
Direct $N\mathrm{-body} $ simulations of star clusters with a central SMBH confirm that stars very close to the massive object follow a BW cusp and indicate that the cusp extends for a tenth of the influence radius $\sim 0.1 R_{\mathrm{inf}}$ \citep{Preto2004}.
Our runs show somewhat different results. The entire stellar population neighbouring the BH follows a significantly shallower distribution than the BW cusp. However, the distribution of the bound stars surrounding the BH is very much consistent with a BW profile that extends for about an influence radius, $r<R_{\mathrm{inf}}$.
To visualise the radial disposition of the stars around the BH in our simulations, we plot the radial stellar density for R06M300 at three different times, displayed with the green thin line in Fig. \ref{fig:4_density_profile_fit}.
Within the influence radius, this line is well fitted by a power-law $n \propto r^{-\beta}$ with a slope that oscillates around $\beta\sim1.1$ (see middle panel of Fig. \ref{fig:5_BW_comparison}). On the other hand, the distribution of the bound component (green thick line in Fig. \ref{fig:4_density_profile_fit}) is remarkably similar to the Bahcall-Wolf density profile. The relaxation timescale at $R_{\mathrm{inf}}$ for R06M300 is very short: gravitational scattering processes are expected to form a Bahcall-Wolf cusp within $\lesssim 5$ Myr\footnote{Here the relaxation timescale is estimated accounting for both bound and unbound stars.}. However, only the bound orbits are affected by relaxation processes as they stay in the vicinity of the BH for a prolonged time. \\
This result holds for R06M50 and R06M300, as shown in the left and the central panels of Fig. \ref{fig:5_BW_comparison}, which illustrate that the power-law index $\beta$ for the bound cloud (in blue) oscillates around the Bahcall-Wolf predicted value throughout the simulation.
Although the bound clouds in R06M50 and R06M300 consist of unequal mass stars, they are mainly composed of the heaviest stars in the cluster (due to primordial mass segregation), which, as predicted by \citet{Alexander2009}, follow a BW density profile.
This is a robust result that does not depend on the initial half-mass radius of the cluster (see Fig. \ref{fig:Appendix_BW}). \\
On the other hand, the power-law index $\beta$ of the R06M2000 bound cloud is systematically lower than $1.75$ throughout the simulation, as shown in the right panel of Fig. \ref{fig:5_BW_comparison}. R06M2000, having a larger influence radius than R06M50 and R06M300, also has a more extended bound cloud containing many more low-mass stars. The latter tend to form a density profile shallower than the BW cusp, as indicated by \citet{Alexander2009}.
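The power-law indices shown in Fig. \ref{fig:5_BW_comparison} can be measured with a straightforward log--log fit; the binning in the sketch below is an illustrative choice.
\begin{verbatim}
import numpy as np

def density_slope(r, r_min, r_max, n_bins=20):
    # Fit n(r) ~ r**(-beta) between r_min and r_max (e.g. r_max = R_inf)
    # from the star-BH distances r, using logarithmic radial bins.
    edges = np.logspace(np.log10(r_min), np.log10(r_max), n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    n = counts / shells
    mid = np.sqrt(edges[1:] * edges[:-1])
    ok = n > 0
    slope, _ = np.polyfit(np.log10(mid[ok]), np.log10(n[ok]), 1)
    return -slope   # beta ~ 1.75 for a Bahcall-Wolf cusp
\end{verbatim}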
\subsection{Evolution of the cluster core}
\label{sub:cluster core evolution}
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/6_gravothermal_oscillations.pdf}
\caption{Time evolution of the central density (red) for the star cluster model R04 without a central black hole. The system rapidly undergoes core collapse and develops gravothermal oscillations driven by dynamically forming binary stars. The same cluster model with a central black hole of 300 $\mathrm{M_{\odot}} $ (R04M300, blue) does not undergo core collapse but expands gradually. The prevention of core collapse and the steady expansion are driven by the already existing population of "binaries" which the central black hole forms with many stars.}
\label{fig:6_gravothermal_oscillations}
\end{figure}
The BHs in our runs constantly occupy the very centre of the cluster. Therefore,
their mass growth is heavily influenced by the cluster core properties (such as the density $\rho_\mathrm{c}$ and the velocity dispersion $\sigma_{\mathrm{c}}$) and their evolution over time.
At the same time, the BHs, exerting violent dynamical interactions in their surroundings, are expected to affect substantially the evolution of the cluster inner region.
To understand how the central compact object impacts the evolution of the cluster core, we reran model R04M300 without a BH for 15 Myr.
Fig. \ref{fig:6_gravothermal_oscillations} compares the evolution of the R04M300 core density with and without a BH.
R04M300 without a BH (red line) undergoes gravothermal oscillations\footnote{ Gravothermal oscillations are well understood and have been studied extensively theoretically.
The first work that revealed such oscillations goes back to \citet{Sugimoto1983}. For a comprehensive overview of this topic see \citet[][]{Heggie1993}.}: its core collapses and expands several times.
Consequently, the central density oscillates between $10^8$ and $10^{11}$ particles per pc$^{3}$. The collapse phase is driven by two-body gravitational interactions, which randomly scatter particles into the centre, causing the core to contract \citep{Spitzer1987}. When the core is dense enough, close three-body interactions can efficiently form hard binaries, which release energy into the core, forcing it to expand. The expansion phase ends when the binaries escape the centre due to violent dynamical interactions.
Two-body interactions force the core to re-collapse as soon as all the binaries are ejected away from the centre, and the cycle restarts. Conversely, the core in R04M300 with a BH (blue line in Fig. \ref{fig:6_gravothermal_oscillations}) experiences a monotonic expansion. In this simulation, stars near the BH quickly bind to the compact object and form BH-star binaries (many stars form a binary with the central BH), which cause the core to expand. Due to the high BH mass, BH-star binaries are unlikely to be ejected from the core. Instead, they can provide a steady energy flow to the centre until the end of the cluster evolution.
In other words, the central BH and the closely bound stars act as a dynamical heat source and force the core to expand steadily \citep[as already shown in previous works, e.g.][]{Shapiro1977, Heggie2007}.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/8_core_expansion.pdf}
\caption{Time evolution of the central number density (top panel) and core radius (bottom panel) for the models as indicated in the legend. The initially very dense systems expand rapidly and the central density drops by almost two orders of magnitude. Stars bound to the central black holes drive the expansion. This effect prevents the expected core collapse in the absence of a central black hole (see Fig. \ref{fig:6_gravothermal_oscillations}). For the low mass (50 $\mathrm{M_{\odot}}$) black hole simulation (orange) the onset of core collapse is seen but is terminated once a bound nuclear subsystem has formed after a few million years (see Fig. \ref{fig:7_binary_fraction}).}
\label{fig:8_core_expansion}
\end{figure}
Fig. \ref{fig:8_core_expansion} illustrates that R06M2000, R06M300 and R08M300 register early core expansion similar to R04M300. The expansion in these models starts right at the beginning of the simulation. On the other hand, in the first few million years, the core in R06M50 contracts (see orange line in Fig. \ref{fig:8_core_expansion}). The contraction halts after $\sim 4$ Myr, and the core expands until the simulation ends. The $50 \mathrm{M_{\odot}} $ BH in R06M50 cannot immediately reverse the core contraction because, at $t=0$ Myr, it does not have a bound cloud. The bound subsystem builds up in about $4$ Myr (orange line in Fig. \ref{fig:7_binary_fraction}) right before the core starts to expand (orange line in Fig. \ref{fig:8_core_expansion}).
This fact further indicates that the bound stars around the BH are responsible for the dynamical heating of the cluster centre.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/9_circ_and_direct_TDE.pdf}
\caption{The number of tidal disruption events for the first 40 Myr of the simulations. The direct disruption events dominate in all simulations (shaded regions) and the circularised TDEs are sub-dominant. As explained in the text, circularised TDEs correspond to TCEs. They contribute between $9$ and $15$ per cent depending on the model. For comparison we show the R06M50 simulation without a tidal drag force (blue), for which the number of TDEs is reduced by about $10$ per cent. More massive central black holes (R06M2000) as well as more compact clusters (R04M300) increase the number of disruption events.}
\label{fig:9_circ_and_direct_TDE}
\end{figure}
\begin{figure*}
\includegraphics[width=0.98\textwidth]{figures/10_ecc_distribution.pdf}
\caption{The eccentricity distribution of the stellar orbits right before tidal disruption for the M300 models with varying initial half-mass radii (left panel) and the R06 models with varying central black hole masses (right panel). For all models most disruption orbits are very radial with high eccentricities $e \gtrsim 0.99$ (these events are indicated in the plot as direct TDEs). In a few cases, the disruption orbit is even hyperbolic (unbound) with $e > 1.0$ (not shown here). About $\sim 10$ per cent of the stars have been tidally captured and have undergone partial or total circularisation by drag forces before disruption (these events are indicated as circularised TDEs and correspond, as explained in the text, to TCEs). The size of the star cluster and the mass of the central black hole have only a weak impact on the eccentricity distribution (see Section \ref{subsec:drag force}). }
\label{fig:10_ecc_distribution}
\end{figure*}
\subsection{Direct tidal disruption events and tidal capture events} \label{sub:direct TDE}
\label{sub:TCE and TDE}
Overall, the early expansion of the core registered in all our models does not prevent the BHs from experiencing a large number of TDEs. Fig. \ref{fig:9_circ_and_direct_TDE} shows that after only 40 Myr two of our models (R04M300 and R06M2000) register $\gtrsim1200$ TDEs, and even our least dense system, R08M300, records $\gtrsim 500$ TDEs in the same period of time.
We are interested in understanding how many of these TDEs were induced by TCEs.
Unfortunately, as we have modelled tidal interactions with a drag force directly integrated into the equations of motion, it is not possible to record TCEs directly.
Nevertheless, we can deduce indirectly how many of the TDEs were triggered by TCEs by looking at the eccentricity of a star right before its disruption.
With the prescription adopted in this study, every captured orbit undergoes a phase of circularisation followed by a phase of pericentre shrinking (see Subsection \ref{sub:orbit under drag force}).
Thus, a tidally captured star always experiences partial or total circularisation before being tidally disrupted. On the contrary, direct TDEs tend to have eccentricities very close to unity.
The two panels in Fig. \ref{fig:10_ecc_distribution} display the eccentricity distribution of all TDEs. The distributions are bimodal, with a broad peak at $\log(1-e) \lesssim -2.0$ and a narrow peak that extends between $\log(1-e) \sim -1.0$ and $\log(1-e) \sim 0.0$. Such plots provide a clear-cut separation between direct TDEs ($e \sim 1.0$) and circularised TDEs ($e \lesssim 0.9$).
In principle, dynamical interactions could decrease the eccentricity of some direct TDEs. However, in practice, this never occurs. Only the tidal interaction drag force can significantly impact the eccentricity evolution of these events.
For instance, when we deactivate the tidal interaction drag force, the model R06M50 registers only direct TDEs (no events with $e \lesssim 0.9$; see the blue line in Fig. \ref{fig:10_ecc_distribution}). With the drag force, R06M50 records 87 circularised TDEs.
In summary, we can estimate the number of TCEs by counting the number of circularised TDEs ($e \lesssim 0.9$).
Fig. \ref{fig:9_circ_and_direct_TDE} indicates that the BHs feed mainly on direct TDEs. However,
TCEs also have a relevant impact on the total number of TDEs: they trigger between $10$ and $15$ per cent of the TDEs. Their contribution to the BH mass growth is even more substantial. Contrary to a direct TDE, in a TCE the stellar material is tightly bound to the BH. Hence, the BH can absorb a larger fraction of the stellar material. We discuss this in more detail in Subsection \ref{sub:mass_growth}.
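In practice, the separation is a single cut on the eccentricity right before disruption; a minimal sketch, using the threshold $e=0.9$ adopted in the text:
\begin{verbatim}
import numpy as np

def classify_tdes(ecc, e_cut=0.9):
    # Split disruption events using the eccentricity right before
    # disruption: e ~ 1 -> direct TDE; e <= e_cut -> circularised TDE,
    # i.e. a tidal capture event (TCE).
    ecc = np.asarray(ecc)
    n_direct = int(np.sum(ecc > e_cut))
    n_circularised = int(np.sum(ecc <= e_cut))
    return n_direct, n_circularised
\end{verbatim}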
\begin{figure*}
\includegraphics[width=0.98\textwidth]{figures/11_star_mass.pdf}
\caption{{\it Left panel}: distribution of stellar masses $m_*$ tidally disrupted (TDE) by the central BH for models R04M300, R06M300 and R08M300 (same initial BH mass and increasing half mass radii). Due to mass segregation the most massive stars in the simulations are more likely to be disrupted. Stars above $\sim 2$ $\mathrm{M_{\odot}}$ have formed in stellar mergers. {\it Right panel}: distribution of stellar masses $m_*$ tidally disrupted by the central BH for models R06M50, R06M300 and R06M2000 (same initial half mass radius and increasing black hole masses). The distribution of R06M2000 looks markedly flatter and does not peak at $\sim 1.7$ $\mathrm{M_{\odot}}$. This model has about three times more TDEs than the other models and the most massive stars are consumed in the first few million years. Thereafter increasingly lower mass stars are disrupted by the central black hole.}
\label{fig:11_star_mass}
\end{figure*}
\subsection{Tidal disruption rate as a function of time}
\label{sub:TDE rate}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{figures/13_TC_orbit_elemetnts.pdf}
\caption{Semi-major axis vs. eccentricity about $10$ kyr before a star is disrupted by the central black hole for models R06M50, R06M300, and R06M2000 (from top left clockwise). The R06M50 model without a drag force is also shown for comparison at top right. The colored circles reported in each panel represent all the stars that merged with the BH. The orbital elements are computed before they could have been affected by tidal energy loss (here indicated with the blue area). More than $90$ per cent of the colliding stars were moving on a bound orbit.}
\label{fig:13_TC_orbit_elemetnts}
\end{figure*}
Unsurprisingly, in our simulations, the average mass of the tidally disrupted stars is significantly higher than the mean stellar mass;
it varies from $m_*\sim 1.3$ $\mathrm{M_{\odot}}$ up to $m_*\sim 1.6$ $\mathrm{M_{\odot}}$ depending on the initial conditions.
In fact, due to mass segregation, the BH preferentially devours the most massive stars in the system.
The mass distribution of these stars is shown in Fig. \ref{fig:11_star_mass}.
For four of our models, the distribution peaks at $2 \mathrm{M_{\odot}}$, corresponding to the most massive stars of the clusters.
R06M2000 is the only exception; it has a rather flat mass distribution. The $2000 \mathrm{M_{\odot}}$ BH destroys stars between $\sim 0.5 \mathrm{M_{\odot}}$ and $2.0 \mathrm{M_{\odot}}$ with roughly equal probability.
This exception can be explained by looking at Fig. \ref{fig:13_TC_orbit_elemetnts}, which shows that most TDEs originate from bound orbits.
Most of the stars that contribute to the growth of the BH come from the bound cloud.
In other words, the BH feeds mainly on the bound subsystem. Due to its large BH mass, R06M2000 is the model with the most extended bound cloud, since bound clouds extend for about an influence radius ($r<R_{\mathrm{inf}}$) (see Subsection \ref{sub:bound_cloud}). Therefore, compared to the other models, its bound subsystem encompasses many more low-mass stars. To recap, the reservoir of stars that sustains the growth of the BH coincides mostly with the bound subsystem. Since R06M2000 has the most extended bound cloud, it has a higher probability of consuming lighter stars.
\begin{figure*}
\includegraphics[width=0.98\textwidth]{figures/15_r_power_law.pdf}
\caption{Cumulative number of TDEs fitted using Eq. \ref{eq:NTDE} for all five models. After calibrating the constant factor $F$ for each model, Eq. \ref{eq:NTDE} predicts very accurately the number of TDEs as a function of time. The histogram in the bottom right panel reports the distribution of the $F$-values, as well as the mean value of $F$ $\pm$ its standard deviation. The evolution of the cumulative number of TDEs is displayed up to 40 Myr for all models, except for the R06M300 model, which we evolved for $150\,\mathrm{Myr}$. }
\label{fig:15_r_power_law}
\end{figure*}
As we have already mentioned, the bound cloud plays a key role in driving most of the TDEs. Equally important is the role played by the unbound component of the cluster, which, first of all, acts as a reservoir for the bound subsystem, ensuring that the fraction of bound stars is constant over time (see Fig. \ref{fig:7_binary_fraction}).
In addition, unbound stars constantly scatter with the bound orbits, and through complex dynamical interactions, they also trigger TDEs and TCEs.
Note that orbits in the bound cloud are typically too wide to be affected by the tidal drag force: most of the orbits are beyond the drag force region of influence (blue area in Fig. \ref{fig:13_TC_orbit_elemetnts}). Gravitational scattering events are necessary to lead stars close enough to be tidally captured or tidally destroyed. In other words, the two-body relaxation process is indispensable for driving the bound cloud stars to tidal disruption.
The time scale at which orbits near the BH change their energy and angular momentum is consistent with the non-resonant relaxation time scale $t\mathrm{_{rel}}$. Since we include post-Newtonian corrections in the equation of motion of the simulation particles, resonant effects are suppressed in agreement with what previous studies have found \citep{Merritt2011, Brem2013}.
From these considerations it follows that we can estimate the TDE rate using:
\begin{equation}
\label{eq:Ndot_1}
\dot{N}_{\mathrm{TDE}} = F\frac{N_{b}}{t\mathrm{_{rel}}}
\end{equation}
where $N_{b}$ is the number of bound stars within the BH influence radius, $F$ is a numerical pre-factor that is calibrated for the different models, and $t\mathrm{_{rel}}$ is the relaxation time scale of the cluster calculated at $R_{\mathrm{inf}}$, which we estimate following \citet{Spitzer1987}:
\begin{equation}
\label{eq:t_rel}
t\mathrm{_{rel}} = \frac{1.8\times 10^3}{\ln(\Lambda)} \left( \frac{10^7 \mathrm{M_{\odot}}/\mathrm{pc^3}}{\rho} \right) \left(\frac{\sigma}{100 \mathrm{km/s} } \right)^3 \left(\frac{1\mathrm{M_{\odot}}}{m_*} \right) \mathrm{Myr}
\end{equation}
where $\sigma$, $\rho$ and $m_*$ are respectively the velocity dispersion, stellar density and the average stellar mass computed within $r<R_{\mathrm{inf}}$.
The Coulomb logarithm, $\Lambda$, can be approximated with $\Lambda = 0.11 N$ \citep{Giersz1994}. In our case $N$ is the number of particles within the influence radius, therefore $N=2 M\mathrm{_{BH}} / m_*$. Similarly we can now rewrite $N_{b}=2 f_{\mathrm{b}} M\mathrm{_{BH}}/m_*$, where $f_{\mathrm{b}}$ is the fraction of bound stars at $r<R_{\mathrm{inf}}$. It then follows that we can rewrite Eq. \ref{eq:Ndot_1} as:
\begin{equation}
\label{eq:Ndot_2}
\begin{aligned}
\dot{N}_{\mathrm{TDE}} = & 1.1 F f_{\mathrm{b}} \ln \left( 0.22 \frac{M\mathrm{_{BH}}}{m_*} \right) \left( \frac{M\mathrm{_{BH}}}{10^3 \mathrm{M_{\odot}}}\right)\left( \frac{\rho}{10^7 \mathrm{M_{\odot}}/\mathrm{pc^3}} \right) \\
& \times \left(\frac{100 \mathrm{km/s} }{\sigma} \right)^3 \mathrm{Myr^{-1}} \
\end{aligned}
\end{equation}
where the stellar velocity dispersion $\sigma$, the stellar density $\rho$ and the average stellar mass $m_*$ are computed at the influence radius $(r=R_{\mathrm{inf}})$.
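For concreteness, Eq. \ref{eq:Ndot_2} can be evaluated as in the sketch below; the numerical inputs are placeholders of the same order of magnitude as the R06M300 values in Table \ref{table:initial_conditions}.
\begin{verbatim}
import numpy as np

def tde_rate(f_b, m_bh, rho, sigma, m_star, F=0.8):
    # Eq. (Ndot_2): TDE rate in Myr^-1.  Units: m_bh, m_star in Msun,
    # rho in Msun pc^-3, sigma in km/s; F is the calibrated pre-factor.
    return (1.1 * F * f_b * np.log(0.22 * m_bh / m_star)
            * (m_bh / 1e3) * (rho / 1e7) * (100.0 / sigma)**3)

# Placeholder inputs, roughly of the order of model R06M300:
rate = tde_rate(f_b=0.2, m_bh=300.0, rho=1e7, sigma=30.0, m_star=1.5)
\end{verbatim}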
To estimate the cumulative number of TDEs as a function of time
we integrate Eq. \ref{eq:Ndot_2} obtaining:
\begin{equation}
\label{eq:NTDE}
\begin{aligned}
N_{\mathrm{TDE}}(t) = 1.1 F \int_{0}^{t}& f_{\mathrm{b}}(t')\left( \frac{M\mathrm{_{BH}}(t')}{10^3 \mathrm{M_{\odot}}}\right) \ln \left( 0.22 \frac{M\mathrm{_{BH}}(t')}{m_*(t')} \right) \\
& \times \left( \frac{\rho(t')}{10^7 \mathrm{M_{\odot}}/\mathrm{pc^3}} \right) \left(\frac{100 \mathrm{km/s} }{\sigma(t')} \right)^3 dt'. \
\end{aligned}
\end{equation}
Here, to stay as general as possible, we assumed that the variables $f_{\mathrm{b}}$, $M\mathrm{_{BH}}$, $\rho$, $m_*$ and $\sigma$ are functions of time.
Calibrating the factor $F$ for each model, Eq. \ref{eq:NTDE} predicts fairly accurately the number of TDEs as a function of time, as displayed in Fig. \ref{fig:15_r_power_law}. The bottom right panel of the same figure shows the best-fitting value of $F$ in the different simulations and indicates that $0.6\lesssim F \lesssim 1.0$.
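A sketch of the corresponding numerical integration of Eq. \ref{eq:NTDE} over a snapshot time series follows; all input arrays are assumed to be measured at $r=R_{\mathrm{inf}}$ for each snapshot.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def cumulative_tdes(t, f_b, m_bh, rho, sigma, m_star, F=0.8):
    # Eq. (NTDE): integrate the instantaneous rate of Eq. (Ndot_2)
    # over the snapshot times t (Myr); inputs are arrays over snapshots.
    rate = (1.1 * F * f_b * np.log(0.22 * m_bh / m_star)
            * (m_bh / 1e3) * (rho / 1e7) * (100.0 / sigma)**3)
    return cumulative_trapezoid(rate, t, initial=0.0)
\end{verbatim}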
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/16_extrap_comparison.pdf}
\caption{The mass of the BH as a function of time for the simulation R06M300 (green line) and using equation \ref{eq:Mdot} (black region). The black area perfectly reproduces the simulation result until $150\,\mathrm{Myr}$ (the end of the simulation). It then predicts $M\mathrm{_{BH}}$ for later times.
The uncertainty in the prediction is calculated including the fitting error for $\rho$ and $m_*$ (see Subsection \ref{sub:mass_growth} for more details).}
\label{fig:16_extrap_comparison}
\end{figure}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{figures/17_extrapolating_BH_growth.pdf}
\caption{\textit{Left panel}: Mass of the BH as a function of time computed by integrating Eq. \ref{eq:Mdot}. When solving the equation we assume $m_*$ to be constant and equal to $m_*=1.5\mathrm{M_{\odot}}$; we vary $f_{\mathrm{c}}$ from $1.0$ (full accretion scenario, see dark blue region) to $0.2$ ($80$ per cent of the stellar material is lost, see magenta region). \textit{Right panel}: BH mass accretion rate as a function of time for the five scenarios presented in the left panel. In one scenario (red line) we limit the accretion to the Eddington rate and we assume all the stellar material in TDEs accumulates around the BH without mass loss. }
\label{fig:17_extrapolating_BH_growth}
\end{figure*}
\subsection{Extrapolating the mass growth of the central black hole}
\label{sub:mass_growth}
This subsection describes a method to predict the BH mass growth beyond the simulation time. We cannot evolve our systems for more than $150\,\mathrm{Myr}$ as our simulations are computationally costly, but we can use the results of our runs to extrapolate the BH mass for a longer time.
We start by connecting the tidal disruption rate $\dot{N}_{\mathrm{TDE}}$ estimated in equation \ref{eq:Ndot_2} with $\dot{M}_{\mathrm{BH}}$ the BH accretion rate.
For this purpose, we need to estimate how much stellar mass remains bound to the BH during a TDE and how much of the bound material ends up in the compact object.
The simple argument presented in \citet{Rees1988} indicates that if a BH destroys a star in a parabolic orbit, half of the stellar mass is lost during the disruption process.
The remaining portion is expected to form an accretion disk around the BH, eventually falling on the compact object on a time scale that depends on the accretion process \citep{Shiokawa2015, Bonnerot2016, Hayasaki2016}. If the accretion disk injects gas into the BH at a rate larger than the Eddington rate, the disc likely becomes weakly bound
and a significant fraction of the gas is ejected through winds \citep{Strubbe2011}.
In our simulations, we model neither the accretion disc formation phase nor the actual accretion phase. Instead we average out all these processes by introducing the parameter $f_{\mathrm{c}}$, which represents the fraction of stellar mass accreted by the BH during a TDE. It follows from the definition of $f_{\mathrm{c}}$ that:
\begin{equation}
\begin{aligned}
\dot{M}_{\mathrm{BH}} = f_{\mathrm{c}} m_* \dot{N}_{\mathrm{TDE}}.
\end{aligned}
\end{equation}
We can therefore estimate the BH mass growth rate $\dot{M}_{\mathrm{BH}}$ by computing $\dot{N}_{\mathrm{TDE}}$ using Eq. \ref{eq:Ndot_2}.
Observing that the velocity dispersion used to estimate $\dot{N}_{\mathrm{TDE}}$ is calculated at $r=R_{\mathrm{inf}}$, we can further simplify the previous equations by rewriting $\sigma$ as a function of the BH mass $M\mathrm{_{BH}}$ and the density $\rho$ using the virial theorem (see Eq. \ref{eq:sigmafit}). It thus follows:
\begin{equation}\label{eq:Mdot}
\dot{M}_{\mathrm{BH}} = C f_{\mathrm{c}} f_{\mathrm{b}} \left( \frac{m_*}{\mathrm{M_{\odot}}} \right) \sqrt{ \frac{\rho}{10^7\mathrm{M_{\odot}}/\mathrm{pc}^3} } \ln \left( 0.22 \frac{M\mathrm{_{BH}}}{m_*} \right) \frac{\mathrm{M_{\odot}}}{\mathrm{Myr}}
\end{equation}
where we have incorporated all the constants in $C=1.1 \times 10^6 F / C_0$ (see Appendix \ref{apx:extrapolations} for the definition of $C_0$).
Since our simulations are computationally very expensive, we evolved them for no more than $150\,\mathrm{Myr}$. However, as long as we find a reliable extrapolation of $f_{\mathrm{b}}$, $\rho$ and $m_*$, we can use Eq. \ref{eq:Mdot} to predict the mass of the black hole beyond our simulation time.
It turns out that all three of these quantities, the density, the average stellar mass and the fraction of bound stars, manifest regular and predictable behaviour.
As is apparent from the left panel of Fig. \ref{fig:Appendix_R06M300}, it is reasonable to assume that $f_{\mathrm{b}}$ does not change over time. Hence we consider $f_{\mathrm{b}} \approx 0.2$ to be constant.
The density $\rho$ in the vicinity of the BH (upper panel in Fig. \ref{fig:8_core_expansion}) displays a consistent trend: after an initial phase of core expansion, all models, independently of their initial BH mass and initial concentration, converge to an almost identical evolution. Consequently, we can use a fitting function to robustly extrapolate the later evolution of $\rho$, as we illustrate comprehensively in Appendix \ref{apx:extrapolations}. In the same appendix, we also show that the change in time of $m_*$ can be modelled using a linear fit (see the left panel in Fig. \ref{fig:Appendix_extrapolations}).
To summarise, we provided fitting formulas to predict the evolution of $\rho$ and $m_*$, which, together with Eq. \ref{eq:Mdot} allow us to extrapolate $M\mathrm{_{BH}}$ as a function of time.
Substituting $f_{\mathrm{c}}=0.5$, the fiducial value assumed in our runs, into Eq. \ref{eq:Mdot}, we obtain the results displayed in Fig. \ref{fig:16_extrap_comparison}.
The time evolution of the R06M300 BH mass is perfectly consistent with the simulation results (green line in Fig. \ref{fig:16_extrap_comparison}), reaching $2500 \mathrm{M_{\odot}}$ in about $1$ Gyr.
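The extrapolation itself amounts to integrating Eq. \ref{eq:Mdot} forward in time; a sketch follows. The functional forms \texttt{rho\_of\_t} and \texttt{mstar\_of\_t}, as well as the overall constant \texttt{C}, are placeholders here, standing in for the calibrated fits of Appendix \ref{apx:extrapolations}.
\begin{verbatim}
import numpy as np

def extrapolate_mbh(m0, t_end, rho_of_t, mstar_of_t,
                    dt=0.1, C=10.0, f_c=0.5, f_b=0.2):
    # Forward-Euler integration of Eq. (Mdot); units: Msun, Myr,
    # Msun pc^-3.  C stands in for 1.1e6 F / C_0 (see Appendix).
    t, m_bh = 0.0, m0
    while t < t_end:
        m_star, rho = mstar_of_t(t), rho_of_t(t)
        mdot = (C * f_c * f_b * m_star * np.sqrt(rho / 1e7)
                * np.log(0.22 * m_bh / m_star))
        m_bh += mdot * dt
        t += dt
    return m_bh
\end{verbatim}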
Thanks to its simplicity, we can use Eq. \ref{eq:Mdot} to explore systems that are slightly different from the ones we simulated. For instance, a star cluster with a larger number of particles ($N>10^6$) has a larger number of massive stars at its disposal for the BH to consume. This implies that $m_*$ stays constant for much longer. Simply setting the stellar mass to a constant value of $m_*=1.5 \mathrm{M_{\odot}}$ leads to a more significant mass growth: the BH in this scenario is expected to reach $3500 \mathrm{M_{\odot}}$ in 1 Gyr (see the purple line in Fig. \ref{fig:17_extrapolating_BH_growth}).
By means of Eq. \ref{eq:Mdot} we can also explore how modifying $f_{\mathrm{c}}$ would affect our findings. We are not limited to changing the value of $f_{\mathrm{c}}$ alone: we can also distinguish between the accretion fraction for direct TDEs ($f_\mathrm{c}^\mathrm{TDE}$) and for TCEs ($f_\mathrm{c}^\mathrm{TCE}$).
Our models registered $\sim 90$ per cent direct TDEs and $\sim10$ per cent TDEs triggered by tidal capture. We can thus link $f_{\mathrm{c}}$ with $f_\mathrm{c}^\mathrm{TDE}$ and $f_\mathrm{c}^\mathrm{TCE}$ as follows:
\begin{equation}
f_{\mathrm{c}} = (0.9f_\mathrm{c}^\mathrm{TDE} + 0.1f_\mathrm{c}^\mathrm{TCE}) / 2.
\end{equation}
We use the equality above to explore the scenario
$f_\mathrm{c}^\mathrm{TCE} =1.0, f_\mathrm{c}^\mathrm{TDE} =0.5$, which is motivated by the fact that, in TDEs induced by
tidal captures, stellar gas is more tightly bound to the BH.
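For these values, the relation above evaluates to $f_{\mathrm{c}} = (0.9\cdot0.5 + 0.1\cdot1.0)/2 = 0.275$.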
The left panel of Fig. \ref{fig:17_extrapolating_BH_growth} outlines the main results.
Not surprisingly, $f_\mathrm{c}^\mathrm{TDE} = f_\mathrm{c}^\mathrm{TCE} =0.2$ slows the growth of the BH, which reaches only $\sim 1500 \mathrm{M_{\odot}}$ in 1 Gyr. On the other hand, the scenario with $f_\mathrm{c}^\mathrm{TDE} = f_\mathrm{c}^\mathrm{TCE} =1.0$ brings the BH to $\sim 7600 \mathrm{M_{\odot}}$. This scenario might require an initial accretion rate of $\sim 15 \dot{M}_{\mathrm{EDD}}$, as the dark blue line in the right panel of Fig. \ref{fig:17_extrapolating_BH_growth} indicates. Observations indicate that super-Eddington accretion
can be possible during TDEs \citep[see][and references therein]{Komossa2015}.
However, theoretical studies show that jets and violent outflows
will probably unbind between $30$ per cent and up to $75$ per cent of the gas around the BH \citep{Ayal2000, Metzger2016, Toyouchi2021}. For this reason, the BH in our model might not be able to sustain an accretion rate with $f_{\mathrm{c}}=1$, at least during the first $20 - 30$ Myr.
Nevertheless, the BH can still reach $\sim 7000 \mathrm{M_{\odot}} $ in 1 Gyr even when limiting the accretion rate to the Eddington rate, as shown (in red) in the left panel of Fig. \ref{fig:17_extrapolating_BH_growth}. If all the stellar gas generated in TDEs accumulates around the BH and stays bound to it, the BH will accrete it within 1 Gyr, even restricting its growth rate to $\dot{M}_{\mathrm{BH}} \lesssim \dot{M}_{\mathrm{EDD}}$. The accretion rate for this scenario, indicated in red in the right panel of Fig. \ref{fig:17_extrapolating_BH_growth}, is constant at $\dot{M}_{\mathrm{BH}}=\dot{M}_{\mathrm{EDD}}$ for the first $\sim 100$ Myr; as soon as the BH consumes all the gas accumulated during past TDEs, the accretion drops rapidly to significantly lower values.
In conclusion, our analysis indicates that a $300 \mathrm{M_{\odot}} $ BH can grow up to $\sim 10^4 \mathrm{M_{\odot}} $ as long as, after TDEs, all stellar material remains bound. This scenario might be unfeasible if the tidally destroyed stars move on open orbits, but, as we showed in the previous subsection, the BH feeds primarily on the bound cloud. To put it another way, the great majority of the destroyed stars come from bound orbits. Therefore, most stellar material produced during TDEs might have a high chance of remaining bound to the BH.
\section{Discussion and conclusions}
\label{sec:Conclusions}
In this work, we evolved and analysed five direct $N\mathrm{-body} $ simulations of compact ($R_{\mathrm{h}}\leq0.8$ pc) star clusters ($N=256000$) for at most $150\,\mathrm{Myr}$ to investigate the growth of massive BHs through repeated TCEs and TDEs. All our systems start with a single central BH ($50 \mathrm{M_{\odot}} \leq M\mathrm{_{BH}} \leq 2000 \mathrm{M_{\odot}} $) surrounded by mass-segregated low-mass stars in the mass range $0.08 \mathrm{M_{\odot}} \lesssim m_*\lesssim2.0 \mathrm{M_{\odot}}$.
The clusters' initial central stellar densities ($n_{\mathrm{c}} > 10^7$ pc$^{-3}$) and initial central velocity dispersions ($30 \mathrm{km/s} \lesssim \sigma_{\mathrm{c}} \lesssim 90 \mathrm{km/s}$) fulfil the criteria provided by \citet{Stone2017} to trigger tidal capture runaway collisions.
The five models show several similarities in both the evolution of the cluster core and the growth of the central black hole.
At the very beginning of the runs, the BHs in all realisations give rise to a subsystem of bound orbits in their vicinity. Around $20$ to $30$ per cent of the stars enclosed within the influence radius ($r<R_{\mathrm{inf}}$) stay bound to the BH throughout the simulation. This cloud of bound stars forms a \citet{Bahcall1976} cusp embedded in a reservoir of unbound particles with a shallower density distribution.
Our analysis indicates that the bound cloud plays a key role in the evolution of the cluster core, as it constantly releases energy to the core through dynamical interactions, resulting in a monotonic core expansion. Consequently, the clusters cannot maintain their original central densities, which drop by about an order of magnitude in the first few Myr.
Despite this initial expansion, the BHs experienced a sustained tidal disruption rate until the simulation ended. About $10-15$ per cent of the TDEs are caused by tidal captures; in all these cases, the orbit of the destroyed stars underwent partial or total circularisation. \\
We derived simple equations (see Eq. \ref{eq:Ndot_2} and \ref{eq:NTDE}) that predict accurately the number of TDEs experienced by the central BHs as shown in Fig. \ref{fig:15_r_power_law}.
We use these equations to derive an expression that models the BH mass growth rate as a function of time. With this expression we explore scenarios that slightly deviate from our initial conditions and predict the time evolution of the BH masses up to 1 Gyr.
We predict that a BH of a few hundred solar masses can easily reach $\sim 2000 - 3000 \mathrm{M_{\odot}}$ through repeated TDEs and TCEs within 1 Gyr. The BH can even grow up to $\sim 10^4\mathrm{M_{\odot}}$ if all the stellar material remains bound to the BH and no mass loss occurs during the destruction and accretion phases (see Fig. \ref{fig:17_extrapolating_BH_growth}). We argue that this scenario is plausible because, as revealed by our simulations (see Fig. \ref{fig:13_TC_orbit_elemetnts}), the BHs feed primarily on bound orbits. As long as the accretion proceeds at a sub-Eddington rate, all the stellar gas should remain bound to the BH.
Numerous observations indicate the presence of $10^4 \mathrm{M_{\odot}} - 10^5 \mathrm{M_{\odot}}$ IMBHs in galactic nuclei.
One of the most convincing observations of this type reveals
that the nuclear star cluster of the galaxy NGC 4395 hosts a BH with a mass between $9\times10^3 \mathrm{M_{\odot}}$ \citep{Woo2019} and $\sim 4 \times 10^5 \mathrm{M_{\odot}}$ \citep{Peterson2005, denBrok2015}.
Our results show that tidal captures and tidal disruption events might contribute significantly to the growth of the many observed IMBHs inside nuclear star clusters.
TCEs and TDEs might even be the dominant IMBH growth channel in clusters with moderate escape velocities ($<100$ km/s), where the hierarchical formation scenario is suppressed by relativistic recoils \citep{Mapelli2021}. \\
In this work, we did not intend to conduct realistic simulations of NSCs. Instead, we developed idealised systems that contain all the key ingredients to address the questions we aim to answer while being an approximate version of real systems. There are two main simplifications adopted in this study.
First of all, we initialised our systems with an old population of stars. In this way, we were able to neglect effects related to stellar evolution and focus exclusively on dynamic effects. However, NSCs experience multiple episodes of star formation that replenish the system of massive stars \citep{Carson2015, Kacharov2018}. The latter might contribute significantly to the BH mass growth and could dramatically alter the scenario described in this work.
In addition, our simulations contain only a single central BH. In other words, we assume that after the BH subsystem evaporates, only one BH remains in the cluster. However, the stellar system might retain a BH binary instead of a single BH, which might significantly change the tidal disruption rate experienced by the compact objects. \\
In future studies, we plan to include in our initial conditions the central BH subcluster, since the latter could significantly influence the IMBH tidal disruption rate \citep{Teboul2022}.
We also plan to locate our systems at the centre of an external potential to study the effect of the galaxy bulge around a NSC.
In addition, in subsequent work, we intend to make a comprehensive comparison of our findings with analytical models for loss-cone dynamics \citep{Stone2016} and tidally-driven runaway growth \citep{Stone2017}.
\section*{Acknowledgements}
FPR, PHJ, SL and DI acknowledge the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). PHJ also acknowledges the support of the Academy of Finland grant 339127.
TN acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) under Germany's Excellence Strategy – EXC-2094 –
390783311 from the DFG Cluster of Excellence 'ORIGINS'.
The simulations have been carried out with the Ampere GPU system of the MPI for Astrophysics cluster "Freya" hosted by the Max Planck Computing and Data facility. This study used also facilities hosted by the CSC - IT Centre for Science, Finland.
\subsection{Motivation of the Problem}
In recent times, there has been a crisis in the sciences because too many research results are found to lack replicability and reproducibility. Some of this crisis has been attributed to a failure of statistical methods to account for data-dependent exploration and modeling that precedes statistical inference. Data-dependent actions such as selection of subsets of cases, of covariates, of responses, of transformations and of model types have been aptly named ``researcher degrees of freedom'' \citep{simmons2011false}, and these may well be a significant contributing factor in the current crisis. Classical statistics does not account for them because it is built on a framework where all modeling decisions are to be made {\em independently of the data on which inference is to be based}. But if the data are in fact used to this end prior to statistical inference, then such inference loses its justifications and the ensuing validity conferred on it by classical theories. It is therefore critical that the theory of statistical inference be brought up to date to account for data-driven modeling. Updating the theory that justifies statistical inferences usually requires modifying the procedures of inference such as hypothesis tests and confidence intervals. As a consequence, the new procedures may lose some power relative to the previously stipulated but illusory power derived from classical theories. This, however, is a necessary price to be paid for better justification of statistical inference in the context of the pre-inferential liberties taken in today's data-analytic practice. While updating of statistical theories and inference procedures will not solve all problems underlying the current crisis, it is a necessary step as it may help mitigate at least some aspects of the crisis. In what follows we refer to all data-analytic decisions that are made using the data prior to inference as ``data-driven modeling''.
A second issue with theories of classical statistical inference is that many of them rely on the assumption that the data have been correctly modeled in a probabilistic sense. This means the theories tend to assume that the probability model used for the data correctly captures the observable features of the data generating process. Justifications of statistical inferences derived from such theories are therefore {\em in}valid if the model is incorrect or (using the technical term) ``misspecified''. With the proliferation of data-analytic approaches in science and business, it is becoming ever more unrealistic to assume that all statistical models are correctly specified and inferences are made only after carefully vetting the model for correct specification, for example, using model diagnostics. Such vetting may never have been realistic in the first place, and it should also be said that pre-inferential diagnostics should be counted among ``researcher degrees of freedom'' as they may result in data-driven modeling decisions. It is therefore a mandate of realism to use so-called ``model-robust'' methods of statistical inference, and for statistical theory to provide their justifications. In matters of misspecification the situation is somewhat less dire than data-driven modeling as there exists a rich literature on the study of inference when models are misspecified.
We will naturally draw on extant proposals for misspecification-robust or (using the technical term) ``model-robust'' inference and adapt them to our purposes.
To summarize, there exist at least two ways in which statistical inferences with justifications from classical mathematical statistics only can be invalidated, namely,
\begin{enumerate}[label = \bfseries(P\arabic*)] \itemsep 0em
\item data-driven modeling prior to statistical inference, and\label{prob1}
\item model misspecification.\label{prob2}
\end{enumerate}
In light of the replicability and reproducibility crisis in the sciences, it is of considerable interest, even urgency, to develop methods of statistical inference and associated theoretical justifications that account for both \ref{prob1} and \ref{prob2}. Even though these problems are manifest in almost all statistical procedures used in practice, it is no simple task to provide methods of valid statistical inference that address these problems in greater generality. For this reason the present article puts forth specifically a method of valid inference for the case that the fitting procedure is ordinary least squares (OLS) linear regression. Here there exists a literature that documents the drastic effects of ignoring \ref{prob1} and \ref{prob2}; see, for example, \cite{Bueh63}, \cite{Olsh73}, \cite{REN80}, and~\cite{Freed83}. We will address one particular form of problem \ref{prob1}, namely, data-driven selection of regressor variables, and we will deal with several forms of problem~\ref{prob2}.
Some of the earliest works that study estimators under data-dependent modeling \ref{prob1} include \cite{Hjort03} and \cite{Claeskens07}. Although these articles deal with a general class of statistical procedures, a major limitation, in view of the current article, is that the data-dependent modeling is restricted to a very narrow class of principled variable selection methods such as AIC or some other information criterion. The fact is, however, that few data analysts will confine themselves to a strict protocol of data-driven modeling. To address broader aspects of ``researcher degrees of freedom'' there have more recently emerged proposals that provide validity of statistical inference in the case of arbitrary data-driven selection of regressor variables. The first such proposal was by \cite{Berk13} who solve the problem allowing misspecified response means but retaining the classical assumptions of homoskedastic and normally distributed errors. We refer to \cite{Berk13} for many other prior works related to problem \ref{prob1} where data-driven modeling consists of selection of regressor variables. A more recent article that expands on \cite{Berk13} is by \cite{Bac16}. An alternative approach is by \cite{Lee16}, \cite{Tibs16}, \cite{Tian16} (for example). Similar to \cite{Hjort03}, these proposals do not ensure validity of inference against arbitrary regressor selection but against {\em specific selection methods} such as the lasso or stepwise forward selection. This type of post-selection inference is conditional on the selected model and dependent on distributional assumptions, thereby not addressing problem~\ref{prob2}.
The present article is close in spirit to \cite{Berk13} and \cite{Bac16} and lends their approach a considerable degree of generality by covering both fixed regressors (as in these references) and (newly) random regressors. \cite{Bac16} is the only work we know of that provides valid statistical inference under arbitrary data-dependent regressor selection and general misspecification of the regression models. Their framework assumes a situation where the set of sub-models is finite and of fixed cardinality independent of the sample size. Their method of statistical inference is NP-hard, hence requires computational heuristics. To overcome these limitations we propose here a simplified procedure with the following properties: (1)~it is comparatively computationally efficient with at most polynomial complexity in the total number of covariates, and (2)~it allows the set of sub-models to grow almost exponentially as a function of the sample size. Thus the procedure is also in the spirit of high-dimensional statistics where the total number of covariates is allowed to be much larger than the sample size.
\subsection{Overview}
In what follows, the term ``model-selection'' will always mean arbitrary data-driven selection of regressor variables, which is the only aspect of problem \ref{prob1} that will be addressed in this article. Furthermore, the only fitting method considered here is OLS linear regression; this limitation is for expository purposes, and results for more general types of regressions will be given elsewhere. Problem \ref{prob2} will be addressed by the complete absence of modeling assumptions. In particular, it will {\em not} be assumed that the response means behave linearly in the regressors, and equally it will {\em not} be assumed that the errors are homoskedastic and normally distributed.
The goal is to provide confidence regions for linear regression coefficients obtained after model-selection. In the process, we will prove simple but powerful results about linear regression that lend themselves to proving the validity of confidence regions. The main contributions of the current paper are as follows:
\begin{enumerate}
\item We treat OLS linear regression as a fitting method for linear equations while treating the associated Gaussian linear model merely as a working model that is permitted to be misspecified. We consider the case where the observations are random vectors consisting of a response variable and one or more regressor variables/covariates, allowing the latter to be random rather than fixed. Note that fixed covariates are assumed in the settings of \cite{Berk13} and \cite{Bac16}. Random covariates require us to interpret and understand what is being estimated more carefully. See \cite{Buja14} for an explanation of why under misspecification the treatment of random covariates as fixed is not justified.
\item Following \cite{Berk13} and \cite{Bac16} we decouple the inference problem from model selection, meaning that the inferences proposed here are valid no matter how the model selection was done. This feature has pluses and minuses. On the plus side, inferences will be valid even in the presence of ad-hoc and informal selection decisions made by the data analyst, including, for example, visual diagnostics based on residual plots. On the minus side, decoupling implies that inferences cannot take into account any properties of the model selection procedure when in fact only one such procedure was used. A strong argument by \cite{Berk13} and \cite{Bac16} in favor of decoupling, however, is that in reality data analysts will rarely limit themselves to one and only one formal selection method if it produces unsatisfactory results on the data at hand. Therefore, in order to truly contribute to solving the crisis in the sciences, unreported informal selection should be assumed and accounted for. Decoupling of model selection and inference has a further benefit: It solves the circularity problem by permitting selection to start over and over as often as the data analyst pleases; inferences in all selected models will be valid, whether they are found satisfactory or unsatisfactory for whatever reasons.
\item Our theory provides validity of post-selection inferences even when model selection is applied to a very large number of covariates --- almost exponential in the sample size. Thus the theory is in the spirit of contemporary high-dimensional statistics which is interested in problems where the number of variables is larger than the sample size. Of course we require model selection to produce models of size smaller than the sample size in order to avoid trivial collinearity when the number of covariates exceeds the sample size.
\item We mostly focus on one simple strategy for valid post-selection inference that has the advantage of great simplicity, both in theory and in computation --- its computational cost being proportional to the number $p$ of covariates. This is surprising as the computational complexity of \cite{Berk13} is exponential in $p$ because it requires searching all covariates in all possible submodels. The drawback of the strategy is that its confidence regions are not aligned with the coordinate axes in covariate space, hence do not immediately provide confidence intervals for the slope parameters of the form ``estimate $\pm$ half-width''.
\item Most of the present results are based on deterministic inequalities that allow for valid post-selection inference even when the random vectors involved are structurally dependent. This approach may not produce the best possible rates in some contexts, but the resulting inferences will be more robust to failures of independence assumptions.
\end{enumerate}
As a caveat, it should be stated that we do not address the question of when linear regression is appropriate in a given data analytic situation when misspecification is present. We consider it a reality that many if not most linear regressions are fitted in the presence of various degrees of misspecification, and reporting results for interpretation should be accompanied by statistical inference just the same. Our goal is therefore limited to providing asymptotic justification of inference in the presence of misspecification {\em and} after data-driven model selection.
The remainder of the paper is organized as follows. Section \ref{sec:Notation_Problem_Formuation} provides the necessary notation for a rigorous formulation of the problem of valid post-selection inference. In Section \ref{sec:Simultaneous}, the problem of post-selection inference is shown to be equivalent to a problem of simultaneous inference. In Section \ref{sec:FirstStrat} we present the first strategy for valid post-selection inference along with its main features. Section \ref{sec:Comp} describes an implementation method based on the multiplier bootstrap. Section \ref{sec:Generalization} provides a simple generalization to linear regression-type problems. Section \ref{sec:HighDim} points out an interesting connection between the post-selection confidence regions proposed here and the estimators proposed in the high-dimensional linear regression literature. In Section \ref{sec:ProsAndCons}, we discuss various advantages and disadvantages of the approach presented in this paper. The final Section \ref{sec:Conclusions} summarizes the results.
Many of the proofs are deferred to Appendices \ref{app:ProofLemmaL1}, \ref{app:LebesgueMeasure} and \ref{app:LassoRegions}. Most of the discussion in the paper is based on the assumption of independent random vectors, although comments about applicability to dependent random vectors are given in appropriate places. Appendix \ref{app:HDCLT} provides theoretical background about a high-dimensional central limit theorem and the consistency of multiplier bootstrap. These results are required for computation of joint quantiles for the proposed confidence regions. Appendix \ref{app:Dependent} describes the functional dependence setting where the computation of required quantiles is not much different from that of the independence setting.
\section{Notation and Problem Formulation}\label{sec:Notation_Problem_Formuation}
\subsection{Notation related to Vectors, Matrices and Norms} \label{sec:Notation}
For any vector $v\in\mathbb{R}^q$ and $1\le j\le q$, $v(j)$ denotes the $j$-th coordinate of $v$. For any non-empty subset $M \subseteq \{1,2,\ldots,q\}$, $v(M)$ denotes the sub-vector of $v$ with indices in $M$. For instance, if $M = \{2, 4\}$ and $q \ge 4$, then $v(M) = (v(2), v(4))$. If $M = \{j\}$ is a singleton then $v(j)$ is used instead of $v(\{j\})$.
Therefore, $ v(M) \in \mathbb{R}^{|M|}$ where $|M|$ denotes the cardinality of~$M$.
For any symmetric matrix $A\in\mathbb{R}^{q\times q}$ and $M \subseteq \{1,2,\ldots,q\}$, let $A(M)$ denote the sub-matrix of $A$ with indices in $M\times M$ and for $1\le j, k\le q$, let $A(j,k)$ denote the value at the $j$-th row and $k$-th column of $A$.
Define the $r$-norm of a vector $v\in\mathbb{R}^q$ for $1\le r\le \infty$ as usual~by
\[
\norm{v}_r := \bigg(\sum_{j=1}^q |v(j)|^r\bigg)^{1/r},\quad\mbox{for}\quad 1\le r < \infty,\quad\mbox{and}\quad \norm{v}_{\infty} := \max_{1\le j\le q}|v(j)|.
\]
Let $\norm{v}_0$ denote the number of non-zero entries in $v$ (note this is not a norm). For any symmetric matrix $A$, let $\lambda_{\min}(A)$ denote the minimum eigenvalue of $A$. Also, let the elementwise maximum and the operator norm be defined, respectively, as
\[
\norm{A}_{\infty} := \max_{1\le j, k\le q}|A(j,k)|,\quad\mbox{and}\quad \norm{A}_{op} := \sup_{\norm{\delta}_2 \le 1}\norm{A\delta}_2.
\]
The following inequalities will be used throughout without special mention:
\begin{equation}\label{eq:MatrixVectorIneq}
\norm{v}_1 \le \norm{v}_0^{1/2}\norm{v}_2,\quad \norm{Av}_{\infty} \le \norm{A}_{\infty}\norm{v}_1,\quad\mbox{and}\quad |u^{\top}Av| \le \norm{A}_{\infty}\norm{u}_1\norm{v}_1,
\end{equation}
where $A\in\mathbb{R}^{q\times q}$ and $u, v\in\mathbb{R}^q$.
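As an informal aid to intuition, these inequalities are easy to verify numerically. The following minimal Python/numpy check (ours, purely illustrative and not part of the formal development; all identifiers are hypothetical) confirms all three on random inputs:
\begin{verbatim}
# Sanity check (illustrative only) of the three inequalities in
# (eq:MatrixVectorIneq) on a random symmetric A and random vectors u, v.
import numpy as np

rng = np.random.default_rng(0)
q = 5
A = rng.normal(size=(q, q))
A = (A + A.T) / 2                      # symmetric, as in the text
u, v = rng.normal(size=q), rng.normal(size=q)

norm0 = np.count_nonzero(v)            # ||v||_0: number of non-zero entries
norm1 = np.sum(np.abs(v))              # ||v||_1
norm2 = np.sqrt(np.sum(v ** 2))        # ||v||_2
A_inf = np.max(np.abs(A))              # ||A||_inf: elementwise maximum

assert norm1 <= np.sqrt(norm0) * norm2
assert np.max(np.abs(A @ v)) <= A_inf * norm1
assert abs(u @ A @ v) <= A_inf * np.sum(np.abs(u)) * norm1
\end{verbatim}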
\subsection{Notation Related to Regression Data and OLS}
Let $(X_i^{\top}, Y_i)^{\top}\in\mathbb{R}^p\times\mathbb{R} ~ (1\le i\le n)$ represent a sample of $n$ observations. The covariate vectors $X_i \in \mathbb{R}^p$ are column vectors. It is common to include an intercept term when fitting the linear regression. To avoid extra notation, we assume that all covariates under consideration are included in the vectors $X_i$, so the data analyst may take the first coordinate of $X_i$ to be 1. If the number $p$ of covariates varies with $n$, the data should be interpreted as a triangular array. Throughout, the term ``model'' is used to refer to the subset of covariates present in the regression and there will be \underline{\emph{no}} assumption that any linear model is true for any choice of covariates.
In order to describe ``models'' in the sense of subsets of covariates, we use index sets $M \subseteq \{1,2,\ldots,p\}$ as in the previous subsection and write $X_i(M)$ for the covariate vectors in the submodel $M$. For any $1\le k\le p$, define the set of all non-empty models of size no larger than $k$ by
\begin{equation}\label{eq:kSparse}
\mathcal{M}_p(k) := \{M:\, M\subseteq\{1,2,\ldots, p\},\,\, 1\le |M| \le k\},
\end{equation}
so that $\mathcal{M}_p(p)$ is the power set of $\{1,2,\ldots,p\}$ excluding the empty set.
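For small $p$ the model universe can be enumerated explicitly; the following illustrative Python snippet (ours) confirms, for instance, that $|\mathcal{M}_4(2)| = \binom{4}{1} + \binom{4}{2} = 10$ and $|\mathcal{M}_4(4)| = 2^4 - 1 = 15$:
\begin{verbatim}
# Enumeration of M_p(k) from (eq:kSparse); illustrative only.
from itertools import combinations

def models(p, k):
    """All non-empty subsets of {1,...,p} of size at most k."""
    return [frozenset(M) for size in range(1, k + 1)
            for M in combinations(range(1, p + 1), size)]

print(len(models(4, 2)))   # 10 = C(4,1) + C(4,2)
print(len(models(4, 4)))   # 15 = 2**4 - 1, the power set minus the empty set
\end{verbatim}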
To proceed further, we assume that the observations are independent but possibly non-identically distributed. Note that this assumption includes as special cases (i)~the setting of independent and identically distributed observations and (ii)~the setting of fixed (non-random) covariates (by defining the distribution of $X_i$ to be a point mass at the observed $X_i$). Our setting is more general than either (i) or (ii) in that some of the covariates are allowed to be fixed while others are random.
For any $M \subseteq \{1,2,\ldots,p\}$, define the ordinary least squares empirical risk (or objective) function as
\begin{equation}\label{eq:EmpObj}
\hat{R}_n(\theta; M) := \frac{1}{n}\sum_{i=1}^n \left\{Y_i - X_i^{\top}(M)\theta\right\}^2,\quad\mbox{for}\quad \theta\in\mathbb{R}^{|M|}.
\end{equation}
Using this, define the expected risk (or objective) function as
\begin{equation}\label{eq:PopObj}
R_n(\theta; M) := \frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[\left\{Y_i - X_i^{\top}(M)\theta\right\}^2\right],\quad\mbox{for}\quad \theta\in\mathbb{R}^{|M|}.
\end{equation}
(The notations $\mathbb{E}$ and $\mathbb{P}$ are used to denote expectation and probability computed with respect to all the randomness involved.)
Define the least squares estimator and the corresponding target for model $M$ as
\begin{equation}\label{eq:LeastSquares}
\hat{\beta}_{n,M} := \argmin_{\theta\in\mathbb{R}^{|M|}} \hat{R}_n(\theta; M),\quad\mbox{and}\quad \beta_{n,M} := \argmin_{\theta\in\mathbb{R}^{|M|}}R_n(\theta; M),
\end{equation}
for all $M\subseteq\{1,2,\ldots,p\}$, hence $\hat{\beta}_{n,M}, \, \beta_{n,M} \!\in\! \mathbb{R}^{|M|}$. Note, however, the following: Suppose $M = \{1, 2\}$ and $M' = \{1\}$, then it is generally the case that $\hat{\beta}_{n,M'}(1) \neq \hat{\beta}_{n,M}(1)$ and $\beta_{n,M'}(1) \neq \beta_{n,M}(1)$, that is, estimates and parameters in submodels are \emph{not} subvectors of their analogues in larger models, except, for example, when the columns of $X(M)$ are orthogonal. The reason for this is the collinearity between the covariates in model $M$. The comments above apply to general nested models $M'\subset M$.
This is why we must write $M$ as a subscript and not in parentheses. (See Section 3.1 of \cite{Berk13} for a related discussion.)
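The non-subvector phenomenon is easily reproduced numerically. In the sketch below (ours; the data generating process is a hypothetical choice made purely for illustration), the two covariates are correlated and the coefficient of the first covariate changes from about $1.0$ in the model $\{1,2\}$ to about $1.8$ in the model $\{1\}$:
\begin{verbatim}
# Illustration: with correlated covariates, the OLS coefficient of X1 in
# the model {1} is not the same as its coefficient in the model {1,2}.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X1 = rng.normal(size=n)
X2 = 0.8 * X1 + 0.6 * rng.normal(size=n)   # correlated with X1
Y = X1 + X2 + rng.normal(size=n)

b_small = np.linalg.lstsq(X1[:, None], Y, rcond=None)[0]
b_full = np.linalg.lstsq(np.column_stack([X1, X2]), Y, rcond=None)[0]
print(b_small[0], b_full[0])   # approx 1.8 versus approx 1.0
\end{verbatim}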
Next define related matrices and vectors as follows:
\begin{equation} \label{eq:MatrixVector}
\begin{split}
\hat{\Sigma}_n := \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\in\mathbb{R}^{p\times p},\quad&\mbox{and}\quad \hat{\Gamma}_n := \frac{1}{n}\sum_{i=1}^n X_iY_i\in\mathbb{R}^{p},\\
\Sigma_n := \frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[X_iX_i^{\top}\right]\in\mathbb{R}^{p\times p},\quad&\mbox{and}\quad \Gamma_n := \frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[X_iY_i\right]\in\mathbb{R}^{p}.
\end{split}
\end{equation}
Note that for these quantities there is no need to define separate versions in submodels $M$ because they are just the submatrices $\hat{\Sigma}_n(M)$ and $\Sigma_n(M)$ and subvectors $\hat{\Gamma}_n(M)$ and $\Gamma_n(M)$, respectively. The OLS estimate of the slope vector and its target in the sub-model $M$ satisfy the following normal equations:
\begin{equation}\label{eq:MatrixVector-OLS}
\hat{\Sigma}_n(M) \hat{\beta}_{n,M} = \hat{\Gamma}_n(M), ~~~~
\Sigma_n(M) \beta_{n,M} = \Gamma_n(M).
\end{equation}
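For concreteness, the empirical normal equations can be formed and solved directly from $\hat{\Sigma}_n$ and $\hat{\Gamma}_n$; the following numpy sketch (ours, with an illustrative simulated design) does so for one submodel:
\begin{verbatim}
# Forming Sigma_hat and Gamma_hat of (eq:MatrixVector) and solving the
# empirical normal equations (eq:MatrixVector-OLS) in a submodel M.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 6
X = rng.normal(size=(n, p))
Y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

Sigma_hat = X.T @ X / n            # (1/n) sum_i X_i X_i^T
Gamma_hat = X.T @ Y / n            # (1/n) sum_i X_i Y_i

M = [0, 1, 3]                      # a submodel, 0-based indices
beta_hat_M = np.linalg.solve(Sigma_hat[np.ix_(M, M)], Gamma_hat[M])
print(beta_hat_M)                  # approx [1, -2, 0]
\end{verbatim}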
\begin{rem}
We do not solve the equations \eqref{eq:MatrixVector-OLS} on purpose because the confidence regions to be constructed below will accommodate exact collinearity by including subspaces of degeneracy. Minimizers of the objective functions $\hat{R}_n(\theta; M)$ and $R_n(\theta; M)$ defined in \eqref{eq:EmpObj} and \eqref{eq:PopObj} always exist, even if they are not unique. Estimates $\hat{\beta}_{n,M}$ can only be unique when $|M| \le n$ because $\hat{\Sigma}_n(M)$ has rank at most~$\min\{|M|, n\}$. Targets $\beta_{n,M}$, on the other hand, can be unique without a constraint on $n$ because they are based on expectations rather than finite averages, so $\Sigma_n$ and $\Sigma_n(M)$ can be strictly positive definite and $R_n(\theta; M)$ strictly convex with a unique minimizer even when $|M| > n$.
\end{rem}
\subsection{Problem Formulation} \label{sec:problem_formulation}
Under very mild assumptions, $\hat{\beta}_{n,M} - \beta_{n,M}$ converges to zero as $n$ tends to infinity for any fixed, non-random model $M$ (see \cite{Chap2:Kuch18}). This fact justifies calling $\hat{\beta}_{n,M}$ an estimator of $\beta_{n,M}$ or, equivalently, $\beta_{n,M}$ the target of estimation of $\hat{\beta}_{n,M}$. Also, for a fixed $M$, $\hat{\beta}_{n,M}$ has an asymptotic normal distribution, i.e.,
\[
n^{1/2}\left(\hat{\beta}_{n,M} - \beta_{n,M}\right) ~\overset{\mathcal{L}}{\to}~ N\left(0, AV_M\right)~~~~(0 \in \mathbb{R}^{|M|},~AV_M \in \mathbb{R}^{|M|\times|M|} )
\]
for some positive definite matrix $AV_M$ that depends on $M$ and some moments of $(X, Y)$; see the linear representation in \cite{Chap2:Kuch18}. The notation $\overset{\mathcal{L}}{\to}$ denotes convergence in law (or distribution).
Asymptotic normality lends itself for the construction of $(1 \!-\! \alpha)$-confidence regions $\hat{\mathcal{R}}_{n,M}$ such that
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\beta_{n,M} \in \hat{\mathcal{R}}_{n,M}\right) \ge 1- \alpha
\]
for any fixed $\alpha\in[0,1]$. We approach statistical inference using confidence regions rather than statistical tests, but this is a technical rather than a conceptual choice because confidence regions and tests are in a duality to each other: a confidence region with coverage at least $1\!-\!\alpha$ is a set of parameter values that could not be rejected at level $\alpha$ if used as point null hypotheses.
The problem of valid post model-selection inference is to construct for given non-random sets of models $\mathcal{M}_p$ a set of confidence regions $\{\hat{\mathcal{R}}_{n,M}\!: M \!\in\! \mathcal{M}_p\}$ such that for any {\bf\em random model} $\hat{M}$ depending (possibly) on the same data satisfying $\mathbb{P}\left(\hat{M} \!\in\! \mathcal{M}_p\right) \!=\! 1$, we have
\begin{equation}\label{eq:PoSIRequire}
\liminf_{n\to\infty}\,\mathbb{P}\left(\beta_{n,\hat{M}} \in \hat{\mathcal{R}}_{n,\hat{M}}\right) \ge 1 - \alpha.
\end{equation}
The guarantee \eqref{eq:PoSIRequire} requires the coverage only asymptotically because we strive for a theory that relies on few assumptions, whereas finite sample coverage guarantees require strong assumptions.
The notation $\hat{M}$ for random models requires an elaboration of the sources of randomness envisioned here.
With the reproducibility crisis in mind, we cast a wide net for the sources of model randomness by adopting a broad frequentist perspective that includes not only datasets but data analysts as well. Conventional frequentism can be conceived as capturing the random nature of an observed dataset in the actual world by embedding it in a universe of possible worlds with datasets characterized by a joint probability distribution of the observations. We broaden the concept by pairing the random datasets with random data analysts who have varying data analytic preferences and backgrounds. This variability among data analysts may be called ``random researcher degrees of freedom'', a term that alludes to the freedoms we exercise when analyzing data in general, and when selecting covariates in a regression in particular. Some of the latter freedoms have been described and classified by \cite{Berk13}, Section~1: (1)~formal selection methods such as stepwise forward or backward selection, lasso-based selection using a criterion to select the penalty parameter, or all-subset search using a criterion such as $C_p$, AIC, BIC, RIC,~etc.; (2)~informal selection steps such as examination of residual plots or influence measures to judge acceptability of models; (3)~post hoc selection such as making substantive trade-offs of predictive viability versus cost of data collection. The waters get further muddied even in the case of formal selection methods (1) when ``informal meta-selection'' is exercised: trying out multiple formal selection methods, comparing them, and favoring some over others based on the results produced on the data at hand. This list of ``researcher degrees of freedom'' in model selection should make it evident that these freedoms are indeed exercised in practice, but in ways that should be called ``subjective'', namely, based on personal background, experience and motivations, as well as historic and institutional contexts. For these reasons it may be infeasible to capture the randomness contributed by data analysts' exercise of their freedoms in terms of stochastic models.
Following \cite{Berk13}, this infeasibility can be bypassed by adding a quantifier ``for all $\hat{M}$'' to the requirement \eqref{eq:PoSIRequire}, thereby capturing all possible ways in which selection may be performed. The added gain is that at a technical level the requirement \eqref{eq:PoSIRequire} permits a reduction to a problem of simultaneous inference.
We must, however, impose certain limits on the freedom of model selection: The set of potential regressors must be pre-specified before examining the data. For example, it is not permissible to initially declare the regressors $X_1,\ldots,X_p$ to be the universe for searching submodels, only to decide after looking at the data that one would also like to search among product interactions $X_j X_k$. The decision to include interactions in data-driven selection would have to be made before looking at the data. Thus data-driven expansion of the universe of regressors for selection is not covered by our framework.
Again following \cite{Berk13}, a curious aspect of the target of estimation has to be noted: $\beta_{n,\hat{M}}$ has become a random quantity with a random dimension $|\hat{M}|$, whereas for a fixed $M$ the target $\beta_{n,{M}}$ is a constant. The randomness of the target stems entirely from the data-driven selection $\hat{M}$, and this is the only randomness present: among all possible targets $\{ \beta_{n,M} : M \in \mathcal{M}_p \}$, one is randomly selected, namely,~$\beta_{n,\hat{M}}$. The associated estimate $\hat{\beta}_{n,\hat{M}}$ in the random model $\hat{M}$, in addition to its intrinsic variability, also incurs the randomness due to selection. On a technical level, note that the random target $\beta_{n,\hat{M}}$ for the random selection $\hat{M}$ may exist even if the estimate $\hat{\beta}_{n,\hat{M}}$ may not exist due to collinearity. This issue requires some care in Lemmas \ref{lem:UniformConsisL1} and \ref{lem:RateD1nD2n} below.
The inference criterion in \eqref{eq:PoSIRequire} can be decomposed by conditioning on the data-driven selections:
\begin{equation} \label{eq:MarginalConditional}
\mathbb{P}\left(\beta_{n,\hat{M}}\in\hat{\mathcal{R}}_{n,\hat{M}}\right) ~=~
\sum_{M \in \mathcal{M}_p}
\mathbb{P}\left(\beta_{n,M} \in \hat{\mathcal{R}}_{n,M} ~\bigg|~ \hat{M} = M\right)
\mathbb{P}\left(\hat{M} = M\right) .
\end{equation}
Plainly, if a guarantee of the form \eqref{eq:PoSIRequire} is available for the marginal probability on the left hand side, no guarantee can be deduced for the conditional probabilities given the random events $\hat{M} = M$ on the right hand side. The decomposition \eqref{eq:MarginalConditional} makes explicit the difference between our current marginal approach and the approach taken by \cite{Lee16}, \cite{Tibs16} and \cite{Tian16}, for example.
We mention briefly that \cite{Rin16} use a notion of ``honest confidence'' that asks for valid inference uniformly over a class of data-generating distributions, that is,
\[
\liminf_{n\to\infty}\,\inf_{\mathbb{P}\in\mathcal{P}_n}\mathbb{P}\left(\beta_{n,\hat{M}}\in\hat{\mathcal{R}}_{n,\hat{M}}\right) \ge 1 - \alpha,
\]
for some class of probability distributions $\mathcal{P}_n$ of the observations. This ``honesty'' holds for our results, too, due to the uniform validity of the multiplier bootstrap proved by \cite{Chern17}, but we will not discuss this further.
\subsection{Alternative Approaches}
There exists an ``obvious'' approach to valid post-selection inference based on sample splitting, as examined by \cite{Rin16}: split the data into two disjoint parts, then use one part for selecting a model $\hat{M}$ and the other part for inference in the selected model $\hat{M}$. If the two parts of the data are stochastically independent of each other, post-selection inferences will be valid. For independent observations \cite{Rin16} were able to provide very general and powerful results. Sample splitting has considerable appeal due to its universal applicability under independence of the two parts: it ``works'' for any type of model selection, formal or informal, as well as for any type of model being fitted. It has some drawbacks, too, an obvious one being the reduced sample sizes of the two parts, which increase the sampling variability of both the model selection stage and the inference stage. Another drawback is the required independence of the two parts, which makes it less obvious how to generalize sample splitting to dependent data. For customers of statistical inferences, it may also be somewhat disconcerting to realize that the splitting procedure incurs a level of artificial randomness and might have produced different results in the hands of another data analyst who would have used another random split. Reliance on random splits brings to our attention a greater concern that relates to the reproducibility crisis in the sciences: sample splitting introduces another ``researcher degree of freedom'', namely, the freedom to choose a particular split after having tried several splits. In practice it would seem extremely unrealistic to assume that data analysts will in fact commit themselves to using just one random split and not be tempted to try several. It could even be argued that using just one split would be irresponsible because it throws away a chance to learn about the stability of model selection and subsequent inferences under multiple splits. Having performed such a stability analysis, however, invalidates the post-selection inferences obtained from the splits because another level of selection arises: that of choosing one of the splits for final reporting. This would not be a problem if stability analysis showed that the same model is being selected in the vast majority of splits, but experience with regression shows that this is not the generic situation: in most regressions, there exist large numbers of submodels with nearly identical performance, making it likely that model selection will be highly variable between sample splits. In summary, while high in intuitive appeal, sample splitting opens up another Pandora's box of selection possibilities that may defeat the solution it was meant to provide.
A different type of post-selection guarantee is available from the approach of \cite{Lee16}, \cite{Tibs16} and \cite{Tian16} when model selection is of a pre-specified form such as lasso selection or stepwise forward selection. The inference guarantees they provide are conditional on the selected model. Their approach is ingeniously tailored to specific formal selection methods and takes advantage of their properties. It is, however, a model-trusting approach whose guarantees are finite-sample correct only under a Gaussian linear model with fixed covariates. For this reason and because so much conditioning is performed, it is unlikely that this approach enjoys much robustness to misspecification (see, for example, Section A.20 of \cite{MR3798003}).
By comparison, we strive here for model robustness by limiting ourselves to asymptotically correct coverage that is marginal rather than conditional, and by allowing covariates to be treated as random rather than fixed.
A larger point to be reiterated here is that tailoring post-selection inference to a specific formal selection method such as the lasso does not address the issue that data analysts may not limit themselves to just one formal selection method and nothing else. It may be more realistic to assume, as we do here, that they exercise broader liberties that include trying out multiple formal selection methods as well as informal model selection of various kinds. Providing and recommending valid post-selection inference that casts a wider net on selection methods may have a better chance of making an at least partial contribution to solving the reproducibility crisis in the sciences.
\section{Equivalence of Post-selection and Simultaneous Inference}\label{sec:Simultaneous}
The first step towards achieving the goal of constructing a set of confidence regions $\{\hat{\mathcal{R}}_{n,M}:\, M\in\mathcal{M}_p\}$ satisfying \eqref{eq:PoSIRequire} is to convert the post-selection inference problem into a simultaneous inference problem. This conversion is provided by Theorem \ref{thm:Uniform}, which parallels \cite{Berk13} but offers the generality needed here. The theorem is proved for finite samples, but a version using ``$\liminf$'' follows readily.
\begin{thm}\label{thm:Uniform}
For any set of confidence regions $\{\hat{\mathcal{R}}_{n,M}:\,M\in\mathcal{M}_p\}$ and $\alpha\in[0,1]$, the following two statements are equivalent:
\begin{itemize}
\item[$(1)$] The post-selection inference problem is solved, that is,
\[
\mathbb{P}\left(\beta_{n,\hat{M}}\in\hat{\mathcal{R}}_{n,\hat{M}}\right) \ge 1 - \alpha,
\]
for all data-dependent model selections satisfying $\mathbb{P}(\hat{M}\in\mathcal{M}_p) = 1$.
\item[$(2)$] The simultaneous inference problem over $M \!\in\! \mathcal{M}_p$ is solved, that is,
\[
\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p}\left\{\beta_{n,M}\in\hat{\mathcal{R}}_{n,M}\right\}\right) \ge 1 - \alpha.
\]
\end{itemize}
\end{thm}
\begin{proof}
Define for any fixed $M\!\in\!\mathcal{M}_p$ the coverage event $\mathcal{A}_M \!=\! \{\beta_{n,M}\in\hat{\mathcal{R}}_{n,M}\}$, and similarly $\mathcal{A}_{\hat{M}} \!=\! \{\beta_{n,\hat{M}} \in \hat{\mathcal{R}}_{n,\hat{M}}\}$. Note that $\mathcal{A}_{\hat{M}}$ is the event in $(1)$ and $\bigcap_{M\in\mathcal{M}_p} \mathcal{A}_M$ the event in (2).
$(2)\Rightarrow(1)$: It is sufficient to show that for any random selection procedure $\hat{M}$ we have
\[
\bigcap_{M\in\mathcal{M}_p} \mathcal{A}_M ~\subseteq~ \mathcal{A}_{\hat{M}}.
\]
Because $\hat{M}$ takes on values in $\mathcal{M}_p$ only, $\bigcup_{M' \in \mathcal{M}_p} \{ \hat{M} = M' \}$ is the whole sample space. Hence
\begin{align*}
\mathcal{A}_{\hat{M}} &= \bigcup_{M^{\prime}\in\mathcal{M}_p} \{\hat{M} = M^{\prime}\} \cap \mathcal{A}_{M^{\prime}} \\
&\supseteq \bigcup_{M^{\prime}\in\mathcal{M}_p} \{\hat{M} = M^{\prime}\} \cap \bigcap_{M\in\mathcal{M}_p}\mathcal{A}_M \\
&= \bigcap_{M\in\mathcal{M}_p}\mathcal{A}_M .
\end{align*}
$(1)\Rightarrow(2)$: To prove this implication, it is sufficient to construct a data-driven (hence random) selection procedure $\hat{M}$ that satisfies
\begin{equation} \label{eq:worst-Mhat}
\mathcal{A}_{\hat{M}} ~= \bigcap_{M\in\mathcal{M}_p} \mathcal{A}_M .
\end{equation}
This is achieved by letting $\hat{M}$ be any selection procedure that satisfies
\[
\hat{M} ~\in~ \argmin_{M\in\mathcal{M}_p}\, \mathbbm{1}\{\mathcal{A}_M\} ,
\]
where $\mathbbm{1}\{A\}$ denotes the indicator of event $A$. It follows that
\[
\mathbbm{1}\{\mathcal{A}_{\hat{M}}\} ~=~ \min_{M\in\mathcal{M}_p}\,\mathbbm{1}\{\mathcal{A}_M\} ,
\]
which is equivalent to \eqref{eq:worst-Mhat}. This completes the proof of $(1)\Rightarrow(2)$.
\end{proof}
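To make the adversarial construction concrete, the following Monte Carlo sketch (ours; the simulation design is an illustrative assumption) computes a marginal $95\%$ interval in each singleton model and lets $\hat{M}$ pick a model with a failing interval whenever one exists. By the argument above, the resulting post-selection coverage equals the simultaneous coverage, which falls well below the nominal level even though each marginal interval is valid:
\begin{verbatim}
# Monte Carlo illustration of the adversarial selection in the proof.
# With population Sigma = I_p, the target of the singleton model {j} is b[j].
import numpy as np

rng = np.random.default_rng(3)
n, p, z, reps = 200, 6, 1.96, 2000
b = np.arange(1.0, p + 1)
cover_fixed = np.zeros(p)
cover_adversarial = 0
for _ in range(reps):
    X = rng.normal(size=(n, p))
    Y = X @ b + rng.normal(size=n)
    hits = np.zeros(p, dtype=bool)
    for j in range(p):                 # 95% interval in each model M = {j}
        xj = X[:, j]
        bj = xj @ Y / (xj @ xj)
        resid = Y - bj * xj
        se = np.sqrt(resid @ resid / (n - 1)) / np.sqrt(xj @ xj)
        hits[j] = abs(bj - b[j]) <= z * se
    cover_fixed += hits
    cover_adversarial += hits.all()    # adversary finds a failing model if any
print(cover_fixed / reps)              # each entry approx 0.95
print(cover_adversarial / reps)        # well below 0.95
\end{verbatim}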
\begin{rem}\label{rem:theorem1-proof}
The proof makes no use of the regression context at all; it is merely about indexed sets/events $\mathcal{A}_M$ and random selections $\hat{M}$ of the indexes~$M$. The second part of the proof constructs an adversarial random selection procedure $\hat{M}$ that requires simultaneous coverage over all~$M$.
\end{rem}
\begin{rem}\label{rem:theorem1-meaning}
The theorem establishes the equivalence of family-wise simultaneous coverage and post-selection coverage allowing for arbitrary random selection. The argument, because it makes no use of the regression context, applies to any type of regression.
\end{rem}
\begin{rem}\label{rem:theorem1-Berk13}
Lemma~4.1 in \cite{Berk13} (``Significant triviality bound'') corresponding to Theorem~\ref{thm:Uniform} is much more intuitive because it is based on maxima over pivotal $t$-statistics rather than confidence regions. The gain in intuition, however, is purchased at a price: an injection of mathematically irrelevant detail. The bare-bones nature of the underlying structure is revealed by the above proof which does not even involve probability but set theory only.
\end{rem}
\begin{rem}\;(Inherent High-dimensionality)\label{rem:highdim}
Returning to regression, note that in view of Theorem~\ref{thm:Uniform}, valid post-selection inference is inherently a high-dimensional problem in the sense that the number of parameters subject to estimation and inference is large, indeed, often larger than the sample size. For illustration, consider a common regression setting where the number of covariates is $p = 10$ and the sample size is $n = 500$. Estimation and testing of the slopes in the full model seems unproblematic because there are 50 observations per parameter. Now, for the post-selection inference problem with all non-empty sub-models, there are $2^{p} - 1 = 1023$ vector parameters of varying dimensions, adding up to a total of $p2^{p-1} = 5120$ parameters in the various submodels, exceeding the sample size $n=500$ by a factor of ten and thus constituting an inference problem in the high-dimensional category.
\end{rem}
Theorem \ref{thm:Uniform} shows that in order to achieve universally valid post-selection inference, that is, inference that satisfies \eqref{eq:PoSIRequire} {\bf\em for all} data-driven selection procedures $\hat{M}$, it is necessary and sufficient to construct a set of confidence regions $\hat{\mathcal{R}}_{n,M}$ such that
\begin{equation}\label{eq:PoSIRequire2}
\liminf_{n\to\infty}\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p}\left\{\beta_{n,M} \in \hat{\mathcal{R}}_{n,M}\right\}\right) \ge 1 - \alpha.
\end{equation}
All of our solutions to the post-selection inference problem in this article are constructed to satisfy~\eqref{eq:PoSIRequire2}.
\section{An Approach to Post-Selection Inference}\label{sec:FirstStrat}
\subsection{Valid Confidence Regions} \label{sec:ValidConf}
Equipped with the required notation, we proceed to construct confidence regions $\hat{\mathcal{R}}_{n,M}$ for linear regression. From Equations~\eqref{eq:EmpObj} and~\eqref{eq:PopObj}, we see that the least squares estimator and target given in \eqref{eq:LeastSquares} can be written as
\begin{equation}\label{eq:ModifiedObjective}
\begin{split}
\hat{\beta}_{n,M} &= \argmin_{\theta\in\mathbb{R}^{|M|}}\, \left\{\theta^{\top}\hat{\Sigma}_n(M)\theta - 2\theta^{\top}\hat{\Gamma}_n(M)\right\},\quad \mbox{and}\\
\beta_{n,M} &= \argmin_{\theta\in\mathbb{R}^{|M|}}\, \left\{\theta^{\top}\Sigma_n(M)\theta - 2\theta^{\top}\Gamma_n(M)\right\}.
\end{split}
\end{equation}
The differences between the two objective functions in \eqref{eq:ModifiedObjective} can be controlled in terms of the two error norms below, related to the $\Sigma$ matrices and the $\Gamma$ vectors defined in \eqref{eq:MatrixVector}. Define therefore the estimation errors of $\hat{\Sigma}_n$ and $\hat{\Gamma}_n$ as follows:
\begin{equation}\label{eq:D1nD2n}
\begin{split}
\mathcal{D}_{n}^{\Sigma} &~:=~ \norm{\hat{\Sigma}_n - \Sigma_n}_{\infty} = \max_{M\in\mathcal{M}_p(2)}\norm{\hat{\Sigma}_n(M) - \Sigma_n(M)}_{\infty},
\\
\mathcal{D}_{n}^{\Gamma} &~:=~ \norm{\hat{\Gamma}_n - \Gamma_n}_{\infty} = \max_{M\in\mathcal{M}_p(1)}\norm{\hat{\Gamma}_n(M) - \Gamma_n(M)}_{\infty}.
\end{split}
\end{equation}
The equalities on the right are useful trivialities given here for later use: $\mathcal{M}_p(2)$ and $\mathcal{M}_p(1)$ are the sets of all models of sizes bounded by 2 and 1, respectively, where size 1 is sufficient for ``$\max$'' to reach all elements of the $\Gamma$ vectors, but size 2 is needed for ``$\max$'' to reach all off-diagonal elements of the $\Sigma$ matrices as well. Importantly, neither $\mathcal{D}_{n}^{\Sigma}$ nor $\mathcal{D}_{n}^{\Gamma}$ is a function of submodels~$M$.
The quantities $\mathcal{D}_{n}^{\Sigma}$ and $\mathcal{D}_{n}^{\Gamma}$ are statistics whose quantiles will play an essential role in the construction of the confidence regions to be defined next. In each submodel $M \in \mathcal{M}_p(p)$, we will construct for the parameter vector $\beta_{n,M}$ two confidence regions: The first satisfies {\em finite sample} guarantees at the cost of lesser transparency, whereas the second satisfies {\em asymptotic} guarantees with the benefit of greater simplicity. The motivations for the particular forms of these regions will become clear in the course of the elementary proofs of the theorems to follow. With these preliminary remarks in mind, we define
\begin{align}
\hat{\mathcal{R}}_{n,M} &:= \left\{\theta\in\mathbb{R}^{|M|}:\, \norm{\hat{\Sigma}_n(M)\left\{\hat{\beta}_{n,M} - \theta\right\}}_{\infty} \le C_{n}^{\Gamma}(\alpha) + C_{n}^{\Sigma}(\alpha)\norm{\theta}_1\right\},
\label{eq:FirstFinite}\\
\hat{\mathcal{R}}_{n,M}^{\dagger} &:= \left\{\theta\in\mathbb{R}^{|M|}:\, \norm{\hat{\Sigma}_n(M)\left\{\hat{\beta}_{n,M} - \theta\right\}}_{\infty} \le C_{n}^{\Gamma}(\alpha) + C_{n}^{\Sigma}(\alpha)\norm{\hat{\beta}_{n,M}}_1\right\},
\label{eq:FirstAsym}
\end{align}
where $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ are bivariate joint quantiles of $\mathcal{D}_{n}^{\Gamma}$ and $\mathcal{D}_{n}^{\Sigma}$ in \eqref{eq:D1nD2n}, that is,
\begin{equation}\label{eq:ConfidenceR}
\mathbb{P}\left(\mathcal{D}_{n}^{\Gamma} \le C_{n}^{\Gamma}(\alpha) ~~\textrm{and}~~ \mathcal{D}_{n}^{\Sigma} \le C_{n}^{\Sigma}(\alpha)\right) ~\ge~ 1 - \alpha.
\end{equation}
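Given quantile estimates, membership in either region reduces to a single inequality; a minimal Python sketch (ours, with hypothetical argument names) reads:
\begin{verbatim}
# Membership tests for the regions (eq:FirstFinite) and (eq:FirstAsym).
import numpy as np

def in_region(theta, beta_hat, Sigma_hat_M, C_gamma, C_sigma, dagger=False):
    """theta in R-hat_{n,M} (dagger=False) or R-hat-dagger_{n,M} (dagger=True)."""
    lhs = np.max(np.abs(Sigma_hat_M @ (beta_hat - theta)))
    budget = np.sum(np.abs(beta_hat)) if dagger else np.sum(np.abs(theta))
    return lhs <= C_gamma + C_sigma * budget
\end{verbatim}
The only difference is where the $\ell_1$ budget is spent: the finite sample region \eqref{eq:FirstFinite} evaluates it at the candidate $\theta$, whereas the asymptotic region \eqref{eq:FirstAsym} evaluates it at the estimate $\hat{\beta}_{n,M}$.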
\begin{rem}{ (Restriction of Models for Selection)}
The confidence regions defined in \eqref{eq:FirstFinite} and \eqref{eq:FirstAsym} do not take advantage of restricted model universes such as ``sparse model selection'' where $\hat{M} \in \mathcal{M}_p(k)$ searches only models of sizes up to $k~(< p)$. It might, however, be of practical interest to consider the post-selection inference problem when the set of models used in selection is indeed a strict subset of the set $\mathcal{M}_p(p)$ of all models. This can be accommodated with an obvious tweak whereby
\[
\mathcal{D}_{n}^{\Gamma}(\mathcal{M}_p) := \sup_{M\in\mathcal{M}_p}\norm{\hat{\Gamma}_n(M) - \Gamma_n(M)}_{\infty}\quad\mbox{and}\quad \mathcal{D}_{n}^{\Sigma}(\mathcal{M}_p) := \sup_{M\in\mathcal{M}_p}\norm{\hat{\Sigma}_n(M) - \Sigma_n(M)}_{\infty}
\]
become functions of the restricted model universe $\mathcal{M}_p ~(\subsetneq\mathcal{M}_p(p))$. Note, however, that according to~\eqref{eq:D1nD2n} we have $\mathcal{D}_{n}^{\Gamma}(\mathcal{M}_p) = \mathcal{D}_{n}^{\Gamma}$ as long as the model universe $\mathcal{M}_p$ includes all models of size one, and $\mathcal{D}_{n}^{\Sigma}(\mathcal{M}_p) = \mathcal{D}_{n}^{\Sigma}$ as long as $\mathcal{M}_p$ includes all models of size two. This is the case, for example, when ``sparse model selection'' is used, meaning $\mathcal{M}_p = \mathcal{M}_p(k)$ for $k < p$. Thus confidence regions of the form \eqref{eq:FirstFinite} do not gain from ``sparse model selection.'' This is so because the regions depend effectively only on marginal and bivariate properties of the observations~$(X_i,Y_i)$ and their distributions through $\Gamma_n$, $\hat{\Gamma}_n$, $\Sigma_n$ and $\hat{\Sigma}_n$.
\end{rem}
\medskip
\noindent Further observations on $(\mathcal{D}_n^{\Gamma}, \mathcal{D}_n^{\Sigma})$ and $(C_n^{\Gamma}(\alpha), C_n^{\Sigma}(\alpha))$:
\begin{itemize} \itemsep 0em
\item Bivariate quantiles are not unique: one may marginally increase one and decrease the other suitably while maintaining the bivariate coverage probability $1-\alpha$. Any choice of $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ that satisfies~\eqref{eq:ConfidenceR} is allowed.
\item These quantiles are not known and must be estimated from the data. A bootstrap procedure to estimate them is described in Section~\ref{sec:Comp}; a schematic sketch follows this list.
\item The estimation errors $\mathcal{D}_{n}^{\Gamma}$ and $\mathcal{D}_{n}^{\Sigma}$, being based on averages of quantities of dimensions $p \times 1$ and $p \times p$, respectively, converge by the law of large numbers to zero as $n\to\infty$ under mild conditions (see Lemma~\ref{lem:RateD1nD2n}). Therefore, $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ converge to zero as~$n\to\infty$.
\end{itemize}
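The following schematic sketch (ours; the procedure actually analyzed in this paper is the one of Section~\ref{sec:Comp}) indicates how such quantiles can be obtained with a Gaussian multiplier bootstrap. A Bonferroni split of $\alpha$ across the two statistics is one simple, slightly conservative way to turn marginal bootstrap quantiles into a pair satisfying the joint requirement \eqref{eq:ConfidenceR}:
\begin{verbatim}
# Schematic multiplier bootstrap for (C_n^Gamma(alpha), C_n^Sigma(alpha)).
# Memory is O(n p^2); intended only for moderate p.
import numpy as np

def bootstrap_quantiles(X, Y, alpha=0.05, B=1000, seed=None):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    G = X * Y[:, None]                          # rows X_i Y_i, shape (n, p)
    S = X[:, :, None] * X[:, None, :]           # rows X_i X_i^T, shape (n, p, p)
    G_c = G - G.mean(axis=0)                    # centered at Gamma_hat
    S_c = (S - S.mean(axis=0)).reshape(n, -1)   # centered at Sigma_hat
    d_gamma, d_sigma = np.empty(B), np.empty(B)
    for b in range(B):
        e = rng.normal(size=n)                  # Gaussian multipliers
        d_gamma[b] = np.max(np.abs(e @ G_c)) / n
        d_sigma[b] = np.max(np.abs(e @ S_c)) / n
    return (np.quantile(d_gamma, 1 - alpha / 2),
            np.quantile(d_sigma, 1 - alpha / 2))
\end{verbatim}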
\subsection{Validity of the Confidence Regions $\hat{\mathcal{R}}_{n,M}$}
We proceed to proving validity of the simultaneous inference guarantee \eqref{eq:PoSIRequire2}. This will be done in Theorem \ref{thm:Appr1.2} for the confidence regions $\hat{\mathcal{R}}_{n,M}$ where $M \in \mathcal{M}_p(p)$, and in Theorem \ref{thm:PoSIX} for the confidence regions $\hat{\mathcal{R}}^{\dagger}_{n,M}$ where $M \in \mathcal{M}_p(k)$ for some $k \le p$.
\begin{thm}\label{thm:Appr1.2}
The set of confidence regions $\{\hat{\mathcal{R}}_{n,M}:\,M\in\mathcal{M}_p(p)\}$ defined in \eqref{eq:FirstFinite} satisfies
\begin{equation}\label{eq:SimultaneousG1.2}
\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(p)}\left\{\beta_{n,M} \in \hat{\mathcal{R}}_{n,M}\right\}\right) \ge 1 - \alpha.
\end{equation}
Furthermore, for any random model $\hat{M}$ with $\mathbb{P}(\hat{M}\in\mathcal{M}_p(p)) = 1$, we have
\[
\mathbb{P}\left(\beta_{n,\hat{M}}\in\hat{\mathcal{R}}_{n,\hat{M}}\right) \ge 1 - \alpha.
\]
\end{thm}
As mentioned earlier, this theorem is non-asymptotic as it provides guarantees for finite samples. It is, however, not directly actionable because the bivariate quantiles used in the construction of the confidence regions need to be estimated. Hence actionable versions of these regions end up having only asymptotic guarantees as well.
\begin{proof}
The proof is surprisingly elementary and involves simple manipulation of the estimating equations. We start by subtracting the normal equations of the target from those of the estimates, see \eqref{eq:MatrixVector-OLS}. This holds for all $M \in \mathcal{M}_p(p)$:
\begin{align}
\hat{\Sigma}_n(M)\hat{\beta}_{n,M} - \Sigma_n(M)\beta_{n,M}
&~=~ \hat{\Gamma}_n(M) - \Gamma_n(M) .
\end{align}
Telescope the left side by subtracting and adding $\hat{\Sigma}_n(M) \beta_{n,M}$:
\begin{align}
\hat{\Sigma}_n(M) \left( \hat{\beta}_{n,M} - \beta_{n,M} \right)
+
\left( \hat{\Sigma}_n(M)- \Sigma_n(M) \right) \beta_{n,M}
&~=~ \hat{\Gamma}_n(M) - \Gamma_n(M) .
\end{align}
Move the second summand on the left to the right side of the equality, take the sup norm and apply the triangle inequality on the right side:
\begin{align}
\left\| \hat{\Sigma}_n(M) \left( \hat{\beta}_{n,M} - \beta_{n,M} \right) \right\|_\infty
&~\le~
\left\| \hat{\Gamma}_n(M) - \Gamma_n(M) \right\|_\infty +
\left\| \left( \hat{\Sigma}_n(M)- \Sigma_n(M) \right) \beta_{n,M} \right\|_\infty .
\end{align}
Applying the second inequality in \eqref{eq:MatrixVectorIneq} to the last term it follows that
\begin{equation}\label{eq:Conserv2}
\norm{\hat{\Sigma}_n(M)\left\{\hat{\beta}_{n,M}-\beta_{n,M}\right\}}_{\infty} \le \norm{\hat{\Gamma}_n(M) - \Gamma_n(M)}_{\infty} + \norm{\hat{\Sigma}_n(M) - \Sigma_n(M)}_{\infty}\norm{\beta_{n,M}}_{1}.
\end{equation}
Because $\hat{\Gamma}_n(M) - \Gamma_n(M)$ and $\hat{\Sigma}_n(M) - \Sigma_n(M)$ are a subvector and a submatrix of $\hat{\Gamma}_n - \Gamma_n$ and $\hat{\Sigma}_n - \Sigma_n$, respectively, this inequality implies
\begin{equation}\label{eq:MainConserv}
\norm{\hat{\Sigma}_n(M)\left\{\beta_{n,M} - \hat{\beta}_{n,M}\right\}}_{\infty} \le \norm{\hat{\Gamma}_n - \Gamma_n}_{\infty} + \norm{\hat{\Sigma}_n - \Sigma_n}_{\infty}\norm{\beta_{n,M}}_{1}.
\end{equation}
This inequality is deterministic and holds for any sample. It also holds for all $M \in \mathcal{M}_p(p)$. These facts allow us to take the intersection of the events \eqref{eq:MainConserv} over all submodels $M$ and transform it into a ``probability one'' statement. Using $\mathcal{D}_{n}^{\Gamma}$ and $\mathcal{D}_{n}^{\Sigma}$ defined in \eqref{eq:D1nD2n}, we have
\begin{equation}\label{eq:MainProb1}
\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(p)}\left\{\norm{\hat{\Sigma}_n(M)\left\{\beta_{n,M} - \hat{\beta}_{n,M}\right\}}_{\infty}
~\le~
\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_{1}\right\}\right)
~=~ 1.
\end{equation}
From the definitions of $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ in \eqref{eq:ConfidenceR} follows the required result \eqref{eq:SimultaneousG1.2}. The second result of post-selection guarantees for random models follows by an application of Theorem~\ref{thm:Uniform}.
\end{proof}
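Because inequality \eqref{eq:MainConserv} is deterministic, it can be verified directly on simulated data in which $\Sigma_n$ and $\Gamma_n$ are known exactly. The sketch below (ours, with an illustrative Gaussian design and a linear truth) does so for one submodel; the assertion never fails:
\begin{verbatim}
# Numerical check of the deterministic inequality (eq:MainConserv).
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 5
Sigma0 = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)   # population Sigma_n
b = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
X = rng.normal(size=(n, p)) @ np.linalg.cholesky(Sigma0).T
Y = X @ b + rng.normal(size=n)
Gamma0 = Sigma0 @ b                                # population Gamma_n

Sigma_hat, Gamma_hat = X.T @ X / n, X.T @ Y / n
M = [0, 2, 4]
beta_hat = np.linalg.solve(Sigma_hat[np.ix_(M, M)], Gamma_hat[M])
beta_target = np.linalg.solve(Sigma0[np.ix_(M, M)], Gamma0[M])

lhs = np.max(np.abs(Sigma_hat[np.ix_(M, M)] @ (beta_hat - beta_target)))
rhs = (np.max(np.abs(Gamma_hat - Gamma0))
       + np.max(np.abs(Sigma_hat - Sigma0)) * np.sum(np.abs(beta_target)))
assert lhs <= rhs
print(lhs, rhs)
\end{verbatim}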
\begin{rem}{ (Reach of the Validity Guarantee)}
It is interesting to note that the guarantee \eqref{eq:SimultaneousG1.2} in Theorem \ref{thm:Appr1.2} is valid for every sample size $n$ and any number of covariates $p$. In particular, $p \gg n$ and $p = \infty$ are covered without difficulty even though $\hat{\Sigma}_n(M)$ is necessarily singular for $|M| > n$. For this to make sense recall that for singular $\hat{\Sigma}_n(M)$ the confidence region $\hat{\mathcal{R}}_{n,M}$ simply contains a non-trivial affine subspace of~$\mathbb{R}^{|M|}$.
\end{rem}
\begin{rem} { (Estimation of Bivariate Quantiles)}
The finite sample guarantee \eqref{eq:SimultaneousG1.2} requires the bivariate quantiles $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ of $\mathcal{D}_{n}^{\Gamma}$ and $\mathcal{D}_{n}^{\Sigma}$, respectively, to satisfy \eqref{eq:ConfidenceR} for all $p, n \ge 1$. In general, these bivariate quantiles can only be estimated consistently in the asymptotic sense as explained in Section~\ref{sec:Comp}.
\end{rem}
\begin{rem}{ (Independence of Observations)}\label{rem:Indep}
For simplicity in the discussion above, we used the assumption of independence of the random vectors $(X_i, Y_i), 1\le i\le n$. Theorem \ref{thm:Appr1.2} holds without this assumption because no use of it was made in the proof. Indeed, the post-selection guarantee remains valid as long as $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ are valid quantiles in the sense of~\eqref{eq:ConfidenceR}.
\end{rem}
\subsection[Asymptotic Validity of dagger]{Asymptotic Validity of the Confidence Regions $\hat{\mathcal{R}}_{n,M}^{\dagger}$}
The confidence region $\hat{\mathcal{R}}_{n,M}$ is difficult to analyze in terms of its shape and its Lebesgue measure. (However, with a different parametrization of $\hat{\mathcal{R}}_{n,M}$, \cite{Belloni17} prove that this confidence region is a convex polyhedron; see Equation (42) of the supplement of \cite{Belloni17}.) Because of these difficulties we also prove asymptotic validity of more intuitive confidence regions of the form $\hat{\mathcal{R}}_{n,M}^{\dagger}$ defined in \eqref{eq:FirstAsym}. Because these regions depend on estimates $\hat{\beta}_{n,M}$ whose variability explodes under increasing collinearity, we need to control the minimum eigenvalue of the matrix $\Sigma_n(M)$ for models up to size $k$ to preclude too much collinearity in the limit:
\begin{equation}\label{eq:Lambdak}
\Lambda_n(k) := \min_{M\in\mathcal{M}_p(k)}\lambda_{\min}(\Sigma_n(M)).
\end{equation}
We then make use of the following assumption:
\begin{description}
\item[\namedlabel{eq:UniformConsis}{(A1)(k)}]
The estimation error $\mathcal{D}_{n}^{\Sigma}$ satisfies
\[
k\mathcal{D}_{n}^{\Sigma} = o_p\left(\Lambda_n(k)\right)\quad\mbox{as}\quad n\to\infty.
\]
\end{description}
This assumption is used for uniform consistency of the least squares estimator in the $\norm{\cdot}_1$-norm as in Lemma \ref{lem:UniformConsisL1}. The rate of convergence of $\mathcal{D}_{n}^{\Sigma}$ to zero implies a rate constraint on $k$. Here, as before, $k = k_n$ is allowed to be a sequence depending on $n$. As can be expected, the dependence structure between the random vectors $(X_i, Y_i), 1\le i\le n$ and their moments determine the rate at which $\mathcal{D}_{n}^{\Sigma}$ converges to zero; see Lemma \ref{lem:RateD1nD2n} for more details. The theorem is stated with this high-level assumption so that it applies more widely, in particular to various structural dependence settings for the observations. Note that assumption \ref{eq:UniformConsis} allows the minimum eigenvalue of $\Sigma_n$ to converge to zero or even be zero as $n\to\infty$ if $p = p_n$ changes with $n$.
Before proceeding to the proof that $\hat{\mathcal{R}}^{\dagger}_{n,M}$ are asymptotically valid post-selection confidence regions, we prove uniform-in-model consistency of $\hat{\beta}_{n,M}$ to $\beta_{n,M}$. See Appendix \ref{app:ProofLemmaL1} for a detailed proof. Also, see \cite{Uniform:Kuch18} for more results of this flavor.
\begin{lem}\label{lem:UniformConsisL1}
For all $k\ge 1$ satisfying $k\mathcal{D}_{n}^{\Sigma} \le \Lambda_n(k)$ and for all $M\in\mathcal{M}_p(k)$,
\begin{equation}\label{eq:MarginalL1}
\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_1 \le \frac{|M|\left(\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1\right)}{\Lambda_n(k) - k\mathcal{D}_{n}^{\Sigma}}.
\end{equation}
\end{lem}
The following theorem proves the validity of the simultaneous inference guarantee for~$\hat{\mathcal{R}}^{\dagger}_{n,M}$.
\begin{thm}\label{thm:PoSIX}
For every $1\le k\le p$ that satisfies \ref{eq:UniformConsis}, the confidence regions $\hat{\mathcal{R}}_{n,M}^{\dagger}$ defined in \eqref{eq:FirstAsym} satisfy
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M}\in\hat{\mathcal{R}}_{n,M}^{\dagger}\right\}\right) \ge 1 - \alpha.
\]
\end{thm}
\begin{proof}
The starting point of this proof is Equation \eqref{eq:MainProb1}. Under assumption \ref{eq:UniformConsis}, Lemma \ref{lem:UniformConsisL1} (inequality \eqref{eq:MarginalL1}) implies that for all $M\in\mathcal{M}_p(k)$,
\begin{align*}
\left|\frac{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\hat{\beta}_{n,M}}_1}{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1} - 1\right| &\le \frac{\mathcal{D}_{n}^{\Sigma}\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_1}{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1}\\
&\le \frac{\mathcal{D}_{n}^{\Sigma}}{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1}\cdot \frac{|M|\left\{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1\right\}}{\Lambda_n(k) - |M|\mathcal{D}_{n}^{\Sigma}}\\
&\le \frac{k\mathcal{D}_{n}^{\Sigma}}{\Lambda_n(k) - k\mathcal{D}_{n}^{\Sigma}}.
\end{align*}
Therefore, for $1\le k\le p$ satisfying assumption \ref{eq:UniformConsis},
\[
\sup_{M\in\mathcal{M}_p(k)}\left|\frac{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\hat{\beta}_{n,M}}_1}{\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1} - 1\right| \le \frac{k\mathcal{D}_{n}^{\Sigma}/\Lambda_n(k)}{1 - \left(k\mathcal{D}_{n}^{\Sigma}/\Lambda_n(k)\right)} = o_p(1).
\]
Hence,
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\norm{\hat{\Sigma}_n(M)\left\{\beta_{n,M} - \hat{\beta}_{n,M}\right\}}_{\infty} \le \mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\hat{\beta}_{n,M}}_{1}\right\}\right) = 1.
\]
The definition of $(C_{n}^{\Gamma}(\alpha), C_{n}^{\Sigma}(\alpha))$ in \eqref{eq:ConfidenceR} proves the required result.
\end{proof}
\subsection[Further Remarks]{Further Remarks on the Confidence Regions $\hat{\mathcal{R}}_{n,M}$ and $\hat{\mathcal{R}}^{\dagger}_{n,M}$}
\begin{rem}{ (Centering and Scaling)}\label{rem:Invariance}
The confidence regions $\hat{\mathcal{R}}_{n,M}$ and $\hat{\mathcal{R}}^{\dagger}_{n,M}$ are not equivariant with respect to linear transformation of covariates or the response. Equivariance is an important feature for practical interpretation. A simple way to obtain equivariance with respect to diagonal linear transformations of the random vectors would be to use linear regression with covariates centered and scaled to have sample mean zero and sample variance 1. Since the validity of confidence regions does not require independence, as mentioned in Remark \ref{rem:Indep}, this centering and scaling based on the data will not affect the post-selection guarantee as long as marginal means and variances are estimated consistently. This might also have an effect on the volume of the confidence regions, not in terms of rate but in terms of constants, since the intercept is no longer needed in $\norm{\beta_{n,M}}_1$. See Section \ref{sec:ProsAndCons} for more details.
\end{rem}
\begin{rem}{ (Shape of $\hat{\mathcal{R}}_{n,M}^{\dagger}$)}
The confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$ is a polyhedron, because it can be described by $2|M|$ linear inequalities (with random coefficients). More specifically, it is a parallelepiped because the inequalities come in pairs of parallel constraints. The Lebesgue measure of this confidence region is much easier to study than that of the region $\hat{\mathcal{R}}_{n,M}$ (see Proposition~\ref{lem:LebesgueMeasure} below).
\end{rem}
\begin{rem}{ (Comparison of $\hat{\mathcal{R}}_{n,M}$ and $\hat{\mathcal{R}}_{n,M}^{\dagger}$ in Testing)} As mentioned before, the shape of the confidence region $\hat{\mathcal{R}}_{n,M}$ is not easily described. There are, however, scenarios where the advantages of $\hat{\mathcal{R}}_{n,M}$ over $\hat{\mathcal{R}}_{n,M}^{\dagger}$ can be clearly understood. Consider the problem of significance testing, that is, $H_{0,M}:\,\beta_{n,M} = 0$. The level $\alpha$ test based on the confidence region $\hat{\mathcal{R}}_{n,M}$ rejects $H_{0,M}$ if
\begin{equation}\label{eq:RejectionRegionFinite}
\norm{\hat{\Sigma}_n(M)\hat{\beta}_{n,M}}_{\infty} \ge C_n^{\Gamma}(\alpha).
\end{equation}
By comparison, the level $\alpha$ test based on the confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$ rejects $H_{0,M}$ if
\begin{equation}\label{eq:RejectionRegionAsym}
\norm{\hat{\Sigma}_{n}(M)\hat{\beta}_{n,M}}_{\infty} \ge C_n^{\Gamma}(\alpha) + C_n^{\Sigma}(\alpha)\norm{\hat{\beta}_{n,M}}_1.
\end{equation}
Thus $\hat{\mathcal{R}}_{n,M}$ results in more rejections and hence greater power than $\hat{\mathcal{R}}_{n,M}^{\dagger}$ at the same level~$\alpha$. A similar argument holds even if the null hypothesis is changed to $H_0:\,\beta_{n,M} = \theta_0\in\mathbb{R}^{|M|}$ for some sparse~$\theta_0$.
\end{rem}
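In code, the two tests differ only in their cutoffs; a minimal sketch (ours, with hypothetical argument names):
\begin{verbatim}
# Rejection rules (eq:RejectionRegionFinite) and (eq:RejectionRegionAsym)
# for H_0: beta_{n,M} = 0; the dagger cutoff exceeds the plain cutoff by
# C_sigma * ||beta_hat||_1, hence the dagger test rejects less often.
import numpy as np

def reject_H0(Sigma_hat_M, beta_hat, C_gamma, C_sigma, dagger=False):
    stat = np.max(np.abs(Sigma_hat_M @ beta_hat))
    cutoff = C_gamma + (C_sigma * np.sum(np.abs(beta_hat)) if dagger else 0.0)
    return stat >= cutoff
\end{verbatim}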
\subsection[Rate Bounds and Lebesgue Measure]{Rate Bounds on $\mathcal{D}_n^{\Gamma}$, $\mathcal{D}_n^{\Sigma}$ and Lebesgue Measure of the Regions}
Before proceeding further with the study of the confidence regions, it might be useful to understand the rates at which $\mathcal{D}_{n}^{\Gamma}$ and $\mathcal{D}_{n}^{\Sigma}$ converge to zero under some assumptions on the initial random vectors $(X_i, Y_i), 1\le i\le n$. As mentioned in Remark \ref{rem:Indep}, the validity of the post-selection coverage guarantee does not require independence of the random vectors, and so a rate result under ``functional dependence'' is presented in Appendix \ref{app:Dependent}. Set $Z_i = (X_i^{\top}, Y_i)^{\top}$ for $1\le i\le n$ and define
\begin{equation}\label{eq:OmegaDef}
\hat{\Omega}_n := \frac{1}{n}\sum_{i=1}^n Z_iZ_i^{\top},\quad\mbox{and}\quad \Omega_n := \frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[Z_iZ_i^{\top}\right]\in\mathbb{R}^{(p+1)\times(p+1)}.
\end{equation}
Observe that
\[
\max\{\mathcal{D}_{n}^{\Gamma}, \mathcal{D}_{n}^{\Sigma}\} \le \norm{\hat{\Omega}_n - \Omega_n}_{\infty}.
\]
The following lemma from \cite{Uniform:Kuch18} proves a finite sample bound for the expected value of the maximum absolute value of $\hat{\Omega}_n - \Omega_n$. For this result, set for $\gamma > 0$ and any random variable $W$,
\[
\norm{W}_{\psi_{\gamma}} := \inf\left\{C > 0:\,\mathbb{E}\left[\psi_{\gamma}\left(\frac{|W|}{C}\right)\right] \le 1\right\},
\]
where $\psi_{\gamma}(x) = \exp(x^{\gamma}) - 1$ for $x \ge 0$. For $0 < \gamma < 1$, $\norm{\cdot}_{\psi_{\gamma}}$ is not a norm but is a quasi-norm. A random variable $W$ satisfying $\norm{W}_{\psi_{\gamma}} < \infty$ is called a sub-Weibull random variable of order $\gamma$. The special cases $\gamma = 1$ and $\gamma = 2$ correspond to the well-known classes of sub-exponential and sub-Gaussian random variables.
\begin{lem}\label{lem:RateD1nD2n}
Fix $n, p\ge 2$. Suppose the random vectors $Z_i, 1\le i\le n$ are independent and satisfy for some $0 < \gamma \le 2$
\begin{equation}\label{eq:MarginalPhi}
\max_{1\le i\le n}\max_{1\le j\le p+1}\norm{Z_i(j)}_{\psi_{\gamma}} \le K_{n,p},
\end{equation}
for some positive constant $K_{n,p}$. Then
\begin{equation}\label{eq:ExpMaxBd}
\mathbb{E}\left[\sqrt{n}\norm{\hat{\Omega}_n - \Omega_n}_{\infty}\right] \le C_{\gamma}\left\{A_{n,p}\sqrt{\log p} + {K_{n,p}^2}(\log p\log n)^{2/\gamma}n^{-1/2}\right\},
\end{equation}
and for all $\alpha\in(0,1]$,
\[
\max\{C_n^{\Gamma}(\alpha), C_n^{\Sigma}(\alpha)\} \le 7A_{n,p}\sqrt{\frac{\log\left(\frac{3}{\alpha}\right) + 2\log p}{n}} + \frac{C_{\gamma}K_{n,p}^2(\log(2n))^{2/\gamma}(\log\left(\frac{3}{\alpha}\right) + 2\log p)^{2/\gamma}}{n},
\]
where $C_{\gamma}$ is a positive constant, depending only on $\gamma$, that grows at the rate of $(1/\gamma)^{1/\gamma}$ as $\gamma\downarrow 0$, and
\[
A_{n,p}^2 := \max_{1\le j\le k\le p+1}\, \frac{1}{n}\sum_{i=1}^n \mbox{Var}\left(Z_i(j)Z_i(k)\right).
\]
\end{lem}
\begin{proof}
See Theorem 4.1 of \cite{KuchAbhi17}. A similar result holds for $\gamma > 2$ (the case in which the random variables have tails lighter than the Gaussian). See Theorem 3.4 of \cite{KuchAbhi17} for a result in this direction.
\end{proof}
The confidence regions $\hat{\mathcal{R}}_{n,M}^{\dagger}$ are simple parallelepipeds and can be seen as linear transformations of $\norm{\cdot}_{\infty}$-norm balls. Hence, their Lebesgue measures can be computed exactly. Since the confidence regions are valid over a large number of models, we present a relative Lebesgue measure result uniform over a set of models. For $A\subseteq\mathbb{R}^q$ with $q\ge 1$, let $\mbox{\textbf{Leb}}(A)$ denote the Lebesgue measure of $A$ with the measure supported on $\mathbb{R}^q$. For convenience, we do not use different notations for the Lebesgue measure for different $q\ge 1$.
\begin{prop}\label{lem:LebesgueMeasure}
For any $k\ge 1$ such that assumption \ref{eq:UniformConsis} is satisfied, the following uniform relative Lebesgue measure result holds:
\begin{equation}\label{eq:FistBoundLebesgueMeasure}
\sup_{M\in\mathcal{M}_p(k)}\frac{\mbox{\textbf{Leb}}\left(\hat{\mathcal{R}}_{n,M}^{\dagger}\right)\Lambda_n^{|M|}(k)}{(C_{n}^{\Gamma}(\alpha) + C_{n}^{\Sigma}(\alpha)\norm{\beta_{n,M}}_1)^{|M|}} = O_p(1).
\end{equation}
Hence, it can be said that $\mbox{\textbf{Leb}}(\hat{\mathcal{R}}_{n,M}^{\dagger}) = O_p\left((\mathcal{D}_{n}^{\Gamma} + \mathcal{D}_{n}^{\Sigma}\norm{\beta_{n,M}}_1)^{|M|}\right)$ uniformly for $M\in\mathcal{M}_p(k)$ if $\Lambda_n^{-1}(k) = O(1)$. Moreover, under the setting of Lemma \ref{lem:RateD1nD2n},
\begin{equation}\label{eq:LebesgueRandomX}
\mbox{\textbf{Leb}}\left(\hat{\mathcal{R}}_{n,M}^{\dagger}\right) = O_p\left(\sqrt{\frac{|M|\log p}{n}}\right)^{|M|}\quad\mbox{uniformly for }M\in\mathcal{M}_p(k),
\end{equation}
if $p$ and $n$ satisfy
\begin{equation}\label{eq:pNotIncrease}
(\log p)^{2/\gamma}(\log n)^{2/\gamma - 1/2} = o(n^{1/2}).
\end{equation}
\end{prop}
\begin{proof}
See Appendix \ref{app:LebesgueMeasure} for a detailed proof.
\end{proof}
\begin{rem}{ (Is the rate optimal?)}
Even though the problem of post-selection inference has been studied from various perspectives, as discussed in Section~\ref{sec:problem_formulation}, we do not know of a result regarding the optimal size of confidence regions in the post-selection problem. The following argument hints that the rate derived in~\eqref{eq:LebesgueRandomX} is indeed optimal. Since Theorem~\ref{thm:Uniform} shows that simultaneous inference has to be solved for post-selection guarantees, we need to infer about the set of ``parameters'' or functionals
\[
\{\beta_{n,M}(j):\,M\in\mathcal{M}_p(k)\}.
\]
The total number of functionals here is given by
\[
\sum_{\ell = 1}^{k} \binom{p}{\ell}\ell \le k\sum_{\ell = 1}^k \binom{p}{\ell} \le k\sum_{\ell = 1}^k\frac{p^{\ell}}{\ell!} \le k\sum_{\ell = 1}^k \frac{k^{\ell}}{\ell!}\left(\frac{p}{k}\right)^{\ell} \le k\left(\frac{ep}{k}\right)^k \le \left(\frac{2ep}{k}\right)^k.
\]
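This chain of bounds is elementary and easy to check numerically; for illustrative values of $p$ and $k$ (our choice),
\begin{verbatim}
from math import comb, exp

p, k = 200, 10
total = sum(comb(p, l) * l for l in range(1, k + 1))
bound = (2 * exp(1) * p / k) ** k
assert total <= bound   # holds with ample room here
\end{verbatim}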
Even assuming $\sqrt{n}(\hat{\beta}_{n,M}(j) - \beta_{n,M}(j))$ is exactly normal for all $M\in\mathcal{M}_p(k)$ and $j\in M$, we get that
\begin{equation}\label{eq:FullSupremum}
\max_{M\in\mathcal{M}_p(k)}\max_{j\in M}\,\left|\frac{\sqrt{n}\left(\hat{\beta}_{n,M}(j) - \beta_{n,M}(j)\right)}{\sigma_{n,M}(j)}\right| = O_p\left(\sqrt{{k\log(ep/k)}}\right).
\end{equation}
See, for example, Equation (4.3.1) of \cite{DeLaPena99} and the discussion following. Here $\sigma_{n,M}(j)$ represents the standard deviation of $\sqrt{n}(\hat{\beta}_{n,M}(j) - \beta_{n,M}(j))$. Note that the normality assumption implies that
\[
\norm{\sqrt{n}\left(\hat{\beta}_{n,M}(j) - \beta_{n,M}(j)\right)}_{\psi_2} < \infty,
\]
which is enough to apply Equation (4.3.1) of~\cite{DeLaPena99}.
It is possible to get a bound sharper than~\eqref{eq:FullSupremum} with model size dependent scaling. For instance, applying Proposition 4.3.1 of \cite{DeLaPena99}, we get
\begin{equation}\label{eq:SeparatedMaximum}
\max_{1\le \ell \le k}\frac{1}{\sqrt{\log(1 + \ell)}}\max_{M\in\mathcal{M}_p(\ell)\cap\mathcal{M}^c_p(\ell - 1)}\max_{j\in M}\frac{\left|\sqrt{n}\left(\hat{\beta}_{n,M}(j) - \beta_{n,M}(j)\right)/\sigma_{n,M}(j)\right|}{\sqrt{\ell\log(ep/\ell)}} = O_p(1).
\end{equation}
See Appendix~\ref{app:SeparatedMaximum} for a precise statement and proof. This hints that for any model $M$, the confidence region for $\beta_{n,M}$ in the context of simultaneous inference has Lebesgue measure of order $(\sqrt{|M|\log p/n})^{|M|}$. Note that the arguments above are all upper bounds and so they do not prove a lower bound for the Lebesgue measure. This suggests that the Lebesgue measure of our confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$ in~\eqref{eq:FirstAsym} is of optimal rate, in general.
\end{rem}
\subsection{Confidence Regions under Fixed Covariates} \label{rem:FixedX}
Since most of the post-selection inference literature as reviewed in Section \ref{sec:Notation} deals with the case of fixed covariates, it is of particular interest to understand how our confidence regions behave in this case. In our framework we can interpret fixed covariates as having point mass distributions at the observed value $X_i$, hence:
\[
\Sigma_n = \frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[X_iX_i^{\top}\right] = \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top} = \hat{\Sigma}_n.
\]
Therefore, $\mathcal{D}_{n}^{\Sigma} = \norm{\hat{\Sigma}_n - \Sigma_n}_{\infty} = 0$ and so, one may take $C_n^{\Sigma}(\alpha) = 0$. Also, note that in this case
\[
\beta_{n,M} = \left(\frac{1}{n}\sum_{i=1}^n X_i(M)X_i^{\top}(M)\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n X_i(M)\mathbb{E}\left[Y_i\right]\right).
\]
Hence, in case of fixed covariates,
\begin{align*}
\hat{\mathcal{R}}_{n,M} &= \hat{\mathcal{R}}_{n,M}^{\dagger} = \left\{\norm{\hat\Sigma_n(M)\left\{\hat{\beta}_{n,M} - \beta_{n,M}\right\}}_{\infty} \le C_{n}^{\Gamma}(\alpha)\right\}.
\end{align*}
Note that under fixed covariates, assumption \ref{eq:UniformConsis} is trivially satisfied since $\mathcal{D}_{n}^{\Sigma} = 0$. Thus, by Theorem \ref{thm:Appr1.2} (or \ref{thm:PoSIX}), finite sample valid post-selection inference holds for all model sizes in case of fixed covariates, without the model or distributional assumptions required in \cite{Berk13}.
A nice feature of the methodology proposed in \cite{Berk13} is that the inference is tight in the sense that there exists a model selection procedure such that the post-selection confidence interval has coverage exactly $1 - \alpha$. Even though the confidence region $\hat{\mathcal{R}}_{n,M}$ is derived under a more general framework, this tightness still holds in this generality. This can be easily seen by noting that
\begin{align*}
\sup_{M\in\mathcal{M}_p(p)}\norm{\Sigma_n(M)\left\{\hat{\beta}_{n,M} - \beta_{n,M}\right\}}_{\infty} &= \sup_{M\in\mathcal{M}_p(p)}\norm{\frac{1}{n}\sum_{i=1}^n X_i(M)(Y_i - \mathbb{E}\left[Y_i\right])}_{\infty}\\ &= \sup_{1\le j\le p}\left|\frac{1}{n}\sum_{i=1}^n X_i(j)(Y_i - \mathbb{E}\left[Y_i\right])\right| = \mathcal{D}_{n}^{\Gamma}.
\end{align*}
Take $\hat{M} = \{\hat{j}\}$, where
\[
\hat{j}\in\argmax_{1\le j\le p}\,\left|\frac{1}{n}\sum_{i=1}^n X_i(j)(Y_i - \mathbb{E}\left[Y_i\right])\right|.
\]
For this random model $\hat{M}$, the coverage of $\hat{\mathcal{R}}_{n,\hat{M}}$ is exactly equal to $(1 - \alpha)$.
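This exactness is easy to reproduce in a small simulation. The sketch below is a hypothetical setup (fixed Gaussian design, $\mathbb{E}[Y_i]$ known by construction, and $C_n^{\Gamma}(\alpha)$ obtained by Monte Carlo rather than by the bootstrap of Section~\ref{sec:Comp}); the coverage of $\hat{\mathcal{R}}_{n,\hat{M}}$ coincides with the event $\{\mathcal{D}_n^{\Gamma} \le C_n^{\Gamma}(\alpha)\}$ and is therefore $1-\alpha$ up to simulation error:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p, alpha, reps = 200, 50, 0.1, 5000

X = rng.standard_normal((n, p))   # fixed design, reused throughout
mu = np.zeros(n)                  # E[Y_i], known here by construction

D_gamma = np.empty(reps)
for r in range(reps):
    Y = mu + rng.standard_normal(n)
    D_gamma[r] = np.abs(X.T @ (Y - mu) / n).max()

C_gamma = np.quantile(D_gamma, 1 - alpha)
# For M-hat = {j-hat}, coverage of R_{n,M-hat} is exactly the event
# {D_n^Gamma <= C_gamma}, whatever the realized argmax j-hat is.
print(np.mean(D_gamma <= C_gamma))   # approximately 1 - alpha
\end{verbatim}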
\subsubsection[Fixed design and comparison with Berk et al. (2013)]{Lebesgue Measure and Comparison with \cite{Berk13}}
The rate bound~\eqref{eq:LebesgueRandomX} of Proposition~\ref{lem:LebesgueMeasure} is written explicitly for general random covariates. As shown in Section~\ref{rem:FixedX}, under the assumption of fixed covariates, $C_n^{\Sigma}(\alpha) = 0$ and $\hat{\mathcal{R}}_{n,M} = \hat{\mathcal{R}}_{n,M}^{\dagger}$. So, from the proof of Proposition~\ref{lem:LebesgueMeasure}, we get
\[
\mathbf{Leb}\left(\hat{\mathcal{R}}_{n,M}\right) \le |\Sigma_n(M)|^{-1}\left(2C_n^{\Gamma}(\alpha)\right)^{|M|},\quad\mbox{for all}\quad M\in\mathcal{M}_p(p).
\]
Under the setting of Lemma~\ref{lem:RateD1nD2n}, it follows that
\begin{equation}\label{eq:LebesgueFixedX}
\mathbf{Leb}\left(\hat{\mathcal{R}}_{n,M}\right) = O_p\left(|\Sigma_n(M)|^{-1}\right)\left(\sqrt{\frac{\log p}{n}}\right)^{|M|}.
\end{equation}
Clearly, this is much smaller than the size shown in~\eqref{eq:LebesgueRandomX} for general random covariates. One possible explanation for this discrepancy between fixed and random covariates is as follows: The confidence regions $\hat{\mathcal{R}}_{n,M}$~\eqref{eq:FirstFinite} and $\hat{\mathcal{R}}_{n,M}^{\dagger}$~\eqref{eq:FirstAsym} are written in terms of
\[
\hat{\Sigma}_n(M)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right).
\]
But in case of fixed covariates
\begin{equation}\label{eq:FixedXEstimating}
\hat{\Sigma}_n(M)\beta_{n,M} = \Gamma_n(M).
\end{equation}
So, even though the confidence regions are written for $\beta_{n,M}$, they can be thought of as confidence regions for the population ``parameter'' or functional $\Gamma_n(M)$. Also note that over all models $M\in\mathcal{M}_p(p)$, the set of all functionals $\Gamma_n(M)$ can be inferred just based on $\Gamma_n\in\mathbb{R}^p$. Since this is a $p$-dimensional functional, a confidence region with length $\sqrt{\log p/n}$ on each coordinate can be constructed. This explains why the smaller size in~\eqref{eq:LebesgueFixedX} is possible. In case of random covariates, \eqref{eq:FixedXEstimating} is not true and the randomness due to the covariates brings in some error.
It is striking and somewhat surprising that the smaller size~\eqref{eq:LebesgueFixedX} is possible. In our construction it is not just possible; the confidence region can be computed in polynomial time using the bootstrap discussed in Section~\ref{sec:Comp}. The other post-selection methods that can be used in this fixed covariate setting are those of~\cite{Berk13} and \cite{Bac16}. The confidence regions in both these works are based on the quantiles of the statistic
\begin{equation}\label{eq:MaxTStat}
\max_{M\in\mathcal{M}_p(k)}\left|\frac{\sqrt{n}\left(\hat{\beta}_{n,M}(j) - \beta_{n,M}(j)\right)}{\sigma_{n,M}(j)}\right|,
\end{equation}
for some ``variance'' $\sigma_{n,M}(j)$ (the choices of this quantity differ between the two works; for simplicity, we assume it is known). Based on the ``max-$|t|$'' statistic~\eqref{eq:MaxTStat}, a confidence region for $\beta_{n,M}$ is
\[
\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}} := \left\{\theta\in\mathbb{R}^{|M|}:\,\max_{1\le j\le |M|}\left|\frac{\sqrt{n}\left(\hat{\beta}_{n,M}(j) - \theta(j)\right)}{\sigma_{n,M}(j)}\right| \le C_{n,k}(\alpha)\right\},
\]
where $C_{n,k}(\alpha)$ is the quantile of the max-$|t|$ statistic. Under fixed covariates and Gaussian response, $\sqrt{n}\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)$ is normally distributed. As shown in~\eqref{eq:FullSupremum}, the max-$|t|$ statistic~\eqref{eq:MaxTStat} can be of the order $\sqrt{k\log(ep/k)}$. This implies that $C_{n,k}(\alpha)$ can be of the order $\sqrt{k\log(ep/k)}$ and so, the Lebesgue measure of the confidence region $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$ satisfies
\begin{equation}\label{eq:GeneralPoSIConstant}
\mbox{\textbf{Leb}}\left(\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}\right)= O_p\left(1\right)\left(\sqrt{\frac{k\log p}{n}}\right)^{|M|}\quad\mbox{uniformly over all}\quad M\in\mathcal{M}_p(k).
\end{equation}
This shows that the confidence region $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$ is worse than $\hat{\mathcal{R}}_{n,M}^{\dagger}$ in at least two aspects. Firstly, the size of the confidence region has an additional factor $\sqrt{k}$ that makes the region huge in comparison. Secondly, the Lebesgue measure does not scale with model size $|M|$. For example, after searching over the set of models $\mathcal{M}_p(k)$, if the analyst settles on a (random) model of size $1$, then the post-selection confidence region $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$ has a size that still scales with $k$. In sharp contrast, our confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$, even in the random design case, has size scaling only with the model $M$ (and does not depend on the largest model considered in selection process).
\subsubsection{Fixed Covariates with the Restricted Isometry Property (RIP)} \label{sec:RIP}
The rate bound~\eqref{eq:GeneralPoSIConstant} is derived using the fact that $C_{n,k}(\alpha)$ can in general be of the order $\sqrt{k\log(ep/k)}$. Under orthogonal designs ($\hat{\Sigma}_n = I_p$, the identity matrix in $\mathbb{R}^{p\times p}$), \cite{Berk13} proved that $C_{n,k}(\alpha) = O(\sqrt{\log p})$, and so the size of the region $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$ matches that of our confidence region. Since the construction of~\cite{Berk13} is based on normality, the exact size of the confidence region $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$ could be better than that of the region $\hat{\mathcal{R}}_{n,M}^{\dagger}$. It is also interesting to note that under an orthogonal design $\hat{\mathcal{R}}_{n,M}^{\dagger}$ is a rectangle with sides parallel to the coordinate axes and so is of the same shape as $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$. Recently, \cite{Bachoc18} showed that the orthogonal design restriction can be relaxed to RIP. A symmetric matrix $A\in\mathbb{R}^{p\times p}$ is said to satisfy RIP of order $k$ with RIP constant $\delta$ if for all $M\in\mathcal{M}_p(k)$ and for all $\theta\in\mathbb{R}^{|M|}$,
\[
(1 - \delta)\norm{\theta}^2 \le \theta^{\top}A(M)\theta \le (1 + \delta)\norm{\theta}^2.
\]
This is equivalent to
\begin{equation}\label{eq:RIPDef}
\max_{|M|\le k}\norm{A(M) - I_{|M|}}_{op} \le \delta,
\end{equation}
where $\norm{\cdot}_{op}$ denotes the operator norm. So, $\hat{\Sigma}_n$ satisfying RIP implies that all size-$k$ subsets of the covariates are nearly orthogonal. Theorem 3.3 of \cite{Bachoc18} proves that for fixed covariates and Gaussian response,
\[
C_{n,k}(\alpha) = O\left(\sqrt{\log p} + \delta c(\delta)\sqrt{k\log(ep/k)}\right),
\]
under the assumption that $\hat{\Sigma}_n$ is RIP of order $k$. Here $c(\delta)$ is an increasing non-negative function satisfying $c(\delta)\to1$ as $\delta\to 0$. So, under the RIP condition with $\delta\sqrt{k}\to 0$, the Lebesgue measure of the confidence region $\hat{\mathcal{R}}_{n,M}^{\mathtt{max-t}}$ again matches that of our confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$. It is also interesting to note that under the RIP condition for $\hat{\Sigma}_n$ with $\delta\to 0$, the confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$ is a parallelepiped with sides nearly parallel to the coordinate axes. More strikingly, the following result holds for fixed covariates:
\begin{prop}\label{prop:ConfUnderRIP}
Define the confidence region
\[
\hat{\mathcal{R}}_{n,M}^{\mathtt{RIP}} := \left\{\theta\in\mathbb{R}^{|M|}:\,\norm{\hat{\beta}_{n,M} - \theta}_{\infty} \le C_n^{\Gamma}(\alpha)\right\}.
\]
If, for any $1\le k\le p$, the matrix $\hat{\Sigma}_n$ satisfies the RIP condition of order $k$ with RIP constant $\delta$ and $\delta\sqrt{k} = o(1)$ as $n\to\infty$, then
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M}\in\hat{\mathcal{R}}_{n,M}^{\mathtt{RIP}}\right\}\right) \ge 1 - \alpha.
\]
\end{prop}
\begin{proof}
From the proof of Theorem~\ref{thm:Appr1.2}, we know that for all $M\in\mathcal{M}_p(k)$,
\[
\norm{\hat{\Sigma}_n(M)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)}_{\infty} \le \mathcal{D}_{n}^{\Gamma}.
\]
Observe that
\begin{align}
\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_{\infty} &\le \norm{\hat{\Sigma}_n(M)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)}_{\infty} + \norm{\left(\hat{\Sigma}_n(M) - I_{|M|}\right)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)}_{\infty}\nonumber\\
&\le \norm{\hat{\Sigma}_n(M)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)}_{\infty} + \norm{\left(\hat{\Sigma}_n(M) - I_{|M|}\right)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)}_{2}\nonumber\\
&\le \norm{\hat{\Sigma}_n(M)\left(\hat{\beta}_{n,M} - \beta_{n,M}\right)}_{\infty} + \delta\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_2\nonumber\\
&\le \mathcal{D}_{n}^{\Gamma} + \delta\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_2.\label{eq:LInfinityBound}
\end{align}
From Remark 4.3 of \cite{Uniform:Kuch18}, we get that
\begin{equation}\label{eq:Bachoc33}
\sup_{M\in\mathcal{M}_p(k)}\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_2 \le \frac{\sqrt{k}\mathcal{D}_n^{\Gamma}}{\Lambda_n(k)}.
\end{equation}
(Note that $\mathcal{D}_n^{\Gamma}$ is defined differently in the notation of~\cite{Uniform:Kuch18}, but it can be bounded as shown in Proposition 3.1 there, so that the bound above holds.) Therefore, combining~\eqref{eq:LInfinityBound} and~\eqref{eq:Bachoc33}, we get that for all $M\in\mathcal{M}_p(k)$,
\[
\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_{\infty} \le \mathcal{D}_n^{\Gamma}\left(1 + \frac{\delta\sqrt{k}}{\Lambda_n(k)}\right).
\]
From the RIP property~\eqref{eq:RIPDef}, $\Lambda_n(k) \ge 1 - \delta$ and so, for all $M\in\mathcal{M}_p(k)$,
\[
\norm{\hat{\beta}_{n,M} - \beta_{n,M}}_{\infty} \le \mathcal{D}_n^{\Gamma}\left(1 + \frac{\delta\sqrt{k}}{(1 - \delta)}\right).
\]
Therefore, under $\delta\sqrt{k}\to 0$ and using the definition of $C_n^{\Gamma}(\alpha)$,
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M} \in \hat{\mathcal{R}}_{n,M}^{\mathtt{RIP}}\right\}\right) \ge 1 - \alpha.
\]
This completes the proof.
\end{proof}
\begin{rem}{ (RIP is Restrictive)}
The Restricted Isometry Property is a well-known condition in the high-dimensional linear regression literature and is also known to be a very restrictive condition. It requires near-orthogonality of all small subsets of covariates, which is often not justified in practice.
\end{rem}
\begin{rem}{ (Generalization of the Result of~\cite{Bachoc18})}
Theorem 3.3 of \cite{Bachoc18} proves a bound on the expectation of $\sup\{\|\hat{\beta}_{n,M} - \beta_{n,M}\|_{\infty}:\,M\in\mathcal{M}_p(k)\}$ for fixed covariates and Gaussian response. Inequality~\eqref{eq:Bachoc33} above proves a deterministic inequality on this supremum quantity. This deterministic inequality along with Lemma~\ref{lem:RateD1nD2n} proves the rate bound in a more general setting.
\end{rem}
\section{Computation by Multiplier Bootstrap}\label{sec:Comp}
All the confidence regions defined in the previous section (and the ones to be defined in the forthcoming sections) depend only on the available data except for the (joint) quantiles $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$. Computation or estimation of the joint bivariate quantiles $C_{n}^{\Gamma}(\alpha)$ and $C_{n}^{\Sigma}(\alpha)$ is the most important component of an application of approach 1 for valid post-selection inference. In this section, we apply the high-dimensional central limit theorem and multiplier bootstrap for estimating these quantiles. We note that either a classical bootstrap or the recently popularized method of multiplier bootstrap works for estimating these joint quantiles in the setting described in Lemma \ref{lem:RateD1nD2n}. See \cite{Chern17} and \cite{Zhang14} for a detailed discussion. For simplicity, we will only describe the method of multiplier bootstrap for the case of independent random vectors. The discussion here applies the central limit theorem and multiplier bootstrap results proved in Appendix \ref{app:HDCLT}; we refer to \cite{Zhang14} for the dependent settings described in Appendix \ref{app:Dependent}.
Define vectors $W_i\in\mathbb{R}^q$ for $1\le i\le n$ containing
\begin{equation}\label{eq:WiDefinition}
\left(\left\{X_i(j)Y_i\right\}, 1\le j\le p;\; \left\{X_i(l)X_i(m)\right\}, 1\le l \le m\le p\right),
\end{equation}
with
\[
q = 2p + \frac{p(p-1)}{2} = O(p^2).
\]
As shown in Equation \eqref{eq:RectangleRepresentation} in Appendix~\ref{app:HDCLT}, for any $t_1, t_2\in\mathbb{R}^+\cup\{0\}$, the set
\[
\{\mathcal{D}_{n}^{\Gamma} \le t_1, \mathcal{D}_{n}^{\Sigma} \le t_2\},
\]
can be written as a rectangle in terms of
\[
S_n^W := \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{W_i - \mathbb{E}\left[W_i\right]\right\}.
\]
In the unified framework of linear regression, $(X_i, Y_i)$ are possibly non-identically distributed and so, $\mathbb{E}\left[W_i\right]$ are not all equal. Let $e_1, e_2, \ldots, e_n$ be independent standard normal random variables and define
\[
S_n^{eW} := \frac{1}{\sqrt{n}}\sum_{i=1}^n e_i(W_i - \bar{W}_n),\quad\mbox{where}\quad \bar{W}_n := \frac{1}{n}\sum_{i=1}^n W_i.
\]
Write $S_n^{eW}(\mathbf{I})$ for the first $p$ coordinates of $S_n^{eW}$ and $S_n^{eW}(\mathbf{II})$ for the remaining coordinates of $S_n^{eW}$. The following algorithm gives the pseudo-program for implementing the multiplier bootstrap; a code sketch of these steps is given after the list.
\begin{enumerate}
\item Generate $B_n$ random vectors from $N_n(0, I_n)$, with $I_n$ denoting the identity matrix of dimension $n$. Let these be denoted by $\{e_{i,j}:\, 1\le i\le n, 1\le j\le B_n\}$.
\item Compute the $j$-th replicate of $S_n^{eW}$ as
\[
S_{n,j}^{\star} := \frac{1}{\sqrt{n}}\sum_{i=1}^n e_{i,j}(W_i - \bar{W}_n),\quad\mbox{for}\quad 1\le j\le B_n.
\]
\item Find any two numbers $(\hat{C}_{1n}^{\Gamma}(\alpha), \hat{C}_{2n}^{\Sigma}(\alpha))$ such that
\[
\frac{1}{B_n}\sum_{j=1}^{B_n} \mathbbm{1}{\left\{\norm{S_{n,j}^{\star}(\mathbf{I})}_{\infty} \le \hat{C}_{1n}^{\Gamma}(\alpha), \norm{S_{n,j}^{\star}(\mathbf{II})}_{\infty} \le \hat{C}_{2n}^{\Sigma}(\alpha)\right\}} \ge 1 - \alpha.
\]
Here $\mathbbm{1}\{A\}$ is the indicator function of a set $A$.
\end{enumerate}
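To fix ideas, a minimal \texttt{numpy} sketch of steps 1--3 follows. It is our own illustration: $W_i$ is built as in \eqref{eq:WiDefinition}, the final division by $\sqrt{n}$ converts the replicates of $S_n^{eW}$ to the scale of $(\mathcal{D}_{n}^{\Gamma}, \mathcal{D}_{n}^{\Sigma})$ via the rectangle representation mentioned above, and the common inflation factor is just one simple (non-unique) way of picking a pair in step 3:
\begin{verbatim}
import numpy as np

def multiplier_bootstrap_quantiles(X, Y, alpha=0.1, B=500, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    iu = np.triu_indices(p)                  # pairs (l, m), l <= m
    XY = X * Y[:, None]                      # block I:  X_i(j) Y_i
    XX = (X[:, :, None] * X[:, None, :])[:, iu[0], iu[1]]  # block II
    W = np.hstack([XY, XX])                  # shape (n, q)
    Wc = W - W.mean(axis=0)                  # W_i - bar{W}_n

    S1, S2 = np.empty(B), np.empty(B)
    for j in range(B):
        e = rng.standard_normal(n)           # multipliers e_{i,j}
        S = e @ Wc / np.sqrt(n)              # replicate of S_n^{eW}
        S1[j] = np.abs(S[:p]).max()          # block I sup-norm
        S2[j] = np.abs(S[p:]).max()          # block II sup-norm

    c1, c2 = np.quantile(S1, 1 - alpha), np.quantile(S2, 1 - alpha)
    t = 1.0                                  # inflate until joint coverage
    while np.mean((S1 <= t * c1) & (S2 <= t * c2)) < 1 - alpha:
        t *= 1.01
    return t * c1 / np.sqrt(n), t * c2 / np.sqrt(n)
\end{verbatim}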
The following theorem proves the validity of the multiplier bootstrap under assumption \eqref{eq:MarginalPhi} of Lemma \ref{lem:RateD1nD2n}. Recall the definition of $W_i$ from~\eqref{eq:WiDefinition}. Note that we only prove asymptotic conservativeness instead of consistency, which does not hold; see Remark \ref{rem:Inconsistency} in Appendix \ref{app:HDCLT}. This inconsistency can be easily understood by noting that $\mathbb{E}\left[W_i\right]$ is replaced by the average $\bar{W}_n$, which is not a consistent estimator. Define
\[
L_{n,p} := \max_{1\le j\le q}\frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[\left|W_i(j) - \mathbb{E}\left[W_i(j)\right]\right|^3\right].
\]
\begin{thm}\label{thm:BootApplication}
Suppose $(X_i^{\top}, Y_i)^{\top}, 1\le i\le n$ are independent random vectors satisfying
\[
\min_{1\le j\le q}\frac{1}{n}\sum_{i=1}^n\mbox{Var}\left(W_i(j)\right) \ge B > 0,
\]
and
\begin{equation}\label{eq:MarginalPhiAlpha}
\max_{1\le i\le n}\max\left\{\max_{1\le j\le p}\norm{X_i(j)}_{\psi_{\gamma}}, \norm{Y_i}_{\psi_{\gamma}}\right\} \le K_{n,p}.
\end{equation}
If $n, p\ge 1$ are such that $$\max\left\{L_{n,p}^{-1}K_{n,p}\left(\log p\right)^{1 + 6/\gamma},\, L_{n,p}^2\log^7p,\, K_{n,p}^{6}\log q,\, K_{n,p}^4(\log p\log n)^{4/\gamma}\right\} = o(n),$$
then the multiplier bootstrap described above provides a conservative inference in the sense that
\[
\liminf_{n\to\infty}\inf_{t_1, t_2 \ge 0}\left(\mathbb{P}\left(\mathcal{D}_{n}^{\Gamma} \le t_1, \mathcal{D}_{n}^{\Sigma} \le t_2\right) - \mathbb{P}\left(\norm{S_{n}^{eW}(\mathbf{I})}_{\infty} \le t_1, \norm{S_{n}^{eW}(\mathbf{II})}_{\infty} \le t_2\big|\mathcal{Z}_n\right)\right) \ge 0,
\]
where $\mathcal{Z}_n := \{(X_i^{\top}, Y_i)^{\top}:\,1\le i\le n\}.$
\end{thm}
\begin{proof}
Theorems \ref{thm:MarginalPhiHDBEBound} and \ref{thm:BootstrapConsistency} (stated in Appendix \ref{app:HDCLT}) apply in the setting above since under assumption \eqref{eq:MarginalPhiAlpha},
\[
\max_{1\le i\le n}\max_{1\le j\le q}\norm{W_i(j)}_{\psi_{\gamma/2}} \le \max_{1\le i\le n}\max\left\{\max_{1\le j\le p}\norm{X_i(j)}_{\psi_{\gamma}}, \norm{Y_i}_{\psi_{\gamma}}\right\}^2 \le K_{n,p}^2.
\]
The rate restriction on $n$ and $p$ ensures that the bounds in Theorems \ref{thm:MarginalPhiHDBEBound} and \ref{thm:BootstrapConsistency} both converge to zero. See Remark~\ref{rem:Inconsistency} for the conservative property.
\end{proof}
By Theorem \ref{thm:BootApplication}, the estimates $(\hat{C}_{1n}^{\Gamma}(\alpha), \hat{C}_{2n}^{\Sigma}(\alpha))$ are consistent for some quantities that can replace the quantiles $(C_{n}^{\Gamma}(\alpha), C_{n}^{\Sigma}(\alpha))$ of $(\mathcal{D}_{n}^{\Gamma}, \mathcal{D}_{n}^{\Sigma})$ in \eqref{eq:ConfidenceR}.
\begin{rem}\,(Consistency under Identical Distributions)
Under the general framework of just independent random vectors without any assumption on the heterogeneity of the distributions, it is impossible to prove consistency, as shown in~\cite{Chap2:Kuch18}. The result of~\cite{Chap2:Kuch18} is proved under a much simpler setting but applies here too. If, in addition, the random vectors are assumed to be identically distributed, then it is easy to show from the results of Appendix~\ref{app:HDCLT} that the multiplier bootstrap described above is in fact consistent under the same assumptions of Theorem~\ref{thm:BootApplication}.
\end{rem}
\section{A Generalization for Linear Regression-type Problems}\label{sec:Generalization}
A simple generalization of Theorems \ref{thm:Appr1.2} and \ref{thm:PoSIX}, stated as Theorem \ref{thm:Appr1Gen}, allows valid post-selection inference in linear regression-type problems. The importance of this generalization can be seen from Remark \ref{rem:MissingRobust} and the discussion in Section \ref{sec:ProsAndCons}. To describe this generalization, consider the following setting. Let $\hat{\Sigma}_n^{\star}, \Sigma_n^{\star}$ be two $p\times p$ matrices and $\hat{\Gamma}_n^{\star}, \Gamma_n^{\star}$ be two $p$-dimensional vectors. Consider the error norms
\[
\mathcal{D}_{n}^{\Gamma{\star}} := \norm{\hat{\Gamma}_n^{\star} - \Gamma_n^{\star}}_{\infty}\quad\mbox{and}\quad \mathcal{D}_{n}^{\Sigma{\star}} := \norm{\hat{\Sigma}_n^{\star} - \Sigma_n^{\star}}_{\infty}.
\]
Define for every $M\in\mathcal{M}_p(p)$, the estimator and the corresponding target as
\begin{align*}
\hat{\xi}_{n,M} &:= \argmin_{\theta\in\mathbb{R}^{|M|}}\,\left\{\theta^{\top}\hat{\Sigma}_n^{\star}(M)\theta - 2\theta^{\top}\hat{\Gamma}_n^{\star}(M)\right\},\\
\xi_{n,M} &:= \argmin_{\theta\in\mathbb{R}^{|M|}}\,\left\{\theta^{\top}\Sigma_n^{\star}(M)\theta - 2\theta^{\top}\Gamma_n^{\star}(M)\right\}.
\end{align*}
Consider the confidence regions $\hat{\mathcal{R}}_{n,M}^{\star}$ and $\hat{\mathcal{R}}_{n,M}^{{\star}\dagger}$, analogous to the earlier ones, given by
\begin{align*}
\hat{\mathcal{R}}_{n,M}^{\star} &:= \left\{\theta\in\mathbb{R}^{|M|}:\,\norm{\hat{\Sigma}_n^{\star}(M)\left(\hat{\xi}_{n,M} - \theta\right)}_{\infty} \le C_{n}^{\Gamma{\star}}(\alpha) + C_{n}^{\Sigma{\star}}(\alpha)\norm{\theta}_1\right\},\\
\hat{\mathcal{R}}_{n,M}^{{\star}\dagger} &:= \left\{\theta\in\mathbb{R}^{|M|}:\,\norm{\hat{\Sigma}_n^{\star}(M)\left(\hat{\xi}_{n,M} - \theta\right)}_{\infty} \le C_{n}^{\Gamma{\star}}(\alpha) + C_{n}^{\Sigma{\star}}(\alpha)\norm{\hat{\xi}_{n,M}}_1\right\}.
\end{align*}
where $C_{n}^{\Gamma\star}(\alpha)$ and $C_{n}^{\Sigma\star}(\alpha)$ are constants (or joint quantiles) that satisfy,
\[
\mathbb{P}\left(\mathcal{D}_{n}^{\Gamma{\star}} \le C_{n}^{\Gamma\star}(\alpha)\quad\mbox{and}\quad \mathcal{D}_{n}^{\Sigma{\star}} \le C_{n}^{\Sigma\star}(\alpha)\right) \ge 1 - \alpha.
\]
Finally, let $\Lambda_n^{\star}(k) = \min\{\lambda_{\min}(\Sigma_n^{\star}(M)):\,M\in \mathcal{M}_p(k)\}$.
\begin{thm}\label{thm:Appr1Gen}
The set of confidence regions $\{\hat{\mathcal{R}}_{n,M}^{\star}:\,M\in\mathcal{M}_p(p)\}$ satisfies
\begin{equation}\label{eq:Simultaneous}
\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(p)}\left\{\xi_{n,M} \in \hat{\mathcal{R}}_{n,M}^{\star}\right\}\right) \ge 1 - \alpha,
\end{equation}
and, for any $1\le k\le p$ satisfying $k\mathcal{D}_{n}^{\Sigma{\star}} = o_p(\Lambda_n^{\star}(k)) = o_p(1)$,
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\xi_{n,M} \in \hat{\mathcal{R}}_{n,M}^{{\star}\dagger}\right\}\right) \ge 1 - \alpha.
\]
\end{thm}
\begin{proof}
The proof is exactly the same as for Theorems \ref{thm:Appr1.2} and \ref{thm:PoSIX}. The reader just has to realize that we did not use any structure of $\hat{\Sigma}_n, \hat{\Gamma}_n$ or that they are unbiased estimators of $\Sigma_n, \Gamma_n$ respectively, in the proof there.
\end{proof}
\begin{rem}\label{rem:MissingRobust}
The result in Theorem \ref{thm:Appr1Gen} allows one to deal with the case of missing data or outliers in the linear regression setting. In case of missing data or when the data is suspected of containing outliers, it might be more useful to use estimators of $\Sigma_n$ and $\Gamma_n$ that take this concern into account. For the case of missing data/errors-in-covariates/multiplicative noise, see \citet[Examples 1, 2 and 3]{Loh12} and references therein for estimators other than $\hat{\Sigma}_n$ and $\hat{\Gamma}_n$. For the case of outliers, either in the classical sense or in the adversarial corruption setting, see \cite{Chen13}. For correct usage of this theorem, it is crucial that the sub-matrix and sub-vector of $\Sigma_n^{\star}$ and $\Gamma_n^{\star}$, respectively, are used for sub-models. For example, if we use full covariate imputation in case of missing data, then the sub-model estimator should be based on a sub-matrix of this full covariate imputation. Also, see \citet[pages 11--12]{Uniform:Kuch18} for other settings of applicability.
\end{rem}
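As one concrete instance (our sketch of the estimators in \citet[Example 1]{Loh12}, assuming covariates missing completely at random with a known observation probability $\pi$), unbiased surrogates for $\Sigma_n$ and $\Gamma_n$ can be built from zero-filled data:
\begin{verbatim}
import numpy as np

def missing_data_moments(X_obs, mask, Y, pi):
    # X_obs: covariates with missing entries set to 0; mask: 0/1 pattern
    n, p = X_obs.shape
    Z = X_obs * mask
    S = Z.T @ Z / n
    Sigma_star = S / pi**2               # off-diagonals observed w.p. pi^2
    np.fill_diagonal(Sigma_star, np.diag(S) / pi)   # diagonals w.p. pi
    Gamma_star = Z.T @ Y / (n * pi)
    return Sigma_star, Gamma_star
\end{verbatim}
The sub-matrix rule of the preceding remark then amounts to extracting $\Sigma_n^{\star}(M)$ and $\Gamma_n^{\star}(M)$ from these full-dimensional surrogates.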
\section{Connection to High-dimensional Regression and Other Confidence Regions}\label{sec:HighDim}
The confidence regions $\hat{\mathcal{R}}_{n,M}$ and $\hat{\mathcal{R}}_{n,M}^{\dagger}$ have a very close connection to a well-known estimator in the high-dimensional linear regression literature called the Dantzig Selector proposed by \cite{Can07} and the closely related ones by \cite{Ros10} and \cite{Chen13}. These papers or methods are not related to post-selection inference and were proposed under a linear model assumption. The Dantzig selector estimates $\beta_0\in\mathbb{R}^p$, using observations $(X_i^{\top}, Y_i), 1\le i\le n$ that satisfy $Y_i = X_i^{\top}\beta_0 + \varepsilon_i$ for independent and identically distributed errors $\varepsilon_i$ with a mean zero normal distribution. \cite{Can07}, like many others, assumed fixed covariates $X_i, 1\le i\le n$. In our notation, the Dantzig selector is defined by the optimization problem
\[
\mbox{minimize}\quad\norm{\beta}_1\quad\mbox{subject to}\quad \norm{\hat{\Gamma}_n - \hat{\Sigma}_n\beta}_{\infty}\le \lambda_n,
\]
for some tuning parameter $\lambda_n$ that converges to zero as $n$ increases. To relate this to our confidence regions $\hat{\mathcal{R}}_{n,M}^{\dagger}$ (in \eqref{eq:FirstAsym}), note that for $\beta = \beta_0$ in the constraint set, the quantity inside the norm is $\hat{\Sigma}_n(\hat{\beta} - \beta_0)$, where $\hat{\beta}$ is any least squares estimator.
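For completeness, the Dantzig selector is a linear program: with the standard split $\beta = u - v$, $u, v \ge 0$, the sup-norm constraint becomes two systems of linear inequalities. A short \texttt{scipy} sketch (our illustration) makes this explicit:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(Gamma_hat, Sigma_hat, lam):
    # minimize ||beta||_1  s.t.  ||Gamma_hat - Sigma_hat beta||_inf <= lam
    p = len(Gamma_hat)
    c = np.ones(2 * p)                    # sum(u) + sum(v) = ||beta||_1
    A = np.vstack([np.hstack([ Sigma_hat, -Sigma_hat]),
                   np.hstack([-Sigma_hat,  Sigma_hat])])
    b = np.concatenate([lam + Gamma_hat, lam - Gamma_hat])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]          # beta = u - v
\end{verbatim}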
The estimator defined in \cite{Chen13} and \cite{Ros10} resembles
\[
\mbox{minimize}\quad\norm{\beta}_1\quad\mbox{subject to}\quad \norm{\hat{\Gamma}_n - \hat{\Sigma}_n\beta}_{\infty} \le \lambda_n + \delta_n\norm{\beta}_1,
\]
for some tuning parameters $\lambda_n$ and $\delta_n$ both converging to zero as $n$ increases. This constraint set corresponds to our confidence regions $\hat{\mathcal{R}}_{n,M}$ in Theorem \ref{thm:Appr1.2}.
The following theorem proves that there exist valid post-selection confidence regions that resemble the objective functions of the lasso (\cite{Tibs96}) and the sqrt-lasso (\cite{Belloni11}). The proof is deferred to Appendix \ref{app:LassoRegions}. These relations to the high-dimensional linear regression literature pose an interesting question: is there a deeper connection between post-selection inference and high-dimensional estimation? Other than the results in linear regression, we do not yet have an answer to this question.
Define for every $M\in\mathcal{M}_p(p)$, the confidence regions
\begin{align*}
\mathring{\mathcal{R}}_{n,M} &:= \left\{\theta\in\mathbb{R}^{|M|}:\,\right.\\
&\left.\hat{R}_n(\theta; M) \le \hat{R}_n(\hat{\beta}_{n,M}; M) + 2C_{n}^{\Gamma}(\alpha)\left[\norm{\hat{\beta}_{n,M}}_1 + \norm{\theta}_1\right] + C_{n}^{\Sigma}(\alpha)\left[\norm{\hat{\beta}_{n,M}}_1^2 + \norm{\theta}_1^2\right]\right\},\\
\mathring{\mathcal{R}}_{n,M}^{\dagger} &:= \left\{\theta\in\mathbb{R}^{|M|}:\,\hat{R}_n(\theta; M) \le \hat{R}_n(\hat{\beta}_{n,M}; M) + 4C_{n}^{\Gamma}(\alpha)\norm{\hat{\beta}_{n,M}}_1 + 2C_{n}^{\Sigma}(\alpha)\norm{\hat{\beta}_{n,M}}_1^2\right\},\\
\breve{\mathcal{R}}_{n,M} &:= \left\{\theta\in\mathbb{R}^{|M|}:\,\right.\\
&\quad\left.\hat{R}^{1/2}_n(\theta; M) \le \hat{R}_n^{1/2}(\hat{\beta}_{n,M}; M) + C_n^{1/2}(\alpha)\left(1 + \norm{\theta}_1\right)+ C_n^{1/2}(\alpha)\left(1 + \norm{\hat{\beta}_{n,M}}_1\right)\right\},\\
\breve{\mathcal{R}}_{n,M}^{\dagger} &:= \left\{\theta\in\mathbb{R}^{|M|}:\,\hat{R}^{1/2}_n(\theta; M) \le \hat{R}_n^{1/2}(\hat{\beta}_{n,M}; M) + 2C_n^{1/2}(\alpha)\left(1 + \norm{\hat{\beta}_{n,M}}_1\right)\right\},
\end{align*}
where $\hat{R}_n(\cdot; M)$ is the empirical least squares objective function defined in Equation \eqref{eq:EmpObj} and $C_n(\alpha)$ is the $(1 - \alpha)$-upper quantile of $\max\{\mathcal{D}_{n}^{\Gamma}, \mathcal{D}_{n}^{\Sigma}\}$.
\begin{thm}\label{thm:PoSILasso}
For any $n\ge 1, p\ge 1$, the following simultaneous inference guarantee holds:
\begin{align}
\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(p)}\left\{\beta_{n,M}\in\mathring{\mathcal{R}}_{n,M}\right\}\right) &\ge 1 - \alpha,\label{eq:LassoFinite}\\
\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(p)}\left\{\beta_{n,M}\in\breve{\mathcal{R}}_{n,M}\right\}\right) &\ge 1 - \alpha,\label{eq:SqrtLassoFinite}
\end{align}
and for any $1\le k\le p$ satisfying \ref{eq:UniformConsis}, we have
\begin{align}
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M}\in\mathring{\mathcal{R}}_{n,M}^{\dagger}\right\}\right) &\ge 1 - \alpha,\label{eq:LassoAsym}\\
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M}\in\breve{\mathcal{R}}_{n,M}^{\dagger}\right\}\right) &\ge 1 - \alpha.\label{eq:SqrtLassoAsym}
\end{align}
\end{thm}
\begin{rem}{ (Intersection of Confidence Regions)}\label{rem:Intersect}
All our confidence regions are based on deterministic inequalities, as mentioned before. This implies that the intersection of the confidence regions $\hat{\mathcal{R}}_{n,M}, \hat{\mathcal{R}}_{n,M}^{\dagger}$ and $\mathring{\mathcal{R}}_{n,M}$ provides valid simultaneous and post-selection inference. That is, for any $1\le k\le p$ such that \ref{eq:UniformConsis} holds,
\begin{equation}\label{eq:Intersect}
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M}\in\hat{\mathcal{R}}_{n,M}\cap\hat{\mathcal{R}}_{n,M}^{\dagger}\cap \mathring{\mathcal{R}}_{n,M}\right\}\right) \ge 1 - \alpha.
\end{equation}
To prove this, let $\hat{\mathcal{C}}_{n,M}, \hat{\mathcal{C}}_{n,M}^{\dagger}$ and $\mathring{\mathcal{C}}_{n,M}$ represent the confidence sets $\hat{\mathcal{R}}_{n,M}, \hat{\mathcal{R}}_{n,M}^{\dagger}$ and $\mathring{\mathcal{R}}_{n,M}$ with $(C_{n}^{\Gamma}(\alpha), C_{n}^{\Sigma}(\alpha))$ replaced by $(\mathcal{D}_{n}^{\Gamma}, \mathcal{D}_{n}^{\Sigma})$. From the proofs of Theorems \ref{thm:Appr1.2}, \ref{thm:PoSIX} and \ref{thm:PoSILasso}, it is clear that
\[
\liminf_{n\to\infty}\,\mathbb{P}\left(\bigcap_{M\in\mathcal{M}_p(k)}\left\{\beta_{n,M}\in\hat{\mathcal{C}}_{n,M}\cap\hat{\mathcal{C}}_{n,M}^{\dagger}\cap\mathring{\mathcal{C}}_{n,M}\right\}\right) = 1.
\]
So, by the definition of $(C_{n}^{\Gamma}(\alpha), C_{n}^{\Sigma}(\alpha))$ in \eqref{eq:ConfidenceR}, the result \eqref{eq:Intersect} follows. The intersection of the confidence regions is provably smaller. By the same argument, the confidence regions $\mathring{\mathcal{R}}_{n,M}^{\dagger}, \breve{\mathcal{R}}_{n,M},$ and $\breve{\mathcal{R}}_{n,M}^{\dagger}$ can also be included in the intersection.
\end{rem}
\begin{rem}{ (Usefulness of Lasso-based Regions)}
The confidence regions discussed in this section are given solely for the purpose of illustrating and making concrete the connection between post-selection inference and high-dimensional linear regression. All of these confidence regions are ellipsoidal and have larger volume than the confidence region $\hat{\mathcal{R}}_{n,M}^{\dagger}$ in terms of the rate. This result is not presented here but is not difficult to prove. This rate comparison is only asymptotic, and the intersection argument presented in Remark \ref{rem:Intersect} might still be useful in finite samples.
\end{rem}
\section{Discussion of the Current Approach}\label{sec:ProsAndCons}
The confidence regions $\hat{\mathcal{R}}_{n,M}$ and $\hat{\mathcal{R}}_{n,M}^{\dagger}$ constitute what we call approach 1. Various advantages and disadvantages of this approach are discussed in this section. Some of these comments also apply to the confidence regions mentioned in Theorem \ref{thm:Appr1Gen}.
The following are some of the advantages of this approach. The confidence regions are asymptotically valid for post-selection inference. This is the first work that provides valid post-selection inference in this generality. The confidence region for any model $M$ depends only on the joint quantiles $C_{n}^{\Gamma}(\alpha), C_{n}^{\Sigma}(\alpha)$ and the least squares linear regression estimator corresponding to the model $M$, $\hat{\beta}_{n,M}$. So, the computational complexity of these confidence regions is no more than a multiple of the computational complexity of $\hat{\beta}_{n,M}$. Computation of $C_{n}^{\Gamma}(\alpha), C_{n}^{\Sigma}(\alpha)$ takes no more than a polynomial (in $p$) number of operations, as shown in Section \ref{sec:Comp}. This computational complexity is in sharp contrast to the valid post-selection inference methods proposed by \cite{Berk13} or \cite{Bac16}, which essentially require solving for the least squares estimators of all models in order to construct a confidence region for any one model $M$. Therefore, implementation of their procedure is NP-hard, in general. The Lebesgue measure of the confidence regions $\hat{\mathcal{R}}_{n,M}^{\dagger}$ converges to zero at a rate that is the minimax rate in the high-dimensional linear regression literature. So, we suspect this might be the optimal rate here too, but at present we have neither a proof nor even an optimality framework. Note that the volume of the confidence region for model $M$ is computed with respect to the Lebesgue measure on $\mathbb{R}^{|M|}.$
There is one more advantage which might not seem like one at first glance. The confidence region for $\beta_{n,M}$ for a particular model does not require information on how many models are being used for model selection. The volume of the confidence region for $\beta_{n,M}$ depends only on the features of the model $M$ except for the quantiles. This implies that the confidence regions $\hat{\mathcal{R}}_{n,M}^{\dagger}, M\in\mathcal{M}_p(k)$ can often have much smaller volumes than the ones produced using the approach of \cite{Berk13}.
There are some disadvantages and some irksome factors associated with this approach. Firstly, notice that the confidence regions are not invariant under linear transformations of the observations, as discussed in Remark \ref{rem:Invariance}. Most sparsity-inducing methods in high-dimensional linear regression share this feature. Even from a naive point of view, invariance under a change of units for all variables involved is crucial for interpretation. This translates to invariance under diagonal linear transformations of the observations. Normalizing all the variables involved to have unit standard deviation is a commonly suggested method to attain invariance under diagonal transformations. Formally, this means one should use
\[
{X}_i^* = \left(\frac{X_i(1) - \bar{X}(1)}{s_n(1)}, \ldots, \frac{X_i(p) - \bar{X}(p)}{s_n(p)}\right),\quad {Y}_i^* = \frac{Y_i - \bar{Y}}{s_n(0)},
\]
in place of $(X_i, Y_i), 1\le i\le n$, where for $1\le j\le p$,
\[
\bar{X}(j) = \frac{1}{n}\sum_{i=1}^n X_i(j),\quad\mbox{and}\quad s_n^2(j) = \frac{1}{n}\sum_{i=1}^n \left[X_i(j) - \bar{X}(j)\right]^2,
\]
and
\[
\bar{Y} = \frac{1}{n}\sum_{i=1}^n Y_i,\quad\mbox{and}\quad s_{n}^2(0) = \frac{1}{n}\sum_{i=1}^n \left[Y_i - \bar{Y}\right]^2.
\]
This leads to the matrix and vector,
\[
\hat\Sigma_n^{\star} = \frac{1}{n}\sum_{i=1}^n {X}_i^*{X}_i^{*\top},\quad\mbox{and}\quad \hat\Gamma_n^{\star} = \frac{1}{n}\sum_{i=1}^n {X}_i^*{Y}_i^*.
\]
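In code, the standardized surrogates take two lines (a sketch; note that $\hat\Sigma_n^{\star}$ is the sample correlation matrix of the covariates and $\hat\Gamma_n^{\star}$ is the vector of sample correlations with the response):
\begin{verbatim}
import numpy as np

def standardized_moments(X, Y):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # X_i^*, 1/n variances
    Ys = (Y - Y.mean()) / Y.std()               # Y_i^*
    return Xs.T @ Xs / len(Y), Xs.T @ Ys / len(Y)
\end{verbatim}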
Note that the observations $({X}_i^*, {Y}_i^*), 1\le i\le n$ are not independent even if we start with independent observations $(X_i, Y_i)$. This is one of the reasons why we did not assume independence for Theorems \ref{thm:Appr1.2}, \ref{thm:PoSIX} and \ref{thm:Appr1Gen}. Of course, one needs to prove the rates for the error norms $\mathcal{D}_{n}^{\Gamma\star}$ and $\mathcal{D}_{n}^{\Sigma\star}$ in this case for an application of these results. We leave it to the reader to verify that the rates are exactly the same as those obtained in Lemma \ref{lem:RateD1nD2n} (one needs to use a Slutsky-type argument). See \cite{Cui16} for a similar derivation. We conjecture that much weaker conditions than those listed in Lemma \ref{lem:RateD1nD2n} are enough for these same rates; in particular, exponential moments are not required. See \citet[Theorem 5.3]{Geer14} for a result in this direction. Getting back to invariance under arbitrary linear transformations, we do not know if it is possible to come up with a procedure that retains the computational complexity of approach 1 while satisfying this invariance. We conjecture that this is not possible and that there is a strict trade-off between computational efficiency and affine invariance.
Another disadvantage of approach 1 is that it is mostly based on deterministic inequalities. As the reader may have suspected, this might lead to some conservativeness of the method. Note that non-identical distributions of the observations already introduce some conservativeness. The confidence regions $\hat{\mathcal{R}}_{n,M}$ and $\hat{\mathcal{R}}_{n,M}^{\dagger}$ cover $\beta_{n,M}$ with probability (at least) $1 - \alpha$ asymptotically. In particular, these confidence regions provide valid post-selection inference for the full vector $\beta_{n,M}$ instead of each of the coordinates of $\beta_{n,M}$. The region $\hat{\mathcal{R}}_{n,M}^{\dagger}$ is defined by a system of linear inequalities and hence the local inference (or inference on coordinates) for $\beta_{n,M}(j), 1\le j\le |M|$ can be obtained by solving a linear program. However, the resulting intervals can be very conservative for local inference guarantees.
We emphasize before ending this section that the main focus of approach 1 is validity and better computational complexity, not optimality. However, optimality does hold for our confidence regions in the case of fixed covariates, as discussed in Section \ref{rem:FixedX}. It should be understood that without validity there is no point in proving any kind of optimality property about the size of a confidence region.
\section{Conclusions and Future Directions}\label{sec:Conclusions}
In this paper, we have considered a computationally efficient approach to valid post-selection inference in linear regression under arbitrary data-driven method of variable selection. The approach here is very different from the other methodologies available in the literature and is based on the estimating equation of linear regression. At present it is not clear if this approach can be extended to other $M$-estimation problems. Since our confidence regions are based on deterministic inequalities, our results provide valid post-selection inference even under dependence and non-identically distributed random vectors. For this reason, the setting of the current work is the most general available in the literature of post-selection inference.
In addition to providing several valid confidence regions, we compare the Lebesgue measure of our confidence regions with the ones from~\cite{Berk13} and~\cite{Bac16}. This comparison shows that our confidence regions are much smaller (in terms of volume) in case of fixed (non-stochastic) covariates. In general, the volume of our confidence regions scales with the cardinality of model $\hat{M}$ chosen. This is a feature not available from the works of~\cite{Berk13} and~\cite{Bac16}. Note that the confidence regions from selective inference literature have infinite expected length as shown in~\cite{2018arXiv180301665K}.
An interesting finding of our work is the connection between post-selection confidence regions and high-dimensional sparsity inducing linear regression estimators. If this finding were to hold for other $M$-estimation problems, then computationally efficient valid post-selection confidence regions are possible in general.
\bibliographystyle{apalike}
The standard TASEP is a model of particles moving unidirectionally on a one-dimensional discrete lattice with a fixed hopping rate $\gamma$. Steric interaction excludes that more than one particle can occupy the same site, and overtaking is not allowed. The average TASEP current of particles $J$ as a function of particle density $\rho$ is given by the parabolic relation $J_{TASEP}(\rho)=\gamma\rho(1-\rho)$ in the large lattice limit. The presence of one or more static defects in the lattice, defined as ``slow'' sites with hopping rates smaller than $\gamma$, leads to a reduction of the maximal current flow~\cite{janowsky_exact_1994} and confers a truncated parabola shape to the current-density relation $J(\rho)$. The defects considered so far in the literature are static since the hopping rates associated with the slow sites are constant. \\
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.82\columnwidth]{fig1_new.pdf}
\caption{(Color online) Mechanism for a one-site dynamical defect: the site $s$ closes or opens with rates $f$ and $u$, respectively, represented by a gray square that obstructs the passage of particles. Particles hop at rate $\gamma$ and cannot enter the closed region. Moreover, if a particle occupies the site $s$, the closing of that site is forbidden.}
\label{TASEPrules}
\end{center}
\end{figure}
The periodic boundary case illustrates well the main phenomenology of the model. We consider therefore a ring of $L$ sites on which $N \le L$ particles are allowed to hop in one direction, fixing the overall density at $\rho = N/L$.
A region of size $d$ represents the dynamical defect and has an intrinsic two-state dynamics coupled to the presence of particles.
The defect can pass from the \textit{closed} state to the \textit{open} state with rate $u$. The inverse process occurs with rate $f$ only when all $d$ sites of the defect are empty.
Our results indicate that the single-site defect case $d=1$ (see Fig.~\ref{TASEPrules}) exhibits the main features of the model. Therefore, in this work we focus, for the sake of simplicity, on this case. The model proposed here can be applied to describe further systems; for instance, junctions in transport networks~\cite{embley_understanding_2009} can be thought of as particular dynamical defects with influx-dependent dynamics.
\section{Numerical Simulation Results}
We have performed continuous-time Monte Carlo simulations based on the Gillespie algorithm~\cite{gillespie_general_1976}. We have studied the model in a large parameter space taking several lattice sizes $L$ (up to $4000$) and varying the rates $\gamma, u, f$ in order to explore all the different dynamical regimes of the model for which we determined the characteristic timescales.
We numerically characterized the different regimes by computing the probability distribution function of the time lags $\tau$ between the passage of two consecutive particles on one site. In particular, we chose the site $s+1$, right after the defect site $s$. \\
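For reference, a minimal continuous-time implementation of these rules is sketched below in Python. It is our own illustration (the published data were produced with an independent code, and all parameter names are ours); it evolves the ring with the Gillespie algorithm and records the time lags $\tau$ at site $s+1$:
\begin{verbatim}
import numpy as np

def gillespie_tasep(L=500, N=250, gamma=1.0, u=0.1, f=1.0,
                    t_max=1e4, seed=0):
    # assumes 0 < N < L so that the total rate R stays positive
    rng = np.random.default_rng(seed)
    s = L // 2                               # defect site
    occ = np.zeros(L, dtype=bool)
    occ[rng.choice(L, N, replace=False)] = True
    open_, t, t_last, lags = True, 0.0, None, []
    idx = np.arange(L)
    while t < t_max:
        can = occ & ~occ[(idx + 1) % L]      # hops onto empty sites
        if not open_:
            can[(s - 1) % L] = False         # closed site cannot be entered
        hops = np.flatnonzero(can)
        r_hop = gamma * len(hops)
        r_flip = u if not open_ else (f if not occ[s] else 0.0)
        R = r_hop + r_flip
        t += rng.exponential(1.0 / R)
        if rng.random() < r_hop / R:         # a particle hops
            j = rng.choice(hops)
            occ[j], occ[(j + 1) % L] = False, True
            if (j + 1) % L == (s + 1) % L:   # passage at site s + 1
                if t_last is not None:
                    lags.append(t - t_last)
                t_last = t
        else:                                # defect opens or closes
            open_ = not open_
    return np.array(lags)
\end{verbatim}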
When the opening rate $u$ is the largest rate (i.e. when $u > f > \gamma$ or $u > \gamma > f$) we find a single timescale governed by the hopping rate $\gamma$. Particles essentially flow without a significant interaction with the defect.
Figure~\ref{fig:s1}, indeed, shows that the time lag distributions collapse if rescaled by the hopping rate $\gamma$. We naturally define such a behavior as the TASEP-like regime.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth]{ab.pdf}
\caption{(Color online) Time lag distributions rescaled by $\gamma$ in the TASEP-like limit for a system of size $L=500$ at density $\rho=0.5$ for (a) $u > \gamma > f$ and (b) $u > f > \gamma$ ($f=1$ in both cases).}
\label{fig:s1}
\end{center}
\end{figure}
Similarly, when the closing rate $f$ is the largest rate and particles can pass only one at a time during an opening event (i.e. when $f>u>\gamma$), the time distribution is characterized by a single characteristic timescale for long times.\\
In such a regime the dynamical defect acts like a static defect.
One can therefore think that particles are injected into the region after the defect with an effective constant entry rate $q$, in analogy to a slow site or static defect~\cite{Janowsky:1992p15963}. Such an effective rate can be approximated by the product of the hopping rate and the probability that the defect region is open:
\begin{equation}
q=\gamma \frac{u}{u+f} \;.
\end{equation}
We also note that the probability to move a particle onto the defect, when this is open, is $\gamma/(\gamma+f)$.\\
\indent If we denote by $\rho_{i}^{open}$ the probability to find a particle on site $i$ restricted to times when the defect is open, the current can be approximated by $J=q \rho_{s-1}^{open}(1-\rho_{s}^{open})$, where $s$ is the defect site. The typical passage time will then scale as $\hat{\tau}\approx 1/J$. By assuming, due to the blockage, that $\rho_{s-1}^{open}\approx 1$, the probability to find a particle on site $s$ when the region is open can be approximated by the probability to move a particle there, i.e. $\rho_{s}^{open}\sim\gamma/(\gamma+f)$.\\
This leads to an estimate of the typical timescale $\hat{\tau}$ in this regime
\begin{equation}
\hat{\tau}\approx \frac{(u+f)(\gamma+f)}{\gamma u f }\;,
\label{tau}
\end{equation}
which is in good agreement with the simulations, see Figure~\ref{fig:s2}.
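As a quick numerical illustration (rates chosen by us to lie in this regime),
\begin{verbatim}
gamma, u, f = 0.1, 1.0, 10.0   # slow-site regime: f > u > gamma
q = gamma * u / (u + f)                              # approx 0.0091
tau_hat = (u + f) * (gamma + f) / (gamma * u * f)    # approx 111
\end{verbatim}
so the typical passage time is roughly an order of magnitude above the bare hopping time $1/\gamma=10$.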
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.8\columnwidth]{d-bis.pdf}
\caption{(Color online) Time lag distributions in the slow site regime $f>u> \gamma$ rescaled by $\hat{\tau}$, see Eq.~(\ref{tau}). \label{fig:s2}}
\end{center}
\end{figure}
The situation changes in the other regimes, and in particular when $\gamma>f>u$. Here two timescales are present, manifesting as a sharp peak and a large tail in the time lag distributions (see Fig.~\ref{fig:s3}). The sharp peak is a signature of several particles passing through the blockage during the same opening event, while the large tails are given by the long waiting times to open the region (since $u$ is the smallest rate).
We therefore expect that the short time dynamics is regulated by the open, TASEP-like behavior of the system, and therefore described by the rate $1/\gamma$, while the long time dynamics is governed by $1/u$. This is consistent with the outcome of the simulations, as shown in Figure~\ref{fig:s3}. Two regimes are clearly distinguishable: in the first one the dynamics is the same as in the TASEP, and in the second one the distributions of time lags show an exponential tail. We call this regime the intermittent regime.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\columnwidth]{e+inset.pdf}
\caption{(Color online) Time lag distributions in the intermittent regime $\gamma>f>u$. The short time dynamics is governed by the hopping rate $\gamma$ (see inset) while the long time by the opening rate $u$.}
\label{fig:s3}
\end{center}
\end{figure}
\begin{figure}[h!tbp]
\begin{center}
\includegraphics[width=0.95\columnwidth]{kymo_new.jpg}
\caption{Kymographs of a system in the intermittent regime ($\rho=0.3$, $u=0.01$, $f=1$, and $\gamma=100$) for (a) a large lattice $L=1000$ and (b) a small lattice $L=250$. The defect is located in the middle of the lattice. \label{fig:s5}}
\end{center}
\end{figure}
The intermittent regime can be easily visualized by the use of so-called \textit{kymographs}, space-time representations of the evolution of the system. Figure~\ref{fig:s5}a shows that, for large sizes, there are always high-density (HD) and low-density (LD) regions before and after the defect, respectively. The site is often closed and does not allow the flow of particles; when it opens, several particles are able to pass. This creates an intermittent behavior of the current.
In the case of small systems, as depicted in Figure~\ref{fig:s5}b for a system of $L=250$, the unstable HD-LD front relaxes over the entire lattice length. This causes severe finite-size effects, which will be discussed in Section~\ref{finite-size}.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.72\textwidth]{diagram_and_plots_v3.pdf}
\caption{(Color online) Sketch of the phase diagram of the system in the $f/\gamma$ and $u/\gamma$ space. The different regimes are separated by dashed lines. The atypical current-density relations are shown for every regime: numerical simulations (closed circles) are compared with the finite-segment mean-field (FSMF) predictions (continuous lines) introduced in Sec.~\ref{sec::FSMF}. While away from the intermittent regime the agreement is satisfactory, deep in the intermittent (black) regime the FSMF prediction overestimates the current.}
\label{folding-diag-composite}
\end{center}
\end{figure*}
The phase diagram shown in Fig.~\ref{folding-diag-composite} summarizes the behavior of the system described so far. Each regime exhibits a different slope in a log-linear scale, defining characteristic timescales that we summarize as follows:
\begin{itemize}
\item[(i)]{\it The TASEP-like regime}: when $u>f>\gamma$ or $u>\gamma>f$, due to the rapid opening, the current is not affected by the defect. The system therefore behaves like a homogeneous TASEP and the only relevant timescale is $1/\gamma$ (white region in Fig.~\ref{folding-diag-composite});
\item[(ii)]{\it The static defectlike regime}: when $f>u>\gamma$ the dynamical defect acts like a static one, allowing the passage of particles in a way that can be described with a single effective rate $q<\gamma$ (top-right gray region in Fig.~\ref{folding-diag-composite}), similar to the model in~\cite{janowsky_exact_1994};
\item[(iii)]{\it The intermittent regime}: when $\gamma>f>u$ there are two strongly separated timescales, in contrast to the former cases. The short timescale $1/\gamma$ is the time separation between the passage of two consecutive particles during an opening event, while the long timescale $1/u$ corresponds to the time intervals during which the defect is closed and obstructs the passage of particles (black region in Fig.~\ref{folding-diag-composite}). This is a genuinely new regime, caused by the dynamics of the defect.
\end{itemize}
The remaining cases ($\gamma>u>f$ and $f>\gamma>u$) are crossover regimes between the intermittent and the homogeneous or static-defect TASEP-like behaviors, respectively.
These different regimes have a specific signature in the current versus density relation $J(\rho)$.
Similarly to the cases presented in the literature involving localized static defects in one-dimensional systems~\cite{janowsky_exact_1994}, numerical simulations show a reduction of the current with respect to the homogeneous TASEP; the current-density relation is a truncated parabola with a constant plateau value, see insets in Fig.~\ref{folding-diag-composite}. In large systems, when the opening rate $u$ decreases with respect to the other typical rates, the current-density plateau lowers to smaller values, occupying a wider interval of densities and merging with the TASEP parabola $\gamma\rho(1-\rho)$ only for $\rho \to 0$ and $\rho \to 1$. As in~\cite{janowsky_exact_1994}, clusters of particles form before the defect so that the average density profiles are usually characterized by a sharp separation between an HD and an LD phase. In regions of the parameter space where the dynamics of the defect does not play a major role, our model recovers the standard phenomenology of a homogeneous exclusion process and that of an exclusion process with a static defect. However, we shall show that when the dynamics of the defect induces the intermittent regime, the system is subject to severe finite-size effects which considerably modify the current versus density relation $J(\rho)$ and the average density profiles in space $\rho(x)$. We will present and discuss these finite-size effects in Section \ref{finite-size}.
\section{Mean-field approaches}
\subsection{Finite-segment mean-field}
\label{sec::FSMF}
To compute the current-density relationship we use a finite-segment mean-field (FSMF) approach~\cite{*chou_clustered_2004, *dong_understanding_2009}, which allows us to define effective entry and exit rates from the pair of sites ($s-1,s$), where the dynamics is treated exactly.
We illustrate the FSMF approximation as follows. We consider a ring of $L$ sites and a dynamical defect composed of one single site at position $s$. In the large-$L$ limit we can imagine splitting the system into three parts: a semi-infinite left sub-lattice, a semi-infinite right sub-lattice and, in between, a middle region composed of the defect site $s$ and site $s-1$. We then study the dynamics in the middle region, introducing effective rates for the injection and extraction of particles.
Denoting the dynamical defect site by $s$, the pair of sites ($s-1,s$) has six possible states, given that site $s-1$ can be occupied or empty, and site $s$ can be empty and open, empty and closed, or occupied and open (see Table~\ref{table_states}).
\begin{table}[h!]
\begin{center}
\begin{tabular}{l|l|l|l}
label & $s-1$ & $s$ & conformation of $s$\\
\hline
$x_{1}$ & 0 & 0 & open\\
$x_{2}$ & 0 & 0 & closed\\
$x_{3}$ & 1 & 0 & open\\
$x_{4}$ & 1 & 0 & closed\\
$x_{5}$ & 0 & 1 & open\\
$x_{6}$ & 1 & 1 & open
\end{tabular}
\end{center}
\caption{Available configurations of sites $s-1$ and $s$. \label{table_states}}
\end{table}
\indent When the current reaches its plateau the system is split into HD and LD phases, separated by the dynamical defect. Then, by imposing current continuity, the two densities have to be coupled as $\rho_{HD}=1-\rho_{LD}=1-\rho_{s}$ ($\rho_s$ denotes the density of the defect site, and we assume that site $s$ is in the LD phase).\\
The master equation
\begin{equation}
\label{eq:me}
\frac{\partial \vec{P}}{\partial t}=\mathbb{W}\vec{P}
\end{equation}
for the probability $\vec{P}$ to find the system in one of the six states $x_1, x_2,\ldots x_6$ is well defined once all the transition rates between all different states are specified. The transition matrix $\mathbb{W}$ between the states then reads
\begin{equation}
\small
\mathbb{W}=\left(
\begin{array}{cccccc}
-f-\gamma \hat{\rho}_{s} & u & 0 & 0 & \gamma \hat{\rho}_{s} & 0 \\
f & -u-\gamma \hat{\rho}_{s} & 0 & 0 & 0 & 0 \\
\gamma \hat{\rho}_{s} & 0 & -f-\gamma & u & 0 & \gamma \hat{\rho}_{s} \\
0 & \gamma \hat{\rho}_{s} & f & -u & 0 & 0 \\
0 & 0 & \gamma & 0 & -2 \gamma \hat{\rho}_{s} & 0 \\
0 & 0 & 0 & 0 & \gamma \hat{\rho}_{s} & -\gamma \hat{\rho}_{s}
\end{array}
\right) \;,
\end{equation}
\normalsize
where we used the notation $\hat{\rho}_{s}=1-\rho_{s}$. The matrix $\mathbb{W}$ therefore contains the effective transition rates as a function of the density on the defect $\rho_{s}$, assuming that a shock is located in the middle region between a phase at high density $1-\rho_{s}$ and a phase at low density $\rho_{s}$.
Solving the master equation (\ref{eq:me}) in the steady-state, we compute, as a function of $\rho_{s}, f, u$ and $\gamma$, the probability to find a particle on site $s$, which is by definition equal to the density $\rho_s$. One then obtains an expression for $\rho_{s}$ as a function of all other parameters (see the Appendix for more details).
The plateau current is then given by $J_{plateau}= \gamma \rho_{s} (1-\rho_{s})$. Note that this procedure can be extended to larger defects ($d>1$).
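As a cross-check of the procedure just described, the steady state of Eq.~(\ref{eq:me}) together with the self-consistency condition on $\rho_s$ can be solved numerically. The sketch below transcribes the matrix $\mathbb{W}$ literally and closes the loop with a damped fixed-point iteration; the iteration scheme (and its convergence) is our choice here, while the Appendix solves the same condition analytically.

\begin{verbatim}
# FSMF self-consistency: steady state of the 6x6 generator W(rho_s),
# iterated until P(x5)+P(x6), the occupancy of site s, equals rho_s.
import numpy as np

def W_matrix(rho_s, f, u, g):
    h = 1.0 - rho_s                      # \hat{rho}_s
    return np.array([
        [-f - g*h,  u,        0,      0,    g*h,     0  ],
        [ f,       -u - g*h,  0,      0,    0,       0  ],
        [ g*h,      0,       -f - g,  u,    0,       g*h],
        [ 0,        g*h,      f,     -u,    0,       0  ],
        [ 0,        0,        g,      0,   -2*g*h,   0  ],
        [ 0,        0,        0,      0,    g*h,    -g*h]])

def fsmf_plateau(f, u, g, tol=1e-12):
    rho_s = 0.25
    for _ in range(10000):
        W = W_matrix(rho_s, f, u, g)
        _, _, Vt = np.linalg.svd(W)      # kernel of W = steady state
        p = np.abs(Vt[-1]); p /= p.sum()
        new = p[4] + p[5]                # P(x5) + P(x6)
        if abs(new - rho_s) < tol:
            break
        rho_s = 0.5 * (rho_s + new)      # damped fixed-point update
    return g * rho_s * (1.0 - rho_s)     # J_plateau

print(fsmf_plateau(f=1.0, u=0.1, g=0.01))  # static-defect-like regime
\end{verbatim}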
To validate the FSMF approach, in Fig.~\ref{fig::FSMF}a we show the relative difference $\Delta J/J=(J-J_{FSMF})/J$ between simulations and the FSMF in all the different regimes of the phase diagram. Data are taken in the middle of the plateau of the current-density relation ($\rho=0.5$), also to avoid deviations due to finite-size effects (Section~\ref{finite-size}). This analysis provides
reasonably good results in the TASEP-like and the slow-site-like regimes. However, it also reveals that the FSMF approximation is not appropriate in the intermittent regime (circles and diamonds in Fig.~\ref{fig::FSMF}a, $u/f<1$).
\begin{figure}[htb]
\includegraphics[width=0.45 \textwidth]{Fig2-noiMF} \hspace{-5ex}
\caption{(Color online) (a) Relative difference $\Delta J/J=(J-J_{FSMF})/J$ between simulations and the FSMF approach. Circles and diamonds show the crossover from the intermittent regime to the TASEP-like regime, while squares and triangles show the crossover from an intermediate nonintermittent regime to the slow-site-like regime, see Fig.~\ref{folding-diag-composite}. (b) Average closing times $\langle t_f\rangle$ (in seconds) for different densities. Vertical dotted lines represent the boundaries between the different regimes crossed (see the dash-dotted line in the inset). For large $u/f$, $\tilde{\rho}\sim \rho$, while in the intermittent region at small $u/f$ the assumption $\tilde{\rho}\sim 0.5$ provides a good estimate of the closing times. }
\label{fig::FSMF}
\end{figure}
\subsection{Intermittent mean-field}
\label{sec::intermittent}
We therefore turn our attention to the intermittent regime, where the FSMF approach fails. We start by analyzing the average time between consecutive opening and closing events, $\langle t_{f} \rangle$, Fig.~\ref{fig::FSMF}b, i.e., the average time the folding region remains open, allowing the passage of particles, before folding again.
When the coupling between the conformation of the defect and the presence of particles is weak, i.e., $u$ is the largest rate, the average time scales as $\langle t_{f}\rangle \sim f^{-1}$. However, when the coupling is strong, i.e., $\gamma$ is the largest rate, the former expression is corrected as $\langle t_{f}\rangle=[f(1-\tilde{\rho})]^{-1}$, where $\tilde{\rho}$ is the probability to find a particle on the defect site given that the site $s$ is open (previously also called $\rho_s^{open}$). This probability can be approximated in some limiting cases: in the TASEP-like regime, $u$ is the fastest rate and hence the defect is almost always open. In this case $\tilde{\rho}\sim \rho$ and the current is very close to the one predicted by the pure TASEP.
In contrast, in the intermittent regime $J(\rho)$ exhibits a plateau (see the insets in Fig.~\ref{folding-diag-composite}), within which the opening-closing dynamics is independent of the total density. In this case particles are blocked for long times behind the closed defect that occasionally opens, thus allowing for a collective passage of particles, as confirmed by the kymographs (see Fig.~\ref{fig:s5}). Just after the opening of the defect, $\tilde{\rho}$ can be estimated from a simple mean-field approach: $d\tilde{\rho}/dt=\gamma \rho_{s-1}^{open} (1-\tilde{\rho})-\gamma\tilde{\rho}(1-\rho_{s+1}^{open})$, where $\rho_{s-1}^{open}$ and $\rho_{s+1}^{open}$ denote the density of particles before and after the defect, respectively. Approximating $\rho_{s-1}^{open}$ by 1 and $\rho_{s+1}^{open}$ by 0, one gets the stationary value $\tilde{\rho}=0.5$, in accordance with numerical simulations: Fig.~\ref{fig::FSMF}b shows good agreement with the closing times $\langle t_f \rangle$ in the intermittent regime when they are estimated with $\tilde{\rho}=0.5$. \\
\indent Hence, an intermittent dynamics is established so that no net flow passes through the defect for a time of order $1/u$ and then a large current $J_{open}=\gamma \tilde{\rho}(1-\tilde{\rho})$ flows for a time $\langle t_{f}\rangle$.
This gives an average plateau current
\begin{equation}
J_{IMF}=\gamma \tilde{\rho}(1-\tilde{\rho})\dfrac{u/f}{1-\tilde{\rho}+u/f}
\label{eq:iMF}
\end{equation}
which we refer to as intermittent mean-field (IMF) current.
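Equation~(\ref{eq:iMF}) is a simple duty-cycle average and can be transcribed directly; the snippet below is a short sketch of it, with $\tilde{\rho}$ left as a parameter so that both the deep-intermittent estimate $\tilde{\rho}=0.5$ and the small-system choice $\tilde{\rho}=\rho$ used later can be plugged in.

\begin{verbatim}
# Eq. (iMF): the current J_open = gamma*rho_t*(1-rho_t) flows for an
# average open time <t_f> = 1/(f(1-rho_t)); nothing flows during the
# closed periods of average duration 1/u.
def j_imf(gamma, f, u, rho_t=0.5):
    duty = (u / f) / (1.0 - rho_t + u / f)   # open fraction of time
    return gamma * rho_t * (1.0 - rho_t) * duty

print(j_imf(gamma=100.0, f=1.0, u=0.01))     # deep intermittent regime
\end{verbatim}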
Figure~\ref{phase}a shows that deep in the intermittent regime (where $\tilde\rho\sim 0.5$) this approach provides a substantial improvement over the FSMF approach in predicting the current in the plateau.
\section{Finite-size Effects}
\label{finite-size}
As previously stated, the system presents strong finite-size effects, remarkably pronounced in the intermittent regime. Here the current-density relation of small lattices becomes asymmetric (Fig.~\ref{phase}a): the current is reduced at small densities whereas it is enhanced for large densities. Figure~\ref{phase}a also shows that the value of $J$ in the plateau does not depend on the system size. Moreover, the plateau disappears for very small systems (Fig.~\ref{phase}b) while the current-density profile remains asymmetric.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{fig3}
\caption{(Color online) (a) Deep in the intermittent regime the FSMF calculation (dotted line) is unable to predict the plateau value, while the IMF with $\tilde{\rho}=0.5$ (dashed line) does. Moreover, in the intermittent regime truncated parabolas are found only in the large-$L$ limit: smaller systems show current reduction (enhancement) at low (high) densities. The parameters used are $f=1s^{-1}, u=0.01s^{-1}$ and $\gamma=100s^{-1}$. (b) For very small systems (here $L=16$) the IMF calculation with $\tilde{\rho}=\rho$ (dashed lines) correctly approximates the numerical $J(\rho)$ relation ($f=1s^{-1}, \gamma=100s^{-1}$).}
\label{phase}
\end{figure}
Such a behavior can be rationalized as follows: whereas in the thermodynamic limit the probability to find a particle on the defect site, when open, is well approximated by $\tilde{\rho}\sim 0.5$, in very small systems (e.g. $L=16$ in Fig.~\ref{phase}b) the unstable HD-LD interface moving from the dynamical defect quickly relaxes over the whole system to the homogeneous TASEP density. Therefore, when the defect allows the passage of particles, all the sites can be considered to be identical and the density on the defect $\tilde \rho \sim \rho$.
Thus, identifying $\tilde{\rho}$ with $\rho$ in Eq.~(\ref{eq:iMF}) gives good quantitative agreement, shown in Fig.~\ref{phase}b. Moreover, this reasoning allows the determination of the lower boundary for the current versus density relation $J_{L}(\rho)$ for very small system sizes $L$, the upper boundary $J_{\infty}(\rho)$ being the truncated parabola profile predicted by the IMF theory.
For intermediate system sizes $L$ the situation is again different. Remarkably, the current for $\rho>0.5$ is enhanced compared to the current obtained in large systems, whereas it is reduced for $\rho<0.5$ (Fig.~\ref{phase}a).
This effect can be understood by analyzing the relaxation dynamics after an opening event by integrating the system of $L$ coupled ordinary
differential equations (ODEs) describing the occupancy of each site in the lattice $\rho_i$.
In this respect, we examine the relaxation of a TASEP system with initial conditions representing particles queuing behind the closed region and waiting for its opening.
Before the opening, a HD region forms before the defect site $s$, while starting from the same defect site the system is at LD.
Therefore we want to study the unstable HD-LD front moving through the folding site during the time interval between two folding events. As in the situation represented in Fig.~\ref{fig:s5}, we imagine starting with $N=L\rho$ particles queued behind the closed site $s$.
At time $t=0$ the site opens, letting the particles flow and relaxing the HD-LD inhomogeneity. We then focus on how the probability $\tilde\rho$ that a particle occupies the site $s$ (impeding its closing) evolves in time between two closing events. In order to do so, we
write a system of $L$ coupled differential equations
\begin{equation}
\frac{d\rho_i}{dt} = \gamma \rho_{i-1} (1-\rho_i) - \gamma \rho_{i} (1-\rho_{i+1}), \qquad i=1,\dots,L \,,
\label{eq:ODE}
\end{equation}
with the prescription $\rho_0\equiv \rho_L$ (periodic boundary conditions), and fixing the initial conditions as described above, i.e. $\rho_i=0$, $\forall \, i$ excluding the $N$ sites preceding the defect for which $\rho_{s-j}=1$, $j=1,\dots N$, where $s$ is the defect site and $N= L \rho$ is the total number of particles (here $\gamma$ is arbitrarily fixed to 1 and defines the unit of time).\\
\indent We then imagine opening the dynamical defect and integrating the system numerically to observe the evolution, with time, of the occupancy $\tilde{\rho}$ of the site $s$ after the opening event. Even though here we do not consider successive closing events (we are only interested in the relaxation dynamics), in the intermittent regime we consider that, before any opening of the site $s$, the system lies in a situation very well approximated by the previous initial conditions, as supported also by the kymographs in Fig.~\ref{fig:s5}.
The results, shown in Fig.~\ref{fig:s6}, strongly depend on the size $L$ of the system. The density $\tilde{\rho}$ of small systems (with just a few sites) relaxes very quickly to the homogeneous TASEP density. This is the reason why, for very short lattices, we can approximate $\tilde{\rho}\sim \rho$ in the IMF formula, Eq.~(\ref{eq:iMF}).
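A sketch of this integration is given below; the integrator \texttt{solve\_ivp} from SciPy and the chosen tolerances are implementation details, not part of the argument.

\begin{verbatim}
# Relaxation after an opening event: integrate the L coupled mean-field
# ODEs (eq:ODE) from the queued initial condition and track the
# occupancy of the defect site s.
import numpy as np
from scipy.integrate import solve_ivp

def relax(L=100, rho=0.3, s=0, t_max=500.0, gamma=1.0):
    N = int(L * rho)                   # particles queued behind site s
    y0 = np.zeros(L)
    y0[[(s - j) % L for j in range(1, N + 1)]] = 1.0

    def rhs(t, r):
        left, right = np.roll(r, 1), np.roll(r, -1)  # r_{i-1}, r_{i+1}
        return gamma * (left * (1 - r) - r * (1 - right))

    sol = solve_ivp(rhs, (0.0, t_max), y0, dense_output=True,
                    rtol=1e-8, atol=1e-10)
    t = np.linspace(0.0, t_max, 1001)
    return t, sol.sol(t)[s]            # rho~(t) at the defect site

t, rho_tilde = relax()
print(rho_tilde[::100])  # transient plateau near 0.5, then relaxation
\end{verbatim}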
\begin{figure*}[!th]
\begin{center}
\includegraphics[width=0.7\textwidth]{rho_tilde_t.pdf}
\caption{(Color online) Numerical solution of $\tilde \rho (t)$ based on the system of ODEs~(\ref{eq:ODE}). The black curve shows that the steady state $\tilde \rho = \rho$ is reached quickly in small systems, and through an oscillatory transient in larger systems (dashed and dash-dotted lines). \label{fig:s6}}
\end{center}
\end{figure*}
By increasing the length $L$, the situation becomes more complicated. In general, $\tilde{\rho}$ first increases and saturates at $0.5$. Here the region on the left of the defect effectively acts as a (peculiar) reservoir and one can imagine that there is always a particle ready to be injected into the rightmost part. The duration of this transient increases with $L$, as there are more particles in the reservoir. After that, $\tilde{\rho}$ drops (low densities, Fig.~\ref{fig:s6}a), corresponding to the shock passing the defect; then it converges to the homogeneous density in an oscillatory way, the oscillations representing the return of the diffusive shock (periodic boundaries).\\
\indent This behavior differs from the one observed in~\cite{motegi_exact_2012}, where the authors study a particular case of the relaxation of a TASEP with $\rho=0.5$, for which it is not possible to observe the oscillatory behavior.\\
\indent The study of the shock relaxation shows that, in large systems, the region closes again before $\tilde{\rho}$ drops (or increases, for high densities, as in Fig.~\ref{fig:s6}b), let alone relaxes to the TASEP density; this allows us to approximate $\tilde{\rho}$ by $0.5$ in the intermittent regime (see Sec.~\ref{sec::intermittent}). For instance, in Fig.~\ref{fig::FSMF}b we measured an average closing time $\langle t_f \rangle \sim 15$s for a system with $\rho=0.3$ and $L=500$. A system with the same features would need a time of $\sim$600s to escape the transient state at $\tilde\rho \sim 0.5$, and longer times to eventually relax to the uniform TASEP density. Therefore, the IMF approach with $\tilde{\rho}=0.5$ gives a good approximation for large systems. Problems arise when $\tilde{\rho}$ drops (or increases) before the region closes, causing the observed nontrivial current-density relationship for intermediate sizes.\\
Although in the other regimes the system also presents the ordinary finite-size effects of the TASEP, the severe finite-size effects in the intermittent regime have a different and rather counterintuitive nature. They result from the transient relaxation of the density after an opening event and are therefore present only in the intermittent regime. The occupancy of the defect site after an opening event indeed depends on the size of the system.\\
\indent In this respect, intermittence and related finite-size effects have strong consequences on the stationary density profiles too. Whereas, similar to the static-defect case, outside the intermittent regime there is a sharp phase separation between the HD-LD profile before and after the dynamical defect (Fig.~\ref{profiles}a), the presence of intermittence induces relevant boundary effects modifying the density profiles (Fig.~\ref{profiles}b). \\
This is a signature of the transient relaxation of the HD-LD interface during the time $\langle t_f \rangle$. Such strong correlations are associated with the nonstationary dynamics of finite systems and are particularly evident at densities for which current reduction (or enhancement) occurs.
\begin{figure}[h]
\begin{center}
\includegraphics[type=pdf,ext=.pdf,read=.pdf,width=\columnwidth]{fig4}
\caption{(Color online) Density profiles for a lattice with $L=500, f=1s^{-1}, u=0.01s^{-1}$. (a) For $\gamma=1s^{-1}$ there is a clear HD-LD separation with a sharp front, in agreement with the FSMF (horizontal dotted lines). (b) In the presence of intermittence ($\gamma=100s^{-1}$) longer correlations are established and simulations deviate from the FSMF prediction.}
\label{profiles}
\end{center}
\end{figure}
\section{Conclusion}
In this work we have introduced the concept of transport on a lattice in the presence of local interactions between particles and substrate, also referred to as \textit{dynamical defects}.
This concept is key to understanding natural transport phenomena occurring on substrates with a fluctuating environment. It helps explain fundamental biological processes such as protein synthesis or intracellular traffic, where the local conformation of the substrate can deeply influence the characteristics of the flow of molecular motors. The obtained results, however, are general and therefore applicable to other transport processes, such as vehicular or human traffic, and synthetic molecular devices.\\
\indent The phenomenology is presented and studied by means of the totally asymmetric simple exclusion process, a prototypic model of transport in nonequilibrium physics. In this framework we have provided original mean-field arguments that allow the reproduction and the rationalization of the rich phenomenology of the model. We have thus discussed a novel dynamical regime, characterized by an intermittent current of particles and induced by the local interaction and competition between particle motion and the defect dynamics. Importantly, we have found that a particle-lattice interaction triggers severe finite-size effects that have a counterintuitive strong impact on transport. For different system densities, the small size of the lattice reduces or enhances the flow of particles, inducing an asymmetric current-density profile.\\
\indent Given the small size of biological substrates, the physics of the intermittent regime is highly relevant to the understanding of protein synthesis and motor protein transport.
Bursts of gene expression have often been reported~\cite{yu_probing_2006} and our results provide a physical mechanism to explain the contribution of the translation process to them. Moreover, intermittent behavior can strongly influence motor protein current fluctuations that can be measured in state-of-the-art experiments~\cite{leduc_molecular_2012}.
In the biological context of mRNA translation, our results on finite-size effects would correspond to an increase in protein production for small mRNA strands compared to longer strands; interestingly, highly expressed proteins, such as the constituents of ribosomes, are short~\cite{planta_list_1998}.\\
\indent The model presented here can be extended by including, for example, larger interaction sites or different boundary conditions. However, the observed phenomenology does not change: if the defect is extended ($d>1$) the finite-size effects are even more pronounced than in the $d=1$ case~\cite{turci_preparation_2012}, and lattices with open boundaries, more common in practical applications than lattices with periodic boundary conditions, present the same characteristics, although edge effects can be relevant if the defect is moved close to the boundaries~\cite{turci_preparation_2012}.
\acknowledgments
We are grateful to I.~Stansfield for bringing to our attention this research topic, and to N.~Kern, I.~Neri and C.~A.~Brackley for valuable discussions. F.T. was supported by the French Ministry of Research, E.P. by CNRS and PHC no 19404QJ, L.C. by a SULSA studentship, M.C.R. by BBSRC (Grants No. BB/F00513/X1 and No. BB/G010722) and SULSA, and A.P. by the University of Montpellier 2 Scientific Council.
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}{Proposition}
\newtheorem{problem}{Problem}
\newtheorem{remark}{Remark}
\newtheorem{definition}{Definition}
\newtheorem{lemma}{Lemma}
\newtheorem{proof}{Proof}
\begin{document}
\begin{center}
\textbf{Relating multiway discrepancy and singular values of graphs and
contingency tables}
\end{center}
\begin{center}
Marianna~Bolla
\end{center}
\begin{center}
\textit{Institute of Mathematics, Budapest University of Technology and
Economics} \\ E-mail: {\tt {[email protected]}}
\end{center}
\renewcommand\abstractname{Abstract}
\begin{abstract}
\noindent
The $k$-way discrepancy $\disc_k (\C)$ of a
rectangular array $\C$ of nonnegative entries is the minimum of
the maxima of the within- and between-cluster discrepancies
that can be obtained by simultaneous $k$-clusterings (proper partitions)
of its rows and columns.
In Theorem~\ref{fotetel}, irrespective of the size of $\C$,
we give the following estimate for the
$k$th largest
non-trivial singular value of the normalized table:
$s_k \le 9\disc_{k } (\C ) (k+2 -9k\ln \disc_{k } (\C ))$, provided
$\disc_{k } (\C ) <1$ and $k\le \rk (\C )$.
This statement is the converse of Theorem 7 of Bolla~\cite{Bolla14},
and the proof
uses some lemmas and ideas of Butler~\cite{Butler}, where only the
$k=1$ case is treated, in which case our upper bound is the tighter one.
The result naturally extends to the singular
values of the normalized adjacency matrix of a weighted undirected or
directed graph.
\noindent
\textbf{Keywords:} {multiway discrepancy; normalized table; singular values;
weighted graphs; directed graphs; generalized random graphs.}
\noindent
\textit{MSC:} 15A18, 05C50
\end{abstract}
\section {Introduction}\label{intro}
In many applications, for example when microarrays are analyzed, our
data are collected in the form of an $m\times n$ rectangular array
$\C=(c_{ij})$ of
nonnegative real entries, called a contingency table.
We assume that $\C$ is
non-decomposable, i.e., $\C \C^T$ (when $m\le n$) or
$\C^T \C$ (when $m > n$) is irreducible.
Consequently,
the row-sums
$d_{row,i} =\sum_{j=1}^n c_{ij}$ and column-sums $d_{col,j}=\sum_{i=1}^m c_{ij}$
of $\C$ are strictly positive, and the diagonal matrices
$\DD_{row} =\diag (d_{row,1} ,\dots ,d_{row,m})$ and
$\DD_{col} =\diag (d_{col,1} ,\dots ,d_{col,n})$ are regular.
Without loss of generality, we also assume that
$\sum_{i=1}^m \sum_{j=1}^n c_{ij} =1$, since neither our main object, the
normalized table
\begin{equation}\label{cnor}
\C_{nor} = \DD_{row}^{-1/2} \C \DD_{col}^{-1/2} ,
\end{equation}
nor the multiway discrepancies to be introduced are affected by the scaling
of the entries of $\C$.
It is well known (see e.g.,~\cite{Bolla14}) that the singular values of
$\C_{nor}$ are in the [0,1]
interval. Enumerated in non-increasing order, they are the real numbers
$$
1=s_0 >s_1 \ge \dots \ge s_{r-1} > s_{r} = \dots = s_{n-1} =0 ,
$$
where $r= \rk (\C )$. When $\C$ is non-decomposable,
1 is a single singular value, and it is
denoted by $s_0$, since it belongs to the trivial singular vector pair,
which will be disregarded in some further calculations.
Our purpose is to find relations between the $k$th nontrivial singular value
$s_k$ of $\C_{nor}$ and the minimum $k$-way discrepancy of $\C$ defined
herein.
\begin{definition}\label{diszkrepancia}
The multiway discrepancy of the rectangular array $\C$ of nonnegative entries
in the proper $k$-partition $R_1 ,\dots ,R_k$ of its rows and
$C_1 ,\dots ,C_k$ of its columns is
\begin{equation}\label{disk}
\disc (\C ; R_1 ,\dots ,R_k , C_1 ,\dots ,C_k ) =
\max_{\substack{1\le a, b\le k \\X\subset R_a , \, Y\subset C_b}}
\frac{|c (X, Y)-\rho (R_a,C_b ) \Vol (X)\Vol (Y)|}{\sqrt{\Vol(X)\Vol(Y)}} ,
\end{equation}
where
$c (X, Y) =\sum_{i\in X} \sum _{j\in Y} c_{ij}$ is the cut between
$X\subset R_a$ and $Y\subset C_b$,
$\Vol (X) = \sum_{i\in X} d_{row,i}$ is the volume of the row-subset $X$,
$\Vol (Y) = \sum_{j\in Y} d_{col,j}$ is the volume of the column-subset $Y$,
whereas
$\rho (R_a,C_b) =\frac{c(R_a,C_b)}{ \Vol (R_a) \Vol (C_b)}$ denotes the relative
density between $R_a$ and $C_b$.
The minimum $k$-way discrepancy of $\C$ itself is
$$
\disc_k (\C ) = \min_{\substack{R_1 ,\dots ,R_k \\ C_1 ,\dots ,C_k } }
\disc (\C ; R_1 ,\dots ,R_k , C_1 ,\dots ,C_k ).
$$
\end{definition}
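Since the maxima in~(\ref{disk}) run over all subsets $X\subset R_a$ and $Y\subset C_b$, the quantity is computable by brute force only for tiny tables. The following Python sketch is meant to fix the notation rather than to be an algorithm; it evaluates $\disc (\C ; R_1 ,\dots ,R_k , C_1 ,\dots ,C_k )$ for a given (not necessarily optimal) pair of partitions, so it only upper-bounds $\disc_k (\C )$.

\begin{verbatim}
# Brute-force evaluation of disc(C; R_1..R_k, C_1..C_k) for a small
# nonnegative table; exponential in the block sizes by construction.
import itertools
import numpy as np

def discrepancy(C, rows, cols):
    C = C / C.sum()                 # the scaling of C does not matter
    d_row, d_col = C.sum(1), C.sum(0)
    best = 0.0
    for Ra in rows:
        for Cb in cols:
            rho = (C[np.ix_(Ra, Cb)].sum()
                   / (d_row[Ra].sum() * d_col[Cb].sum()))
            for i in range(1, len(Ra) + 1):
                for X in itertools.combinations(Ra, i):
                    vX = d_row[list(X)].sum()
                    for j in range(1, len(Cb) + 1):
                        for Y in itertools.combinations(Cb, j):
                            vY = d_col[list(Y)].sum()
                            cut = C[np.ix_(list(X), list(Y))].sum()
                            dev = abs(cut - rho * vX * vY)
                            best = max(best, dev / np.sqrt(vX * vY))
    return best

C = np.random.rand(6, 6) + np.eye(6)    # toy nonnegative table
print(discrepancy(C, [[0,1,2],[3,4,5]], [[0,1,2],[3,4,5]]))
\end{verbatim}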
In Section~\ref{conc}, I will extend this notion to an edge-weighted graph $G$
and denote it by $\disc_k (G)$. In that setup, $\C$ plays the role of
the edge-weight matrix
(symmetric in the undirected case; square, but usually not symmetric,
in the directed case; and it is the adjacency matrix if $G$ is a simple
graph, in which case the eigenvalues of the normalized adjacency matrix
enter into the estimates, in decreasing absolute value).
Note that $\disc (\C ; R_1 ,\dots ,R_k , C_1 ,\dots ,C_k )$ is the smallest
$\alpha$ such that for every $R_a ,C_b$ pair and for every
$X\subset R_a$, $Y\subset C_b$,
\begin{equation}\label{dif}
|c (X, Y)-\rho (R_a,C_b ) \Vol (X)\Vol (Y)| \le \alpha \sqrt{\Vol(X)\Vol(Y)}
\end{equation}
holds.
Hence, in the $k$-partitions of the rows and columns,
giving the minimum $k$-way discrepancy (say, $\alpha^*$) of $\C$,
every $R_a ,C_b$ pair is $\alpha^*$-regular in terms of the volumes, and
$\alpha^*$ is the smallest possible discrepancy that can be attained
with proper $k$-partitions.
It resembles the notion of $\epsilon$-regular pairs in the Szemer\'edi
regularity lemma~\cite{Szemeredi}, albeit with a given number of
vertex-clusters, which are usually not equitable,
and with volumes instead of cardinalities.
Historically,
the notion of discrepancy together with the expander mixing lemma
was introduced for simple, regular graphs, see
e.g., Alon, Spencer, Hoory, Linial, Widgerson~\cite{AlonS,Hoory},
and extended to Hermitian matrices
in Bollob\'as, Nikiforov~\cite{BollobasN}.
In Chung, Graham, Wilson~\cite{Chung1}, the authors use the term quasirandom
for simple
graphs that satisfy any of some equivalent properties, some of them
closely related to discrepancy and eigenvalue separation.
Chung and Graham~\cite{Chung2} prove that for simple graphs `small' discrepancy
$\disc (G)$ (with our notation, $\disc_1 (G)$)
is caused by
eigenvalue `separation': the second largest singular value (which is also
the second largest absolute value eigenvalue), $s_1$, of the
normalized adjacency matrix is `small', i.e., separated from the
trivial singular value $s_0 =1$,
which is the edge of the spectrum.
More exactly, they prove $\disc (G) \le s_1$, hence giving some kind of
generalization of the expander mixing lemma for \textit{irregular} graphs.
In the other direction, for Hermitian matrices,
Bollob\'as and Nikiforov~\cite{BollobasN} estimate the second largest singular
value of an $n\times n$ Hermitian matrix $\A$ by $C \disc (\A ) \log n$,
and show that this is best possible up to a multiplicative constant.
Bilu and Linial~\cite{Bilu} prove the converse of the expander mixing
lemma for simple regular graphs, but their key
Lemma 3.3, producing this statement, goes beyond regular graphs.
In Alon et al.~\cite{Alon10},
the authors relax the notion of eigenvalue separation to essential
eigenvalue separation (by introducing a parameter for it, and requiring the
separation only for the eigenvalues of a relatively large part of the graph).
Then they prove relations between the constants of this kind of
eigenvalue separation and discrepancy.
For a general rectangular array $\C$ of nonnegative entries,
Butler~\cite{Butler} proves the following forward
and backward statement in the $k=1$ case:
\begin{equation}\label{but}
\disc (\C ) \le s_1 \le 150\disc (\C ) (1-8\ln \disc (\C ) ) ,
\end{equation}
where $\disc (\C )$ is our $\disc_1 (\C )$ and, with our notation, $s_1$ is
the largest nontrivial singular value of $\C_{nor}$ (he denotes it by
$\sigma_2$).
Since $s_1 <1$, the upper estimate makes sense for very small discrepancy,
in particular, for
$\disc (\C ) \le 8.868 \times 10^{-5}$.
The lower estimate further generalizes the expander mixing lemma to
rectangular matrices, but it can be proved with the same tools as in
the quadratic case (see Proposition~\ref{EML} in Section~\ref{conc}).
So far, the overall discrepancy has been considered in the sense
that $\disc (\C )$ or $\disc (G)$ measures the largest possible deviation
between the actual and expected connectedness of arbitrary (sometimes disjoint)
subsets $X,Y$, where under expected the hypothesis of
independence is understood (which corresponds to the rank 1 approximation).
Note that in~\cite{Butler,Butler1}, $\disc_t (G)$ (or $AltDisc_t (G)$ for
alternating walks in directed graphs) is also introduced,
which measures the minimum possible deviation between
the actual and expected number of walks of length $t$ between the
vertex-subsets. A similar notion appears in~\cite{Chung2}, and other notions
of discrepancy are also introduced in~\cite{Chung3}; for example, the
skew-discrepancy for directed graphs.
Notwithstanding, these papers consider variants of the overall
discrepancy, which corresponds to the one-cluster situation.
My purpose is, in the multicluster scenario, to find
similar relations between the minimum $k$-way discrepancy and
the SVD of the normalized matrix, for given $k$.
In one direction, in Section~\ref{biz}, I will prove the following.
\begin{theorem}\label{fotetel}
For every non-decomposable contingency table $\C$
and integer $1\le k\le \rk (\C )$,
$$
s_k \le 9\disc_{k } (\C ) (k+2 -9k\ln \disc_{k } (\C )) ,
$$
provided $\disc_{k } (\C ) <1$,
where $s_k$ is the $k$th largest non-trivial singular value of the normalized
table $\C_{nor}$ introduced in~(\ref{cnor}).
\end{theorem}
Note that $\disc_k (\C ) =0$ only if $\C$ has a block structure with $k$
row- and column-blocks, in which case $s_k =0$ also holds.
Likewise, $\disc_{k } (\C ) <1$ is not a peculiar requirement, since in view
of $s_k <1$, the upper bound of the theorem has relevance only for
$\disc_k (\C )$ much smaller than 1; for example, for
$\disc_{1 } (\C ) \le 1.866\times 10^{-3}$,
$\disc_{2 } (\C ) \le 8.459\times 10^{-4}$,
$\disc_{3 } (\C ) \le 5.329\times 10^{-4}$, etc.
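Both sides of the estimate are easy to inspect numerically on small examples. The sketch below computes the singular values of $\C_{nor}$ (using that they are invariant under the scaling of $\C$) and evaluates the bound at the discrepancy of a given partition, which can only overestimate $\disc_k (\C )$; the \texttt{discrepancy} helper is the one sketched after Definition~\ref{diszkrepancia}, the test table is an arbitrary near-block example, and the inequality is informative only when the resulting discrepancy is well below the thresholds quoted above.

\begin{verbatim}
# Numerical inspection of Theorem 1 on a near-block 6x6 table (k = 2).
import numpy as np

def s_values(C):               # singular values of C_nor (scale-free)
    Dr = np.diag(1.0 / np.sqrt(C.sum(1)))
    Dc = np.diag(1.0 / np.sqrt(C.sum(0)))
    return np.linalg.svd(Dr @ C @ Dc, compute_uv=False)

def bound(alpha, k):
    return 9 * alpha * (k + 2 - 9 * k * np.log(alpha))

C = np.kron(np.array([[1.0, 0.01], [0.01, 1.0]]), np.ones((3, 3)))
C += 0.001 * np.random.rand(6, 6)
s = s_values(C)                            # s[0] = 1 is trivial
alpha = discrepancy(C, [[0,1,2],[3,4,5]], [[0,1,2],[3,4,5]])
print(s[2], bound(alpha, 2))               # s_2 versus the upper bound
\end{verbatim}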
In the other direction,
in Theorem 7 of~\cite{Bolla14}, I showed that
(under some balancing conditions on the margins and cluster sizes)
a slightly modified version of this
$k$-way discrepancy is $O (\sqrt{2k} S_k +s_k )$, where $S_k$ is the sum of
the square roots of the $k$-variances of the optimal row- and
column-representatives (they depend on the normalized singular vectors
corresponding to $s_1 ,\dots ,s_{k-1}$). In fact, the larger the gap
between $s_{k-1}$ and $s_k$ is, the smaller $S_k$ is.
I will better explain this notion in Section~\ref{last}.
There I will also
illustrate that $S_k =0$ holds in many special cases, and consequently,
my upper estimate for the $k$-way discrepancy boils down to $B s_k$ with
some absolute constant $B$.
For example, in the simple graph case,
when $k=2$ and our graph is bipartite, biregular,
the discrepancy between the two independent vertex-sets is
estimated from above with $B s_2$ by my result, and, up to a constant factor,
this is the same as the estimate proved in Evra et al.~\cite{Evra}.
In Section~\ref{last}, I will also mention some spectral relations to
the weak Szemer\'edi regularity lemma~\cite{Borgs,Frieze,Gharan,Szegedy}.
\section{Proof of Theorem~\ref{fotetel}}\label{biz}
Before proving the theorem, I recall some lemmas of others that I will
use, possibly with some modifications.
Lemma 3 of Bollob\'as and Nikiforov~\cite{BollobasN} is the key to prove their
main result.
This lemma states that to every $0<\ep <1$ and vector $\x\in \CC^n$,
$\| \x \| =1$, there exists a vector $\y\in \CC^n$ such that its coordinates
take no more than
$\left\lceil \frac{8\pi}{\ep} \right\rceil \left\lceil \frac4{\ep} \log
\frac{2n}{\ep} \right\rceil$
distinct values and $\|\x -\y \| \le \ep$.
This is why $\log n$ appears in their estimate for the second largest singular
value of an $n\times n$ Hermitian matrix.
Since I do not want the log-sizes to appear
in my estimate in the miniature world of $[0,1]$,
I will rather use the construction of the following lemma,
which is indeed a consequence of Lemma 3 of~\cite{BollobasN}.
\begin{lemma}[Lemma 3 of Butler~\cite{Butler}]\label{l1}
To any vector $\x \in \CC^n$, $\| \x \| =1$ and diagonal matrix
$\DD$ of positive real diagonal entries, one can construct a
step-vector $\y \in \CC^n$ such that $\|\x-\DD\y\|\le \frac13$,
$\| \DD \y \| \le 1$, and the nonzero entries of $\y$ are of the form
$\left( \frac45 \right)^j e^{\frac{\ell }{29} 2\pi i}$ with appropriate
integers $j$ and $\ell$ ($0\le \ell \le 28$).
\end{lemma}
Note that starting with an $\x$ of real coordinates, we do not need all
the 29 values of $\ell$, only two of them will show up, as it follows from a
better understanding of the construction of~\cite{Butler}. In fact, by
the idea of~\cite{BollobasN}, $j$'s come from dividing the coordinates of
$\DD^{-1} \x / \| \DD^{-1} \x \|$ in decreasing absolute values into groups,
where
the cut-points are powers of $\frac45$.
With the notation $\x =(x_s)_{s=1}^n$, if $x_s$ is
in the $j$-th group, then the corresponding coordinate of the approximating
complex vector $\y =(y_s )_{s=1}^n$ is as follows. If $x_s =0$, then $y_s=0$,
otherwise $y_s =\left(\frac45 \right)^j
e^{\left( \lfloor \frac{29\theta}{2\pi} \rfloor /29 \right) 2\pi i }$,
where $\theta$ is the argument of $x_s$, $0\le \theta <2\pi$,
and therefore, $\ell = \lfloor \frac{29\theta}{2\pi} \rfloor$ is an integer
between 0 and 28. However, when the coordinates of $\x$ are real numbers,
then only the values 0 and 14 of $\ell$ can occur,
since $\theta$ can take only one
of the values 0 or $\pi$, depending on whether $x_s$ is positive or negative.
We will intensively use this observation in our proof.
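For real vectors this rounding is easy to carry out explicitly. In the sketch below, each coordinate of $\DD^{-1}\x /\| \DD^{-1}\x \|$ is rounded, in absolute value, to the next power of $\frac45$ and the sign is kept (the phases $\ell =0$ and $\ell =14$ collapse to $\pm 1$); an easy coordinate-wise check shows that this simplified rounding already gives $\|\x -\DD \y \| \le \frac15 \le \frac13$ and $\| \DD \y \| \le 1$, though it does not control the number of distinct levels the way~\cite{BollobasN} does.

\begin{verbatim}
# Rounding a real unit vector to a step-vector with levels (4/5)^j.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n); x /= np.linalg.norm(x)
d = rng.uniform(0.5, 2.0, size=n)          # diagonal of D

z = x / d
f = np.linalg.norm(z); z /= f              # z = D^{-1}x / ||D^{-1}x||
j = np.ceil(np.log(np.abs(z)) / np.log(0.8))  # (4/5)^j in (0.8|z|, |z|]
y = f * np.sign(z) * 0.8 ** j              # real step-vector
print(len(np.unique(j)),                   # number of distinct levels
      np.linalg.norm(x - d * y),           # <= 1/5 < 1/3
      np.linalg.norm(d * y))               # <= 1
\end{verbatim}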
\begin{lemma}[Lemma 4 of Butler~\cite{Butler}]\label{l2}
Let $\M$ be a matrix with largest singular value
$\sigma$ and corresponding unit-norm singular vector pair $\v, \u$. If
$\x$ and $\y$ are vectors such that $\| \x \|\le 1$,
$\| \y \|\le 1$, $\| \v -\x \| \le \frac13$, $\| \u -\y \| \le \frac13$,
then $\sigma \le \frac92 \langle \x , \M \y \rangle $.
\end{lemma}
Note that, in our case, $\M$ is a real matrix and so, $\v ,\u$ have real
coordinates; still, the approximating step-vectors $\x , \y$ may have
complex coordinates, and so,
$\langle .,. \rangle$ denotes the (possibly complex) inner product.
Note that in the possession of real (column) vectors $\x ,\y$ and matrix $\M$,
$\langle .,. \rangle$ can be written in terms of matrix-vector multiplications
with transpositions:
$\langle \x , \M \y \rangle =\x^T \M \y$.
\noindent
\textbf{Proof} (of the main theorem).
Assume that $\alpha :=\disc_k (\C ) <1$ and it is attained with the proper
$k$-partition
$R_1 ,\dots ,R_k$ of the rows and $C_1 ,\dots ,C_k$ of the columns of $\C$;
i.e., for every $R_a ,C_b$ pair and
$X\subset R_a$, $Y\subset C_b$ we have
\begin{equation}\label{reg}
| c (X, Y) -\rho (R_a,C_b) \Vol (X) \Vol (Y)| \le \alpha
\sqrt{\Vol (X ) \Vol (Y )} .
\end{equation}
Our purpose is to put Inequality~(\ref{reg}) in matrix form by using
indicator vectors and introducing the $m\times n$ auxiliary matrix
\begin{equation}\label{F}
\F =\C - \DD_{row} \RR \DD_{col} ,
\end{equation}
where $\RR =(\rho (R_a,C_b) )$ is the $m\times n$ block-matrix of $k\times k$
blocks with entries equal to $\rho (R_a,C_b)$ over the block $R_a \times C_b$.
With the indicator vectors $\1_X$ and $\1_Y$ of $X\subset R_a$ and
$Y\subset C_b$, Inequality~(\ref{reg}) has the following equivalent form:
\begin{equation}\label{ind}
|\langle \1_X , \F \1_Y \rangle |
\le \alpha \sqrt{ \langle \1_X ,\C \1_n \rangle
\langle \1_m , \C \1_Y \rangle }
\end{equation}
where $\1_n$ denotes the all 1's vector of size $n$ and
$\langle .,. \rangle$ denotes the (possibly complex) inner product.
Note that in the possession of real (column) vectors and matrices,
$\langle .,. \rangle$ can be written in terms of matrix-vector multiplications
with transpositions; for example,
$\langle \1_X , \F \1_Y \rangle =\1_X^T \F \1_Y$.
At the same time, Equation~(\ref{F}) yields
$$
\DD_{row}^{-1/2} \F \DD_{col}^{-1/2} = \DD_{row}^{-1/2} \C
\DD_{col}^{-1/2} - \DD_{row}^{1/2} \RR \DD_{col}^{1/2} =
\C_{nor} - \DD_{row}^{1/2} \RR \DD_{col}^{1/2} .
$$
Since the rank of the matrix $\DD_{row}^{1/2} \RR \DD_{col}^{1/2} $
is at most $k$,
by Theorem 3 of Thompson\footnote{Actually, Thompson stated the
theorem for square matrices, but in the possession of a rectangular one,
we can supplement it with zero rows or columns to make it square; further,
the nonzero singular values of the so obtained square matrix are the same
as those of the rectangular, supplemented with additional zero singular
values that will not alter the shifted interlacing facts.}~\cite{Thompson},
describing the effect of rank $k$
perturbations for the singular values, we obtain the following upper estimate
for $s_k$, that is the $(k+1)$th largest (including the trivial 1)
singular value of $\C_{nor}$:
$$
s_k \le s_{max} (\DD_{row}^{-1/2} \F \DD_{col}^{-1/2}) =
\| \DD_{row}^{-1/2} \F \DD_{col}^{-1/2} \| ,
$$
where $\| .\|$ denotes the spectral norm.
Let $\v \in \R^m$ be the left and $\u\in \R^n$ be the right unit-norm singular
vector corresponding to the
maximal singular value of $\DD_{row}^{-1/2} \F \DD_{col}^{-1/2}$, i.e.,
$$
|\langle \v , (\DD_{row}^{-1/2} \F \DD_{col}^{-1/2} ) \u \rangle | =
\| \DD_{row}^{-1/2} \F \DD_{col}^{-1/2} \|.
$$
In view of Lemma~\ref{l1},
there are stepwise constant vectors $\x\in \CC^m$ and $\y \in\CC^n$ such that
$\| \v -\DD_{row}^{1/2} \x \| \le \frac13$ and
$\| \u -\DD_{col}^{1/2} \y \| \le \frac13$; further,
$\| \DD_{row}^{1/2} \x \| \le 1$ and $\| \DD_{col}^{1/2} \y \| \le 1$.
Then Lemma~\ref{l2} yields
$$
\| \DD_{row}^{-1/2} \F \DD_{col}^{-1/2} \| \le
\frac92 \left| \langle (\DD_{row}^{1/2} \x ), (\DD_{row}^{-1/2} \F
\DD_{col}^{-1/2} ) (\DD_{col}^{1/2} \y ) \rangle \right| =
\frac92 |\langle \x , \F \y \rangle | .
$$
Now we will use the construction in the proof of Lemma 3 of~\cite{Butler}
in the special case when the vectors
$\v=(v_s)_{s=1}^m$ and $\u=(u_s)_{s=1}^n$, to be approximated,
have real coordinates. Therefore, only the following
three types of coordinates
of the approximating complex vectors $\x =(x_s)_{s=1}^m$ and
$\y =(y_s )_{s=1}^n$ will appear.
If $v_s =0$, then $x_s=0$ too; if $v_s >0$, then $x_s =(\frac45 )^j$ with
some integer $j$; if $v_s <0$, then $x_s =(\frac45 )^j
e^{\frac{28}{29} \pi i}$ with some integer $j$. Likewise,
if $u_s =0$, then $y_s=0$ too; if $u_s >0$, then $y_s =(\frac45 )^{\ell}$ with
some integer $\ell$; if $u_s <0$, then $y_s =(\frac45 )^{\ell}
e^{\frac{28}{29} \pi i}$ with some integer $\ell$.
With these observations, the step-vectors $\x$ and $\y$ can be written as
the following finite sums with respect to the integers $j$ and $\ell$:
$$
\x = \sum_{j} (\frac45 )^j \x^{(j)} , \quad
\x^{(j)} =\sum_{a=1}^k ( \1_{{\cal X}_{ja1}} + e^{\frac{28}{29} \pi i}
\1_{{\cal X}_{ja2}} ) , \quad \textrm{where}
$$
$$
{\cal X}_{ja1} = \{ s: \, x_s =(\frac45 )^j , \, s\in R_a \}
\quad \textrm{and} \quad
{\cal X}_{ja2} = \{ s: \, x_s =(\frac45 )^j e^{\frac{28}{29} \pi i} ,
\, s\in R_a \} ;
$$
likewise,
$$
\y = \sum_{\ell} (\frac45 )^{\ell} \y^{(\ell )} , \quad
\y^{(\ell )} =\sum_{b=1}^k ( \1_{{\cal Y}_{\ell b1}} + e^{\frac{28}{29} \pi i}
\1_{{\cal Y}_{\ell b2}} ) , \quad \textrm{where}
$$
$$
{\cal Y}_{\ell b1} =\{ s: \, y_s =(\frac45 )^{\ell} , \, s\in C_b \}
\quad \textrm{and} \quad
{\cal Y}_{\ell b2} = \{ s: \, y_s =(\frac45 )^{\ell} e^{\frac{28}{29} \pi i},
\, s\in C_b \} .
$$
Then
\begin{equation}\label{hiv}
\begin{aligned}
|\langle \x^{(j)}, \F \y^{(\ell )} \rangle |
&\le \sum_{a=1}^k \sum_{b=1}^k \sum_{p=1}^2 \sum_{q=1}^2
\left| \langle \1_{{\cal X}_{jap}} ,\F \1_{{\cal Y}_{\ell bq}} \rangle
\right| \\
&\overset{(\ref{ind})}\le \sum_{a=1}^k \sum_{b=1}^k \sum_{p=1}^2 \sum_{q=1}^2
\alpha \sqrt{ \langle \1_{{\cal X}_{jap}} ,\C \1_n \rangle
\langle \1_m , \C \1_{{\cal Y}_{\ell bq}} \rangle } \\
&\le \alpha 2k \sqrt{\sum_{a=1}^k \sum_{b=1}^k \sum_{p=1}^2 \sum_{q=1}^2
\langle \1_{{\cal X}_{jap}} ,\C \1_n \rangle
\langle \1_m , \C \1_{{\cal Y}_{\ell bq}} \rangle } \\
&= 2k \alpha \sqrt{ \langle \sum_{a=1}^k \sum_{p=1}^2
\1_{{\cal X}_{jap}} ,\C \1_n \rangle
\langle \1_m , \C \sum_{b=1}^k \sum_{q=1}^2 \1_{{\cal Y}_{\ell bq}} \rangle } \\
&= 2k \alpha \sqrt{ \langle |\x^{(j)} | ,\C \1_n \rangle
\langle \1_m , \C |\y^{(\ell )} | \rangle } ,
\end{aligned}
\end{equation}
where in the first inequality we used that $|e^{\frac{28}{29} \pi i}|=1$,
in the second one we used (\ref{ind}), while in the last one,
the Cauchy--Schwarz inequality with $4k^2$ terms. We also introduced the
notation $|\z | = (|z_s |)_{s=1}^n$ for the real vector, the coordinates of
which are the absolute values of the corresponding coordinates of the
(possibly complex)
vector $\z$. In the same spirit, let $| \M |$ denote the matrix whose entries
are the absolute values of the corresponding entries of $\M$ (we will use
this only for real matrices). With this formalism, this is the right moment
to prove the following inequalities that will be used soon to finish the proof:
\begin{equation}\label{kell}
\sum_{\ell} |\langle \x^{(j)}, \F \y^{(\ell )} \rangle | \le
2 \langle | \x^{(j)} | , \C \1_n \rangle , \quad
\sum_{j} |\langle \x^{(j)}, \F \y^{(\ell )} \rangle | \le
2 \langle \1_m , \C |\y^{(\ell )} | \rangle .
\end{equation}
Since the two inequalities are of the same flavor, it suffices to prove
only the first one. Note that it is here, where we use the exact definition of
$\F$ as follows.
$$
\begin{aligned}
\sum_{\ell} |\langle \x^{(j)}, \F \y^{(\ell )} \rangle |
&\le \langle |\x^{(j)} |, |\F | \sum_{\ell} |\y^{(\ell )} | \rangle \\
&\le \langle |\x^{(j)} |, (\C + \DD_{row} \RR \DD_{col} )
\1_n \rangle
= 2 \langle | \x^{(j)} | , \C \1_n \rangle
\end{aligned}
$$
because $|\y^{(\ell )} |$ is a 0-1 vector and
$\C + \DD_{row} \RR \DD_{col}$ is a (real) matrix of nonnegative entries.
We also used that the $i$th coordinate of the vector
$(\C + \DD_{row} \RR \DD_{col} ) \1_n$ for $i\in R_a$ is
$$
d_{row,i} \left( 1+ \sum_{b=1}^k \rho (R_a ,C_b ) \Vol (C_b ) \right)=
2 d_{row,i}
$$
(here we utilized that the sum of the entries of $\C$ is 1), and therefore,
$$
(\C + \DD_{row} \RR \DD_{col} ) \1_n = 2 \C \1_n .
$$
Finally, we will finish the proof with similar
calculations as in~\cite{Butler}. Let us further estimate
$$
\langle \x , \F \y \rangle = \sum_j \sum_{\ell}
\langle (\frac45 )^{j} \x^{(j)} , \F (\frac45 )^{\ell} \y^{(\ell )}
\rangle .
$$
Put $\gamma := \log_{4/5} \alpha$; in view of
$\alpha <1$, $\gamma >0$ holds. Then we divide the above summation into
three parts as follows.
$$
\begin{aligned}
&|\langle \x , \F \y \rangle | \le \sum_j \sum_{\ell} (\frac45 )^{j+\ell}
|\langle \x^{(j)} , \F \y^{(\ell )} \rangle | \\
&= \underset{\textrm{(a)}}{\sum_{|j-\ell | \le \gamma }} (\frac45 )^{j+\ell}
|\langle \x^{(j)} , \F \y^{(\ell )} \rangle | +
\underset{\textrm{(b)}}{\sum_{j-\ell > \gamma }} (\frac45 )^{j+\ell}
|\langle \x^{(j)} , \F \y^{(\ell )} \rangle | +
\underset{\textrm{(c)}}{\sum_{j-\ell < -\gamma }} (\frac45 )^{j+\ell}
|\langle \x^{(j)} , \F \y^{(\ell )} \rangle | .
\end{aligned}
$$
The three terms are estimated separately. Term (a) can be
bounded from above as follows:
$$
\begin{aligned}
\sum_{|j-\ell | \le \gamma } (\frac45 )^{j+\ell}
|\langle \x^{(j)} , \F \y^{(\ell )} \rangle | & \overset{(\ref{hiv})}{\le}
2k\alpha \sum_{|j-\ell | \le \gamma }
\sqrt{ (\frac45 )^{2j} \langle |\x^{(j)} | ,\C \1_n \rangle
(\frac45 )^{2\ell} \langle \1_m , \C |\y^{(\ell )} | \rangle } \\
& \overset{(*)}{\le}
k\alpha \sum_{|j-\ell | \le \gamma }
\left[ (\frac45 )^{2j} \langle |\x^{(j)} | ,\C \1_n \rangle +
(\frac45 )^{2\ell} \langle \1_m , \C |\y^{(\ell )} | \rangle \right] \\
& \overset{(**)}{\le} k\alpha (2\gamma +1 )
\left[ \sum_{j }(\frac45 )^{2j} \langle |\x^{(j)} | ,\C \1_n \rangle +
\sum_{\ell } (\frac45 )^{2\ell} \langle \1_m , \C |\y^{(\ell )} | \rangle \right] ,\\
& \overset{(***)}{\le} 2k\alpha (2\gamma +1 ) ,
\end{aligned}
$$
where in the first inequality, the estimate of~(\ref{hiv}) and in (*),
the geometric-arithmetic mean inequality were used; (**) comes from the
fact that in summation (a), for fixed $j$ or $\ell$, any term can show
up at most $2\gamma +1$ times, and (***) is due to the easy observation that
\begin{equation}\label{no}
\sum_{j }(\frac45 )^{2j} \langle |\x^{(j)} | ,\C \1_n \rangle =
\| \DD_{row}^{1/2} \x \|^2 \le 1 , \quad
\sum_{\ell} (\frac45 )^{2\ell} \langle \1_m , \C |\y^{(\ell )} | \rangle =
\| \DD_{col}^{1/2} \y \|^2 \le 1 .
\end{equation}
Terms (b) and (c) are of similar appearance (the role of $j$ and $\ell$ is
symmetric in them), therefore, we will estimate
only (b). Here $j-\ell > \gamma$, yielding $j +\ell > 2\ell +\gamma$.
Therefore,
$$
\begin{aligned}
\sum_{j-\ell > \gamma } (\frac45 )^{j+\ell}
|\langle \x^{(j)} , \F \y^{(\ell )} \rangle |
&\le \sum_{\ell } (\frac45 )^{2\ell +\gamma}
\sum_j |\langle \x^{(j)} , \F \y^{(\ell )} \rangle | \\
&\overset{(\ref{kell})}{\le} \sum_{\ell } (\frac45 )^{2\ell +\gamma}
2 \langle \1_m , \C |\y^{(\ell )} | \rangle \\
&= 2 (\frac45 )^{\gamma} \sum_{\ell } (\frac45 )^{2\ell }
\langle \1_m , \C |\y^{(\ell )} | \rangle \overset{(\ref{no})}{\le}
2 (\frac45 )^{\gamma} ,
\end{aligned}
$$
where, in the second and third inequalities, (\ref{kell}) and (\ref{no})
were used.
Consequently, (c) can also be estimated from above with
$2 (\frac45 )^{\gamma}$.
Collecting the so obtained estimates together, we get
$$
\begin{aligned}
s_k &\le \frac92 |\langle \x , \F \y \rangle | \le \frac92
\left[ 2k\alpha (2\gamma +1 ) +4 (\frac45 )^{\gamma} \right] =
9\alpha \left[ 2k\frac{\ln \alpha}{\ln \frac45} +k+2\right] \\
&\le 9\alpha [2k (-4.5)\ln \alpha + k +2 ]
=9\alpha (k+2 -9k\ln\alpha ) ,
\end{aligned}
$$
which was to be proved. For $k=1$, our upper bound is
tighter than that of~(\ref{but}).
\section{Some weaker results}\label{last}
We now describe our first attempts to prove something like
Theorem~\ref{fotetel}, as they may be informative for the reader.
\begin{itemize}
\item
First we wanted to use
Lemma 3 of Bollob\'as and Nikiforov~\cite{BollobasN},
since, in addition, it specifies the number of
distinct coordinates of the approximating step-vector.
This lemma states that to every $0<\ep <1$ and vector $\x\in \CC^n$,
$\| \x \| =1$, there is a vector $\y\in \CC^n$ such that its coordinates
take no more than
\begin{equation}\label{step}
\left\lceil \frac{8\pi}{\ep} \right\rceil \left\lceil \frac4{\ep} \log
\frac{2n}{\ep} \right\rceil
\end{equation}
values and $\|\x -\y \| \le \ep$.
Note that this lemma implies Lemma 3 of Butler~\cite{Butler},
which states that to any unit-norm vector $\x \in \CC^n$ and diagonal matrix
$\DD$ of positive diagonal entries, one can construct a
step-vector $\y \in \CC^n$ such that $\|\x-\DD\y\|\le \ep$ and
$\| \DD \y \| \le 1$. Even the construction of the two lemmas are similar.
In our case, $\x\in \R^n $ and we need $1/3$ precision.
Given the diagonal matrix $\DD$ of
positive diagonal entries, we will now construct a
step-vector $\y$ of complex entries such that $\|\x-\DD\y\|\le 1/3$, by merely
using Lemma 3 of~\cite{BollobasN}. First set
$f:=\| \DD^{-1} \x \|$ and $d:=\|\DD\| =\max_i d_i$. Then, by \cite{BollobasN},
to the unit-norm vector $ \DD^{-1} \x /f$ and to $0<\ep <1$
there is a step-vector $\y\in \CC^n$, with the same number of different
coordinates as in~(\ref{step}), such that
$$
\left\| \frac{\DD^{-1} \x}{f} -\y \right\| \le \ep .
$$
The step-vector $\z =f\y \in \CC^n$, with the same number of different
coordinates as in $\y$, will do for us, since with an appropriate
$\ep$ we can reach that $\| \x -\DD \z \| \le \frac13$. Indeed,
$$
\ep \ge \left\| \frac{\DD^{-1} \x}{f} -\frac{\z }{f} \right\| = \frac1{f}
\| \DD^{-1} (\x -\DD \z )\| \ge \frac1{f} \min_i \frac1{d_i}
\| \x -\DD \z \| = \frac1{fd} \| \x -\DD \z \| .
$$
Therefore,
$$
\| \x -\DD \z \| \le fd\ep =\frac13
$$
holds with $\ep =\frac1{3fd}$ that cannot exceed $\frac13$, since
$fd \ge 1$. This can be seen from the following argument:
$$
1=\|\x\| = \| \DD \DD^{-1} \x \| \le \| \DD \| \cdot \| \DD^{-1} \x \| =df .
$$
Eventually, by the construction of~\cite{BollobasN},
$|y_j| \le \frac{|x_j |}{d_j f}$,
$j=1,\dots ,n$. Therefore,
$|z_j | =f|y_j| \le \frac{|x_j |}{d_j}$, and
$|d_j z_j | \le |x_j |$, $\forall j$. Consequently, $\| \DD\z\| \le \| \x \|=1$.
The main implication of this fact is that the maximal number of distinct
coordinates of the step-vector in Lemma 3 of~\cite{Butler} is also of
order $\log n$, and we wanted to make use of this fact in the first
attempts of the proof of some backward statement. For this purpose,
we managed to prove the following lemma, inspired by Lemma 4
of~\cite{BollobasN}, though, in a more general setup.
We will give the proof too, since it may be of interest for its
own right.
\begin{lemma}\label{mine}
Let $\C$ be an $m\times n$ matrix of nonnegative real entries
and let the rows and columns have
positive real weights $d_{r,i}$'s and $d_{c,j}$'s (independently of the
entries of $\C$), which are collected in the main diagonals of the
$m\times m$ and $n\times n$
diagonal matrices $\DD_r$ and $\DD_c$, respectively.
Let $R_1 ,\dots ,R_k$ and $C_1 ,\dots ,C_{\ell}$
be proper partitions of the rows and columns; further, $\x \in \CC^m$ and
$\y \in \CC^n$ be stepwise constant vectors having equal coordinates over
the index sets corresponding to the partition members of
$R_1 ,\dots ,R_k$ and $C_1 ,\dots ,C_{\ell}$, respectively.
The $k\times \ell$ real matrix $\C'=(c'_{ab})$ is defined by
$$
c'_{ab} := \frac{c(R_a ,C_b )}{\sqrt{\VOL (R_a ) \VOL (C_b )}} ,
\quad a=1,\dots k; \, b =1 ,\dots ,\ell ,
$$
where $c(R_a ,C_b)$ is the usual cut of $\C$ between $R_a$ and $C_b$,
whereas $\VOL (R_a ) =\sum_{i\in R_a} d_{r,i}$ and
$\VOL (C_b ) =\sum_{j\in C_b} d_{c,j}$. Then
$$
| \langle \x ,\C \y \rangle | \le \| \C' \| \cdot \| \DD_r^{1/2} \x \| \cdot
\| \DD_c^{1/2} \y \| ,
$$
where $\| \C' \|$ denotes the spectral norm, that is the largest singular
value of the real matrix $\C'$, and the squared norm of a complex vector is
the sum of the squares of the absolute values of its coordinates.
\end{lemma}
Note that here the row- and column-weights have nothing to do with the
entries of $\C$, and the volumes are usually not the ones defined in
Section~\ref{intro}; this is why they are denoted by $\VOL$ instead of $\Vol$.
\noindent
\textbf{Proof of Lemma~\ref{mine}.}
For the distinct coordinates of $\x$ and $\y$ we introduce
$$
x_i := \frac{x'_a}{\sqrt{\VOL (R_a )}} \quad \textrm{if} \quad i\in R_a
\quad \textrm{and} \quad
y_j := \frac{y'_b}{\sqrt{\VOL (C_b )}} \quad \textrm{if} \quad j\in C_b
$$
with $x'_a$ and $y'_b$ that are coordinates of $\x' \in \CC^k$ and
$\y' \in \CC^{\ell}$. Obviously, $\| \DD_r^{1/2} \x \| =\| \x' \|$ and
$\| \DD_c^{1/2} \y \| =\| \y' \|$. Then, using $\bar {}$ for the complex
conjugation,
$$
\begin{aligned}
| \langle \x ,\C \y \rangle | &=
\left| \sum_{i=1}^m \sum_{j=1}^n x_i {\bar y}_j c_{ij} \right| =
\left| \sum_{a=1}^k\sum_{b=1}^{\ell}\frac{x'_a}{\sqrt{\VOL (R_a )}}
\frac{{\bar y}'_b}{\sqrt{\VOL (C_b)}} c(R_a ,C_b ) \right| \\
&= \left| \sum_{a=1}^k \sum_{b=1}^{\ell} x'_a {\bar y}'_b c'_{ab} \right| =
| \langle \x' ,\C' \y' \rangle |
\le s_{max} (\C' ) \cdot \| \x' \| \cdot \| \y' \| \\
&= \| \C' \| \cdot \| \DD_r^{1/2} \x \| \cdot \| \DD_c^{1/2} \y \|
\end{aligned}
$$
by the well-known extremal property of the largest singular value,
which finishes the proof.
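The lemma is also easy to test numerically. The following sketch checks the inequality for random real data, random weights, and random partitions (the complex case is analogous); all choices in it are arbitrary toy data.

\begin{verbatim}
# Random check of Lemma 3 with real stepwise vectors.
import numpy as np

rng = np.random.default_rng(2)
m, n, k, l = 12, 15, 3, 4
C = rng.random((m, n))
dr, dc = rng.uniform(0.5, 2, m), rng.uniform(0.5, 2, n)
rows = np.array_split(rng.permutation(m), k)
cols = np.array_split(rng.permutation(n), l)
xp, yp = rng.normal(size=k), rng.normal(size=l)

x, y, Cp = np.empty(m), np.empty(n), np.empty((k, l))
for a, Ra in enumerate(rows):          # stepwise constant over R_a
    x[Ra] = xp[a] / np.sqrt(dr[Ra].sum())
for b, Cb in enumerate(cols):          # stepwise constant over C_b
    y[Cb] = yp[b] / np.sqrt(dc[Cb].sum())
for a, Ra in enumerate(rows):          # the k x l matrix C'
    for b, Cb in enumerate(cols):
        Cp[a, b] = (C[np.ix_(Ra, Cb)].sum()
                    / np.sqrt(dr[Ra].sum() * dc[Cb].sum()))

lhs = abs(x @ C @ y)
rhs = (np.linalg.norm(Cp, 2)           # spectral norm of C'
       * np.linalg.norm(np.sqrt(dr) * x)
       * np.linalg.norm(np.sqrt(dc) * y))
print(lhs <= rhs + 1e-12, lhs, rhs)
\end{verbatim}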
Using this lemma and the starting steps of the proof of Theorem~\ref{fotetel},
with the matrix $\F$ defined in~(\ref{F}) and the constructed
step-vectors $\x\in \CC^m$, $\y \in \CC^n$, we have
$$
s_k \le \| \DD_{row}^{-1/2} \F \DD_{col}^{-1/2} \| \le
\frac92 |\langle \x , \F \y \rangle | .
$$
We also know from \cite{BollobasN} and the preliminary argument
that $\x$ takes on at most $r_1 =\Theta (\log m )$,
and $\y$ takes on at most $r_2 =\Theta (\log n )$
distinct values, which define the proper partitions
$P_{1} ,\dots ,P_{r_1}$ of the rows and
$Q_{1}, \dots ,Q_{r_2}$ of the columns.
Let us consider the subdivision of them with
respect to $R_1 ,\dots ,R_k$ and $C_1 ,\dots ,C_k$.
In this way, we obtain the proper partition
$P'_{1} ,\dots ,P'_{\ell_1 }$ of the rows and
$Q'_{1}, \dots ,Q'_{\ell_2 }$ of the columns with at most
$\ell_1 = k r_1$ and $\ell_2 =k r_2 $ parts.
Now, we apply
Lemma~\ref{mine} to the matrix $\F$ and to the step-vectors $\x$ and $\y$,
which are also
stepwise constant with respect to the above partitions. The row-weights
and column-weights are the $d_{row,i}$'s and $d_{col,j}$'s, respectively.
In view of the lemma, the entries of the $\ell_1 \times \ell_2$ matrix $\F'$
are
$$
f'_{ab} := \frac{f(P'_{a} ,Q'_{b} )}{\sqrt{\Vol (P'_{a} )
\Vol (Q'_{b} )}}
$$
and
$$
| \langle \x ,\F \y \rangle | \le
\| \F' \| \cdot \| \DD_{row}^{1/2} \x \| \| \DD_{col}^{1/2} \y \|
\le \| \F' \| .
$$
But by a well-known linear algebra fact,
$$
\| \F' \| =s_{max} (\F' ) \le \sqrt{\ell_1 \ell_2 }
\max_{a \in \{ 1,\dots ,\ell_1 \} } \max_{b \in \{ 1,\dots ,\ell_2 \} }
| f'_{ab} | \le \ell \cdot
\disc (\C ; R_1 ,\dots ,R_k , C_1 ,\dots ,C_k ) ,
$$
where $\ell =\sqrt{\ell_1 \ell_2 } $ and we used Formula (\ref{disk})
for the discrepancy. Consequently,
$$
s_k \le \frac92 \ell \disc_{k } (\C )
$$
follows. The drawback is that the upper bound contains
$\ell =k\sqrt{r_1 r_2}$ which is of order
$\sqrt{\log m \log n}$. Therefore, we prefer the estimate of
Theorem~\ref{fotetel} that does not contain the sizes of $\C$.
\item
Another dead-end was the attempt with the following matrix
$\EE$ instead of $\F$ of~(\ref{F}):
\begin{equation}\label{E}
\EE =\C - \DD_{row} {\hat \C } \DD_{col} ,
\end{equation}
where ${\hat \C }= \sum_{i=0}^{k-1} s_i \hv_i \hu_i^T $
is an $m\times n$ block-matrix of $k\times k$
blocks with entries equal to ${\hat c}_{ab}$ over the block $R_a \times C_b$.
The vectors $\hv_i \in \R^m$ and $\hu_i \in \R^n$ are stepwise constant
over the partitions of $R_1 ,\dots ,R_k$ of the rows and
$C_1 ,\dots ,C_k$ of the columns of $\C$, obtained by spectral clustering
tools. The vectors $\hv_i$ and $\hu_i$ themselves were constructed
via several SVDs in the proof of the forward
statement of~\cite{Bolla14} so that
$\DD_{row}^{1/2} \hv_i$ and $\DD_{col}^{1/2} \hu_i$ be `close' to $\v_i$ and
$\u_i$, respectively, for $i=1,\dots ,k-1$ (for $i=0$, they coincide),
where
$\v_i \in \R^m , \u_i \in \R^n$ is the unit-norm singular vector pair
corresponding to $s_i$ $(i=1,\dots ,r)$. In particular,
$\v_0 =(\sqrt{d_{row,1}} ,\dots ,
\sqrt{d_{row,m}})^T$ and $\u_0 =(\sqrt{d_{col,1}} ,\dots ,\sqrt{d_{col,n}})^T$.
The point is that the so-called error matrix $\EE$ is close to the matrix
$\DD_{row}^{1/2} (\C_{nor} -\sum_{i=0}^{k-1} s_i \v_i \u_i^T ) \DD_{col}^{1/2}$,
and $ \| \C_{nor} -\sum_{i=0}^{k-1} s_i \v_i \u_i^T \| =s_k$.
If now $\x\in \CC^m$ and $\y \in \CC^n$ are step-vectors such that
$\| \DD_{row}^{1/2} \x \| \le 1$, $\| \v_k -\DD_{row}^{1/2} \x \| \le \frac13$ and
$\| \DD_{col}^{1/2} \y \| \le 1$, $\| \u_k -\DD_{col}^{1/2} \y \| \le \frac13$,
then,
$$
s_{k} \le \frac92 \langle (\DD_{row}^{1/2} \x ),
( \DD_{row}^{-1/2} \C \DD_{col}^{-1/2} -\sum_{i=0}^{k-1} s_i \v_i \u_i^T )
(\DD_{col}^{1/2} \y ) \rangle .
$$
Here the upper bound is very close to $\frac92 |\langle \x , \EE \y \rangle |$.
The problem is that $\langle \1_X , \EE \1_Y \rangle$ cannot be
directly related to the discrepancy, like $\langle \1_X , \F \1_Y \rangle$.
However, $\F$ and $\EE$ are very `close' to each other, since comparing
Formulas~(\ref{F}) and~(\ref{E}), the difference between the corresponding
entries of the block-matrices $\RR$ and $\hat \C$ is
$$
|\rho (R_a,C_b) - {\hat c}_{ab}| =\frac{1}{\Vol (R_a) \Vol (C_b )}
\left| \sum_{i\in R_a } \sum_{j \in C_b } \eta_{ij} \right| ,
$$
which is the density of the error matrix $\EE =(\eta_{ij})$ between
$R_a$ and $C_b$.
If this is small enough, we may expect a finer upper estimate for $s_k$,
based on $\EE$.
\end{itemize}
\section{Conclusions and applications}\label{conc}
\subsection{Undirected graphs}\label{undir}
The notion of multiway discrepancy naturally extends to edge-weighted graphs.
A weighted
undirected graph $G=(V,\W )$ is uniquely characterized by its weighted
adjacency matrix $\W$, which is symmetric of nonnegative entries and
zero diagonal.
$\DD = \diag (d_1 ,\dots ,d_n )$ is the diagonal \textit{degree-matrix}
($d_i =\sum_{j=1}^n w_{ij}$), $\Vol (U) =\sum_{i\in U} d_i$ is the
volume of $U\subset V$, and for simplicity
we assume that $\sum_{i=1}^n d_i =1$; it does not hurt the generality, because
neither the normalized matrix $\W_D =\DD^{-1/2} \W \DD^{-1/2}$,
nor the
multiway discrepancies to be introduced are affected by the scaling of $\W$.
In case of a simple graph, $\W_D$ is the \textit{normalized adjacency matrix}.
Definition~\ref{diszkrepancia} extends to this case as follows.
\begin{definition}
The multiway discrepancy of the undirected,
weighted graph $G=(V,\W )$ in the proper $k$-partition
$V_1 ,\dots ,V_k$ of its vertices is
$$
\disc (G; V_1 ,\dots ,V_k ) =
\max_{\substack{1\le a\le b\le k \\ X\subset V_a , \, Y\subset V_b}}
\frac{|w (X, Y)-\rho (V_a,V_b ) \Vol (X)\Vol (Y)|}{\sqrt{\Vol(X)\Vol(Y)}}.
$$
The minimum $k$-way discrepancy of the undirected weighted graph $G=(V,\W )$ is
$$
\disc_k (G) = \min_{V_1 ,\dots ,V_k } \disc (G; V_1 ,\dots ,V_k ) .
$$
\end{definition}
A result analogous to that of Theorem~\ref{fotetel} can now be formulated in
terms of the
normalized modularity matrix of $G$, defined in~\cite{Bolla11} as follows.
Denoting by $\d =(d_1 ,\dots ,d_n)^T$ the \textit{degree-vector} (of entries
summing to 1), the so-called \textit{modularity matrix} is
$\M =\W -\d \d^T$, the $(i,j)$ entry of which just measures the deviation of
$w_{ij}$ (actual connection of vertices $i$ and $j$) from
$d_i d_j$ (their connection under independent attachment
with the vertex-degrees as probabilities).
With the notation $\sqrt{\d }=(\sqrt{d_1} ,\dots ,\sqrt{d_n})^T$,
the \textit{normalized modularity matrix} is
$$
\M_D = \DD^{-1/2} \M \DD^{-1/2} = \W_D -\sqrt{\d} \sqrt{\d}^T .
$$
The spectrum of $\M_D$ is in the $[-1,1]$ interval, and 0 is always an
eigenvalue with unit-norm eigenvector $\sqrt {\d}$. All the other eigenvalues
are the same as those of $\W_D$, except the trivial one. Indeed,
1 is a single eigenvalue of
$\W_D$ with corresponding unit-norm eigenvector
$\sqrt {\d}$, provided $\W$ is irreducible.
This becomes a zero eigenvalue of $\M_D$ with the same
eigenvector. In~\cite{Bolla}, I denoted the eigenvalues
of $\M_D$ in decreasing absolute values by
$|\mu_1 | \ge \dots \ge |\mu_{n-1}| \ge \mu_n =0$. Then the absolute values of
the eigenvalues of $\W_D$ are $1=\mu_0 \ge
|\mu_1 | \ge \dots \ge |\mu_{n-1}|$, and they are also the singular values:
$s_k =|\mu_k|$, $k=0,\dots ,n-1$.
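For concreteness, this relation between the normalized modularity and
normalized adjacency spectra can be checked numerically. The following
Python sketch is my illustration (the toy weight matrix is arbitrary),
not part of the cited results:
\begin{verbatim}
# Illustrative sketch: normalized modularity matrix of a weighted
# graph, its eigenvalues, and the singular values of the normalized
# adjacency matrix.
import numpy as np

W = np.array([[0, 2, 1],
              [2, 0, 3],
              [1, 3, 0]], dtype=float)
W /= W.sum()                       # scale so that sum of degrees = 1
d = W.sum(axis=1)                  # degree vector
D_is = np.diag(d ** -0.5)          # D^{-1/2}
W_D = D_is @ W @ D_is              # normalized adjacency matrix
M_D = W_D - np.outer(np.sqrt(d), np.sqrt(d))   # normalized modularity

mu = np.sort(np.abs(np.linalg.eigvalsh(M_D)))[::-1]
s = np.linalg.svd(W_D, compute_uv=False)       # descending order
print(mu)   # |mu_1| >= ... >= |mu_{n-1}| >= mu_n = 0
print(s)    # s_0 = 1 and s_k = |mu_k| for k = 1, ..., n-1
\end{verbatim}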
\begin{proposition}
Let $G= (V, \W)$ be an edge-weighted, undirected graph. Then
\begin{equation}\label{enyem}
|\mu_k | \le 9\disc_{k } (G ) (k+2 -9k\ln \disc_{k } (G )) ,
\end{equation}
where $\mu_k$ is the $k$-th largest absolute value eigenvalue of
the normalized modularity matrix $\M_D$ $(k=1,\dots ,n-1 )$.
\end{proposition}
Recall that Bilu and Linial~\cite{Bilu} prove the following converse of the
expander mixing lemma for $d$-regular simple graphs on $n$ vertices.
Assume that for
any disjoint vertex-subsets $S,T$: $\vert e (S,T ) -\frac{|S| |T| d}{n} \vert
\le \alpha \sqrt{|S||T|}$. Then all but the largest adjacency eigenvalue of
$G$ are bounded (in absolute value) by
$O (\alpha (1+\log \frac{d}{\alpha} ))$.
Note that for a $d$-regular graph the adjacency eigenvalues are $d$
times larger than the normalized adjacency ones, and the deviation
between $e(S,T)$ and what is expected in a random $d$-regular graph
is also proportional to our (1-way) discrepancy in terms of the volumes.
Though they use disjoint subsets $S,T$, their upper estimate for the
absolute value
of the second largest (in absolute value) eigenvalue with the (1-way)
discrepancy $\alpha$ is $C \alpha (1 -A\log \alpha)$ with some
absolute constants $A,C$.
Hence, the upper estimate of~(\ref{but}) or that
of~(\ref{enyem}) in the $k=1$ case is reminiscent of this.
In the other direction, for the $k=1$ case, a straightforward generalization
of the \textit{expander mixing lemma for irregular graphs} is the following.
\begin{proposition}\label{EML}
$$
\disc (G) =\disc_1 (G) \le \| \M_D \| =s_1 = |\mu_1 |,
$$
where $\| \M_D \|$ is the spectral norm of the normalized modularity matrix
of $G$.
\end{proposition}
Though this (sometimes in an even stronger version) is proved with
different notation in~\cite{Bollabeyond,Butler,Chung2},
we give another short proof here.
\noindent
\textbf{Proof.}
Via separation theorems for singular values, $s_1 =|\mu_1 |$ is the
maximum of the bilinear form $\v^T \M_D \u$ over the unit sphere.
Let $X,Y\subset V$ be arbitrary, and
denote by $\1_X , \1_Y\in \R^n$ the indicator vectors of them. Then
$$
\begin{aligned}
\| \M_D \| &=\max_{ \| \u\| =\| \v\| =1} |\v^T \M_D \u | \ge
\left| \left( \frac{\DD^{1/2} \1_X}{\| \DD^{1/2} \1_X \|} \right)^T
\M_D \left( \frac{\DD^{1/2} \1_Y}{\| \DD^{1/2} \1_Y \|} \right) \right| \\
&=\frac{|\1_X^T \M \1_Y |}{\| \DD^{1/2} \1_X \|\cdot \| \DD^{1/2} \1_Y \| } =
\frac{| w (X, Y) - \Vol (X) \Vol (Y)|}{\sqrt{\Vol (X)}\sqrt{\Vol (Y)} } .
\end{aligned}
$$
Taking the maxima on the right-hand side over subsets $X,Y\subset V$,
the desired relation follows.
Note that the estimate is also valid if we take maxima over disjoint
$X,Y$ pairs only.
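On a toy graph the bound can also be verified directly, by brute force
over all subset pairs; the following sketch (again my illustration,
with an arbitrary weight matrix) compares $\disc_1 (G)$ with the
spectral norm of $\M_D$:
\begin{verbatim}
# Brute-force check of disc_1(G) <= ||M_D|| on a toy weighted graph.
from itertools import combinations
import numpy as np

W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [0, 1, 2, 0]], dtype=float)
W /= W.sum()
d = W.sum(axis=1)
n = len(d)
M_D = np.diag(d**-0.5) @ W @ np.diag(d**-0.5) \
      - np.outer(np.sqrt(d), np.sqrt(d))

subsets = [list(c) for r in range(1, n + 1)
           for c in combinations(range(n), r)]
disc1 = max(abs(W[np.ix_(X, Y)].sum() - d[X].sum() * d[Y].sum())
            / np.sqrt(d[X].sum() * d[Y].sum())
            for X in subsets for Y in subsets)
print(disc1, "<=", np.linalg.norm(M_D, 2))   # spectral norm of M_D
\end{verbatim}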
For an arbitrary $k$ (between 1 and $\rk \W $), in Theorem 3 of~\cite{Bolla}
we proved that under some balancing conditions for the degrees and the
cluster sizes (when $n\to \infty$), and denoting by $V_1 ,\dots ,V_k $ the
clusters obtained by spectral clustering (see the forthcoming explanation),
the $(V_a, V_b)$ pairs are
$O (\sqrt{2k} S_k + |\mu_k |)$-volume regular $(a\ne b)$ and
a similar statement holds for the subgraphs induced by $V_a$'s too.
In fact, inspired by~\cite{Alon10},
there we used a bit different notation and concept of $\alpha$-volume
regular pairs, namely, for every $X\subseteq V_a$, $Y\subseteq V_b$
we required
$$
| w (X, Y) -\rho (V_a ,V_b ) \Vol (X) \Vol (Y)| \le \alpha
\sqrt{\Vol (V_a) \Vol (V_b)} .
$$
In the above formula,
the right had side contains the squareroots of the volumes of the
clusters, unlike~(\ref{dif}), which contains the squareroots of the volumes of
$X$ and $Y$. However, in the spirit of the
Szemer\'edi regularity lemma~\cite{Szemeredi},
if we require (\ref{dif}) to hold only for $X,Y$'s satisfying
$\Vol (X) \ge \ep \Vol (V_i )$, $\Vol (Y) \ge \ep \Vol (V_j )$ with some
fixed $\ep$, then the so modified $k$-way discrepancy, $\disc'_k (G)$, is
$O (\sqrt{2k} S_k + |\mu_k |)$, and so is $\disc_k (G)$.
Here the partition $V_1,\dots ,V_k $ is defined so that
it minimizes
the weighted $k$-variance $S_k^2$ of the vertex representatives
$\r_1 ,\dots ,\r_n \in \R^{k-1}$
obtained as row vectors of the $n\times (k-1)$ matrix of column vectors
$\DD^{-1/2} \u_i$,
where $\u_i$ is the unit-norm eigenvector
corresponding to $\mu_i$ $(i=1,\dots ,k-1 )$. The $k$-variance of the
representatives is defined as
\begin{equation}\label{kszoras}
{S}_k^2 (\X ) =\min_{(V_1 ,\dots ,V_k )}
\sum_{a=1}^k \sum_{j\in V_a } d_j \| \r_j -{ \cc }_a \|^2 ,
\end{equation}
where ${\cc }_a =\frac1{\Vol (V_a ) } \sum_{j\in V_a } d_j \r_j $ is the
weighted center of cluster $V_a$.
It is the weighted $k$-means algorithm that gives this minimum, and
the point is that the optimum $S_k$ is just the minimum distance
between
the eigensubspace corresponding to $\mu_0 ,\dots ,\mu_{k-1}$ and the one
of the suitably transformed step-vectors over the $k$-partitions of $V$.
In~\cite{Bolla} we also discussed
that, in view of subspace perturbation theorems, the larger the gap
between $|\mu_{k-1}|$ and $|\mu_k |$, the smaller $S_k$ is.
So the message is that here the eigenvectors corresponding to
the largest absolute value eigenvalues have to be used, unlike
usual spectral clustering methods, which automatically use the bottom
eigenvalues of the Laplacian or normalized Laplacian matrix
(the latter is just $\I -\W_D $).
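The clustering step itself is easy to prototype. In the sketch below (a
simplification of mine: a plain Lloyd iteration stands in for the exact
minimizer of (\ref{kszoras})), the representatives are formed from the
eigenvectors of the largest absolute value eigenvalues, and a weighted
$k$-means step approximates $S_k^2$:
\begin{verbatim}
# Sketch: representatives D^{-1/2} u_i for the largest |mu_i|, and a
# weighted k-means (Lloyd) iteration approximating S_k^2.
import numpy as np

def representatives(W, k):
    d = W.sum(axis=1)
    M_D = np.diag(d**-0.5) @ W @ np.diag(d**-0.5) \
          - np.outer(np.sqrt(d), np.sqrt(d))
    lam, U = np.linalg.eigh(M_D)
    top = np.argsort(-np.abs(lam))[:k - 1]     # mu_1, ..., mu_{k-1}
    return U[:, top] / np.sqrt(d)[:, None], d  # rows are r_1, ..., r_n

def weighted_kmeans(R, d, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = R[rng.choice(len(R), k, replace=False)]
    for _ in range(iters):
        labels = ((R[:, None, :] - centers)**2).sum(-1).argmin(1)
        for a in range(k):
            if (labels == a).any():            # weighted cluster center
                w = d[labels == a, None]
                centers[a] = (w * R[labels == a]).sum(0) / w.sum()
    Sk2 = sum(d[j] * ((R[j] - centers[labels[j]])**2).sum()
              for j in range(len(R)))
    return labels, Sk2
\end{verbatim}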
The clusters or cluster-pairs of small discrepancy
behave like expanders or bipartite expanders. In another context,
they resemble the generalized random or quasirandom graphs
of~Lov\'asz, S\'os, Simonovits~\cite{LovSos,SimonovitsS}.
In some
special cases, $S_k =0$, and then,
$\disc_k (G)\le B |\mu_k | = B s_k$ follows from the above results.
In particular, $S_k=0$ whenever the vectors
$\DD^{-1/2} \u_1, \dots ,\DD^{-1/2} \u_{k-1}$ are step-vectors over the
same proper $k$-partition of the vertices. Some examples:
\begin{itemize}
\item
If $k=1$, then the unit-norm eigenvector corresponding to $\mu_0 =1$ is
$\u_0 =\sqrt{\d}$, and $\DD^{-1/2} \u_0 =\1$ is the all 1's vector.
Consequently, the variance of its coordinates is $S_1 =0$.
But in this case, by Proposition~\ref{EML}, we already know that
$\disc (G)$ can be estimated from above merely by $|\mu_1 | =s_1$.
\item
If $k=2$ and $G$ is bipartite, then $\mu_1 =-1$, $s_1 =1$, and
$S_2^2$, i.e., the 2-variance of the coordinates of the transformed eigenvector
corresponding to $\mu_1$ can be small if $|\mu_2 |$ is separated from
$|\mu_1 |=1$ (see also the bipartite expanders of~\cite{Alon0}).
\item
Let $k=2$ and $G$ be bipartite, biregular on the independent vertex-subsets
$V_1 ,V_2$. That is, all the edge-weights within $V_1$ or $V_2$ are zeros,
and the 0-1 weights between vertices of $V_1$ and $V_2$ are such that
$d_i =k_1$ if $i\in V_1$ and $d_i =k_2$ if $i\in V_2$ with the understanding
that $|V_1| k_1 = |V_2| k_2$ (both are the total number of edges in $G$).
It is easy to see that the unit-norm eigenvector corresponding to the
eigenvalue $\mu_1 =-1$ is $\u_1 =\DD^{1/2} \1_{V_1 } -\DD^{1/2} \1_{V_2 }$, and
$\DD^{-1/2} \u_1 = \1_{V_1 } - \1_{V_2 }$. Therefore, the representatives
of vertices of $V_1$ are all 1's, and those of $V_2$ are $-1$'s, so
$S_2 =0$. Consequently,
$\disc_2 (G)\le B |\mu_2 |$, with some absolute constant $B$.
Up to a constant, this gives another proof of
Lemma 3.2 of Evra et al.~\cite{Evra}. They call their result expander mixing
lemma for bipartite graphs, and use cardinalities instead of volumes,
but in this special case, these cardinalities are proportional to the volumes
both within $V_1$ and $V_2$.
\item
Let $G_n$ be a generalized random graph over the symmetric $k\times k$ pattern
matrix $\PP =(p_{ab})$, i.e., there is a proper $k$-partition,
$V_1 ,\dots ,V_k$, of its vertices such that
$|V_a |=n_a$ $(a=1,\dots ,k)$, $\sum_{a=1}^k n_a =n$,
and for any $1\le a\le b\le k$, vertices
$i\in V_a$ and $j\in V_b$ are connected independently,
with the same probability $p_{ab}$. This is the $k$-cluster generalization of
the classical Erd\H os--R\'enyi random graph, see also~\cite{LovSos} for their
generalized quasirandom counterparts.
In~\cite{Bolla8} we characterized
the adjacency and normalized Laplacian spectra of such graphs, which
extends to their normalized modularity spectra as follows: both $|\mu_k|=s_k$
and $S_k$ tend to zero almost surely as $n\to\infty$,
under some balancing conditions for the cluster sizes ($\frac{n_a}{n}\ge c$
with some constant $c$, for $a=1,\dots ,k$).
By our results, it also holds for the $k$-way discrepancy in the clustering
$V_1 ,\dots ,V_k$. However,
this is not surprising, since this almost sure limit for the
$k$-way discrepancy is easily
obtained with large deviation principles too, see~\cite{Bolla5}.
\end{itemize}
Summarizing, in the $k=1$ case:
when the second singular value $|\mu_1 |=s_1$ is small
(much smaller than $s_0 =1$),
then the overall discrepancy is small. But for $k>1$, a small $s_k$ is
necessary, but not sufficient for a small $k$-way discrepancy. In addition,
$S_k$ should be small too. With subspace perturbation theorems, it is small
if $s_k$ is much smaller than $s_{k-1}$. Hence, a gap in the normalized
modularity
spectrum may be an indication for the number of clusters.
The two directions together may give a hint about the optimal choice of $k$
if a practitioner wants to find a $k$-clustering of the rows and
columns (or just of the vertices of a graph) with
small pairwise discrepancies.
If there does not exist a fairly `small' $k$
with this property, then in the worst-case
scenario the Szemer\'edi regularity lemma~\cite{Szemeredi} comes into
play, with an enormously large number of clusters (a number that
depends only on the maximum pairwise discrepancy to be attained, and
not on $n$). Weak versions of this lemma
(where $V_1 , \dots ,V_k $ are not necessarily equitable) are also
available, see e.g.,~\cite{Borgs,LovSzeg}.
Note that $\M_D$ corresponds to the compact operator taking conditional
expectation between the margins with respect to the symmetric joint
distribution embodied by $\W$. In~\cite{Bolla} we proved that for given $k$,
the eigenvalues
$\mu_1 ,\dots ,\mu_{k-1}$ and the corresponding eigensubspace are testable,
consequently $S_k$ is also testable, in the sense of~\cite{Borgs}. This is
important when we have a very large network and want to estimate these
quantities based on a smaller sample selected with an appropriate randomization
from the large one. We also remark that spectral or operator proofs of the
regularity lemma, together with low-rank constructions, are at our disposal,
for example,~\cite{Frieze,Gharan,Szegedy}.
\subsection{Directed graphs}\label{dir}
A directed weighted graph $G=(V,\W )$ is described by its quadratic, but
usually not symmetric
weight matrix $\W=(w_{ij})$ of zero diagonal,
where $w_{ij}$ is the nonnegative weight of the $i\to j$ edge $(i \ne j )$.
The row-sums
$d_{out,i} =\sum_{j=1}^n w_{ij}$ and column-sums $d_{in,j}=\sum_{i=1}^n w_{ij}$
of $\W$ are the out- and in-degrees, while
$\DD_{out} =\diag (d_{out,1} ,\dots ,d_{out,n})$ and
$\DD_{in} =\diag (d_{in,1} ,\dots ,d_{in,n})$ are the diagonal
out- and in-degree matrices, respectively.
Now Definition~\ref{diszkrepancia} can be formulated as follows.
\begin{definition}
The multiway discrepancy of the directed,
weighted graph $G=(V,\W )$ in the in-clustering
$V_{in,1} ,\dots ,V_{in,k}$ and out-clustering $V_{out,1} ,\dots ,V_{out,k}$
of its vertices is
$$
\begin{aligned}
&\disc (G; V_{in,1} ,\dots ,V_{in,k}, V_{out,1} ,\dots ,V_{out,k}) \\
&=\max_{\substack{1\le a\le b\le k \\ X\subset V_{out,a} , \, Y\subset V_{in,b}}}
\frac{|w (X, Y)-\rho (V_{out,a},V_{in,b} ) \Vol_{out} (X)\Vol_{in} (Y)|}
{\sqrt{\Vol_{out}(X)\Vol_{in} (Y)}},
\end{aligned}
$$
where $w(X,Y)$ is the sum of the weights of the $X\to Y$ edges, whereas
$\Vol_{out} (X) =\sum_{i\in X} d_{out,i}$ and
$\Vol_{in} (Y) =\sum_{j\in Y} d_{in,j}$ are the out- and in-volumes,
respectively.
The minimum $k$-way discrepancy of the directed weighted graph $G=(V,\W )$ is
$$
\disc_k (G) = \min_{\substack{V_{in,1} ,\dots ,V_{in,k} \\
V_{out,1} ,\dots ,V_{out,k} }}
\disc (G; V_{in,1} ,\dots ,V_{in,k}, V_{out,1} ,\dots ,V_{out,k}).
$$
\end{definition}
Butler~\cite{Butler} treats the $k=1$ case, and for a general $k$,
Theorem~\ref{fotetel} implies the following.
\begin{proposition}
Let $G= (V, \W)$ be a directed edge-weighted graph. Then
$$
s_k \le 9\disc_{k } (G ) (k+2 -9k\ln \disc_{k } (G )) ,
$$
where $s_k$ is the $k$-th largest nontrivial singular value of
the normalized edge-weight matrix $\W_{D}=\DD_{out}^{-1/2} \W \DD_{in}^{-1/2}$.
\end{proposition}
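As a quick illustration (mine, with arbitrary weights), the normalized
edge-weight matrix and its singular values can be obtained as follows;
the largest, trivial singular value equals 1:
\begin{verbatim}
# Singular values of the normalized edge-weight matrix of a
# directed weighted graph.
import numpy as np

W = np.array([[0, 3, 1],
              [1, 0, 2],
              [2, 1, 0]], dtype=float)   # w_ij: weight of edge i -> j
W /= W.sum()
d_out, d_in = W.sum(axis=1), W.sum(axis=0)
W_D = np.diag(d_out**-0.5) @ W @ np.diag(d_in**-0.5)
print(np.linalg.svd(W_D, compute_uv=False))   # s_0 = 1 comes first
\end{verbatim}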
We applied the SVD-based algorithm to find migration patterns in
a set of 75 countries, and found 3 underlying
immigration and emigration trait clusters.
Since the algorithm is the same as for rectangular
matrices, I will describe it in the next subsection.
\subsection{Back to rectangular arrays}
In multivariate statistics, sometimes our data are collected in an
$m\times n$ matrix $\C$, where the entries are
frequency counts
corresponding to the joint distribution of two categorized random variables
(taking on $m$ and $n$ discrete values, respectively).
Such a $\C$ is called contingency table in statistical language, and the
data are popularly said to be cross-tabulated.
The $\chi^2$ statistic, which measures the deviation from independence,
is $N\sum_{i=1}^{r-1}s_i^2$ with my notation, where $N$ is the (usually
`large') sample size, but
the second factor can be `small' if $s_1$ is `small', and this corresponds to
the existence of a good rank 1 approximation of $\C$. This fact is
also supported by the $\disc (\C) =\disc_1 (\C) \le s_1$ relation.
Otherwise, one may ask whether there exists a `good'
rank $k$ approximation for some integer $1<k<r=\rk (\C)$, a problem
treated in correspondence analysis by the first $k$ dyads of the SVD of
$\C_D$. However, there it is not made precise how $s_k$ is estimated by
$\disc_k (\C )$. Our Theorem~\ref{fotetel} says that if the minimum $k$-way
discrepancy is very `small', i.e., the sub-tables $R_a \times C_b$ behave
like independent tables in the optimal $k$-partitions of the rows and columns,
then $s_k$ is small too.
In the other direction, in~\cite{Bolla14}, we
proved the following. Given the $m\times n$ contingency
table $\C$, consider the spectral clusters $R_1 ,\dots ,R_k$ of its rows and
$C_1 ,\dots ,C_k$ of its columns,
obtained by applying the $k$-means algorithm for the $(k-1)$-dimensional
row- and column representatives, defined as the row vectors of the matrices
of column vectors $(\DD_{row}^{-1/2} \v_1 ,\dots ,\DD_{row}^{-1/2} \v_{k-1})$
and $(\DD_{col}^{-1/2} \u_1 ,\dots ,\DD_{col}^{-1/2} \u_{k-1})$, respectively,
where $\v_i , \u_i$ is the unit norm singular vector pair corresponding to $s_i$
$(i=1,\dots ,k-1)$.
In fact, these partitions minimize
the weighted $k$-variances $S_{k,row}^2$ and
$S_{k,col}^2$ of these row- and column-representatives (see (\ref{kszoras})).
Then, under some balancing conditions for the margins
and for the cluster sizes, we proved that
$\disc_k (\C ) \le B (\sqrt{2k} (S_{k,row} +S_{k,col}) +s_k )$,
with some absolute constant $B$.
This is the base of our algorithm, with fixed $k$.
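A compact prototype of this algorithm reads as follows (a
simplification of mine: an unweighted $k$-means from \texttt{scipy}
stands in for the weighted one used above, and the contingency table is
a toy example):
\begin{verbatim}
# SVD-based biclustering of a contingency table C with fixed k.
import numpy as np
from scipy.cluster.vq import kmeans2

def bicluster(C, k):
    C = C / C.sum()                          # total mass 1
    d_row, d_col = C.sum(axis=1), C.sum(axis=0)
    C_D = np.diag(d_row**-0.5) @ C @ np.diag(d_col**-0.5)
    V, s, Ut = np.linalg.svd(C_D)
    # skip the trivial dyad (s_0 = 1), keep pairs for s_1..s_{k-1}
    R_rows = V[:, 1:k] / np.sqrt(d_row)[:, None]
    R_cols = Ut.T[:, 1:k] / np.sqrt(d_col)[:, None]
    _, row_labels = kmeans2(R_rows, k, minit='++')
    _, col_labels = kmeans2(R_cols, k, minit='++')
    return row_labels, col_labels, s

C = np.array([[8, 7, 1, 0],
              [9, 6, 0, 1],
              [1, 0, 7, 9],
              [0, 2, 8, 8]], dtype=float)
rows, cols, s = bicluster(C, k=2)
print(rows, cols, s[:3])    # s_2 is small for a 2-cluster table
\end{verbatim}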
We remark that the correspondence analysis uses the above $(k-1)$-dimensional
row- and column-representatives for simultaneously plotting the row- and
column-categories in $\R^{k-1}$ ($k=2,3$ or 4 in most applications),
and hence, the practitioner can draw
conclusions from their mutual positions.
For example, in microarray analysis we can plot the genes and conditions
together, and the biclusters obtained by $k$-clustering the row- and
column-representatives give clusters of the genes and the conditions
such that, every gene-cluster and condition-cluster pair behaves like
a random weighted bipartite graph in the sense, that genes and conditions of
the same cluster nearly independently influence each other, which fact may have
importance for practitioners. In~\cite{Bolla14} it is also shown that when these
$k$-variances are very `small', then our construction (described there
with the modified
dyads) for the rank $k$ approximation produces a table of nonnegative entries.
On the contrary,
a drawback of correspondence analysis is that the automatic low-rank
approximation of the table usually contains negative entries.
In the possession of networks or microarrays, practitioners
want to find a fairly small $k$, such that there is a $k$-cluster structure
behind the table or the graph in the sense that the subgraphs and bipartite
subgraphs have `small' discrepancy. How small a discrepancy can be
attained, and with what $k$, depends on the table or the graph. The above theory
tells that we have to inspect the normalized spectra, together with
spectral subspaces, since the leading ones carry a lot of information
about the smallest attainable discrepancy.
\section*{Acknowledgement}
The author wishes to thank Gergely Kiss and Zolt\'an Mikl\'os S\'andor for
discussions on the topic.
Parts of the research were done under the auspices of the
Budapest Semesters of Mathematics program,
in the framework of an undergraduate
research course on spectral clustering
with the participation of US students James Drain, Cristina Mata,
Matthew Willian, and in particular, Calvin Cheng whose
computer processing of real-world data helped in formulating the main theorem.
The research was also supported by the
T\'AMOP-4.2.2.C-11/1/KONV-2012-0001 project.
\label{sec1}
In recent years, compared with fixed-wing UAVs, multi-rotor UAVs have offered unique flight capabilities such as vertical takeoff and landing and hovering in the air. They have been widely used in agricultural, industrial, military, and civil applications such as search and rescue~\cite{1,2}, disaster response~\cite{3,4}, safety monitoring~\cite{5,6}, infrastructure inspection~\cite{7,8}, precision agriculture~\cite{9,10}, and exploration and mapping~\cite{11,12}. The rapid development of multi-rotor UAVs has played an irreplaceable role in these fields. A multi-rotor UAV is a nonlinear system with statically unstable, underactuated, inter-axis coupled, multiple-input multiple-output characteristics~\cite{13}. A quadrotor UAV has six degrees of freedom when flying in space (displacement along the three directions of a rectangular coordinate system and rotation about its three axes), but only four controllable inputs, making the control complex and susceptible to uncertainty. The design and implementation of high-quality flight controllers remains a challenge due to the tedious parameter tuning of nonlinear control and the performance analysis involved.
Since the development of multi-rotor aircraft, there have been three main types of flight control algorithms: 1) flight control methods based on linear flight control theory~\cite{14}; 2) learning-based flight control methods~\cite{15}; 3) model-based nonlinear control methods~\cite{16}. At present, the flight control of multi-rotor UAV systems mainly adopts linear controllers, sometimes combined with an optimization algorithm to improve the response speed and steady-state error and thus the flight control performance. The flight control methods based on linear flight control theory include the classical cascade PID method~\cite{18}. Cascade PID control algorithms are among the most successful and widely used control methods, for example, classical PID control~\cite{18}, fuzzy PID controllers~\cite{19}, and neural-network-based PID control~\cite{20}. Pixhawk flight control~\cite{21} is a cascade PID controller that has achieved good control performance and is currently the most classical and general flight control method. The possible reasons are as follows: 1. after tuning, the cascade PID controller can meet the needs of conventional flight missions; 2. the principle of the PID controller is intuitive, its structure is simple, and it is easy to implement. Although the PID controller accounts for a high proportion of practical applications, it is a single-input single-output controller designed around the hovering equilibrium point: it can meet the requirements of general flight missions but cannot ensure large-scale stability of the system. Under strong external disturbances, the nonlinear characteristics of the controlled object degrade the control quality; moreover, a cascade PID controller generally includes multiple control loops, so for a new airframe without prior tuning experience, parameter tuning is cumbersome and strongly experience-dependent.
With the rapid development of sensor and drone technology, humans can use multi-rotor drones carrying data collection equipment to inspect hard-to-reach areas. For example, the Elios drone of the Swiss company Flyability~\cite{22} inspects sewers and supports maintenance of many indoor industrial facilities; a full inspection takes 20--30 minutes, but the drone's endurance is only 7 minutes. Vijay R. Kumar at the University of Pennsylvania studied the automated flight of drones in dam aqueducts over five years, from semi-autonomous in 2014 to fully automated in 2019~\cite{23,24,25,26}. However, the UAVs developed by that team had a short endurance and inspected only the lower bend and lower horizontal sections of the dam aqueduct, rather than the entire area of the diversion pipeline. In this paper, the entire area of the Three Gorges Hydropower Station diversion pipeline was inspected. The automatic inspection area includes:
\begin{itemize}
\item Lower horizontal section.
\item Lower bend section.
\item Oblique straight section.
\item Upper bend section.
\item Upper flat section.
\end{itemize}
The strong air convection from the chimney effect and the turbulent wind fields caused by the rotation of the UAV blades inside the hydropower station's diversion pipes can significantly disrupt the UAV's flight control, so a high-quality controller with solid stability and robustness over a wide range needs to be designed for the UAV. As the GPS signal is shielded in the confined space of the diversion pipeline, the positioning of the UAV depends on sensors such as LIDAR, optical flow, and IMU. The navigation method for UAV positioning is not the focus of this paper and is not described in detail here. In this paper, a high-quality controller based on a hybrid stabilisation method with adaptive backstepping control is proposed. First, a model of a quadrotor UAV is developed and transformed into a strict feedback form with uncertainty. Then, an adaptive law is used to estimate the moment caused by large airflow disturbances, and an adaptive backstepping controller is designed for attitude loop control. A PID controller is used to control the altitude and position of the aircraft. Finally, the stability of the closed-loop system is analysed with Lyapunov stability theory. A dual closed-loop controller with an inner-loop attitude and an outer-loop position achieves hybrid stability-enhanced control of a multi-rotor UAV.
\section{Basic Model of Multi-rotor UAV}
\label{sec2}
The quadrotor UAV is the most typical multi-rotor UAV, and its dynamics model is used as an example in this paper. For the quadrotor UAV we establish a body coordinate system $\bm{B}(O_b-x_b y_b z_b)$ and a geographic coordinate system $\bm{E}(O_e-x_e y_e z_e)$. In the body coordinate system, the coordinate origin $O_b$ is located at the center of mass of the drone, the axes $x_b$ and $y_b$ point to the centers of propellers 1 and 2, respectively, and the axis $z_b$ is perpendicular to their plane, pointing upwards, as shown in Fig~\ref{fig1a}. In the geographic coordinate system, the coordinate origin $O_e$ is located at the takeoff point of the quadrotor UAV, and the positive directions of $(x_e,y_e,z_e)$ are north, east, and up, respectively, as shown in Fig~\ref{fig1b}.
\begin{figure}[ht!]
\centering
\subfigure[The body coordinate system.]{\includegraphics[height=3cm]{fig1-01.jpg}%
\label{fig1a}}\qquad
\subfigure[The geographic coordinate system.]{\quad\includegraphics[height=2.7cm]{fig1-02.jpg}%
\label{fig1b}}
\caption{{The body coordinate system and the geographic coordinate system of the UAV.}\label{fig1}}
\end{figure}
The position and attitude of the drone in the geographic coordinate system are $\bm{\xi}=[x,y,z]^T$ and $\bm{\eta}=[\phi,\theta,\psi]^T$, where $\phi$, $\theta$, and $\psi$ denote the roll, pitch, and yaw angles of the drone, respectively. In the body frame, the quadrotor UAV's linear velocity and angular velocity are represented by $\bm{v}=[v_x,v_y,v_z]^T$ and $\bm{\varpi}=[\varpi_{xb},\varpi_{yb},\varpi_{zb}]^T$.
Then, the kinematic model of the quadrotor UAV can be represented as:
\begin{equation}
\label{eq1}
\left\{
\begin{array}{@{}l}
\dot{\bm{\xi}}=R_t\bm{v}\\
\bm{\varpi}=R_r\dot{\bm{\eta}}
\end{array}
\right.
\end{equation}
Here the linear velocity transfer matrix $R_t$ describes the rotation of the body coordinate system relative to the geographic coordinate system, and the angular velocity transfer matrix $R_r$ maps the Euler angular velocity $\dot{\bm{\eta}}=[\dot{\phi},\dot{\theta},\dot{\psi}]^T$ to the body angular velocity vector $\bm{\varpi}=[\varpi_{xb},\varpi_{yb},\varpi_{zb}]^T$. According to the literature~\cite{28,27}, the specific expressions for $R_t$ and $R_r$ are (\ref{eq2}) and (\ref{eq3}):
\begin{equation}
\label{eq2}
R_t=
\begin{bmatrix}
c\theta c\psi &s\phi s\theta c\psi-c\phi s\psi &c\phi s\theta c\psi+s\phi s\psi \\
c\theta s\psi &s\phi s\theta s\psi+c\phi c\psi &c\phi s\theta s\psi-s\phi c\psi \\
-s\theta &s\phi c\theta &c\phi c\theta \\
\end{bmatrix}
\end{equation}
\begin{equation}
\label{eq3}
R_r=
\begin{bmatrix}
1&0&-s\theta \\
0&c\phi &c\theta s\phi \\
0&-s\phi &c\theta c\phi
\end{bmatrix}
\end{equation}
Among them, $c\theta$ is $\cos\theta$, $c\psi$ is $\cos\psi$, $c\phi$ is $\cos\phi$, $s\theta$ is $\sin\theta$, $s\psi$ is $\sin\psi$, $s\phi$ is $\sin\phi$.
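For reference, the two transfer matrices can be transcribed directly into code; the Python functions below are an illustration following the conventions of (\ref{eq2}) and (\ref{eq3}):
\begin{verbatim}
# Direct transcription of the transfer matrices (2) and (3).
import numpy as np

def R_t(phi, theta, psi):
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [ct * cp, sf * st * cp - cf * sp, cf * st * cp + sf * sp],
        [ct * sp, sf * st * sp + cf * cp, cf * st * sp - sf * cp],
        [-st,     sf * ct,                cf * ct]])

def R_r(phi, theta):
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [1.0,  0.0, -st],
        [0.0,  cf,  ct * sf],
        [0.0, -sf,  ct * cf]])
\end{verbatim}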
The kinematic model of the multi-rotor UAV is composed of translational and rotational motion; as shown in formula (\ref{eq1}), it represents the motion characteristics of the position and attitude of the UAV. However, the kinematic equations only reflect changes in the state of motion and do not include the forces and moments that cause those changes, such as the strong air convection due to the chimney effect and the turbulent wind field perturbations due to blade rotation in the diversion pipeline. The flight controller of the UAV needs a dynamic model that describes the motion including wind disturbance. Assuming that the drone is a rigid body with its center of mass at the origin of the body coordinate system, and ignoring the gyroscopic effect of the blades, the translation and rotation equations are established from Newtonian mechanics or the Euler--Lagrange equations (see~\cite{27,28}). The dynamic model of the quadrotor UAV is shown in equations (\ref{eq4}) and (\ref{eq5}):
\begin{equation}
\label{eq4}
\left\{
\begin{array}{@{}l}
m\ddot{x}=u_1 (c\psi s\theta c\phi+s\psi s\phi)-K_f \dot{x}\\
m\ddot{y}=u_1 (c\phi s\theta s\psi-s\phi c\psi)-K_f \dot{y}\\
m\ddot{z}=u_1 c\theta c\phi-K_f \dot{z}-mg
\end{array}
\right.
\end{equation}
\begin{equation}
\label{eq5}
\left\{
\begin{array}{@{}l}
I_x \ddot{\phi}=\dot{\theta}\dot{\psi}(I_y- I_z)+J_r \dot{\theta}\Omega_r +lu_2-d_{\phi}\\
I_y \ddot{\theta}=\dot{\psi}\dot{\phi}(I_z- I_x)-J_r \dot{\phi}\Omega_r +lu_3-d_{\theta}\\
I_z \ddot{\psi}=\dot{\theta}\dot{\phi}(I_x- I_y) +lu_4-d_{\psi}
\end{array}
\right.
\end{equation}
Among them, $l$ is the distance between the quadrotor's center of mass and the rotation axis of each propeller; $m$ is the total mass of the UAV; $I_x$, $I_y$, $I_z$ are the moments of inertia around the respective axes; $K_f$ is the wind disturbance coefficient; $J_r$ is the total moment of inertia of a motor rotor and propeller around the body's rotation axis; $d_i$ $(i=\phi,\theta,\psi)$ is the airflow disturbance moment; $u_1$ is the total thrust of the UAV, $u_2$ is the roll control torque, $u_3$ is the pitch control torque, and $u_4$ is the yaw control torque. $\Omega_r$ is the combined speed of the rotors of the drone, which satisfies:
\begin{equation}
\label{eq6}
\Omega_r=-\varpi_1-\varpi_3+\varpi_2+\varpi_4
\end{equation}
The flight control algorithm of the UAV can be divided into internal and external rings, including the attitude ring and position ring, as shown in Figure~\ref{fig2}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{Flight control algorithm structure}
\label{fig2}
\end{figure}
Correspondingly, formulas (\ref{eq4}) and (\ref{eq5}) are divided into two subsystems:
\begin{equation}
\label{eq7}
\left\{
\begin{array}{@{}l}
m\ddot{x}=u_1 (c\psi s\theta c\phi+s\psi s\phi)-K_f \dot{x}\\
m\ddot{y}=u_1 (c\phi s\theta s\psi-s\phi c\psi)-K_f \dot{y}
\end{array}
\right.
\end{equation}
\begin{equation}
\label{eq8}
\left\{
\begin{array}{@{}l}
m\ddot{z}=u_1 c\theta c\phi-K_f \dot{z}-mg\\
I_x \ddot{\phi}=\dot{\theta}\dot{\psi}(I_y-I_z)+J_r \dot{\theta}\Omega_r +lu_2-d_{\phi}\\
I_y \ddot{\theta}=\dot{\psi}\dot{\phi}(I_z- I_x)-J_r \dot{\phi}\Omega_r +lu_3-d_{\theta}\\
I_z \ddot{\psi}= \dot{\theta}\dot{\phi}(I_x-I_y) +lu_4-d_{\psi}
\end{array}\right.
\end{equation}
From (\ref{eq7}) and (\ref{eq8}), it can be seen that the controller consists of two parts: the attitude controller and the position controller. When the total rotor lift $u_1$ is fixed, the translational acceleration depends on the attitude angles, so the attitude angles determine the flight path. Therefore, the general design idea of the control system is:
\begin{itemize}
\item The position controller receives the desired position $(x_d,y_d,z_d)$ and desired yaw $\psi_d$ from the input and outputs the desired attitude $(\phi_d,\theta_d)$ and the total lift $u_1$ of the UAV.
\item The attitude controller receives the desired attitude as input and outputs the attitude control torques $(u_2,u_3,u_4)$ of the UAV.
\item The motor distribution model and the motor efficiency model convert the total lift and the attitude control torques $(u_1,u_2,u_3,u_4)$ of the UAV into the speed of each rotor and thus control the motion of the UAV.
\end{itemize}
The objective of this paper is therefore to design control laws for the total lift and the attitude control torques $(u_1,u_2,u_3,u_4)$ of the UAV.
\section{Design a hybrid stability-increasing controller based on adaptive backstepping}
\label{sec3}
\subsection{Attitude controller design}
\label{sec3.1}
The following assumptions are made before designing the controller:
\textbf{Assumption 1}: During the movement of UAVs, ($x_d$, $y_d$, $z_d $), ($\phi_d$, $\theta_d$, $\psi_d $) are continuously steerable.
\textbf{Assumption 2}: For the uncertain airflow disturbance moment $d_i$ ($i=\phi,\theta,\psi$), there is a constant $r_i>0$ such that $|d_i| \leqslant r_i$.
(1) Pitch angle subsystem design:
Firstly, the pitch angle state equation of the dynamics model (\ref{eq5}) is transformed into a strict feedback form so that it meets the backstepping controller design requirements.
\begin{equation}
\label{eq9}
\left\{
\begin{array}{@{}l}
{x_1}=\theta\\
{x_2}=\dot{\theta}
\end{array}
\right.
\end{equation}
\begin{equation}
\label{eq10}
\left\{
\begin{array}{@{}l}
\dot{x_1}=x_2\\
\dot{x_2}=a_1 U_3+f_{\theta}+d_{\theta}
\end{array}
\right.
\end{equation}
Among them, $a_1=\frac l{I_y}$, ${f}_{\theta} =\dot{\phi}\dot{\psi}\frac{(I_z- I_x)}{I_y}-\frac{J_r}{I_y}\dot{\phi}\Omega_r$.
The tracking error of the pitch angle is:
\begin{equation}
\label{eq11}
\mathrm{e}_1=\theta_d-\theta
\end{equation}
Differentiating the error $\mathrm{e}_1$:
\begin{equation}
\label{eq12}
\dot{\mathrm{e}_1}=\dot{\theta_d}-\dot{\theta}
\end{equation}
This paper constructs the Lyapunov function $V_1$:
\begin{equation}
\label{eq13}
V_1=\frac12 \mathrm{e}_1^2
\end{equation}
Differentiating $V_1$:
\begin{equation}
\label{eq14}
\dot{V_1}=\mathrm{e}_1 \dot{\mathrm{e}_1}=\mathrm{e}_1(\dot{\theta_d}-\dot{\theta})
\end{equation}
To ensure the stability of the system, i.e. $\dot{V_1} < 0$, introduce the virtual control quantity $x_v$.
\begin{equation}
\label{eq15}
x_v=c_1\mathrm{e}_1+\dot{\theta_d}
\end{equation}
where $c_1$ is a positive constant and there is an error between $x_v$ and $x_2$, denoted as $\mathrm{e}_2$:
\begin{equation}
\label{eq16}
\mathrm{e}_2=x_2-x_{v}=-c_1\mathrm{e}_1-\dot{\mathrm{e}_1}
\end{equation}
To ensure the stability of the system at $\mathrm{e}_1 = 0$, this paper constructs another Lyapunov function $V_2$:
\begin{equation}
\label{eq17}
V_2=V_1+\frac12 \mathrm{e}_2^2+\frac1{2\beta_1} \tilde{d}_{\theta}^2
\end{equation}
In the formula (\ref{eq17}), $\hat{d}_{\theta}$ is the estimate of the disturbance moment $d_{\theta}$ (treated as slowly varying), $\tilde{d}_{\theta}=d_{\theta}-\hat{d}_{\theta}$ is the estimation error, and $\beta_1$ is a positive adaptation gain.
Differentiating the Lyapunov function $V_2$:
\begin{align}
\label{eq18}
\dot{V_2}&=\dot{V_1}+\mathrm{e}_2 \dot{\mathrm{e}_2}-\frac1{\beta_1} \tilde{d}_{\theta}\dot{\hat{d}}_{\theta}\notag\\
&=-c_1 \mathrm{e}_1^2+\mathrm{e}_2 (\dot{x_2}-\mathrm{e}_1-\dot{x_{v}})-\frac1{\beta_1} \tilde{d}_{\theta}\dot{\hat{d}}_{\theta}\notag\\
&=-c_1 \mathrm{e}_1^2+\mathrm{e}_2 (a_1 U_3+f_{\theta}+{d_{\theta}}-\mathrm{e}_1-\dot{x_{v}})-\frac1{\beta_1} \tilde{d}_{\theta}\dot{\hat{d}}_{\theta}
\end{align}
In order to ensure the stability of the pitch angle subsystem, i.e.\ $\dot{V_2} \leqslant 0$, the control law of the pitch angle controller is designed as formula (\ref{eq19}).
\begin{equation}
\label{eq19}
\left\{
\begin{array}{@{}l}
U_3=\frac1{a_1}(\mathrm{e}_1+\dot{x_{v}}-f_{\theta}-\hat{d_\theta}-c_2\mathrm{e}_2)\\
\dot{\hat{d}}_{\theta}=\beta_1\mathrm{e}_2
\end{array}
\right.
\end{equation}
where $\beta_1>0$, $c_1>0$, $c_2>0$.
Substituting the above control law (\ref{eq19}) into equation (\ref{eq18}) yields:
\begin{equation}
\label{eq20}
\dot{V_2}= -c_1 \mathrm{e}_1^2-c_2 \mathrm{e}_2^2 \leqslant 0
\end{equation}
Therefore, from Lyapunov stability theory, the pitch angle controller subsystem designed in this paper is stable.
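A discrete-time sketch of the control law (\ref{eq19}) is given below; the gains and the Euler integration step of the adaptation law are illustrative choices, not the values used in the experiments:
\begin{verbatim}
# Adaptive backstepping pitch control law (19), Euler-discretized.
class PitchBackstepping:
    def __init__(self, l, Ix, Iy, Iz, Jr, c1=4.0, c2=6.0, beta1=0.5):
        self.a1 = l / Iy
        self.Ix, self.Iy, self.Iz, self.Jr = Ix, Iy, Iz, Jr
        self.c1, self.c2, self.beta1 = c1, c2, beta1
        self.d_hat = 0.0                      # disturbance estimate

    def torque(self, th_d, dth_d, ddth_d,     # reference + derivatives
               th, dth, dphi, dpsi, Omega_r, dt):
        e1 = th_d - th                        # tracking error (11)
        x_v = self.c1 * e1 + dth_d            # virtual control (15)
        e2 = dth - x_v                        # error (16)
        dx_v = self.c1 * (dth_d - dth) + ddth_d
        f_th = (dphi * dpsi * (self.Iz - self.Ix) / self.Iy
                - self.Jr / self.Iy * dphi * Omega_r)
        U3 = (e1 + dx_v - f_th - self.d_hat - self.c2 * e2) / self.a1
        self.d_hat += self.beta1 * e2 * dt    # adaptation law (19)
        return U3
\end{verbatim}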
(2) Design of the roll angle and yaw angle controller subsystems:
Following the same procedure for the roll and yaw subsystems of (\ref{eq8}), the roll control torque $u_2$ and yaw control torque $u_4$ can be designed as:
\begin{equation}
\label{eq21}
\left\{
\begin{array}{@{}l}
U_2=\frac1{a_2}(\mathrm{e}_3+\dot{x_{2v}}-f_{\phi}-\hat{d_\phi}-c_4\mathrm{e}_4)\\
\dot{\hat{d_\phi}}=\beta_2\mathrm{e}_4
\end{array}
\right.
\end{equation}
\begin{equation}
\label{eq22}
\left\{
\begin{array}{@{}l}
U_4=\frac1{a_3}(\mathrm{e}_5+\dot{x_{3v}}-f_{\psi}-\hat{d_\psi}-c_6\mathrm{e}_6)\\
\dot{\hat{d_\psi}}=\beta_3\mathrm{e}_6
\end{array}
\right.
\end{equation}
In formulas (\ref{eq21}) and (\ref{eq22}), the desired roll and yaw angles are $\phi_d$ and $\psi_d$, and the tracking errors are $\mathrm{e}_3=\phi_d-\phi$ and $\mathrm{e}_5=\psi_d-\psi$. Construct the Lyapunov functions:
\begin{equation}
\label{eq23}
V_3=\frac12 \mathrm{e}_3^2+\frac12 \mathrm{e}_4^2+\frac1{2\beta_2}\tilde{d}_{\phi}^2,\quad V_4=\frac12 \mathrm{e}_5^2+\frac12 \mathrm{e}_6^2+\frac1{2\beta_3}\tilde{d}_{\psi}^2
\end{equation}
Following the analysis of $V_2$ above, we obtain:
\begin{equation}
\label{eq24}
\dot{V_3}\leqslant -c_3 \mathrm{e}_3^2-c_4 \mathrm{e}_4^2 \leqslant 0, \quad \dot{V_4}\leqslant -c_5 \mathrm{e}_5^2-c_6 \mathrm{e}_6^2 \leqslant 0
\end{equation}
In the formula (\ref{eq24}), $\mathrm{e}_4=-c_3\mathrm{e}_3-\dot{\mathrm{e}_3}$, $\mathrm{e}_6=-c_5\mathrm{e}_5-\dot{\mathrm{e}_5}$, $c_3$, $c_4$, $c_5$, $c_6>0$, so the attitude subsystem is asymptotically stable.
\subsection{Position PID controller design}
\label{sec3.2}
The stability controller designed in this paper uses a double closed-loop structure: PID control for the height and horizontal position of the UAV, and adaptive backstepping control for the attitude of the UAV.
Taking $P=(x_d,y_d,z_d,\psi_d)$ as the input, three virtual control quantities $(U_x,U_y,U_z)$ can be obtained:
\begin{equation}
\label{eq25}
U_x=K_{px} (x_d-x)+K_{dx} (x'_d-x')
\end{equation}
\begin{equation}
\label{eq26}
U_y=K_{py} (y_d-y)+K_{dy} (y'_d-y')
\end{equation}
\begin{equation}
\label{eq27}
U_z=K_{pz} (z_d-z)+K_{dz} (z'_d-z')
\end{equation}
To make full use of acceleration information and strengthen adjustment, this article adopts the following form:
\begin{equation}
\label{eq28}
U_x=K_{px} (x_d-x)+K_{dx} (x'_d-x') +K_{ddx} x''
\end{equation}
\begin{equation}
\label{eq29}
U_y=K_{py} (y_d-y)+K_{dy} (y'_d-y') +K_{ddy} y''
\end{equation}
\begin{equation}
\label{eq30}
U_z=K_{pz} (z_d-z)+K_{dz} (z'_d-z')+K_{ddz} z''
\end{equation}
Taking $(U_x, U_y, U_z)$ as the commanded accelerations along the three axes of the $E(X,Y,Z)$ system, from Newton's second law the following formula can be obtained:
\begin{equation}
\label{eq31}
\begin{bmatrix}
0\\
0\\
-mg
\end{bmatrix}
+R_t
\begin{bmatrix}
0\\
0\\
u_1
\end{bmatrix}
=m
\begin{bmatrix}
a_x\\
a_y\\
a_z
\end{bmatrix}
\end{equation}
From the above formula (\ref{eq31}):
\begin{equation}
\label{eq32}
\begin{bmatrix}
a_x\\
a_y\\
a_z
\end{bmatrix}
=\frac1m
\begin{bmatrix}
(c\phi s\theta c\psi+s\phi s\psi)u_1\\
(c\phi s\theta s\psi-s\phi c\psi)u_1\\
c\phi c\theta u_1-mg
\end{bmatrix}
\end{equation}
Matching the commanded accelerations with (\ref{eq32}) gives:
\begin{equation}
\label{eq33}
\begin{bmatrix}
U_x\\
U_y\\
U_z
\end{bmatrix}
=\frac1m
\begin{bmatrix}
(c\phi s\theta c\psi+s\phi s\psi)u_1\\
(c\phi s\theta s\psi-s\phi c\psi)u_1\\
c\phi c\theta u_1-mg
\end{bmatrix}
\end{equation}
During the inspection process, the UAV always keeps its heading fixed, that is, $\psi_d$ is a constant, and (\ref{eq33}) can be inverted to obtain:
\begin{equation}
\label{eq34}
u_1=m\sqrt{U_x^2+U_y^2+(U_z+g)^2}
\end{equation}
\begin{equation}
\label{eq35}
\theta_d=\arcsin\left[\frac{U_x m-u_1 s\phi_d s\psi_d}{u_1 c\psi_d c\phi_d}\right]
\end{equation}
\begin{equation}
\label{eq36}
\phi_d=\arcsin\left[\frac{m(U_x s\psi_d-U_y c\psi_d)}{u_1}\right]
\end{equation}
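The inversion (\ref{eq34})--(\ref{eq36}) is straightforward to implement; the following sketch (with arbitrary example numbers) maps the virtual accelerations and desired yaw to the total lift and desired attitude angles:
\begin{verbatim}
# Mapping virtual accelerations and desired yaw to total lift and
# desired attitude angles, per (34)-(36).
import numpy as np

def attitude_commands(Ux, Uy, Uz, psi_d, m, g=9.81):
    u1 = m * np.sqrt(Ux**2 + Uy**2 + (Uz + g)**2)          # (34)
    phi_d = np.arcsin(m * (Ux * np.sin(psi_d)
                           - Uy * np.cos(psi_d)) / u1)     # (36)
    theta_d = np.arcsin((m * Ux - u1 * np.sin(phi_d) * np.sin(psi_d))
                        / (u1 * np.cos(phi_d) * np.cos(psi_d)))  # (35)
    return u1, phi_d, theta_d

print(attitude_commands(Ux=1.0, Uy=0.5, Uz=0.2, psi_d=0.0, m=1.5))
\end{verbatim}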
In the process of controlling the pitch angle to move along the x-axis, it can be assumed that the roll $(\phi)$ and yaw $(\psi)$ angles of the UAV are fixed at this time, so $a_x= (f_1\sin\theta+f_2)\frac{u_1}m$, where $f_1=c\phi_d c\psi_d$ and $f_2=s\phi_d s\psi_d$ are both constants; then
\begin{align}
\label{eq37}
\theta&=\arcsin\bigg\{\frac{m}{u_1 f_1}\bigg[K_{px} (x_d-x)+K_{ix}\int(x_d-x)\, dt\notag\\
&\quad~+K_{dx} (\dot{x}_d-\dot{x})\bigg]-\frac{f_2}{f_1}\bigg\}
\end{align}
For the process of controlling the roll angle along the y-axis direction, a derivation similar to the above can be done, and this article will not repeat it.
For the height control of the UAV, the PID control is linear, and the tracking error is $\mathrm{e}_7=z_d-z$, where $z_d$ is the desired height, then:
\begin{equation}
\label{eq38}
a_z=\frac1m c\phi c\theta u_1-g
\end{equation}
where $g$ represents the local acceleration of gravity. When the drone is hovering, the pitch and roll angles of the drone are 0, and the above can be simplified to:
\begin{equation}
\label{eq39}
a_z=\frac{u_1}m-g
\end{equation}
From the above formula, the height direction control input is:
\begin{equation}
\label{eq40}
u_1=m[K_{pz} (z_d-z)+K_{iz} \int(z_d-z)\mathrm{d}t+K_{dz} (\dot{z_d}-\dot{z})]+mg
\end{equation}
According to the attitude loop control laws (\ref{eq19}), (\ref{eq21}), (\ref{eq22}) with their Lyapunov stability proofs, combined with the position and height loop control laws (\ref{eq35}), (\ref{eq36}), (\ref{eq40}), a high-quality UAV controller for confined spaces is finally obtained.
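As an illustration of the height channel, the control law (\ref{eq40}) can be implemented as follows; mass, gains, and timestep are placeholder values:
\begin{verbatim}
# Height control law (40): PID on altitude with gravity feed-forward.
class AltitudePID:
    def __init__(self, m, g=9.81, Kp=6.0, Ki=0.8, Kd=3.5):
        self.m, self.g = m, g
        self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
        self.integral = 0.0

    def thrust(self, z_d, z, dz_d, dz, dt):
        e = z_d - z
        self.integral += e * dt
        u1 = self.m * (self.Kp * e + self.Ki * self.integral
                       + self.Kd * (dz_d - dz)) + self.m * self.g
        return u1            # total lift u_1

pid = AltitudePID(m=1.5)
print(pid.thrust(z_d=8.0, z=0.5, dz_d=0.0, dz=0.0, dt=0.01))
\end{verbatim}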
\section{Experiment}
\label{sec4}
This paper uses a self-assembled multi-rotor UAV as the airborne platform. The flight controller is a CuavV5+, and the attitude loop is rewritten based on the open-source APM firmware; it adopts the adaptive backstepping control method described above. The airborne platform is equipped with a Velodyne16 lidar for environmental perception; an Insta360-OneX2 camera collects image information for 3D pipeline reconstruction and defect recognition; a DJI Manifold 2-C onboard processor is used for multi-sensor fusion and navigation; and a VIJIM-VL66 fill light enhances the illumination, as shown in Figure~\ref{fig3}. The water pipe diameter is 12.4~m, and the elevation is 64.3~m; the specific details of the model are shown in Figure~\ref{fig4}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig3-01.jpg}
\caption{Schematic diagram of drone equipment}
\label{fig3}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig02.pdf}
\caption{Diversion pipeline model of hydropower station}
\label{fig4}
\end{figure}
The experimental scenario in this paper is a water diversion pipeline of a hydropower station, whose interior is a GPS-denied space, so the GPS position cannot be obtained as ground truth to assess the position loop controller. In order to obtain an actual reference value for the position loop, the test was carried out outdoors in force 5 wind (wind speed 8--10~m/s), comparing the expected and actual positions of the position loop, as shown in Figure~\ref{fig5}. This paper uses the following three parameters to evaluate the position loop control effect: 1. the mean tracking error of the position loop, i.e., the average difference between the expected and actual positions; 2. the standard deviation of the tracking error; 3. the percentage of the mean tracking error relative to the mean actual position. The position loop uses the same parameters in the x and y horizontal directions, so this article uses the x-direction expected and actual positions to assess the control effect in the horizontal direction. The comparison of the horizontal desired and actual positions is shown in Figure~\ref{fig6}(a), and the percentage of the tracking error relative to the average actual position is shown in Figure~\ref{fig6}(c). The comparison between the desired and actual height positions is shown in Figure~\ref{fig6}(b), and the corresponding percentage of the tracking error relative to the average actual position is shown in Figure~\ref{fig6}(d).
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{Comparison of expected and actual positions of drones}
\label{fig5}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig6.eps}
\caption{Comparison of tracking errors in the horizontal direction and the height direction}
\label{fig6}
\end{figure}
\begin{table}[!ht]
\centering
\caption{The evaluation index of the position loop control effect in the horizontal direction}
\label{tab1}
\tabcolsep=2pt
\begin{tabular}{cm{1.6cm}<{\centering}m{1.8cm}<{\centering}m{4cm}<{\centering}}
\toprule
&Mean tracking error(/m)&The standard deviation of tracking error&The percentage of the mean value of the tracking error relative to the mean value of the actual position\\
\midrule
Takeoff &0.1228&0.1225&1.3\%\\
Flight&0.0444&0.0872&0.47\%\\
Land&0.272&0.522&2.88\%\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluation index of position loop control effect in the height direction}
\label{tab2}
\tabcolsep=2pt
\begin{tabular}{cm{1.6cm}<{\centering}m{1.8cm}<{\centering}m{4cm}<{\centering}}
\toprule
&Mean tracking error(/m)&The standard deviation of tracking error&The percentage of the mean value of the tracking error relative to the mean value of the actual position\\
\midrule
Takeoff&1.665&0.3218&26.67\%\\
Flight&0.204&0.1067&3.26\%\\
Land&0.337&0.2349&5.43\%\\
\bottomrule
\end{tabular}
\end{table}
This article divides the drone's flight into three stages: 1. 0--200~s is the takeoff phase; 2. 200--1250~s is the flight phase; 3. 1250--1500~s is the landing phase. During the takeoff phase, the z-direction is pulled up to the desired height while the x-direction is kept motionless; during the landing phase, the z-direction desired height is gradually reduced to 0 while the x-direction is kept still. As can be seen from Figures~\ref{fig5} and~\ref{fig6}, even under level 5 wind interference, the actual position of the drone follows the desired position well in the flight phase, achieving an ideal control effect. During the takeoff and landing phases, the drone exhibits large fluctuations that may cause it to roll over, making these relatively dangerous moments. From Tables~\ref{tab1} and~\ref{tab2}, it can be seen that the tracking error in the takeoff and landing phases is large, and the tracking error in the z-direction is larger than that in the x-direction. For the x-direction, the desired position during the takeoff phase is always 0, and the drone's position during the unlocked takeoff undergoes a sudden change; the drone's position during the landing phase is unstable and fluctuates to a certain extent, resulting in large errors. The UAV flies smoothly during the flight phase, and the mean value and standard deviation of the tracking error are small, providing a solid basis for the UAV attitude loop control. For the z-direction, before the UAV is unlocked its desired position is 2~m while its actual position is about 0.5~m; after the drone is unlocked, its desired position becomes 8~m and is then pulled up to 9~m, and the drone climbs until it gradually reaches the desired position. As a result, the difference between the desired and actual positions in the z-direction during the climbing phase is large, and so is the average tracking error. In the flight phase, the UAV follows well, the three evaluation indicators of the position loop are all small, and the desired control effect is achieved. The three evaluation indicators in the landing phase are smaller than those in the takeoff phase. The tracking error in the height direction of the UAV is mainly caused by positioning error.
In this paper, the flight data recorded inside the water pipeline of the Three Gorges Hydropower Station is used to evaluate the control effect of the attitude loop. At takeoff, the wind speed measured by an anemometer 2~m from the UAV is about 6.5~m/s. Three parameters analogous to those of the position loop are used to evaluate the attitude loop control effect: 1. the mean attitude tracking error, i.e., the average difference between the expected and actual attitudes; 2. the standard deviation of the attitude tracking error; 3. the percentage of the average attitude tracking error relative to the actual average attitude. Table~\ref{tab3} lists these evaluation indices for the attitude loop control effect.
\begin{table}[!ht]
\centering
\caption{Evaluation index of the attitude loop control effect}
\label{tab3}
\tabcolsep=2pt
\begin{tabular}{cm{1.9cm}<{\centering}m{2.3cm}<{\centering}m{3.5cm}<{\centering}}
\toprule
&Mean value of attitude tracking error (/deg)&The standard deviation of attitude tracking error&The percentage of the average tracking error relative to the average actual attitude\\
\midrule
pitch&0.028&0.3218&0.16\%\\
roll&0.037&0.1067&0.32\%\\
yaw&0.193&0.2349&0.62\%\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig7.eps}
\caption{Comparison of UAV attitude tracking error}
\label{fig7}
\end{figure}
From the comparison of the expected attitude and the actual attitude in Figure~\ref{fig7}, the pitch and roll angles follow well, and the mean value and standard deviation of their tracking errors are small. The tracking error of the yaw angle is larger, and its convergence is relatively slow. This is because the water pipeline of the Three Gorges Hydropower Station is spliced from steel penstocks and there is strong magnetic shielding inside the pipeline, so the magnetometer carried by the drone fails. This paper therefore uses the yaw estimated by the RCFIC algorithm, which carries a certain error, so the tracking error and standard deviation in the yaw direction are larger than those of the pitch and roll angles. To prevent fluctuations of the yaw tracking error from disturbing the flight, the UAV's yaw is kept constant during the flight. Overall, the adaptive backstepping control adopted by the attitude loop controller performs well: although there is large airflow disturbance in the enclosed space of the water pipeline, the dynamic response is still good, the overshoot and fluctuation of the actual position are slight, and the UAV flies smoothly in the water diversion pipeline.
\section{Conclusion}
\label{sec5}
When a multi-rotor UAV inspects the water diversion pipeline of a hydropower station, the airflow caused by the rotation of the UAV blades has a large impact on the flight motion. To address this, this paper proposes a hybrid stabilized flight algorithm with adaptive backstepping control: the outer loop position controller is designed as a PID controller, the inner loop controller adopts adaptive backstepping control, and hybrid stability enhancement is achieved by combining the two. The interior of the Three Gorges Hydropower Station is a GPS-denied space where GPS positioning and navigation cannot be carried out, so the outer loop controller test was performed outdoors. Comparing the actual position from the GPS positioning and navigation system with the expected position, the horizontal and altitude evaluation indicators in Table~\ref{tab1} and Table~\ref{tab2} show that the real-time followability and error fluctuation of the UAV during the flight phase are slight, while during the takeoff and landing phases the tracking error of the drone is relatively large, making these the dangerous phases of the flight. The attitude loop controller was tested in the water diversion pipeline of the Three Gorges Hydropower Station. The tracking errors of the pitch and roll angles are small. Due to the failure of the magnetic compass carried by the UAV, the yaw angle estimated by the RCFIC algorithm, which carries a certain error, is used, so the yaw tracking error and its standard deviation are relatively large. The water pipeline of the hydropower station is a confined space, and the rotation of the UAV blades causes large airflow disturbances during the flight; still, the adaptive backstepping hybrid stability control algorithm proposed in this paper keeps the dynamic response good, with slight overshoot and fluctuation of the flight position, so that the UAV flies smoothly in the water diversion pipe and achieves ideal robust and stable tracking control.
\bibliographystyle{IEEEtran}